Just for fun: a reader on Mastodon pointed me to https://github.com/olofk/serv, which is an attempt to implement a RISC-V CPU using the absolute minimum number of... well, things. I think their goal is to optimize FPGA implementations with LUTs and flip-flops, but they also have a CMOS estimate of 2,100 gates.
This is within the ballpark of what's still realistically possible with relay logic, so one could build a dreadfully slow RISC-V relay computer.
Shouldn't the first line here:
```
Out = (In_A AND In_B) XOR In_C
Out_C = (In_A AND In_B) OR (In_C AND (In_A OR In_B))
```
be `Out = (In_A AND In_B) OR In_C` ?
Otherwise `Out` for `In_A = 1, In_B = 1, In_C = 1` would be 0, right?
You're half right - you caught a typo, thanks! But it should be `(In_A XOR In_B) XOR In_C` - i.e., it behaves as a normal XOR when there's no input carry (as in a half adder), with the output inverted when there is an input carry.
Thanks for spotting this!
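For anyone who wants to double-check, the corrected equations can be verified with a quick truth-table sweep. A sketch in Python (the `full_adder` function name is mine, not from the original text):

```python
def full_adder(in_a, in_b, in_c):
    """One-bit full adder: returns (sum, carry-out)."""
    out = (in_a ^ in_b) ^ in_c                      # corrected sum: (In_A XOR In_B) XOR In_C
    out_c = (in_a & in_b) | (in_c & (in_a | in_b))  # carry-out, as in the original
    return out, out_c

# Sweep all 8 input combinations and compare against plain addition:
# carry-out and sum together must equal In_A + In_B + In_C.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, carry = full_adder(a, b, c)
            assert 2 * carry + s == a + b + c
```

In particular, for all-ones input it gives `Out = 1, Out_C = 1` (1 + 1 + 1 = binary 11), which neither the typo'd `AND` version nor the proposed `OR` version produces for every input combination.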