Threshold Logic Circuits
Part of a collection: Boolean gates, voting functions, modular arithmetic, and adders as threshold networks (269 items).
The function that broke single-layer perceptrons. XOR is not linearly separable: no single threshold neuron can compute it. This circuit uses the minimal two-layer solution.
```
    x         y
    │         │
    ├────┬────┤
    │    │    │
    ▼    │    ▼
┌───────┐│┌────────┐
│  OR   │││  NAND  │  Layer 1
│w: 1,1 │││w: -1,-1│
│b: -1  │││b: +1   │
└───────┘│└────────┘
    │    │    │
    └────┼────┘
         ▼
    ┌────────┐
    │  AND   │  Layer 2
    │w: 1,1  │
    │b: -2   │
    └────────┘
         │
         ▼
      XOR(x,y)
```
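The circuit above can be traced in a few lines of plain Python. This is a minimal sketch of a threshold unit and the two-layer wiring; the function names are illustrative, not taken from the released files:

```python
def threshold(weights, bias, inputs):
    """Fire (output 1) iff the weighted sum plus bias is non-negative."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def xor(x, y):
    or_out   = threshold([1, 1],   -1, [x, y])   # fires on "at least one"
    nand_out = threshold([-1, -1],  1, [x, y])   # fires on "not both"
    return threshold([1, 1], -2, [or_out, nand_out])  # fires on "both fired"

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor(x, y))  # matches x ^ y on all four rows
```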
Plot the XOR truth table on a plane:

```
  y
  1 │  1   0
    │
  0 │  0   1
    └─────────
       0   1   x
```
No single line can separate the 1s from the 0s: XOR is linearly inseparable. Minsky and Papert's 1969 proof of this limitation contributed to the first AI winter.
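The inseparability claim can be checked exhaustively. Since the constraints on a single separating neuron are four linear inequalities, any real-valued solution would imply a small-integer one (by scaling and perturbation); a brute-force search over small integer weights finds nothing:

```python
import itertools

targets = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR truth table

def computes_xor(w1, w2, b):
    # A single threshold neuron: fire iff w1*x + w2*y + b >= 0
    return all((w1 * x + w2 * y + b >= 0) == bool(t)
               for (x, y), t in targets.items())

hits = [p for p in itertools.product(range(-3, 4), repeat=3)
        if computes_xor(*p)]
print(hits)  # → [] : no single threshold neuron computes XOR
```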
The solution: compute OR and NAND in parallel, then AND them together.
| x | y | OR | NAND | AND(OR,NAND) |
|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 0 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 1 | 0 | 0 |
OR catches "at least one." NAND catches "not both." Their intersection is "exactly one."
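The column identity above can be verified directly with Python's bitwise operators; this is a quick sanity check, not part of the released model code:

```python
# XOR(x, y) == AND(OR(x, y), NAND(x, y)) for all four input pairs
for x in (0, 1):
    for y in (0, 1):
        or_  = x | y          # "at least one"
        nand = 1 - (x & y)    # "not both"
        assert (or_ & nand) == (x ^ y)
print("identity holds")
```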
| Layer | Weights | Bias |
|---|---|---|
| OR | [1, 1] | -1 |
| NAND | [-1, -1] | +1 |
| AND | [1, 1] | -2 |

Total: 6 weights + 3 biases = 9 parameters.
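The same network in matrix form, using the weights and biases tabulated above. This is a sketch with NumPy, not the repository's `model.py`:

```python
import numpy as np

W1 = np.array([[1, 1], [-1, -1]])  # rows: OR weights, NAND weights
b1 = np.array([-1, 1])             # OR bias, NAND bias
W2 = np.array([1, 1])              # AND weights
b2 = -2                            # AND bias

def xor(x, y):
    h = ((W1 @ np.array([x, y]) + b1) >= 0).astype(int)  # layer 1: [OR, NAND]
    return int(W2 @ h + b2 >= 0)                         # layer 2: AND

print(W1.size + b1.size + W2.size + 1)  # → 9 parameters
```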
```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def xor_gate(x, y):
    inp = torch.tensor([float(x), float(y)])
    # Layer 1: OR and NAND neurons, thresholded at zero
    or_out = int((inp * w['layer1.neuron1.weight']).sum() + w['layer1.neuron1.bias'] >= 0)
    nand_out = int((inp * w['layer1.neuron2.weight']).sum() + w['layer1.neuron2.bias'] >= 0)
    # Layer 2: AND of the two layer-1 outputs
    l1 = torch.tensor([float(or_out), float(nand_out)])
    return int((l1 * w['layer2.weight']).sum() + w['layer2.bias'] >= 0)
```
```
threshold-xor/
├── model.safetensors
├── model.py
├── config.json
└── README.md
```
License: MIT