Just putting this here in case it hasn't been shared yet: Hinton's preliminary investigations into his Forward-Forward learning algorithm for neural networks. The positive and negative passes sound interesting.
Maybe it’s worth summarizing for those too busy to click links.
Hinton has made a huge step toward bridging the 'but brains don't backpropagate' gap.
He’s devised a super-simple NN construct (ForwardForward) that learns without invoking backpropagation. Super-simple as in: you can code it up in PyTorch (say) in a few minutes.
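To give a feel for what "a few minutes in PyTorch" means, here's a minimal sketch of one locally trained layer. It assumes the paper's setup: each layer is trained on its own logistic loss that pushes a 'goodness' measure above a threshold on positive data and below it on negative data. Goodness here is the mean of squared activations (the paper uses the sum; the mean just keeps the threshold scale-free), and the class name FFLayer plus the hyperparameters are my placeholders, not Hinton's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One Forward-Forward layer: trained locally to produce high
    'goodness' (mean squared activation) on positive data, low on negative."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.relu = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize the input so the previous layer's goodness
        # can't leak through to this layer's decision.
        x = x / (x.norm(2, dim=1, keepdim=True) + 1e-8)
        return self.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)  # goodness, positive pass
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)  # goodness, negative pass
        # Logistic loss: push positive goodness above the threshold,
        # negative goodness below it.
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients never cross layer boundaries
        self.opt.step()
        # Detach outputs so the next layer learns from activations only.
        with torch.no_grad():
            return self.forward(x_pos), self.forward(x_neg)
```

The key point is in train_step: the backward call only touches this layer's own weights, because each layer receives detached activations. Nothing ever propagates error signals back through the stack, which is the whole 'brains don't backpropagate' argument.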
If it can be shown that backprop can be 'swapped out' for ForwardForward, that would significantly narrow the gap between MachineIntelligence and BioIntelligence.
The linked implementation covers enough of the paper to see it working, and gives you something to play with for tuning.
However, the 'architecture' is entirely standard. The interesting bits from the paper, like the timing of layer feedback, are not demonstrated.
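For anyone who wants to poke at it themselves, the standard greedy training loop is roughly this much code. This sketch assumes the FFLayer above, MNIST-style flattened 28x28 inputs, and the paper's supervised trick of overlaying the label on the input pixels to manufacture positive (correct label) and negative (wrong label) examples; overlay_label and the layer sizes are my placeholders, not the repo's API.

```python
def overlay_label(x, y, num_classes=10):
    # Paper's supervised trick: encode the label in the first
    # num_classes pixels of the (flattened) image.
    x = x.clone()
    x[:, :num_classes] = 0.0
    x[torch.arange(x.size(0)), y] = 1.0
    return x

layers = [FFLayer(784, 500), FFLayer(500, 500)]

def train_batch(x, y):
    # Any incorrect label works as the negative example's overlay.
    y_wrong = (y + torch.randint(1, 10, y.shape)) % 10
    h_pos = overlay_label(x, y)        # positive pass: image + true label
    h_neg = overlay_label(x, y_wrong)  # negative pass: image + wrong label
    for layer in layers:               # each layer optimizes only its own loss
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

Classification at test time then means running each candidate label overlay through the stack and picking the label with the highest accumulated goodness, which is exactly the sort of thing the tuning knobs affect.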