HTM superior to NN?

There are people who argue for the plausibility of backprop-like processes in the brain. Recently a couple of papers [1][2] formulated some reasonable ideas in this direction.

But you don’t need backprop; you can get slow parametric updates from any kind of Hebbian learning as well. Whether that can adjust weights across entire hierarchies is unknown, but synaptic processes often require multiple presentations of a pattern to achieve increasing levels of permanence, and this usually means physically larger synapses as well. Whether that also means varying connection strength is an ongoing debate; I know the thinking behind HTM assumes the strength is binary, but that is not yet the consensus.
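For concreteness, here is a rough Python sketch of the kind of Hebbian permanence update I mean, in the HTM style where the effective connection is binary but the underlying permanence changes slowly. The parameter names (`perm_inc`, `perm_dec`, `connected_threshold`) are just illustrative, not any particular implementation:

```python
import numpy as np

def hebbian_permanence_update(permanences, presynaptic_active,
                              perm_inc=0.05, perm_dec=0.01,
                              connected_threshold=0.5):
    """Sketch of a Hebbian permanence update for one active (winning) cell.

    permanences        : 1-D array, one value in [0, 1] per potential synapse
    presynaptic_active : boolean array, True where the presynaptic cell fired
    """
    # Hebbian rule: strengthen synapses whose presynaptic cell was active,
    # weaken the rest. Many presentations are needed before a synapse crosses
    # the threshold, which is what makes the learning "slow".
    permanences = np.where(presynaptic_active,
                           permanences + perm_inc,
                           permanences - perm_dec)
    permanences = np.clip(permanences, 0.0, 1.0)

    # Binary connection strength: a synapse either contributes or it doesn't.
    connected = permanences >= connected_threshold
    return permanences, connected
```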

Anyway, I didn’t mean to strongly imply that backprop is the learning rule in the brain, only that even Hebbian learning processes are thought to be slow and parametric (see, e.g., the original STDP paper).
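To illustrate why I’d still call STDP "slow and parametric", here is a minimal sketch of the standard pairwise STDP weight change; the constants (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) are illustrative placeholders, and each individual spike pairing nudges the weight only a little:

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt == 0:
        return 0.0
    if dt > 0:
        # Presynaptic spike precedes postsynaptic spike: potentiation.
        return a_plus * np.exp(-dt / tau_plus)
    # Postsynaptic spike precedes presynaptic spike: depression.
    return -a_minus * np.exp(dt / tau_minus)
```

Many such pairings have to accumulate before a weight changes appreciably, which is the "slow parametric" character I was pointing at.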

See a post I made a while ago (Complementary Learning Systems theory and HTM as a theory of the hippocampus) for details about the parallel parametric/episodic systems, largely based on [3].


[1] Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7, 13276. doi:10.1038/ncomms13276

[2] Huh, D., & Sejnowski, T. J. (2017). Gradient descent for spiking neural networks. arXiv preprint arXiv:1706.04698.

[3] Kumaran, D., Hassabis, D., & McClelland, J. L. (2016). What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7), 512-534.