Complementary Learning Systems theory and HTM as a theory of the hippocampus

HTM is intended to be a theory of the neocortex, but in some ways the algorithm seems to apply more naturally to the hippocampus. In particular, both HTM and the hippocampus learn from experience online, and very quickly. The neocortex, on the other hand, appears in many theories and experiments to learn gradually.

I recently reviewed this paper [1] summarizing and extending Complementary Learning Systems theory. Briefly: the neocortex has relatively dense activity and connections whose weights are generally updated slowly, whereas the hippocampus has very sparse activity and connections whose weights are updated rapidly. The hippocampus learns episodic memories in a one-shot fashion; these are then gradually replayed to the neocortex, interleaved with ongoing experience and during sleep and rest, in order to update the less-plastic neocortical synapses.

This is supported by the observed sparsity in the various regions (~10% in many upper regions of neocortex, ~4% in CA1 of hippocampus, ~2.5% in CA3 of hippocampus, ~0.5-1% in dentate gyrus of hippocampus), the highly-plastic synapses in the hippocampus compared to the neocortex, the severe learning impairments of hippocampus-lesioned animals, and theoretical problems with attempting to support both generalization ability and fast learning free from catastrophic interference (also called catastrophic forgetting).
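As a back-of-the-envelope illustration of the interference point, here is a toy sketch (my own example, not from the paper) estimating the expected overlap between two random codes at different sparsity levels. Sparser codes share proportionally fewer active units, so a newly stored pattern is less likely to collide with, and overwrite, an old one:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_overlap(n, sparsity, trials=2000):
    """Expected fraction of one random code's active bits
    that are also active in another random code."""
    k = int(n * sparsity)
    hits = 0
    for _ in range(trials):
        a = rng.choice(n, k, replace=False)
        b = rng.choice(n, k, replace=False)
        hits += len(np.intersect1d(a, b))
    return hits / (trials * k)

print(mean_overlap(1000, 0.10))  # neocortex-like density: ~10% of bits shared
print(mean_overlap(1000, 0.01))  # DG-like sparsity: ~1% shared
```

The expected overlap fraction equals the sparsity itself, which is one intuition for why the fast-learning, interference-prone system would want to be the sparse one.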

In many respects the hippocampus can be seen as a primordial neocortex from which the true neocortex evolved, and there is preserved structural similarity between the two (at least in CA3/CA1) in terms of lamination and cell types and so on.

It’s clear to me as someone who does a lot of “traditional” machine learning that some sort of high-plasticity episodic memory needs to be combined with a low-plasticity generalizing memory, so I do expect intelligent online learning agents to eventually require both of these. But can a fast-learning system like HTM successfully generalize while preserving its one-shot capability? I’m leaning toward HTM being an effective theory of CA3/CA1, with slower-updating neocortex (while surely preserving some of the insights of HTM) ultimately having more in common with current deep networks.

Thoughts? I recommend reading the paper in any case; it’s a great modern take on a long-standing theory of the systems-level learning mechanisms in animals.


[1] Kumaran, Dharshan, Demis Hassabis, and James L. McClelland. “What learning systems do intelligent agents need? Complementary learning systems theory updated.” Trends in Cognitive Sciences 20.7 (2016): 512-534.


First, great post. My hippocampus has no rapid, one-shot thoughts to add yet. Here’s a link to the cited paper for anyone else curious.


Thank you for the link to such an interesting paper.

I noticed one point worth highlighting: the hippocampal memories lack structure.
This makes me wonder: could the hippocampus be the way to quickly absorb not-yet-structured data (data still to be made sense of), and could the “replaying” of such memories serve not only to consolidate them into the cortex, but also to make sense of them?
This would match my understanding that the hippocampus is “called” when there is a piece of perception the cortex doesn’t manage to make sense of.

This might be useful because most current applications of HTM learn from a stream of the same type of data. In real life this is true for sensory areas but not for higher ones - often we have to learn from a single episode in our life. This mechanism would allow cortical synapses to learn a lot from a single piece of sensory data.

Looks interesting. That would explain a lot. I haven’t gotten a chance to read through the paper yet, but thanks for the reference. I’ve tried coming up with a hippocampal model myself, but I’ve been much more focused on the basal ganglia, and I haven’t read much on the exact properties of hippocampal neurons aside from the fact that they seem to be similar to cortical neurons. What about the different pathways in the hippocampus though? Any ideas on what they’re doing?

I remember that at one point I had some idea that one of the regions may form something similar to a D-latch in digital logic (proximal inputs acting like the data input, apical acting like the write input), but it’s been quite a while since I’ve looked too much into it, and I’ll have to recheck how much of that lines up with the neuroscience.
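For anyone unfamiliar with the analogy, a D-latch is easy to sketch in a few lines. This is toy code of my own, not tied to any neuroscience model: `data` stands in for the proximal input and `write` for the apical input:

```python
class DLatch:
    """Toy D-latch: the output follows the data input only while write is high."""

    def __init__(self):
        self.q = False  # held state

    def step(self, data, write):
        if write:          # "apical" write input enables the update
            self.q = data  # "proximal" data input is captured
        return self.q      # while write is low, the old state is held

latch = DLatch()
latch.step(True, write=True)           # capture True
held = latch.step(False, write=False)  # write low: state holds at True
```

The interesting property is the hold behavior: the data input can change freely without disturbing the stored state unless the write line is raised.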

Taking a quick look at some online hippocampal diagrams, it seems like the pathways generally go EC -> DG -> CA3 -> CA1 -> EC. Based on the sparsity measurements you mentioned, that would mean the hippocampus converts the cortical output to an extremely sparse SDR right away, and then gradually creates denser representations before sending it back to the cortex.
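That sparsify-then-densify sequence can be sketched with k-winners-take-all at each stage. This is my own toy illustration with made-up region sizes and random projections standing in for learned synapses, using the approximate sparsity figures from earlier in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def kwta(x, sparsity):
    """k-winners-take-all: binary SDR keeping the top fraction of units."""
    k = max(1, int(round(sparsity * x.size)))
    sdr = np.zeros(x.size, dtype=bool)
    sdr[np.argsort(x)[-k:]] = True
    return sdr

def project(sdr, n_out):
    """Random feed-forward projection (stand-in for learned synapses)."""
    w = rng.random((n_out, sdr.size))
    return w @ sdr

ec  = rng.random(2048)                            # dense-ish cortical input
dg  = kwta(project(kwta(ec, 0.10), 1024), 0.005)  # DG: ~0.5% active
ca3 = kwta(project(dg, 1024), 0.025)              # CA3: ~2.5% active
ca1 = kwta(project(ca3, 1024), 0.04)              # CA1: ~4%, re-densifying
print(dg.sum(), ca3.sum(), ca1.sum())             # active counts grow back
```

The active-cell counts grow at each return step, matching the idea of re-densification on the way back to the cortex.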

That’s precisely correct. The sparsity of the regions also appears to correlate with the plasticity of their synapses, so in addition to being re-densified on the return to the cortex, the representations get “re-generalized” by less-plastic synapses that can structure the data in a more parametric (as opposed to episodic) way.

In addition to the pathway you mentioned, there’s a pathway that skips the DG, going right from EC to CA3. This is considered a slower pathway, so one idea of its function is that patterns in EC, if novel, will trigger a fast sparsification in DG and the forming of a new episodic memory sequence in CA3 (pattern separation). If on the other hand a pattern is familiar, the connection from EC to CA3 will activate before DG has a chance to respond, reactivating the familiar old episodic memory sequence (pattern completion). A detailed spiking model of this theory is presented in [2], which calls this a “race to learn”.


[2] Nolan, Christopher R., et al. “The race to learn: spike timing and STDP can coordinate learning and recall in CA3.” Hippocampus 21.6 (2011): 647-660.
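To make the race idea concrete, here is a toy sketch of my own. It is a drastic simplification with made-up sizes and a simple overlap threshold - the actual model in [2] uses spike timing and STDP, which this ignores entirely:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 1024, 26   # CA3 size and active-cell count (~2.5% sparsity)
stored = []       # episodic SDRs already laid down in CA3

def random_sdr():
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, K, replace=False)] = True
    return sdr

def present(pattern, threshold=0.5):
    """Toy race: a stored memory that overlaps the input enough wins
    (pattern completion); otherwise DG assigns a fresh sparse code
    (pattern separation)."""
    for mem in stored:
        if (pattern & mem).sum() / K >= threshold:
            return mem, "completed"   # EC -> CA3 shortcut wins
    fresh = random_sdr()
    stored.append(fresh)
    return fresh, "separated"         # DG wins, new episode stored

a, tag_a = present(random_sdr())        # novel input -> new episodic memory
noisy = a.copy()
noisy[np.flatnonzero(a)[:3]] = False    # degrade 3 of the 26 active bits
b, tag_b = present(noisy)               # 23/26 overlap -> old memory recalled
```

The degraded input still recalls the original stored SDR, while a novel input triggers the formation of a new one.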

Related: this new paper was just published: “Building concepts one episode at a time: The hippocampus and concept formation” (unfortunately I forgot who brought it to my attention).


Hi,
You should look at the TVA (Theory of Visual Attention) by professor Claus Bundesen at the University of Copenhagen’s psychology department. He has developed exactly this kind of race formula for object foreground and background.

Dead link. Everything I could find online was behind a paywall.

Earlier this year @jhawkins reviewed that paper in a research meeting.

At 6:26 Jeff mentions another paper that gives an alternative to memory consolidation from the hippocampus. I wonder if this is the one he meant (by Alison R. Preston and Howard Eichenbaum):
