Autoassociation in the hippocampus and cortex

(sorry @rhyolight if similar to my other post… I promise it’s a bit different)

I’ve been reading a bit from Edmund Rolls and others on autoassociative memory. Perhaps the best-known structure performing autoassociation is CA3 in the hippocampus. Recurrent connections between its pyramidal neurons allow activity from many parts of the cortex to become associated together into an episodic memory. When part of that memory is triggered, the whole memory is recovered and the original cortical areas are reactivated - essentially reliving the cortical activity.
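To make the recurrent-completion idea concrete, here is a minimal Hopfield-style autoassociator - a toy stand-in for CA3’s recurrent collaterals, not anything from HTM or Rolls’ models. It stores a pattern in a symmetric weight matrix via a Hebbian rule, then recovers the full pattern from a half-corrupted cue:

```python
# Toy Hopfield-style autoassociative memory: store a +/-1 pattern with a
# Hebbian outer-product rule, then recall it from a corrupted cue.
import numpy as np

def train(patterns):
    # Hebbian outer-product rule; no self-connections.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, steps=10):
    # Synchronous updates until the state settles (or steps run out).
    s = cue.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
memory = np.where(rng.random((1, 64)) < 0.5, 1, -1)  # one stored "episode"
cue = memory[0].copy()
cue[:32] = 1                       # corrupt half of the pattern
restored = recall(train(memory), cue)
print(np.array_equal(restored, memory[0]))
```

The point is that a partial trigger drives the recurrent dynamics back to the whole stored pattern, which is the “reliving the cortical activity” step described above.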

As the cortex evolved from the hippocampus, it seems safe to assume that autoassociation carries over into the cortex (in a more local fashion). Apparently it does.

Perhaps the most interesting thing I have learned is that layers 2/3 are recurrently connected - forming an autoassociative memory. From the sparse input arriving via layer 4, layers 2/3 can recover the whole pattern from noisy or incomplete cues - an easy task given the sparse nature of SDRs. HTM theory suggests that dendritic segments perform this kind of pattern completion. However, recurrent networks can also complete patterns arriving from other parts of the cortex (like the entorhinal/hippocampus relationship, but via the thalamus), and those patterns may be very sub-sampled - so the full pattern has to be recovered progressively over n cycles.
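A quick sketch of why sparsity makes this easy (my own illustration; the sizes and thresholds are made up, not HTM parameters): a “dendritic segment” storing only a subsample of a pattern’s active bits still recognises a noisy, sub-sampled version of that pattern, because chance overlap between large sparse patterns is tiny.

```python
# Subsample-robust matching with sparse binary patterns (SDRs): a segment
# holding 20 of a pattern's 40 on-bits still fires on a degraded cue.
import random

random.seed(1)
N, ACTIVE = 2048, 40                               # SDR size / on-bits
pattern = set(random.sample(range(N), ACTIVE))

segment = set(random.sample(sorted(pattern), 20))  # segment subsamples 20 bits
threshold = 10                                     # fire if >= 10 synapses match

noisy = set(random.sample(sorted(pattern), 30))    # only 30/40 bits arrive...
noisy |= set(random.sample(range(N), 10))          # ...plus 10 random noise bits

overlap = len(segment & noisy)
print(overlap >= threshold)
```

A random unrelated SDR would almost never reach the threshold, which is what lets completion work from heavily sub-sampled cues.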

I’ve also read that layers 5/6 are also recurrently connected - possibly the cortical counterpart on the motor-output side to sensory feedforward input.

There is a lot of neuroscience out there claiming autoassociation is globally central to memory (in HTM terms, that means both spatial and temporal). It could even be the case that associative memory is dimension-agnostic - in that there is no mechanism specific to spatial or temporal memory (or any higher dimension, for that matter).

Last time I asked about auto-association, the replies seemed to suggest that temporal memory essentially performs temporal autocompletion - which it does. However, I feel organic/general autoassociation provides global functionality that gives rise to many useful properties that apply across all dimensions.

So my question: there is a lot in neuroscience about autoassociation, so why is it not a part of HTM theory, given that HTM is meant to be based upon biologically plausible cortical principles?


Autoassociation is definitely important, not only for disambiguating noisy patterns, but also to form better representations in general (see the success of generative and autoencoding models in traditional ML). And as you say, neuroscientists find it all over the place.

It is, but it’s implicit in the formulation. As you mentioned there is temporal autoassociation, although that could be seen as different because the “rolling autocomplete” is happening at a high rate. But consider the more stable representations at the level of temporal pooling, as you go up the hierarchy.

Although it would be a massive oversimplification to consider the cerebral cortex to be a strict hierarchy, the hippocampus is widely considered to be at or near the top. As a result, you would expect very high level abstractions to form there, and that is exactly what we see (e.g. place cells).

So when you’ve got temporally pooled high level representations, the key is that they change slowly enough for autoassociations to form (the pattern persists for longer than a few STDP windows). While a temporally abstract representation is active, it can form connections with not only the input that triggers and sustains it, but also recurrent connections that can help it complete itself in the face of ambiguity as more information comes in.
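The “persists longer than a few STDP windows” point can be sketched numerically. This is my own toy illustration (a bare co-activity counter, not STDP proper, and not Numenta’s mechanism): a temporally pooled cell pair that stays co-active accumulates a recurrent weight, while a rapidly changing pair never does.

```python
# Toy Hebbian co-activity rule: recurrent weight grows only while two
# cells are active together. Stable (pooled) representations build links;
# fast-changing ones don't.
def coactive_weight(activity_a, activity_b, increment=0.1):
    # Increment the weight on each timestep where both cells fire.
    w = 0.0
    for a, b in zip(activity_a, activity_b):
        if a and b:
            w += increment
    return round(w, 10)

slow_a = [1] * 10       # temporally pooled cells: co-active for 10 steps
slow_b = [1] * 10
fast_a = [1, 0] * 5     # rapidly changing cells: never co-active
fast_b = [0, 1] * 5

print(coactive_weight(slow_a, slow_b))   # strong recurrent link
print(coactive_weight(fast_a, fast_b))   # no link forms
```

This is why slowness matters: the pooled representation is present long enough for recurrent connections onto itself to be reinforced.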

These recurrent connections onto temporally abstract representations might form on distal basal segments, that would gel with the existing theory. Then the autoassociation would be in the form of “predictive” depolarization that encourages pattern completion. But the exact mechanisms are definitely still an open question.

Leaving recurrent collaterals implicit could be a disservice, as they do more than just completion (e.g. decision making, short-term memory, etc.). This is what I mean by global/general-purpose functionality: a recurrent collateral network can serve different functions depending on its inputs. The difference in inputs could be the difference between spatial and temporal association functionality, or something else entirely for that matter.

Another way to think about it is that not all regions of the cortex encode space or time - so there are no specifically spatial or temporal encoding mechanisms in those regions. However, the general structure of recurrent collaterals is found everywhere - providing different functionality in different regions.

RCNs also don’t ‘care’ what they are associating. They can associate highly abstract thoughts and emotions that have nothing to do with space or time. The idea of a ‘failing democracy’ transcends the 3rd and 4th dimensions - but RCNs can still handle it.

Again, RCNs are not just about association. I remember reading a description of in-vivo RCNs competing against each other to make a decision in perception (cortical feedback), and another group of RCNs competing to make a decision on action (partly feedforward, via the amygdala).
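One common way to model that kind of RCN competition is a mutual-inhibition race. The sketch below is illustrative only - the parameters are invented, not fitted to any biology: two self-exciting pools inhibit each other, and a small edge in evidence snowballs into a categorical decision.

```python
# Minimal mutual-inhibition "race" between two recurrently self-exciting
# pools: each pool amplifies itself, receives evidence, and suppresses
# the other; the pool with slightly stronger evidence wins outright.
def race(evidence_a, evidence_b, steps=50):
    a = b = 0.0
    for _ in range(steps):
        # self-excitation + evidence - cross-inhibition, clipped at zero
        a_new = max(0.0, 1.05 * a + evidence_a - 0.5 * b)
        b_new = max(0.0, 1.05 * b + evidence_b - 0.5 * a)
        a, b = a_new, b_new
    return "A" if a > b else "B"

print(race(0.52, 0.48))   # a small evidence edge decides the outcome
```

The same circuit does association when its inputs are correlated patterns and decision making when its inputs are competing hypotheses - which is the general-purpose point being made here.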

There are other use-cases, such as RCNs in the prefrontal cortex having larger dendrites to support short-term memory. Again, a general-purpose network providing different functionality.

Anyway, I bring this up because it seems explicitly implementing RCNs could open up a whole new level of perspective and emergent functionality. I’m not directly suggesting it to Numenta, just pondering the idea (given that Jeff first mentioned something similar at the beginning of On Intelligence).