Why choose TM over Auto-association?

At the beginning of *On Intelligence*, @jhawkins explains that auto-associative memory is the basis of spatial and temporal learning in the cortex. I’ve found that many neuroscientists agree the cortex operates on local auto-associative (recurrent collateral) networks. But I haven’t really seen auto-association adopted anywhere in HTM.

I guess temporal memory was considered the better alternative because it deals with invariance better? Or maybe it has more desirable properties for temporal learning? Or maybe the plan is to eventually implement auto-association alongside TM?


Edit: if by auto-associative you mean the pattern-completion property, then the segments of the TM are doing a kind of temporal version of it (completing the next timestep rather than the current one).
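
To make that distinction concrete, here’s a minimal sketch (purely illustrative: the pattern encoding and function names are my own, and real TM segments operate on sparse sets of active cells, not sets of characters) of spatial completion versus its temporal counterpart:

```python
# Purely illustrative: patterns are sets of characters here, while the
# real TM operates on sparse sets of active cells.

stored = {frozenset("abc"), frozenset("cde")}        # learned spatial patterns
transitions = {frozenset("abc"): frozenset("cde")}   # learned t -> t+1 links

def autoassociate(partial):
    """Classic auto-association: complete a partial pattern to the
    best-overlapping stored pattern at the SAME timestep."""
    return max(stored, key=lambda p: len(p & partial))

def predict_next(current):
    """Temporal version: map the current pattern to the stored
    pattern expected at the NEXT timestep."""
    return transitions.get(frozenset(current))

print(autoassociate(frozenset("ab")))  # completes to {'a', 'b', 'c'}
print(predict_next("abc"))             # predicts {'c', 'd', 'e'}
```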

This is only my own opinion, but I think what is meant here is the immediacy with which a specific datum can be accessed, and the way a previous datum can act as the “key” for the next auto-associatively accessed datum.

For instance: if you have a list of memory locations in which data is stored, and you want to access a particular datum but don’t know its index, you have to iterate over the list until you find the datum you want. Worse, the index itself carries no semantic meaning, other than a position in the sequence.

With “associative” memories, your “keys” can have semantic meaning (they can be mapped to a word string, a sound, etc.), and a key immediately retrieves the desired datum without iterating over locations at all!

It is “direct” access, and (more importantly) that directness is not impacted whatsoever by the amount of data being stored! No matter how much data is stored, a specific mapping goes straight to the datum being accessed.
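
A rough software analogy (illustrative only; the container names are mine, and the brain is obviously not a hash table) is the difference between scanning a list and looking up a hash map, where access time stays flat as the store grows:

```python
import timeit

n = 1_000_000
data = [f"value-{i}" for i in range(n)]              # indexed storage
assoc = {f"key-{i}": v for i, v in enumerate(data)}  # associative storage

# Without the index, an indexed store must be scanned element by element.
scan = lambda: data.index("value-999999")

# An associative store retrieves the datum directly from a meaningful key,
# in (amortized) constant time no matter how much is stored.
direct = lambda: assoc["key-999999"]

print(timeit.timeit(scan, number=10))    # time grows with n
print(timeit.timeit(direct, number=10))  # time stays essentially flat
```

The lookup time of `assoc` stays essentially constant however large it grows, which is the “directness” described above.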

The above property is critical for large semantic data stores like our brains…


Yes, if you think of auto-association as completing a pattern, then the TM states contain enough information to complete an unambiguous sequence. The TM can do this after seeing only the first couple of elements, even if the sequence is really long. If the sequence is ambiguous (i.e. it has multiple possible endings), then it can usually complete each of the endings.

This is the basis on which our k-step-ahead classifier works.
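
As a toy illustration of that behavior (not the real TM algorithm: sequences here are literal strings, whereas the TM carries this state in distal segments on cells), sequence completion and k-step-ahead prediction might look like:

```python
# Toy sketch: learned sequences are literal strings here; the real TM
# carries this state in distal segments on cells.

sequences = ["ABCDEFGH", "ABQRSTUV"]   # two learned sequences sharing "AB"

def complete(prefix, k=None):
    """Return the continuation of every learned sequence that matches the
    prefix; with k, return only the next k elements (k-step-ahead)."""
    endings = [s[len(prefix):] for s in sequences if s.startswith(prefix)]
    return [e[:k] for e in endings] if k is not None else endings

print(complete("ABC"))       # ['DEFGH'] -- unambiguous after 3 elements
print(complete("AB"))        # ['CDEFGH', 'QRSTUV'] -- both possible endings
print(complete("ABC", k=2))  # ['DE'] -- 2-step-ahead prediction
```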
