Question on temporal memory

I believe I have a good idea of how TM works but there is a detail I can’t figure out.

If I had a sequence of (ABCD) but fed in (ABCDABCD), would the TM learn the first ABCD and then recognise the second? Or would the second ABCD be learned in the context of the first? I would say the latter. In that case, does the second ABCD have to be fed in separately after the first in order to be recognised?

If this is the case, does the data have to be chunked into pieces before being fed in? Or could it be streamed in as one big sequence (with the subsequences being recognised afterwards)?

The second ABCD would be learned in the context of the first ABCD. In order to recognize ABCD as an independent sequence you should reset the TM by clearing out any current predictions and making sure no synapses grow to the currently active cells. You can see how this is done in NuPIC here:

https://github.com/numenta/nupic/blob/master/src/nupic/research/temporal_memory.py#L298-L306

So yes, you may need to chunk up your sequences if you can define when they start and end.
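
As a rough illustration (just a sketch, not production code: it assumes NuPIC's `TemporalMemory` from the file linked above, deliberately small thresholds, and hypothetical column encodings for each letter), chunked training with a reset between sequences could look like this:

```python
# Minimal sketch: train on ABCD as an isolated sequence, resetting between
# presentations so the next ABCD is not learned in the context of the last D.
from nupic.research.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(128,),
                    cellsPerColumn=8,
                    activationThreshold=3,  # small thresholds because each letter
                    minThreshold=2)         # only activates 4 columns in this toy example

# Hypothetical encodings: each letter activates its own small set of columns.
letters = {"A": [0, 1, 2, 3], "B": [10, 11, 12, 13],
           "C": [20, 21, 22, 23], "D": [30, 31, 32, 33]}

for _ in range(20):                 # repeat so the A->B->C->D transitions are learned
    for ch in "ABCD":
        tm.compute(letters[ch], learn=True)
    tm.reset()                      # clear active/predictive state between chunks
```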

I believe that subsequences can be recognized within larger sequences, but they would each be represented differently depending on the larger sequence they appear in. Can someone else help answer this? I'm not certain.

Typically you won’t need the reset. If the algorithm sees ABCDABCD enough times to learn that 8-element sequence, then it will make predictions and represent the second ABCD in the context of the first.

But if it has only seen ABCD (and never twice in a row), then the first time it sees ABCDABCD it will not be predicting the high-order A when it reaches the first D, and the A columns will burst when the second A is presented. It will pick a learning cell to start learning the transition from D (in ABC context) to A (in ABCD context), but because all cells in the bursting columns are active, it will predict the B representation from the first presentation and lock back into the same BCD representations from the initial ABCD presentation, even without a reset.
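
To make that concrete, here is a rough sketch (with the caveat that it assumes NuPIC's `TemporalMemory` API and made-up letter-to-column encodings, so treat it as illustrative rather than definitive) of streaming ABCDABCD with no resets and then checking whether the second half gets predicted:

```python
# Sketch: stream the 8-step sequence repeatedly with no resets, then check
# whether the A columns are predicted after the first D.
from nupic.research.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(128,), cellsPerColumn=8,
                    activationThreshold=3, minThreshold=2)
letters = {"A": [0, 1, 2, 3], "B": [10, 11, 12, 13],
           "C": [20, 21, 22, 23], "D": [30, 31, 32, 33]}

for _ in range(50):                          # never reset
    for ch in "ABCDABCD":
        tm.compute(letters[ch], learn=True)

# Replay ABCD without learning; once the long sequence is learned, the cells
# predicted after the first D should include cells in the A columns, i.e. the
# second ABCD is represented in the context of the first.
for ch in "ABCD":
    tm.compute(letters[ch], learn=False)
predicted_columns = {tm.columnForCell(c) for c in tm.getPredictiveCells()}
print(set(letters["A"]) <= predicted_columns)
```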

Hi there, this is my first post here. I’ve been following your work since last January and am really interested in your theory! Thank you so much for the open source initiative and your active work!

After having read most of Numenta’s papers, I am trying to implement the theory step by step. I ran into some trouble when I got to the Temporal Memory because of the following, and I hope someone can correct me or help me out here:
Taking the example of the sequences ‘ABCD’ and ‘XBCY’ without a reset, and as far as I understood the theory, the TM is supposed to build different contextual representations of C here. But if I send these sequences and repeat them for the TM to learn, then the first time a B appears in the ‘XB’ context its columns will burst, and the already learned C representation in the ‘AB’ context will be put into the predictive state, because the prediction process looks at the previously active cells. C will then grow additional synapses on that same ‘AB’-context segment to recognize the ‘XB’ context, making C ambiguous. So I can’t represent a high-order sequence.

If I change the prediction process from looking at the previous active cells to looking at the previous winner cells, the TM will effectively learn patterns as long as it can, until the ambiguous reuse of cells through additional segment growth makes it loop. But then I can’t make effective use of the no-context prediction through bursting, and anyway this is not what the BAMI pseudocode says, so I believe I missed something here…
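
To make the comparison concrete, here is a minimal sketch of the two prediction rules I am describing (this is not BAMI's actual pseudocode; segments are reduced to plain sets of presynaptic cells and `ACTIVATION_THRESHOLD` is just a placeholder parameter):

```python
from collections import namedtuple

# A distal segment reduced to: the cell it belongs to, and the set of
# presynaptic cells it has connected synapses to.
Segment = namedtuple("Segment", ["cell", "presynaptic_cells"])

ACTIVATION_THRESHOLD = 3   # placeholder for the TM's activationThreshold


def predictive_cells(segments, prev_active_cells):
    """Standard rule: a segment becomes active if enough of its presynaptic
    cells were *active* on the previous step. When a column bursts, all of
    its cells are active, so segments learned in a different context fire."""
    return {s.cell for s in segments
            if len(s.presynaptic_cells & prev_active_cells) >= ACTIVATION_THRESHOLD}


def predictive_cells_winner_only(segments, prev_winner_cells):
    """Modified rule: match only against the previous *winner* cells.
    Bursting no longer lights up old-context segments, but the broad
    'no context' prediction that bursting is supposed to give is lost."""
    return {s.cell for s in segments
            if len(s.presynaptic_cells & prev_winner_cells) >= ACTIVATION_THRESHOLD}
```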

Am I wrong somewhere?

Late answer, but for anyone reading: I just wanted to say that this is how it currently works without resets. I work without resets too and observe exactly the same thing, so I don't think your implementation is the issue.

You could try limiting the number of winner cells to just one, to prevent the loops that connect every cell in a minicolumn to the previous activation (which kind of defeats the purpose of bursting and kills sparsity). Non-reset learning is challenging at the moment; it definitely needs improvements.
