Question on temporal memory



I believe I have a good idea of how TM works, but there is one detail I can't figure out.

If I had a sequence (ABCD) but fed in (ABCDABCD), would the TM learn the first ABCD and then recognise the second? Or would the second ABCD be learned in the context of the first? I would say the latter. In that case, does the second ABCD have to be fed in separately after the first in order to be recognised?

If that is the case, does the data have to be chunked into pieces before being fed in? Or could it be streamed in as one big sequence, with the subsequences recognised afterwards?


The second ABCD would be learned in the context of the first ABCD. In order to recognize ABCD as an independent sequence, you should reset the TM by clearing out any current predictions and making sure no synapses grow to the currently active cells. You can see how this is done in NuPIC here:

So yes, you may need to chunk up your sequences if you can define when they start and end.
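
For illustration, here's a minimal sketch of that chunked feeding pattern. It assumes NuPIC's `nupic.algorithms.temporal_memory.TemporalMemory` API (`compute()` and `reset()`), and the letter-to-column encoding is made up for the example:

```python
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(64,), cellsPerColumn=8)

# Made-up encoding for this example: each letter activates its own
# block of 16 columns.
LETTERS = {"A": range(0, 16), "B": range(16, 32),
           "C": range(32, 48), "D": range(48, 64)}

def feed(tm, seq):
    for letter in seq:
        tm.compute(sorted(LETTERS[letter]), learn=True)

for _ in range(10):   # several training passes over the chunk
    feed(tm, "ABCD")
    tm.reset()        # clear predictions so no synapses grow from D back to A
```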

I believe that subsequences can be recognized within larger sequences, but each would be represented differently depending on the larger sequence it appears in. Can someone else help answer this? I'm not certain.


Typically you won't need the reset. If the algorithm sees ABCDABCD enough times to learn that eight-pattern sequence, then it will make predictions and represent the second set of ABCD in the context of the first.

But if it has only seen ABCD (and not twice in a row), then the first time it sees ABCDABCD it will not be predicting the high-order A when it gets to the first D, so the A columns will burst when the second A is presented. It will pick a learning cell to start learning the transition from D (in ABC context) to A (in ABCD context), but because all cells in the A columns are active, the TM will be predicting the B representation from the first presentation and will lock back into the same BCD representations from the initial ABCD presentation, even without a reset.
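
A rough way to watch this happen in code (same assumed NuPIC API and made-up letter encoding as the sketch above): stream ABCDABCD repeatedly with no reset and count how many inputs arrive with none of their columns predicted, i.e. bursting:

```python
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(64,), cellsPerColumn=8)

# Same made-up encoding as before: one block of 16 columns per letter.
LETTERS = {"A": range(0, 16), "B": range(16, 32),
           "C": range(32, 48), "D": range(48, 64)}

def unpredicted_inputs(tm, seq):
    """Count inputs whose columns contained no predictive cells (bursts)."""
    bursts = 0
    for letter in seq:
        predicted = {tm.columnForCell(c) for c in tm.getPredictiveCells()}
        if not predicted.intersection(LETTERS[letter]):
            bursts += 1
        tm.compute(sorted(LETTERS[letter]), learn=True)
    return bursts

for epoch in range(20):                      # note: no tm.reset() anywhere
    n = unpredicted_inputs(tm, "ABCDABCD")
    print("pass %d: %d of 8 inputs burst" % (epoch, n))
```

If this behaves as described above, the burst count should drop toward zero over the passes: the second A becomes predicted from the high-order D, and since the stream is never broken, even the A that starts each new pass is predicted from the D that ended the previous one.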