My analysis of why Temporal Memory prediction doesn't work on sequential data

You might be interested in looking into the Sparsey model. I know that its author has pointed out weaknesses of the temporal memory as implemented in HTM, which his architecture purports to solve, and that he has acquired a patent on something (which I haven’t studied in detail) reminiscent of this “resetting” situation.


For what it’s worth, my implementation doesn’t have a reset function. I haven’t personally found a need for one. The function of bursting already takes care of unexpected switches from one sequence to another.
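In case the mechanism isn’t clear, here is a toy sketch of what I mean by bursting (purely illustrative, not my actual implementation; the function name and cell-tuple representation are mine): when none of a column’s cells were predicted, every cell in that column activates.

```python
# Toy sketch of column activation (illustrative only, not my actual implementation).
# If an active column contains predictive cells, only those become active;
# otherwise every cell in the column "bursts", signalling unexpected input.
def activate_columns(active_columns, predictive_cells, cells_per_column=4):
    active_cells = set()
    bursting_columns = set()
    for col in active_columns:
        column_cells = {(col, i) for i in range(cells_per_column)}
        predicted = column_cells & predictive_cells
        if predicted:
            active_cells |= predicted       # expected input: sparse activation
        else:
            active_cells |= column_cells    # unexpected input: the whole column bursts
            bursting_columns.add(col)
    return active_cells, bursting_columns

# Example: column 3 was predicted via cell (3, 1); column 4 was not, so column 4 bursts.
cells, bursting = activate_columns({3, 4}, predictive_cells={(3, 1)})
print(bursting)  # {4}
```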


@Paul_Lamb thank you for pointing this out! The TM pseudocode in BAMI says that the winning cells do grow additional synapses to previously active cells, up to SYNAPSE_SAMPLE_SIZE synapses.
This would make the winning cells for “B” in step 5 grow synapses to cell #13 in column 1 and cell #11 in column 2, right? This means that we shouldn’t observe any unpredicted input from now on. In your demo, though, the columns for “B” in step 8 burst (then the columns for “C” in step 12 burst, and finally the columns for “A” in step 16 burst).
Could you please explain why the bursting is happening in step 8? Cell #6 in column 3 and cell #3 in column 4 have synapses to cell #13 in column 1 and cell #11 in column 2, which were grown in step 5. It seems like the bursting shouldn’t be happening…


Sure, the reason is that when B is first learned in step 2 (let’s call these cells B’), they grow connections to a random set of cells in the A columns (let’s call these A’). However, those A’ cells do not have connections to anything (since they were first in the sequence). So when A is learned for the first time (in step 4), a new set of cells is selected (let’s call this A’’). B’ is predicted in step 5 because the A columns were bursting (meaning A’ are active). However, in step 7 the active cells are A’’ (which B’ is not connected with). This results in the B columns bursting in step 8.
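To put toy numbers on it (the cell IDs and the threshold below are made up for illustration, not the demo’s actual values): a cell only becomes predictive when one of its segments has enough synapses to currently active cells, and B’’s segment samples A’, not A’’.

```python
# Toy illustration with made-up cell IDs (not the demo's actual values).
# A cell becomes predictive when one of its segments has at least
# `threshold` synapses to currently active cells.
def is_predicted(segment_presynaptic_cells, active_cells, threshold=2):
    return len(segment_presynaptic_cells & active_cells) >= threshold

a_prime  = {("col1", 2), ("col2", 7)}    # cells B' sampled while the A columns were bursting
a_second = {("col1", 13), ("col2", 11)}  # winner cells chosen when A is actually learned
b_prime_segment = set(a_prime)           # B' connects to A', not A''

print(is_predicted(b_prime_segment, a_prime))    # True  -> B' is predicted in step 5 (A bursting)
print(is_predicted(b_prime_segment, a_second))   # False -> B columns burst in step 8
```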

Oh, wait a minute, you are right :blush:

In step 5, B’ should grow some connections with A’’, and thus be predicted for step 8. I think you have found a bug in my implementation. I’ll have to investigate…

This isn’t a bug in your implementation. The BAMI pseudocode has the same behavior. In step 5, it won’t grow new connections to the previous winner cells, because it already has SYNAPSE_SAMPLE_SIZE (a.k.a. “maxNewSynapseCount”) active synapses.

If this logic used “number of synapses to previous winner cells” rather than “number of synapses to previous active cells”, then it would have the alternate behavior that you’re expecting. But that would have other bad effects: if the TM learns sequences “ABCD” and “XBCY”, it would assign the same SDR for both occurrences of C, and then it would always predict a union of D and Y afterward, regardless of whether it had seen “ABC” or “XBC”.
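Roughly, the growth rule in question looks like this (a Python paraphrase of the idea, not the actual BAMI pseudocode; the function name and the SYNAPSE_SAMPLE_SIZE value of 2 are just for illustration):

```python
# Paraphrase of the growth rule (illustrative, not the exact BAMI pseudocode).
# New synapse growth is budgeted against synapses to previously *active* cells,
# not previously *winner* cells.
def num_new_synapses(sample_size, num_active_potential_synapses):
    return max(0, sample_size - num_active_potential_synapses)

# Step 5 of the demo: the B' segment already has SYNAPSE_SAMPLE_SIZE synapses to
# cells that were active in step 4 (the bursting A columns), so nothing new is
# grown toward A''.  (SYNAPSE_SAMPLE_SIZE = 2 is used purely as an example.)
print(num_new_synapses(sample_size=2, num_active_potential_synapses=2))  # 0
```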


Ah, yes, of course. I like topics like these because they challenge and reinforce my understanding.

I’m thinking I do still have a bug (or I just need to go back to the basics and refresh my memory), since now I would expect to see the behavior that @oiegorov described in the OP (i.e. representations never fully stabilizing for all elements in a repeating sequence).

Thank you for your response. Can you confirm that the problem I described in the OP is valid?

Also, what exactly prevents the highlighted cells in step 5 from growing synapses to cell #13 in column 1 and cell #11 in column 2 if we had maxSegmentsPerCell = 1, maxSynapsesPerSegment = 4, maxNewSynapseCount = 4? Wouldn’t the highlighted cells try to grow 2 more synapses? I’m not sure about that because I don’t remember seeing any discussion about the necessity of growing additional synapses for a cell that was correctly predicted…

You’re correct that the TM handles repeating sequences poorly, by default. The only immediately available solution is to use resets. The Backtracking TM addresses this problem by, upon bursting, asking, “Would this have bursted if the sequence had ‘started’ more recently?”, although I can’t say for certain whether it handles this flawlessly. Another imperfect approach I’ve used is to change the “winner cell” selection process so that it selects the same cell within the minicolumn every time the minicolumn bursts, within a limited timespan. The timespan would need to be at least as long as the repeating sequence.
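To be concrete about that second hack, here is a rough sketch of the idea (my own toy illustration, not the code I actually used; the class name, parameters, and defaults are made up):

```python
import random

# Toy sketch of the "sticky winner cell" idea described above (illustrative only).
# Within `window` timesteps, a bursting minicolumn re-uses its previous winner
# cell rather than picking a new one, so a repeating sequence converges onto a
# single representation.
class StickyWinnerPicker:
    def __init__(self, cells_per_column=32, window=10):
        self.cells_per_column = cells_per_column
        self.window = window        # must be >= the length of the repeating sequence
        self.last_winner = {}       # column -> (cell index, timestep of last burst)

    def pick(self, column, timestep):
        previous = self.last_winner.get(column)
        if previous is not None and timestep - previous[1] <= self.window:
            cell = previous[0]      # stick with the earlier choice
        else:
            cell = random.randrange(self.cells_per_column)
        self.last_winner[column] = (cell, timestep)
        return cell

picker = StickyWinnerPicker(window=3)   # window >= length of the repeating "ABC"
print(picker.pick(column=7, timestep=1), picker.pick(column=7, timestep=2))  # same cell twice
```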

You’re correct that if maxNewSynapseCount (which is poorly named – elsewhere I call this “sampleSize”) is greater than the number of active minicolumns (and hence the number of winner cells), then in Step 5 it will grow 2 more synapses, connecting A’’ -> B’. But if this sample size is ≤ the number of active minicolumns, then it won’t connect A’’ -> B’. Typically the sample size is less than the number of active minicolumns – otherwise it wouldn’t really be “subsampling”, and it would have the ABCD / XBCY problem that I mentioned.
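Putting numbers on your step-5 question (treating each input as 2 active minicolumns / winner cells, which is my reading of the demo drawings rather than a stated parameter):

```python
# Worked numbers (assuming 2 active minicolumns / winner cells per input).
max_new_synapse_count = 4
synapses_to_prev_active = 2   # B' already connects to 2 cells in the bursting A columns

print(max(0, max_new_synapse_count - synapses_to_prev_active))  # 2 -> grows A'' -> B' synapses

max_new_synapse_count = 2     # sample size <= number of active minicolumns
print(max(0, max_new_synapse_count - synapses_to_prev_active))  # 0 -> nothing grown; B bursts later
```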


I found the bug. I had written a loophole where a random sampling of up to maxNewSynapseCount previously active cells (which may not include the currently connected cells) could end up forming connections to the currently active cells. I believe this is also the cause of my aforementioned issue with multiple cells in the same column representing the same context.


Thank you for the clarification! So backtracking is used to replace resetting?
I found out that NAB uses backtracking by default. Does it still use resetting? If yes, does it reset the sequence when a new week starts?


I hope this isn’t going off topic (I think it is relevant to the topic): this particular goof may hint at a possible direction to explore for stabilizing the representations in a repeating sequence. If the system is allowed to make a smaller number of additional connections (beyond maxNewSynapseCount) for some of the current winning cells (some number of them above activationThreshold) to a random sampling of the winning cells from T-1 that they aren’t already connected with, then the representations would stabilize after the second time through the repeated sequence.

This would then lead to the case you mentioned of ambiguity for the C in ABCD vs. XBCY. However, this implementation would result in duplicate cells in a subset of the C columns for the C in XBCY. One of the duplicates would be the same cell from ABCD, and it would be connected more weakly than the other duplicate, which is unique to XBCY. The learning step could be tweaked to degrade the weaker of the two, which would eliminate the ambiguity.

It is an interesting idea. I’ll have to think it out further, and I’ll let you know what I learn from it.
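To make that last tweak concrete, here is a very rough sketch of what I have in mind (the function name and data structures are mine and purely illustrative, not working code from my implementation):

```python
# Rough sketch of "degrade the weaker duplicate" (purely illustrative).
# `matching_segments` maps each candidate cell in one minicolumn to its matching
# segment, represented as {presynaptic cell: permanence}. If more than one cell
# in the same column matches the current context, weaken every segment except
# the strongest one, so the ambiguity decays away over time.
def degrade_weaker_duplicates(matching_segments, permanence_dec=0.05):
    if len(matching_segments) < 2:
        return matching_segments
    strongest = max(matching_segments,
                    key=lambda cell: sum(matching_segments[cell].values()))
    for cell, segment in matching_segments.items():
        if cell != strongest:
            for presyn in segment:
                segment[presyn] = max(0.0, segment[presyn] - permanence_dec)
    return matching_segments
```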


Yeah, backtracking can act as a replacement for resets. I haven’t spent much time considering how effectively it replaces resets, but it definitely helps.

The “Numenta HTM” NAB detector uses backtracking, and it doesn’t use resets. The same is true of HTM Studio, HTM for Stocks, and it’s what we used in Grok. You can think of the Backtracking TM as a productized version of the Temporal Memory. It’s the pure algorithm, plus some non-biological stuff.

One quick note: the NAB README has a second table of results of different HTM variations. In that table, the “NumentaTM HTM” row is pure temporal memory, without backtracking. You can see that it does okay, but it’s better with backtracking.


That’s so interesting! Might I ask roughly how to implement backtracking in NuPIC? Are there any examples? I’m using it for my thesis and would be very curious to see if/how it might affect my results. Thanks again for all of your thought leadership and guidance; I’m loving this thread.

Here is the code for it.


Direct link to the backtracking code:

The docstring is pretty thorough.


Thank you for the explanations!
If I understand correctly, NumentaTM HTM is used when we pass "-d numentaTM" to NAB's run.py.
I can see in numentaTM_detector.py that it assigns tmImplementation="tm_cpp", which makes NAB use the compute() method from backtracking_tm_shim.py. So, does NumentaTM HTM still use backtracking?

As I mentioned in the OP, I couldn’t find a way to use the “pure” TM implementation…


backtracking_tm_shim.py wraps the pure TM. It doesn’t use the Backtracking TM; it just mimics the Backtracking TM’s interface. Here’s the line where this class wraps the pure TM:


OK, I see. Thank you!