My analysis of why Temporal Memory prediction doesn't work on sequential data

Here is the code for it.

Direct link to the backtracking code:

The docstring is pretty thorough.

Thank you for the explanations!
If I understand correctly, NumentaTM HTM is used when we pass “-d numentaTM” to the NAB’s run.py.
I can see in numentaTM_detector.py that it assigns tmImplementation=“tm_cpp”, which makes NAB use the compute() method from backtracking_tm_shim.py. So NumentaTM HTM still uses backtracking?

As I mentioned in the OP, I couldn’t find a way to use the “pure” TM implementation…

backtracking_tm_shim.py wraps the pure TM. It doesn’t use the Backtracking TM; it only mimics the interface of the Backtracking TM. Here’s the line where this class wraps the pure TM:
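For anyone wondering what “wraps” means concretely, here is a rough sketch of the idea. This is not the nupic source; the real shim’s constructor arguments and return formats differ, and the nupic calls below (TemporalMemory.compute, getPredictiveCells, numberOfCells) are my assumptions about its Python API:

```python
# Illustrative sketch only -- the real shim lives in
# nupic/algorithms/backtracking_tm_shim.py and differs in its details.
import numpy

from nupic.algorithms.temporal_memory import TemporalMemory  # the "pure" TM


class TMShimSketch(object):
    """Mimics the BacktrackingTM interface while delegating to the pure TM."""

    def __init__(self, numberOfCols, cellsPerColumn, **tmKwargs):
        # The pure Temporal Memory is the only algorithm actually running here.
        self._tm = TemporalMemory(columnDimensions=(numberOfCols,),
                                  cellsPerColumn=cellsPerColumn,
                                  **tmKwargs)

    def compute(self, bottomUpInput, enableLearn=True, enableInference=True):
        # BacktrackingTM callers pass a dense 0/1 column vector; the pure TM
        # expects the indices of the active columns.
        activeColumns = sorted(numpy.flatnonzero(bottomUpInput).tolist())
        self._tm.compute(activeColumns, learn=enableLearn)

        # Return predictions in the dense per-cell format callers expect.
        output = numpy.zeros(self._tm.numberOfCells(), dtype="float32")
        output[list(self._tm.getPredictiveCells())] = 1.0
        return output
```

So there is no backtracking logic in the shim itself; it is only an adapter around the pure TM.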

ok, I see. Thank you!

I have to say that I am uncomfortable with backtracking and resetting when comparing this to the biological cortex.
Considering that this model is supposed to be based on what the cortex is doing, how is this acceptable?
I expect that the degree of prediction is not arbitrarily long, and that the flow of information up and down the hierarchy should be seeding the neighborhood of a column with constantly updated samples of motion and sensation.

Is there anywhere the model is given a “performance specification” based on testable predictions against real cortex?

More plainly: how well do we expect this to work before we can say it is like the real thing? The human neural system has lots of artifacts that an engineer would not call ideal. Could this merging of time sequences be part of how the cortex works?

We did not move ahead with this model for research. All ongoing work, including recent sensorimotor models, does not include this backtracking. It was only there to optimize applications and tests we were building for anomaly detection years ago.

I think the solution to this problem requires answers about how attention works, and we are still trying to lay the groundwork for object representation without attention. Attention must come soon, but then we start talking about behavior. And then it gets really interesting.

Can someone summarize the key differences between the backtracking and classic Temporal Memory algorithms?

Hi all,

So I’m trying to use the BacktrackingTM in place of the standard TM within the OPF for comparison. I was able to import the BacktrackingTM into my ‘run.py’ file, though I’m having trouble finding exactly what I should modify in the code to actually use it.

The model type is ‘HTMPrediction’ and the inference type is ‘TemporalAnomaly’, as set in the model_params file. In the IPython notebook walkthrough there’s a point where tm = BacktrackingTM(…params…), though I don’t see an equivalent within the ‘…opf/clients/hotgym/anomaly/one_gym’ files I’m using. I tried looking in the ‘model.py’ and ‘model_factory.py’ files as well in case the change should be there, though they’re both read-only.

I also noticed this from a prior post, though I’m having trouble finding ‘tmImplementation’ within either the run or params file.

Any advice?? Thanks again!

Here’s a note from our model param docs that explains:

So the Backtracking TM is the default.

OK great, so as long as

‘temporalImp’: ‘cpp’

the BacktrackingTM is in place, right? Last question on this: is there another ‘temporalImp’ value that would use the original (non-backtracking) TM?

Thanks again

‘tm_py’ or ‘tm_cpp’
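In other words, swapping that one string in the model params switches implementations. A rough excerpt of what the relevant part of a model_params dict looks like; the surrounding values are placeholders loosely based on the hotgym example, so take the exact keys and numbers from your own params file:

```python
# Excerpt of an OPF model_params dict -- only 'temporalImp' matters here;
# the other values are placeholders and should come from your own params file.
MODEL_PARAMS = {
    "model": "HTMPrediction",
    "modelParams": {
        "inferenceType": "TemporalAnomaly",
        "tmEnable": True,
        "tmParams": {
            # "cpp" / "py"       -> BacktrackingTM (the default in these examples)
            # "tm_cpp" / "tm_py" -> the pure Temporal Memory implementation
            "temporalImp": "tm_cpp",
            "columnCount": 2048,
            "cellsPerColumn": 32,
            # ... remaining TM parameters unchanged ...
        },
    },
}
```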

What if, when the system sees A for the first time (in the first time step), all cells in A’s two columns are activated, so the cells active for B during the next time step form connections with all the cells active for A (that is, the entire two columns)? And since the entire columns representing B aren’t activated during the second time step, it shouldn’t lead to a lot of connections in every further time step.

Winning cells at timestep T do not grow distal connections to a random sampling of all active cells in T-1. Rather, they either strengthen their existing connections with (potentially non-winning) cells in T-1 if they were predicted active or above the minimum threshold, or they form new connections with a random sampling of winning cells in T-1. This avoids the behavior you described.
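As a toy illustration of those two cases (this is not the nupic implementation; the segment here is just a dict of presynaptic cell -> permanence, and in the real algorithm the “no match” case grows a brand-new segment on the least-used cell rather than adding to an existing one):

```python
import random

# Toy permanence parameters -- arbitrary values for illustration.
PERM_INC = 0.10
PERM_DEC = 0.05
INITIAL_PERM = 0.21
MAX_NEW_SYNAPSES = 3
MIN_THRESHOLD = 1


def learn(segment, prev_active_cells, prev_winner_cells, rng):
    """Reinforce a matching segment, or grow synapses to previous winner cells."""
    overlap = sum(1 for cell in segment if cell in prev_active_cells)
    if overlap >= MIN_THRESHOLD:
        # Case 1: the segment matched -- strengthen synapses onto previously
        # active cells (winners or not) and weaken the rest.
        for cell in segment:
            delta = PERM_INC if cell in prev_active_cells else -PERM_DEC
            segment[cell] = max(0.0, min(1.0, segment[cell] + delta))
    else:
        # Case 2: no match -- grow new synapses sampled only from the
        # previous *winner* cells, never from all active cells.
        candidates = [c for c in prev_winner_cells if c not in segment]
        for cell in rng.sample(candidates, min(MAX_NEW_SYNAPSES, len(candidates))):
            segment[cell] = INITIAL_PERM
```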

My proposed tweak is, in the former case (predicted active or above the minimum threshold), to also form a small number of new connections with winning cells in T-1 that they are not already connected to, in order to eventually stabilize the representations of repeating sequences. I haven’t had a chance to test this theory yet, but I’ll be sure to post an update once I have.
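A sketch of that tweak, building on the toy learn() function in the previous snippet (hypothetical and untested, as noted):

```python
# Hypothetical tweak: even when the segment matched (case 1), grow a few extra
# synapses onto previous winner cells it is not yet connected to, so that
# repeating sequences gradually converge onto the winner representation.
NEW_SYNAPSES_ON_MATCH = 1


def learn_with_tweak(segment, prev_active_cells, prev_winner_cells, rng):
    matched = sum(1 for c in segment if c in prev_active_cells) >= MIN_THRESHOLD
    learn(segment, prev_active_cells, prev_winner_cells, rng)
    if matched:
        unconnected_winners = [c for c in prev_winner_cells if c not in segment]
        for cell in rng.sample(unconnected_winners,
                               min(NEW_SYNAPSES_ON_MATCH, len(unconnected_winners))):
            segment[cell] = INITIAL_PERM
```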

I see.
But I am talking about the first time step, in the case of a novel input without context (the first A). That’s when all cells in the active columns are active and could be called winning cells. So the selected winning cells in B’s columns could connect to all cells of A’s columns in the second time step. And after the first C, A will lead to only a couple of winning cells from its columns, which will already have connections with B (in the context of A), so those cells will be in the predictive state. Then, once they get activated, those connections will be strengthened again.

When a column bursts (including in the first time step), you do not make all cells in the column winners. Instead, for each bursting column you pick a cell with the fewest existing segments, using a random tie breaker. So in the first time step that means one random cell per column becomes a winner.
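For concreteness, a minimal version of that selection rule (a hypothetical helper, not the nupic code):

```python
import random


def choose_winner_on_burst(cells_in_column, segment_counts, rng):
    """Pick the cell with the fewest existing segments; break ties at random."""
    fewest = min(segment_counts.get(c, 0) for c in cells_in_column)
    candidates = [c for c in cells_in_column if segment_counts.get(c, 0) == fewest]
    return rng.choice(candidates)


# In a bursting column of four cells, cells 1 and 3 have no segments yet,
# so one of them is chosen as the winner at random.
rng = random.Random(0)
print(choose_winner_on_burst([0, 1, 2, 3], {0: 2, 1: 0, 2: 1, 3: 0}, rng))
```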

But why not make all of them winners in the case of the first novel input? Redundant connections would be lost anyway because of synaptic decrement.

True, that may be more biologically feasible (though I have zero knowledge of neuroscience). In my implementation, my primary concern is continuous online learning and rapid stabilization (which is actually why I haven’t gotten around to even writing a reset function yet…)

I see. But this approach does remove the need for a reset function, right?