Yeah, backtracking can act as a replacement for resets. I haven’t spent much time considering how effectively it replaces resets, but it definitely helps.
The “Numenta HTM” NAB detector uses backtracking, and it doesn’t use resets. The same is true of HTM Studio and HTM for Stocks, and it’s what we used in Grok. You can think of the Backtracking TM as a productized version of the Temporal Memory. It’s the pure algorithm, plus some non-biological stuff.
One quick note: the NAB README has a second table of results for the different HTM variations. In that table, the “NumentaTM HTM” row is the pure Temporal Memory, without backtracking. You can see that it does okay, but it’s better with backtracking.
That’s so interesting! Might I ask roughly how to implement backtracking in NuPIC? Are there any examples? I’m using it for my thesis and would be very curious to see if/how it might affect my results. Thanks again for all your thought leadership and guidance, I’m loving this thread.
thank you for the explanations!
if I understand correctly, NumentaTM HTM is used when we pass “-d numentaTM” to NAB’s run.py.
I can see in numentaTM_detector.py that it assigns tmImplementation=“tm_cpp”, which makes NAB use the compute() method from backtracking_tm_shim.py. So, does NumentaTM HTM still use backtracking?
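For what it’s worth, here is my tentative reading of how those tmImplementation strings map to implementations (please correct me if I’m misreading the code). BacktrackingTM and BacktrackingTMCPP are real NuPIC classes; the shim descriptions are my paraphrase of what backtracking_tm_shim.py appears to do, i.e. wrap the newer TemporalMemory in the BacktrackingTM interface:

```python
# Hedged sketch: my reading of NuPIC's tmImplementation strings.
# Verify against your NuPIC checkout before relying on this mapping.

def resolve_tm_implementation(name):
    """Return a description of the TM behind a tmImplementation string."""
    mapping = {
        "py":     "BacktrackingTM (Python, with backtracking)",
        "cpp":    "BacktrackingTMCPP (C++, with backtracking)",
        "tm_py":  "TemporalMemory via shim (Python, no backtracking)",
        "tm_cpp": "TemporalMemory via shim (C++, no backtracking)",
    }
    return mapping[name]
```

If that reading is right, “tm_cpp” routes compute() through the shim to the plain TemporalMemory, even though the shim lives in a file with “backtracking” in its name.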
As I mentioned in the OP, I couldn’t find a way to use the “pure” TM implementation…
I have to say that I am uncomfortable with backtracking and resetting when comparing this to the biological cortex.
Considering that this model is supposed to be based on what the cortex is doing, how is this acceptable?
I expect that the degree of prediction is not arbitrarily long and that the flow of information up and down the hierarchy should be seeding the neighborhood of a column with constantly updated samples of motion and sensation.
Is there any place where the model is given a “performance specification” based on testable predictions against real cortex?
More plainly - how well do we expect this to work before we can say it is like the real thing? The human neural system has lots of artifacts that an engineer would not call ideal - could this merging of time sequences be part of how the cortex works?
We did not move ahead with this model for research. None of our ongoing work, including the recent sensorimotor models, includes this backtracking. It was only an optimization for the anomaly detection applications and tests we were building years ago.
I think the solution to this problem requires answers about how attention works, and we are still trying to lay out the groundwork for object representation without attention. Attention must come soon, but then we start talking about behavior. And then it gets really interesting.
So I’m trying to implement the BacktrackingTM in place of the standard TM within the OPF for comparison. I was able to import the BacktrackingTM into my ‘run.py’ file, though I’m having trouble finding what exactly I should modify in the code to actually use it.
The model type is ‘HTMprediction’ and the inference type is ‘TemporalAnomaly’, as set in the model_params file. In the iPython notebook walkthrough there’s a point where tm = BacktrackingTM(…params…), though I don’t see an equivalent within the ‘…opf/clients/hotgym/anomaly/one_gym’ files I’m using. I tried looking in the ‘model.py’ and ‘model_factory.py’ files as well in case the change should be there, though they’re both read-only.
I also noticed this from a prior post, though I’m having trouble finding ‘tmImplementation’ within either the run or params file.
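In case it helps anyone later, here’s roughly where I expected the switch to live. In the OPF model_params dicts I’ve seen (e.g. the hotgym examples), the TM implementation seems to be selected by a ‘temporalImp’ key inside ‘tmParams’, not by editing model.py. The accepted strings below are my assumption - check them against your installed NuPIC version:

```python
# Hedged sketch of an OPF model_params dict, not a complete drop-in
# config. My assumption: 'temporalImp' picks the TM implementation,
# with "cpp"/"py" being BacktrackingTM variants and "tm_cpp"/"tm_py"
# the plain TemporalMemory via a shim.
MODEL_PARAMS = {
    "model": "HTMPrediction",
    "modelParams": {
        # ... sensorParams, spParams, etc. left out for brevity ...
        "tmEnable": True,
        "tmParams": {
            "temporalImp": "cpp",  # try "tm_cpp" for the plain TM (assumed)
            "columnCount": 2048,
            "cellsPerColumn": 32,
            # ... remaining TM parameters unchanged ...
        },
    },
}
```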
What if, when the system sees A for the first time in the first timestep, all cells in A’s two columns are activated, so the cells active for B during the next timestep form connections with all the cells active for A (that is, the entire two columns)? And since the entire columns representing B aren’t activated during the second timestep, it shouldn’t lead to a lot of connections on every further timestep.
Winner cells at timestep T do not grow distal connections to a random sampling of all active cells in T-1. Rather, they either strengthen their existing connections to (potentially non-winning) cells in T-1, if the segment was predicted active or above the minimum threshold, or they form new connections to a random sampling of winner cells in T-1. This avoids the behavior that you described.
My proposed tweak is, in the former case (predicted active or above the minimum threshold), to also form some small number of new connections with winner cells in T-1 that the segment is not already connected to, in order to eventually stabilize the representations for repeating sequences. I haven’t had a chance to test this theory out yet, but I’ll be sure to post an update once I have.
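To make the rule (and the tweak) concrete, here is a toy sketch. All names, data structures, and permanence numbers are illustrative, not NuPIC’s actual API:

```python
import random

SAMPLE_SIZE = 3  # how many new synapses to grow per step (illustrative)

def learn_on_segment(segment, prev_active_cells, prev_winner_cells, rng):
    """Toy sketch of learning on one matching distal segment.

    Standard rule: strengthen synapses to cells that were active at T-1,
    weaken the rest. Proposed tweak: additionally grow a few new synapses,
    sampling only from T-1 *winner* cells not already connected, so that
    repeating sequences eventually stabilize onto the winner cells.
    """
    for syn in segment["synapses"]:
        if syn["presynaptic"] in prev_active_cells:
            syn["permanence"] = min(1.0, syn["permanence"] + 0.1)
        else:
            syn["permanence"] = max(0.0, syn["permanence"] - 0.1)

    # The tweak: connect to a small sample of unconnected T-1 winner cells.
    connected = {s["presynaptic"] for s in segment["synapses"]}
    candidates = [c for c in prev_winner_cells if c not in connected]
    for cell in rng.sample(candidates, min(SAMPLE_SIZE, len(candidates))):
        segment["synapses"].append({"presynaptic": cell, "permanence": 0.21})

# Tiny demo: one existing synapse gets reinforced, and one new synapse
# grows toward the unconnected T-1 winner cell.
rng = random.Random(0)
seg = {"synapses": [{"presynaptic": "A-cell-1", "permanence": 0.5}]}
learn_on_segment(seg, prev_active_cells={"A-cell-1"},
                 prev_winner_cells=["A-cell-1", "A-cell-7"], rng=rng)
```

Note that because new growth samples only winner cells, a bursting column at T-1 contributes at most its one winner cell here, which is the point of the rule described above.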
But I am talking about the first timestep, in the case of a novel input without context (the first A). That’s when all cells in the active columns are active and could be called winner cells. So the selected winner cells in B’s columns could connect to all cells of A’s columns in the second timestep. And after the first C, A will lead to only a couple of winner cells from its columns, which will already have connections with B (in the context of A), so those cells will be in the predictive state. Then, once they get activated, those connections will be strengthened again.
When a column bursts (including in the first timestep), you do not make all cells in the column into winners. Instead, for each bursting column you pick one cell with the fewest existing segments, using a random tie breaker. So in the first timestep that means one random cell per column becomes the winner.
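A minimal sketch of that selection rule (function and variable names are mine, not NuPIC’s):

```python
import random

def choose_winner_cell(column_cells, segment_counts, rng):
    """Toy sketch: when a column bursts, pick as winner a cell with the
    fewest existing distal segments, breaking ties at random."""
    fewest = min(segment_counts[c] for c in column_cells)
    tied = [c for c in column_cells if segment_counts[c] == fewest]
    return rng.choice(tied)

# In the very first timestep no cell has any segments, so every cell
# ties and the winner is simply a random cell of the column.
rng = random.Random(42)
winner = choose_winner_cell([0, 1, 2, 3], {0: 0, 1: 0, 2: 0, 3: 0}, rng)
```

Since later timesteps only grow new synapses toward these single winners, bursting never wires a downstream cell to an entire column.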