Hi there, this is my first post here. I’ve been following your work since last January and am really interested in your theory! Thank you so much for the open-source initiative and your active work!
After having read most of Numenta’s papers, I am trying to implement the theory step by step. I ran into some trouble with the Temporal Memory because of the following issue; I hope someone can correct/help me here:
Taking the example of the sequences ‘ABCD’ and ‘XBCY’ without resets, as far as I understand the theory, the TM is supposed to build different contextual representations of C here. But if I present these sequences to a TM repeatedly so it can learn, then the first time a B appears in the ‘XB’ context, its columns burst, and the already-learned representation of C in the ‘AB’ context is put into the predictive state, because the prediction process looks at the previously active cells (which, after bursting, include the ‘AB’-context cells). C then grows additional synapses on that same ‘AB’-context segment to recognize the ‘XB’ context, making C ambiguous. So I can’t represent a high-order sequence.
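To make the failure mode concrete, here is a minimal toy sketch of just the prediction step (not the BAMI algorithm; cell ids, the segment layout, and the threshold are all made up for illustration). A cell is predictive when one of its segments overlaps the previously active cells by at least a threshold:

```python
# Toy model: a segment is a set of presynaptic cell ids; a cell becomes
# predictive when any of its segments overlaps prev_cells enough.
THRESHOLD = 2  # illustrative activation threshold, not from BAMI

def predictive(segments, prev_cells):
    """Return the cells with at least one segment matching prev_cells."""
    return {cell for cell, segs in segments.items()
            if any(len(seg & prev_cells) >= THRESHOLD for seg in segs)}

# Suppose cells 0-3 live in B's minicolumns, and cells {0, 1} were the
# winners in the learned 'A -> B' context. C's 'AB'-context cells carry
# one segment learned while training on 'ABCD':
segments = {"C_in_AB_context": [{0, 1}]}

# First 'XB' presentation: B's columns burst, so ALL of B's cells are active.
burst_B = {0, 1, 2, 3}
print(predictive(segments, burst_B))   # the 'AB'-context C matches anyway

# Learning then grows synapses from that same segment to the XB winner
# cells (say {2, 3}), so one segment now recognizes both contexts:
segments["C_in_AB_context"][0] |= {2, 3}
print(segments["C_in_AB_context"])
```

The bursting columns contain the ‘AB’-context cells, so the old segment matches and the same C representation is reinforced for both contexts, which is exactly the ambiguity described above.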
If I change the prediction process to look at the previous winner cells instead of the previous active cells, the TM effectively learns patterns as long as it can, until ambiguous reuse of cells through additional segment growth makes it loop. But then I can’t effectively use no-context prediction with bursting, and in any case this is not what the BAMI pseudocode says, so I believe I missed something here…
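For comparison, the winner-cell variant I tried can be sketched with the same toy model (again illustrative only: cell ids, the segment, and the threshold are invented for the example, not taken from BAMI):

```python
# Toy model, as before: a segment is a set of presynaptic cell ids.
THRESHOLD = 2  # illustrative activation threshold, not from BAMI

def predictive(segments, prev_cells):
    """Return the cells with at least one segment matching prev_cells."""
    return {cell for cell, segs in segments.items()
            if any(len(seg & prev_cells) >= THRESHOLD for seg in segs)}

# C's 'AB'-context segment, learned during 'ABCD' (cells {0, 1} were the
# winners of B in the 'A -> B' context):
segments = {"C_in_AB_context": [{0, 1}]}

# 'XB' again: B's columns burst, but only cells {2, 3} are chosen as
# winners. Matching against winners only, the old segment does NOT fire:
winners_XB = {2, 3}
print(predictive(segments, winners_XB))   # no match, so C's columns burst
                                          # and fresh context cells are chosen
```

This shows why the variant learns distinct contexts; the cost, as noted above, is that prediction no longer sees the full bursting activity, so the no-context case behaves differently.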
Am I wrong somewhere?