New TP pseudocode?

Thanks, guys! I’m up to date on the mechanisms of the Spatial Pooler and Temporal Memory as detailed in the BAMI papers and the nupic code. The reason I’m asking about the Temporal Pooler separately is that I saw it at work in this recent post showing animations of the HTM algorithms learning.

In several of the animations (post 10 out of 19, for instance) there appears to be another mechanism below the Temporal Memory that he calls the ‘TP’. The ‘TP’ seems to create representations that remain stable across pattern sequences of the TM. So there is a set of cells in the TP that remains on over many time steps, because those cells have learned to represent the entire sequence. As he says, the sequence from the TM has basically been collapsed down into a single spatial encoding (a set of cells that recognizes the entire sequence). This is really interesting to me because, as he says, these more stable (less frequently changing) patterns could be passed on to another region to find larger patterns still (maybe a bit like moving up the cortical hierarchy).
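In case it helps make the question concrete, here is the kind of mechanism I’m picturing, as a rough numpy sketch. This is pure guesswork on my part, not anything from nupic or from his post; the class name, parameter names, and the decay/top-k scheme are all just my invention to show what I mean by “a set of cells that stays on across a TM sequence.”

```python
import numpy as np

class PoolingLayerSketch:
    """My guess at a pooling layer: TP cells accumulate overlap with TM
    activity over time, so cells matching an unfolding sequence stay on."""

    def __init__(self, num_tm_cells, num_tp_cells, num_active=10,
                 decay=0.9, learn_rate=0.05, seed=42):
        rng = np.random.default_rng(seed)
        # Random feed-forward "permanences" from TM cells to TP cells.
        self.perms = rng.random((num_tp_cells, num_tm_cells)) * 0.3
        # Pooled (persistent) activity per TP cell.
        self.persistence = np.zeros(num_tp_cells)
        self.num_active = num_active
        self.decay = decay
        self.learn_rate = learn_rate

    def compute(self, active_tm_cells, learn=True):
        """active_tm_cells: 1-D array of indices of currently active TM cells."""
        ff_input = np.zeros(self.perms.shape[1])
        ff_input[active_tm_cells] = 1.0

        # Overlap of each TP cell with the current TM activity.
        overlap = self.perms @ ff_input

        # Decay old persistence and add the new overlap, so cells that keep
        # matching the sequence remain active across many time steps.
        self.persistence = self.decay * self.persistence + overlap

        # The TP output is the top-k most persistent cells.
        active_tp = np.argsort(self.persistence)[-self.num_active:]

        if learn:
            # Hebbian-ish: active TP cells strengthen connections to the TM
            # cells active right now, so over time they come to represent
            # the whole sequence as one stable spatial encoding.
            self.perms[np.ix_(active_tp, active_tm_cells)] += self.learn_rate
            np.clip(self.perms, 0.0, 1.0, out=self.perms)

        return active_tp
```

Again, that is only my mental model of the stable-cells behavior in the animations, so please correct me if the real mechanism works differently.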

Basically, if there is an established algorithm for these ‘TP’ cells to learn to recognize and represent entire sequences within the TM, I’m really curious how it works! Thanks again,

– Sam