Understanding the Old Temporal Pooler (2011)

Hi, I hope you don’t mind answering a huge noob question during your trial run of the HTM forum! I’m confused about how the temporal pooler is implemented, specifically how basal dendrite segments are stored and operated on. At a time step t, does the temporal pooler create a new dendrite segment by looking at the cell activations at t-1 and growing a synapse to each of those cells, storing the cell’s address and a permanence value? Then, if the number of those cells that are active later rises above a certain threshold, the pattern is recognized. Does that sound right?

As I understand it, neurons in the cortex have basal dendrite segments that detect patterns of activity from nearby, previously activated neurons within a region’s layer. When a certain threshold of a segment’s synapses activate, the pattern is detected and the neuron is put into a predicted state. That makes sense; the disconnect is actually coding it.
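Here is roughly how I’m picturing it in code. This is just my mental model, so the names, thresholds, and structure below are my own guesses, not anything from NuPIC:

```python
# My guess at the data structures (all names and constants are made up).
ACTIVATION_THRESHOLD = 13   # connected synapses that must fire to detect a pattern
CONNECTED_PERMANENCE = 0.5  # permanence at or above which a synapse is "connected"
INITIAL_PERMANENCE = 0.21   # starting permanence; learning would raise/lower this

class Segment:
    """A basal dendrite segment: (presynaptic cell address, permanence) pairs."""

    def __init__(self, prev_active_cells):
        # Grow a synapse to each cell that was active at t-1.
        self.synapses = {cell: INITIAL_PERMANENCE for cell in prev_active_cells}

    def is_active(self, active_cells):
        # The segment recognizes its pattern when enough connected synapses
        # point at currently active cells.
        n = sum(1 for cell, perm in self.synapses.items()
                if perm >= CONNECTED_PERMANENCE and cell in active_cells)
        return n >= ACTIVATION_THRESHOLD

def is_predicted(segments, active_cells):
    # A cell enters the predicted state if any of its segments is active.
    return any(seg.is_active(active_cells) for seg in segments)
```

Does that match how people actually structure it?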

I agree with Numenta’s approach to machine intelligence: applying neuroscience principles from the cortex. I’m currently writing my own implementation of the HTM algorithm with some OpenGL code to visualize the cell activations. I hope you don’t mind helping me out along the way.

Dave


Hi Dave! Thanks for trying out Discourse. I want to make sure you are using the right paper as you are writing your implementation. The paper that looks like this is an outdated version:

You want to look at these papers.

My gut feeling said I was using an outdated version… Wow, that’s a lot more to digest, so time to do my homework. Much appreciated!

If I get stuck I’ll come running.

Hi @ddigiorg,

The mechanism you’re describing is the “old TP” which is now known as Temporal Memory. This mechanism operates in the same layer as the Spatial Pooler and predicts the next SP SDR. Unfortunately (as Matt mentions) the term “TP” was used incorrectly in both the 2011 paper and in much of NuPIC’s code.

Temporal Pooling happens in the next layer above. Its input is the output of the lower layer, and its operation is a kind of extension of the SP. The inputs come in on large proximal dendrites, and a TP cell learns to respond to several patterns from a given sequence or set that occur together in the input data (and are therefore predicted in the lower layer).
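As a very rough sketch of that idea (my own simplification for this thread, not actual NuPIC code), a pooling cell might look like this:

```python
# Rough sketch of temporal pooling (my simplification, not NuPIC code).
# A TP cell has a large proximal dendrite onto the lower layer's output.
# Because it stays active across a whole sequence, it strengthens synapses
# to *all* of that sequence's patterns, not just one of them.

CONNECTED = 0.5  # permanence at or above which a synapse counts as connected

class PoolingCell:
    def __init__(self):
        self.proximal = {}  # input bit -> permanence

    def overlap(self, input_sdr):
        # How strongly this cell responds to one pattern from the lower layer.
        return sum(1 for bit in input_sdr
                   if self.proximal.get(bit, 0.0) >= CONNECTED)

    def learn(self, input_sdr, increment=0.05):
        # Called on every step while the cell is active, so the same cell
        # learns each successive pattern of the sequence it represents.
        for bit in input_sdr:
            self.proximal[bit] = min(1.0, self.proximal.get(bit, 0.0) + increment)
```

The result is that the cell’s activity stays stable over the sequence even though the lower layer’s SDR changes at every step.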

Thanks for the clarification! This is all really cool and I can’t wait to get my implementation working. Hmm, let me see if I have it right:

  • SP: the layer takes input data and selects the top percentage of columns whose proximal synapses best overlap the input (see the sketch after this list).
  • TM: in the layer’s selected columns, the cells that were in the predicted state at the previous time step become active. If a column had no predicted cells, all of its cells become active (the column “bursts”). Finally, every cell in the layer either remains inactive or is put into the predicted state based on its basal dendrite segments’ connectivity to the currently active cells.
  • TP: I’m still not clear on this, so I will have to do some more reading. I also have a question about how the output of a lower layer becomes the input of a higher layer. Say our layer is 2D (columns × cells). Would the output of that layer be a 2D binary matrix of active and predicted states, or does it get compressed into a 1D binary vector of column activations where each column’s cell states are ORed together? Both options are sketched below.
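To make that last question concrete (and to check my reading of the first two bullets), here is a toy sketch. All the sizes, names, and numbers are my own inventions, purely for illustration:

```python
import numpy as np

# Toy sizes, purely for illustration.
NUM_COLUMNS, CELLS_PER_COLUMN, INPUT_BITS = 2048, 32, 1024
rng = np.random.default_rng(0)

# --- SP: select the top 2% of columns by overlap with the input ---
connected = rng.random((NUM_COLUMNS, INPUT_BITS)) < 0.05  # connected proximal synapses
input_sdr = rng.random(INPUT_BITS) < 0.02                 # sparse binary input
overlaps = (connected & input_sdr).sum(axis=1)            # per-column overlap score
k = int(0.02 * NUM_COLUMNS)
active_columns = np.argsort(overlaps)[-k:]                # winning columns

# --- TM activation: predicted cells in winning columns become active;
# --- a column with no predicted cells bursts (all of its cells activate)
predicted_prev = np.zeros((NUM_COLUMNS, CELLS_PER_COLUMN), dtype=bool)  # from t-1
cell_states = np.zeros_like(predicted_prev)
for col in active_columns:
    if predicted_prev[col].any():
        cell_states[col] = predicted_prev[col]
    else:
        cell_states[col] = True  # burst (toy: no predictions yet, so all burst)

# --- My question: which of these is the layer's output? ---
output_2d = cell_states              # full 2D matrix, per-cell detail kept
output_1d = cell_states.any(axis=1)  # each column's cells ORed into one bit
```

Is the higher layer fed something like output_2d, or output_1d?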

Honestly, this is an old, less biological implementation that we are moving away from. I would not spend any time on it. The TM essentially replaces the TP.

I’m closing this topic to discourage continued discussion of the “Temporal Pooler”, which has been deprecated in favor of the more biological algorithm “Temporal Memory”. If anyone wants to continue a discussion that started here, please use the “Reply as linked Topic” option:
