How does HTM learn a sequence of the same input?

So suppose I have 6 mini-columns, each with 6 cells, and my input is always the same, i.e. the sequence is just AAAAAA…
And because each cell can’t be in both the predicted and active states at the same time, we will have to use at least two cells in each mini-column to represent the sequence. Is that correct?
Specifically, suppose the input pattern activates mini-columns 1, 3, and 5, so something like this after seeing the first A:
[image: mini-column states after the first A]
Here, the yellow cells are active and the blue ones are predicted. When the second A comes in, we might see something like this:
[image: mini-column states after the second A]
And when the third A comes in, we might see something similar again:
[image: mini-column states after the third A]

So because a cell can’t be active and predicted at the same time, even if the feedforward input is always the same, each mini-column has to use a different cell for the active state (and likewise for the predicted state) at each time step? I guess this helps preserve high-order sequence information, but I just want to make sure I understand it correctly.
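Just to make my mental model concrete, here’s a toy Python sketch (not real HTM code — the “each cell predicts the next cell in its column” rule is something I made up to stand in for learned distal connections). It only shows the bookkeeping: the cell that was predicted at one step becomes active at the next, so the active and predictive cells in a mini-column are always different cells:

```python
# Toy sketch only -- not real HTM code. It just tracks which cell in each
# mini-column is active vs. predictive when the same input "A" repeats.
ACTIVE_COLUMNS = [1, 3, 5]      # mini-columns driven by the input "A"
CELLS_PER_COLUMN = 6

def next_cell(cell):
    # Made-up transition rule standing in for learned distal connections:
    # whichever cell is active, the "next" cell in the column gets predicted.
    return (cell + 1) % CELLS_PER_COLUMN

active = {col: None for col in ACTIVE_COLUMNS}   # currently active cell per column

for t in range(4):                               # feed "A" four times
    for col in ACTIVE_COLUMNS:
        if active[col] is None:
            cell = 0                             # first A: nothing was predicted, pick a winner
        else:
            cell = next_cell(active[col])        # the previously predicted cell becomes active
        predicted = next_cell(cell)              # a *different* cell is now predictive
        print(f"t={t} col={col}: active cell {cell}, predictive cell {predicted}")
        active[col] = cell
```

In this sketch the active and predictive cells never coincide, which is what the pictures above are trying to show.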

Thanks!!

You’re correct that, biologically speaking, a cell can’t become both active and predictive at the same time. In practice, though, that’s just an engineering choice; we often let cells assume both states simultaneously.
However, an HTM would not learn to switch back and forth between two states for a repeating input. Rather, it learns to assign a unique state to each specific point in the sequence AAA… Otherwise, it could never learn sequences such as AAB, AAAC, etc.
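Here’s a rough way to see why merely toggling between two states wouldn’t be enough (toy Python, nothing to do with actual HTM internals — the “state” keys below just stand in for which cells are active):

```python
# Toy contrast -- not real HTM code. Compare two ways of representing a repeated "A":
# toggling between two states vs. a unique state per position in the sequence.

def learn(sequences, state_of):
    """Map each state to the set of symbols that followed it during training."""
    predictions = {}
    for seq in sequences:
        for i in range(len(seq) - 1):
            predictions.setdefault(state_of(seq[i], i), set()).add(seq[i + 1])
    return predictions

sequences = ["AAB", "AAAC"]

# If only two cells alternated, the 1st and 3rd A would share a state (i % 2):
toggle = learn(sequences, lambda sym, i: (sym, i % 2))
print(toggle[("A", 0)])   # contains both 'A' and 'C' -- spurious 'C' right after the 1st A

# With a unique state per position (roughly what distinct cells per column give you):
unique = learn(sequences, lambda sym, i: (sym, i))
print(unique[("A", 0)])   # only 'A' -- 'C' is no longer predicted after the 1st A
print(unique[("A", 2)])   # {'C'} -- 'C' is predicted only after the 3rd A
```

The remaining ambiguity after the second A (B or another A) is real, since AAB and AAAC share that prefix, and an HTM simply predicts both until the sequences diverge.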
There are many related topics on the forum if you search for “repeating inputs”.
