The case of a single repeating input is an interesting topic that I have explored a bit. Not being a math person, I can't give any concrete numbers, but I can describe the behavior. For simplicity, let's assume the parameters are set such that a transition can be learned in one timestep. There are a couple of different possibilities, depending on the implementation.

**Implementation where a cell can be both predictive and active**

The first three timesteps burst. After that, the winners from T=2 are correctly predicted, then the winners from T=3 are correctly predicted, and then there is another burst.

This burst puts two cells in each minicolumn into predictive state (the winners from T=2 and T=3), and both become active in the next timestep. The winners from T=3 put those from T=4 into predictive state, so the winners from T=3 and T=4 become active, followed by the winners from T=4 alone, followed by a burst. That burst puts three cells in each minicolumn into predictive state, all of which become active in the next timestep; then three again, then two, then one, then a burst. The pattern continues: 4, 4, 3, 2, 1, burst; 5, 5, 4, 3, 2, 1, burst; 6, 6, 5, 4, 3, 2, 1, burst; and so on.

This pattern continues until all cells in the minicolumns become both predictive and active. The number of active cells per minicolumn then decreases each timestep until only one remains, followed by a final burst. This burst results in a random sampling of cells growing a second distal connection to the winners from the final element in the sequence.

This is the point where things get interesting. Every minicolumn has now hooked up a random point in the sequence to the final element in the sequence. The representations are essentially reshuffled as each minicolumn reaches the end of its predictions at a different timestep and bursts, further saturating the connections. At some point enough connections have formed that there is no more bursting, and every cell in the minicolumns predicts every other cell in the minicolumns every timestep.
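To make the burst/count-down dynamic concrete, here is a heavily simplified single-minicolumn sketch in Python. Everything in it (`N_CELLS`, an edge set standing in for distal segments, one new connection per burst) is my own simplification, not the actual temporal memory algorithm, and the exact active counts differ a little from the sequence described above because they depend on timing details. The qualitative behavior is the same: the runs between bursts grow by one each cycle.

```python
import random

random.seed(0)

N_CELLS = 8        # cells in the single modeled minicolumn (toy size)
edges = set()      # (src, dst): src active at t puts dst into predictive state at t+1
prev_active = set()
burst_times = []
active_counts = []

for t in range(300):
    predictive = {dst for (src, dst) in edges if src in prev_active}
    if predictive:
        active = set(predictive)
    else:
        # burst: every cell activates, and one winner grows a new
        # distal connection to a previously active cell
        active = set(range(N_CELLS))
        burst_times.append(t)
        if prev_active:
            winner = random.randrange(N_CELLS)
            edges.add((random.choice(sorted(prev_active)), winner))
    active_counts.append(len(active))
    prev_active = active

gaps = [b - a for a, b in zip(burst_times, burst_times[1:])]
print("burst timesteps:", burst_times)
print("gaps between bursts:", gaps)  # each gap is one longer than the last
```

Once a learned connection happens to point back into the existing chain of winners, the predictions form a loop that never empties, so bursting stops for good. In this one-minicolumn toy that happens well before the connections saturate; with many minicolumns bursting at different times, the reshuffling would instead continue toward saturation as described above.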

**Implementation like #1, but where a cell can grow additional connections to previously active cells**

This is an incorrect implementation, but it hints at a possible way to stabilize without saturating the columns when an input repeats. The first three timesteps burst. After that, the winners from T=2 are correctly predicted, then the winners from T=3 are correctly predicted, and then there is another burst.

This burst puts two cells in each minicolumn into predictive state (the winners from T=2 and T=3), and both become active in the next timestep. The winners from T=3 put those from T=4 into predictive state, so the winners from T=3 and T=4 become active. These second activations of the same cells grow additional connections to the cells active in the previous timestep, so within very few timesteps the representation stabilizes on two cells per minicolumn predicting themselves every timestep. One of the cells in each minicolumn is more weakly connected than the other, so a learning rule could potentially be applied to thin the representation down to one cell per minicolumn (an implementation I am currently exploring).
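As a sketch of why this stabilizes, here is the same toy model with the extra rule bolted on: a correctly predicted cell also grows connections to every cell that was active in the previous timestep (again, my simplification, not the real learning rule). Because the first correctly predicted cell was itself active during the preceding burst, it immediately acquires a self-connection and the representation locks in right after the second burst. In this toy it collapses all the way to one cell per minicolumn rather than two, since only a single winner gets predicted; the two-cell case described above arises the same way, just with two predicted winners.

```python
import random

random.seed(0)

N_CELLS = 8
edges = set()      # (src, dst): src active at t puts dst into predictive state at t+1
prev_active = set()
burst_times = []
history = []       # active set at each timestep

for t in range(60):
    predictive = {dst for (src, dst) in edges if src in prev_active}
    if predictive:
        active = set(predictive)
        # extra rule: a correctly predicted cell also grows connections
        # to the cells that were active in the previous timestep
        for cell in active:
            for src in prev_active:
                edges.add((src, cell))
    else:
        active = set(range(N_CELLS))  # burst, as in the first implementation
        burst_times.append(t)
        if prev_active:
            winner = random.randrange(N_CELLS)
            edges.add((random.choice(sorted(prev_active)), winner))
    history.append(frozenset(active))
    prev_active = active

print("burst timesteps:", burst_times)
print("stable active set size:", len(history[-1]))
```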

**Implementation where a cell cannot be both predictive and active**

Every timestep bursts, growing more and more connections until the minicolumns are completely saturated, such that every cell in the minicolumns would predict every other cell. However, because the same input repeats and every cell is active every timestep due to bursting, none of them can ever be predictive.
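A minimal sketch of this dead end, using the same toy model but with currently active cells excluded from the predictive state (my reading of the constraint): because the repeating input makes every cell active every timestep via bursting, the predictive set is always empty, so every timestep bursts no matter how many connections accumulate.

```python
import random

random.seed(0)

N_CELLS = 8
edges = set()      # (src, dst) distal connections, as before
prev_active = set()
burst_count = 0

for t in range(100):
    # a cell that is currently active cannot also be predictive
    predictive = {dst for (src, dst) in edges if src in prev_active} - prev_active
    if predictive:
        active = set(predictive)
    else:
        active = set(range(N_CELLS))  # burst
        burst_count += 1
        if prev_active:
            winner = random.randrange(N_CELLS)
            edges.add((random.choice(sorted(prev_active)), winner))
    prev_active = active

print("bursts in 100 timesteps:", burst_count)  # every single timestep bursts
print("connections grown:", len(edges))
```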