Is there a way that an output can be "ON" for a long duration over which a temporal sequence is being recognized?

I saw a reference in a video talking about a mechanism in a temporal sequence memory that seemed to be saying that there would be a ‘slowly changing’ output which was ON for some long duration over which a given sequence was being recognized.

That sounded very useful as a kind of symbolic way to flag that a particular sequence was ‘active’ or partially completed.

Can someone explain how that works?

I think I understand the mechanism by which a cell in a column can indicate that it matches at some index in a sequence; for example, for the sequence ‘A B C D’, there might be some set of cells that activates at the transition when the ‘C’ is seen in that sequence. But is there some set of outputs that persists over a longer period of time, indicating that the whole ‘A B C D’ sequence is currently, potentially occurring, i.e., that would be active as long as A, AB, ABC, or ABCD has been seen recently, or something?


My understanding is that the temporal sequence memory only stores transitions, not the history of the sequence. Since there is usually more than one cell per column, a column can become active under the influence of distal connections in many different ways. Thus the temporal memory is capable of learning high-order (non-Markovian) sequences.
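As a toy sketch of that transition idea (my own illustration, not the actual temporal memory code): each column holds several cells, and which cell fires depends on the previously active context, so the same input in different contexts lands on different cells of the same column.

```python
CELLS_PER_COLUMN = 4

# transitions[(prev_context, column)] -> cell chosen to represent `column` in that context
transitions = {}

def cell_for(column, prev_context):
    """Return the cell in `column` used when the input follows `prev_context`."""
    key = (prev_context, column)
    if key not in transitions:
        # toy allocation rule: pick a cell in this column not yet used by another context
        used = {cell for (ctx, col), cell in transitions.items() if col == column}
        free = [c for c in range(CELLS_PER_COLUMN) if c not in used]
        transitions[key] = free[0] if free else 0
    return transitions[key]

# "B" after "A" and "B" after "X" end up on different cells of the same column:
print(cell_for("B", prev_context="A"))   # 0
print(cell_for("B", prev_context="X"))   # 1
```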

I think it’s possible but difficult to recall the whole sequence currently occurring because of the one-column-multi-cell structure. If we only look at the active columns, backtracking the sequence is like guessing the past moves of an ongoing chess or go game. If we look at the active cells inside columns, there’s probably a deterministic backtracking path of the past sequence. But with online learning, the path may not be deterministic. Maybe I am wrong.

The concept I’m referring to is described on this web page, where a “slow-changing output” mechanism is alluded to in a newer version of a temporal pooling algorithm, but I don’t understand what that looks like in terms of cell activations that persist over time.

https://github.com/numenta/nupic.research/wiki/Overview-of-the-Temporal-Pooler

Thus when a previously-learned sequence is presented to the TP, we want the output to vary more slowly than the input since the TP pools multiple temporal instances in the input into fewer semantic classes represented in the output. When this happens, the slow-changing output can serve as a semantic label for the input sequence it receives.

Later in that page, the concept of “union” is discussed, whereby you would see a spatial pattern consisting of the entire sequence all at once.
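If I follow the union part correctly, on its own it would look roughly like this sketch (the cell indices are invented for illustration; this is not the nupic.research code): the output is the union of the sparse representations seen so far, so it changes much more slowly than the per-step input.

```python
a_prime = {2, 17, 40}        # cells active for A (in context)
b_prime = {5, 23, 61}        # cells active for B'
c_prime = {9, 30, 77}        # cells active for C'

union_output = set()
for step_sdr in (a_prime, b_prime, c_prime):
    union_output |= step_sdr
    print(sorted(union_output))
# After A:       [2, 17, 40]
# After A B':    [2, 5, 17, 23, 40, 61]
# After A B' C': [2, 5, 9, 17, 23, 30, 40, 61, 77]
```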

But I became confused by this description and couldn’t form a clear model in my mind of what the resulting activation behavior is when an entire known temporal sequence has been presented to the system. Where are the ‘slowly changing outputs’, and how would I intuitively understand their relation to the input sequence being seen over time?

There’s no official consensus on how temporal pooling is done yet. But to get your intuition going: if your HTM network projects to another population of cells that becomes “stuck on” when the HTM state is strongly predicted, that’s the sort of thing that can cause slowly changing states in downstream regions.

One way you could imagine this happening is strongly predicted cells firing a burst of spikes instead of just one, due to their strong depolarization, resulting in more activity hitting the pooling population and activating strong, slow currents.
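As a very rough sketch of that intuition in code (the leak and threshold numbers are invented, not taken from any HTM implementation): a pooling cell that integrates its input with a slow leak will stay above threshold for many steps after a single strong drive.

```python
def pooling_cell(drives, leak=0.9, threshold=1.0):
    """Yield True while the cell's slowly decaying activation exceeds threshold."""
    activation = 0.0
    for drive in drives:
        activation = activation * leak + drive
        yield activation > threshold

# A strongly predicted step delivers a big drive (e.g. a burst), then nothing:
drives = [2.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(list(pooling_cell(drives)))
# [True, True, True, True, True, True] -- stays on until the activation leaks away
```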


The set of active neurons at each point in a sequence is unique to both the sequence and the location within the sequence. We often denote this by using an apostrophe: A B’ C’ D’. A different sequence would have a different representation of B, C, and D, such as X B’’ C’’ D’’.

Because each element in the sequence is unique to that sequence, we can classify any one of the elements and determine which sequence is occurring.
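As a toy illustration (the cell sets are invented), a simple lookup from any one context-specific element is enough to name the sequence:

```python
sequence_of = {
    frozenset({5, 23, 61}): "ABCD",   # the cells for B'
    frozenset({9, 30, 77}): "ABCD",   # the cells for C'
    frozenset({6, 24, 62}): "XBCD",   # the cells for B''
}

observed = frozenset({9, 30, 77})     # we observe C' on its own
print(sequence_of[observed])          # -> ABCD
```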

Now to your question. The “mechanism” you ask about is called “temporal pooling”. In its simplest form it works like this: we take another set of cells (another layer), randomly activate a sparse set of these cells, and keep this set active during the sequence. The new layer receives input from the cells in the sequence memory. As the sequence is playing, the new cells learn to recognize each of the patterns in the sequence. When the sequence is seen again in the future, the second layer will be stable while the temporal sequence memory is changing. You can think of the temporal pooling layer as the “name” of the sequence.
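If it helps to see it as code, here is a minimal sketch of that scheme in Python (my own illustration of the idea, not Numenta’s implementation; the sizes and toy cell sets are invented):

```python
import random

POOL_SIZE, SPARSITY = 256, 8
pool_cells_for_step = {}                       # step representation -> pooling-cell set

def learn_sequence(step_sdrs):
    """Pick one sparse random pooling set and associate it with every step of the sequence."""
    name = frozenset(random.sample(range(POOL_SIZE), SPARSITY))
    for sdr in step_sdrs:
        pool_cells_for_step[sdr] = name
    return name

def pooled_output(step_sdr):
    """Pooling-layer activity for one step of input (empty if the step is unknown)."""
    return pool_cells_for_step.get(step_sdr, frozenset())

# A, B', C', D' as toy sets of sequence-memory cells:
abcd = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6}), frozenset({7, 8})]
name = learn_sequence(abcd)

# The sequence-memory input changes every step, but the pooled output stays the same:
print(all(pooled_output(sdr) == name for sdr in abcd))   # True
```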

The same temporal pooling layer will work for sensorimotor inference. It forms a stable representation of input that changes due to movement.


Ah, I think I see now. That is quite elegant.