The coding of longer sequences in HTM SDRs

Right. I’m imagining we just push the update interval down below the (letter) driving interval, and we can simulate some kind of continuity.

The signal I’m expecting, to review, is something like what we saw in the Brazilian paper I keep referencing.

I expect the words of the input to cluster into (relative) lines in a similar way.

So I’m looking for lines in the raster plot which correspond to groups of letters forming “words” in the driving prompt.
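For reference, a minimal sketch of the kind of plot I mean, assuming the spikes come out as (time, cell index) pairs; the function and the word-boundary markers are just my own illustration, not anything in the existing code:

```python
# Minimal raster-plot sketch. Assumes spikes are (time, cell_index) pairs;
# word_boundaries marks where words start in the driving prompt, so any
# "lines" (near-vertical streaks of spikes) can be checked against them.
import matplotlib.pyplot as plt

def plot_raster(spikes, word_boundaries=()):
    times = [t for t, _ in spikes]
    cells = [c for _, c in spikes]
    plt.scatter(times, cells, s=4, marker='|')
    for b in word_boundaries:
        plt.axvline(b, color='grey', linewidth=0.5, linestyle='--')
    plt.xlabel('time (update steps)')
    plt.ylabel('cell index')
    plt.show()
```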

Our network is sequential, not mutually excitatory like the one they used, so the synchrony might not appear as easily. But I think it must be possible to tune it so that even sequential clustering reinforces itself and results in lines of some kind. If the clusters correspond to words, say because they are a cluster of multiple paths through the columns of the letter representations (different paths, not a single learned path as in HTM now), then the lines should correspond to words.

The first thing would be to make the update interval smaller than the input interval to simulate continuity (I still have to check if that’s in the code.)
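Roughly what I have in mind, as a sketch; the `present_letter` and `step_network` hooks are placeholders, not names from the existing code:

```python
# Sketch of decoupling the simulation step from the letter interval.
# present_letter and step_network are hypothetical stand-ins for whatever
# the real code does; the constant is arbitrary.
STEPS_PER_LETTER = 10   # internal updates per external (letter) input

def run(prompt, present_letter, step_network, steps_per_letter=STEPS_PER_LETTER):
    for letter in prompt:
        present_letter(letter)             # external drive, once per letter
        for _ in range(steps_per_letter):  # many fine-grained internal updates
            step_network()                 # in between, so internal spike times
                                           # can shift relative to the drive

# Stand-in hooks, just to make the sketch runnable.
run("the cat", present_letter=lambda ch: print("letter:", ch),
    step_network=lambda: None)
```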

Then we look at ways clusters could self-reinforce. I don’t know how at this point. But if there is a cluster, that cluster should self-reinforce if the network is tuned the right way. It represents a physical property of the network; we just have to tune the network so that it responds to that physical property. Like tuning a radio.

So how to “tune” it?

The “tuning” seems trivial for pulling the spike time of subsequent states back earlier than the external driving interval. Anything a spiking neuron is connected to sequentially will spike earlier than it would have from the external driving signal alone (provided the update interval is smaller than the external driving interval).

And what I found was that the network naturally has feedback. If this feedback signal is strong enough, it might also act to delay the spike time of a preceding element. Any feedback signal takes longer to arrive, because it has to travel around the whole feedback loop, so it might cause a neuron to spike later than it would have from the external driving signal alone.
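To make the two timing effects concrete, here is a toy calculation with a single leaky integrate-and-fire cell and made-up constants. None of this is the actual network; it only shows the direction of the shifts, reading the “delay” as the slower feedback path firing a cell later than the fast sequential path would:

```python
# Toy leaky integrate-and-fire cell; all constants are illustrative.
# pulses: list of (arrival_time, weight). Returns the first threshold crossing.
def first_spike(pulses, dt=0.5, tau=20.0, threshold=1.0, t_end=40.0):
    v, t = 0.0, 0.0
    while t <= t_end:
        v *= (1.0 - dt / tau)                                # leak
        v += sum(w for pt, w in pulses if t <= pt < t + dt)  # arriving input
        if v >= threshold:
            return t
        t += dt
    return None

# External letter drive alone: two sub-threshold pulses, 10 steps apart.
external = [(10.0, 0.7), (20.0, 0.7)]

# A learned sequential connection adds a pulse shortly after the first letter,
# so the cell crosses threshold well before the second letter arrives.
pulled_earlier = external + [(12.0, 0.7)]

# A feedback pulse has to travel around the whole loop, so it arrives later;
# the cell still fires before the second letter, but later than above.
pushed_later = external + [(16.0, 0.7)]

print(first_spike(external))        # ~20.0: only the second letter gets it over
print(first_spike(pulled_earlier))  # ~12.0: sequential input pulls the spike forward
print(first_spike(pushed_later))    # ~16.0: slower feedback fires it later than that
```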

So it seems reasonable we might get a “line” developing for previously observed sequences, which we can equate with “words”.

For previously unobserved sequences it becomes more interesting. That is where HTM up to now has had nothing to say.

But it may come to the same thing. It may just be more paths through the cluster pulling the spike time of the next state forward, and self-reinforcing through feedback to delay the spike of the preceding state. The “cluster” this time will not be alternate paths through the cells of the letter columns making up a word; this time it can be alternate paths through the cells of the paths through different words. It would work the same way.

I’m not sure if the driving signal from the “prompt” should be repeating. Perhaps it should not. So the only repeated signal is the feedback signal.

But all this should become clearer in practice than in any attempt at explanation. So I need to look at @complyue’s code to see what’s happening now.
