Sequence learning and invariant representations

Hi there,

I recently read some of the material on HTM theory (mostly the arXiv papers) and wanted to get clarity on a couple of points that I'm fuzzy about:

  1. How does one control the speed at which transitions are forgotten? As I understood it, if a synapse caused a predictive state but the input didn't match the prediction, that synapse's permanence is decreased, and if the input matches the prediction the permanence is increased by a similar amount. So if the input is a sequence like 'BCBD', it seems to me that after this sequence is read, the permanence along the B-C connection will be 0, since the increase from 'BC' will be offset by the decrease at 'BD'.
    Is this roughly correct, or am I missing something?

  2. Are there 'time-invariant representations' in the current one-layer HTM, and if yes, by what mechanism are they achieved? I seem to have missed this part in my reading. A union of SDRs was mentioned as a way to combine per-time-step SDRs into a representation of a whole sequence, but I'm not sure which part of the HTM mechanism performs that union.
    By 'time-invariant' I mean, for example, that when looking at an image of a cat the eye is constantly moving to different locations on the image, but is there some neuron that 'knows' it is a cat and stays active throughout the sequence (after some initial recognition phase)?

Thank you,


Hello @jouj, it's been 20 days since you asked this and maybe you already have your answers, but I felt these questions deserve attention.

1-

This is referred to as distal segment decay. It is not part of the original HTM but rather an extension intended to get rid of false-positive predictions and imitate forgetting. Normally, you only decrease the permanences of the synapses that did not participate in a successful prediction on a distal segment: when a segment predicts correctly, you reinforce its active synapses and weaken its inactive ones. In other words, normally you do not touch the synapses of cells that were predictive but never became active. Distal segment decay is an extension that adds some form of forgetting for exactly those false-positive cases.

First of all, the permanence increase applied on a correct prediction and the decrease applied on a false positive are not equal. The decrement used for decay is much, much smaller than the increment used to reinforce a correct prediction.
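To make that asymmetry concrete, here is a minimal sketch in Python of how the permanence updates might look. The class layout, parameter names, and numeric values are illustrative assumptions for this post, not NuPIC's actual code or defaults; the point is only that the decay decrement is tiny compared to the reinforcement increment.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    presynaptic_cell: int
    permanence: float

@dataclass
class Segment:
    synapses: list

# Illustrative values (assumed, not library defaults):
PERMANENCE_INCREMENT = 0.10   # active synapses on a correctly predicting segment
PERMANENCE_DECREMENT = 0.10   # inactive synapses on that same segment
PREDICTED_DECREMENT  = 0.004  # tiny decay when a segment predicted but the input never arrived

def update_segment(segment, active_cells, correctly_predicted):
    """Adjust one distal segment's synapses after a time step."""
    for syn in segment.synapses:
        if correctly_predicted:
            if syn.presynaptic_cell in active_cells:
                syn.permanence = min(1.0, syn.permanence + PERMANENCE_INCREMENT)
            else:
                syn.permanence = max(0.0, syn.permanence - PERMANENCE_DECREMENT)
        elif syn.presynaptic_cell in active_cells:
            # False-positive prediction: apply only the much smaller decay.
            syn.permanence = max(0.0, syn.permanence - PREDICTED_DECREMENT)
```

So even if the same synapse were punished once and rewarded once, the two changes would be nowhere near cancelling out.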

Second, in the sequence BCBD, the set of active neurons at the second B would be different from that at the first B. The same input is represented differently depending on its context within the sequence: the second B is "the B coming after C, which came after the first B". Because the representations at these states differ, so do the active cells. The synapses therefore form on different cells, and the ones being modified at the second B are not the ones that were reinforced at the first B.
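A toy illustration of that context dependence is below. The column numbers and cell assignments are made up, and this is not how the temporal memory actually chooses cells; it is only a picture of the outcome, namely that the two B's share columns but not cells.

```python
# Toy illustration (not HTM code): the same input activates the same minicolumns,
# but different cells within those columns depending on the preceding context.

columns_for_B = {3, 7, 12}   # made-up minicolumns the spatial pooler picks for 'B'

# Which cell (0..3) inside each column fires depends on the context in which
# 'B' appears; these assignments are invented purely for illustration.
cell_for_context = {"B at start": 0, "B after B,C": 2}

def representation(context):
    return {(col, cell_for_context[context]) for col in columns_for_B}

first_b  = representation("B at start")
second_b = representation("B after B,C")

print(first_b & second_b)   # set(): no shared (column, cell) pairs
print(first_b == second_b)  # False -- learning at the second B touches different synapses
```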

2-
If I understood correctly, you are describing the temporal pooling / union pooling mechanism. This requires at least two layers with a pooling mechanism in between (for example, between L4 and L3 in biology). When a cat is seen, you look at a variety of places on it in varying orders: you can look at its tail, then its feet, then its eyes, or do the same thing in a different order, looking at different places. If the layer above pools all of these representations from the lower level, meaning it learns from the union of the lower layer's output, then the columns in the higher level come to represent what we know about a cat. When the system later observes some partial information about a cat, say its eyes and then maybe its mouth, those columns become active and stay active as long as we keep looking at places similar to the ones we looked at on that cat previously.
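Here is a deliberately simplified sketch of the pooling idea, assuming the higher layer simply keeps a slowly decaying union of the lower layer's SDRs. The real union pooler is more involved (it learns, and it pools the correctly predicted cells of a temporal memory), so treat the decay constant, threshold, and feature SDRs below as made-up illustrations rather than Numenta's implementation.

```python
# Simplified union-pooling sketch: the higher layer holds a decaying union of
# lower-layer SDRs, so its representation stays stable while the eye saccades
# over different parts of the cat.

DECAY = 0.9       # how quickly old contributions fade (assumed value)
THRESHOLD = 0.5   # activity needed for a pooled cell to count as "on"

class UnionPooler:
    def __init__(self, num_cells):
        self.activity = [0.0] * num_cells

    def step(self, lower_layer_sdr):
        """Feed in one time step of lower-layer activity; return the pooled SDR."""
        for i in range(len(self.activity)):
            self.activity[i] *= DECAY          # old evidence fades slowly
        for i in lower_layer_sdr:
            self.activity[i] = 1.0             # fresh evidence joins the union
        return {i for i, a in enumerate(self.activity) if a > THRESHOLD}

pooler = UnionPooler(num_cells=2048)
tail, feet, eyes = {1, 5, 9}, {2, 5, 11}, {3, 7, 9}   # made-up SDRs for cat features
for sdr in (tail, feet, eyes):
    pooled = pooler.step(sdr)
print(pooled)   # {1, 2, 3, 5, 7, 9, 11}: changes little even as the fixation changes
```

As long as the incoming SDRs keep overlapping with places we have looked at before, the pooled set barely changes, which is the stability the question is asking about.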

This would be a nice place to learn more about temporal pooling:


Thanks for answering @sunguralikaan, these are good responses.

Yes, this is a powerful property of HTM and deserves some more explanation: the ability to learn high-order sequences, where "high-order" means that knowledge of both the current and previous states is required in order to accurately predict the next state. So in the example, the SDR representing the first B is quite different from the SDR representing the second B, because they appear in different contexts.

Temporal pooling and union pooling are two different mechanisms for achieving a stable representation over changing inputs. Pooling in HTM is a very active area of our research, so stay tuned :smile:
