Temporal Memory: handling no previous winner cells?

I’m having trouble with my implementation of Temporal Memory and I’d like some feedback on how best to approach this issue. The example below shows the problem I’m running into: the first run-through of Temporal Memory has no previous winner cells, which seems to lead to inefficient sequence recognition. My example:

Pattern “A” is inputted. The Spatial Pooler returns active column 0. The column has no dendrite segments, so it bursts. There are no matching dendrite segments, so the cell with the fewest segments is chosen at random as the winner cell, let’s say cell 0. There are no previous winner cells, so no matching segment is added.

Pattern "A"
winner cell: [0, 0]
matching segments: []

Pattern “B” is inputted. The Spatial Pooler returns active column 1. The column has no dendrite segments, so it bursts. There are no matching dendrite segments, so the cell with the fewest segments is chosen at random as the winner cell, let’s say cell 0. A matching segment is added with connections to the previous winner cells.

Pattern "B"
winner cell: [1, 0]
matching segments: [ [0, 0] ]

Pattern “A” is inputted. The Spatial Pooler returns active column 0. The column has no dendrite segments, so it bursts. There are no matching dendrite segments, so the cell with the fewest segments is chosen at random as the winner cell; let’s say that this time, unlike the first run, it is cell 1. A matching segment is added with connections to the previous winner cells.

Pattern "A"
winner cell: [0, 1]
matching segments: [ [1, 0] ]

Pattern “B” is inputted. The Spatial Pooler returns active column 1. The column has a dendrite segment, but it does not activate from the previously active cells. Therefore the cell with the fewest segments is chosen at random as the winner cell, let’s say cell 2. A matching segment is added with connections to the previous winner cells.

Pattern "B"
winner cell: [1, 2]
matching segments: [ [0, 0], [0, 1] ]

This continues until every cell in column 0 is covered by one of column 1’s matching segments.
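For concreteness, here is a toy Python sketch of just the bookkeeping in this walkthrough (hypothetical names and structure, not the NuPIC code): every column bursts, the least-used cell becomes the winner, and a new segment is grown to the previous winner cells when there are any. Segment activation is deliberately left out; in this walkthrough no segment ever activates, which is why every column keeps bursting.

```python
import random

CELLS_PER_COLUMN = 4

# (column, cell) -> list of segments; each segment is just the list of
# presynaptic (column, cell) pairs it connects to.
segments = {}
prevWinnerCells = []


def burstColumn(column):
    global prevWinnerCells
    cells = [(column, i) for i in range(CELLS_PER_COLUMN)]

    # Least-used cell: fewest segments, ties broken at random.
    fewest = min(len(segments.get(c, [])) for c in cells)
    winner = random.choice([c for c in cells
                            if len(segments.get(c, [])) == fewest])

    # Grow a new segment with connections to the previous winner cells,
    # if there are any.
    if prevWinnerCells:
        segments.setdefault(winner, []).append(list(prevWinnerCells))

    prevWinnerCells = [winner]
    return winner


for column in [0, 1, 0, 1]:   # the A, B, A, B presentation above
    print("winner cell:", burstColumn(column))
print("segments:", segments)
```

Running it on the column sequence 0, 1, 0, 1 reproduces traces like the ones above, up to the random tie-breaking.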

Am I missing something in the NuPIC algorithms that alleviates this? The only solution I’ve found is to make every cell in a column a winner cell when there are no previous winner cells, but that doesn’t seem like an elegant solution either…

I’d appreciate the help!

Dave

When you compute a segment’s activity, make sure to include all active cells, not just the winner cells. The winner cells are only used for deciding which synapses to grow; non-winner cells can still activate a segment.
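As a minimal sketch of that distinction (hypothetical data structures, not the actual NuPIC classes): segment activity is measured against all previously active cells, while new synapses only grow to the previous winner cells.

```python
class Synapse(object):
    def __init__(self, presynapticCell, permanence):
        self.presynapticCell = presynapticCell
        self.permanence = permanence


class Segment(object):
    def __init__(self):
        self.synapses = []


def segmentIsActive(segment, prevActiveCells, connectedPermanence,
                    activationThreshold):
    # Activity counts connected synapses to *active* cells -- cells in a
    # bursting column count here even though they are not winner cells.
    connected = sum(1 for s in segment.synapses
                    if s.permanence >= connectedPermanence
                    and s.presynapticCell in prevActiveCells)
    return connected >= activationThreshold


def growSynapses(segment, prevWinnerCells, initialPermanence,
                 maxNewSynapseCount):
    # Learning samples presynaptic cells from the previous *winner* cells only.
    existing = set(s.presynapticCell for s in segment.synapses)
    for cell in list(prevWinnerCells)[:maxNewSynapseCount]:
        if cell not in existing:
            segment.synapses.append(Synapse(cell, initialPermanence))
```

Growing only to winner cells keeps each learned transition sparse, but recognition still works when a presynaptic column bursts, because all of that column’s cells are active.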

In your experiment, I’d expect the second B to be predicted (or at least matching, depending on your parameters). The second A will burst, predicting B.

Note: if you do a reset after all of this, then repeat the sequence, the second A will be predicted, so the second B will burst.
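If you want to poke at this empirically, here is a rough sketch using NuPIC’s TemporalMemory (assuming the nupic package; the import path matches the file linked further down in this thread and may differ in other releases). The parameters are shrunk for one active column per pattern, and initialPermanence is set equal to connectedPermanence so newly grown synapses can predict immediately, which is not the default, so which steps end up predicted versus bursting will depend on your parameters.

```python
from __future__ import print_function
from nupic.research.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(4,),
                    cellsPerColumn=4,
                    activationThreshold=1,
                    minThreshold=1,
                    initialPermanence=0.5,    # equal to connectedPermanence,
                    connectedPermanence=0.5,  # so new synapses predict right away
                    maxNewSynapseCount=4,
                    seed=42)

A, B = [0], [1]   # one active column per pattern, for simplicity


def step(label, activeColumns):
    # Predictions computed on the previous time step.
    predictedColumns = set(tm.columnForCell(c) for c in tm.getPredictiveCells())
    bursts = not (set(activeColumns) & predictedColumns)
    print(label, "bursts" if bursts else "is predicted")
    tm.compute(activeColumns, learn=True)


sequence = [("first A", A), ("first B", B), ("second A", A), ("second B", B)]

for label, columns in sequence:
    step(label, columns)

tm.reset()   # start a new sequence, then replay it

for label, columns in sequence:
    step(label, columns)
```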

Marcus,

Thanks for the clarification. If I’m understanding the Temporal Memory algorithm correctly, it works more or less like this:

  1. Pattern “A” column activates, bursts, and a winner cell is randomly chosen from the least-used cells (fewest segments). There are no previous winner cells, so no new segment is created. No predicted cells.

  2. Pattern “B” column activates, bursts, and a winner cell is randomly chosen from the least-used cells. A new segment is created listing the previous winner cells. No predicted cells.

  3. Pattern “A” column activates, bursts, and a winner cell is randomly chosen from the least-used cells. A new segment is created listing the previous winner cells. Cell B0 is predicted because A0 is active.

  4. Pattern “B” column activates and the previously predicted cells activate. Cell A2 is predicted because B0 is active.

  5. Pattern “A” column activates and the previously predicted cells activate. Cell B0 is not predicted because A0 is inactive.

  6. Pattern “B” column activates, bursts, and a winner cell is randomly chosen from the least-used cells. A new segment is created listing the previous winner cells. Cell A2 is predicted because B0 is active.

Continuing on, “A” reliably predicts “B” and vice versa.

Does this seem right?

Yes, this looks correct. Nice figures!

This behavior might seem goofy to you. If you replay a sequence for a Temporal Memory without giving it a reset in between, it will keep treating everything as unique, giving it more and more context. After lots of learning it won’t see “B” or “a B after an A”; it will see “a B after an A after a B after …”, and it will inevitably burst again eventually. So resets are often important.

Awesome, thanks! Resetting the TM variables between sequences makes a lot of sense. Is there anything specific in biology that resetting is mimicking, or is it just an artefact of the code optimizations?

Isn’t it possible to detect when it’s “a B after an A” versus “a B after an A after a B after …”? That sounds much better than resetting the TM manually.

If I’m getting this right, then, for example, if I’m training on letters/words, each space should be a reset so it won’t see “Hi there” as one word?
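For example, something like this (the one-column-per-letter “encoder” and the parameters below are toy placeholders, not a recommended setup):

```python
from nupic.research.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(26,), cellsPerColumn=8,
                    activationThreshold=1, minThreshold=1,
                    initialPermanence=0.5, connectedPermanence=0.5,
                    maxNewSynapseCount=8, seed=1)


def encodeLetter(ch):
    # Toy "encoder": one active column per letter of the alphabet.
    return [ord(ch.lower()) - ord("a")]


for ch in "hi there":
    if ch == " ":
        tm.reset()   # word boundary: start a new sequence
    else:
        tm.compute(encodeLetter(ch), learn=True)
```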

What specifically is meant by “resetting”? Does that just mean clearing all active and predictive states, or are there other actions involved as well?

In this NuPIC implementation, resetting sets the activeCells, winnerCells, activeSegments, and matchingSegments lists to empty.

https://github.com/numenta/nupic/blob/master/src/nupic/research/temporal_memory.py#L222-L228
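In other words, something along these lines (a paraphrase, not the verbatim code; exact attribute names and container types vary between NuPIC versions):

```python
def reset(self):
    # Start of a new sequence: drop all per-timestep state so the next
    # input is processed with no temporal context.
    self.activeCells = []
    self.winnerCells = []
    self.activeSegments = []
    self.matchingSegments = []
```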

Got it, so the idea is that all cells and associated structures become inactive.