When there are 2 or more predictive cells in a column that becomes active, are both chosen as winner cells?

Is only one of them chosen based on which has the fewest connections, or are the learning rules such that there are never 2 or more predictive cells in a column?

Do both have their connections reinforced and new connections formed?


You definitely want to activate all of the predictive cells, because the prediction is ambiguous, so either of the two contexts might be accurate. Usually, a couple of time steps after this happens, the ambiguity will be resolved, and you’ll be confidently on one of the two potential branches.
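Here is a minimal sketch of that activation rule (plain Python with made-up data structures, not Numenta’s actual code): every predictive cell in an active minicolumn activates, and a column with no prediction bursts.

```python
def compute_active_cells(active_columns, predictive_cells, cells_per_column):
    """Return the set of active (column, cell) pairs for this time step.

    active_columns   -- set of column indices active at time t
    predictive_cells -- set of (column, cell) pairs that were predictive at t-1
    """
    active_cells = set()
    for col in active_columns:
        predicted = {(c, i) for (c, i) in predictive_cells if c == col}
        if predicted:
            # Ambiguous or not, ALL correctly predicted cells activate,
            # so both contexts stay alive until later input disambiguates.
            active_cells |= predicted
        else:
            # No prediction: the column bursts (every cell activates).
            active_cells |= {(col, i) for i in range(cells_per_column)}
    return active_cells
```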

One special case is repeating inputs. When you first implement the TM algorithm, this usually causes some head scratching. The vanilla TM algorithm will reach the end of the repeating sequence, burst, add one more element to the end, then cycle back through it, reach the end, burst, add one more, and so on indefinitely. Each cycle through adds additional cells per minicolumn, until all the cells have been used. I wrote some analysis about this behavior on another thread here, along with some thoughts on how it might be addressed if it is a concern for you.

--EDIT-- Sorry, I misread the title; you were asking about which cell should be considered the “winner”. I think you still have one “winner” from the perspective of the next input (i.e. if the next input bursts, you would only form new connections with one of them). I haven’t looked at this in a while, though, so let me look into it a bit and update you.


From BAMI, we can see that only one winner is chosen, based on the “best matching segment” from all predicted active cells in the minicolumn:

```
29. if segmentsForColumn(column, matchingSegments(t-1)).length > 0 then
30.   learningSegment = bestMatchingSegment(column)
31.   winnerCell = learningSegment.cell
```

From BAMI, it looks like only the “learningSegment” gets updated:

```
39.  if LEARNING_ENABLED:
40.    for synapse in learningSegment.synapses
41.      if synapse.presynapticCell in activeCells(t-1) then
42.          synapse.permanence += PERMANENCE_INCREMENT
43.      else
44.          synapse.permanence -= PERMANENCE_DECREMENT
```
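For readers who want something executable, here is a rough Python rendering of those two BAMI excerpts. The Segment and Synapse classes and the increment/decrement values are simplified stand-ins I made up, not NuPIC’s actual data structures.

```python
PERMANENCE_INCREMENT = 0.10  # assumed value, for illustration only
PERMANENCE_DECREMENT = 0.05  # assumed value, for illustration only

class Synapse:
    def __init__(self, presynaptic_cell, permanence):
        self.presynaptic_cell = presynaptic_cell
        self.permanence = permanence

class Segment:
    def __init__(self, cell, synapses):
        self.cell = cell          # the cell this distal segment belongs to
        self.synapses = synapses  # list of Synapse

def num_active_potential_synapses(segment, prev_active_cells):
    # Synapses at ANY permanence level whose presynaptic cell was active at t-1.
    return sum(1 for s in segment.synapses
               if s.presynaptic_cell in prev_active_cells)

def best_matching_segment(matching_segments, prev_active_cells):
    # BAMI line 30: pick the segment with the most active potential synapses.
    return max(matching_segments,
               key=lambda seg: num_active_potential_synapses(seg, prev_active_cells))

def learn_on_segment(learning_segment, prev_active_cells):
    # BAMI lines 39-44: reinforce synapses to previously active cells,
    # and punish the rest.
    for synapse in learning_segment.synapses:
        if synapse.presynaptic_cell in prev_active_cells:
            synapse.permanence = min(1.0, synapse.permanence + PERMANENCE_INCREMENT)
        else:
            synapse.permanence = max(0.0, synapse.permanence - PERMANENCE_DECREMENT)
```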

So, if there are two or more predictive cells in a column, only one is chosen as the winner: the most predictive one, meaning the one whose segment has the most synapses with the highest permanences to previously active cells.

Is the winner cell rewarded while the other predictive cells that lost are punished?


If the winner cell is rewarded and the loser cells are punished:

  1. When predictive cells don’t become active, they are punished in some versions of TM. Is the decrement used for that punishment the same as the decrement applied to predictive cells that lost to a winner predictive cell in the paradigm above?

  2. There might be synapses from previously active cells that haven’t formed connections to the winner cell or the loser cells. Do those synapses get their permanences increased or decreased as well?

I looked back through BAMI and noticed something I had not previously. There are two variables which are easy to mix up: winnerCell and winnerCells. The former is the one I mentioned above (the one winner chosen for learning in the case of a bursting minicolumn), and the latter contains all predicted active cells in a minicolumn (there may be more than one) when it is not bursting. So in the case of 2 or more predictive cells in the same minicolumn that become active, they are all winners and all go through learning (and can be connected to in the next time step).
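To make that concrete, here is a condensed sketch of the winner selection under that reading of BAMI. It reuses the toy Segment/Synapse classes from the earlier sketch; the parameter names are mine, not BAMI’s.

```python
import random

def choose_winner_cells(predicted_cells, bursting, matching_segments,
                        column_cells, num_segments, prev_active_cells):
    """Winner cells for one minicolumn.

    predicted_cells   -- the predicted active cells of this column (may be >1)
    num_segments      -- dict mapping cell -> number of segments it owns
    """
    if not bursting:
        # Correctly predicted column: EVERY predicted cell is a winner,
        # learns, and can be connected to at the next time step.
        return list(predicted_cells)
    if matching_segments:
        # Bursting column with matching segments: a single winner, the
        # cell owning the best matching segment.
        best = max(matching_segments,
                   key=lambda seg: sum(1 for s in seg.synapses
                                       if s.presynaptic_cell in prev_active_cells))
        return [best.cell]
    # Bursting with no match: grow on the cell with the fewest segments,
    # breaking ties randomly.
    fewest = min(num_segments[c] for c in column_cells)
    return [random.choice([c for c in column_cells if num_segments[c] == fewest])]
```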

Sorry for the misinformation in my earlier post. I have probably written the algorithm incorrectly in my own implementation and will have to check. I highly recommend reading through BAMI a few times; you will pick up on little things like this each time you go through it.


For anyone curious about where this scenario occurs, consider a TM layer that has learned the following sequences:

A B C D
X B C Y

If one were to do a reset and then input “B” unexpectedly, the “B” minicolumns would burst, and two different contexts for “C” would become predictive. When the next input “C” comes in, both contexts would activate, and (according to BAMI) both would be strengthened as correct predictions.
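Here is a toy trace of that scenario (the cell names are hypothetical, just to label contexts; this is hand-written illustration, not a real TM run):

```python
# Distal predictions learned from the two sequences, keyed by presynaptic
# cell: each context-cell predicts a specific context-cell for the next input.
predictions_from = {
    "B_after_A": {"C_after_AB"},  # learned from A B C D
    "B_after_X": {"C_after_XB"},  # learned from X B C Y
}

# After a reset, "B" is unexpected, so its minicolumns burst: every cell
# activates, including both learned contexts of "B".
active_cells = {"B_after_A", "B_after_X"}

predictive = set()
for cell in active_cells:
    predictive |= predictions_from.get(cell, set())

print(predictive)  # {'C_after_AB', 'C_after_XB'}: both contexts of "C" predicted
```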


From BAMI:

  1. For each active column, learn on at least one distal segment. For every bursting column, choose a segment that had some active synapses at any permanence level. If there is no such segment, grow a new segment on the cell with the fewest segments, breaking ties randomly.

When a new segment is grown, how many synapses are connected to the previously active cells? One or more? Also, what are the initial permanences for those synapses? 1.0? A little above the threshold, or a little below (meaning they are not actually connected)?

This is probably implementation-specific, but I would try to connect to at least twice as many previously active cells as are needed to activate a dendrite. If you randomly generate the permanences using the same probability distribution as before, you should get roughly half of them above the connection threshold. The remaining potential synapses allow for redundancy and give the dendrite a little flexibility to adapt to similar input.
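As a sketch of that suggestion (the threshold values are placeholders, not anything canonical):

```python
import random

ACTIVATION_THRESHOLD = 13    # active connected synapses needed to fire a segment
CONNECTED_PERMANENCE = 0.50  # permanence at/above which a synapse is connected

def grow_new_segment(prev_winner_cells):
    """Return (presynaptic_cell, permanence) pairs for a freshly grown segment."""
    # Sample up to twice as many presynaptic cells as the activation threshold.
    n = min(len(prev_winner_cells), 2 * ACTIVATION_THRESHOLD)
    presynaptic = random.sample(sorted(prev_winner_cells), n)
    # Draw permanences uniformly around the connection threshold, so roughly
    # half start connected; the rest remain potential synapses that can be
    # recruited later as the dendrite adapts to similar input.
    return [(pre, random.uniform(CONNECTED_PERMANENCE - 0.25,
                                 CONNECTED_PERMANENCE + 0.25))
            for pre in presynaptic]
```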
