Confusion in synapse count in TM

I am looking at the pseudocode of TM in BAMI. I am confused about the meaning of numActivePotentialSynapses in line 22. Does it return the number of both ACTIVE synapses and POTENTIAL synapses connecting to cells that were active in the previous timestep?

If that's the case, then we cannot say that the segment will definitely have SYNAPSE_SAMPLE_SIZE active synapses after growing new ones, since potential synapses are also counted toward SYNAPSE_SAMPLE_SIZE.
My argument is that line 22 should just be newSynapseCount = SYNAPSE_SAMPLE_SIZE - numActiveSynapses(t-1, segment)

Please correct me if I am wrong.

The potential synapses would eventually learn to make a connection.
So in the end, the number of active synapses would be approximately SYNAPSE_SAMPLE_SIZE.


This is the number of already-occupied synapses on a segment. All segments have a maximum number of synapses they can grow at one time (SYNAPSE_SAMPLE_SIZE). So the number of previouslyActiveCells that a segment will connect to this time (newSynapseCount) is max - existing (or SYNAPSE_SAMPLE_SIZE - numActivePotentialSynapses).
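In other words, the update can be sketched as a one-liner (a minimal Python sketch; the value of SYNAPSE_SAMPLE_SIZE here is hypothetical, chosen only for illustration):

```python
SYNAPSE_SAMPLE_SIZE = 20  # hypothetical value for illustration

def new_synapse_count(num_active_potential_synapses):
    # Grow only as many new synapses as needed to bring the segment
    # up to the desired sample size; clamp at zero so a segment that
    # already has enough active potential synapses grows none.
    return max(0, SYNAPSE_SAMPLE_SIZE - num_active_potential_synapses)

print(new_synapse_count(12))  # 8 new synapses to grow
print(new_synapse_count(25))  # 0, segment already has enough
```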

Thanks, I understood. Does the potential synapse in numActivePotentialSynapses point only to the potential synapses connecting to the active cells of the previous timestep, or can it include any potential synapse? The latter seems to be the case from your reply; I just want to confirm. The term "active" confuses me here.

Right, good question. For a segment, numActivePotentialSynapses is the number of synapses whose pre-synaptic cells are active at the given timestep (t-1), as shown in line 23.

Here’s how numActivePotentialSynapses is populated from the pseudocode:
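The original snippet doesn't appear in this thread, so here is a hedged Python reconstruction of the counting loop described in the BAMI pseudocode (the lines 63-64 behavior mentioned later in this thread); names and the CONNECTED_PERMANENCE value are adapted for illustration, not copied verbatim:

```python
from collections import namedtuple

# Minimal stand-ins for a segment and its synapses (illustration only)
Synapse = namedtuple("Synapse", ["presynaptic_cell", "permanence"])
Segment = namedtuple("Segment", ["synapses"])

CONNECTED_PERMANENCE = 0.5  # hypothetical threshold

def count_active_synapses(segment, prev_active_cells):
    num_active_connected = 0
    num_active_potential = 0
    for synapse in segment.synapses:
        if synapse.presynaptic_cell in prev_active_cells:
            if synapse.permanence >= CONNECTED_PERMANENCE:
                num_active_connected += 1
            # Any synapse with nonnegative permanence counts as potential,
            # so every active connected synapse is also counted here.
            if synapse.permanence >= 0:
                num_active_potential += 1
    return num_active_connected, num_active_potential
```

Note that numActivePotential is a superset count: it already includes the active connected synapses.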


Understood. Thanks. In that case, should line 22 be

newSynapseCount = (SYNAPSE_SAMPLE_SIZE - numActiveConnectedSynapses - numActivePotentialSynapses(t-1, segment))

We should subtract both the connected and the potential synapses to get the number of new synapses to grow, right? The definition of SYNAPSE_SAMPLE_SIZE says it is the desired number of active synapses.

No, see lines 63 and 64. Variable “numActivePotential” is incremented whether or not the synapse is above the connected permanence. Thus, this count already includes the active connected synapses (no need to subtract that number twice).


Thanks for the clarification. It helped me understand better. One final query:

Suppose a column (say A) bursts at time t, so all the cells in that column are set as active. Now suppose another column (say B) bursts at the next timestep (t+1). Then the learning segment's synapses at time t+1 will increase their permanence values to all the active cells in column A. But since the winner cell at time t in column A is the representation of the input, shouldn't the learning segment at time t+1 increase permanence values only to the winner cell, instead of to all active cells in column A?
I mean line 41 should be
41. if synapse.presynapticCell in winnerCell(t-1)
Please clarify.

Good question. This shouldn’t be a problem in practice, since the learning segment won’t actually have synapses to all the cells in any column. Within the growNewSegment function, new synapses are only grown to prevWinnerCells which are a subset of prevActiveCells. This level of detail isn’t in this pseudocode, but you can find it here in the _growSynapses classmethod:


Not sure if you are interested in differences between implementations @baymaxx, but this is one detail that some implementations differ on (for example Etaler). There is an advantage of connecting with any of the previously active cells rather than only the winner cells – it allows repeating sequences to eventually stabilize with enough iterations. The drawback is that it can lead to more ambiguity, so when implementing it this way, you would probably also want to implement a decay of unused synapses over time.
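One way to sketch that decay idea in Python (a hypothetical illustration, not any particular implementation's code; the DECAY rate and data structures are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    presynaptic_cell: int
    permanence: float

@dataclass
class Segment:
    synapses: list = field(default_factory=list)

DECAY = 0.001  # hypothetical per-timestep decay rate

def decay_unused_synapses(segment, active_cells):
    # Shrink the permanence of synapses whose presynaptic cell did not
    # fire this timestep, and prune any that decay to zero or below.
    survivors = []
    for syn in segment.synapses:
        if syn.presynaptic_cell not in active_cells:
            syn.permanence -= DECAY
        if syn.permanence > 0:
            survivors.append(syn)
    segment.synapses = survivors
```

Run each timestep, this keeps the extra ambiguity from connecting to all previously active cells in check, since synapses that never prove useful eventually disappear.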