Why not start from currently active cells instead of looking at every cell in the structure?

From BaMI:

  1. Activate a set of dendrite segments: for every dendrite segment on every cell in the layer, count how many connected synapses correspond to currently active cells.

Why not start from the currently active cells, count how many segments have active synapses, then trace which other cells in the structure those synapses lead to and make those cells predictive?

It sounds like a major optimization, not having to go through every cell in the layer.


Yes, I think most implementations of HTM use this optimization. There is a similar one in the SP algorithm. It is a huge optimization because of sparsity (tracing forward from only the active cells gives a much smaller list to process than sampling from the receiving side).
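A minimal sketch of this forward-tracing idea (names and data layout are hypothetical, not taken from any particular HTM codebase — `synapses_from` is assumed to map each presynaptic cell to the segments its axon reaches):

```python
from collections import defaultdict

def compute_predictive_cells(active_cells, synapses_from,
                             connected_perm=0.5, activation_threshold=3):
    """Trace forward from the sparse set of active cells only.

    synapses_from: dict mapping a presynaptic cell ID to a list of
    (segment_key, permanence) pairs, where segment_key is
    (owner_cell, segment_index).
    """
    overlap = defaultdict(int)
    for cell in active_cells:
        for segment_key, permanence in synapses_from.get(cell, ()):
            # Only connected synapses count toward segment activation.
            if permanence >= connected_perm:
                overlap[segment_key] += 1
    # Any cell owning a sufficiently active segment becomes predictive.
    return {seg[0] for seg, n in overlap.items() if n >= activation_threshold}
```

The work done is proportional to the number of active cells and their synapses, not to the total number of cells in the layer.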


For each of these correctly active segments, reinforce the synapses that activated the segment, and punish the synapses that didn’t contribute (lines 16-20).

My interpretation:

For each cell that is predictive, look at the active segments that caused it to become predictive. The synapses from previously active cells onto those active segments (the ones that made the cell become predictive) should be reinforced. The synapses onto those same active segments from any other cell in the region, which didn’t contribute, should be punished.

Is this the correct interpretation?

Unfortunately, “any cell” means going through every synapse, in every segment, in every cell, in every mini-column in order to find which synapses are connected to the currently active segments and didn’t contribute.

I believe any cell refers to any synapses (potential or connected) on the activated dendrites. As far as I can tell, there is no compelling reason to modify any other synapses.


“any synapses on the activated dendrites” includes synapses that don’t belong to the previously active cells, which from an optimization standpoint is a nightmare.

The pseudocode in BaMI has segments only on the receiving side, which quickly becomes very computationally expensive. In my implementation, and I would guess in almost any other implementation, the segments have synapses that connect to somewhere else.

Now, I have to:

  1. Either look at every column, every cell, every segment, and every synapse to figure out which cells those other synapses (the ones that don’t belong to the previously active cells) come from.

  2. Or abandon the strictly ‘trace forward from active cells only’ optimization and include two types of synapses in every segment: synapsesConnectedToThisSegment and synapsesConnectingOutwardsFromThisSegment.

There are some points in the BaMI document where it feels like it’s impossible to make this work without building it the way it was initially conceived. For example, when you have to grow a new segment on a random least-used cell.

If you indeed interpret that as global rather than just on the single cell, couldn’t you have timestamp-based global decay with lazy processing (i.e. process the decay the next time you happen to process that cell)?


@mad_stacks Yes, that is a good strategy for global decay (I don’t do global decay in my implementation, but if I ever do, I’ll keep this in mind). However, I think the problem @nick is calling out is related to the learning algorithm rather than global decay.

To optimize the learning algorithm, you can transmit from the active cells only (a small list, due to sparsity). Give each axon a list of receiving synapses, and give each synapse a receiving segment. When transmitting from a cell, iterate over its axon’s receiving synapses, give them an active state, and remember them for later. Also remember the receiving segment for later. At the end of the time step, iterate over the segments that have been remembered, and strengthen/weaken their synapses based on their states. Finally, reset the states of the synapses that you remembered, and clear the two lists.
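The steps above can be sketched roughly like this (the data layout is an assumption: `axons` maps a cell ID to the synapses its axon reaches, and each synapse dict points at a segment dict holding all of that segment's synapses):

```python
def transmit_and_learn(active_cells, axons, perm_inc=0.1, perm_dec=0.05):
    """One time step of 'transmit from active cells, then learn on
    the touched segments only'."""
    touched_synapses = []
    touched_segments = {}  # keyed by id() to avoid deep dict comparison

    # 1. Transmit from the active cells only (a small list, thanks to sparsity).
    for cell in active_cells:
        for syn in axons.get(cell, ()):
            syn['active'] = True
            touched_synapses.append(syn)
            seg = syn['segment']
            touched_segments[id(seg)] = seg

    # 2. Learn only on the segments that were reached this step.
    for seg in touched_segments.values():
        for syn in seg['synapses']:
            if syn.get('active'):
                syn['permanence'] = min(1.0, syn['permanence'] + perm_inc)
            else:
                syn['permanence'] = max(0.0, syn['permanence'] - perm_dec)

    # 3. Reset the remembered states and clear the lists.
    for syn in touched_synapses:
        syn['active'] = False
```

Nothing outside the touched synapses and segments is ever visited, which is the whole point of the optimization.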

There’s a misconception in BaMI.

For each of these correctly active segments, reinforce the synapses that activated the segment, and punish the synapses that didn’t contribute (lines 16-20).

It doesn’t actually do that. The synapses that activated the segment aren’t the potential synapses; only connected synapses count toward the activation of the segment.

The code in lines 16-20 reinforces both potential and connected synapses whose presynaptic cell belongs to the previously active cells, while it punishes synapses whose presynaptic cell does not.

Looking only at the receiving end works well in the first stage of the algorithm, when choosing which cells become active while applying the learning rules.

In the second stage, when it’s time to determine which cells become predictive, you inevitably have to go through every cell in the structure, because there’s no concept of a “giving side”, only a “receiving side”.

My optimization works like this:

Every Segment class has values that state its precise location, plus the synapses that connect both from and to it.

int index
int cellIndex
int columnIndex
array ingoingSynapses
array outgoingSynapses

Every Synapse class has values which essentially mean “connect this parent to this child”, a permanence, and a uniqueID.

int parentSegmentIndex
int parentCellIndex
int parentColumnIndex
int childSegmentIndex
int childCellIndex
int childColumnIndex
float permanence
string uniqueID (random number from 0 to 999999)
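Written out in Python (field names taken directly from the lists above; the dataclass form is just one way to express them):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    index: int
    cellIndex: int
    columnIndex: int
    ingoingSynapses: list = field(default_factory=list)
    outgoingSynapses: list = field(default_factory=list)

@dataclass
class Synapse:
    parentSegmentIndex: int
    parentCellIndex: int
    parentColumnIndex: int
    childSegmentIndex: int
    childCellIndex: int
    childColumnIndex: int
    permanence: float
    uniqueID: str  # random number from 0 to 999999, stored as a string
```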

Example

If the goal is to connect the first segment (index=0) of the second cell (cellIndex=1) in the third column (columnIndex=2) to the first segment (index=0) of the first cell (cellIndex=0) in the 100th column (columnIndex=99), two identical synapses (sharing the same uniqueID) are stored in their correct places.

exampleSynapse

parentSegmentIndex = 0
parentCellIndex = 1
parentColumnIndex = 2
childSegmentIndex = 0
childCellIndex = 0
childColumnIndex = 99
permanence = 0.3
uniqueID = “430589”

Is stored in both:

parentSegment.outgoingSynapses (the segment at 0 of cell 1 in column 2)
childSegment.ingoingSynapses (the segment at 0 of cell 0 in column 99)
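The dual bookkeeping from this example might look like the following (plain dicts used as stand-ins for the Segment/Synapse fields; `make_segment` is a hypothetical helper):

```python
def make_segment(index, cell_index, column_index):
    # Stand-in for the Segment fields: location plus both synapse lists.
    return {'index': index, 'cellIndex': cell_index, 'columnIndex': column_index,
            'ingoingSynapses': [], 'outgoingSynapses': []}

parent_segment = make_segment(0, 1, 2)   # segment 0 of cell 1 in column 2
child_segment = make_segment(0, 0, 99)   # segment 0 of cell 0 in column 99

synapse = {'parentSegmentIndex': 0, 'parentCellIndex': 1, 'parentColumnIndex': 2,
           'childSegmentIndex': 0, 'childCellIndex': 0, 'childColumnIndex': 99,
           'permanence': 0.3, 'uniqueID': '430589'}

# Two identical copies, one on each side, tied together by the uniqueID.
parent_segment['outgoingSynapses'].append(dict(synapse))
child_segment['ingoingSynapses'].append(dict(synapse))
```

Note that if both lists held one shared object instead of two copies, a uniqueID would not be needed to pair them up.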

When it’s time for this procedure:

For each of these correctly active segments, reinforce the synapses that activated the segment, and punish the synapses that didn’t contribute (lines 16-20).

The code in my implementation reinforces both the ingoingSynapses of the active segment (the child) whose parent belongs to the previously active cells, and the identical outgoingSynapses stored on the corresponding segments (the parents) of those previously active cells.

It punishes the ingoingSynapses of the active segment (the child) whose parent doesn’t belong to the previously active cells. Then, using the parent information stored in the synapses themselves and the uniqueID, it traces back to the correct segment (the parent) of a cell that doesn’t belong to the previously active cells and updates the outgoingSynapses there too, without having to check every cell in the structure.
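The punish-and-trace-back step might be sketched like this (names are hypothetical; it assumes a lookup table from (column, cell, segment index) to Segment objects, and `prev_active_cells` as a set of (column, cell) pairs):

```python
def punish_segment(child_segment, prev_active_cells, segments_by_location,
                   perm_dec=0.05):
    """Punish ingoing synapses whose presynaptic (parent) cell was not
    previously active, and mirror the change onto the identical
    outgoing copy on the parent segment."""
    for syn in child_segment.ingoingSynapses:
        parent_cell = (syn.parentColumnIndex, syn.parentCellIndex)
        if parent_cell in prev_active_cells:
            continue  # this synapse contributed; it is not punished here
        syn.permanence = max(0.0, syn.permanence - perm_dec)
        # Direct lookup of the parent segment -- no scan over the structure.
        parent = segments_by_location[(syn.parentColumnIndex,
                                       syn.parentCellIndex,
                                       syn.parentSegmentIndex)]
        # The uniqueID pairs the two identical copies of the synapse.
        for twin in parent.outgoingSynapses:
            if twin.uniqueID == syn.uniqueID:
                twin.permanence = syn.permanence
                break
```

The cost is proportional to the synapses on the one active segment plus the outgoing lists of their parents, rather than to the whole layer.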

In the second stage, the outgoingSynapses in the segments of the active cells predict the correct cells.

Now that I think of it again, the uniqueID probably isn’t needed.