New TP pseudocode?

I may well have holes in my own understanding that I am not aware of, so I was somewhat hesitant to write this much down, but here it goes.

Yes, that is what I understand from the code.

As I pointed out, these two lines are somewhat redundant because unionSDR is strongly correlated with activeCells. But I still think adapting the synapses of both lists of columns serves a purpose.

At the start of every pooling iteration, you apply the default SP with the predicted and active cells of the layer below as input. While the active inputs increase the overlaps as usual, the predicted-active inputs have a higher impact because they are scaled by a larger weight.

# Compute proximal dendrite overlaps with active and active-predicted inputs
overlapsActive = self._calculateOverlap(activeInput)
overlapsPredictedActive = self._calculateOverlap(predictedActiveInput)
totalOverlap = (overlapsActive * self._activeOverlapWeight +
                overlapsPredictedActive *
                self._predictedActiveOverlapWeight).astype(REAL_DTYPE)

The overlaps of all the columns are calculated.
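To make the weighting concrete, here is a minimal NumPy sketch of the weighted overlap computation, using a toy dense connected-synapse matrix. The variable names mirror the snippet above, but the matrix, inputs, and weight values are made up for illustration, not taken from the actual implementation:

```python
import numpy as np

# Toy setup: 4 columns, 6 input bits; connected[c, i] = 1 means column c
# has a connected proximal synapse to input bit i (illustrative values).
connected = np.array([[1, 1, 0, 0, 1, 0],
                      [0, 1, 1, 0, 0, 1],
                      [1, 0, 1, 1, 0, 0],
                      [0, 0, 0, 1, 1, 1]], dtype=float)

activeInput          = np.array([1, 0, 1, 0, 0, 0], dtype=float)
predictedActiveInput = np.array([0, 0, 1, 0, 0, 0], dtype=float)

activeOverlapWeight = 1.0            # plain active inputs count once
predictedActiveOverlapWeight = 10.0  # predicted-active inputs count extra

overlapsActive = connected @ activeInput
overlapsPredictedActive = connected @ predictedActiveInput
totalOverlap = (overlapsActive * activeOverlapWeight +
                overlapsPredictedActive * predictedActiveOverlapWeight)
print(totalOverlap)  # column 2 wins: it matched the predicted-active bit
```

Columns that overlap the predicted-active bits dominate the total, which is the whole point of the weighting.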

if learn:
  boostFactors = numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE)
  self.getBoostFactors(boostFactors)
  boostedOverlaps = boostFactors * totalOverlap
else:
  boostedOverlaps = totalOverlap

Then comes the boosting part as in the default SP.

activeCells = self._inhibitColumns(boostedOverlaps)
self._activeCells = activeCells

Then comes the inhibition, again as in the default SP. In the end, the pooling layer activates the top X% of columns by overlap. Up to this stage, the pooling layer does not consider any prior activation of the input layer, at least not directly. This activation is named activeCells (actually columns, in SP terms).
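As a toy illustration of the "top X% by overlap" selection, here is a global-inhibition sketch. It is a simplification of real SP inhibition (no local inhibition, no stimulus threshold, no tie-breaking); the function name and sparsity value are mine:

```python
import numpy as np

def inhibit_columns(overlaps, sparsity=0.5):
    # Global inhibition sketch: keep the top `sparsity` fraction of
    # columns by overlap; "highest overlaps win", nothing more.
    numActive = max(1, int(round(len(overlaps) * sparsity)))
    winners = np.argsort(overlaps)[::-1][:numActive]
    return np.sort(winners)

print(inhibit_columns(np.array([1.0, 11.0, 12.0, 0.0])))  # → [1 2]
```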

# Decrement pooling activation of all cells
self._decayPoolingActivation()

The overlaps of active columns also increase the pooling activation of those columns, which is a separate variable. This pooling activation decays over time, and this step applies that decay to the pooling activation of every column.
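A minimal sketch of the decay step, assuming a simple exponential decay; the actual decay function and rate in the implementation may differ:

```python
import numpy as np

def decay_pooling_activation(poolingActivation, decayRate=0.1):
    # Every column loses a fixed fraction of its pooling activation
    # per iteration (illustrative exponential decay).
    return poolingActivation * (1.0 - decayRate)

pa = np.array([10.0, 0.0, 5.0])
pa = decay_pooling_activation(pa)
print(pa)  # all activations scaled down by 10%
```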

# Update the poolingActivation of current active Union Temporal Pooler cells
self._addToPoolingActivation(activeCells, overlapsPredictedActive)

This is the part where the currently active columns (activeCells) increase their own pooling activation in proportion to the strength of their current overlaps.
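A sketch of that update, assuming the simplest possible rule: add each active column's predicted-active overlap to its pooling activation. The real implementation may combine the two through a more elaborate excitation function; the names here just mirror the call above:

```python
import numpy as np

def add_to_pooling_activation(poolingActivation, activeCells,
                              overlapsPredictedActive):
    # Only the currently active columns are reinforced, each by the
    # strength of its own predicted-active overlap (sketch).
    poolingActivation[activeCells] += overlapsPredictedActive[activeCells]
    return poolingActivation

pa = np.array([9.0, 0.0, 4.5, 0.0])
activeCells = np.array([1, 2])
overlapsPredictedActive = np.array([0.0, 1.0, 1.0, 0.0])
print(add_to_pooling_activation(pa, activeCells, overlapsPredictedActive))
```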

# update union SDR
self._getMostActiveCells()

This part finds the top X% of columns according to pooling activation, not the current overlaps. The implementation calls this list of columns unionSDR. The function would be better named something like getMostPooledCells to prevent confusion.

In short:
activeCells → columns with the highest overlaps at the current iteration.
unionSDR → columns with the highest pooling activation at the current iteration.
So these are two different lists, each recalculated at every iteration. They share some columns because the pooling activations of columns are affected by the overlaps. Implicitly, unionSDR is updated by activeCells.
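The divergence between the two lists can be seen in a toy simulation. This is a sketch under my own simplified rules (illustrative names, global inhibition, plain exponential decay), not the real API, but it shows how slow decay lets unionSDR keep a column that has already dropped out of activeCells:

```python
import numpy as np

def step(poolingActivation, overlaps, k=2, decay=0.1):
    activeCells = np.argsort(overlaps)[::-1][:k]         # inhibition winners
    poolingActivation = poolingActivation * (1 - decay)  # decay every column
    poolingActivation[activeCells] += overlaps[activeCells]
    unionSDR = np.argsort(poolingActivation)[::-1][:k]   # most pooled columns
    return poolingActivation, set(activeCells.tolist()), set(unionSDR.tolist())

pa = np.zeros(6)
# Step 1: columns 0 and 1 dominate the input
pa, active, union = step(pa, np.array([5.0, 4.0, 0.0, 0.0, 0.0, 0.0]))
# Step 2: the input switches to columns 2 and 3
pa, active, union = step(pa, np.array([0.0, 0.0, 6.0, 3.0, 0.0, 0.0]))
print(active)  # {2, 3}: the current inhibition winners
print(union)   # {0, 2}: column 0 is still pooled from step 1
```

After the input switches, the overlap winners change immediately, while the pooled set still carries a column from the previous pattern.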

Now, back to those two lines. The implementation adapts the proximal dendrites of the columns in both of these lists to the same current input. The adaptation of unionSDR seems obvious, since we are doing pooling, but I am not sure about the necessity of adapting activeCells here. Hence it seems somewhat redundant to me.

Still, adapting the synapses of activeCells (columns) helps bias the learning towards the most recent activation. For example, if a column is in both lists, then it learns a lot faster than the pooled columns that are only in unionSDR, which may or may not have been activated recently.
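To see the recency bias, here is an illustrative sketch (hypothetical adapt helper, made-up increment and decrement values): a column present in both lists receives two permanence updates in the same iteration, so it converges on the current input twice as fast:

```python
import numpy as np

def adapt(permanences, columns, inputVector, inc=0.1, dec=0.02):
    # Hypothetical proximal-synapse adaptation: for each listed column,
    # nudge permanences up on active input bits, down on inactive ones.
    for c in columns:
        permanences[c] += np.where(inputVector > 0, inc, -dec)
    return permanences

perm = np.zeros((3, 4))             # 3 columns, 4 input bits
inputVector = np.array([1, 0, 1, 0])
activeCells = [0, 1]
unionSDR = [1, 2]                   # column 1 is in both lists

adapt(perm, activeCells, inputVector)
adapt(perm, unionSDR, inputVector)
print(perm[1])  # column 1 got both updates, so it learned twice as fast
```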

I am afraid the best way to grasp the whole thing is to try implementing it, because it is not straightforward.
