Based on the implementation details given in the titled paper (and also in the 2016 Continuous Online Sequence Learning paper), I have the following two questions:
In equation (4) of the titled paper, what happens if more than one cell is predicted in a single mini-column? Will all of the predicted cells' active dendrites receive reinforcement of their synapses connected to active presynaptic cells? If yes, does this not negatively affect the storage efficiency of the cells of the network?
Equation (3) does not apply any columnar inhibition to the prediction of a cell; only equation (2) applies columnar inhibition to the activity of a cell.
On a similar note to the first question: in equation (5), what happens if more than one dendritic segment (among all the dendritic segments on all the cells in a particular unpredicted mini-column) has the maximum overlap with the previous time step's activity pattern?
1- All the predictive cells within the active mini-column become active. However, reinforcement of the synaptic connections is applied only to the learning cells.
If yes, does this not negatively affect the storage efficiency of the cells of the network?
There is always some likelihood of activating two or more cells within the same mini-column, but it is very low. So I would say yes, it negatively affects the storage efficiency, but the effect is negligible.
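To make the reinforcement step concrete, here is a minimal sketch of the equation (4)-style Hebbian permanence update applied to one active segment of a correctly predicted (learning) cell. All names and the increment/decrement constants are my own illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative permanence constants -- an assumption, not the paper's values.
P_INC, P_DEC = 0.1, 0.02

def reinforce(segment_perms, presyn_active, p_inc=P_INC, p_dec=P_DEC):
    """Hebbian-style update for one dendritic segment.

    segment_perms: 1-D array of synapse permanences on the segment.
    presyn_active: boolean array, True where the presynaptic cell was
    active at the previous time step.
    Synapses to previously active cells are incremented; the rest are
    decremented; permanences stay clipped to [0, 1].
    """
    updated = segment_perms + np.where(presyn_active, p_inc, -p_dec)
    return np.clip(updated, 0.0, 1.0)

# Example: reinforce a four-synapse segment.
perms = np.array([0.3, 0.5, 0.95, 0.1])
active = np.array([True, False, True, False])
new_perms = reinforce(perms, active)
```

In this sketch the update would be applied only to segments of cells selected as learning cells, matching the point above that bursting or multiply-predicted columns do not reinforce every active cell.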
2- If a cell has one or more active dendritic segments, that cell becomes predictive. During learning, all of its active dendritic segments will be updated (assuming the cell's status changes from predictive to active).
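For the equation (5) case, one common implementation choice (an assumption on my part, not something the paper prescribes) is to break ties between equally matching segments at random. A minimal sketch, with hypothetical names:

```python
import random

def best_matching_segment(segments, prev_active, rng=random):
    """Pick the best-matching segment in a bursting mini-column.

    segments: list of (cell_id, presynaptic_cell_ids) pairs, one per
    dendritic segment on any cell of the column (hypothetical layout).
    prev_active: set of cell ids active at the previous time step.
    Returns one segment with maximal overlap; ties are broken at random.
    """
    overlaps = [len(syns & prev_active) for _, syns in segments]
    best = max(overlaps)
    candidates = [seg for seg, ov in zip(segments, overlaps) if ov == best]
    return rng.choice(candidates)

# Example: segment on cell 1 overlaps all three previously active cells,
# so it wins outright; with a tie, one candidate is chosen at random.
segments = [(0, {1, 2}), (1, {1, 2, 3}), (2, {5})]
winner = best_matching_segment(segments, {1, 2, 3})
```

Random tie-breaking keeps the choice unbiased across cells; a deterministic rule (e.g. lowest cell index) would work too but systematically favors some cells over others.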
Thank you for your response!
I’ll implement this in my ongoing HTM implementation for learning simple and embedded Reber grammars and see how the network evolves.