At 59:36, a diagram is shown with the different segments of the temporal pooler and how any (OR) of them might set the current cell in predictive state. We can see that each segment is of equal size. Let’s call the segments s1, s2, … sn, and the elements within each segment sn[1], sn[2], … sn[m]. Q1: does s1[i] refer to the same cell as s2[i], s3[i], …, differing only in whether that cell is active or not, i.e. does each segment refer to a different activation pattern of the same cells?
I guess each segment refers to the same set of cells because of the way it is explained that new segments are added right before 1:19:51.
Q2: since segments are added dynamically like this, how and when are segments removed? If I recall correctly, HTMs are able to forget old patterns and learn completely new ones, so segments must get removed somewhere. Or is it done just via the “global, age-based decay”?
Here are answers according to my understanding.
Q1: Not quite. The dendrite segments don’t really contain cells. They contain synapses that connect to other cells within the same layer. So s1 is a synapse that connects to some cell within the layer, and s2 would be a different synapse to another cell.
You are correct that each segment is a set of synapses that connect to a subsample of the same set of cells.
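To make the relationship concrete, here is a minimal sketch of that structure: cells own segments, segments own synapses onto other cells, and a cell is predictive if ANY segment is active. All names (`Synapse`, `Segment`, `Cell`, the thresholds) are illustrative, not NuPIC’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    presynaptic_cell: int   # index of the cell this synapse connects to
    permanence: float       # connection strength in [0, 1]

@dataclass
class Segment:
    synapses: list = field(default_factory=list)

    def active(self, active_cells, connected_perm=0.5, threshold=3):
        """Active if enough connected synapses see currently active cells."""
        n = sum(1 for s in self.synapses
                if s.permanence >= connected_perm
                and s.presynaptic_cell in active_cells)
        return n >= threshold

@dataclass
class Cell:
    segments: list = field(default_factory=list)

    def predictive(self, active_cells):
        """A cell enters the predictive state if ANY segment is active (OR)."""
        return any(seg.active(active_cells) for seg in self.segments)

# Two segments on one cell sample (possibly overlapping) subsets of the
# same layer's cells -- each segment encodes a different context.
cell = Cell(segments=[
    Segment([Synapse(1, 0.6), Synapse(2, 0.7), Synapse(3, 0.8)]),
    Segment([Synapse(2, 0.6), Synapse(4, 0.9), Synapse(5, 0.7)]),
])
print(cell.predictive({1, 2, 3}))  # first segment matches -> True
print(cell.predictive({6, 7, 8}))  # no segment matches -> False
```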
Q2: I’m not certain how this is implemented, but your guess sounds likely to me.
Q2: The current implementation has a way to forget false predictions by decaying the synapses of segments. If a cell becomes depolarized by any of its segments (enters the predictive state) but does not become active in the next step, its synapses are decayed. There could be a global decay too (I use one), but the mechanism described above does the forgetting that actually helps learning.
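A minimal sketch of that forgetting step, under assumed names and data layout (this is not the actual NuPIC `_punishPredictedColumn`, just the idea: decrement permanences on cells that predicted but did not fire):

```python
PUNISH_DECREMENT = 0.01  # illustrative value

def punish_false_predictions(predicted_cells, active_cells, segments_of):
    """Decay synapses of cells that were predictive last step but did
    not become active this step. `segments_of` maps a cell index to a
    list of segments; each segment is a list of synapse dicts."""
    for cell in predicted_cells:
        if cell in active_cells:
            continue  # correct prediction, nothing to punish
        for segment in segments_of[cell]:
            for synapse in segment:
                synapse['permanence'] = max(
                    0.0, synapse['permanence'] - PUNISH_DECREMENT)

# Cell 0 predicted but stayed inactive, so its synapse is weakened.
segments = {0: [[{'cell': 1, 'permanence': 0.5}]]}
punish_false_predictions({0}, set(), segments)
print(segments[0][0][0]['permanence'])
```

Synapses whose permanence decays below the connection threshold stop contributing to predictions, and a segment whose synapses have all decayed away is effectively dead; this is where segment removal (Q3 of the original post aside) can naturally hook in.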
Check out the _punishPredictedColumn function below.