“back-activation” is a term I think I made up to describe the process of activating the columns and their distal connections to produce an “output”. This is related to your third question, which I’ll answer next.
Our traditional use of SP/TM has been in a forward-pass manner: data is encoded to bits, and the minicolumns check their distal overlap with the encoded bits to find their activation score; the chosen minicolumns then strengthen the overlapping connections, which we’ve been referring to as “distal connections”. My idea is to run this in reverse: have the minicolumns activate their strongly connected bits in a clone of the input space. We would activate the predicted columns, as determined by TM. The bits in the “output encoding” space might be a little noisy, so it might be useful to have an activation score for each bit, so that “weak” bits (ones that only a single column votes for, likely as noise) don’t make it into the final output encoding.
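Here’s a rough sketch of what that back-activation step could look like in Python. This is illustrative only: the permanence matrix, the `connected_threshold`, `min_votes`, and the function name are all my own stand-ins, not from any existing HTM library.

```python
import numpy as np

def back_activate(predicted_columns, permanences,
                  connected_threshold=0.5, min_votes=2):
    """Project predicted columns back onto a clone of the input space.

    Each predicted column casts a 'vote' for every input bit it is
    strongly connected to; bits with too few votes are treated as
    noise and dropped from the final output encoding.
    """
    # Boolean mask of strong ("connected") synapses per column.
    connected = permanences >= connected_threshold

    # Per-bit activation score: how many predicted columns vote for it.
    votes = connected[predicted_columns].sum(axis=0)

    # Keep only bits supported by enough columns to suppress noise.
    return np.flatnonzero(votes >= min_votes)

# Toy usage with random connection state, just to show the shapes.
rng = np.random.default_rng(0)
permanences = rng.random((2048, 1024))   # (num_columns, input_size)
output_bits = back_activate([3, 17, 42, 99], permanences)
```

The per-bit vote count here plays the role of the “activation score for each bit” mentioned above, with `min_votes` as the cutoff for weak bits.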
So if we activate the “state pool(s)”, we would look at which of their minicolumns are currently predictive. The idea is that, given a current situation represented by the First Level IO node, the state pool(s), which remember the patterns that cause desired change, would know which SDR in the First Level IO would advance the goal. The pool would take those predictive minicolumns and have them reach down their distal connections (into the first SP/TM node). The bits those columns activate would then produce our output encoding.
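To make that flow concrete, here’s a hedged sketch under my own assumptions: `state_pool.get_predictive_columns()` and `first_level_node.permanences` are hypothetical stand-ins for the state pool’s TM output and the first node’s connection state, and `back_activate` is reused from the sketch above.

```python
def produce_output(state_pool, first_level_node):
    # 1. Ask the state pool's TM which of its minicolumns are
    #    predictive given the current First Level IO state.
    predictive_cols = state_pool.get_predictive_columns()

    # 2. Those columns reach back down their distal connections into
    #    the first SP/TM node, back-activating bits there to form the
    #    output encoding.
    return back_activate(predictive_cols, first_level_node.permanences)
```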
In this manner, the SP/TM combination will act not only as a memory of input patterns (what we’re already familiar with), but also as an output system.
Correct. The intent is for the modulator to associate a desired goal with the state pool(s). So if we use an inverted anomaly score (where correctly predicting pools are given a higher score), we can encourage the modulator to make this association between state, goal, and which pools are potentially capable of advancing that goal.
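For illustration, the inversion is just the complement of the usual TM anomaly score, so a pool that predicted correctly scores near 1. The `pool.anomaly_score()` accessor below is a hypothetical stand-in for however each pool exposes its score.

```python
def rank_pools_for_goal(pools):
    """Order pools by how well they predicted, best first.

    The inverted score (1 - anomaly) rewards correct prediction,
    giving the modulator a signal for which pools to associate
    with the current state and goal.
    """
    scored = [(1.0 - pool.anomaly_score(), pool) for pool in pools]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored
```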