Approaching the TRANSFORMATION idea!


I was thinking the other day about how to bootstrap Jeff's new discoveries into an algorithmic solution.

I will use graphical 2D abstraction for illustration.
Let’s have 2 sensors “watching” a ball rolling on a table. Also, let’s take the easier case first, where the sensors do NOT move.
And once again, as a simplification, we will use a single SP+TM combo.
So the X-sensor will provide the X-coord and the Y-sensor the Y-coord.

What is the easiest way to make the system do prediction ?
We simply merge the two SDRs, i.e. before they go into the SP we make a union of them.
Sparsity guarantees us that we should be OK.
One disadvantage of this approach is that the SDRs coming from the encoders are almost certainly not well suited for merging, so how do we solve this … plus we only get spatial relations with this.
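A minimal sketch of the union step in plain NumPy (not NuPIC; the SDR width and active-bit counts below are made-up numbers, and the random vectors stand in for real encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(42)

N = 2048      # SDR width (hypothetical)
ACTIVE = 40   # ~2% active bits per sensor SDR (hypothetical)

def random_sdr(n=N, active=ACTIVE):
    """A random sparse binary vector standing in for one encoder output."""
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[rng.choice(n, size=active, replace=False)] = 1
    return sdr

x_sdr = random_sdr()   # from the X-sensor encoder
y_sdr = random_sdr()   # from the Y-sensor encoder

# The "merge" is just a bitwise OR (union) of the two SDRs.
merged = x_sdr | y_sdr

# Sparsity is what makes this workable: at ~2% each, the expected
# overlap is under one bit, so the union stays close to 4% active.
print(merged.sum())
```

The point of the sparsity remark above: with dense codes the union would quickly saturate, but at 2% density two unrelated SDRs barely collide, so both inputs remain recoverable from the merged code.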

What if we use two SP+TM combos, one for every sensor!?
Now we extract SDRs out of them; this time they are well suited for a UNION. So we merge them and pass the result to a 3rd SP+TM combo. Finally we get good forward prediction.
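The wiring could be sketched like this (the `make_region` placeholder below is purely illustrative, an identity function, not a real SpatialPooler/TemporalMemory pair):

```python
def make_region():
    """Placeholder for an SP+TM combo; identity here, just to show the wiring."""
    def compute(sdr):
        # a real region would spatially pool and temporally predict
        return sdr
    return compute

x_region = make_region()     # SP+TM for the X-sensor
y_region = make_region()     # SP+TM for the Y-sensor
top_region = make_region()   # 3rd SP+TM on the merged stream

def step(x_sdr, y_sdr):
    # 1. Each sensor stream gets its own SP+TM combo.
    x_out = x_region(x_sdr)
    y_out = y_region(y_sdr)
    # 2. Their outputs (now union-friendly SDRs) are merged bitwise ...
    merged = [a | b for a, b in zip(x_out, y_out)]
    # 3. ... and fed to the top combo, which does the forward prediction.
    return top_region(merged)
```

Only the data flow matters here: encoder → per-sensor SP+TM → union → top SP+TM.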
This is where it gets interesting … we would use the output of the 3rd (top) SP+TM for feedback. How?

We “map” every input to every output at every time step. We don’t do a full-SDR-to-SDR map, but more like output-SDR <=to=> input-bit-of-SDR (the way the NuPIC classifier does it will probably suffice).
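Something in the spirit of that map, as a toy co-occurrence counter (this is NOT the actual NuPIC classifier algorithm, just the output-bit → input-bit idea; the class and method names are mine):

```python
from collections import defaultdict

class ReverseMap:
    """Toy output-bit -> input-bit co-occurrence map."""

    def __init__(self):
        # counts[out_bit][in_bit] = how often the two were active together
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, output_sdr, input_sdr):
        """output_sdr / input_sdr: sets of active bit indices at one time step."""
        for o in output_sdr:
            for i in input_sdr:
                self.counts[o][i] += 1

    def reverse(self, output_sdr, top_k=40):
        """Given a top-level output SDR, vote for the input bits it implies."""
        votes = defaultdict(int)
        for o in output_sdr:
            for i, c in self.counts[o].items():
                votes[i] += c
        ranked = sorted(votes, key=votes.get, reverse=True)
        return set(ranked[:top_k])
```

So `learn()` is called on every (input, output) pair at every step, and `reverse()` later turns a top-level SDR back into a set of likely input bits.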

OK, we’ve got this virtual classification map; how do we “feed it back”?

Our first approximation would be to merge the input SDR with whatever comes up from this reverse mapping. But this is crude. What we could do instead is the following …

The reverse-map-generated SDR should be transmitted directly to the neurons in the first two TMs via a separate distal segment. (This goes to every neuron on the first layer, but on a column basis: the map returns 1 bit per column, not per neuron.) This way it does not mix directly with the input, but instead reinforces or weakens predictions. (BTW, we have two “maps”, one for each TM.)
It will work on the same principle: initially “predict” (not activate) the whole column, and learn over time to predict only specific neurons.
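A very rough sketch of that column-to-cell narrowing (all names and numbers here are hypothetical; a real TM would store this as permanences on an actual distal segment, not a bare matrix):

```python
import numpy as np

N_COLUMNS = 64
CELLS_PER_COLUMN = 4

# Per-cell "feedback permanence": how strongly top-down feedback predicts
# this particular cell. Uniform at the start, so feedback initially puts
# the WHOLE column into the predictive state.
feedback_perm = np.full((N_COLUMNS, CELLS_PER_COLUMN), 0.5)

def feedback_predictions(feedback_columns, threshold=0.4):
    """Cells put into the predictive state by top-down feedback alone.
    feedback_columns: the 1-bit-per-column SDR from the reverse map."""
    predicted = np.zeros((N_COLUMNS, CELLS_PER_COLUMN), dtype=bool)
    for col in feedback_columns:
        predicted[col] = feedback_perm[col] >= threshold
    return predicted

def learn_feedback(feedback_columns, active_cells, inc=0.1, dec=0.05):
    """Narrow the feedback from whole columns to specific cells:
    reinforce cells that actually became active, punish the rest."""
    for col in feedback_columns:
        for cell in range(CELLS_PER_COLUMN):
            if (col, cell) in active_cells:
                feedback_perm[col, cell] = min(1.0, feedback_perm[col, cell] + inc)
            else:
                feedback_perm[col, cell] = max(0.0, feedback_perm[col, cell] - dec)
```

After a few learning steps only the cells that keep winning in a fed-back column stay above threshold, i.e. the column-level prediction sharpens into a cell-level one.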

(Or there could be a fourth state, predicted-prediction :slight_smile: )
In essence we do reverse prediction … or is it predict-prediction?

So this is it: a union of the first-layer SP+TM outputs before they go up, plus feedback to every neuron, via a separate distal segment, for prediction.

This is again the case of non-moving sensors.

I have a hunch of how to approach moving sensors, but I will have to let the idea germinate before I share it.

What do you think !?