I’m still trying to figure out different ways to implement grid-cell modules.
But in the meantime, what if I told you that you can do the Location-Sense loop w/o grid cells!? Whaa…aa…aat?
Yeah, I think it is doable. I will be talking about 1D space.
First, let's see what the requirements are:
- Unique locations
- Multiple separate maps
- The Sense has to influence/nudge the Location
Here is my line of thought…
We start with the possible movement commands.
There are two of them, forward and backward (or, if you prefer, left and right, or up and down). There is no other possibility: it's 1D.
The coordinate system's range is limited by the Sensor, and the metric of the Move command is 1 sensor-unit. (Every sensor has limited resolution.)
We have to pick a representation range, let's say 0…100 or 0…1000 sensor-units (the job of the Encoder is to encode/decode in that range: INT <=> SDR).
If we want to cover larger ranges we can use SCALE and nest different MAPs (requirement 2), or use a second CC.
The consequence of all this is that we can use an Integer encoder to manage the MOVE => LOCATION conversion.
The grid quality comes from the limited range. Once we reach the edge of the Sensor range we have two options:
- roll over
- switch to a Scaled-up map
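The roll-over option is easy to sketch. Here is a minimal example of the 1D move => location update with roll-over at the sensor edge; the range and function names are my own, not from any existing code:

```python
RANGE_MAX = 100  # representation range in sensor-units, as in the 0..100 example

def move(pos, cmd):
    """Apply a move command (+1 forward, -1 backward) with roll-over
    at the edges of the sensor range."""
    return (pos + cmd) % (RANGE_MAX + 1)

print(move(100, +1))  # rolls over to 0
print(move(0, -1))    # rolls over to 100
print(move(50, +1))   # ordinary step: 51
```

Python's modulo keeps the result in 0..RANGE_MAX even for negative positions, which is exactly the roll-over behavior we want.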
We can use the NaiveSkipEncoder I mentioned earlier:
a new MAP (requirement 2) can be simulated by creating a new NaiveSkipEncoder object, because the Encoder is based on an initial, randomly generated lookup-table of SDRs, i.e. every Encoder generates a correct and sufficient but different mapping for the Integer <=> SDR conversion, unique across NaiveSkipEncoder objects, fulfilling requirements 1 and 2.
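I don't have the NaiveSkipEncoder code in front of me, but the idea as described (a randomly generated lookup-table of SDRs, one per integer, with decode by best overlap) could look roughly like this; the class internals, parameter names, and sizes below are my own assumptions, not the real implementation:

```python
import numpy as np

class NaiveSkipEncoder:
    """Sketch of a lookup-table encoder: one random SDR per integer in the
    range. Each object gets its own random table, so two encoder objects
    realize two different MAPs (unique locations + separate maps)."""

    def __init__(self, max_value=100, sdr_size=512, n_active=20, seed=None):
        rng = np.random.default_rng(seed)
        self.sdr_size = sdr_size
        # one random set of active-bit indices per integer value
        self.table = [rng.choice(sdr_size, size=n_active, replace=False)
                      for _ in range(max_value + 1)]

    def encode(self, value):
        sdr = np.zeros(self.sdr_size, dtype=np.uint8)
        sdr[self.table[value]] = 1
        return sdr

    def decode(self, sdr):
        # pick the integer whose stored SDR overlaps the input the most
        overlaps = [int(sdr[idx].sum()) for idx in self.table]
        return int(np.argmax(overlaps))

a = NaiveSkipEncoder(seed=1)
b = NaiveSkipEncoder(seed=2)
x = a.encode(42)
print(a.decode(x))  # 42 : same encoder round-trips its own SDRs
```

Because `a` and `b` have different random tables, `a.encode(42)` and `b.encode(42)` are (with overwhelming probability) different SDRs, which is what gives us separate maps for free.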
Now comes the final step: implementing the L4-L6 loop.
We have a Translation module, whose purpose is to calculate the new position:
POS_t = POS_t-1 + Move_cmd
Its second function is to adjust/nudge the position based on feedback from the Sense layer:
POS_curr = POS_adjusted
(or an average, or some other scheme).
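The two functions of the Translation module can be sketched in a few lines. This is only one possible scheme, assuming roll-over at the edges and a plain average for the nudge (the weighting is an open design choice):

```python
class Translation:
    """Sketch of the Translation module: integrates move commands and
    nudges the position toward the Sense-layer feedback."""

    def __init__(self, pos=0, range_max=100):
        self.pos = pos
        self.range_max = range_max

    def step(self, move_cmd):
        # POS_t = POS_t-1 + Move_cmd, with roll-over at the edge
        self.pos = (self.pos + move_cmd) % (self.range_max + 1)
        return self.pos

    def nudge(self, pos_adjusted):
        # correction from the Sense feedback: here a plain average,
        # but POS_curr = POS_adjusted or any other scheme would work too
        self.pos = round((self.pos + pos_adjusted) / 2)
        return self.pos

t = Translation(pos=10)
print(t.step(+1))   # 11
print(t.nudge(13))  # averages toward the sensed position: 12
```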
Here is the diagram:
The MERGE combines the predicted Location-TM SDR with the Location SDR generated by the Encoder (i.e. a UNION of the two SDRs) and passes the result to the Sense-TM.
The merged SDR also goes back to the Encoder to be decoded, and the result is passed back to correct the current position.
So instead of Grid modules we have an Encoder and a Location-TM.
BTW, in step 5 an SDR is passed from the Encoder to the Location-TM, so that it can learn the Sense => Location transition/prediction.
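The MERGE step itself is trivial if the SDRs are dense 0/1 arrays; a minimal sketch:

```python
import numpy as np

def merge(predicted_sdr, encoded_sdr):
    """MERGE: union of the predicted Location-TM SDR and the Encoder's
    Location SDR (both dense 0/1 numpy arrays of the same size)."""
    return np.logical_or(predicted_sdr, encoded_sdr).astype(np.uint8)

a = np.array([1, 0, 1, 0], dtype=np.uint8)
b = np.array([0, 0, 1, 1], dtype=np.uint8)
print(merge(a, b))  # [1 0 1 1]
```

The union keeps both hypotheses alive: bits from the Encoder (motor-derived location) and bits from the Location-TM's prediction, so the Sense-TM sees the combined evidence.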
What do you think ?