Why do we need so many grid modules?

I was wondering why we need so many grid modules, and whether a naive implementation could be simplified.
Here is what I have in mind.

As you know, every module represents a grid. The brain uses many overlapping grids to pinpoint the exact location.
When more grid modules are excited, we can be more confident that the position is exact.
The more, the better.
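
To make the intuition concrete, here is a tiny illustration (my own sketch, not from any Numenta code) of how intersecting the readings of several grid modules with different periods narrows the set of candidate positions; the periods, world size and tolerance are made-up numbers:

```python
def candidates(phase, period, world_size, tolerance):
    """All positions in [0, world_size) consistent with a fuzzy phase reading."""
    return {x for x in range(world_size)
            if min((x - phase) % period, (phase - x) % period) <= tolerance}

world_size = 1000
true_position = 637
periods = [7, 11, 13, 17]           # hypothetical module scales
tolerance = 1                       # sensor fuzziness, in position units

surviving = set(range(world_size))
for period in periods:
    phase = true_position % period  # what this module "senses"
    surviving &= candidates(phase, period, world_size, tolerance)
    print(f"after module with period {period}: {len(surviving)} candidates left")
```

Each module on its own leaves many possible positions; every extra module cuts the set down further, which is the "the more, the better" effect.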

My speculation is that the reason for this behaviour is that these are sensor-grids, not environment-reality-grids.
The sensors are sensors, not reality, i.e. they are unreliable and fuzzy.
That's why we need many overlapping grids.

If we had a virtual environment where everything is known with precision, then all we would need
is a single grid (scaled to our current scale), or at most two: one linear and one radial.

Now to get to my ask. If my thoughts are somewhat correct, I'm looking for a simplification of the TBT CC algorithm in a virtual environment.

The idea is to use just one or two grids with exact locations (a rough sketch follows below).
In addition, the inner loops L4<->L6b and L5<->L6b are also simplified, because the “Kalman-filter”-like behaviour is unnecessary,
i.e. I can work with EXACT predictions rather than with UNIONS of locations.
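
Here is a minimal sketch of what I mean by an “exact” single-module encoder for a virtual environment. The names (GridModule, encode, predict_next) and sizes are hypothetical, not taken from any existing implementation:

```python
import numpy as np

class GridModule:
    """One grid module for a world where positions are known exactly (hypothetical)."""
    def __init__(self, scale, cells_per_axis=10):
        self.scale = scale            # spatial period of this module
        self.n = cells_per_axis       # number of cells along each axis

    def encode(self, position):
        """Map an exact (x, y) position to the single active cell in this module."""
        phase = (np.asarray(position, dtype=float) % self.scale) / self.scale
        return tuple(int(c) for c in (phase * self.n))   # no union: exactly one cell

    def predict_next(self, position, movement):
        """Exact prediction: shift the known position by the known movement."""
        return self.encode(np.asarray(position, dtype=float) + np.asarray(movement))

module = GridModule(scale=1.0)
print(module.encode((3.25, 7.80)))                        # current location cell
print(module.predict_next((3.25, 7.80), (0.15, -0.05)))   # predicted next cell
```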

Can I make those assumptions for my goal: an initial naive implementation of the CC algorithm?

This might work as an encoder for a use-case where you are encoding a position that is known precisely by the application. I think where it breaks down WRT the CC algorithm, is that the grid cell mechanism is used not only to encode physical space, but also high-dimensional spaces – and those are learned. One example I recall from a research meeting was something like the length of bird necks. There are dimensions which are learned from experience, and this mechanism where lots of fuzzy votes together represent something with higher precision is probably needed for that.


That’s cool… but what about adding dimensions? In a virtual env you know how many there are.

How do you learn a (non-spatial?) dimension? It would be a game changer if we knew the mechanism!

This encoding strategy would work in any case where the dimensions (however many) are known and measured precisely by the application. I think where it doesn’t work is when those dimensions are not known in advance and have to be learned by the algorithm through experience.

I don’t have the answer to that myself. My current line of thinking is that this is accomplished in some way by leveraging the topology of hex-grids. Grid patterns encoded by a collection of properties will transition in predictable/learnable ways as those properties transition smoothly.


What do we need to bootstrap a GRID?

  • object, features
  • similarity | measurement

The process should be something like self-organizing maps (sketched below)…

OR does it split off from an overused GRID…

If we get features one after another, we can compare them and place them next to each other… Once we have enough of them, a GRID should emerge!
Something like space-time vs. mass… mass changes space-time, and space-time influences how mass moves!

Can we find a GRID that is closest to a spatial or time GRID, but is neither of them…?
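
To give the “place them next to each other” idea some shape, here is a tiny self-organizing-map-style sketch (purely illustrative, with made-up sizes): nodes on a 1-D strip learn to arrange incoming feature vectors so that similar features end up at neighbouring positions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 20, 3
weights = rng.random((n_nodes, dim))             # each node's current feature estimate

for t in range(2000):
    feature = rng.random(dim)                    # features arrive one after another
    winner = np.argmin(np.linalg.norm(weights - feature, axis=1))
    lr = 0.5 * np.exp(-t / 1000)                 # learning rate decays over time
    sigma = 3.0 * np.exp(-t / 1000) + 0.5        # neighbourhood shrinks over time
    dist = np.abs(np.arange(n_nodes) - winner)   # distance along the 1-D strip
    influence = np.exp(-(dist ** 2) / (2 * sigma ** 2))[:, None]
    weights += lr * influence * (feature - weights)   # pull neighbours toward the input

print(np.round(weights, 2))                      # neighbouring nodes end up similar
```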

One strategy is to start with any sparse activation. Give each active cell a ring of excitation some distance from the cell, filled in with a circle of inhibition. Run a competition, where areas with more overlapping excitation and less overlapping inhibition win out and form a new sparse activation. Repeat a couple more rounds, and a grid pattern will emerge.

Now adjust some of the semantics of the original SDR, repeat the process, and see how the resulting grid pattern changes compared to the first one. You’ll notice a topology emerges where bumps of activation move around based on the changing semantics of the input SDRs.
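
A rough sketch of this procedure (my own paraphrase of the above, with arbitrary sheet size, sparsity and ring parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_active = 60, 120               # 60x60 sheet of cells, 120 winners per round
ring_radius, ring_width = 6.0, 1.5     # excitation ring; everything inside it inhibits

def kernel(d):
    """+1 on a ring at ring_radius, -1 inside the ring, 0 farther away."""
    return np.where(np.abs(d - ring_radius) <= ring_width, 1.0,
                    np.where(d < ring_radius, -1.0, 0.0))

ys, xs = np.mgrid[0:size, 0:size]
active = rng.choice(size * size, n_active, replace=False)   # round 0: any sparse activation

for _ in range(4):                     # a few rounds of competition
    score = np.zeros((size, size))
    for idx in active:
        cy, cx = divmod(int(idx), size)
        score += kernel(np.hypot(ys - cy, xs - cx))
    # cells with the most overlapping excitation and least inhibition win
    active = np.argsort(score.ravel())[-n_active:]

# the surviving cells now sit on a roughly hexagonal (grid-like) lattice
print(sorted(int(i) for i in active)[:10])
```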


You should learn about the (Kropff & Treves, 2008) hypothesis for grid cells. This model of grid cells does what you say: “If we get features one after another we can compare and place them next to each other…”. The (Kropff & Treves) model assumes that features which are adjacent in time are also adjacent in space and it uses this assumption to make grid cells.

I recorded a short video explanation of how the model works:


What about non-spatial data, concepts? Does your word test count?

In theory, yes, the Kropff & Treves model should be capable of modeling non-spatial concepts. However, it requires motion in order to learn the structure of that (non-spatial) data. The robot needs to be moving around, which can be tricky to visualize for non-spatial data.

The “word test” that I showed in that video does not make grid cells, because that was not the purpose of the experiment, but in theory it could have. If I had applied the “fatigue” mechanism to that experiment, it would have made 1-dimensional grid cells, which would encode the current position within a word.
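
As a toy illustration of that last point (a simplification, not the actual Kropff & Treves equations; all constants are made up): give a cell a steady drive while the agent moves at constant speed, let fatigue build up while it fires and recover while it is silent, and the cell fires in bursts at regularly spaced positions, i.e. a 1-dimensional grid-like pattern.

```python
drive = 1.0                              # steady feedforward input while the agent moves
speed, dt = 1.0, 1.0                     # constant speed: time maps linearly onto position
threshold_on, threshold_off = 0.6, 0.4   # hysteresis gives each firing field some width

fatigue, firing = 0.0, False
fields = []                              # positions at which the cell fires
for step in range(300):
    position = step * speed * dt
    if firing:
        fatigue += 0.03 * (1.0 - fatigue)   # fatigue builds up while the cell fires
    else:
        fatigue *= 0.97                     # fatigue recovers while the cell is silent
    net = drive - fatigue
    if firing and net < threshold_off:
        firing = False
    elif not firing and net > threshold_on:
        firing = True
    if firing:
        fields.append(position)

print(fields)   # bursts of firing at regularly spaced positions (evenly spaced fields)
```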