Idea about 1D grid cell modules

I am thinking about 1D grid cell modules and how best to design the rules for coding them. I think 1D modules are the right way to go, instead of 2D, because they are more general and could allow an n-dimensional space, with, for example, one basis vector representing a forward twist motion, etc., like Jeff mentioned in one of the meetings.

What do you think about the following:

Consider three grid cell modules:
GCM1 : [0 0 1]
GCM2 : [0 1 0 0 0]
GCM3 : [0 0 0 1 0 0 0 0]
Their sizes are 3, 5, and 8.

Together, the three modules uniquely represent one position in a 1D space of size 120 (the least common multiple of the individual sizes).

For the first position in the space it is:
1 0 0
1 0 0 0 0
1 0 0 0 0 0 0 0
To move in space, just shift each module to the right by the distance, rolling over. This leads to
1 0 0
0 0 0 1 0
0 0 0 1 0 0 0 0
If you move by distance 3.

If you want the distance between two positions, just start at the first one and keep shifting until it matches the second; the number of steps you have taken equals the distance.
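The scheme above can be sketched in a few lines of Python (a minimal sketch with my own names; each module is a ring with a single active cell, so I store only its phase):

```python
from math import lcm

# Module sizes from the post; capacity = lcm(3, 5, 8) = 120 positions.
MODULE_SIZES = [3, 5, 8]

def encode(position, sizes=MODULE_SIZES):
    """Phase of each module for a given 1D position."""
    return [position % n for n in sizes]

def move(phases, dist, sizes=MODULE_SIZES):
    """Shift every module to the right by `dist`, rolling over."""
    return [(p + dist) % n for p, n in zip(phases, sizes)]

def distance(a, b, sizes=MODULE_SIZES):
    """Shift `a` one step at a time until it matches `b`."""
    steps = 0
    while a != b:
        a = move(a, 1, sizes)
        steps += 1
    return steps

start = encode(0)        # [0, 0, 0] -> the "1 0 0 ..." rows above
moved = move(start, 3)   # [0, 3, 3] -> matches the shifted rows above
print(distance(start, moved))  # 3
print(lcm(*MODULE_SIZES))      # 120 unique positions
```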

If you had more of these three-module packs, you could represent more dimensions. If you want to move in that space, you update each pack individually, depending on the direction you want to move in.
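One way to read the "more packs" idea (a sketch under my own assumptions; one pack per dimension, each pack updated independently by that dimension's component of the movement):

```python
MODULE_SIZES = [3, 5, 8]

def move_pack(phases, d):
    """Shift one pack of 1D modules by distance d, rolling over."""
    return [(p + d) % n for p, n in zip(phases, MODULE_SIZES)]

# A 2D position as two independent packs: one for x, one for y.
location = {"x": [0, 0, 0], "y": [0, 0, 0]}

def move_2d(loc, dx, dy):
    return {"x": move_pack(loc["x"], dx), "y": move_pack(loc["y"], dy)}

loc = move_2d(location, 3, 1)
print(loc)  # {'x': [0, 3, 3], 'y': [1, 1, 1]}
```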
About scales: a scale is just a mapping of the internal representation to the real world, e.g. how much to move your muscles, etc.

Also, unions would work the same as with 2D GCMs: if the space is large enough, the chance of false positives is suppressed to a low level.
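A rough way to play with the union / false-positive behavior (my own sketch: a union stores, per module, the set of active cells, and a candidate position matches if its phase is active in every module):

```python
import random

MODULE_SIZES = [3, 5, 8]  # capacity lcm = 120

def encode(pos):
    return [pos % n for n in MODULE_SIZES]

def union(positions):
    """Per-module sets of active cells for a union of positions."""
    sets = [set() for _ in MODULE_SIZES]
    for pos in positions:
        for s, phase in zip(sets, encode(pos)):
            s.add(phase)
    return sets

def matches(sets, pos):
    """True if every module of `pos` is active somewhere in the union."""
    return all(phase in s for s, phase in zip(sets, encode(pos)))

random.seed(1)
true_positions = random.sample(range(120), 4)
u = union(true_positions)
false_pos = [p for p in range(120)
             if matches(u, p) and p not in true_positions]
# The count of false positives grows with the union size and shrinks
# as the module sizes (and thus the space) get larger.
print(len(false_pos))
```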
Are there any flaws in this thinking?


I would guess that the brain is not using Cartesian coordinates. That concept is fairly recent (Descartes) and related to mapping algebra onto geometry. I would guess that space is represented in the neocortex in a relative fashion, perhaps relative to grid cells in many contexts. This is not intended to discourage you from trying your idea; I do not have a better algorithm to propose for you. You asked for critical feedback :slight_smile:


Thanks for the reply. However, maybe I haven't expressed myself clearly: I was not saying anything about Cartesian coordinates. There are three GCMs, not because of 3D space or anything like that; I could use four or more if I wanted a larger space.

To represent more than 1D space, you can combine more of these packs; perpendicular basis vectors are just one possible case.


I agree you could expand the number of axes, but I think that is also true of Cartesian coordinate systems. An alternative coordinate system would be a polar representation, which can also be extended (this might be useful). But I think the idea of objectifying Euclidean space might not be the right paradigm.

Tracking an object relative to other objects would allow for multiple dimensions without an explicit shared coordinate system. It is hard to imagine, given how natural coordinate space seems to us, but that is actually a fairly recent advance in how we think about space.

I do like the idea of a one dimensional distance measure as the building block. I’m less convinced about a neural net using that to build a co-ordinate space. But that opinion is not worth 2c :slight_smile: Obviously we can do that but it seems a late addition to our abilities.

I think there is a lot to be said for focusing on how an animal deals with 2D space. This is the problem the neocortex has had the longest time to solve. It is probably leveraging that algorithm to deal with more dimensions. Big guesses…


But I think the idea of objectifying euclidean space might not be the right paradigm.
Tracking an object relative to other objects would allow for multiple dimensions without an explicit shared coordinate system.

I like the concept presented by Matt Taylor here: HTM School grid cells

To be able to relate objects to each other, you need some metric space, so you can do path integration, relate reference frames, etc., right?
As you say, I also think there is a lot of focus on 2D animal space, but one could expect that, since there is a lot of experimental data.
As Numenta supposes, every cortical column should have grid cells, probably working on more generalized principles than in the hippocampus.


Scale-independent relative coordinates based on a hexagonal grid. Or that’s what I thought I read somewhere. And one of the uber-likes is using something similar, I heard.


You probably meant Uber H3. Uber itself is using it, basically for bucketing geo-locations into hexagons, for easier work with geospatial locations and for speeding up database queries.

The point of this thread is meant to be about 1D. Consider that hex grids would emerge from using this system in a 2D rat arena, when sampling a particular "slice" of the neuron cells.

I am applying this idea to the "2D object recognition project" right now, and want to see which issues pop up.

Taking this further:
Consider we have motor input [dx, dy],
and then 10 1D grid cell modules, each initially created with random coefficients [kx, ky],
with coefficients in the range <0.0, 1.0>.
These coefficients will be multiplied with the motor input, and the result is how much to "shift" the module bits, as in the first post.

So there is a chance that two of the module coefficient vectors will be perpendicular to each other, like [1.0, 0.0] and [0.0, 1.0], but there will be more mixtures than just this case.
The sensor will move around, shifting the grid modules in their specific ways, and connections will be created with L4, learning sensations@locations. Inference will then use unions in these modules to express ambiguity and narrow them down as the object begins to be recognized.
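The coefficient idea might look something like this (a hypothetical sketch; the module size of 16 and real-valued phases are my assumptions, the post does not fix them):

```python
import random

NUM_MODULES = 10
MODULE_SIZE = 16  # assumed; the post does not specify module sizes here

# Each module gets its own random projection [kx, ky] in <0.0, 1.0>.
random.seed(0)
modules = [{"k": (random.random(), random.random()), "phase": 0.0}
           for _ in range(NUM_MODULES)]

def move(modules, dx, dy):
    """Shift each module's phase by kx*dx + ky*dy, wrapping on its size."""
    for m in modules:
        kx, ky = m["k"]
        m["phase"] = (m["phase"] + kx * dx + ky * dy) % MODULE_SIZE

def active_cell(m):
    """Discretize the real-valued phase to the module's single active cell."""
    return int(m["phase"])

move(modules, dx=3.0, dy=1.0)
print([active_cell(m) for m in modules])
```

Because each module projects the same 2D motion through a different [kx, ky], the set of active cells across modules changes in a module-specific way, which is what lets the population as a whole disambiguate locations.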

Some algorithm will have to prevent duplicate modules and enforce diversity, e.g. checking whether a particular module is not participating much, so it could be reinitialized and used for something else. Module coefficients should never be modified, because that would lead to loss of information; just keep them or discard them. Probably.

It should work like a 2D grid cell module, but the main difference is that we do not define the 2 dimensions explicitly; they come from feeding in the 2D motor input.