Jeff's Questions about how Grid Cells actually create a metric space

I have a couple of thoughts about yesterday’s research meeting, in which Jeff asked some great questions about how grid cells work:

Probably best to watch and then come back here to discuss.

As Jeff and others suggested during the meeting, phase precession of grid cells is definitely something to explore in more depth. The phase precession of place cells is already well documented, but I haven’t found much on grid-cell phase precession.

Look at this video from May-Britt Moser’s lab (October 2019):

It would be interesting to understand this smooth, periodic shift of grid-cell expression during theta sequences.

What if the neocortex evolved to implement the simplex method of linear programming optimization in a hyperdimensional triangular grid space?
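For anyone who hasn’t met it, the simplex method optimizes a linear objective by walking from vertex to vertex along the edges of the feasible polytope, always in an improving direction, which is what makes the analogy to movement through a triangulated grid space tempting. As a purely illustrative sketch (my own code, not anything from Numenta or the paper below), here is a minimal tableau implementation for small problems of the form maximise c·x subject to Ax ≤ b, x ≥ 0, b ≥ 0:

```python
def simplex(c, A, b):
    """Tiny tableau simplex: maximise c.x  s.t.  A x <= b, x >= 0, b >= 0.

    Assumes the problem is bounded and the slack basis is feasible.
    """
    m, n = len(A), len(c)
    # Tableau: constraint rows with slack columns and RHS, then objective row.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))
    while True:
        # Entering variable: most negative objective coefficient.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break  # no improving edge left: current vertex is optimal
        # Leaving variable: minimum ratio test (raises if unbounded).
        _, piv_row = min((T[i][-1] / T[i][piv_col], i)
                         for i in range(m) if T[i][piv_col] > 1e-9)
        basis[piv_row] = piv_col
        # Pivot: one hop to an adjacent vertex of the polytope.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]
```

For example, maximising 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6 lands on the vertex (4, 0) with objective value 12, and each pivot is exactly one hop between adjacent vertices.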

Quoting “A Grid/Place Cell Model of Episodic Memory and Spatial Navigation in the Medial Temporal Lobe”:

If no memory is retrieved, a new hippocampal cell is recruited and its weights are set equal to the activation values of the entorhinal cells (i.e., Hebbian learning). Hippocampal cells experience continual consolidation via small nudges to their weights using a novel simplex-based δ-rule learning algorithm (see figure insert). Thus, the weights of the most active hippocampal cell adjust to become equidistant (using the same activation threshold as Hebbian learning) from up to k neighbors, where k is the dimensionality of variation (3 in the simulation, although the insert shows a k of 2, in which case consolidation created equilateral triangles).
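The paper’s actual algorithm isn’t spelled out in the quote, but the recruit-or-consolidate logic can be sketched roughly. In this toy version (the similarity measure, threshold, and learning rate are my own guesses, not the authors’ values), a new cell is recruited when no stored pattern is similar enough, and otherwise the winning cell’s weights are nudged toward being equidistant from its k nearest neighbours:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recruit_or_consolidate(cells, pattern, threshold=0.9, k=2, lr=0.05):
    """One step of the recruit/consolidate scheme described in the quote.

    cells: list of hippocampal weight vectors; pattern: entorhinal activity.
    """
    if not cells or max(cosine(w, pattern) for w in cells) < threshold:
        cells.append(list(pattern))  # no memory retrieved: one-shot Hebbian recruit
        return cells
    best = max(range(len(cells)), key=lambda i: cosine(cells[i], pattern))
    others = sorted((i for i in range(len(cells)) if i != best),
                    key=lambda i: dist(cells[i], cells[best]))[:k]
    if not others:
        return cells
    # Consolidation: nudge the winner so its distance to each of up to
    # k neighbours approaches their shared mean distance.
    target = sum(dist(cells[i], cells[best]) for i in others) / len(others)
    w = cells[best]
    for i in others:
        d = dist(cells[i], w)
        if d > 1e-12:
            w = [wj + lr * (target - d) * (wj - oj) / d
                 for wj, oj in zip(w, cells[i])]
    cells[best] = w
    return cells
```

With k = 2 and repeated consolidation, triples of cells settle toward equal pairwise spacing, which is the equilateral-triangle picture the quote mentions.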

Considering your questions about wide-area recorded activity, this post should give you much more to work with:
High-dimensional signal and noise in 20,000 neuron recordings


If I’m understanding this correctly, I wonder if too much emphasis is being put on grid cells alone to form a unique metric space. Three modules of grid cells in a rat’s entorhinal cortex aren’t enough to form a unique representation of a large space of locations, and that’s fine. But what if they’re not being used to do that? If I’m moving around my room, it seems I’m using a mental metric space of the room, but I’m also using a variety of other cues (visual depth cues, etc.). So three modules of grid cells are enough to give the rat an approximate idea of where it is, and to support some degree of path integration. To get a precise location, and to build a representation of many millions of locations, the grid cells are supported by other types of cells processing other types of information.
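The ambiguity argument can be made concrete with a toy 1-D model. The periods below are made up for illustration (real modules scale roughly geometrically), but the point carries: each module reports position only modulo its period, so widely separated positions can look identical to all modules at once, and a coarse extra cue is what breaks the tie:

```python
# Hypothetical 1-D grid modules; each sees position only modulo its period.
PERIODS = [0.3, 0.42, 0.6]  # metres (illustrative values, not measured ones)

def module_phases(x):
    """The tuple of module phases evoked at position x."""
    return tuple(round(x % p, 6) for p in PERIODS)

# 4.2 m is a common multiple of all three periods, so positions 4.2 m
# apart produce identical grid activity: the grid code alone is ambiguous.
assert module_phases(0.1) == module_phases(4.3)

def locate(phases, coarse_lo, coarse_hi, step=0.001):
    """Resolve the ambiguity with a coarse cue ('somewhere in [lo, hi]')."""
    x = coarse_lo
    while x <= coarse_hi:
        if module_phases(x) == phases:
            return x
        x += step
    return None
```

With a coarse visual estimate like “between 3 m and 5 m”, `locate(module_phases(4.3), 3.0, 5.0)` recovers roughly 4.3 m, even though the grid code by itself cannot distinguish that position from 0.1 m.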

To me, the brain would have to be doing something like this, because I don’t see how the grid cells could otherwise stay so fixed in physical space. The determination that a particular grid cell should fire at a particular location must come from a combination of muscle movement (my body telling me I’ve moved one metre to the right, for example) and other sensory cues (the shifting geometry of the walls as detected by my eyes, or the movement of my desk relative to the position of my laptop, for example). And, in the opposite direction, my location in physical space is being determined by a combination of grid cells and other cells.
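This two-way determination can be caricatured in a few lines. In the sketch below (the step size, the 2% miscalibration, and the correction gain are all arbitrary assumptions of mine), pure path integration from slightly miscalibrated self-motion drifts without bound, while an occasional landmark fix keeps the combined estimate anchored:

```python
# 1-D caricature: proprioception overestimates each 5 cm step by 2%,
# so pure path integration drifts; an occasional visual fix corrects it.
true_x = 0.0
pi_only = 0.0   # path integration alone
combined = 0.0  # path integration plus landmark correction
GAIN = 0.5      # how strongly a fix pulls the estimate (assumed value)

for step in range(1, 201):
    true_x += 0.05
    sensed = 0.05 * 1.02          # the miscalibrated self-motion signal
    pi_only += sensed
    combined += sensed
    if step % 20 == 0:            # every 20th step: a visual landmark fix
        combined += GAIN * (true_x - combined)

print(round(pi_only - true_x, 2))    # drift grows to about 0.2 m
print(round(combined - true_x, 2))   # stays around 0.02 m
```

The same loop run in reverse makes the second point: the position estimate the rest of the brain reads out is never the raw path integral, but the integrator as continually re-anchored by other cells.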

So, if I am imagining how I would get up and move to the door, I partly visualize it as “I move half a metre forward, turn slightly right, walk forward another two metres.” But I also visualize it as “I move forward to get around my desk on my right, shuffle around the front of the desk, avoid hitting the lamp on my left, and move towards the door.”

I might not be understanding the problem completely, though, as I’m only now getting back into following Numenta’s research.