HTM School 14: Grid Cells

Another observation I have made related to this: when I am exploring an object with my finger, for example, it feels as though I am projecting myself onto the finger and, in essence, “walking around” on the object, building or recognizing a model of it much like exploring a room and building a model of it. There is a sense of a position on the object and a sense of a heading, both of which change fluidly as I move my finger around in different directions.

To me, at least, the sensations of exploring a room and exploring an object are extremely similar, and the logical conclusion (to me) is that they must share the same mechanics behind the scenes. An “allocentric” space when exploring an object is essentially the same as an “egocentric” space when exploring a room. The switch between the two spaces, I would theorize, is at least in part a function of attention.

If this is true, then understanding how grid cells and head direction cells are used when navigating a room is directly applicable to how they are used when exploring an object. We had a discussion on this thread recently where some excellent references were given related to the nature of head direction cells when exploring a 3D room.

As an interesting side note, this would also explain how we are able to almost effortlessly control an avatar in a video game (compared to an RL algorithm, for example, which requires a ridiculous number of iterations) – we can essentially project ourselves onto the avatar and explore the game world as though we were actually in it.


You’re correct that the poster only showed how to represent 2D locations, and so more is needed to handle 3D objects. I wrote the poster to give people an intuition for how grid cells can create a large space of unique locations, and it was easiest to illustrate this in 2D, and then people can extrapolate to 3D. Biology may simply extrapolate to 3D, using 3D modules, or it may have found a way to use 2D modules to handle 3D.

And yes, during movement, the representation of a set of grid cell modules moves through a finite-dimensional manifold (2D in the poster). And to handle 3D objects, I agree that we probably need a manifold that is at least 3D, or we need some other trick.
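The idea of many modules jointly yielding a huge space of unique locations can be sketched in a few lines. This is only a toy illustration, not Numenta's actual implementation: each module reports position only modulo its own scale (the scales below are invented), so a single module is ambiguous while the ensemble is not.

```python
# Toy sketch, not Numenta's implementation: each grid cell module
# encodes a 2D position only modulo its own spatial scale ("phase").
# The scales below are invented, purely illustrative values.
SCALES = [1.0, 1.4, 2.0]  # spatial period of each module, arbitrary units

def encode(position, scales=SCALES):
    """Return the per-module phase tuple for a 2D position."""
    x, y = position
    # A single module is ambiguous (it wraps around), but the
    # combination of phases is unique over a much larger range.
    return tuple((round(x % s, 6), round(y % s, 6)) for s in scales)

# Two positions one unit apart look identical to the period-1.0 module,
# yet the ensemble of modules still tells them apart.
a = encode((0.25, 0.25))
b = encode((1.25, 0.25))
print(a[0] == b[0])  # True: the first module alone is ambiguous
print(a == b)        # False: the full set of modules is not
```

Extending this toy to 3D would simply mean wrapping a third coordinate per module, or whatever other trick biology actually uses.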


From an actual implementation perspective, do the grid cell modules have a fixed size on their input space once they are initialized? How would they handle being ported to a larger environment with new locations? Do they have a mechanism for dynamically adjusting to different sized environments? For example, what exactly would happen if the room that your mouse cursor moves within suddenly became larger or smaller? How would the grid cells react?
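One hedged way to think about the resizing question: a module arguably has no fixed footprint on the environment at all, because it encodes position modulo its scale, so any environment, large or small, maps onto the same repeating lattice. What is bounded is the distance over which the combined code stays unique. A 1-D toy with integer scales (chosen only so the period is easy to compute; real modules have continuous 2D scales) makes this concrete:

```python
from math import gcd

# Integer module scales, chosen only to make the combined period easy
# to compute; real grid modules have continuous (and 2D) scales.
scales = [3, 4, 5]

def combined_period(scales):
    """Distance at which every module's phase repeats simultaneously (the LCM)."""
    lcm = 1
    for s in scales:
        lcm = lcm * s // gcd(lcm, s)
    return lcm

# Each module alone repeats every 3, 4, or 5 units, but the joint code
# only repeats every 60 units -- so a bigger room doesn't break the
# modules; it just eventually exhausts the unique-code range.
print(combined_period(scales))  # 60
```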


So maybe it makes more sense to think about the hippocampus and its grid cells as a compressed map of the neocortex, and the neocortex as the primary, more detailed map of the environment, with both maps representing all levels of generalization / conceptualization? In that case, the primary spatial map would be the egocentric one, in the dorsal parietal “where” stream. Of course, there is another map of the neocortex in the thalamus; it’s probably organized differently.

Sorry if that’s covered somewhere.


That’s interesting - a map of a map. The brain also has an interpreter that tries to make sense of what the rest of the brain has done - it’s a kind of after-the-fact rationalizer.


My take is that the hippocampus remembers the key features of your experience; the most processed version of the day’s events. At night, during REM sleep, the contents of the hippocampus are replayed, reinforcing the memories of the day.
Some hippocampus factoids in no particular order:

  • The hippocampus seems to hold about one day’s worth of experience before saturating. Symptoms of saturation include discomfort and hallucinations.
  • From a systems approach - the buffered events of the day allow the amygdala and other sub-cortical structures to add emotional weighting to the memories before consolidation in sleep cycles.
  • It is likely that the hippocampus does good one-shot episodic learning as opposed to slower Hebbian learning in the cortex. It is possible that sleep spindles power an accelerated learning in the cortex.
  • The sleep cycles seem to normalize or reset the hippocampus for a new day’s learning. If the hippocampus is learning the “delta” between what is in the cortex and hippocampus I could see that part of the process is to “ring” both to test the response and drive learning to the cortex until they are in agreement.
  • The claim that spatial processing is done in the hippocampus seems off - patients with damage to the hippocampus seem to be able to process space normally; they just can’t form new memories.

Damage isn’t always clear (H.M. had some hippocampus remaining for example), and there might be redundant regions which haven’t been studied enough to know that yet. The fact that there are place cells convinces me that it does spatial processing, although it’s not necessarily essential.

Maybe the brain processes things that aren’t real locations as if they were locations. It might be equally valid to conceptualize the function of the hippocampus as encoding events as spatial relationships, or encoding spatial locations as events. It can move between locations by behavior, and you can recall events at different speeds than they occurred, moving between each part of the event as if moving through space.

I don’t really understand it, but in the podcast Jeff Hawkins talked about objects built up from smaller objects by linking them and being able to follow the links using attention. That’s pretty similar to memory.


Is this “remembering” in any way independent, or is it simply fresh memories in association cortices forming temporary connections in the hippocampus, for potential reinforcement?

It is likely that the hippocampus does good one-shot episodic learning as opposed to slower Hebbian learning in the cortex. It is possible that sleep spindles power an accelerated learning in the cortex.

Do these spindles also happen in awake hippocampus, although less frequently?

If the hippocampus is learning the “delta” between what is in the cortex and hippocampus I could see that part of the process is to “ring” both to test the response and drive learning to the cortex until they are in agreement.

Isn’t this agreement between cortex and amygdala, with hippocampus as a mediator?

The claim that spatial processing is done in the hippocampus seems off - patients with damage to the hippocampus seem to be able to process space normally; they just can’t form new memories.

Well, temporary hippocampal connections can also be with spatial memories in the cortex.
So, my “map” idea should be qualified in that this mapping is temporary.
Still, all association cortices must have tentative / potential connections with hippocampus, so it’s kind of a proto-map.


I don’t see it this way. There is a strong spatial component to what the hippocampus does. I think that the amygdala “colors” episodes but does not contain them.
This patient may help frame the behavior of the amygdala:

Note the preserved function which helps frame what the amygdala does and does not do.

This quora entry has some interesting pointers if you wish to explore further:


Thanks. So I guess the hippocampus’s mapping to the cortex is indirect, via the thalamus.
At least outside of the EC and medial temporal lobe.
As for its intrinsic function, that seems to be a search for correlations among fresh memories,
because they are conveniently localized in the hippocampus but widely distributed in the cortex.
This covers the spatial aspect too: the sources of short-term memories are likely spatially proximate.


I have not been able to locate the exact mechanism/neural pathway that feeds images to the amygdala (and, more importantly, parses them), but it does seem to be aware of shapes, and to a lesser degree places, which it uses to activate emotional responses.

In general, this post has contributing amygdala content:

and in particular this link speaks to my point:

This link ties the amygdala to the hippocampus and hopefully - brings this post more in alignment to this thread:


I think you are talking about blindsight,
which is probably mediated via the LGN or superior colliculus:


In general - yes.

I am very aware of sub-cortical visual awareness.

What I am less sure of is the perceptual mechanism. “We” have a pretty good idea of the general mechanism of the visual path and contents of the cortical visual stream.

I have not seen the same level of description and understanding of this sub-cortical visual perception.
This is important and related to grid cells in an indirect way.

As far as I am aware, nobody really knows how the visual, tactile, and somatosensory information is combined at the level of the hippocampus to form the response or activation pattern described in the Moser work. Whether a mouse navigates by vision or motion or whiskers, it all seems to end up forming the same responses to location in what have come to be known as grid and place cells.

Most of the work I have seen comes at this from the cortex side. @Gary_Gaulin put up a post today that analyses the relationship between grid & place cells, and reading it, you would think this is somehow happening in isolation from the rest of the brain.

My point in all this is that the cortex pathway seems to lead from the raw sensory areas through the WHAT and WHERE streams to the temporal lobe and on into the entorhinal cortex. This is a pathway that is full of learning and serial processing.

The sub-cortical structures seem to be shorter and more hardwired. They come to the hippocampus via the “other” direction and may well play as a significant pathway to forming the place and grid patterns.

This is the bit that I have been trying to learn more about: the hardwired visual system in the sub-cortical structures.

We know it has built-in shapes and emotional coding for at least fear. We know that it can guide you around a room when you can’t see at all. It can guide your grasping when blind. It sees faces and expressions. There is good reason to think that it uses features to drive sexual attraction - sex-linked features thought to signal desirable traits in a potential mate. I’m sure I could name more, but this is just a random list. Yes - male and female brains could be wired/programmed differently.

That’s a lot of lower-level processing, and it’s tied directly to the hippocampus. It sure would be nice to know more about the few dozen dense clusters of nerves that are doing all this.


My impression is that sub-cortical perception is much lower quality; its only advantage is that it’s faster.


I think this does well as an answer to your questions!


Thanks for that! This was a very interesting read! I learned a great deal about how the hippocampus and entorhinal cortex work together to represent location.

However, even after trying to pay close attention to the most interesting and salient information, I’m still unsure how to answer my own questions. The paper was mostly about hippocampal place field locations and place cell remapping, so I still don’t know much about grid cell dynamics or their relationship with sensory input.


One thing I noticed was the huge emphasis on firing-rate remapping and rate encoding, such as in this snippet:

“the hippocampus can simultaneously convey information related to the position of an animal and to the cues present in the environment. During rate remapping, the integrity of the spatial code is preserved because place fields are stable, but the precise firing rate of neurons varies to encode information not related to the current position of an animal.”

This sounds like an important feature of the brain.
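To make the quoted distinction concrete, here is a toy 1-D model (all numbers invented, not from the paper): a place cell fires as a Gaussian bump around its field centre, and rate remapping changes the bump's height but not its position.

```python
import math

def firing_rate(x, centre, peak, width=0.1):
    """Toy firing rate (Hz) of a place cell at 1-D position x."""
    return peak * math.exp(-((x - centre) ** 2) / (2 * width ** 2))

centre = 0.4                # place field location: stable across contexts
peak_A, peak_B = 12.0, 3.0  # peak rate differs by context (rate remapping)

positions = [i / 100 for i in range(101)]
field_A = max(positions, key=lambda x: firing_rate(x, centre, peak_A))
field_B = max(positions, key=lambda x: firing_rate(x, centre, peak_B))

print(field_A == field_B)  # True: the spatial code is preserved
print(firing_rate(centre, centre, peak_A),
      firing_rate(centre, centre, peak_B))  # 12.0 vs 3.0: rate carries context
```

So "where the cell fires" carries position, while "how hard it fires" can carry something else about the context, which matches the snippet's description.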


Did you get anything out of figure 3?

Section 3 does call out related papers - I have not looked at them, but it may be a good place to dig deeper.

That said - I picked up on that you are looking to see how the senses end up forming these patterns in the EC.

Let me make it clear that nobody knows this.

My Twitter feed is full of ads for post-doc positions to research this very question. Whoever figures this out will probably score a Nobel prize for it.


Right, I think. I just had these questions after watching Matt’s demo, where you can see the grid cells firing when the mouse’s location enters their firing fields. I started to think about how those modules can create the hexagonal grid pattern to encode an environment as large and diverse as the world. I guess I would just need more info about the input to the EC.