I just published a blog post today with several interactive grid cell visualizations. If you still don’t understand how grid cells map space, please read the post; it should help. I’m happy to answer any questions here in this thread, too. No question is a bad question. It takes a while to get these concepts mapped in your brain, and it helps to approach them from different angles.
Here’s a teaser; see the blog for the actual interactive visuals.
This post does a great job of explaining what it aims to, how grid cell modules combine to determine location. Visualisations are top notch as always!
I’m still a bit mentally stuck on the “Rethinking Hierarchy” slide of the linked talk. Intuitively it would seem that by the time we’re combining grid cell modules to disambiguate location, we should be pretty high in the hierarchy. E.g. in the visual domain, the edge or colour of a chair probably doesn’t help with location, but the chair object as a whole would. I’ve got a bit of a grid cell reading backlog at the moment so I may have missed something that’s already been explained.
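On the "combining modules to disambiguate location" point, here is a minimal 1D sketch of the idea (the periods and the integer-location discretization are made-up illustration values, not anything from the talk or the brain): a single module only reports location modulo its spatial period, so it is ambiguous on its own, but the joint code across modules with different periods is unique over a much larger range.

```python
# Hypothetical 1D sketch of grid-module combination.
# Each module encodes location only as a phase modulo its spatial period,
# so one module alone is ambiguous; the combined phases across modules
# with pairwise-coprime periods identify location uniquely over the
# product of the periods (Chinese remainder theorem).

PERIODS = [3, 4, 5]  # relative spatial periods of three toy modules

def module_phases(location):
    """Discretized phase reported by each module for an integer location."""
    return tuple(location % p for p in PERIODS)

# A single module repeats: locations 1 and 4 look identical to module 0...
assert module_phases(1)[0] == module_phases(4)[0]

# ...but the joint code is unique over 3 * 4 * 5 = 60 locations.
codes = {module_phases(loc) for loc in range(60)}
assert len(codes) == 60
```

This is only a caricature (real modules are 2D, noisy, and their periods are not exact integers), but it shows why combining modules buys a large unambiguous range without any single module needing one.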
Note that in the brain the “grid cell firing patterns” are only observed at the higher levels of the local hierarchy. The entorhinal cortex sits in the medial temporal lobe, where we process spatial information. We have been able to observe a few matches between internal representations/codes and outside perceptions, and I am certain that there are many more codes to be learned.
- Head direction
- Place in a room; we have good reason to think of this as a 2D map.
- Where are the experiments with grid cells and sound localization?
- Computer graphics VR spatial scenes?
This looks like a good one: http://europepmc.org/articles/pmc5492514
Same people behind the 2014 rodent VR experiment.
Nice write-up here tying them both together.
Also, in this video (bookmarked to the relevant part) the researcher believes that the various sensory inputs (and the differing scales within them) are being translated spatially in the entorhinal cortex, exactly as you say.
A post was split to a new topic: Rethinking hierarchy
IIRC there is evidence quoted somewhere that some cells very early in the visual pathway fire at a given edge only when it really is “the upper edge of a chair, right there”.
(I guess the actual ref. was the underbelly of some wild animal or something)
Also, if the visualization is anywhere close to the real thing, there is probably a perceivable no-fire zone around the spot. Dunno if neurons could use that too… it seems I’m thinking about it in reverse after all (even if I’m not currently thinking about the inverse function, that would, anyway, require that the absence of signalling can be processed as pertinent info too).
Okay forget about this…
Yeah, I get that it’s more a fundamental (hence mathematical) property of the grid itself than simply something visually accessible, but it struck me visually that way. (That’s part of why I like your work so much: it provides such good visual and interactive material for all this.)
If only that were usable info for neurons listening to those grid cells… “it could reduce the number of modules required to achieve a similar precision/confidence level” is what I was thinking.
But probably thinking about it the wrong way, anyway.
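For what it’s worth, here is a back-of-the-envelope sketch of the module-count vs. precision trade-off being mused about above (the `bins_per_module` value is an arbitrary toy number, not a measured property): under the idealized assumption that modules are independent, each extra module multiplies the number of distinct joint codes rather than adding to it, which is why relatively few modules can cover a large range.

```python
# Toy capacity estimate: if each module can distinguish roughly
# `bins_per_module` phases, the number of distinct joint codes across
# n independent modules is bounded by bins_per_module ** n, i.e. it
# grows multiplicatively with module count.

def combined_capacity(bins_per_module, n_modules):
    """Upper bound on distinct joint codes across n independent modules."""
    return bins_per_module ** n_modules

# With ~10 distinguishable phases per module, capacity explodes quickly:
for n in (1, 2, 4, 6):
    print(n, combined_capacity(10, n))
```

So under these (very idealized) assumptions, shaving one module off only costs a constant factor of range, which gives a feel for how much slack there might be for the kind of trick described above.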