Grid cells - not necessarily all about space!

I am not surprised that much of the current literature on grid cells finds a strong correlation with encoding space. Given the tasks used, I can’t see how they would find anything else: “the spot lights up when the rodent is here in the cage.” If researchers observed the critter in social settings, they might discover very different correlations.
https://www.nature.com/articles/s41467-019-13760-8
“One view is that conceptual knowledge is organized using the circuitry in the medial temporal lobe (MTL) that supports spatial processing and navigation. In contrast, we find that a domain-general learning algorithm explains key findings in both spatial and conceptual domains. When the clustering model is applied to spatial navigation tasks, so-called place and grid cell-like representations emerge because of the relatively uniform distribution of possible inputs in these tasks. The same mechanism applied to conceptual tasks, where the overall space can be higher-dimensional and sampling sparser, leading to representations more aligned with human conceptual knowledge. Although the types of memory supported by the MTL are superficially dissimilar, the information processing steps appear shared. Our account suggests that the MTL uses a general-purpose algorithm to learn and organize context-relevant information in a useful format, rather than relying on navigation-specific neural circuitry.”


I agree that we should probably move past describing grid cells as solely encoding locations. I think the appropriate generalization would be something like state-specific context: the grid cells are encoding a sort of SVD of the conceptual space. Stated formally, I would probably invoke some form of eigenvalue/eigenvector language. The eigenvectors would correspond to the Hebbian-learned filters that I’ve been describing and advocating recently, and the eigenvalues would be the sparse coefficients encoded by the grid-cell modules and/or the temporal sequence memory of the mini-columns.
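To make the analogy concrete, here is a toy sketch (my own illustration, not anything from the paper): if observations in a "conceptual space" are sparse mixtures of a few latent features, an SVD recovers the filters as right singular vectors and per-observation coefficients as the scaled left singular vectors. The data, filter count, and noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "conceptual space": 200 observations, each a sparse nonnegative
# mixture of 3 hypothetical Hebbian-learned filters over 16 dimensions.
true_filters = rng.normal(size=(3, 16))
coeffs = rng.exponential(size=(200, 3))
X = coeffs @ true_filters + 0.01 * rng.normal(size=(200, 16))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rows of Vt play the role of the learned filters ("eigenvectors");
# U * s gives the per-observation coefficients ("eigenvalues" in the
# analogy above). Only the first three singular values carry signal.
codes = U[:, :3] * s[:3]
```

Nothing deep here, just the shape of the claim: a linear decomposition separates a small dictionary of filters from the sparse codes that index into it.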

Interesting that you naturally invoke eigenvalues when formalizing the grid cell “thing”.
Did you know there is a theory that explains grid cell patterns via eigendecomposition?

Place cells can be modeled with Successor Representations (a technique from Reinforcement Learning). And surprisingly, grid cell patterns look strikingly similar to the eigendecomposition of those Successor Representations.
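A minimal sketch of the idea, assuming a random walk on a small square arena (the grid size and discount factor are arbitrary choices for illustration): the Successor Representation is M = (I − γT)⁻¹ for transition matrix T, and reshaping its leading eigenvectors back onto the arena yields periodic, grid-like maps.

```python
import numpy as np

def sr_eigenvectors(n=8, gamma=0.95):
    """Successor Representation for a random walk on an n x n grid.

    M = (I - gamma * T)^{-1}, where T is the random-walk transition
    matrix; grid-cell-like patterns appear in M's top eigenvectors.
    """
    N = n * n
    T = np.zeros((N, N))
    for r in range(n):
        for c in range(n):
            s = r * n + c
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < n and 0 <= c + dc < n]
            for rr, cc in nbrs:
                T[s, rr * n + cc] = 1.0 / len(nbrs)
    M = np.linalg.inv(np.eye(N) - gamma * T)
    # Symmetrize before eigendecomposition (M need not be symmetric),
    # then sort eigenvalues in descending order.
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

vals, vecs = sr_eigenvectors()
# Each eigenvector reshapes to an 8 x 8 spatial map; the ones past the
# first (constant-like) mode show the periodic banding/lattice structure.
grid_like_map = vecs[:, 1].reshape(8, 8)
```

Plotting several columns of `vecs` as 8×8 images is the quickest way to see the lattice-like patterns the paper shows.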

A good and not too technical blog post on Successor Representations: https://medium.com/@awjuliani/the-present-in-terms-of-the-future-successor-representations-in-reinforcement-learning-316b78c5fa3?

Experimental grid cell fields: [image]

Eigenvector from modeled Successor Representations: [image]

More in Stachenfeld, 2017: http://gershmanlab.webfactional.com/pubs/Stachenfeld17.pdf

Compared to other models, this one reproduces the pattern irregularities in non-square environments with boundaries, so it seems more robust. But it could just be a coincidence…