Individual medial entorhinal cortex (mEC) ‘grid’ cells provide a representation of space that
appears to be essentially invariant across environments, modulo trivial transformations, in contrast to multiple, rapidly acquired hippocampal maps; it may therefore be established gradually, during rodent development. We explore with a simplified mathematical model the possibility that the self-organization of multiple grid fields into a triangular grid pattern may be a single-cell process, driven by firing rate adaptation and slowly varying spatial inputs. A simple analytical derivation indicates that triangular grids are favored asymptotic states of the self-organizing system, and computer simulations confirm that such states are indeed reached during a model learning process, provided it is sufficiently slow to effectively average out fluctuations. The interactions among local ensembles of grid units serve solely to stabilize a common grid orientation. Spatial information, in the real mEC network, may be provided by any combination of feedforward cortical afferents and feedback hippocampal projections from place cells, since either input alone is likely sufficient to yield grid fields.
This paper presents a promising computational model for grid cells. I hypothesize that the grid cell model described in this paper can be connected to an HTM, with the effect of forming allocentric locations. The paper goes into some detail about the properties of the hippocampal place cells which serve as input to their model, and I think that this hippocampal input could be replaced by the output layer of Numenta's sensory-motor integration model. This use of grid cells in the cortex would represent the space around and between the objects which the output layer is representing.
The grid cells can work with inputs which have the following properties:
- The input is a sparse distributed representation: ~2500 inputs at 2-12% density.
- The input represents specific locations; each cell can participate in representing many locations.
- The inputs change slowly. It is critical for this model that the locations which the inputs represent are large enough that motions across them take some time. The output layer of the two-layer HTM also has this property, because it contains stable representations of objects which persist across multiple sensations.
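As a minimal sketch of an input with those three properties, here is one hypothetical way to build it (the names, the place-field construction, and all parameter values are my illustration, not from the paper or the writeup): give each of ~2500 cells a fixed place-field center, and activate the nearest 2-12% of cells at each position, so nearby positions share most of their active cells and the representation changes slowly under motion.

```python
import numpy as np

N_CELLS = 2500
DENSITY = 0.05                        # within the 2-12% range quoted above
N_ACTIVE = int(N_CELLS * DENSITY)

rng = np.random.default_rng(42)
# Each cell gets a fixed place-field center in the unit square.
centers = rng.random((N_CELLS, 2))

def place_cell_sdr(pos):
    """Active-cell indices for a 2D position: the N_ACTIVE nearest centers."""
    d = np.linalg.norm(centers - np.asarray(pos), axis=1)
    return np.argsort(d)[:N_ACTIVE]

a = set(place_cell_sdr((0.50, 0.50)))
b = set(place_cell_sdr((0.52, 0.50)))  # a small motion: large overlap with a
c = set(place_cell_sdr((0.90, 0.10)))  # a large motion: little overlap with a
```

Each cell participates in representing many nearby locations, and a small motion changes only a small fraction of the active set, which is the "inputs change slowly" property the model needs.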
It is an interesting question. Another is whether the formation of grid-cell-like behavior is learned or pre-wired. I hope we can figure out how it is learned. It makes more sense to me if it is a consequence of exposure to a dimensional reality, not a requirement for perception.
Good news, everyone.
I’ve reproduced the results of this paper. My implementation is a lightly modified NuPIC Spatial Pooler.
It sort of works, not as well as the original, but IMO well enough to show that the core principles are sound and worthy of further investigation.
These core principles are twofold:
- Each mini-column's proximal excitement is put through a low-pass filter, which induces stability. My best explanation for this is that the excitement reacts more slowly than its inputs change, which forces the grid cells (read: SP mini-columns) to learn large contiguous areas of input.
- The filtered excitement has a fatigue, which shapes the SP's receptive fields into spheres. The fatigue also uses a low-pass filter. The fatigue slowly reduces the excitement over time, which effectively limits how long a grid cell can stay active. The competition then causes these spheres to be packed into the environment.
Both of these effects happen at the cellular level.
I have posted a more complete write up at:
In the future I hope to experiment with using a low-pass filter to induce stability in the output layer of Numenta’s two layer model.
Thank you for sharing this. I read the writeup and the code. The results have very nice implications for us to think about.
The low-pass filter on the proximal excitement consists only of dividing the overlap by the number of connected synapses, right?
It may sound obvious, but I read it a couple of times until I realized the grid cells (minicolumns) were using place cells as input. Figure 3 looks like figure 4, so it was confusing that one represents the input and the other represents minicolumns, even with the explanations.
Is this global inhibition? If so, have you tried local, and how do you think it would affect the results?
No. The low-pass filter applied to the proximal excitement consists of these equations:
R_act(t) = R_act(t-1) + b1 * (overlaps(t) - R_inact(t-1) - R_act(t-1))
R_inact(t) = R_inact(t-1) + b2 * (overlaps(t) - R_inact(t-1))
active_columns = self.inhibit_columns( R_act(t) )
where R_act is the magnitude of the low-pass-filtered excitement,
where R_inact is the magnitude of the fatigue,
where b1 is a constant in [0, 1] which controls how fast the excitement changes,
and where b2 is a constant in [0, 1] which controls how fast the fatigue catches up with the excitement.
This is located at:
grid_cells.py lines 89-92 in method SP._fatigue.
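To see what those two equations do, here is a sketch of their dynamics for a single column driven by a constant input (the function name and the values of b1 and b2 are illustrative, not the ones from grid_cells.py):

```python
b1 = 0.05   # how fast the excitement tracks its input (illustrative value)
b2 = 0.01   # how fast the fatigue catches up (illustrative; b2 < b1)

def fatigue_step(overlap, R_act, R_inact):
    """One timestep of the two update equations, for a single column."""
    new_R_act = R_act + b1 * (overlap - R_inact - R_act)
    new_R_inact = R_inact + b2 * (overlap - R_inact)
    return new_R_act, new_R_inact

# Drive the column with a constant overlap of 1.0: the filtered
# excitement rises, then the fatigue catches up and pulls it back
# toward zero, which is what limits how long a cell can stay active.
R_act, R_inact = 0.0, 0.0
trace = []
for t in range(2000):
    R_act, R_inact = fatigue_step(1.0, R_act, R_inact)
    trace.append(R_act)
```

The excitement peaks early and then decays even though the input never changes, so a cell that has been active for a while loses the inhibition competition to fresher cells; that is what packs the receptive fields into the environment.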
Figure 2: Place cell receptive fields.
Figure 3: Grid cell receptive fields, untrained.
Figure 4: Grid cell receptive fields, trained.
The figure labels are beneath the images.
I used global inhibition and a rather small number of grid cells.
The authors of the original paper have played with topology. They varied the grid cell parameters across the length of their sheet of cells and observed that, with recurrent collaterals, the grid cells formed modules, each with a common spacing.
Self-organized grid modules, Urdapilleta, Si, Treves, 2017. DOI: 10.1002/hipo.22765
Alright, those equations make more sense. I thought they were only about fatigue because of the name of the function they were under.
Sorry, I should’ve said figures 2 and 3. Nowhere in the writeup does it explicitly state that the grid cells use place cell activations as input. The coloring of figures 2 and 3 looks similar, hence my confusion (at one point I even thought grid cells and place cells both referred to the same minicolumns). Just wanted to give feedback as a random viewer; it may be just me, though.
When you are referring to grid cell spacing, are you talking about spacing of the receptive fields or the spacing of actual grid cell activations? I think it is the former one. I am curious about the latter.
You’re correct, they’re talking about the receptive field spacing. They found that even though the constants which control the grid cell receptive field spacing vary uniformly along one of the dimensions, local recurrent connections can cause the grid cells to converge onto a small, discrete set of RF spacings. They find that successive spacings converge to approximately 1.4 times each other. Unfortunately they haven’t done this with Hebbian learning; they calculated reasonable weights to make it work.
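For a concrete sense of that ~1.4 ratio (which is close to sqrt(2) ≈ 1.414), successive module spacings form a geometric series; the base spacing below is a made-up number purely for illustration:

```python
import math

base = 40.0  # cm, a hypothetical smallest module spacing (illustrative)
spacings = [base * 1.4 ** k for k in range(4)]
ratios = [b / a for a, b in zip(spacings, spacings[1:])]
# The reported ~1.4 ratio between modules is close to sqrt(2):
assert abs(1.4 - math.sqrt(2)) < 0.02
```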
Source: Self-organized grid modules, Urdapilleta, Si, Treves, 2017. DOI: 10.1002/hipo.22765