Excellent work! The visualizations are perfect - I have to admit that I wasn't grasping the overlap element before watching this.
Another superb lesson; a great presentation of grid cells and their relationships in space.
BTW: Awesome software behind this presentation.
Hexagons are real and the government doesn't want you to know the truth… Awesome job as always.
Thank you! I was waiting for this for some time. This is an amazing solution that can be applied to so many things…
Any plan on head direction cells? It was in the Cosyne poster.
Bravo!
I've been wondering what the new avatar was about. As a fan of Bucky Fuller, I've seen close-packed spheres as an obvious choice for years, but I was hardly prepared for such compelling evidence or direct applicability to this problem.
Excellent video as always! Is there an unpaywalled version of the papers for the academically challenged peoples?
Thanks!
Yes - here.
I'm not saying this video was great … but it was.
Thanks for the kind words, everyone.
You can run the grid cell example code with Node.js only (no need for the NuPIC server). See How to Run HTM School Visualizations.
I saw some hexagons in a urinal yesterday. Coincidence? (I think not)
Yes. But for objects, think sensor orientation instead of head direction.
Thanks for an awesome video @rhyolight, your spatially rich explanation made it easy for me to start to understand how we think spatially.
The grid cell module concept was a bit of a surprise. I was expecting time to play a key role in resolving location ambiguity like in the rest of HTM theory, which kind of makes sense in a naive example of a person looking around and identifying landmarks. But when you apply it to mental spatial concepts like the day of the week, it makes more sense that there's a more instant mechanism.
btw… which was more time consuming - building the grid cell visualisations or capturing the perfect "hexagons" scene?
Some amplification of the details in this video.
The potential activation pattern sites can be visualized as a uniform grid across the entire map, as shown in the video. Not all of it is turned on at the same time. Look at the embedded Moser video - the cell is part of the grid-forming ensemble, but it only fires when that spatial location is being signaled.
In action, only part of the grid module at each spatial scale is driven into resonance, forming a "spot" of grid activation. For a visualization, please see this paper, pages 3 & 4:
As the critter moves around, this spot of activity is pushed around the map of grid-forming cells. This paper talks about the spot being influenced by both the head direction cells and sensed self-motion, resulting in migration of the spot across the map.
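To make the "spot pushed around by self-motion" idea a bit more concrete, here is a minimal Python sketch of a toy 1D grid module (my own illustration, not Numenta's code; the cell count, module scale, and bump width are assumed values): each cell prefers a phase within the module's period, and the bump of activity shifts as velocity is integrated.

```python
import numpy as np

# Toy 1D grid module: n_cells tile one period (the module's spatial scale).
# Each cell has a preferred phase; activity is a bump centered on the
# animal's current phase (position modulo the module scale).
n_cells = 20          # cells in this module (assumed)
scale = 1.0           # spatial period of the module, arbitrary units (assumed)
phases = np.linspace(0, scale, n_cells, endpoint=False)

def bump(position, width=0.1):
    """Activity of each cell for a given position (wrap-around distance)."""
    d = np.abs(phases - (position % scale))
    d = np.minimum(d, scale - d)              # distance on a ring
    return np.exp(-(d / width) ** 2)          # Gaussian bump of activity

# Path integration: the bump is pushed around by sensed self-motion.
position = 0.0
for velocity in [0.05, 0.05, 0.05, -0.02]:    # per-step displacement
    position += velocity                      # integrate self-motion
    activity = bump(position)
    print(f"pos={position:.2f} -> most active cell {np.argmax(activity)}")
```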
Note that strictly visual cues result in the same motion and location activations in the rat.
In the grid literature, much is made of self-motion, head direction, border cell, and location signaling. I don't see much on the vestibular system, barrel cells, or olfaction in navigation. I expect a lot more on this in the future. This is important, as the mechanisms behind how the sensed information causes grids to form are still murky.
Studies in monkeys show the same shift in activation as the eyes scan a screen.
This is the best way to convey this topic. Very good video. I was reading recently that the grid cell area (and the entire hippocampus) represents patterns in a much sparser way than the neocortex. Their explanation was that the hippocampus can store episodic memories better because of this - the different memories don't interfere with each other. The neocortex though has more overlaps between SDRs, which is better for generalization.
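A quick back-of-the-envelope sketch of why sparser codes interfere less (my own toy illustration, not from the book; the population size and sparsity levels are assumed): for two random binary codes, the expected overlap shrinks rapidly as the codes get sparser.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048  # number of cells in the population (assumed)

def random_sdr(active):
    """Random binary code with `active` cells on."""
    sdr = np.zeros(n, dtype=bool)
    sdr[rng.choice(n, size=active, replace=False)] = True
    return sdr

# Denser, cortex-like codes collide more; sparser, hippocampus-like codes
# barely overlap at all, so stored patterns interfere less.
for sparsity in (0.20, 0.05, 0.01):
    active = int(n * sparsity)
    overlaps = [np.sum(random_sdr(active) & random_sdr(active))
                for _ in range(200)]
    print(f"{sparsity:.0%} active: mean overlap {np.mean(overlaps):.1f} "
          f"of {active} bits")
```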
Maybe the hexagonal arrangement for each place cell is due to a kind of even inhibition in all directions.
Maybe large pattern separation is needed in spatial thinking?
Do you have a link to this?
It is here: https://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main
As a matter of fact, I think it was a paper you posted that led me to their site. They also have simulations that you can run. It's a great site.
Here's a relevant diagram from their book:
(I downloaded their book as a PDF, and this diagram is on page 75).
Note that both the neocortex and the hippocampus have attractors, but the other areas do not. Also note the "Separator" column - that has to do with inhibition.
I am not sure about that table. All the sources I read point to some sort of Temporal Difference Learning happening in the ganglia. The argument is that the ganglia learn via the difference between the expected reward and the actual reward (the error), evidenced by biological dopamine secretion levels. I have never encountered any in-depth studies claiming that the reward itself is the learning signal instead of the error.
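For reference, the reward-prediction-error idea looks like this in its simplest tabular form (a generic TD(0) sketch, not anyone's specific basal ganglia model; the states and parameters are made up): the learning signal is the error delta, not the raw reward.

```python
# Minimal TD(0) update: the learning signal is the prediction error (delta),
# analogous to the dopamine signal, not the raw reward itself.
alpha, gamma = 0.1, 0.9                  # learning rate, discount factor (assumed)
V = {"cue": 0.0, "outcome": 0.0, "end": 0.0}

def td_update(state, reward, next_state):
    delta = reward + gamma * V[next_state] - V[state]   # prediction error
    V[state] += alpha * delta                            # learn from the error
    return delta

# Repeatedly pairing the cue with a later reward shifts value (and the error)
# backward to the cue - the classic conditioned-stimulus effect.
for _ in range(200):
    td_update("cue", 0.0, "outcome")     # cue precedes the outcome, no reward yet
    td_update("outcome", 1.0, "end")     # reward arrives at the outcome
print(V)                                 # V["cue"] converges toward gamma * 1.0
```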
Well yes, but they have the theory you mention in their book, and also, what they are really saying in the table is this (I'll give an example from my own life):
Today, I shoveled snow (I was in the East Coast snowstorm) and threw the snow over the edge of my driveway. If I had accidentally hit a bystander in the face with the snow, my cerebellum would learn from the error - the snow didn't go where expected. My ganglia, on the other hand, would see a dip in "reward" vs. expectation, and that would affect my goals (don't throw snow around in a hurry to get back to that great Seinfeld episode when there is someone standing around).
More seriously, they do go through the theories you mention, but they also talk about a model they created where synapses in the ganglia have a kind of Boolean flag that says whether they were ON in the past X minutes. If a reward happens and that flag is TRUE, then those synapses get stronger. The interesting thing is that this model works pretty well, even with conditioned stimuli, etc.
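Here is roughly how I read that mechanism, as a toy sketch (the class, names, and numbers are my own, not from the book): each synapse keeps a Boolean "was recently active" flag, and when a reward arrives, only the flagged synapses get strengthened.

```python
class FlaggedSynapse:
    """Toy synapse with a Boolean 'recently active' eligibility flag."""
    def __init__(self, weight=0.1):
        self.weight = weight
        self.flag = False          # TRUE if the synapse was ON recently

    def activate(self):
        self.flag = True           # mark as recently active

    def clear_flag(self):
        self.flag = False          # call once the time window (X minutes) expires

def deliver_reward(synapses, step=0.05):
    # Only synapses whose flag is still TRUE get strengthened by the reward.
    for s in synapses:
        if s.flag:
            s.weight += step

synapses = [FlaggedSynapse() for _ in range(3)]
synapses[0].activate()                         # only this synapse fired recently
deliver_reward(synapses)
print([round(s.weight, 2) for s in synapses])  # -> [0.15, 0.1, 0.1]
```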
Cheers.
What strategy might be the best way to encode location from multiple grid modules? My initial thought is a simple scalar encoder for each grid module - each having a reserved range of potential indices in the encoded space.
I don't think you need to encode anything. Each grid cell module, once established, is an SDR already. Simply concatenate them together. I did this in the video to show how a 2D space could be represented if you wanted semantically similar location representations (I'm not certain we need that for how grid cells act in the brain yet).
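A minimal sketch of what I mean (toy module sizes and active cells, not taken from the video's code): treat each module's active cells as a small SDR and concatenate the modules into one long SDR, offsetting each module's indices by the sizes of the modules before it.

```python
import numpy as np

# Toy example: each grid cell module is already an SDR over its own cells.
# To build one location representation, concatenate the modules, offsetting
# each module's active indices by the total size of the modules before it.
module_sizes = [16, 16, 16]                 # cells per module (assumed)
module_active = [[3], [9], [14]]            # active cell(s) in each module (assumed)

def concat_modules(sizes, active):
    total = sum(sizes)
    sdr = np.zeros(total, dtype=bool)
    offset = 0
    for size, cells in zip(sizes, active):
        for c in cells:
            sdr[offset + c] = True
        offset += size
    return sdr

location_sdr = concat_modules(module_sizes, module_active)
print(np.flatnonzero(location_sdr))         # -> [ 3 25 46]
```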