How is a three-dimensional cup mapped to a two-dimensional grid cell array?
Have you seen this?
I think this poster is even more relevant.
There will also be a paper soon about how 2D grid cell modules can track N-dimensional variables.
@rhyolight thanks. Do you have a version of this poster with higher resolution?
Did you click on it for the higher resolution version?
Aha, it is a problem with my iPhone… it works on my laptop as you mentioned… Thanks
This is a major step. Using the 3D case, it explains why a grid cell ARRAY representation is an SDR; it really is HTM.
I would go further and say the point of grid cell arrays is not to make a homunculus of the world. It is not the one fully overlapping point that is useful, it is the SDR that is used.
I have a few questions on the article “A Framework for Intelligence and Cortical Function Based on Grid Cells”. The questions are not on the basic idea yet, but on the grid cell theory behind it.

You say that every learned environment is associated with a set of unique locations. So suppose you have two identical rooms, except one is colored blue and the other is colored green. You release a rat in one room, it learns its surroundings, then you release it in the other room, and it learns to get around in that room too. So it seems that the grid cells that are active at the left back corner of the blue room should be the same as the grid cells that are active at the left back corner of the green room. But it seems you are saying this is not true. If it is not, then why not?

You also say that on entering a learned environment, grid cell modules anchor differently. “Anchor” means which grid cells are selected. Do you have a diagram that would illustrate this?

Finally, in the example of the cup with the logo, why do two spaces exist, logo space and cup space? Are they both represented by the same modules? If they are, I would think there would be danger of overlap, unless you switch in time from one to the other and back.
Thanks in advance
This is an area of great interest in grid cell studies.
Google “grid cells remapping” and you’ll find that as a critter enters and orients to a space, the response of the ensemble of grid/place/border cells seems to be reshuffled.
I have yet to read a paper that is able to describe the principles of how it works, only that it does.
There are promising signs that someone will sort out how location is coded, but these are still early days.
Here’s a super relevant experimental result: https://academic.oup.com/cercor/article/25/11/4619/2367613
In this experiment, the two rooms have the same shape and size. They have different colors and odors. Grid cells’ firing fields are different in the two rooms. Interestingly, the difference is purely translational.
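To make “purely translational” concrete, here is a toy sketch (my own illustration, not the model from that paper; the per-axis cosine response, spacings, and phases are all made up): if entering room B only shifts a module’s anchoring phase, then room B’s firing map is exactly room A’s map translated by that shift.

```python
import math

def grid_activity(pos, spacing, phase):
    """Toy per-axis periodic response (a stand-in for the real
    triangular lattice): peaks wherever (pos - phase) is a
    multiple of `spacing`, on each axis independently."""
    out = 1.0
    for p, ph in zip(pos, phase):
        out *= 0.5 * math.cos(2 * math.pi * (p - ph) / spacing) + 0.5
    return out

spacing = 40.0          # cm between firing fields (illustrative)
phase_a = (5.0, 12.0)   # anchoring phase in room A (made up)
shift = (10.0, -7.0)    # the translational remap (made up)
phase_b = (15.0, 5.0)   # phase_a + shift: anchoring in room B

# The firing map in room B equals the room-A map translated by `shift`:
for pos in [(0.0, 0.0), (23.0, 31.0)]:
    moved = (pos[0] + shift[0], pos[1] + shift[1])
    assert abs(grid_activity(pos, spacing, phase_a)
               - grid_activity(moved, spacing, phase_b)) < 1e-9
```

The asserts pass because shifting both the position and the phase by the same vector leaves the cell’s response unchanged, which is what a purely translational remap means.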
This was just published in Frontiers in Neuro Circuits, so I’ve updated the link in the post above (it was a pre-print server).
One of the most downloaded preprint science papers of 2018:
In figure 4, you say
To learn this behavior, the neocortex only needs to learn the sequence of displacement vectors as the top rotates.
But, in this case, there could be infinitely many displacement vectors between one location and another. How does the theory cope with this? It apparently assumes that there is a finite number of displacement vectors, which is not necessarily the case.
I would like to point out a few possible issues with this paper and theory:
Many proposals are highly speculative (e.g., the existence of displacement cells, or even grid cells, in the neocortex), and you do not provide experiments that support your claims. What if these functions are not performed in the neocortex?
The use of the “modules” looks like a workaround.
You pair a displacement cell module with a grid cell module only for convenience (or because you assume that grid and displacement cells have complementary functions). Are displacement cells really needed?
You assume that there is a discrete set of objects. How does the neocortex know what should be represented as an object and what should not (in terms of composition of objects)?
If every cortical column has its own model of the world, how exactly are these models then combined? It is often the case that we humans have several ideas of the same object, but most of the time we have just one idea of each object. How does the theory cope with these situations? Why do we sometimes have more than one competing model in our heads (i.e., we are confused, or we actually know that there isn’t just a single model)?
There are infinitely many movements that will move a sensor from A to B, but that is not what the displacement represents. There is one displacement vector representing A to B.
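A toy way to see why one vector suffices, if you model a location as a tuple of per-module phases (spacings here are illustrative, not from the paper): the modular phase difference between A and B is a single quantity, independent of the path taken between them.

```python
# A location is one (1D) phase per grid module; the displacement from
# A to B is a single set of modular phase differences, however the
# sensor actually moved. Module spacings below are made up.
SPACINGS = [3.0, 5.0, 7.0]

def displacement(loc_a, loc_b):
    """The unique per-module phase difference taking A to B."""
    return [(b - a) % s for a, b, s in zip(loc_a, loc_b, SPACINGS)]

def apply_displacement(loc, disp):
    """Advance each module's phase by the displacement."""
    return [(p + d) % s for p, d, s in zip(loc, disp, SPACINGS)]

A = [1.0, 4.5, 2.0]
B = [2.5, 0.5, 6.0]
d = displacement(A, B)   # one vector, regardless of the path taken
assert apply_displacement(A, d) == B
```

Any intermediate locations the sensor passed through simply don’t appear in `d`; they would only matter if you wanted to encode the trajectory itself, which is a separate (sequence) question.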
Most theory is highly speculative. If these functions are not performed in the neocortex, then back to the drawing board.
In what way? We already know a lot about how grid cell modules work. I think this mechanism is very likely reused in other areas of the brain for other things. The modules emerged from our understanding of grid cells.
I don’t think so.
There’s no choice here. All objects are represented with the same mechanism. Either a feature of an object at a location is a collection of sensory input, or it is a displacement that places another object into the reference frame of the parent. All object representations work this way.
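As a rough sketch of that claim (the names and structures here are my own illustration, not Numenta’s code): an object’s model can be pictured as a map from locations in its own reference frame to either a sensory feature or a displacement that places a child object, so a cup and its logo use the exact same mechanism.

```python
# Hypothetical object models: each maps a location in the object's own
# reference frame to either a sensory feature or a displacement that
# places a child object there. All names/values are illustrative.
coffee_cup = {
    (0, 0): ("feature", "rim-edge"),
    (3, 1): ("feature", "handle-curve"),
    # The logo is itself an object, placed in cup space by a displacement:
    (2, 2): ("displacement", ("logo", (0.5, -0.5))),
}
logo = {
    (0, 0): ("feature", "ink-edge"),
}
library = {"logo": logo}

def resolve(obj, loc, library):
    """Return the feature at `loc`, or follow a displacement entry
    into the child object's own reference frame."""
    kind, value = obj[loc]
    if kind == "feature":
        return value
    child_name, disp = value   # disp says where the child's space sits
    return library[child_name]

assert resolve(coffee_cup, (0, 0), library) == "rim-edge"
assert resolve(coffee_cup, (2, 2), library) is logo
```

The point of the sketch is only that there is one representational mechanism: a “nail” versus “finger with a nail” is just a choice of where a displacement entry appears, not two different kinds of representation.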
Lateral voting between cortical columns. We provide details about this mechanism in the columns paper.
As is also shown in the corresponding figure of the paper, there can be arbitrarily many locations between locations A and B. In theory, you can have infinitely many displacement vectors representing all possible displacements from, e.g., location A to any of the infinitely many locations between A and B.
But how does the neocortex know whether it has to represent, say, a nail as part of a finger or not? Is a nail an object, or is it just the finger? In other words, what is an object?
I think you are misunderstanding the theory here. To link one object to another, you only need one displacement vector. You can use it at any location in object A’s reference frame to move into the location space of another object in memory. Each cortical column has its own representation of objects defined in unique spaces. They are coordinated via lateral voting. Re-watch my latest HTM School video for visualizations of this.
It’s a representation of sensory features and/or displacements representing other objects, all in allocentric space.
This theory thus assumes that to move the head of the stapler from A to B you just need one displacement vector, so it “ignores” all points between A and B.
What if we want to go say from city A to city B? In this case, we can’t ignore all intermediate locations. We need to take them into account. I guess that your theory would say that we would have as many displacement vectors as there are “relevant” locations to take into account.
A series of displacements can be linked together via temporal memory to represent the full movement of the stapler. This is how the TM is applied to object behavior.
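A minimal sketch of that idea, with plain sequence replay standing in for the temporal memory mechanism (the displacement values are made up): the behavior is stored as a learned sequence of displacements, and replaying the sequence composes the full movement of the stapler top.

```python
# Toy sketch: a behavior is a learned sequence of displacements.
# Replaying the sequence step by step reproduces the movement.
def compose(start, displacement_seq):
    """Yield each intermediate location as displacements are applied."""
    pos = list(start)
    for d in displacement_seq:
        pos = [p + dd for p, dd in zip(pos, d)]
        yield tuple(pos)

# "Open the stapler": three small learned displacement steps (made up).
opening = [(0.0, 0.5), (0.0, 0.5), (0.0, 0.5)]
path = list(compose((1.0, 0.0), opening))
assert path[-1] == (1.0, 1.5)   # net effect of the whole behavior
```

Note the two levels: each step is a single displacement (no intermediate points needed within a step), while the sequence as a whole captures the trajectory when the trajectory itself matters.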
If I am thinking about Chicago in my brain, and something reminds me of New York City, I don’t have to mentally travel through all the cities I know between the two to make the immediate jump.