Ah, so essentially: consider the active cells of the grid as the SDR, and union them if needed (or just grow distal connections into all of them).
Definitely consider the active cell(s) in the GCM the on bits in an SDR. It is hard to say how many GCMs are used together in brains, but consider 16 cells per module where one cell is active at a time. This is not as sparse as we use in the SP (6.25%), but it works fine for input. Now imagine 10 modules projected across one space. Now you have 160 bits. 100 modules? 1600 bits.
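As a rough sketch of the arithmetic above (the sizes and function names are illustrative, not from any particular codebase), one active cell per 16-cell module gives a 6.25%-sparse SDR, and concatenating modules just multiplies the bit count:

```python
import numpy as np

# Hypothetical sketch: encode the active cells of N grid cell modules
# (GCMs) as the on bits of one flat SDR. Assumes 16 cells per module
# with exactly one cell active at a time, as in the example above.
CELLS_PER_MODULE = 16
N_MODULES = 10

def gcm_union_sdr(active_cells):
    """active_cells[i] is the index (0-15) of the active cell in module i.
    Returns a binary SDR of length N_MODULES * CELLS_PER_MODULE."""
    sdr = np.zeros(N_MODULES * CELLS_PER_MODULE, dtype=np.int8)
    for module, cell in enumerate(active_cells):
        sdr[module * CELLS_PER_MODULE + cell] = 1
    return sdr

sdr = gcm_union_sdr([3, 7, 0, 15, 2, 9, 11, 4, 8, 6])
print(sdr.size)   # 160 bits for 10 modules
print(sdr.sum())  # 10 on bits -> 10/160 = 6.25% sparsity
```

With 100 modules the same scheme would give 1600 bits, still at 6.25% sparsity, since each module contributes exactly one on bit.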
Thanks. To clarify why I was asking: I’m working on updating my 2-layer network (soon to be 4-layer, after differentiating “sense + orientation” vs “feature + location”) to implement the allocentric location signal with GCMs. It’s an implementation question of whether distal input for this signal comes from a single pooled matrix of cells, or from multiple separate matrices.
When you think about a Layer, you can’t really make assumptions about where the input is coming from, but you have to think about the process it is performing and what the input/output represents. I think you should be able to join representations from many GCMs into one input space for either proximal or distal input to a layer. You could probably shuffle up their bits too, as long as they are consistent across time.
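A minimal sketch of the join-and-shuffle idea, assuming the joined GCM bits are permuted once and the same permutation is reused at every time step (the names and sizes here are hypothetical):

```python
import numpy as np

# Sketch: join several GCM SDRs into one input space, then apply a
# fixed permutation. Meaning is preserved because the permutation is
# chosen once and stays consistent across time.
rng = np.random.default_rng(seed=42)
INPUT_SIZE = 160                           # e.g. 10 modules x 16 cells
permutation = rng.permutation(INPUT_SIZE)  # fixed once, reused forever

def shuffle_bits(sdr):
    """Permute the joined SDR with the fixed permutation."""
    return sdr[permutation]

joined = np.zeros(INPUT_SIZE, dtype=np.int8)
joined[[5, 21, 37]] = 1                    # three on bits from some GCMs
shuffled = shuffle_bits(joined)

assert shuffled.sum() == joined.sum()               # sparsity is unchanged
assert np.array_equal(shuffle_bits(joined), shuffled)  # deterministic over time
```

The shuffle only moves bits around; since downstream layers learn which input bits co-occur, any fixed, consistent relabeling of the bits carries the same information.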
Yes, the more I think about it, the implementation question really boils down to whether the output from the GCMs should be fed through the SP process (to fix sparsity) or used directly.
If the grids work correctly you should be able to use them directly.
I think GCMs could be useful that way, but I’m not sure that’s how they are used in neuronal calculations. We are still investigating. Hard to say more, still waiting for research.
I just watched the HTM School video about grid cells, and then I read a question in the comments: “What evidence exists that our spatial concepts underlie our temporal concepts?” The answer from HTM School was: “Temporal without spatial is nearly meaningless. It is like a 1D array. You must process more than that to understand reality. Adding ‘spatial’ to the mix means each data point in the 1D array can contain a wealth of information. It’s just how reality is, and it makes sense to me that’s how the brain represents it.”
So I thought about this comment with an example: when I say a sentence, every word is represented by an SDR at each time point, and every word is related to its position in the sentence.
So can I say that understanding a sentence is the combination of spatial and temporal?
In the video, Matt illustrated grid cell modules projecting onto a 1D space encoding the Z-axis, running perpendicular to grid cell modules projecting onto a 2D space encoding the X-Y axes, together giving us the complete representation of 3D space. But based on a paper I read a long time ago, I imagined grid cell modules projecting onto a 3D space more like 2D layers of probable locations stacked on top of each other, encoding all 3 dimensions.
I know that technically it doesn’t really make any difference. But biologically, is the 3rd dimension represented exclusively by some grid cell modules, or did I misunderstand the whole thing?
I’ll let others comment on the biology, but one important point (in case anyone didn’t pick up on it already) is that the type of grid cells depicted in the video are not physically arranged in any particular pattern (they are different from the self-reinforcing physical grid patterns that have been discussed in other threads). Different cells represent different points in a space, but they themselves don’t have to be arranged in a way that matches that space. So as long as you have enough of them, you could depict points in a space of any dimension.
I do expect that grid cells can sample a wide variety of inputs around each space and collapse that to a 2D space. Unlike Matt’s visualization, I expect that the grid may well be a small patch of activity on a given map.
I expect that the size of this patch of activity is related to how many cells forming the grid are cooperating in recognizing an input from the sensed dimension.
The grid output from a topologically related patch/space in the map will be active when those cells “hit” on a match in the input manifold - whatever dimension that might be. As the input moves along the input manifold I expect the resulting output grid pattern to change - either in spatial location in the map or in which cells are forming the grid in that spatial location.
One of the important properties of grids is that related grid modules that sample the input space at different spatial scales may project to an output area. Where these grids reinforce, as described in Matt’s video, you get an addition of activity that defines a precise location in the input space. This collapses a distributed activation pattern in input space into a very accurate point of activity in output space.
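One way to see why modules at different scales can pinpoint a location is a toy modular-arithmetic sketch: each module alone only knows position modulo its own period, so it is ambiguous by itself, but the joint code across modules is unique over a much larger range (the periods below are illustrative, not biological values):

```python
import math

# Toy sketch of scale combination: each grid module represents position
# only modulo its own period, but the joint activity of modules with
# different periods pins down a unique location over their whole lcm.
PERIODS = [3, 4, 5]  # pairwise coprime, chosen for clarity

def module_codes(position):
    """Phase (which cell is active) of each module at an integer position."""
    return tuple(position % p for p in PERIODS)

# Each module alone repeats quickly...
assert module_codes(1)[0] == module_codes(4)[0]  # period-3 module is ambiguous
# ...but the joint code is unique over lcm(3, 4, 5) = 60 positions:
span = math.lcm(*PERIODS)
codes = {module_codes(x) for x in range(span)}
assert len(codes) == span  # every position gets a distinct joint code
```

The same intuition carries over to continuous grid phases: a single module’s hexagonal firing pattern repeats, but the intersection of several incommensurate scales is effectively unique over behaviorally relevant distances.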
Excuse my English. I am not talking about the exact locations of the grid cells in the brain. I know they’re not physically arranged in patterns. I’m also not talking about all the other kinds of input that Grid cells can support.
My question is: in 3D spatial navigation, does a single grid cell from a grid module operate on a 2D scale or a 3D scale?
If they operate on a 2D scale, the grid cells that represent the 3rd dimension can’t overlap with the cells that operate on a 2D scale and have to be represented separately. But if the grid cells operate on a 3D scale, the grid modules can overlap and pinpoint the exact location in 3D space without the need for a 1D array.
Strictly 2D somewhere on a manifold.
The manifold can globally be in higher space but locally both the input and output are in 2D space.
Figure C: Hypothesis 1 - Lattice is what I had in mind.
Hypothesis 2 please.
Please keep in mind that this is being performed by a sheet of cells.
Figure 4 in your link shows the deformation of the space to deal with the wrap-around from flat to vertical on the single grid map. This deformation in the plane is sampling the intersection of this flat plane and the encoding mechanism onto the grid space. This is what is shown in the right-hand image D in your picture. The “corner” is “further away.”
Not discussed, but it should have been: the scaling on different grid modules as the critter transitions from flat to climbing.
Your brain thinks in 2D with “attributes” for local space. This is discussed at length in the “Vision” book by David Marr - he calls it 2.5 D.
I imagine you’re getting bored of hearing how amazing your work is… but, sorry, it’s still amazing!
What I (mis?)understood from this was that modules of grid cell-like populations are used to encode coordinates of relative positions of sensor / feature / object. How is this done, if the encoding is purely planar? Can someone please clarify?
I’ll try, but first:
What do you mean by this?
I don’t think of it like that. It seems to me that the manifold is dimensionless.