Ah, so essentially consider the active cells of the grid as the SDR, and union them if so needed (or just grow distal connections into all of them)
Definitely consider the active cell(s) in the GCM to be the on bits in an SDR. It is hard to say how many GCMs are used together in brains, but consider 16 cells per module where one cell is active at a time. That gives 6.25% sparsity, not as sparse as we typically use in the SP, but it works fine for input. Now imagine 10 modules projected across one space: you have 160 bits. 100 modules? 1600 bits.
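To make that concrete, here is a minimal sketch (my own toy code, not anything from Numenta) of treating each module's active cell as a one-hot chunk and concatenating the modules into one SDR; the sparsity stays fixed at 1/16 no matter how many modules you add:

```python
import numpy as np

CELLS_PER_MODULE = 16  # assumption from this thread: 16 cells, 1 active

def gcm_sdr(active_cells):
    """Concatenate one-hot GCM activity into a single binary SDR.

    active_cells[m] is the index of the active cell in module m.
    """
    sdr = np.zeros(len(active_cells) * CELLS_PER_MODULE, dtype=np.int8)
    for m, cell in enumerate(active_cells):
        sdr[m * CELLS_PER_MODULE + cell] = 1
    return sdr

# 10 modules -> 160 bits, 100 modules -> 1600 bits; sparsity is always 6.25%.
sdr = gcm_sdr(np.random.randint(0, CELLS_PER_MODULE, size=10))
print(sdr.size, sdr.sum() / sdr.size)  # 160 0.0625
```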
Thanks. To clarify why I was asking, I'm working on updating my 2-layer network (soon to be 4-layer, after differentiating "sense + orientation" vs "feature + location") to implement the allocentric location signal with GCMs. It's an implementation question of whether the distal input for this signal comes from a single pooled matrix of cells or from multiple separate matrices.
When you think about a Layer, you can't really make assumptions about where the input is coming from; you have to think about the process it is performing and what the input/output represents. I think you should be able to join representations from many GCMs into one input space for either proximal or distal input to a layer. You could probably shuffle up their bits too, as long as the shuffle is consistent across time.
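For the "shuffle the bits" point, something like the following sketch would do it (a hypothetical helper of mine, not from any HTM library): draw one random permutation up front and reuse it at every timestep, so overlaps between SDRs are preserved exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
N_BITS = 160                            # e.g. 10 modules x 16 cells
PERMUTATION = rng.permutation(N_BITS)   # fixed once, for the model's lifetime

def shuffle_bits(sdr):
    """Apply the same fixed permutation at every timestep, so the shuffle
    is consistent across time and pairwise overlaps are unchanged."""
    return sdr[PERMUTATION]
```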
Yes, the more I think about it, the implementation question really boils down to whether the output from the GCMs should be fed through the SP process (to fix sparsity) or used directly.
If the grids work correctly you should be able to use them directly.
I think GCMs could be useful that way, but I'm not sure that's how they are used in neuronal calculations. We are still investigating. Hard to say more; still waiting for research.
Hi everyone,
I just watched the video about grid cells, and then I read a question in the comments: "What evidence exists that our spatial concepts underlie our temporal concepts?" The answer from HTM School was: "Temporal without spatial is nearly meaningless. It is like a 1D array. You must process more than that to understand reality. Adding 'spatial' to the mix means each data point in the 1D array can contain a wealth of information. It's just how reality is, and it makes sense to me that's how the brain represents it."
So I thought about this comment with an example: when I say a sentence, every word is represented by an SDR at each time point, and every word is related to the position of that word in the sentence.
So can I say that understanding a sentence is a combination of the spatial and the temporal?
In the video, Matt illustrated grid cell modules projecting onto a 1D space encoding the Z-axis, running perpendicular to grid cell modules projecting onto a 2D space encoding the X-Y axes, giving us a complete representation of 3D space. But based on a paper I read a long time ago, I imagined grid cell modules projecting onto a 3D space more like 2D layers of probable locations stacked on top of each other, encoding all 3 dimensions.
I know that technically it doesn't really make any difference. But biologically, is the 3rd dimension being represented exclusively by some grid cell modules, or did I misunderstand the whole thing?
I'll let others comment on the biology, but one important point (in case anyone didn't pick up on it already) is that the type of grid cells depicted in the video are not physically arranged in any particular pattern (they are different from the self-reinforcing physical grid patterns that have been discussed in other threads). Different cells represent different points in a space, but they themselves don't have to be arranged in a way that matches that space. So as long as you have enough of them, you could depict points in a space of any dimension.
I do expect that grid cells can sample a wide variety of inputs around each space and collapse that to a 2D space. Unlike Matt's visualization, I expect that the grid may well be a small patch of activity on a given map.
I expect that the size of this patch of activity is related to how many cells forming the grid are cooperating in recognizing an input from the sensed dimension.
The grid output from a topologically related patch/space in the map will be active when those cells "hit" on a match in the input manifold, whatever dimension that might be. As the input moves along the input manifold, I expect the resulting output grid pattern to change, either in spatial location in the map or in which cells are forming the grid at that spatial location.
One of the important properties of grids is that related grid modules that sample the input space at different spatial scales may project to a common output area. Where these grids reinforce, as described in Matt's video, you get an addition of activity that defines a precise location in the input space. This collapses a distributed activation pattern in input space into a very accurate point of activity in output space.
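A toy 1D illustration of that multi-scale reinforcement (my own numbers, not from the video): each module only knows the position modulo its period, but intersecting the candidate positions across modules leaves a single precise location:

```python
periods = [5, 7, 11]        # three modules at different spatial scales
true_pos = 58

# Each module reports only a phase: position mod its period.
phases = [true_pos % p for p in periods]

# Positions consistent with each module's phase, over a working range.
candidates = [set(range(phase, 200, p)) for p, phase in zip(periods, phases)]

# Where the grids reinforce, the distributed activity collapses to a point.
print(set.intersection(*candidates))   # {58}, unique up to lcm(5,7,11) = 385
```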
Excuse my English. I am not talking about the exact locations of the grid cells in the brain; I know they're not physically arranged in patterns. I'm also not talking about all the other kinds of input that grid cells can support.
My question is: in 3D spatial navigation, does a single grid cell from a grid module operate on a 2D scale or a 3D scale?
If they operate on a 2D scale, the grid cells that represent the 3rd dimension can't overlap with the cells that operate on a 2D scale and have to be represented separately. But if the grid cells operate on a 3D scale, the grid modules can overlap and pinpoint the exact location in 3D space without the need for a separate 1D array.
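To spell out the two alternatives I mean (a toy framing of my own, not from any paper): either 2D modules carry (x, y) with a separate population for z, or a single volumetric module family carries all three phases jointly:

```python
def planar_encoding(x, y, z, xy_period=10.0, z_period=10.0):
    # 2D modules see only the horizontal plane; a separate
    # 1D population covers the vertical axis.
    return (x % xy_period, y % xy_period), (z % z_period,)

def volumetric_encoding(x, y, z, period=10.0):
    # Hypothetical 3D modules: one lattice carries all three phases.
    return (x % period, y % period, z % period)

print(planar_encoding(3.2, 7.5, 12.0))      # ((3.2, 7.5), (2.0,))
print(volumetric_encoding(3.2, 7.5, 12.0))  # (3.2, 7.5, 2.0)
```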
Strictly 2D somewhere on a manifold.
The manifold can globally be embedded in a higher-dimensional space, but locally both the input and output are in 2D space.
Figure C: Hypothesis 1 - Lattice is what I had in mind.
Grid cells on steeply sloping terrain: evidence for planar rather than volumetric encoding
Hypothesis 2 please.
Please keep in mind that this is being performed by a sheet of cells.
Figure 4 in your link shows the deformation of the space to deal with the wrap-around from flat to vertical on the single grid map. This deformation in the plane is sampling the intersection of this flat plane and the encoding mechanism onto the grid space. This is what is shown in the right-hand image D in your picture: the "corner" is "further away."
Not discussed, but what should have been, is the scaling of the different grid modules as the critter transitions from flat ground to climbing.
Your brain thinks in 2D with "attributes" for local space. This is discussed at length in David Marr's book "Vision"; he calls it 2.5D.
http://kryakin.site/David%20Marr-Vision.pdf
I imagine you're getting bored of hearing how amazing your work is… but, sorry, it's still amazing!
What I (mis?)understood from this was that modules of grid cell-like populations are used to encode the coordinates of the relative positions of sensor / feature / object. How is this done if the encoding is purely planar? Can someone please clarify?
I'll try, but first:
What do you mean by this?
I mean it in the sense of the paper linked by @vamsi: what is encoded is only the sensor / feature / object position on a 2D manifold.
I don't think of it like that. It seems to me that the manifold is dimensionless.