Grid cells as metadata mapping

Is it possible that grid cells actually form a map for getting data about data? I would think that this would be an efficient way to rapidly access any data about any other data. In this way, a hard hat as a concept would be the overlap of grid cells storing information about hardness, colour, use, tactile sensory data, previous interactions, geometry… I think you get the idea. Mere impressions of things could be combined to create an overlap of grids that would return a near-certain result. Just a thought.

I think emotional state could also be mapped onto a concept or series of concepts, like a subject area, and this would explain why people say things like “I am not a math person” or “I am not an athletic type.” I think this would point to why emotions play such a role in human learning.
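To make the overlap idea a bit more concrete, here is a toy Python sketch (everything in it is invented for illustration: the cell count, the sparsity, and the trick of encoding each feature as a fixed random set of cells; it is not a claim about how grid cells actually encode features). Each feature activates a small set of cells, a concept is the combination of its features, and even a partial impression overlaps far more with the right concept than with the wrong one.

```python
import random

CELLS = 2048                # size of a hypothetical cell population
ACTIVE_PER_FEATURE = 40     # sparse activity per feature "map"

def feature_sdr(name):
    """Deterministically pick a sparse set of active cells for a feature.
    Stands in for whatever map (grid-like or otherwise) encodes that feature."""
    rng = random.Random(name)
    return frozenset(rng.sample(range(CELLS), ACTIVE_PER_FEATURE))

# A concept is treated here as the combination (a simple union) of its feature maps.
hard_hat = (feature_sdr("hard") | feature_sdr("yellow")
            | feature_sdr("dome-shaped") | feature_sdr("worn on head"))
bowling_ball = (feature_sdr("hard") | feature_sdr("black")
                | feature_sdr("sphere") | feature_sdr("held in hand"))

# A partial impression -- only a couple of features noticed -- still overlaps
# much more with the right concept than with the wrong one.
impression = feature_sdr("hard") | feature_sdr("yellow")
print(len(impression & hard_hat), len(impression & bowling_ball))
# The first overlap is roughly double the second, so "hard hat" wins by a wide margin.
```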

Perhaps this is an idea more suited to psychology, but that is my primary interest, as all of this relates to learning. If someone knows someone who could discuss this with me, please put me in touch.
I am also quite certain (although I lack the mathematical prowess to prove it) that a hexagonal grid applied at varying angles probably returns the highest degree of accuracy in metadata mapping at the lowest cost per square foot, so to speak.
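As a purely illustrative sketch of the “hexagonal grids at varying angles” idea (the scales, orientations, and bucket counts below are arbitrary, and this is a toy periodic code rather than a model of real grid modules): each module by itself is ambiguous because its pattern repeats, but combining a handful of modules at different scales and angles gives a composite code that is nearly unique over a large area.

```python
import math

def hex_module_code(x, y, scale, orientation, buckets=10):
    """Which phase bin of one hexagonal-lattice-like module is active at (x, y)?
    Toy version: project onto two axes 60 degrees apart, rotated by `orientation`,
    wrap into [0, 1) (the pattern repeats), and discretize into buckets x buckets bins."""
    def axis(angle):
        return math.cos(angle), math.sin(angle)
    u = axis(orientation)
    v = axis(orientation + math.pi / 3)   # 60 degrees apart, as in a hexagonal lattice
    a = ((x * u[0] + y * u[1]) / scale) % 1.0
    b = ((x * v[0] + y * v[1]) / scale) % 1.0
    return int(a * buckets), int(b * buckets)

def location_code(x, y, modules):
    """Combine several modules (different scales/angles) into one composite code.
    Any single module is ambiguous, but the combination disambiguates."""
    return tuple(hex_module_code(x, y, s, o) for s, o in modules)

modules = [(1.0, 0.0), (1.4, 0.3), (2.1, 0.8), (3.3, 1.2)]  # arbitrary (scale, angle) pairs
print(location_code(5.0, 7.5, modules))
print(location_code(5.1, 7.5, modules))   # a nearby point gets a similar but not identical code
```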

Perhaps some food for thought in the search engine field as well?

Could we make hexagonal SDRs?

If you consider the additional data about objects and thoughts that can be combined with their sensory data to be metadata, then yes.

This “data about data” is in fact the actual data that is stored and combined, even from the sensory patterns. An object is stored as a higher-order representation that combines patterns representing the features you mentioned. It isn’t just metadata; these features are the main data itself.
I am not sure there is evidence that grid cells are used to store this feature data. They may be used to map each data point (data piece) to a location or some other parameters, but I haven’t read that they are used in generating those feature representations.

I’m not sure that that’s what I was saying… I understand that I have a memory of a hard hat or a dog or a cat or whatever, and that memory includes all of the feature data accumulated from previous interactions with these objects or concepts. What I was getting at was: is it possible that grid cells are used somehow to pattern and overlap new sensory information about a new hard hat or dog or cat? In this way, a mere impression of colour, shape, size, spatial orientation, and even emotional context of an object or concept could be combined very fast to match to a memory with a high degree of probability. Essentially, the grid cells overlap to produce an SDR for comparison to existing memory SDRs which are close in nature to the one currently being generated. Does that make more sense?
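To put that matching step in concrete terms, here is a toy sketch (the stored sets and cell indices are invented, and real SDRs would be far larger and sparser): compare the freshly generated SDR against each stored memory SDR and keep the one with the highest overlap.

```python
def best_match(new_sdr, memory):
    """Return the stored concept whose SDR overlaps most with the new impression.
    `memory` maps concept names to sets of active cell indices (toy stand-ins)."""
    return max(memory, key=lambda name: len(new_sdr & memory[name]))

memory = {
    "hard hat": {3, 17, 42, 88, 101, 230},
    "dog":      {5, 17, 60, 99, 150, 301},
    "cat":      {5, 18, 61, 99, 151, 300},
}
impression = {17, 42, 88, 230, 999}     # a partial, noisy cue
print(best_match(impression, memory))   # -> hard hat
```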

I believe you are asking about the H of HTM.
I think that the grid part is the signaling between higher-level “hubs”, and yes, there is generalization there.

I offer my thoughts on this here

and here