Cortical grid cells and Abstract ideas

I think that grid cells encode a temporally periodic pattern. If you are moving, spatial locations recur periodically in both space and time. This way grid cells can encode locations in any sequence, rather than only strictly spatial locations.
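To make that concrete, here is a toy sketch (my own illustration, not a model anyone here has proposed): a single grid-cell-like unit whose firing is periodic in 1-D position. If the animal moves at constant speed, the same code becomes periodic in time as well, so the same mechanism can tag positions in a sequence, not just places.

```python
import numpy as np

def grid_response(position, spatial_period=4.0):
    """Firing rate of a toy grid-cell-like unit vs. 1-D position
    (arbitrary units). The response peaks once per spatial period."""
    phase = (position % spatial_period) / spatial_period  # in [0, 1)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * phase))

speed = 2.0                        # units per second
times = np.arange(0.0, 8.0, 0.5)   # seconds
positions = speed * times          # moving at constant speed

# Periodic in space (period 4.0 units) and, because we are moving,
# also periodic in time (period 4.0 / 2.0 = 2 s of travel).
rates = grid_response(positions)
```

Nothing here is meant to be biologically faithful; it only shows how spatial periodicity plus motion yields temporal periodicity.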


I’m asking about the abstract patterns you mentioned earlier. For instance, properties like color, transparency, temperature, and weight, and more abstract ones like danger, attractiveness, reliability… basically any adjective.


Now imagine that as a sense (say vision) streams through, color, depth, texture, motion, extracted edges, etc. are decomposed into a vast field of properties, all roughly topologically aligned and mixed together in the association area. This is what the SDRs that feed the hex-grid-forming cells are sampling to fuse into a representation.
In my hex-grid post I postulate that both temporal and spatial properties are being sampled at the same time.
This is what competes to be the best pattern match to assert the formation of a grid to communicate to the hubs that make up the global workspace.
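A minimal sketch of that competition step, under my own simplifying assumptions (SDRs as sets of active bit indices, "best match" as largest overlap; the sizes and names are hypothetical, not from any actual hex-grid implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, BITS = 2048, 40  # SDR width and active-bit count, HTM-style numbers

def random_sdr():
    """A sparse distributed representation as a set of active bit indices."""
    return set(rng.choice(N, size=BITS, replace=False).tolist())

# Hypothetical property field: several sensory properties, each
# contributing an SDR, mixed together in the association area.
field = random_sdr() | random_sdr() | random_sdr()

# Candidate stored patterns compete on overlap with the sampled field;
# the winner would go on to seed the hex-grid representation.
candidates = {f"pattern_{i}": random_sdr() for i in range(5)}
best = max(candidates, key=lambda k: len(candidates[k] & field))
```

The point is only that "compete to be the best pattern match" can be as simple as an overlap score over sparse bit sets.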


I totally see how any periodic structure can be used for space, time, and ranges of any kind (including the degree of manifestation of most abstract patterns). However, an abstract pattern itself is basically a set, so I can’t see how a periodic structure could be useful for any manipulation of them.


These are emotional colorings that are added to whatever objects are being coded. This is clearly a sub-cortical contribution and not an inherent function of cortical processing. The cortex remembers the results of this processing but does not generate it by itself.

I have outlined how I think this works in numerous places elsewhere on the forum:

This one touches on the emotional weighting in particular.

Look at this one and note the reference to “Elliot” in the Rita Carter book:

Please note that place cells have been documented, and there are starting to be some indications of operations that allow mapping between these places, including short-cuts.

I have read in some older work on frontal eye fields (which I have been utterly unable to find again!) that the saccade is actually an activated path between two projected activation patches in the frontal eye field.

I strongly suspect that most of what we think of as mental processing and manipulation is, in fact, done with the same basic mechanisms that drive external motor movements. These “motor patterns” push activation back into the sensory streams, directing what we normally think of as searching higher-order mental dimensions: connections between areas/maps.

Some of this higher-dimensional space is learned connections formed as we absorb the motor behavior we call language:

In all of this please never forget that the driving force for the cortex (sensing and acting) is the older lizard brain. The cortex by itself is inherently passive.


I offer that this repeating pattern only extends as far as recognition that bonds one hex-grid cell to the next in a pattern that they share. This may be a very small patch. I postulate that several may exist at the same time, co-existing in the same map.

I will work on making some sketches of what a complete global workspace, as implemented in hex-grids, looks like in the next few days. I have been struggling to think of a presentation that makes sense without being either too simple or so busy that it is just confusing.

Sorry, I wasn’t accurate enough; involving the emotional part wasn’t my intention. Let’s use another set of examples: clumsy, mysterious, lazy, silly - I hope you see what I mean.

But we’ve been talking about grid cells, right?

I’m eager to see it!


See the bit above about language and higher dimensions that are the joining of more concrete fragments.
Interestingly - the grounded semantic meaning of these sorts of things all seems to be rooted in your body and built up from these primitives. I have posted this before, but it seems to be the most direct answer to your question:


Of course - but they are only part of a much larger system.
As I said - the hex-grids sit at the tops of each lobe - the hubs. There are lots of things happening to support this.
One other detail that I find interesting: the mechanism that I postulate as the core of the hex-grid-forming L2/3 layer depends critically on the ratio between the inhibitory inter-neuron and mutual-excitatory connection distances.
If the ratios are different, such as in V1, they work to form a Gabor filter. I would not be at all surprised if some other calculation tricks pop out in tuning the ratio between these components.
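As a rough stand-in for the ratio idea (my own toy, not the actual L2/3 mechanism described above): a difference-of-Gaussians lateral-interaction profile, short-range excitation minus longer-range inhibition, where changing the excitation/inhibition distance ratio changes which spatial frequencies the sheet amplifies, and hence what pattern emerges from the same components.

```python
import numpy as np

def dog_kernel(size, sigma_e, sigma_i):
    """Difference-of-Gaussians lateral-interaction kernel:
    a narrow excitatory Gaussian minus a broader inhibitory one."""
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    excite = np.exp(-r2 / (2 * sigma_e**2)) / (2 * np.pi * sigma_e**2)
    inhibit = np.exp(-r2 / (2 * sigma_i**2)) / (2 * np.pi * sigma_i**2)
    return excite - inhibit

# Same components, different excitation/inhibition distance ratios:
narrow = dog_kernel(31, sigma_e=1.5, sigma_i=3.0)   # ratio 1:2
wide   = dog_kernel(31, sigma_e=1.5, sigma_i=6.0)   # ratio 1:4
```

Both kernels have a positive center and negative surround; the surround radius (set by the ratio) is what selects the emergent pattern's spatial scale.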

I think that abstract patterns are processed by the cortex in the same way as sensory patterns. Edit: I also think that the magic step we call “abstraction” happens outside of the neocortex and passes into the PFC as feed forward input.

Could sensory patterns also be analyzed as sets? My analysis is that concrete objects are sets of sensory features, which change over time as you take actions.

Thank you for the link - I’ll read it.
Anyhow, I don’t think language should be involved in the discussion of the basic brain algorithm: Noam Chomsky claims that language is based on thoughts which existed before its emergence, and that sounds completely reasonable to me.

I hope the sketches you mentioned before will give a holistic picture of your vision of this system; I’ll wait :slight_smile:

In general I agree; my focus on abstract patterns was just for the purpose of the discussion, to separate them from patterns whose representation can involve ranges.

Where? And why do you think so?

As I said, I used the word set only to highlight differences between some kinds of patterns. Nothing is represented in the cortex as a set per se - that would require supporting unreasonably huge sets and solving the problem of their semantic similarity separately, which doesn’t make any sense.

As long as you are satisfied with the cognitive performance of a mouse I would agree with you.

Many of the mental tricks that we attribute to human cognition do not form if we don’t learn a human language.

I think that this very basic fact is a strong counter-argument to many of Chomsky’s claims.
Look at the prior link posted above:
Language is the base of conscious thought; you can see what is missing if a language is not learned.
Look for “Joseph …”



I believe you should raise the bar a bit higher: language has existed for, let’s say, 100,000 years, but the first tools (the invention and use of which require a decent level of thinking) were in use more than three million years ago.

It would be interesting to see a couple of examples to understand what exactly we are talking about.

I offered several above - please check them out.
There are many sad examples that have been documented. Without language, people are basically animals.

Before you get too proud of tool use keep in mind that octopus, crows, beavers, and bees use tools too. Ants farm aphids. Tool use in other primates is well documented.

I don’t know that anyone has proven conclusively that hominids did not have language for as long as they have been making pots and points.


I assume you are talking about presaccadic remapping and whether RFs jump or spread.

I haven’t read this source fully. It’s on LIP rather than FEF.

There are a lot of possible complications though.

Methods in remapping studies are pretty indirect, so there are other ways to interpret the results. It might be a spread sometimes and a jump other times, depending on what is being studied (layer, region, cell type, time before the saccade, subthreshold vs. firing, etc.), since it differs between layers. On average, although with a lot of variation by cell, cells projecting to the superior colliculus lose their response to the current visual stimulus as they gain their response to the upcoming post-saccade stimulus, unlike cells in some other layers. It could also be a purely attentional effect, possibly just revealing parts of receptive fields which are normally less influential but still functionally important.

It might not even make sense to think about this system just in terms of receptive fields, which could make hypotheses about remapping more self-confirming especially because remapping seems very noisy. For example, if you recorded from HTM’s temporal memory, you could argue that all cells in the same minicolumn have exactly the same receptive fields so minicolumns are therefore for robustness to noise (but the noise is actually sequence context). You could find particular activation patterns (which are actually representations of places in sequences) and make arguments about what those activation patterns are for, such as correlations between the activity patterns and pretty much anything (since a lot of things produce different sequences) and claim a result.
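To spell out the temporal-memory analogy with a toy (this is a deliberately stripped-down illustration, not actual HTM code; the context-hashing trick is my own stand-in for learned distal connections): every cell in a minicolumn shares the same feedforward input, i.e. the same nominal "receptive field," but which cell fires encodes the preceding sequence context. An experimenter characterizing only receptive fields would miss that structure entirely.

```python
CELLS_PER_COLUMN = 4

def active_cell(column, prev_symbol):
    """Pick which cell in `column` fires, deterministically from the
    prior symbol - a stand-in for learned context connections."""
    key = f"{column}:{prev_symbol}".encode()
    return sum(key) % CELLS_PER_COLUMN

# The same input activates the same column (same "receptive field"),
# but a different cell depending on whether it followed 'A' or 'X':
cell_after_a = active_cell(column=7, prev_symbol="A")
cell_after_x = active_cell(column=7, prev_symbol="X")
```

Recording from this system, the cell-level variability would look like noise unless you knew to condition on the preceding input.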

That’s not to say that remapping isn’t interesting and useful evidence. It’s a big clue but hard to work with.


Bitking, thanks for this, it is giving me tons to think about.