Can sensorimotor activity happen in the cortex by itself?

Can sensorimotor activity happen in the cortex by itself, or does it need to involve other brain regions?

Here is the thought process where I get tied up in knots…

1.) According to TBT, “thinking is a form of moving,” which I totally believe because I’ve experienced it subjectively.

2.) Moving seems to imply a form of agent-hood (goals, etc.) and a whole set of mechanisms that go beyond the cortex alone.

3.) Subconscious movement happens, e.g. eye saccades, that doesn’t seem to be driven by a whole agent with goals but instead by the needs of the visual cortex alone. In a more extreme example, it feels like movement in the “thinking is moving” sense happens entirely within the cortex. When I ruminate on a problem in the back of my mind and suddenly see the solution, my cortex likely traversed a lot of ground to find the path there. But something needs to be directing those movements.

So, where is the weak link in my thinking?

Is it:
A.) that, in fact, moving isn’t connected to agent-hood at all? Or
B.) that there are other brain regions outside the cortex driving the process of ruminating on the solution to a problem? Or
C.) that cortical columns have borrowed a lighter form of other old-brain structures, the same way they borrowed grid cells and place cells from the ERC?

Any insights are appreciated. Thank you.

The brain creates an analog of the world that includes data from all the senses. You can then move around in this internal world and also time travel in it, in the sense of rewinding a past experience or projecting an experience into the future. You can also autoscopically ‘see’ yourself in different scenarios and thus produce complex plans. TBT establishes the two key processes needed to enable this: a mapping to the physical world, derived from moving through what we might call a sensory-somatic space, and the element of time. A dog moves through a sensory space dominated by smell, other animals through touch (tactile), etc. Humans, of course, have the most complex repertoire of sensory data to work with of all.

I have been promoting this model on the forum for a while now.

The only insight I can offer is that introspection is not science. If you think you know something real but only because of your subjective experience, you’re almost certainly mistaken. Introspection might give you clues where to look (it probably won’t) but good science has never come from just thinking about stuff. All good science directly relies on experimental data.

So if your point (1) falls away, there isn’t much left, I’m afraid.

@BitKing: your explanation of “the basic loop of consciousness” and short-term memory makes a lot of sense to me. But I was still left with a question regarding sensorimotor activity: If each cortical column is capable of moving through reference frames, that means there are ~150K “perspectives” that need to be controlled and managed. Do all of them respond to projections from outside the neocortex? Where does the impulse to cause a move in a reference frame come from?

At the highest level, the cingulate cortex is responsible for goal-directed activity, but I got to thinking about SCA (Sense-Consider-Act) loops in very primitive organisms that don’t have brains at all, and also about some of the lowest-level reflexes we have, for example pulling your hand away from a hot object. In some cases these reflexes begin before any signal has reached the brain.
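
To make the idea concrete, here is a toy sketch of an SCA loop holding a single parameter near a set point. The names, set point, and step sizes are made up for illustration; this is not a claim about any particular biological circuit.

```python
# Toy sketch of a Sense-Consider-Act (SCA) loop holding one parameter
# (e.g. "temperature") near a set point. Names and thresholds are
# illustrative only.

def sense(world):
    return world["temperature"]

def consider(reading, set_point=37.0, tolerance=0.5):
    # "Consider" is just a comparison here; a reflex needs nothing more.
    if reading > set_point + tolerance:
        return "cool"
    if reading < set_point - tolerance:
        return "warm"
    return "rest"

def act(world, decision):
    if decision == "cool":
        world["temperature"] -= 0.2
    elif decision == "warm":
        world["temperature"] += 0.2

world = {"temperature": 39.0}
for _ in range(20):
    act(world, consider(sense(world)))
print(world["temperature"])   # settles near the set point
```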

So that leaves me with a few follow-up questions:

1.) What is the simplest (neural) SCA loop that we (mammals) have? There must be some kind of neuron or collection of neurons that operates to keep some parameters in homeostasis. I bet it’s well studied, but I just don’t know the right search terms.

2.) Do those SCA loops exist entirely within cortical columns?

If so, this runs counter to the “the neocortex is a map” idea, but it seems to be the simplest hypothesis that I can make fit all the observations.

Perhaps a better way to look at it is to say ‘The neocortex contains or stores a map’. The former is far too restrictive.

“Thinking is a form of movement.” What directs the movement?

The 2019 paper “Locations in the Neocortex” suggests a fundamental benchmark for macrocolumn research: a rat (or mouse) navigating dark 2-D environments (only features local to a given location can be sensed). The mouse is first introduced to a set of environments, say 20-50, and has the opportunity to explore and learn each of them. Then, it is dropped into a random environment at a random location. It orients itself by moving about in the environment and associating features it senses with what it previously learned. It eventually converges to a unique location within its environment: it is oriented.

A macrocolumn contains the learned information. As initial explorations take place, the macrocolumn learns spatial relationships among features belonging to each of the environments. A single macrocolumn can learn and hold multiple environments at the same time. This capability is demonstrated via simulations in the 2019 paper.

Say we have a macrocolumn that stores an environment as a directed displacement graph, as proposed by Lewis in his “Hippocampal Spatial Mapping As Fast Graph Learning” paper. The environment is stored in the synapses as a directed graph with labeled edges that give the spatial displacements between two features. The graph is not complete (a direct edge between every pair of features would be costly). However, the graph should be connected, so at a minimum there is a multi-edge path from any feature to any other feature.
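
As a rough illustration of the data structure (not Lewis’s synaptic implementation), a displacement graph might look like this, with made-up feature names:

```python
# Minimal sketch of a directed displacement graph for one environment.
# Nodes are features; each edge stores the 2-D displacement (dx, dy)
# needed to move from one feature to the other.

class DisplacementGraph:
    def __init__(self):
        self.edges = {}          # feature -> list of (neighbor, (dx, dy))

    def add_edge(self, a, b, displacement):
        dx, dy = displacement
        self.edges.setdefault(a, []).append((b, (dx, dy)))
        # Store the reverse edge so the graph stays connected both ways.
        self.edges.setdefault(b, []).append((a, (-dx, -dy)))

    def neighbors(self, feature):
        return self.edges.get(feature, [])

env = DisplacementGraph()
env.add_edge("wall-corner", "water", (3, 0))
env.add_edge("water", "cheese", (0, 2))
# No direct wall-corner -> cheese edge; the graph is connected, not complete.
```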

This macrocolumn can support three basic tasks:

  1. Exploration: It can learn environments through exploration;
  2. Orientation: When placed in a learned environment, it supports the orientation function;
  3. Navigation: After orientation has taken place, it supports navigation through the environment.

Regarding 3: One of the features may be “cheese”, so if the mouse is placed in a learned environment in a “hungry” state, it can use the macrocolumn to navigate to find the cheese. There may not be a single graph edge from its initial oriented location to the cheese, however, so at a minimum it can use some simple trial and error method to travel along a series of edges until it finds the cheese.
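
A toy sketch of what navigation over such a graph could look like, with breadth-first search standing in for whatever trial-and-error strategy the animal actually uses, and made-up feature names:

```python
# Toy sketch of navigation over a learned displacement graph.
# The graph is a plain dict here; BFS is only a stand-in for the
# animal's actual (possibly trial-and-error) search strategy.
from collections import deque

# feature -> list of (next_feature, (dx, dy))
graph = {
    "start":  [("water", (3, 0))],
    "water":  [("start", (-3, 0)), ("cheese", (0, 2))],
    "cheese": [("water", (0, -2))],
}

def navigate(graph, current, goal):
    """Return the sequence of displacements from current to goal."""
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        feature, path = queue.popleft()
        if feature == goal:
            return path
        for nxt, move in graph.get(feature, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [move]))
    return None

print(navigate(graph, "start", "cheese"))   # [(3, 0), (0, 2)]
```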

In the modeling work that I am doing, I employ an explicit, architected agent that generates the movements necessary to achieve the three basic tasks. The efficiencies with which the three tasks are performed depend on the movements required to achieve them, and the movements are determined by the agent. The agent is as important as the macrocolumn; neither would work without the other. And overall efficiency is determined by the quality of the agent.

So, an agent is an essential part of the overall system, and implementing a biologically plausible agent becomes a research project in its own right. For example, one might use neurons to implement a plausible reinforcement learning method that can optimize (shorten) paths to the cheese. Or the initial exploration phase might be part of an overall optimization plan to reduce the path length. An agent might re-invoke exploration from time to time so the short path to the cheese can evolve.
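
To make the division of labor concrete, here is a bare-bones sketch of the agent/macrocolumn split. The epsilon-greedy exploration policy and the MacrocolumnStub are illustrative assumptions, not a proposal for the biological mechanism: the agent owns the movement policy, and the macrocolumn only answers queries about what it has learned.

```python
# Bare-bones sketch of the agent/macrocolumn split.
import random

class MacrocolumnStub:
    """Stand-in for a learned model: maps a location to a suggested move."""
    def __init__(self, suggestions):
        self.suggestions = suggestions        # location -> (dx, dy) toward goal

    def suggest(self, location):
        return self.suggestions.get(location)

def agent_step(macrocolumn, location, epsilon=0.1):
    # Occasionally explore so the stored paths can keep improving.
    if random.random() < epsilon:
        return random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    move = macrocolumn.suggest(location)
    if move is not None:                      # oriented: follow the model
        return move
    return random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])

column = MacrocolumnStub({(0, 0): (3, 0), (3, 0): (0, 2)})
print(agent_step(column, (0, 0)))
```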

In the biological brain where is this agent functionality performed? It is not performed by the macrocolumns as described above.

Q-Learning Analogy
Q-Learning is a classic method for implementing reinforcement learning. It is based on a data structure known as the Q-Table. During exploration, an agent moves through an environment, receiving rewards and punishments as it goes. The results of many exploratory episodes are duly recorded and processed by the Q-Table. During exploration, the Q-Table only takes in information that is provided to it; it does not affect the exploration path. Then, after sufficient exploration, the agent may consult the Q-Table to decide on movements. In the classic algorithm, the result of a Q-Table lookup is used. However, the agent can use the Q-Table in any way it sees fit and can even ignore what the Q-Table “suggests” in favor of some heuristic-based move.
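
For concreteness, here is the standard tabular Q-learning update; this is textbook Q-learning, included only to anchor the analogy.

```python
# Standard tabular Q-learning update. The Q-table passively records
# values during exploration and is merely consulted (or ignored) when
# the agent chooses a move.
from collections import defaultdict

q_table = defaultdict(float)                 # (state, action) -> value
alpha, gamma = 0.1, 0.9                      # learning rate, discount

def update(state, action, reward, next_state, actions):
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_next - q_table[(state, action)]
    )

def greedy_action(state, actions):
    # The agent may use this suggestion or override it with a heuristic.
    return max(actions, key=lambda a: q_table[(state, a)])

actions = ["left", "right"]
update("s0", "right", 1.0, "s1", actions)
print(greedy_action("s0", actions))          # "right"
```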

Macrocolumns may play a role similar to the Q-Table in Q-Learning methods. That is, they are large (sophisticated) data structures that support functions (exploration, orientation, navigation) under the direction of an agent.

Space-Alien analogy
Say the metaphorical space aliens come to earth and examine a state-of-the-art microprocessor. Most of what they will see is SRAM – multi-level on-chip caches and predictors. And they may observe that the more SRAM, the better the performance (sometimes by huge amounts depending on working set sizes).
They would then endeavor to discover how an SRAM works, motivated by the belief that it is the most important part of the computer. It is an essential part, to be sure, but one can argue that the CPU is where the real magic takes place – it uses SRAM as a large data structure to support its operation.

What all this may mean
An overall research approach is to co-develop macrocolumn architecture and agent architecture.
Given a working macrocolumn as described above, there are (at least) two major research directions. One is to lash together multiple macrocolumns to form a region that can be used for achieving higher level objectives. The Numenta group uses lateral connections to implement a form of distributed consensus (“voting”) amongst groups of macrocolumns (see 04/05/2022 Numenta Research Meeting video).
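
As a toy illustration of the voting idea (not Numenta’s actual implementation), each column can be thought of as holding a set of candidates consistent with its own evidence, with lateral input narrowing all columns toward the common intersection:

```python
# Toy sketch of "voting": each column holds a set of candidate objects
# consistent with its own sensory evidence; lateral input narrows every
# column's set toward the shared intersection.
columns = [
    {"coffee-cup", "soda-can", "bowl"},
    {"coffee-cup", "soda-can"},
    {"coffee-cup", "stapler"},
]
consensus = set.intersection(*columns)
print(consensus)        # {'coffee-cup'}
```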

The other direction is to pursue biologically plausible optimized agents, with emphasis on plausible reinforcement learning methods. This can give insight regarding the capabilities that macrocolumns should provide. And advanced agents will be essential for demonstrating the capabilities of human-engineered neocortices as they develop.

Jes, I just want to thank you again for this answer. You hit the nail right on the head.

The more time I’ve spent on my software, the more convinced I am of the necessity to separate the behaviors you call the “agent” from the models (macrocolumns). It seems to make a mess any time I permit any autonomy inside the macrocolumn models.

Thanks again for your thoughtful answer.

I must have missed this the first time around, but this is one of the best posts I’ve seen here.

What you are describing is an animal brain modelling reality: a physical layout, plus cheese. This is the thing missing from all the ANN-based solutions I’ve seen. A self-driving car taps into a vast library of visual images of situations and what to do, but it doesn’t build a model.

And yes, it seems entirely plausible that columns and SDRs are the means of constructing, consulting and updating that model.

But I would very much appreciate a bit more explanation of what characterises an agent, and where we’re up to in constructing one. Is an agent a simple thing (I need cheese!) or is it highly complex? Are there working examples, even at an early stage?