Sharing an analogy I consider interesting

I’m here to share with you an analogy that came to my mind recently. It might be inaccurate or even wrong in any or all of its aspects, and though it might not be useful, I still consider it interesting enough to share. Besides, I’m a programmer, not a neuroscientist, and I lack the enormous amount of knowledge on the topic that you at Numenta have, so I can’t evaluate this analogy myself. With that said, below I present an analogy between early unicellular creatures and cortical hypercolumns and speculate about potential consequences of such a perspective.

The analogy came to my mind when I was reading the “Resynthesizing behavior through phylogenetic refinement” paper by Paul Cisek.

In the paper, among other things, the author describes an early unicellular creature. It has some internal chemical state, a set of chemical sensors on its membrane, and some mechanism for performing actions (for example, moving or eating). The internal state changes over time, affected both by the chemical sensors on the membrane and by the creature’s metabolism. There is a range of internal states that are “desirable” for the creature. When the internal state leaves this range, the action mechanism activates and an action is performed to eliminate the conditions that motivated it. The author calls the conditions that motivate an action an “impetus”.
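To make that loop concrete, here is a minimal sketch of such a creature in Python. All the class names and numbers are my own illustrative choices, not anything taken from Cisek’s paper; the point is only the shape of the loop: metabolism drains the internal state, sensors feed it, and the action mechanism fires only while an impetus exists.

```python
class Environment:
    def __init__(self):
        self.food = 0.0

    def signal(self):
        # what the membrane sensors report: local food concentration,
        # which is used up once it reaches the creature
        s, self.food = self.food, 0.0
        return s

    def respond_to_action(self):
        # the creature "moves"/"eats"; the only effect we model is that
        # more food shows up at the sensors on the next step
        self.food = 0.2


class Creature:
    DESIRED = (0.4, 0.6)            # range of "desirable" internal states

    def __init__(self):
        self.state = 0.5            # internal chemical state, collapsed to one number

    def impetus(self):
        lo, hi = self.DESIRED
        return not (lo <= self.state <= hi)

    def step(self, env):
        self.state -= 0.05          # metabolism drains the state (the creature gets "hungry")
        self.state += env.signal()  # chemical sensors on the membrane feed it back
        if self.impetus():          # the state left the desired range...
            env.respond_to_action() # ...so the action mechanism activates


env, creature = Environment(), Creature()
for t in range(12):
    creature.step(env)
    print(t, round(creature.state, 2), "impetus" if creature.impetus() else "ok")
```

Running it shows the creature settling into a cycle of drifting out of the desired range, acting, and being pulled back in.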

We can look at such a creature from two perspectives:

From the observer’s point of view, the creature senses its environment, moves, gets hungry and eats.

From the creature’s internal point of view, however, there are no such concepts as “environment”, “hunger”, “movement” and “eating”; there is just an internal chemical state and mechanisms that affect it. Whenever the creature’s state leaves the desired range (the creature gets “hungry”), some mechanism activates (the creature “moves”/“eats”) and changes the patterns of chemical signals from the sensors (the creature “interacts with the environment”), eventually returning the internal state to the “desired” range.

Such a perspective gives us an opportunity to completely abstract the creature away from its 3D environment. The chemical signals from the sensors are now just information, and “movement”/“eating” are now just mechanisms that change the patterns of this information and, through them, the internal state. They could be simulated by some sophisticated machine, and if it were accurate enough in the chemical signals it sends and in changing their patterns according to the creature’s movement attempts, then from the creature’s point of view there would be no difference. In such a simulation the creature’s environment is no longer physical but informational, and “movement” is performed not in the common sense but within this informational environment.
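Continuing the sketch from above (again, purely illustrative): because the creature only ever calls `signal()` and `respond_to_action()`, any object implementing those two methods can stand in for the physical world, and the `Creature` class itself does not change at all.

```python
class InformationalEnvironment:
    def __init__(self):
        self.pattern = 0.0

    def signal(self):
        # no chemistry here, just a number handed to the creature's "sensors";
        # consumed once delivered, like the food above
        s, self.pattern = self.pattern, 0.0
        return s

    def respond_to_action(self):
        # the creature's "movement" is now just a change in the information
        # pattern it will receive on the next step
        self.pattern = 0.2


env2 = InformationalEnvironment()
creature2 = Creature()          # exactly the same Creature class as in the first sketch
for t in range(12):
    creature2.step(env2)        # from the creature's side, nothing is different
```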

In a way, evolution has developed a simulation machine of exactly this kind, and it is the neural network. Neurons still receive their inputs as chemical signals through synapses, but those signals no longer originate in a 3D environment; they are now abstract information.

Now consider a cortical hypercolumn (HC).

An HC’s “environment” consists of parts of the sensory field and/or invariant representations originating in other HCs, depending on where it is connected to the thalamus.

In order to be able to change the patterns of signals from such an environment (or to “move” itself within it), an HC needs two mechanisms: one for literally moving the sensors and another for “moving” other HCs (not necessarily the same ones it receives input from, but rather the ones most useful for affecting those inputs). This “moving by affecting other HCs” mechanism could be a potential addition to, or alternative interpretation of, what you think of as “voting”.

Following the analogy further, we now need HC analogs for the “desired” range of internal states and for the impetus. Numenta shows that if an HC perfectly predicts its next input, then far fewer pyramidal cells fire when it arrives, because of local inhibition from basket cells. Since action potentials and their postsynaptic effects are metabolically expensive, the predictive state may be preferable from an energy-efficiency perspective and therefore “desired” for an HC. If so, then the analog of the impetus is a state in which prediction accuracy drops below some threshold, causing more cells to fire and increasing energy uptake. Again, following the analogy, this causes the column to “move” in order to return to the “desired” predictive state.
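Here is a rough numerical sketch of that HC-level loop, assuming (my own framing, not Numenta’s model) that the “desired” state simply means “currently predicting well”, that unpredicted input makes many more cells fire, and that the impetus is prediction accuracy dropping below a threshold.

```python
def accuracy(predicted, actual):
    # fraction of the actual input (a set of active "bits") that was predicted
    return len(predicted & actual) / len(actual) if actual else 1.0


class Hypercolumn:
    THRESHOLD = 0.8                     # below this, prediction counts as failed

    def __init__(self):
        self.predicted = set()

    def step(self, actual, move):
        acc = accuracy(self.predicted, actual)
        # unpredicted input makes many more cells fire, so the metabolic
        # cost grows as prediction accuracy drops
        energy = 1.0 + 10.0 * (1.0 - acc)
        if acc < self.THRESHOLD:        # the impetus: prediction failed...
            move()                      # ...so "move" sensors and/or other HCs
        self.predicted = set(actual)    # crude stand-in for learning the pattern
        return energy


hc = Hypercolumn()
surprise  = hc.step({1, 5, 9}, move=lambda: print("moving sensors / other HCs"))
predicted = hc.step({1, 5, 9}, move=lambda: print("moving sensors / other HCs"))
print(surprise, predicted)              # high energy at first, low once predicted
```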

Unlike the creature described above, an HC can learn new patterns in its inputs. This is not required for the analogy, but it is useful because it helps extend the range of desired states.

And here come even more speculations.

If I understand this right, when an HC fails to predict its input, the invariant representation it forms as its output becomes unstable and starts to “flicker”. Since the “environments” of other HCs can include parts of this output, they might also leave their predictive states, making their outputs unstable too, spreading the “unpredictiveness” further and eventually causing a significant number of HCs to switch to a different invariant representation than they had before, which gives a mechanism of bottom-up attention. Adding a mechanism that allows an HC to “move” the other HCs that provide it with its “environment”, or that are most useful for affecting it, would allow attention to spread in any direction, not just up the hierarchy.
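A toy sketch of the spreading part, with a made-up connection graph: each HC’s output is part of the “environment” of the HCs listening to it, so one surprised HC can destabilize its listeners, and so on.

```python
connections = {          # HC -> the HCs whose "environment" includes its output
    "v1a": ["v2"],
    "v1b": ["v2"],
    "v2":  ["v4"],
    "v4":  [],
}

def spread_surprise(start):
    surprised, frontier = set(), [start]
    while frontier:
        hc = frontier.pop()
        if hc in surprised:
            continue
        surprised.add(hc)                  # this HC's output becomes unstable...
        frontier.extend(connections[hc])   # ...so its listeners may lose prediction too
    return surprised

print(spread_surprise("v1a"))   # -> {'v1a', 'v2', 'v4'} (set order may vary)
```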

Jeff said in some earlier video that there seem to be two predictive circuits in an HC. His hypothesis at the time was that one might be for orientation and the other for location. The perspective described here offers another option to consider: it can be useful to know how to return to a predictive state faster, so the second circuit may be there to learn how to move sensors and other HCs more efficiently.

There are mechanisms in the brain that modulate neocortical activity. The increased energy consumption caused by leaving the predictive state might be a clear signal to such mechanisms that some modulation is needed in a given HC. Other mechanisms might force HCs to “move” under certain physiological conditions, like hunger or pain.

For example, if some HC leaves its predictive state, it might be useful to strengthen its inputs and weaken the influence of other HCs so that it gets a better chance to learn the new patterns. There is a brain region that might do just that: the nucleus basalis. Wikipedia states that “When a new potentially important stimulus is received, the nucleus basalis is activated. The axons it sends to the visual cortex provide collaterals to pyramidal cells in layer IV […] where they activate excitatory nicotinic receptors and thus potentiate retinal activation of V1. The cholinergic axons then proceed to layers I-II […] where they activate inhibitory muscarinic receptors of pyramidal cells, and thus inhibit cortico-cortical conduction”. And increased energy consumption caused by leaving the predictive state seems like a good proxy for a “potentially important stimulus”, since energy efficiency is important.
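As a sketch of what such modulation could look like in the terms used above (the gains and threshold are illustrative only, and this is a caricature of the quoted pathway, not a claim about it): when an HC’s energy uptake crosses a threshold, boost its bottom-up drive and damp cortico-cortical influence.

```python
def modulate(hc_energy, gains, threshold=5.0):
    # gains is e.g. {"feedforward": 1.0, "lateral": 1.0}
    if hc_energy > threshold:          # energy spike ~ "potentially important stimulus"
        gains["feedforward"] *= 1.5    # potentiate the bottom-up drive (layer IV)
        gains["lateral"] *= 0.5        # damp cortico-cortical influence (layers I-II)
    return gains

print(modulate(11.0, {"feedforward": 1.0, "lateral": 1.0}))
# -> {'feedforward': 1.5, 'lateral': 0.5}
```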

I’m sure there are even more interesting consequences to be examined.

So, what do you think, does any of this make sense?

Any thoughts?

Surprise and free energy minimisation?
You may want to read this:

There has been a lot of work in this area.
