The Thousand Brains Theory - Reviewed

I think the massive energy consumption of the neural hardware has driven aggressive elimination of useless features.

The functions the old brain has retained are therefore the functionally important ones, and they remain the drivers of the cortex.

If you want to understand the functions of the brain you will have to leave the seductive regularity of the cortex and descend into the hardwired complexity of the lizard brain.

Evolution has had much longer to dial in the functions in the lizard brain and try (and adopt) weird experiments.


I believe you are right about the role of the old brain; still, for me, it’s the biggest retained structure we have, but its functions are another matter.
Fortunately, its functions are not so complicated, if we don’t care about reproducing all the weird stuff evolution has produced during millions of years of tinkering and adapting to an environment that is no longer relevant.
On the other hand, the general algorithm of neocortical processing is where all the good stuff originates: abstract thinking, complex vision, and language.


This is the first time I’ve seen someone say this; can you explain more? Does the “old brain” here refer to the brainstem or something else? Why are its functions not complicated?


Hippocampus/entorhinal cortex, amygdala, hypothalamus, thalamus, basal ganglia, various other bits and bobs of the brain stem?
I have been studying this stuff for years and I can’t figure it out; I am SO ready for someone to explain all this to me.

Sorry, I should have been more specific: the hippocampus/entorhinal cortex is an integral part of the neocortical system; without it the neocortex is useless. So even though it’s an older part of the brain, I don’t count it as part of the ‘lizard brain’.
As for the rest, as I mentioned, I’m not saying these structures are simple or well understood. My point is that it’s quite easy to identify the minimal set of their functions needed to complete the cortex (with the hippocampus) into a full-fledged intelligence.

I respectfully differ on this.

I see that the Papez circuit loops through most of the limbic system (including the hippocampus) and that the hippocampus is the main mediator between the two systems (limbic and cortex) - it combines the episodic experience with what we felt about it to drive judgement. We do NOT think through what is good and bad about things; we feel it first and remember the feeling along with the experience. This ranges from simple objects to complex social settings. Our judgments of good and bad are the accumulation of a multitude of impressions, good and bad, about everything we have ever experienced. The few cases of humans without a functional amygdala (a major source of input to the hippocampus) show a profound lack of judgement. You do NOT want a powerful AGI that is not able to exercise good judgement. Some artificial replacement would likely be so alien that humans would have a very hard time understanding why it did anything.
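The accumulation-of-impressions idea above can be caricatured in code. This is purely my own toy illustration (the class and its names are invented for this post, not anyone’s actual model): each episode stamps a felt valence onto a concept, and “judgment” is just the averaged accumulated feeling, with no reasoning step at all.

```python
# Toy sketch of judgment as accumulated felt valence - an illustration of the
# idea above, not a model of the actual limbic circuitry.

from collections import defaultdict

class ValenceMemory:
    """Accumulates felt valence (-1..+1) per concept across episodes."""

    def __init__(self):
        self.total = defaultdict(float)   # summed valence per concept
        self.count = defaultdict(int)     # number of impressions per concept

    def experience(self, concept: str, felt_valence: float) -> None:
        # Each episode stamps its feeling onto the concept it involved.
        self.total[concept] += felt_valence
        self.count[concept] += 1

    def judge(self, concept: str) -> float:
        # "Judgment" is the averaged accumulated feeling: fast and automatic.
        if self.count[concept] == 0:
            return 0.0  # no impressions yet, so no gut feeling either way
        return self.total[concept] / self.count[concept]

memory = ValenceMemory()
memory.experience("snake", -0.9)
memory.experience("snake", -0.6)
memory.experience("campfire", +0.8)
print(memory.judge("snake"))     # negative: a learned aversion
print(memory.judge("campfire"))  # positive: a learned attraction
```

The point of the caricature: nothing here ever deliberates; the “good/bad” answer is already sitting in the accumulated totals when the question is asked.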

Likewise, the lower frontal lobe is driven by efferents from the hypothalamus. You can say a lot of things about cortex, but one of them is NOT that it initiates actions. Cortex is passive. The sensory stream and the motivation/action drives all originate from outside the cortex. The sensory part is self-evident; the forebrain part is the hypothalamus, responding to whatever is signaled up the brainstem and by taps into various parts of the cortex - primarily the temporal lobe.

The cerebellum does coordinate action going down the brainstem to the body, but in humans it also serves the very important function of driving the unfolding of activation in the forebrain that is routed back to the upper levels of the WHAT/WHERE streams - the bit we normally call thinking. This is also critical to the learned motor actions we call talking, which is closely related to that thinking bit.

I know that newbie AGI experimenters think that they will just copy the cortex and voila - they will have a functioning AGI - or at the very least, that these subcortical structures are little stubs they will tack onto the fancy cortex. I say that this shows a profound lack of understanding of how much of what we are is tied up in these subcortical structures.


I agree with the factual part of what you said but differ in the emphasis of interpretation.
Actually, what I exactly do not want is a human-level intelligence with all this hardcoded mess people have in the old brain structures, much of which evolved for an environmental context in which it is now not merely useless but harmful: all these biases, fears, uncontrolled reactions, and other dirty shortcuts we carry from ancient times.
Our judgments of good and bad are an illusion, another shortcut for a weak mind; everything depends on the context, it’s just that not everybody can (or has enough time to) unroll it to get the whole picture.
The subcortical structures are not a stub; they are an interface to homeostasis control. That interface can be implemented in a human-like manner, as a semi-independent legacy structure with the opportunity/tendency to be reduced to purely animal behaviour, or it can be a very thin API with all the needed processing done in the core in a cortex-like manner. I definitely prefer the latter.
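To make the “very thin API” option concrete, here is a minimal sketch of what I mean. Every name here is hypothetical and invented for illustration: the legacy layer only measures internal state and executes explicit corrections; it never decides anything itself, leaving all interpretation to the cortex-like core.

```python
# Hypothetical sketch of a thin homeostasis API: the old-brain layer reduced
# to sensing and actuation, with no built-in fears, urges, or reflexes.

from dataclasses import dataclass

@dataclass
class HomeostasisReport:
    """Raw internal-state variables, reported without any built-in reactions."""
    energy: float        # 0.0 (depleted) .. 1.0 (full)
    temperature: float   # normalized deviation from the set point
    damage: float        # 0.0 (intact) .. 1.0 (critical)

class ThinHomeostasisAPI:
    """The legacy layer only measures and actuates; it never decides."""

    def __init__(self):
        self.state = HomeostasisReport(energy=1.0, temperature=0.0, damage=0.0)

    def read(self) -> HomeostasisReport:
        # The cortex-like core polls raw state and interprets it itself.
        return self.state

    def actuate(self, energy_delta: float = 0.0) -> None:
        # The core issues explicit corrections instead of inheriting reflexes;
        # the layer just clamps the result to the physically valid range.
        self.state.energy = min(1.0, max(0.0, self.state.energy + energy_delta))

api = ThinHomeostasisAPI()
api.actuate(energy_delta=-0.3)   # the core decides to spend energy on a task
print(api.read().energy)
```

The contrast with the “semi-independent legacy structure” option is that nothing in this layer can seize control: there is no pathway from a reading to a behaviour that doesn’t pass through the core.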


I agree with your idea. Do you have a list of the features an independent intelligent algorithm should have? In other words, what tasks could it do, compared to human intelligence?

As soon as you have cortical-like processing, you get it all - with proper input and training, obviously.
Even old-brain stuff like emotions: if you have anything to reflect on, you can add it to your model of the world and associate it with yourself.

I don’t know at all what role the old brain plays in intelligent behavior (such as reasoning, planning, etc.). I am very skeptical about the power of the neocortex. After all, the neocortex looks similar everywhere. :man_shrugging:

You can be skeptical about its power, but you could not write about it without it :wink:


And it’s the best part of the story :slight_smile:

Bro, you should tell it if you knew …

That is kind of Numenta’s thing!

All roads lead to Rome!
Or in this case, lizard brains.

Don’t be too hasty in throwing out a system that has proven its worth in creating HGI.

The built-in programming had to include fear of the lethal aspects of the environment: a healthy fear of things with poisonous bites and stings, strangers, deformed members of your own group (collectively, “the other”), heights, and the like makes a great deal of sense.

You can retain the instincts that would work to implement the three laws, curiosity, and social interactions, and perhaps throw in a few new ones for good measure.

But the pallium is the birds’ cortical-like structure, even though it’s old :slight_smile:

Again, I’m not saying these functions are useless or aren’t worth studying. I’m just saying that reproducing the old brain would be an inefficient (and dangerous) way to get them.
We need to get cortical-like processing in any case, and since it’s capable of working with any patterns, why would we make the project 100 times more complicated by adding the reengineering of the legacy part?

Out of curiosity - how do you intend to manufacture the judgement embedded in the data?

Do you intend that your AGI figures everything out from some sort of first principles? If so, I would be further intrigued by how you come up with them.


By the way, some more info about them being well equipped in that area.

Fig. 3.

Neuronal densities and relative distribution of neurons in birds and mammals. (A–C) Neuronal densities in the pallium (A), cerebellum (B), and rest of the brain (C). Note that neuronal densities are higher in parrots and songbirds than in mammals (for statistics, see SI Results). (D–F) Average proportions of neurons contained in the pallium (D), cerebellum (E), and rest of the brain (F). Note that the increasing proportions of brain neurons in the rest of the brain in parrots are attributable specifically to increasing numbers of neurons in the subpallium (Fig. 5). Data points representing noncorvid songbirds are light green, and data points representing corvid songbirds are dark green. The fitted lines represent RMA regressions and are shown only for correlations that are significant (r² ranges between 0.389 and 0.956; P ≤ 0.033 in all cases). (G) Brains of corvids (jay and raven), parrots (macaw), and primates (monkeys) are drawn at the same scale. Numbers under each brain represent the mass of the pallium (in grams) and total numbers of pallial/cortical neurons (in millions). Circular graphs show proportions of neurons contained in the pallium (green), cerebellum (red), and rest of the brain (yellow). Notice that the brains of these highly intelligent birds harbor absolute numbers of neurons that are comparable to, or even larger than, those of primates with much larger brains. (Scale bar: 10 mm.) Data for mammals are from published reports (for details, see Methods). CL, pigeon; DN, emu; GG, red junglefowl; TA, barn owl.