A Layman’s Description of the Thousand Brains Theory

I am definitely no expert (I’m actually a train conductor, lol), but I found both of Jeff’s books fascinating. I’d be fibbing if I didn’t say some things went over my head, though, so I want to offer a metaphor for how I think the Thousand Brains Theory works. Can someone let me know if I am way off base or somewhat close?

So when I think of the Thousand Brains Theory, I think of a hologram of, say, a dog: I can move the image around and see the dog at different 3D orientations. If I were to cut the hologram into a bunch of small pieces and then look at only one square, I would still have a complete picture of the dog. The information is all there, but my “field of view” is much smaller, so I would have to really move the square around, examining all the details, until I could determine it’s a dog. But if I combine all those squares back into one image, my field of view is large enough that I can see the dog in one look.

So the cut-up squares are how I am thinking about each individual cortical column, and all of them linking together and “voting” is the entire hologram put back together. Each piece has all the data about the dog, but the “field of vision” is much larger when they are working together.

With that description, I can see the classical “hierarchical” structure in play, but with these “squares of full objects” combining features into more abstract data rather than simple features (like a basic neural network would do).

Is this a good way of thinking about things?


Sounds like it… one small correction: every square has a slightly different model, and in some cases maybe even a totally different one. E.g., two columns may both recognize the dog, but one has an ear model and the other a tail model.

Which means it is not exactly like a hologram…
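To make that concrete, here is a toy sketch of the voting idea in Python. Everything here (the column names, features, and candidate objects) is made up for illustration; this is not Numenta’s actual algorithm, just the intuition that each column holds a different partial model yet the group still converges on one answer.

```python
from collections import Counter

# Hypothetical toy models: each "column" knows only one feature,
# and maps that feature to the objects it could belong to.
COLUMN_MODELS = {
    "ear_column":  {"pointy_ear": ["dog", "cat"]},
    "tail_column": {"wagging_tail": ["dog"]},
    "paw_column":  {"padded_paw": ["dog", "cat"]},
}

def vote(sensed_features):
    """Each column votes for every object consistent with the feature it sensed."""
    ballots = Counter()
    for column, model in COLUMN_MODELS.items():
        feature = sensed_features.get(column)
        for candidate in model.get(feature, []):
            ballots[candidate] += 1
    # The object consistent with the most columns wins the vote.
    return ballots.most_common(1)[0][0] if ballots else None

sensed = {
    "ear_column": "pointy_ear",
    "tail_column": "wagging_tail",
    "paw_column": "padded_paw",
}
print(vote(sensed))  # prints "dog": it agrees with all three columns, "cat" with only two
```

Note that no single column saw the whole dog, yet “dog” still wins, because it is the only interpretation consistent with every column’s partial model.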


I think the intuition sounds correct, but the details may need a little adjustment. The main one I would point out is that each column learns the patterns it receives as proximal input, whether directly from the senses or from other brain areas. In addition, contextual information helps the column disambiguate its input: the distal input provides temporal context (i.e., how the proximal input is changing in time relative to neighboring columns’ activations), and the apical dendrites provide hints or expectations about the input based on predictions made elsewhere in the brain (e.g., voting).

The key thing to realize is that the column doesn’t know where its input is coming from, only that it appears to come from someplace generating patterns of activation that are somewhat consistent and persistent in time and space (otherwise it would be impossible to learn anything meaningful). It is this learning and sharing of cortical-column models that allows the cortex and subcortex as a whole to compose higher-order models of objects and behaviors, and to formulate and execute plans to interact with the environment in an intelligent manner.