The insight that each column of neurons in the neocortex independently recognizes sensory sequences and advertises that recognition to motor and communication layers neatly addresses the long-intractable problem of giving a machine “common sense.”
From my understanding of the Thousand Brains Theory, as a sensory pattern develops, the expected-next sensory values are continuously predicted at the level of the individual column. Thus, each column is effectively predicting the expected “behavior” of its local view of the “object” being recognized.
Just as the individual columns signal their recognitions of their portions of a macroscopic object, which when integrated become the recognition of the whole object, they also very likely (I’m no neuroscientist) signal their expectations of the next states, which when integrated become the expected behavior of the whole object.
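To make the “predict locally, vote globally” idea concrete, here is a minimal sketch. It is not HTM — real columns use high-order sparse sequence memory, not first-order transition counts — but it illustrates the two steps above: each toy column learns to predict the next value of its local stream, and an integration step combines the per-column predictions by majority vote. All names (`Column`, `vote`) are mine, invented for illustration.

```python
from collections import Counter, defaultdict

class Column:
    """Toy stand-in for a cortical column: learns first-order transitions
    over its local sensory stream and predicts the next value.
    (Real HTM columns learn high-order sparse sequences; this is only an
    illustration of 'predict locally, vote globally'.)"""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev value -> next-value counts
        self.prev = None

    def observe(self, value):
        if self.prev is not None:
            self.transitions[self.prev][value] += 1
        self.prev = value

    def predict(self):
        """Most frequently seen successor of the current value, if any."""
        if self.prev is None or not self.transitions[self.prev]:
            return None
        return self.transitions[self.prev].most_common(1)[0][0]

def vote(columns):
    """Integrate per-column expectations by majority vote."""
    ballots = Counter(p for c in columns if (p := c.predict()) is not None)
    return ballots.most_common(1)[0][0] if ballots else None

# Three columns watching copies of the same repeating sequence.
columns = [Column() for _ in range(3)]
for value in "ABCABCABCAB":
    for c in columns:
        c.observe(value)

print(vote(columns))  # every column expects 'C' after 'B', so the vote is 'C'
```

The integrated prediction is the machine’s “expectation” about the object; disagreement among columns (a split vote) would signal surprise, which is where a learning update would hook in.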
The expected behavior of an object handily defines our common sense about that object: what might happen to it, and what it might do. In other words, the “hard” problem of artificial common sense simply falls out of the Thousand Brains Theory.
Nicely done, Thousand Brains Theory!
I see at least a few remaining difficulties:
- The small matter of training.
- The fraction of common sense that refers to expectations about how humans will behave. The scaffolding necessary to build thousands of HTM brains into psychological agents is not yet clear to me, although I’m sure that many human neurocolumns recognize “emotional objects” and their relationships and predict those behaviors. (Worth, trust, affinity, and self are examples of what I mean by emotional objects.)
- A sufficient implementation. Ideally the hardware would permit all columns to be pre-loaded with information gathered from previously acquired experience, so only the first machine needs to be trained. (Or maybe existing hardware is enough?)
Others?
I’d appreciate any comments.
[Edit: clarity.]