Common sense and the Thousand Brains Theory


#1

The insight that each column of neurons in the neocortex independently recognizes sensory sequences and advertises that recognition to motor and communications layers neatly solves the heretofore essentially intractable problem of how to give a machine “common sense.”

From my understanding of the Thousand Brains Theory, as a sensory pattern develops, the expected-next sensory values are continuously predicted at the level of the individual column. Thus, each column is effectively predicting the expected “behavior” of its local view of the “object” being recognized.

Just as the individual columns signal their recognitions of their portions of a macroscopic object, which, when integrated, become the recognition of the whole object, they also very likely (I’m no neuroscientist) signal their expectations of the next states, which, when integrated, become the expected behavior of the whole object.
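To make the integration step concrete, here is a toy sketch of the idea, with entirely hypothetical classes and names of my own invention (this is not Numenta’s implementation or API): each “column” learns next-state transitions over its local view, and the object-level expectation is just the consensus of the columns’ votes.

```python
from collections import Counter, defaultdict

class Column:
    """A hypothetical column that learns (current -> next) transitions
    over its local view of an object. Purely illustrative."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        # Record each observed transition in the sensory sequence.
        for cur, nxt in zip(sequence, sequence[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, current):
        # This column's expected next state, or None if it has no opinion.
        votes = self.transitions.get(current)
        return votes.most_common(1)[0][0] if votes else None

def integrate(columns, current):
    """Pool per-column predictions into a consensus expectation."""
    votes = Counter(c.predict(current) for c in columns)
    votes.pop(None, None)  # columns with no opinion abstain
    return votes.most_common(1)[0][0] if votes else None

# Three columns observe an apple's "behavior" over time.
cols = [Column() for _ in range(3)]
for c in cols:
    c.learn(["red", "brown", "rotten"])
# One column has a divergent local history...
cols[0].learn(["red", "green"])
cols[0].learn(["red", "green"])

print(cols[0].predict("red"))   # green (local dissent)
print(integrate(cols, "red"))   # brown (consensus wins)
```

The point of the toy is only that the “common sense” expectation lives nowhere in particular: each column keeps its own local model, and the whole-object prediction emerges from pooling.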

The expected behavior of an object handily defines our common sense about that object and what might happen to it or what it might do. In other words, the “hard” problem of artificial common sense falls effortlessly out of the Thousand Brains Theory.

Nicely done, Thousand Brains Theory!

I see at least a few remaining difficulties:

  1. The small matter of training.

  2. The fraction of common sense that refers to expectations about how humans will behave. The scaffolding necessary to build thousands of HTM brains into psychological agents is not yet clear to me, although I’m sure that many human neurocolumns recognize “emotional objects” and their relationships and predict those behaviors. (Worth, trust, affinity, and self are examples of what I mean by emotional objects.)

  3. A sufficient implementation. Ideally the hardware would permit all columns to be pre-loaded with information gathered from previously acquired experiences, so only the first machine needs to be trained. (Or maybe existing hardware is enough?)

Others?

I’d appreciate any comments.

[Edit: clarity.]


#2

I see common sense as arising from shared experience with other humans who have similar bodies in a common culture and environment; they just know stuff. This shared experience creates a common frame of reference for building decisions compatible with all these known factors.

Having lots of computing units will not automatically generate this shared experience.


#3

Bitking: I believe we have started from different definitions of “common sense”. I mean something like, “Sound judgment not based on specialized knowledge; native good judgment” (American Heritage). You appear to have defined it as something like “culturally agreed standards.” I believe most etymologists would be on your side, as “common sense” derives from the Latin “sensus communis,” roughly, “common mind.”

I appreciate your attempt to explain the definition of common sense, but it doesn’t help me make the point I really want to make: TBT appears to have enabled a solution to a very hard problem in artificial intelligence.

Getting a computer to understand cultural norms will almost certainly remain a hard problem in computer science for a very long time, even with TBT. (See my #2 remaining difficulty.) Getting a computer to correctly answer something like “What color is the apple I left on the table for six months?” is hard enough.

As far as I am aware, there have been no successful efforts to give a computer system enough knowledge about the real world that it would have the “common sense” to resolve such humanly trivial questions. Efforts by computer scientists to date have sought to write enough rules (thousands and thousands!) to cover the expected situations in limited environments (sporting matches, for example). Humans (and animals!) exercise this rudimentary, real-world common sense effortlessly (“natively”). Unless you think we (and animals!) come pre-programmed with thousands and thousands of such rules, the “list of rules” approach doesn’t seem to fit biology very well.

Numenta’s brand new Thousand Brains Theory (released Oct 13) provides an elegant and robust answer for how to effortlessly generate this rudimentary form of common sense about objects in the world. (“Brown!”) I have not seen the extension of TBT to solving the computer science problem of basic real-world knowledge previously proposed, and I hope computer researchers take note.

Your final comment is of course correct for any non-biological collection of computing units yet conceived; however, I’m afraid it is absolutely false for humans (and maybe dogs and other “smart” animals, but let’s leave that aside). We contain, according to TBT, thousands of computing units in the neocortex alone. Because the environment makes other humans available for us to communicate with, we do “automatically” generate shared experience (after sufficiently training our thousands of neural computers), and that fits even the ambitious, etymological definition of common sense.

[Edit: clarity and typos]


#4

We are mostly on the same page.

Much of what you are saying revolves around training and the related database that becomes embedded in the system.
You lead with it in your point 1, describe it in point 2, and follow with replicating it in point 3.

You are correct that the brain is composed of a vast number of processing units, but nothing there explains any of the three points you raise.

There is some innate learning, generally described as instinct, but I see that as the tiny starting seed for what will become a vast store of information.

I did not make a clear connection between the common experience and the internal formation/training of the database of common experience. A lifetime of experience builds this database. Yes, this database is distributed over a huge network of simple processing units, but that does not alter the fact that the system is building a database as you experience the world. Experts in child development have extensively documented how this experience develops and manifests.

This information does not arise from calculation but from experience.
In the response I linked above, the paper shows how semantic development is widely distributed to the parts of the brain that experience the related concepts. fMRI studies show that this distribution varies somewhat across individuals but is generally the same for all of us. This is the grounding for our common experience; we build on it.

The same neural hardware “filter” that parses your perceptions to the “correct” place in the network is also the key used to access and recall that part of the database. You use these neural hardware “access keys” at the same time that you build the database, so the structure of keys and factoids is perfectly matched during formation and recall. Everything you learn is added in some way to what is already known and recalled, as a form of delta coding. The closest match in current computing theory is data compression.
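The delta-coding analogy can be loosely illustrated like this, with hypothetical function names of my own (a sketch of the general technique, not a model of the brain): store only what differs from what is already known, and reconstruct by reapplying the stored differences to the existing base.

```python
def delta_encode(known, new):
    """Store only the fields of `new` that differ from what is known."""
    return {k: v for k, v in new.items() if known.get(k) != v}

def delta_decode(known, delta):
    """Recall = prior knowledge plus the stored differences."""
    return {**known, **delta}

# Prior knowledge of an apple, and a new observation of the same apple.
known = {"shape": "round", "color": "red", "taste": "sweet"}
observed = {"shape": "round", "color": "brown", "taste": "bad"}

delta = delta_encode(known, observed)
print(delta)                                   # {'color': 'brown', 'taste': 'bad'}
print(delta_decode(known, delta) == observed)  # True
```

The unchanged attributes cost nothing to “store” again, which is the sense in which learning against an existing base resembles data compression.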