Jeff's new article in IEEE Spectrum

Jeff was invited to write an article for an IEEE Spectrum special report on brains. His article, “What Intelligent Machines Need to Learn From the Neocortex,” was published today. In it, he breaks down the three key features of biological intelligence that are essential for building intelligent machines. One of these features involves the sensorimotor integration work that Numenta has been focused on. While you’ve hopefully seen the video of Matt and Jeff talking about the sensorimotor research, aka the “HTM Chat with Jeff Hawkins,” this is the first piece in which he has written about it.

You can read Jeff’s article here.
I also interviewed Jeff about the article, so if you’d like to hear him talk about it, you can listen here.


Hacker News has a discussion of the article at https://news.ycombinator.com/item?id=14473077.


This article is fantastic and inspiring. I’m really glad that Numenta exists. It’s weird that there isn’t more competition in the biologically accurate intelligence-duplication field. And I completely agree that machine intelligence is the route which will actually end up solving lots of humanity’s problems. So, thanks to everyone who’s dedicating themselves to answering these questions. And thanks to Numenta for sharing so much along the way.


Is there any discussion [elsewhere too] of how the brain deals with the multiple models it formulates (with some data to back it up, not just a philosophical argument)? What about inconsistent [dual-belief] models? Does NuPIC keep two models around, along with a weighting of which one is more likely? I remember looking at some of the (HotGym) prediction examples, which had multiple predictions with a percentage factor associated with each. Does that mean that in one scenario the prediction was one value and in another scenario it was a different value? Is there a cut-off below which models are no longer considered, or do they just linger in the noise? Are there good examples of formulations that sink below a noticeable level but come back, or do those come from a fresh new formulation? Does HTM address higher-level concepts, or am I just guessing here?
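For context on the “percentage factor”: here is a minimal sketch of how a NuPIC OPF model (HotGym-style) exposes multiple concurrent predictions. This is an illustration, not canonical usage — `load_model_params()` is a hypothetical helper standing in for the large HotGym parameter dict, and the import path may differ between NuPIC versions.

```python
# Hedged sketch, assuming the standard HotGym OPF setup.
import datetime

from nupic.frameworks.opf.modelfactory import ModelFactory  # path varies by NuPIC version

MODEL_PARAMS = load_model_params()  # hypothetical helper: returns the HotGym params

model = ModelFactory.create(MODEL_PARAMS)
model.enableInference({"predictedField": "consumption"})

result = model.run({
    "timestamp": datetime.datetime(2010, 7, 2, 0, 0),
    "consumption": 5.3,
})

# The model keeps a distribution of candidates, not a single winner.
candidates = result.inferences["multiStepPredictions"][1]  # {value: likelihood}
for value, likelihood in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print("predicted %s with likelihood %.3f" % (value, likelihood))

# Unlikely candidates are not hard-deleted; their learned likelihoods just
# shrink toward the noise floor unless later inputs revive them.
```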
I was listening to Ben Goertzel talk about OpenCog and was wondering whether it is possible to do predictions on his formulations via NuPIC. Maybe this is just too much of an integration nightmare. The first problem would be how to assign values to Goertzel’s monitored quantities, and whether those quantities are necessary and sufficient.
Years ago, I used to work on Appel’s vector hidden-line removal algorithm. I would formulate a quantity representing the current algorithmic estimate of the truthfulness of the values being carried around (an idea borrowed from fuzzy logic). I initially seeded the values with my best guess and hoped that the algorithm would converge to the correct estimates. BTW, this approach goes against my grad math training, which strongly discouraged using anything that did not have a substantiated basis. The downside is that if you don’t make certain assumptions, you spend too much time trying to prove things that are too hard instead of focusing on analyzing the output of your predictions.
I have read Michio Kaku’s “The Future of the Mind,” but was turned off by the unsubstantiated speculation. Maybe it was a fun read, but it became annoying at a certain point. Even listening to Jeff talk about Kurzweil is funny. I hope I do not take that route.


I don’t have data to back this up, but the union property of sparse distributed representations seems like an obvious candidate for multiple simultaneous models.
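For concreteness, here is a minimal sketch of that union property in plain Python/NumPy (the sizes are typical HTM choices, nothing from a specific Numenta codebase): a single set of active bits can carry several SDRs at once, and sparsity keeps their memberships separable by overlap.

```python
import numpy as np

N, BITS = 2048, 40  # typical HTM sizes: 2048 bits, ~2% active

def random_sdr(rng):
    """Indices of the active bits of a random sparse representation."""
    return set(rng.choice(N, BITS, replace=False))

rng = np.random.RandomState(42)
models = {name: random_sdr(rng) for name in ("melody_A", "melody_B", "melody_C")}

# Union property: one set of active bits can hold several SDRs at once.
union = models["melody_A"] | models["melody_B"]

# Sparsity keeps the members recoverable: a stored SDR overlaps the union
# almost completely, while an unrelated SDR barely overlaps it at all.
for name, sdr in models.items():
    overlap = len(sdr & union)
    verdict = "in union" if overlap > 0.9 * BITS else "not in union"
    print("%s: overlap %d/%d -> %s" % (name, overlap, BITS, verdict))
```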

I believe that there are multiple concurrent representations. I was mostly interested in knowing how they get resolved, at least within the current model. Or maybe they do not? What happens if two predictions differ, and what does that mean? Does the HTM hedge its bets in such a situation: choose an option somewhere in the middle [doubtful], choose one (maybe by additional criteria, e.g. instinct or experience) [seems most plausible], or make no prediction [possibly]?
BTW, this was something I did like in “The Future of the Mind”: the discussion of how the left brain and right brain see the world differently, and the question of how conflicts like that are resolved.

We have a definite view on the answer to this question. We have described it in detail for the HTM sequence memory, but we also have a more general explanation that I don’t believe we have discussed much outside of Numenta.

First, HTM sequence memory
The HTM sequence memory is just one layer of cells. In the HTM sequence memory, a single input is usually ambiguous, like the first note in a song. So the layer will form a union of predictions based on a single input. As subsequent inputs arrive over time the union of predictions gets narrowed down to be consistent with the recent sequence of inputs. In the case of a melody, usually just a few notes are sufficient to remove all ambiguity. The mechanism for this has been well documented. Basically, each new input activates a set of mini-columns and only those predictions consistent with the input/active mini-columns are kept, all others are eliminated.
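A toy sketch of that narrowing (my illustration, not the actual NuPIC code): treat each stored melody as a candidate in the union, and keep only the candidates whose predicted next note matches each new input.

```python
# Toy model: three stored melodies; the union of predictions is the set of
# melodies still consistent with every note heard so far.
melodies = {
    "twinkle": ["C", "C", "G", "G", "A"],
    "ode":     ["E", "E", "F", "G", "G"],
    "scale":   ["C", "D", "E", "F", "G"],
}

candidates = set(melodies)                  # at first, everything is predicted
for t, note in enumerate(["C", "C", "G"]):  # notes actually heard
    # Keep only the candidates whose cells sit in the newly active
    # mini-columns, i.e. whose stored sequence matches the input so far.
    candidates = {m for m in candidates if melodies[m][t] == note}
    print("after note %r the union is %s" % (note, sorted(candidates)))
# Two notes already leave only "twinkle"; the third note confirms it.
```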

Now for the more general case
(it will be easier to follow this if you are familiar with our new sensory-motor theory)
Each layer of cells in a cortical column represents something related, but different. E.g. L6a represents a location on an object, L4 represents a feature at a location on an object, and L2 represents the entire object, a pool of location/feature pairs. There may be uncertainty in each of these layers. The layers are linked associatively, e.g. L6a ↔ L4 ↔ L3 ↔ L2, etc. Uncertainty in any layer results in a union of SDRs, and will lead to a union in an associatively linked layer. However, if any layer is able to resolve some of its uncertainty, that will also propagate to other associatively linked layers, reducing uncertainty there. Each layer has its own method for eliminating uncertainty. L4 gets new sensory inputs over time and, like the HTM sequence memory, can use these inputs to eliminate uncertainty. L2 makes associative links to L2 in other columns and uses these to eliminate uncertainty. L6a gets motor-related input that eliminates uncertainty about location. As a sensory organ moves, the system gets new sensory inputs and new movement inputs. Each new movement eliminates uncertainty in one or more layers and the changes propagate through all the layers.
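A toy constraint-propagation sketch of that associative linking (my own illustration with made-up objects, not Numenta code; the motor/L6a pathway is collapsed into the sensed pairs for brevity): each layer holds a union of hypotheses, and a new input in one layer prunes the linked layer’s union by intersection.

```python
# Hypothetical object models: object -> {location: feature}.
objects = {
    "cup":  {"rim": "edge", "handle": "curve", "side": "smooth"},
    "ball": {"top": "curve", "side": "smooth"},
}

def sense(feature):
    """L4-style union: every (object, location) pair matching the feature."""
    return {(obj, loc) for obj, feats in objects.items()
                       for loc, feat in feats.items() if feat == feature}

l2_candidates = set(objects)  # L2 starts as a union over all known objects

for feature in ["smooth", "curve", "edge"]:  # successive touches
    l4_union = sense(feature)
    # L4 -> L2 associative link: an object survives only if some location
    # on it is consistent with the new input; resolution propagates back.
    l2_candidates &= {obj for obj, _ in l4_union}
    print("felt %r, objects still possible: %s" % (feature, sorted(l2_candidates)))
# Only the cup has an edge, so the third touch removes the last uncertainty.
```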

A song might take 2, 3, or 4 notes to completely resolve uncertainty as to what melody is playing. Similarly, if you reach into a box and touch something with your fingers, it might take several touches before you know what you are touching. Vision can also take multiple saccades to recognize something. However, in many cases you can recognize an image in a single impression. In this case each column may be uncertain, but there are so many columns simultaneously sensing the object that, via horizontal connections between columns (in L2), the uncertainty can be eliminated without a second visual fixation.
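The voting idea in miniature (again my illustration): each column proposes a union of objects consistent with its own input patch, and intersecting those unions across columns can resolve the object in one fixation.

```python
# Each column proposes a union of objects consistent with its own patch;
# lateral L2-L2 links amount to intersecting those unions.
column_guesses = [
    {"cup", "ball", "box"},  # column 1 is very uncertain
    {"cup", "box"},          # column 2 has ruled out the ball
    {"cup", "ball"},         # column 3 has ruled out the box
]

consensus = set.intersection(*column_guesses)
print(consensus)  # {'cup'} -- resolved without a second fixation
```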


I think that uncertainty is resolved using the feedback from upper levels. In fact, when the feedback is “strong” enough, you don’t even need the input to identify the incoming data [1].

[1] R. M. Warren, “Perceptual restoration of missing speech sounds,” Science, vol. 167, no. 3917, pp. 392–393, 1970.

That too. The cells in each layer are pyramidal neurons. Pyramidal neurons have three synaptic integration zones: proximal synapses, basal distal synapses, and apical synapses. In essence, each region has three different inputs that project to the three synaptic integration zones. In the case of L2/3 cells, they receive input from L4 onto proximal synapses, input from adjacent columns onto basal distal synapses, and feedback from a higher region onto apical synapses. I failed to mention the latter, but any combination can work to eliminate ambiguity.
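A toy sketch of those three zones (my simplification of the HTM neuron rules, not Numenta’s implementation): either distal zone can put the cell into a predictive state, but only proximal input fires it.

```python
class PyramidalCell(object):
    """Toy HTM-style neuron: three zones as sets of presynaptic cell ids."""

    def __init__(self, proximal, basal, apical, threshold=10):
        self.proximal = proximal
        self.basal = basal      # lateral context, e.g. from adjacent columns
        self.apical = apical    # feedback from a higher region
        self.threshold = threshold  # active synapses needed on a zone

    def predicted(self, basal_input, apical_input):
        # Either distal zone crossing threshold depolarizes (predicts) the
        # cell; any combination of the two also works.
        return (len(self.basal & basal_input) >= self.threshold or
                len(self.apical & apical_input) >= self.threshold)

    def active(self, proximal_input):
        # Only proximal (feedforward) input can actually fire the cell.
        return len(self.proximal & proximal_input) >= self.threshold


cell = PyramidalCell(proximal=set(range(20)), basal=set(range(100, 120)),
                     apical=set(range(200, 220)))
print(cell.predicted(set(range(100, 115)), set()))  # True: basal context predicts
print(cell.active(set(range(12))))                  # True: feedforward drive fires
```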

Intuitively, feedback should use quite stable synapses. Nevertheless, according to [1], LTD is stronger in apical dendrites. (I couldn’t find a similar paper beyond CA1…)

[1] B. Ramachandran, S. Ahmed, and C. Dean, “Long-term depression is differentially expressed in distinct lamina of hippocampal CA1 dendrites,” Front. Cell. Neurosci., vol. 9, pp. 1–10, 2015.