Numenta Research Meeting - April 26, 2019


I’m right there with you, Marcus. Over the past couple of weeks, I’ve started to think in almost exactly the same way as what you presented in the meeting. I’m just surprised that Jeff couldn’t see it as well. Or maybe he was starting to get there toward the end of the meeting when he mentioned voting between the columns.

The concept of a cup is a very high-level construction. At the lowest level there are simply persistent features that have observed relationships to other features, and potentially predictable behaviors with respect to one another and with respect to the actions/movements of the observer. It is these low-level features and their orientations with respect to a specific sensor that each column needs to represent. Each column then builds up a representation of the expected behavior of that feature regardless of what object it is attached to. We have a lifetime of experience with observing these features (in all of our sensory modalities). Objects with persistent properties and predictable behaviors can then be inferred and eventually recognized by the combination of features that are observed over time, or are observed simultaneously from multiple sensors.
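This eliminative narrowing can be sketched in a few lines (a toy illustration with made-up object and feature names, not Numenta's actual SDR machinery): a column starts out considering every object it knows and discards candidates that are inconsistent with each (feature, location) observation it makes over time.

```python
# Toy sketch: a single column narrowing its candidate-object set
# as it senses (feature, relative-location) pairs over time.
# Object and feature names are hypothetical.

class Column:
    def __init__(self, known_objects):
        # known_objects: {object_name: set of (feature, location) pairs}
        self.known_objects = known_objects
        self.candidates = set(known_objects)  # start with every known object

    def sense(self, feature, location):
        # Keep only objects consistent with this observation.
        self.candidates = {
            name for name in self.candidates
            if (feature, location) in self.known_objects[name]
        }
        return self.candidates

objects = {
    "cup":  {("rim", "top"), ("handle", "side"), ("flat", "bottom")},
    "bowl": {("rim", "top"), ("flat", "bottom")},
    "can":  {("rim", "top"), ("rim", "bottom"), ("flat", "side")},
}

col = Column(objects)
col.sense("rim", "top")      # still ambiguous: cup, bowl, can
col.sense("handle", "side")  # only the cup has a handle on its side
print(col.candidates)        # {'cup'}
```

A single touch is ambiguous; identity only emerges from the sequence of observations, which is the point being made above.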

In the same way that we can classify groups of objects by their similarity (in appearance and/or behavior), columns can also learn to classify groups of features by their similarity. That means it is no longer necessary to observe every object and all of its features from all possible orientations and/or distances. A column should be able to generalize the appearance and behavior of the properties of an unknown object from the previously learned appearance and behavior of other objects with similar features.
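One way to picture this kind of generalization (a hypothetical sketch; the property sets and behavior labels are invented for illustration) is nearest-neighbor matching over feature properties: a never-before-seen feature inherits the expected behavior of the most similar known feature.

```python
# Toy sketch: generalize the behavior of an unseen feature from the
# most similar known feature, using Jaccard similarity of property sets.
# All property/behavior labels are hypothetical.

def jaccard(a, b):
    # Similarity of two property sets: |intersection| / |union|
    return len(a & b) / len(a | b)

# Known features: observed properties and a learned behavior.
known = {
    "cup_handle": ({"curved", "rigid", "graspable"}, "lift"),
    "door_knob":  ({"round", "rigid", "graspable"}, "turn"),
}

def predict_behavior(properties):
    # Pick the known feature whose properties overlap most.
    best = max(known.values(), key=lambda kv: jaccard(properties, kv[0]))
    return best[1]

# A never-seen mug handle shares most properties with cup_handle:
print(predict_behavior({"curved", "rigid", "graspable", "ceramic"}))  # lift
```

The mug handle was never observed, yet its expected behavior follows from previously learned, similar features, which is the claim in the paragraph above.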


I am also in agreement with you, @mrcslws.

A collection of low-level local features makes a great deal of sense to me. Local voting and sequence memory come into play at this level, which makes sense, since this is all that can be seen locally.

Instead of thinking of it as a CAD system with rigid spatial relationships of geometry, think of it as a loose cluster of features: a palimpsest of these features addressed sequentially. After all, this is what TM is all about. The lateral voting ties these fragments together into local features.
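The lateral voting can be pictured as set intersection across columns (again a toy sketch, not the real SDR mechanics): each column proposes the objects consistent with its own local fragment, and only objects every column agrees on survive the vote.

```python
# Toy sketch of lateral voting: each column's candidate set is
# intersected with its neighbors', so only objects consistent with
# every local observation survive. Object names are hypothetical.

def lateral_vote(candidate_sets):
    # Keep only objects that every column considers possible.
    return set.intersection(*candidate_sets)

# Three columns each sensing a different fragment of the same object:
col_a = {"cup", "bowl", "can"}   # felt a rim
col_b = {"cup", "pitcher"}       # felt a handle
col_c = {"cup", "bowl"}          # felt a flat bottom

print(lateral_vote([col_a, col_b, col_c]))  # {'cup'}
```

No single column could name the object from its fragment alone; the vote ties the fragments together, as described above.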

We know that the brain has WHAT/WHERE streams. I see whole object recognition happening at the higher level of the WHAT/WHERE streams and lobe hubs.

I suggest that you think of the object as a cluster of micro-features (WHAT) and the mental manipulation (WHERE) as done in separate maps. Note that this addresses Jeff’s (?) point that you can see something and then close your eyes and recognize it by touch or sound. How would local learning be able to do this?

While you are at it, pull the grid cells out of the mini-column and push them up to the column-assembly level in the lobe hubs. Grid activity has been observed in the hubs, but I have not seen it elsewhere. Please note that the lobe hubs are also the places where grid-type patterns are observed outside the HC/EC complex.

I see this as the lower levels using the TBT model to assemble these micro-features, which are then further assembled into objects at higher levels.

I will add that I am a firm believer that the brain does the same thing everywhere. Jeff has utterly sold me on this and I see it as an immutable principle. This applies to the eye: the saccades are building up a palimpsest of features as your eyes dart around an object. This object can be the room that you are in. Somewhere in the brain, you are assembling a collection of objects that make up the room you are in and the relations of the objects within it. It is assumed that this happens in the hippocampus, but there is ample reason to believe that this is not strictly true. Patient HM did not have deficits suggesting that his sense of space vanished; only his memory of episodes did. I place this in the WHAT/WHERE conjunction in the temporal lobe. This is the point to address the challenge that was posed.

I think that the laser focus on columns has trapped Numenta into trying to solve every problem at the column level and ignoring the work of higher-level assemblies of columns and maps working as ensembles. I would frame this as thinking about computers with a laser focus on transistors. At some point, you could try to make the transistor be the whole computer; after all, there are transistors inside. But, as useful as transistors are, they cannot do the whole thing by themselves. Yes, they could have some computing capability, but the only real way to move forward is to think of how bunches of transistors work together.

I suggest that it is time to take off the training wheels and see how the column fits into the larger computations. There will be considerations that inform the functions at the mini-column level, and the mini-columns will put constraints on the functions at the map level. Considering both levels will push the development of both at a faster pace.