Composite objects in HTM theory

I wrote a blog post about Numenta (among other topics).
I have two questions:

Connectionist theories often have the drawback that they don’t represent hierarchical concepts or composite objects.

Before the sensory-motor work you did recently, I would have thought that HTM theory could not represent such concepts. Reading your material, I understood that your inputs could come from an encoder for a date, a scalar, or a GPS coordinate, but it seemed you could not represent a composite object.
But now that you can represent any object as a kind of 3D CAD representation, maybe composite objects (objects that are composed of simpler objects) can be represented, as well as concepts that are made up of simpler concepts.

There are two requirements for composite objects:

  1. You would have to be able to decompose an SDR that represents a chair into several SDRs, one of which might represent a chair leg (for example). Likewise, you should be able to decompose an SDR that represents a sentence into several SDRs, one for each word.
  2. SDRs of similar objects should be more similar than SDRs of very different objects. From what I can see in your material, that is true, since similar objects share more active columns (see the sketch after this list).
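
To make requirement 2 concrete, here is a minimal sketch (my own illustration, not Numenta's code) of SDRs as sets of active bit indices, with similarity measured by overlap. The bit values are invented for the example:

```python
# SDRs modeled as sets of active bit indices (hypothetical values).
def overlap(sdr_a: set, sdr_b: set) -> int:
    """Similarity between two SDRs = number of shared active bits."""
    return len(sdr_a & sdr_b)

# Invented SDRs with 10 active bits; a real HTM SDR might be 40 of 2048.
chair      = {7, 15, 88, 120, 512, 760, 901, 1033, 1500, 1999}
stool      = {7, 15, 88, 130, 512, 760, 905, 1033, 1500, 1998}
coffee_cup = {3, 44, 212, 389, 417, 655, 702, 1204, 1430, 1786}

# Similar objects share more active bits than dissimilar ones.
assert overlap(chair, stool) > overlap(chair, coffee_cup)
```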

The reason I ask is that my article contrasts two research approaches, yours and that of a group at the University of Waterloo, and this is one possible point of comparison.

Recently I’ve been reading “free-energy-minimization” theories of how the brain works. Basically, these theories deal with the issue of “unexpected” input, or “surprise”. Some of them say the brain is a hierarchy of predictive levels: if there is a surprise at any level (for instance, you listen to a melody and hear an unexpected note), that surprise is forwarded up to a higher level to see if it can be explained away. Perception, in this theory, is supposed to be a balance between prediction and sensory input. How would HTM theory fit into this template?
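
As a toy illustration of that “surprise moves up” idea (my own sketch, not any published free-energy model): each level tries to explain its input, and anything it cannot predict is passed to the level above:

```python
class Level:
    """One predictive level with the set of inputs it can explain."""
    def __init__(self, name: str, expected: set):
        self.name = name
        self.expected = expected

def perceive(levels: list, inp: str) -> str:
    # Pass the input up the hierarchy until some level predicted it.
    for level in levels:
        if inp in level.expected:
            return f"explained at the {level.name}"
    return "surprise reached the top level: revise the model"

melody = [
    Level("note level", {"C", "E", "G"}),  # notes the current chord predicts
    Level("phrase level", {"A"}),          # a note the larger phrase allows
]

print(perceive(melody, "E"))   # explained at the note level
print(perceive(melody, "A"))   # explained at the phrase level
print(perceive(melody, "F#"))  # surprise reached the top level
```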
Any help is appreciated.


Hi Craig,
Our new work on sensory-motor inference does move closer to your goal of “composite” objects. Basically, in our model, objects are defined as a set of features at different locations on the object. The “features” are just SDRs and could in theory represent anything, including another object.
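
A minimal sketch of that idea (illustrative data structures, not Numenta's actual code): an object is a mapping from locations to feature SDRs, and because a feature is just an SDR, it could in principle be the SDR of another learned object:

```python
from typing import Dict, FrozenSet, Tuple

SDR = FrozenSet[int]        # an SDR as a set of active bit indices
Location = Tuple[int, int]  # toy allocentric location (grid coordinates)
Obj = Dict[Location, SDR]   # an object = features at locations

# Invented example: a coffee cup as three sensed features.
coffee_cup: Obj = {
    (0, 0): frozenset({3, 41, 977}),     # e.g. "curved surface"
    (0, 1): frozenset({3, 52, 1204}),    # e.g. "rim edge"
    (1, 0): frozenset({88, 641, 1500}),  # e.g. "handle"
}

# Composition: a chair whose features include the SDR of a learned
# "chair leg" object, not just raw sensory features.
chair_leg_sdr = frozenset({9, 276, 1742})
chair: Obj = {
    (0, 0): chair_leg_sdr,              # sub-object used as a feature
    (0, 3): frozenset({44, 390, 880}),  # e.g. "seat surface"
}
```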

So far we have been modeling one or more columns in a single region, that is, no hierarchy. In these models the only “features” that can be associated with a location are pure sensory features. I think we would need at least two levels in the hierarchy to achieve composite objects as you envision them. But the mechanism supports compositional objects.

HTM is a theory of how the neocortex works. We view biological details as constraints. If we can solve a problem in a way that can’t be understood at a detailed biological level then we don’t include that solution in HTM. HTM doesn’t model every biological detail, but we only leave something out if we think we understand what that detail does and determine it isn’t essential for the information processing of the cortex. This adherence to biology is, as far as I know, unique to HTM.

The brain, and HTM theory, builds a predictive model of the world. Because HTMs are always predicting the next input, an unexpected input is noticed when it occurs. In our models an unexpected input causes many more cells to become briefly active. This has two effects.

The first is that the burst of activity causes multiple new hypotheses to be generated. In HTM theory, multiple hypotheses are represented by a union of SDRs (sparse distributed representations) in the same set of neurons, so HTMs can propagate multiple predictions forward in time in parallel. Most machine learning algorithms don’t have anything like this.

The second effect is that the burst of activity can cause the unexpected activity to travel up the hierarchy. In the brain this is believed to be one of the functions of the thalamus. As we are not yet modeling hierarchy, we don’t have this in our models, but the idea is established in neuroscience and we expect to add it eventually. In this way HTM is similar to the free-energy-minimization theories you mention. BTW, I wrote about this effect of the unexpected moving up the hierarchy in my book On Intelligence.
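
For the union-of-SDRs idea, a minimal sketch (my own illustration; the SDR values and threshold are invented): several predicted SDRs are held at once as the union of their active cells, and an input counts as unexpected when it falls mostly outside that union:

```python
def union(*sdrs: frozenset) -> frozenset:
    """Hold several predictions at once as one set of active cells."""
    out = frozenset()
    for s in sdrs:
        out |= s
    return out

def is_expected(inp: frozenset, predicted: frozenset,
                thresh: float = 0.9) -> bool:
    # Expected if (nearly) all of the input's active cells were predicted.
    return len(inp & predicted) >= thresh * len(inp)

# Invented SDRs for three notes that could plausibly come next.
note_C = frozenset({10, 200, 455, 1800})
note_E = frozenset({11, 205, 455, 1790})
note_G = frozenset({12, 210, 460, 1795})
predicted = union(note_C, note_E, note_G)

assert is_expected(note_E, predicted)  # any predicted note is fine
assert not is_expected(frozenset({1, 2, 3, 4}), predicted)  # surprise
```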

Hope that helps.
Jeff


Thanks. I should read “On Intelligence”. In fact, I’ll order it now.
I think the “Neural Engineering Framework” theory, which I also discussed in my post, is quite true to the brain: they test their models against actual neural spiking patterns, and their predictions are usually right. In one case I read about, they showed that what people thought was one firing pattern was actually two, and they were right about that too. I paste from their website below:

Nengo is a graphical and scripting based software package for simulating large-scale neural systems. The book How to build a brain, which includes Nengo tutorials, is now available. This website also has additional information on the book.

To use Nengo, you define groups of neurons in terms of what they represent, and then form connections between neural groups in terms of what computation should be performed on those representations. Nengo then uses the Neural Engineering Framework (NEF) to solve for the appropriate synaptic connection weights to achieve this desired computation. Nengo also supports various kinds of learning. Nengo helps make detailed spiking neuron models that implement complex high-level cognitive algorithms.
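
For readers unfamiliar with Nengo, here is a minimal example of the workflow that paste describes. It uses Nengo's standard API, but the particular function computed (x squared) is just my choice for illustration:

```python
import nengo
import numpy as np

model = nengo.Network(label="NEF demo")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # represents x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)     # represents x**2

    nengo.Connection(stim, a)
    # The NEF solves for connection weights approximating this function.
    nengo.Connection(a, b, function=lambda x: x ** 2)

    probe = nengo.Probe(b, synapse=0.01)  # record the decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] now holds a spiking-neuron estimate of sin(2*pi*t)**2.
```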

Among other things, Nengo has been used to implement motor control, visual attention, serial recall, action selection, working memory, attractor networks, inductive reasoning, path integration, and planning with problem solving (see the model archives and publications for details).