HLC 2020-12-01: Prediction of novelty

In our discussion yesterday (HTM Learning Circle, HLC), @Bitking raised the issue of HTM not being able to predict previously unseen inputs. On the input side, the HTM needs to learn features, and in theory it can then compose those features to recognize objects. It makes sense to me that novelty in the input direction is a trigger for learning. In the output direction (behavior), the neocortex seems able to generate novelty. If the output were based purely on recalling inputs, then novelty would perhaps not occur.

From the BAMI document “The early work on HTM focuses on these kinds of problems, without a behavioral component. Ultimately, to realize the full potential of HTM, behavior needs to be incorporated fully.” … “HTM starts with the assumption that everything the neocortex does is based on memory and recall of sequences of patterns.”

Is there theoretical work on the generation of behavior by Numenta?

We can in theory “reverse” the HTM algorithm so that, after learning, it can generate outputs: e.g. the category ‘one’ can be used to generate a prototypical ‘1’ SDR, and that can be decoded into an image. But this seems limited (lacking the ability to generate novelty). For example, if it had never seen a bold ‘1’ digit but had learned about bold letters, how could it generate a bold ‘1’ without ever having experienced one?
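To make the “reverse” idea concrete, here is a minimal sketch in plain NumPy (not any Numenta API) of one way a prototypical SDR for a learned category could be formed and then “decoded” by nearest-neighbour lookup over stored training examples. The sparsity value, the bit-counting rule, and the lookup decoder are all illustrative assumptions; note that a lookup decoder can only return something it has already seen, which is exactly the limitation described above.

```python
import numpy as np

def prototype_sdr(sdrs, sparsity=0.02):
    """Form a prototypical SDR for a category by keeping the bits that
    were active most often across the category's training SDRs."""
    counts = np.sum(sdrs, axis=0)                    # per-bit activation counts
    n_active = max(1, int(sparsity * sdrs.shape[1]))
    proto = np.zeros(sdrs.shape[1], dtype=np.uint8)
    proto[np.argsort(counts)[-n_active:]] = 1        # keep the most frequent bits
    return proto

def decode_to_image(sdr, stored_sdrs, stored_images):
    """'Decode' an SDR by returning the training image whose SDR overlaps
    it the most -- a lookup, so nothing novel can come out of it."""
    overlaps = (stored_sdrs & sdr).sum(axis=1)       # overlap score per stored SDR
    return stored_images[int(np.argmax(overlaps))]

# toy usage: 20 random 'one' SDRs over 1024 bits, plus their source images
rng = np.random.default_rng(0)
ones_sdrs = (rng.random((20, 1024)) < 0.02).astype(np.uint8)
ones_imgs = rng.random((20, 28, 28))
proto = prototype_sdr(ones_sdrs)
img = decode_to_image(proto, ones_sdrs, ones_imgs)
```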

Perhaps this comes down to modulating the output sequence in a particular way, e.g. taking the prototypical ‘1’ and transforming it through a ‘bolding’ function. In that case it would need to learn the concept of bolding as independent from letters. Maybe that is possible: if it already knew unbolded letters, then bolding could be learned like a new category. But I do not see how HTM could apply bolding in a generic way - or maybe it means the bold ‘1’ would be bolded by reusing the bold features learned from letters.

Taking this full circle - back to the input path. If the category of bolding has been learnt on letters, then perhaps the feature of bolding could be identified for the number ‘1’ without that combination ever having been experienced. The HTM could then generate the new category “bold number” - would that count as predicting a novel input?
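This is not how any current HTM implementation works, but purely as a sketch of what “applying a property” might mean at the SDR level, here is a toy set-operation version. It assumes (hypothetically) that the encoder happens to express boldness as a roughly consistent extra set of active bits shared across bold letters relative to their plain versions; real encoders give no such guarantee.

```python
import numpy as np

def bolding_bits(plain_sdrs, bold_sdrs):
    """Bits that are active in every bold letter but in none of the
    corresponding plain letters -- a toy stand-in for a 'bolding' feature."""
    extra = (bold_sdrs == 1) & (plain_sdrs == 0)     # bits added by bolding, per letter
    return np.all(extra, axis=0).astype(np.uint8)    # bits added for *every* letter

def apply_property(base_sdr, property_bits):
    """Union the property bits onto a base pattern."""
    return np.maximum(base_sdr, property_bits)

# tiny toy example over 16 bits (hypothetical encodings, for illustration only)
plain = np.array([[1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],
                  [0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0]], dtype=np.uint8)
bold = plain.copy()
bold[:, 12:14] = 1                                   # bolding adds bits 12-13 to both letters
one = np.array([0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0], dtype=np.uint8)
bold_one_guess = apply_property(one, bolding_bits(plain, bold))
```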

Perhaps this comes down to whether/how HTM deals with composition.


I think we were referring to novelty in the sense of an object/property and/or behavior that has not been previously experienced. If I’ve seen text characters before, and I’ve seen the allowed behavior of making them bold face (corresponding to increasing the thickness of the lines of the character), then it is not necessarily novel for me to be given a previously unseen character and then told to imagine the result of applying the behavior of making that character bold face.

The current working theory at Numenta and on this forum is that such properties and behaviors can be traversed in the same way we traverse a physical space. I don’t have to visit every single point in a room to know that I can traverse to or through any point that is not already occupied by another object.

On the other hand, I have a very difficult time imagining what it would feel like for my body to dissolve into a liquid and be poured into a bucket. That would be a novel sensation.


I’m not sure you need such an extreme definition of novelty - if novelty can only refer to an experience that you can’t imagine, then by definition you can never generate novelty. People demonstrate creativity, and I think novelty can be part of creativity. We could distinguish novelty in the sense of a new output from novelty in the sense of a new input. Imagination could be seen as the feedback loop - so if we can generate novel output then we can generate novel input.

I agree that, for an adult, identifying a number as bold for the first time does not seem very novel. For a young child it could be a major achievement. For HTM this already seems a very high bar. So maybe we can explore a “simple” analogy of novelty - the boldness property, or something you prefer.

Could we explore the idea of a boldness property and imagine how HTM would model this? How would an HTM extract the property of boldness and be able to apply this to generate the bold ‘1’ it has not previously learnt? The concept of “applying a property” does not seem to fit with the HTM theory - but I am far from understanding the theory.

Regarding the spatial analogy, it makes sense that a spatial analogy would be one of the basic analogies the neocortex uses; another would be the body (we’ve had one of those since before the neocortex evolved). Lakoff and Johnson pushed the idea of embodied metaphor in the ’80s.

Perhaps it is asking too much of HTM at this stage to model boldness of a character - that is more abstract. It seems the focus at Numenta is on 3D object recognition - right?

Perhaps bolding is just a concept of scaling an object in a particular way. The transform might map to some transform the neocortex has generalized for navigating 3D space. Is this the type of approach that best fits with this forum?

Pardon me for asking, but my understanding is that HTM models a proposed brain mechanism whereby a sensory input in some encoding is converted into a representation in some standard form (SDR). Then a set or sequence of SDRs is either learned de novo, or is recognised as the prefix of a known sequence and used to predict the next. If the prediction is correct the sequence is reinforced; if not, the novel sequence is learned.
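As a deliberately over-simplified picture of that loop, here is a first-order sketch in plain Python. A real temporal memory keeps per-cell dendritic context and distributed permanences; a dictionary mapping “previous SDR → predicted next SDR” is only a cartoon of predict, compare, then reinforce-or-learn.

```python
# Cartoon of the predict / compare / learn loop described above.
# Each SDR is represented here as a frozenset of active bit indices.
def run_sequence(sequence, memory=None):
    memory = {} if memory is None else memory
    prev = None
    for sdr in sequence:
        if prev is not None:
            predicted = memory.get(prev, frozenset())
            if predicted == sdr:
                pass                      # correct prediction: reinforce (a no-op here)
            else:
                memory[prev] = sdr        # surprise: learn the novel transition
        prev = sdr
    return memory

mem = run_sequence([frozenset({1, 5}), frozenset({2, 7}), frozenset({3, 9})])
```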

I don’t see how concepts such as ‘number’ or ‘bolded’ could exist as sensory encodings. Surely they would have to be SDRs that were generated by algorithms operating on other SDRs? And if I’m not mistaken, those algorithms are currently unknown?

I am a beginner here; my understanding is that the SDR can represent a collection of features. The system can then do object recognition based on the features. In unsupervised learning it will identify features based on the encoding, so if numbers and letters had specific encodings it could learn that feature. Another (more realistic) way to do this is to use supervised learning and teach the HTM how to classify the SDR - so it learns a labelled category like number vs letter. A feature like bolded could be learnt for the category of letters. It is not clear to me how that learning would allow for the recognition of a “bold number” in that case.
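For the supervised route, a minimal sketch is shown below. It assumes the HTM’s output SDRs have already been collected as binary vectors (here replaced by random stand-ins), and the linear SVM simply stands in for whatever classifier sits on top of the HTM output; the shapes and labels are illustrative only.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stand-in data: X holds HTM output SDRs flattened to binary vectors
# (one row per input) and y holds human-provided labels.
rng = np.random.default_rng(1)
X = (rng.random((200, 2048)) < 0.02).astype(np.float32)
y = np.array(["number", "letter"] * 100)

clf = LinearSVC().fit(X, y)        # supervised classifier over SDR bits
print(clf.predict(X[:3]))          # labels for the first three SDRs
```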

I would encourage you to expand your view of the scope of SDRs with regard to what and how they encode things.
The examples that are usually shown in HTM are toys - one cell and a dozen or so dendrites. The focus is on the mechanics of SDR formation, using small, easily grasped models. With a handful of cells, we see toy models like hot gym or navigation monitoring.

The brain is bigger. Much bigger.

At any given time there are millions of cells working with dozens of SDRs each. Depending on where the cells are in a map in the hierarchy, the cells are participating either in Thousand Brains voting or in forming Calvin tiles. All at the same time.

The possible symbol-coding space is staggering.

As far as higher-level concepts go: if not in cortical SDRs, then where would these concepts be represented?

The brain is immensely complicated. In working out how collections of shapes that we come to know as named letters, and the relative thickness of those letter shapes, combine with general spatial learning to build the concept of “bold,” there are a lot of moving parts to consider. But we know that we do it, so it is really a matter of working out how we do it with cortical columns…

It is not too much of a stretch to see how sensory streams register in the sensory cortex to form SDRs. It is even fairly easy to see how the streams up the hierarchy could be parsed in some way to form object recognition and spatial relationships. It takes a little more work to realize that the subcortical structures also project animal sensibilities to the cortex, to be processed like external sensations. These sensations include all the animal things like fear, desire, social cues, hunger, thirst, inquisitiveness, exhaustion, and the drive to initiate actions. This last bit is the key to starting motor actions in the cortex.

The collection of objects, spatial relationships, and our very intimate relationship to our own bodies are all brought together in the temporal region, to be acted on in the loop of consciousness.

This is the clearest exposition I have ever seen of how we fit external sensations into our bodies to form higher-level semantic constructs and concepts; it’s even short and to the point:
How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics - Friedemann Pulvermüller

Much of my posting on this forum has been to expose various aspects of this larger picture. I don’t feel like repeating it all here, but if you are interested I can point you to the relevant portions.


I don’t think people are assuming this in HLC. We have a document showing implementations by participants, for example Etaler with 32 × 256 = 8192 cells.

Certainly the brain is much bigger, and there are functions that require many mini-columns/macro-columns/regions. This particular thread is not about how to realize an AGI. I’d like to limit the question to whether HTM can predict previously unseen patterns if it is configured in a certain way.

The example of classifying bolded characters is not to imply the HTM has a concept of bolded characters like a human. It is meant to be a “toy” example to see how/if a relatively simple HTM could learn a category and then use that category to classify previously unseen input. This does not need to imply abstract-symbolic semantics (we provide that by defining the labels).

I would like to try and discuss these ideas in the context of a small HTM, so we could in theory test them. “Small” could be considered something that could run on a single computer in a matter of minutes.

It can be fun to imagine what a few million mini-columns might do, but there is little practical use right now unless you have access to the hardware to run tests at that scale.


Agreed, so the best starting point is to frame the questions correctly to fit the known properties of the hardware.

Starting with fuzzy thinking about the basic premises of how the hardware works is unlikely to lead to satisfactory answers.

The question was raised about the novelty of seeing a bolded character, and I can see how to fit that into the overall structure of representation. Once you do that, you can see that seeing a symbol that is somehow different from the learned set of shapes would register as novel and trigger learning. At the level in the hierarchy where shape recognition occurs, variation from a learned shape is the right level of representation.

Understanding that there are these levels helps to place where the novelty is registered.

The larger concept is that novelty can occur at any level of representation.

The history of AI would tend to indicate otherwise. Many useful things have come from ideas that are not directly bio-inspired. Consider Numenta’s latest work on speeding up deep learning. While these efforts are obviously not going to lead directly to AGI, they may be stepping stones.

Even things like making the HTM algorithm more efficient seem very reasonable and certainly are not bio-inspired. HTM is very far from a biological system.

To tie this back to the biology in my “fuzzy” way: as I understand it, there are feedback loops in the neocortex. These are typically not modelled in the HTM systems used for object classification or anomaly detection. I suspect that feedback is essential to more general learning - like predicting as-yet-unseen input. That is where I hope this thread might head.
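Purely as an illustration of what “feedback” could mean structurally (not a description of any existing experiment), one simple wiring is to concatenate a higher region’s output SDR from the previous timestep onto the lower region’s input for the next one. The region functions below are trivial stand-ins so the sketch runs; the sizes and wiring are assumptions.

```python
import numpy as np

def step_with_feedback(sensor_sdr, prev_top_down, lower_region, upper_region):
    """One timestep: the higher region's previous output is appended to the
    sensory SDR before the lower region sees it (the 'feedback' path)."""
    combined = np.concatenate([sensor_sdr, prev_top_down])
    lower_out = lower_region(combined)
    upper_out = upper_region(lower_out)
    return lower_out, upper_out          # upper_out becomes the next step's feedback

# toy stand-in regions over 128-bit SDRs, for illustration only
lower = lambda x: (x[:128] > 0).astype(np.uint8)
upper = lambda x: np.roll(x, 1)
feedback = np.zeros(128, dtype=np.uint8)
for sensor in (np.eye(128, dtype=np.uint8)[i] for i in range(3)):
    lower_out, feedback = step_with_feedback(sensor, feedback, lower, upper)
```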

I agree, and will add that this is so far beyond current state-of-the-art hardware as to be untestable at this time. Modeling capability will have to expand by about 3 orders of magnitude to make this possible.

This is not correct; I think 3 people in HLC have run experiments with feedback loops. Martin’s project report provides some details of how that was done with current hardware.

Again, you have to put the novelty in the right place and the answer falls out automatically. The concept of bold is a spatial concept.

We are very good at combining concepts and terrible at conceiving genuinely new ones.
I have commented on this before:

To phrase it more carefully: there are some very large projects that combine many maps and connections between them.

These have run times on the order of thousands of hours of CPU time and are beyond the reach of most experimenters in the field. I do not have ready access to the technology to experiment with these concepts in any meaningful way. The people I know who do similar experiments are not able to build these large models and experiment with the model properties to learn how they work.

See:

That said, even with these massive resources the model is at the level of a single event of a single ball passing behind a post.

Where do I get 3 orders of magnitude? To be something I can do as a private experimenter, I need to get from thousands of hours of CPU time down to tens of hours for any practical experiment.

I am not claiming that it is not a spatial concept. I am not telling you that you are wrong about humans not being able to imagine unseen things. In this thread it would be interesting to explore the idea of how an HTM could predict unseen things (probably through feedback of earlier learning).

I have read extensively about mental imagery.

I can assert with some confidence that this mental manipulation is the combined effort of many systems in the brain and is the result of the activation of internal representations that already exist. For the explanation to make any sense you will have to do a lot of learning about how the brain represents spatial concepts.

I can point you to the reading if you are truly interested in getting a thorough understanding of the concepts. I should warn you that it is likely to take weeks, if not months, to master this material. There is no formal coursework on this, so you have to pick the concepts out of many only tangentially related texts.

For a very superficial survey of the field:

And an odd, but telling condition that is strongly related to the question at hand:

This thread is not about mental imagery. It is about the HTM algorithm and how it might be modified. By “unseen” I mean the exact same input has not been learnt.

Regarding the concepts of mental representation etc., these are built on very shaky psychological and philosophical foundations. If you want to discuss that, we could do it elsewhere; here I’d rather keep this thread on the subject of the HTM algorithm.

I really do think you have to be more careful about your use of words. What is a ‘collection of features’? What is the ‘system’? What does it mean to ‘do object recognition’? All we have to work with is SDRs that represent sensory inputs or sequences of other SDRs. You can’t leap from there to objects and features unless you have algorithms to show how.

Getting rather picky here,

  1. SDRs do not encode, they represent (it’s in the name)
  2. Size and architecture don’t matter, algorithms do.

The key computational unit is at the column/micro-column level, size just means you got more of them.

Using words like dendrite and synapse is not helpful. We’re writing code to execute algorithms and we really don’t care how the brain does it biologically.

Introspection is not helpful; we’re stuck at the really low level of algorithms to derive SDRs from other SDRs. We need those small models to find and test those core basic algorithms. Whatever they do is done with an amount of computation easily achievable on desktop computers. IMHO.

What the SDRs represent depends on how the program (the HTM system) has been trained. To get labelled categories out of HTM, it is trained on labelled data, or the output of the HTM is classified by another algorithm.

The HTM algorithm learns spatial and temporal patterns. Similar patterns generate similar SDRs, and those similarities could be thought of (by the human operator) as features of the inputs.

When relating a set of features to a label, this can be considered object recognition. For example, feed in a sequence of digits and then feed the output to an SVM decoder that classifies the HTM output as a digit.

The HTM algorithm can also be trained to classify, then a simpler decoder can turn the classification represented by an SDR into a label that represents an object.
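A minimal sketch of such a “simpler decoder”, assuming one stored prototype SDR per label (the prototypes and labels here are toy stand-ins): the decoded label is simply the prototype with the largest overlap against the classification SDR.

```python
import numpy as np

def decode_label(sdr, prototypes):
    """prototypes: {label: binary SDR vector}. Return the label whose
    prototype has the largest overlap with the given SDR."""
    return max(prototypes, key=lambda label: int(prototypes[label] @ sdr))

# toy usage with two 64-bit prototypes
protos = {"number": np.array([1, 0] * 32), "letter": np.array([0, 1] * 32)}
print(decode_label(np.array([1, 0] * 32), protos))   # -> "number"
```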

Regarding introspection (in your next post) check out the video with Hawkins talking about introspection at the beginning.

Thank you, those words are far more carefully chosen.

The video is a tough watch. There is a brief mention of introspection (he likes it, others don’t) with some examples, but they’re stated ‘as fact’. I don’t see that ‘grid cells remapping = sense of being in a different room’ is any different from any other data and hypothesis: how do we measure it? How do we test it?

I was also troubled by the mixing of introspection, theoretical ideas and terms, with some neuroanatomy thrown in.

I tried to follow it, but the sound quality is poor and it skips steps. This is a chat between members of a team, not easy to follow if you’re not already on the team.