Prediction versus input in the HTM model and biological brain

Biological neurons have apical dendrites and basal dendrites. These are not modeled in the HTM model, which uses: (1) an input, coming either from a sensory organ or from another neuron; (2) the predictive state of the neuron, which says whether the neuron is expected to fire based on previous neural states (I will call that the context).
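
To make the two ingredients concrete, here is a minimal sketch (not actual Numenta code; the class and argument names are mine) of a cell that can only fire on input, while context only sets a predictive state:

```python
# Minimal sketch, not Numenta code: a cell that fires only on feedforward input,
# while context can only put it into a predictive state for the next step.

class SketchCell:
    def __init__(self):
        self.active = False      # fired at this time step
        self.predictive = False  # expected to fire, given the previous context

    def step(self, has_feedforward_input: bool, context_matches: bool) -> bool:
        # (1) Firing requires a real input; a correct prediction only means the
        #     cell was "ready" and would win against its neighbors.
        self.active = has_feedforward_input
        # (2) The predictive state for the next step is set purely by context.
        self.predictive = context_matches
        return self.active
```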

Now let’s do a little thought experiment:
A cube is moving at a constant speed on a flat table. As described by the HTM model, at each time t my brain predicts the next state of the cube, and the neurons encoding that next state are put in a predictive state. As the cube moves at a constant speed, this prediction is validated by the next visual input, no problem. Now, in the real brain, if I close my eyes (cutting the visual input), I can continue to predict the current state of the cube, and when I open my eyes again it is very likely that the state I predicted is almost exactly the actual state of the cube, since its motion is quite simple. Despite the lack of visual input, my brain was able to keep the predictive states of the neurons rolling without these states needing to be validated by a real visual input.
But in HTM theory, that is impossible. In the HTM model, only inputs can trigger a neuron to fire, no matter how certain the system is about the next state of the input. A neuron can never fire based on prediction alone.

That feels like a shortcoming of the HTM model. The brain’s ability to hallucinate the world by constantly mixing real sensory inputs with expected inputs and context feels like a major part of the thinking process.

Do you guys have papers I could check out on that subject? Or do you know if this topic has been discussed at Numenta?


Current HTM theory postulates a location signal that is transmitted in the feedback pathways. The derivation of this signal has not been defined, but there is much interest in grid cells. One of the known grid cell populations in the HC/EC complex is time cells. I can see how time and direction could result in motion.


Hey @DrTaDa, welcome!

Interesting thought experiment. It’s true that a current HTM region (doing SP & TM) needs continuous inputs to keep pace with the changing world. I think that to achieve this eyes-closed scenario, the system would need to keep using the last seen value as the current input. So if its eyes were closed from, say, time step 100 to 110, it would need to use what it saw at step 99 as its input at steps 100 - 109. If it were doing this, and using the deltas (or % changes) instead of the raw input values, I think it should still correctly predict the constant changes.
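
Roughly, something like this sketch (illustrative only; `htm_step` is a placeholder for whatever runs SP + TM on one encoded input, and is passed in as a callable):

```python
# Illustrative sketch: feed the region deltas rather than raw values, and when
# the eyes are closed, repeat the last input it actually saw.

def run_with_eyes_closed(raw_values, closed_steps, htm_step):
    predictions = []
    last_seen_delta = 0.0  # last delta observed while the eyes were open
    for t in range(1, len(raw_values)):
        if t in closed_steps:
            delta = last_seen_delta                    # reuse the last seen input
        else:
            delta = raw_values[t] - raw_values[t - 1]  # normal "eyes open" input
            last_seen_delta = delta
        predictions.append(htm_step(delta))            # placeholder SP + TM call
    return predictions
```

For the constant-speed cube the delta never changes, so the repeated input keeps matching the learned sequence.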

As I understand it, the TM of a single HTM region is mimicking basal dendrites, by forming Hebbian links between current winner cells and prior winner cells. As for apical dendrites, these are mimicked in multi-region and multi-column HTM models, such as those discussed in:

&

and implemented in:

In these larger models, segments are learned between regions, so a cell in one region of one macro-column can become predictive based on activity it is monitoring in another region of another column. I’m not a neuroscience expert; this is just my understanding of how apical segments are used within HTM theory overall.
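
Very roughly, my mental model of it looks something like this (the thresholds and the simple "or" rule are my own simplification for illustration, not the actual apical TM logic):

```python
# My own simplification of how basal vs. apical segments could set a cell's
# predictive state; thresholds and the plain "or" rule are illustrative only.

BASAL_THRESHOLD = 13   # active synapses needed on a basal (same-region) segment
APICAL_THRESHOLD = 13  # active synapses needed on an apical (other-region) segment

def is_predictive(basal_segments, apical_segments,
                  prev_active_cells, other_region_active_cells):
    # Basal segments sample the previous winner cells of the same region
    # (sequence context). Each segment is modeled here as a set of cell ids.
    basal_match = any(len(segment & prev_active_cells) >= BASAL_THRESHOLD
                      for segment in basal_segments)
    # Apical segments sample active cells in another region / column
    # (feedback context from elsewhere in the model).
    apical_match = any(len(segment & other_region_active_cells) >= APICAL_THRESHOLD
                       for segment in apical_segments)
    return basal_match or apical_match
```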

Getting back to your original point though, I think you’ve pointed out an important limitation! I’d say it’s invaluable to have people thinking about the practical limitations of the theory, especially with tangible examples like yours. Thanks!


Hello @Bitking, I feel like I shouldn’t have used a motion-based visual input, as it is a very specific case. The issue I am addressing is more general and arises in any thought process: the ability to predict the next state without needing confirmation from observation.

@sheiser1, thank you for the clarification about basal and apical modeling in the HTM model; I got a bit mixed up while reading the papers. If I rephrase my idea based on Fig. 1B of the Hawkins 2017 paper, it would be: in the human brain, context/basal input is able to induce an action potential (AP), while in the HTM model a feedforward/proximal input is always needed. Therefore, in the human brain, cascades of activity can arise from prediction alone, while the HTM brain will always need sensory input to keep running. However, I am not a neuroscientist and I don’t know whether basal dendrites alone are enough to induce an AP in, for example, pyramidal neurons. If not, that would invalidate my current idea.


Does this help in any way?

Some reflection should show that a sheet of HTM columns is part of a larger system. Cortex by itself is purely reactive and incapable of initiating any actions. All drive must come from either the senses or subcortical structures. The cerebellum is all about sequences and combinations of sequences. If your concern is the evolution from state to state, the cerebellum would act as guide rails, stepping through familiar sequences. It could also guide active recall, the bit that we call recognition. This is very closely related to the cancellation of self-motion that we all take for granted but would greatly miss if it were absent. (See handheld video for an example.)


Thank you for the link. It does indeed seem related; I will have to read more about the role of the cerebellum.

On the same thread, @bkaz mentioned his blog, where he proposes an idea quite similar to the one I am discussing here, that is, how feedback, prediction and feedforward are mixed:

It should be possible to modify an HTM model so that it hallucinates, by allowing neurons to fire from their basal/context input alone when the expectation is strong enough. For example, train it on the sequence ABCD and then show it the sequence ABCE. When the letter E arrives, instead of a perfect representation of the letter E, a union of the SDRs for E and D would appear.
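
A sketch of what I have in mind (the confidence measure and threshold are invented for illustration; this is not existing HTM code):

```python
# Sketch only: let strongly predicted cells fire even without feedforward input.
# The confidence value and threshold are invented for illustration.

HALLUCINATION_THRESHOLD = 0.9  # how confident the context must be to fire alone

def compute_active_cells(feedforward_driven_cells, predicted_cells,
                         prediction_confidence):
    # Standard rule: cells driven by the feedforward input become active.
    active = set(feedforward_driven_cells)
    # Modification: when the context is confident enough, the predicted cells
    # fire as well. After training on ABCD and then presenting ABCE, the output
    # would be a union of the SDRs for E (the input) and D (the prediction).
    if prediction_confidence >= HALLUCINATION_THRESHOLD:
        active |= set(predicted_cells)
    return active
```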


If you don’t mind having your mind explode from information overload, this thread may steer you in the right direction. I would strongly advise reading the thread BEFORE reading the paper linked in the first post.

Really, read the thread first.


In theory this sounds just right. But algorithmically, input sources don’t matter in HTM algorithms, so one can simply feed these “closed-eyes” inputs by aggregating them with the sensory inputs, which in this case are absent. This is just to point out that HTM algos are highly generalized.
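
For instance, something like this (the `encode`, `tm_compute` and `decode` callables are placeholders, not real NuPIC functions):

```python
# Sketch: the algorithm doesn't care where its input comes from, so when the
# sensory value is absent we can feed the previous prediction back in instead.

def closed_loop_step(sensory_value, previous_prediction,
                     encode, tm_compute, decode):
    if sensory_value is not None:
        active_columns = encode(sensory_value)        # real "eyes open" input
    else:
        active_columns = encode(previous_prediction)  # imagined "eyes closed" input
    predicted_columns = tm_compute(active_columns)
    return decode(predicted_columns)                  # next step's fallback value
```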


Thanks. My blog post is very high-level; I didn’t have a specific proposal for how such feedback is implemented. It may well be decoded, threaded through the thalamus, and then fed into the same basal dendrites that normally receive feedforward input. At the level HTM is exploring now, it may not make much difference whether the input is real or imaginary.
Edit: just noticed that @Jose_Cueto said pretty much the same thing :slight_smile:


Perhaps @FFiebig might have some insights into this thought experiment. It seems to me as though the activations for cell populations resulting from direct stimulation would correspond to the “eyes-open” portion. Then, the subsequent echoes of activations after the direct stimulation has been removed might play some role in continuing the propagation of the input stimulus for a brief time during the “eyes-closed” portion.

Welcome @DrTaDa

HTM ≠ NuPIC (I don’t use NuPIC, but I think it includes only Spatial Pooler and basic Temporal Memory algorithms)

As @sheiser1 says, there is a Numenta algorithm that models a cell population with apical input. This algorithm, originally called Extended Temporal Memory, was used in the experiment on which the “columns paper” is based, and also for “Untangling Sequences: Behavior vs. External Causes” (preprint). Perhaps, in your thought experiment, the prediction without visual input (an “external cause”) could be produced by the “behavior” of imagining the movement?

An HTM-scheme version of the “untangling sequences” experiment is an attempt at an HTM-style model with more or less biologically plausible parameters (cell and minicolumn numbers and connections).
