Does HTM create an internal model?

Mark,
The question you are asking applies to a higher-level assembly of the current HTM model.
I have collected what I think the individual columns are doing in HTM models here:

Numenta has started to branch out from this basic model in two directions: one is in the interactions with the thalamus, and the other is what happens when you add lateral connections. Both of these lead to higher-level behaviors.

The TBT (Thousand Brains Theory) is an exploration of the lateral-connection concept. I don’t recall seeing a paper based on the interactions with the thalamus.

I have not seen additional published explorations of either concept from Numenta, but I am not privy to what goes on there, and not many research meetings have been published lately.

That said, I am coming to the conclusion that at a higher level (the H of HTM), each map/region/area of the brain works on the principle of pattern completion using these local lateral connections. Each fiber tract carries the ‘output’ from one map to another. Computation happens where two or more tracts impinge on a target: the result is the pattern that best satisfies the partial patterns arriving at that target.
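To make the local pattern-completion idea concrete, here is a toy sketch. It is an assumption on my part that a Hopfield-style network is a fair stand-in for a single map — HTM columns are not literally Hopfield units — but the dynamics of settling on the stored pattern that best satisfies partial cues arriving from multiple tracts are similar:

```python
# Hedged sketch: one "map" completes a pattern from partial inputs
# delivered by two "fiber tracts", using only lateral (Hebbian) weights.

def train(patterns):
    """Hebbian lateral weights: w[i][j] = sum over patterns of p[i]*p[j]."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def complete(w, cue, steps=5):
    """Synchronous updates toward the best-matching stored pattern.
    A 0 in the cue means 'no information arrived for this unit'."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        # The comprehension reads the old state in full, so the update
        # is genuinely synchronous.
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Two stored patterns (hypothetical symbols, not real cortical data).
A = [1, 1, 1, 1, -1, -1, -1, -1]
B = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([A, B])

# Two tracts each deliver a partial view of A; the target map sums them
# and completes the whole pattern from the conjunction.
tract1 = [1, 1, 1, 1, 0, 0, 0, 0]
tract2 = [0, 0, 0, 0, 0, -1, -1, -1]
cue = [a + b for a, b in zip(tract1, tract2)]
print(complete(w, cue))  # recovers A: [1, 1, 1, 1, -1, -1, -1, -1]
```

Note the key property for the argument above: neither tract alone carries the full pattern, yet the map's lateral weights resolve the conjunction to the one stored pattern consistent with both.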

In one direction, you have the senses flowing from the parietal lobe to the temporal lobe, where they are registered as ‘awareness.’ As time progresses, these are assembled at the highest levels into episodes and registered in the EC/hippocampus.

Going the other way, starting with projections of needs from the subcortex into the forebrain, you have the connection streams flowing in the opposite direction. These projections into the sense stream serve to drive attention and recall. These interactions also flow up the same pathways as the senses and are likewise registered in the temporal lobe/EC/hippocampus, converting ‘awareness’ into ‘consciousness.’

There are about 100 maps/regions/areas, with the arrangement of the many connecting fiber tracts being explored in the connectome project. The thalamus plays an important role in awareness (gating surprise) and spreading activation. The conjunction of maps into a single activation roughly corresponds to the global workspace model. I think of this as symbols being assembled into a word or sentence. A given global pattern roughly corresponds to the instantaneous ‘contents of consciousness.’

I see this bidirectional interaction (using local pattern completion) as the basis for speech production, with much the same dynamics and properties as large language models.

I have skipped over the role of the subcortex, episodic training during sleep, how the pallidum routes activation, the sequential nature of processing in the system I am describing, and numerous other details, but this post is already at the TL;DR point, and adding more would lose more readers.

In summary, HTM may well be part of a model that could learn language like an LLM, but as a sub-component, not at the level where most people on the forum are currently working.
