Has anyone implemented convergence with the current NuPIC HTM? I’m scanning the literature and can’t find anything other than a passing reference to it.

Typically, each HTM region represents one level in the hierarchy. As you ascend the hierarchy there is always convergence, multiple elements in a child region converge onto an element in a parent region.
Numenta, 2011. Hierarchical Temporal Memory including HTM Cortical Learning Algorithms.
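To make the quoted idea concrete, here is a minimal sketch of that kind of convergence: several child regions each produce a sparse output, and the parent region's input is all of those outputs taken together. The region sizes, seeds, and the random-projection "pooling" below are purely illustrative stand-ins, not NuPIC's actual API or the real HTM spatial pooler.

```python
import numpy as np

def region_output(input_bits, n_out=128, sparsity=0.02, seed=0):
    """Toy stand-in for a region: project an input bit vector to a
    fixed-size sparse binary output (NOT the real HTM algorithm)."""
    rng = np.random.default_rng(seed)
    scores = rng.random((n_out, input_bits.size)) @ input_bits
    k = max(1, int(n_out * sparsity))          # number of active output bits
    out = np.zeros(n_out, dtype=np.uint8)
    out[np.argsort(scores)[-k:]] = 1
    return out

# Three child regions, each seeing a different 512-bit sensory patch.
children = [
    region_output(np.random.default_rng(i).integers(0, 2, 512).astype(float), seed=i)
    for i in range(3)
]

# Convergence: the parent region's input is the children's outputs combined.
parent_in = np.concatenate(children).astype(float)   # 3 * 128 = 384 bits
parent_out = region_output(parent_in, seed=99)
```

The point of the sketch is only the wiring: many small child outputs fan in to a single parent, which again emits a sparse representation of its own.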


I think Numenta’s view of the hierarchy has changed.
They used to think it worked like the hierarchies in today’s deep learning models, but now they know it doesn’t.
Now when they say hierarchy, they mean the hierarchy within a cortical column.
Turns out, a cortical column is a lot more capable than they thought it was.
They’re not saying there’s no information flowing up and down between cortical columns but they say it’s not strictly hierarchical.

@barnettjv I started out thinking about convergence much like you are describing - that somehow things come together somewhere “higher” in the brain. This is what much of the literature suggests, and it is somewhat how most complex systems are configured by humans. Humans are the viewers of the information, so pre-processing and consolidating it to make the human more efficient just makes sense.

This raises the question - what looks at this higher-level information in the brain?

Also - if there is some sort of funnel, does that end up creating a weak spot in the brain?

After much reading and reflecting, the realization set in that it is possible to retain a distributed representation along the pathways through the cortex. What forms in the lobe hubs is a distributed representation!

Once you make that mental shift it frees you to think of cortical processing in new ways.

What can you add to a stream by micro-parsing the features in each stream?

What happens when you compare these micro-parsed streams against each other?



In computer programs we take streams of tokens (letters) in groups (words/numbers) and parse these into useful semantics. The quanta are very coarse-grained.

The brain starts with a much finer grained representation distributed over both time and space.
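To make the contrast above concrete, here is a toy illustration (the encoder and its parameters are entirely my own invention, purely hypothetical): a program's coarse quanta are whole words, while a finer-grained scheme could spread each character over several bit positions (distribution over "space") as the characters arrive one after another (distribution over "time").

```python
import hashlib

# Coarse-grained quanta: whole words are the units a program parses.
sentence = "the quick brown fox"
tokens = sentence.split()

# Finer-grained (hypothetical): each character becomes a small set of
# active bit positions, so a word is a pattern distributed over many
# positions in space and over successive time steps.
def char_bits(ch, width=64, active=4):
    """Deterministic hash-based toy encoder; positions are arbitrary."""
    digest = hashlib.sha256(ch.encode()).digest()
    return sorted({digest[i] % width for i in range(active)})

encoded = [[char_bits(c) for c in word] for word in tokens]
```

Nothing about this encoder is biologically meaningful; it only shows how much finer the representational quanta can be than whole tokens.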

There are at least 4 stages of parsing between V1 and the first association hub.

This parsing is through at least two streams (WHAT & WHERE) and there is good reason to think there is a third stream.

To make things more interesting, this parsing is both bottom-up and top-down.

It took me a long time to wrap my head around this way of looking at things, but it does such a better job of explaining what I read in the literature that I can’t see things any other way now.


Thank you Bitking for the thoughtful reply. I’m just somewhat surprised to not find any published discussions about it.

Thank you hsgo for your reply. This is somewhat disappointing as I’m planning on using HTM in my dissertation.


I think the shift in interpretation has been documented in the papers published by Numenta and in their public presentations. The “Thousand Brains Theory” paper would probably be a good place to start. However, that theory is itself grounded in the several previous papers where they laid out the fundamentals of SDRs, temporal sequence memory, and grid cell modules.

Thank you CollinsEM. I’m familiar with most of the HTM documents that have been posted, along with the videos. I’ve been studying HTM theory since ~2005, when Hawkins published his “On Intelligence” book. I think you are right wrt the “Thousand Brains Theory” paper. I’ve skimmed over it before; I guess I need to review it more thoroughly. Will update the forum with my dissertation proposal once it’s accepted (should be this week).


@barnettjv - While I am currently in the distributed camp I would add that there is room for a hybrid approach to the data format. I could be tempted away with the right data from biological research.

We (@rhyolight , and other forum members) have explored this question in some depth in this thread:

If you do move forward with your dissertation this may provide some good background.


It split into another thread, where @jhawkins chimed in with his take on convergence, or “sensor fusion” here:
