Love you Matt.
Hey, Matt, great job. Thanks for all the work that must go into these episodes.
Comment on my YouTube response:
Hmmm. So, do I understand this? Any input firing patterns or sequences that seem similar generate a new 'hierarchical layer' representation, which is just another input to another layer or column. This recursive "bundling of similarity" becomes a hierarchy of consensus, which generates columns that respond to and vote on abstractions to "model" the world in more predictable ways (because predictions preserve connections). The utility of this to a biological species can go many ways, because the general mechanism/algorithm of a sparse hierarchical feedback loop (HTM) can be applied with different inputs and connections to do different things. How broad or deep the recursive bundling goes might be structural/innate, or very different for different individuals or species. Recursive bundling of similarity becomes a hierarchy of consensus, which leads to the ability to predict, which is a key function of learning responses, motor skills, and memory of abstractions.
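To make "bundling of similarity" concrete, here is a toy sketch of my own (not Numenta's implementation): patterns are sparse sets of active bits, similarity is overlap, and recurring similar patterns get grouped under one stand-in for a higher-level representation.

```python
# Toy illustration of "bundling of similarity" over sparse binary patterns.
# All names and thresholds here are made up for the sketch.

def overlap(a: set, b: set) -> int:
    """Number of active bits two sparse patterns share."""
    return len(a & b)

def bundle(patterns, threshold):
    """Group patterns whose overlap with a group's prototype meets the
    threshold. Each group stands in for a hypothetical higher-level
    representation that 'recognizes' its members."""
    groups = []  # list of (prototype, members)
    for p in patterns:
        for proto, members in groups:
            if overlap(p, proto) >= threshold:
                members.append(p)
                break
        else:
            groups.append((p, [p]))
    return groups

# Three patterns: the first two overlap heavily, the third is distinct.
pats = [{1, 2, 3, 4}, {1, 2, 3, 9}, {20, 21, 22, 23}]
groups = bundle(pats, threshold=3)
print(len(groups))  # 2: the similar pair bundles together, the third stands alone
```

The grouped output could itself be fed to another round of bundling, which is the recursive part of the idea.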
I want to understand and express the details of the hierarchy. Here are some ideas I have.
When I say in the first sentence, "Any input firing patterns or sequences that seem similar generate a new 'hierarchical layer' representation…", I mean that a higher-level layer (a set of columns representing a lower-layer input) becomes stimulated to fire on (i.e., it will recognize) a specific input pattern or sequence. It is as if a given input firing pattern generates a representation layer, but it is not really causal that way: lower levels don't cause higher-level representations. Higher levels in the hierarchy emerge or develop when lower-level firing patterns similar to prior patterns occur at a receptive time. Emergence is required because there must be multiple representations of the same input vectors, counterfactuals, etc., and they must not be automatically generated, or there would be no learning.
To break it down further: you can only recognize something if you have seen it before, while you were receptive to it. Sensory or low-level neocortical patterns don't cause higher-level representations; higher-level representations emerge if and when salient patterns recur. The columns in a hierarchical layer might be random neighbors, or more distant connections, as you have shown in previous episodes (and papers), and that is another interesting question.
I’m willing to go out on a limb here and claim (without evidence) that there is an evolutionary argument that higher-level neocortical representations (aka layers) must emerge from a large neocortical environment with lower-level inputs; they are not caused by lower-level inputs or pre-wired. The generic reusability and flexibility of neocortical columns in mammals demands emergence as a requirement of the hierarchical object-modeling theory.
Key neocortical hierarchy questions I have are:
- How does a naive neocortical system create hierarchy with columns in layers?
- How does the neocortex use hierarchical layers in a thousand brains network?
- What testable hypotheses can we make to challenge the theory?
Episode 16 communicates:
- Cortical columns (which can be thought of as units) are local neighborhoods of neocortex with several standardized layers, types of inputs, outputs, etc. They are the main units of intelligence, modeling sensory input, movement, objects, and concepts in the brain.
- Cortical columns are organized in a hierarchy.
- The hierarchy is not necessarily very deep.
- The hierarchy is broad, messy, sparse, and elastic.
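The "columns as voting units" idea above can be sketched in a few lines. This is my own toy illustration, not the episode's or Numenta's implementation; the column guesses and object names are invented for the example.

```python
# Toy sketch of many columns voting on object identity:
# each column holds a (possibly noisy) guess about the sensed object,
# and a simple majority vote yields the network's consensus.
from collections import Counter

def vote(column_guesses):
    """Return the object most columns agree on, with its vote count."""
    tally = Counter(column_guesses)
    return tally.most_common(1)[0]

guesses = ["cup", "cup", "bowl", "cup", "can"]  # one guess per column
obj, votes = vote(guesses)
print(obj, votes)  # cup 3
```

Real columns would of course vote by lateral connections over sparse representations rather than by tallying labels, but the consensus-over-many-models shape is the same.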
I believe the Thousand Brains Theory is tenable. It is a framework that seems to work.
However, Episode 16 was not a satisfying explanation (to me) of the current state, key questions, and strengths of the theory.
My first reaction was that the visual examples didn’t help as much as you hoped. The visual scale metaphor with the straw is a good first example, but why or how the cortical hierarchy is important was not explained well. Perhaps there are gaps in terminology or theory that need filling in to make the story hold together. I will try to come up with better metaphors and examples of why hierarchy and “unitarity?” are key to the Thousand Brains theory. Good job writing and delivering, but I think the role of the hierarchy in the theory is not explained well enough yet.
The major resolved and unresolved questions of the theory were not nailed down in the examples. Why do cortical columns at every hierarchical level and in every region perform the same computation? It is fair to provide outbound links to keep the presentation tight. Dare I say, a little math would be good as part of the explanation.
We need to develop testable hypotheses that relate to the creation, maintenance, and disruption of these hierarchical “layers” of columns in the neocortex, or call for them.
Thanks, Matt. I really appreciate that good examples and presentations are difficult to develop. Please continue making new episodes. HTM School is a wonderful “product” of the company.
Castro Valley, CA
Great video! Always a pleasure to watch them.
The concept of hierarchy developed at Numenta with the TBT is very appealing.
Some approaches in the ML field go in this direction. If I understand correctly, they use skip connections, top-down modulation, and feature pyramid networks to implement this kind of hierarchy.
Here is what I found on the subject (not exhaustive of course):
The big issue with these models is the training phase: their recurrent architecture is not well suited to the backpropagation algorithm.
That’s one more reason why we need to switch to local learning rules!
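For concreteness, here is a minimal sketch of one of the mechanisms mentioned above, a skip (residual) connection. Plain Python, no framework assumed; `layer` is a made-up stand-in for any learned transformation.

```python
# Minimal sketch of a skip connection: the input is carried around a
# transformation and added back to its output, y = f(x) + x.

def layer(x):
    # stand-in for any learned transformation
    return [v * 0.5 for v in x]

def residual_block(x):
    # the "skip" lets the input bypass the transformation
    return [a + b for a, b in zip(layer(x), x)]

print(residual_block([2.0, 4.0]))  # [3.0, 6.0]
```

Top-down modulation and feature pyramids combine levels in a similar additive spirit, though across a hierarchy rather than within one block.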
Thanks for your thoughtful post.
You are correct. It is really just a comparison of what we know vs how we’ve always used it in ANNs until today.
Regarding your other questions: there are still a lot of open questions. I did not try to answer anything I did not think I could support with evidence. I can only go so far when following along with the research. What I’ve presented is only what I am certain about so far.
Great episode Matt!