As before, definitions matter. HTM does make models, in my understanding of the word. How would you define a “model”?
You may be correct, but I personally disagree with this interpretation. I believe the networking infrastructure, besides setting up NuPIC as an expandable framework from the get-go, is at the moment used for two purposes: practical use of HTM in real-world or toy applications (where knowing all the biological details is not necessary), and providing building blocks to serve as placeholders for aspects of a circuit for which a deep research dive has not yet occurred. This allows them to quickly hook up a new idea (glossing over the finer details) so they can tinker with it.
They believe a single CC is far more capable than most view it, and the direction has (for as long as I have been observing, at least) been to fully understand a single CC by itself before leaning on hierarchy to solve a problem (Jeff views this as a cop-out to cover for not understanding something well enough – I couldn’t find an example offhand, but there are a number of videos of him taking this perspective). This is a preference, of course (you can’t boil the ocean, so you must start somewhere). There are other approaches that can be taken (I, for example, am starting with hierarchy and am less focused on biological plausibility).
At the moment, they are obsessed with figuring out reference frames, believing it to be one of the key ideas for understanding the cortical circuit, and they are not focused at all on hierarchy yet. It is of course impossible to know someone’s motivations, but their research meetings are publicly available, so it is not difficult to watch them working and see where their heads are at.
This was a subjective observation on my part, sorry. My perspective is that there is frankly nothing yet to compare it with to estimate what a “normal” total timeline should be (nobody has achieved a working model of the neocortex). I say it is early stages because, even counting only the things we know (and not the things we don’t yet know we’re missing), the list of TODOs is longer than the list of “dones” (reference frames, egocentric<->allocentric, timing, reinforcement, behavior, attention, consciousness, thinking, planning, decision making, etc., just to name a few).
That may be. To your point, I do recall several videos where Jeff has made comments indicating that he feels like they are getting close to understanding a single CC (but typically not long after those, a slew of new unknowns surfaces). I think that is human nature (the more you know about a topic, the more you understand what is yet unknown).
Anyway, I understood the point about presenting a framework to be that neuroscientists have been studying and accumulating a massive wealth of knowledge about the brain, but we are sort of spinning our wheels on putting it all together without a theoretical framework that offers some higher-level perspective.
That is one aspect of hierarchy they have put forward, yes (a means of handling input at different scales). They have also reiterated in other videos and talks that they are not throwing out the traditional form of hierarchy, only adding to it. I don’t think you could ever build an abstract concept like “democracy” with the scaling mechanism described in that video alone.