What are the flaws in Jeff Hawkins's AI framework?

I was reading this Stack Exchange post and thought it would be useful, both to that community and to this one, to share it here. In particular, it would be interesting to get some comments on the question’s currently only answer, and perhaps some newer (updated) answers as well.

I would rather focus on what we know about how the brain works than point out flaws in anyone’s work. Pointing out the flaws doesn’t seem very useful. Let’s point out the ways forward.


I actually think we also need to identify and face the flaws, so that we can try to fix them.

Another way to look at the problem is from the perspective of what’s left to do (there is a lot of opportunity here!). Numenta’s work is an iterative process of researching, theorizing, testing those theories, and reworking them as new information is learned. I personally don’t agree with the premise that flaws are holding back realization of the original vision. Reaching the end goal of modeling the neocortex is extremely ambitious and is going to take time. I don’t think anyone who really looks at the evolution of HTM over the last 10 years would say that progress has stagnated.


I see research efforts like this as being grounded in a general direction and occasionally pivoting as needed.

So rather than focus on the detail of what HTM does or doesn’t do today, the bigger question is whether or not the “Biological Inspirationalist” direction is more promising than the others.

Personally I’m convinced that creating models of intelligence based on how the brain works is the most promising way forward, otherwise I wouldn’t spend so much time here reading posts and papers :grinning:

Will Numenta tread a perfectly optimal line directly toward the quickest possible breakthrough? Unlikely, but that’s not the point, rather it’s to explore an area and build understanding until breakthroughs eventually happen.

These forums are a great complement to Numenta’s research. If you think there’s an under-emphasis or over-emphasis on some aspect, you can effectively create your own little research arm - for example @Bitking’s incorporation of the lizard brain.


It is not really possible to talk about the disadvantages or flaws of the HTM theory here, right?

Clarifying the flaws or disadvantages of the HTM theory would make life easier for people (outside of Numenta) who want to do research on this theory and tackle the existing problems that need to be solved.

Your approach is very disappointing. I’ll have to apparently do this research alone. It will take longer, but I will do it anyway and share it with the world.

“Classical” neural networks are also based on, or at least inspired by, biological neural networks.

Note that, in mathematics and computer science, the simplest model that explains the data is often the one that should be chosen (see Occam’s razor). In other words, HTM may be a more complex model of how the brain (in particular, the neocortex) works, but it doesn’t necessarily work better in general. Note: I am not saying that it doesn’t or won’t “work” better in practice than other models; I just think that you should keep Occam’s razor in mind and adopt a more scientific view when deciding whether to follow an idea.

Sure it is, it’s a research community and not a cult. But at the same time, most people are here because they favour Numenta’s overall direction, so it’s not exactly the best place to find lots of detractors.

A quick forum search returned this as the top result: a temporal memory limitation is called out, admitted as an area of weakness, a frank and open discussion ensues, and it ends with a reference to Occam’s razor. Sounds like your ideal thread!


I think the resistance you are sensing comes from the fact that the Stack Exchange question you linked has a strong undertone of something like “It’s been a decade, so why is there nothing to show?” That is also why I mentioned that a better way to look at the problem is from the perspective of what’s left to do (or as you say, what needs to be solved). You’ll likely have a more positive interaction here on the HTM forums from that perspective.

Anyway, there is a lot of opportunity to improve HTM. A better way to handle repeating sequences is an obvious one (as discussed on the thread @jimmyw linked to). I believe the solution for that problem involves feedback from a pooling layer (which is the direction I am currently exploring in the Dad’s song project).
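To illustrate why repeating elements are hard, here is a toy sketch (my own example, not HTM itself): a purely first-order transition model learns a sequence containing a repeated symbol and then cannot decide what follows that symbol without additional context, which is exactly what feedback from a pooling layer would supply.

```python
# Toy illustration (not HTM code): a first-order sequence memory
# cannot disambiguate a repeated element without context.
from collections import defaultdict

def learn_transitions(sequence):
    """Record which symbols have been observed to follow each symbol."""
    transitions = defaultdict(set)
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev].add(nxt)
    return transitions

# The sequence "A B A C" contains the repeated element "A".
model = learn_transitions(["A", "B", "A", "C"])

# After seeing "A" alone, the model predicts both "B" and "C" --
# only context about where we are in the sequence (e.g. feedback
# from a pooling layer) could resolve the ambiguity.
print(sorted(model["A"]))  # → ['B', 'C']
print(sorted(model["B"]))  # → ['A']
```

HTM’s temporal memory uses per-cell context to do better than this first-order model, but the ambiguity resurfaces for sequences that repeat, which is the limitation discussed in that thread.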

If you’re looking for a somewhat larger challenge, there is the so-called “numerous fields” problem. This one will likely involve feedback from a pooling layer, voting, and hierarchy.

How to implement attention is another. We had a nice discussion of that problem recently on this thread.



I have been very vocal that the HTM model focuses on fitting the various members of the cortical zoo into the location-based processing view, to the exclusion of all other possible functions.

When you list “top” posts, mine is second:

This has a very different view of what layer 2/3 is doing compared to HTM canon.

Elsewhere you will find this paper:

It is a very good exposition of the theory and practice of the DeepLeabra model, which has a very different view of how layer 5 implements predictive cells.

This community is certainly open to examination and discussion of the good and bad points of HTM and related models. I think it is safe to say that we are all here because we think that the general approach of Numenta (brain-based study and modeling) is most likely to have the best results in the long run.

There is no need to run around like your hair is on fire shouting “HTM is falling.” I have to say that the link you posted above looks like a hit job and I am not very interested in participating.


AFAIK, right now there aren’t many production-ready (and hence really useful) applications using HTM theory, if any, after more than a decade of ads and promotion. We could talk about cortical.io; what else?! So I don’t think the Stack Exchange question is unfounded.

HTM is doing well in the neuroscience field, I’ve heard, so that’s a big concrete achievement as far as I’m concerned. I don’t think focusing on production-ready applications is the right measure, because if the goal is to mimic the brain, the neuroscience and a path toward general AI are more important.


It’s not that the Stack Exchange question is unfounded, just that it isn’t likely to get a very positive response from this community, because it makes an incorrect assumption about the goals and research path that HTM is taking. It also assumes that lack of production-ready application is equivalent to stagnation.

It is no secret that HTM doesn’t currently beat more traditional AI/ML techniques in many practical applications, other than perhaps anomaly detection for streaming input. But that isn’t the point of HTM research. Numenta isn’t in a competition. They are exploring how the neocortex works, learning how the brain solves problems, and modeling those functions in software.

Most of us in this community are in it for the long haul, believing that the biological approach to AI will ultimately reveal the important mechanics of intelligence. Once understood, those mechanics can later be optimized and improved. Biology provides a road map to intelligence, and the purpose of HTM is to explore that road map.


@nbro “It’s been a decade, so why is there nothing to show?”

10 years to production ready?

Let’s put that in perspective. The perceptron was introduced in 1957. It did not really even start to flower as a usable model until the release of the PDP books in 1986. For those keeping score: 29 years.

From the PDP books, we did not see that work elaborated into usable deep networks until the last decade. If you take the Andrew Ng and Jeff Dean cat-recognition task as the defining deep-network event, that’s 2012: arguably more than 25 years to production ready.


@nbro “So far the implementation is only used for anomaly detection, which is kind of the opposite of what you really want to do. Instead of extracting the understanding, you’ll extract the instances which your artificial cortex doesn’t understand.”

On the other hand - this is EXACTLY the driver to allow one-shot learning, something that deep networks can’t do at all.
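A minimal sketch of that connection (my own toy example, not Numenta code): in any predictive model, an unpredicted input can be flagged as an anomaly and learned in the same step, so it is predicted the next time it occurs. Anomaly detection and one-shot learning are two sides of the same mechanism.

```python
# Toy streaming predictor: flag unpredicted symbols as anomalies,
# then learn them in a single exposure (one-shot).
from collections import defaultdict

class OnlinePredictor:
    def __init__(self):
        self.transitions = defaultdict(set)  # symbol -> predicted successors
        self.prev = None

    def step(self, symbol):
        """Return True if `symbol` was anomalous (unpredicted),
        then learn the transition immediately."""
        anomalous = (self.prev is not None
                     and symbol not in self.transitions[self.prev])
        if self.prev is not None:
            self.transitions[self.prev].add(symbol)  # one-shot update
        self.prev = symbol
        return anomalous

p = OnlinePredictor()
stream = ["A", "B", "C", "A", "B", "C", "A", "B", "X"]
flags = [p.step(s) for s in stream]

# The first pass through A->B->C is novel; the second pass is fully
# predicted; the surprise "X" is flagged -- and learned at that moment.
print(flags)  # → [False, True, True, True, False, False, False, False, True]
```

A deep network trained offline would need many examples of “X” to adjust its weights; an online predictive model like this learns the new transition from a single occurrence, which is the point being made above.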