Why isn't HTM mainstream yet

This topic is broken off from the Any questions for Jeff? topic.

I love your analogy :slight_smile:. We have almost 1,000 users on HTM Forum, and the HTM School series has almost 40K views total. I have been managing this community for over 3 years now, and I’ve seen steady, linear growth across all of our metrics. I think YouTube has helped a lot.

Why isn’t everyone jumping on the bandwagon? I think because HTM represents a stark contrast to currently held beliefs about intelligent systems. It has been hard for the Machine Learning community to accept our claims because the nature of biologically intelligent systems does not expose a lot of opportunities for the mathematical proofs the ML community values.

Another aspect is that HTM Theory is hard to understand! I’ve been trying to make it more approachable with YouTube videos, but honestly before they existed it was very hard for laypeople to dig into HTM theory because they would need to read very technical papers and understand pseudocode to really get how it worked.


Don’t get me wrong, I’ve been following the progress here for many years now, and it’s clear the neocortex is doing something we ought to be mimicking in AI. So I’m not a skeptic by any means.

I don’t disagree with anything you’ve said Matt, but I have a slightly different perspective on why different things are popular. By my characterisation, the reason people are on the bandwagon with mainstream ML techniques is because they work so well you can’t ignore them. If you want to solve a problem in vision, you use a convolutional net, and if you have enough data it just works.

HTM doesn’t solve the problems people are interested in solving in 98% of AI-related fields, and there’s a simple reason why. 98% of AI problems do not involve time. So to refer back to the rocket example, I think that we’ve got a rocket that can escape the atmosphere but cannot yet achieve orbit, and we’re in an environment where people just want better cars.

The one major area in which time really can’t be ignored is a small but fast-growing field. It’s robotics. People have acknowledged for a long time in robotics that the traditional AI paradigm of time-agnostic input → output doesn’t scale. The physical world has temporal continuity, and that property is arguably more important than any other property. The need for temporal algorithms is growing, and one of these days there’ll be a killer application that can’t be solved well by any other means, and that’s when people will be forced to come on board.

So to summarise, rocket theory and engineering still needs a lot of work, but I’m sure people will start using the rocket once they actually need to go to space.


There are business reasons why HTM hasn’t caught on. I think Jeff discussed one of them at the end of the March 2015 hackathon at Cornell Tech. Jeff talked about trying to sell this technology to others for search ad prediction and energy consumption for a power station. First of all, businesses have an established way of doing things, and upsetting that is quite a challenge.
I have talked to several people, and the reply I get goes more along the lines of “not vetted technology,” meaning that business people are reluctant to put money up on ideas of which they are unsure. In many of their minds, this is spooky technology, and AI has had too many failures in the past, in which people have lost money.
The reason that IBM gave away the PC market to Microsoft was that IBM did not take PCs seriously. They were considered a toy. So, as long as MS was willing to play with toys, IBM did not really care. By the time that IBM came to realize that PCs were a serious business, IBM tried to push MS out of the game, only to find that they were too late.
Even later on, IBM did not take color monitors seriously, preferring to stick with monochrome, because IBM thought that businesses were unwilling to spend money for color monitors, which was largely thought to be aimed for games.
Similarly with the space race. The US did not invest heavily in space exploration, thinking that letting the USSR spend money on a folly was a good thing. Once the USSR showed the feasibility of its design, the US came in strong and with lots of money to win the game.
The Wright Brothers were also considered foolish for playing with their toys. Curtiss ignored the development of the Wrights, and managed to get the government to subsidize his research with heavy boondoggles. The Wrights were successful in flying their plane and managed to get patents on that technology. Once it was seen that aircraft worked and had uses in preparation for WWI, Curtiss got the government to invalidate the Wrights’ patents. The Wright Brothers were forced to sell out to Curtiss Aircraft.
A similar story happened with GE & J.P. Morgan taking over Tesla’s AC and Westinghouse’s technology. (Great “History Channel” story.)
Business is less about ideas and more about money and controlling others who have technology.


I agree, HTM School has helped a lot! Another thing that is probably holding HTM back is the lack of adoption by big companies. Deep learning gained a lot of popularity when word spread that Google was using it. Same thing with Python. It’s a positive feedback loop, and once something has reached a certain level of adoption, it’s very difficult for something else to replace it.

I hear more about Numenta now than I did a year ago, though, so hopefully this will change in the future :slight_smile:


I think there are a few main reasons there hasn’t been large-scale acceptance by the general ML community:

  1. The development of deep learning and associated algorithms like convolutional networks, GANs, and LSTMs showed vast improvements over older methods in a number of areas, specifically image classification, voice recognition, and text translation. These improvements won the biggest competitions, so that’s where the attention and research went. HTM, which was being developed at the same time, was left in the shadow of those achievements.

  2. HTM currently works well and has good results in time-series prediction and anomaly detection. I think you’ll see more attention when there are improvements on other tasks such as speech recognition or image classification. HTM may be able to solve these tasks, but it’s a matter of finding the right network configuration (i.e., copying the right regions of the neocortex), just as it is in traditional neural networks.

In my opinion, an accurate model of the human brain is entirely possible, it’s just that the size and accuracy of the model are the hard part. With enough hardware and software we can get there.


There are reasons:

  1. You can’t just drop a shiny new codebase into “the community” and say “go play with it” (or rather you can, but the results are not going to be satisfactory). NuPIC’s documentation is awful, some wiki pages haven’t been updated since 2015, and there is no “discoverability,” meaning you can’t just start hacking on things for your own ML project unless you want to spend days in frustration, learning things directly from the source code.

  2. Most time-series problems are in finance and algorithmic trading, where there is an excellent community of ML enthusiasts. But there isn’t a single code example related to this area.

  3. What should I use now? HTM.java? (Most of my codebase is in Scala.) There is still no swarming there, so it is unusable for training new models. NuPIC/Python? Way too slow for production. NuPIC.core? Even worse documentation than the other two. I would say that the project in its current state is completely unusable except for hardcore enthusiasts and the founders.

API documentation is not documentation. Code examples are not documentation (though they help a lot). A community-maintained, outdated wiki is not documentation. Projects die without documentation, and this is exactly what’s happening to NuPIC. And yes, I am going to get the “pull requests are welcome” response, but I am not qualified enough. I’ll do what I can if NuPIC finally proves itself useful for what I’m doing (predicting order flow and anomalous market conditions), but so far I have been unable to obtain good results from it compared to other methods.


There are also some general kinds of resistance among the audience: devout religious believers, creation scientists, and those who believe some special, undiscovered technology (like quantum computation) is required for intelligence.
A lot of people cannot accept the possibility of building human-like general intelligence simply because of their cultural beliefs.


On more technical ground:
Time series should be the best example of where HTM shines, i.e., the killer app.
But mainstream methods like ARIMA do long-term prediction, while all Markov-chain-style methods, including HTM, are at best one-step-ahead predictors.
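To make the distinction concrete: a one-step-ahead predictor can be rolled forward into a multi-step forecast by feeding its own predictions back in as inputs, but errors then compound at every step, which is why this is weaker than native long-horizon methods. A minimal sketch (the toy AR(1) predictor below is a stand-in for any one-step model, not HTM itself):

```python
# Sketch: turning a one-step-ahead predictor into a multi-step forecast
# by feeding predictions back in. Note the forecast is built purely from
# the model's own outputs, so any per-step error compounds.

def one_step(x, phi=0.8):
    """Toy AR(1) one-step predictor: next value = phi * current value."""
    return phi * x

def multi_step_forecast(predict, x0, horizon):
    """Roll a one-step predictor forward `horizon` steps."""
    forecast = []
    x = x0
    for _ in range(horizon):
        x = predict(x)       # prediction is fed back as the next input
        forecast.append(x)
    return forecast

print(multi_step_forecast(one_step, 10.0, 3))  # roughly [8.0, 6.4, 5.12]
```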

Personally, I have veered toward other methods for time series; once I find a working solution, I may try to convert the normal state model (used in such techniques) to an SDR-based state model. That is my plan, at least.


^^ this


Keep in mind that you can write a python program to use NuPIC’s python API and specify running the algorithms in C++. That’s what we do. (Of course the documentation is not very clear about how to do this.)

I hear you, and I’m working on it! New API docs for NuPIC include some general guides for the software as we move towards a 1.0 release.


HTM does NOT work well with time series prediction or anomaly detection. If it did, people would use it. Kaggle has offered a number of competitions in anomaly detection over the years, and the last time people tried to use NuPIC there, they could not even get into the top 100: https://www.kaggle.com/c/seizure-prediction/leaderboard (#158).


Matt, I’m pretty sure you know that the main, and only, reason the ML community does not care about HTM is that it just does not work well on any problems of interest (including time series prediction and anomaly detection). ML practitioners could not care less about mathematical proofs - if they did, deep learning would never have become popular. No one really understands why DL works so well (let alone can prove anything), but it does, and that’s why people like and use it.

The only thing Numenta could do to instantly make HTM popular is to show any task where it could beat other ML algorithms. Show a good result on an existing benchmark, or win a Kaggle competition. Publishing proofs or making it easier to understand is not going to do it, sorry.

HTM has value (I’m a long time fan): its value is in helping us understand how the brain works. Saying things like “Numenta built a rocket that can fly to the moon” is silly and distracting. Numenta has built parts of the engine that might one day be used in a rocket. The engine is not complete yet, so obviously it does not work. We don’t even know how much we don’t know about that engine, so let’s focus on figuring that out, rather than trying to impress ML people.



How do you think a human would compare to those very same anomaly detection algorithms? If we were to apply a human-neocortex simulation to Kaggle data, competing against specifically tailored algorithms (oops - wait! That’s what we’re doing!), would we inevitably conclude that we were on the wrong track, and that the human neocortex is not a significant step in the right direction toward intelligence?

If we were to create true AGI that performed no better than human beings, that entity would suck at certain competitions just as a human would!

But that is nevertheless unimportant and immaterial when the REAL goal is considered. ML competitions are suited to that kind of technology - I couldn’t care less, really. Simply put.



How do you think a human would compare to those very same anomaly detection algorithms?

I don’t know, but based on other benchmarks where human performance has been measured (vision, speech, language tasks), a trained human would do really well on an anomaly prediction task. I sure hope a neurologist would be able to spot any anomalies in my EEG! Otherwise neurologists would use ML algorithms by now.

Your argument is weak because a human does very well on MANY tasks, ML algorithms do very well on ONE task, and HTM does not do well on ANY tasks, when compared to either a human, or to an ML algorithm.

I completely agree, and I said it in my reply to Matt: [quote=“michaelklachko, post:12, topic:2007”]
HTM has value (I’m a long time fan): its value is in helping us understand how the brain works. Saying things like “Numenta built a rocket that can fly to the moon” is silly and distracting. Numenta has built parts of the engine that might one day be used in a rocket. The engine is not complete yet, so obviously it does not work. We don’t even know how much we don’t know about that engine, so let’s focus on figuring that out, rather than trying to impress ML people.
[/quote]


I suspect that as time goes by there will be two tracks getting the brunt of research effort: systems specifically configured to excel in specific areas, and systems geared toward a generalized intelligence applicable across any problem domain.

I expect both will get really good at what they do (with any luck). Then the specifically tailored ones will become more and more costly (in terms of time and effort) to improve, while the general one will continue to improve without encumbrance, until the day when we look back on the others and note how far we’ve come.

Comparing the two from the vantage point of future Michael, will be seen as an inconsequential curiosity, most likely :wink:

Please remember that a neurologist is given graphical tools and does not analyze streaming raw numbers. Those tools abstract the data so patterns can be seen spatially/graphically. Algorithms do no such thing. And yes, most humans would suck at analyzing 10,000 numbers thrown at them, and wouldn’t retain anything beyond 7-10 of them - in any case, far fewer than would be required to distinguish true anomalies to any really fine resolution.


You are absolutely right about that. For sure. I’m sorry I was on mobile and busy responding and didn’t read everything you said here.

Can you spot a fallacy in your arguments above?

A human would not do well on an EEG competition if given raw data, but a human neocortex, and therefore HTM, should!


Nitpicking here, but…

…are misleading. There is a strong mathematical basis behind how and why neural networks work, going back to the original Perceptron algorithms. The mechanics of teaching a model to “learn” by tweaking weights via backpropagation, the chain rule, and gradient descent are well understood (more links below).
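The mechanics being referred to can be shown on the smallest possible “network”: a single weight w with loss(w) = (w·x − y)². A minimal sketch (the data point, initial weight, and learning rate are arbitrary choices for illustration):

```python
# Chain rule + gradient descent on a one-weight model.
x, y = 2.0, 6.0   # one training example; the ideal weight is 3
w = 0.0           # initial weight
lr = 0.1          # learning rate

for _ in range(50):
    pred = w * x
    # chain rule: dL/dw = dL/dpred * dpred/dw = 2*(pred - y) * x
    grad = 2 * (pred - y) * x
    w -= lr * grad          # gradient descent update

print(round(w, 6))  # converges to 3.0
```

Backpropagation in a deep net is exactly this chain-rule bookkeeping, applied layer by layer; in that sense the individual update steps are mathematically well understood.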

You may be suggesting that people who are not in academia and not involved in research are the ones who don’t care or understand, in which case you may be right. But in general I think Matt is right in that a mathematical foundation is something existing ML researchers crave and look for, and something HTM doesn’t necessarily provide. Not a bad thing, just different approaches.




You’re wrong. The top people in DL, including LeCun, Hinton, Bengio, and even Karpathy, have stated many times that we don’t understand why deep learning works so well. People tried different things, and some of those things worked better than others. Watch Karpathy’s lectures, and you will hear a lot of “we are not really sure why this trick works so well.”
As an example, a simple question: why doesn’t gradient descent get stuck in poor local minima? No one really knows…
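What makes that question nontrivial is that in low dimensions plain gradient descent demonstrably *does* get trapped, yet on high-dimensional deep nets it empirically doesn’t seem to. A toy 1-D sketch (the function, start point, and step size are arbitrary choices for illustration):

```python
# Gradient descent trapped in a shallow local minimum.
# f(x) = (x**2 - 1)**2 + 0.3*x has a shallow minimum near x ~ +0.96
# and a deeper (global) one near x ~ -1.04. Starting on the right,
# plain gradient descent settles into the shallow one and never escapes.

def grad(x):
    # derivative of f(x) = (x**2 - 1)**2 + 0.3*x
    return 4 * x * (x**2 - 1) + 0.3

x = 1.5                      # start in the basin of the shallow minimum
for _ in range(2000):
    x -= 0.01 * grad(x)      # plain gradient descent step

print(round(x, 2))           # stuck near +0.96, far from the global minimum
```

Why the same dynamics rarely end up in comparably bad minima on million-parameter loss surfaces is the part no one has a full answer to.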