HTM Service-Based Approach - Pros/Cons


Hey guys,

What do you think the pros/cons would be for a service-based HTM implementation? I don’t want to use the term “microservice” since I’m afraid it may spark a lot of debate, but the idea is similar to a microservice approach.

Presently (disclaimer: my knowledge of NuPIC is limited), NuPIC is used much the way most data scientists use ML tools to solve a particular problem, which is essentially sequential: gather data, refine data, build a model, feed the data to the model, and repeat until the results are acceptable.

What would be the pros/cons of extending that usage to align with how systems are realistically deployed nowadays? Most are deployed as services (e.g. tiny Docker apps) that are independent of each other and scale in a distributed fashion; after all, the cloud is distributed, and it is where “living” apps run.

Every time I think about building an HTM system, I find myself considering the deployment and operational aspects of the system, in other words moving the HTM implementation from a library to a modular system (e.g. apps). By intuition (sorry, I have nothing to show yet), I think there are lots of applications that could benefit from the modular parts of an HTM system. As an example, the minicolumn is basically a small store of permanence values and works well for sequential pattern recognition; at first look, caching systems could benefit from its strengths while staying independent of the other HTM modules. Another example is the encoder module: how about an encoder module that learns? There are lots of possibilities and extensions I can think of if HTM modules were services themselves. Maybe not applicable yet, but I think these possibilities, once applied, could provide more practical solutions for existing computing systems.
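To make the encoder example concrete, here is a minimal sketch of a scalar encoder as a self-contained module with a narrow interface, the kind of boundary a standalone encoder service could expose. The class name, method, and parameters are illustrative assumptions, not NuPIC’s actual encoder API.

```python
# Hypothetical sketch: a scalar encoder as an independent module. The names
# (ScalarEncoder, encode, size, active_bits) are illustrative, not NuPIC's API.

class ScalarEncoder:
    """Maps a scalar in [min_val, max_val] to a fixed-width binary vector
    with `active_bits` contiguous 1s (a simplified classic HTM encoder)."""

    def __init__(self, size=64, active_bits=8, min_val=0.0, max_val=100.0):
        self.size = size
        self.active_bits = active_bits
        self.min_val = min_val
        self.max_val = max_val

    def encode(self, value):
        # Clamp the input, then place the block of active bits
        # proportionally to where the value sits in the range.
        value = max(self.min_val, min(self.max_val, value))
        span = self.size - self.active_bits
        start = int(round(span * (value - self.min_val)
                          / (self.max_val - self.min_val)))
        bits = [0] * self.size
        for i in range(start, start + self.active_bits):
            bits[i] = 1
        return bits

enc = ScalarEncoder()
sdr = enc.encode(50.0)
```

Because the interface is just “scalar in, bit vector out”, the same class could sit behind an HTTP endpoint without touching any other HTM component. Nearby values produce overlapping encodings, which is the property downstream HTM modules rely on.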

Interested in your thoughts.


We, here at Numenta, used to have an HTM service. It was an HTTP (REST) service. You could run swarms, create new models, and stream data to models, getting predictions and anomaly scores back. We put what we learned into HTM Engine, which contains the same model-swapping strategy we used in that implementation. The service worked pretty well. We even wrote JavaScript and Python clients for it. This was over 5 years ago, before we open-sourced the HTM algorithms.
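As a rough illustration of the request/response pattern such a service might use, here is a self-contained stub in pure Python: POST a record to a model endpoint, get a prediction and anomaly score back. The endpoint path, JSON field names, and the toy “model” are all assumptions for illustration; this is not the original Numenta service’s API.

```python
# Stub of an HTM-style HTTP model service: stream records in, get a
# prediction and anomaly score back. The "model" is a naive persistence
# forecast (predict the previous value); all names are made up.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen, Request

class StubModelHandler(BaseHTTPRequestHandler):
    last_value = None  # stand-in for real model state

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        record = json.loads(self.rfile.read(length))
        prev = StubModelHandler.last_value
        value = record["value"]
        # Stub model: predict the previous value; score the jump as anomaly.
        prediction = value if prev is None else prev
        anomaly = 0.0 if prev is None else min(1.0, abs(value - prev) / 100.0)
        StubModelHandler.last_value = value
        body = json.dumps({"prediction": prediction,
                           "anomalyScore": anomaly}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubModelHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/models/metric1/records"

def send(value):
    req = Request(url, data=json.dumps({"value": value}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

first = send(10.0)
second = send(60.0)
server.shutdown()
```

The point of the sketch is the shape of the interaction, not the model: a client anywhere on the network streams records and consumes scores, which is exactly what makes demos easier than running NuPIC locally.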

Cons are that services are hard to maintain. There are lots of logs, and lots of infrastructure. Not to mention security these days. You’re going to need to build out a strong software team and take a hands-on approach when constructing your services. Do not outsource this work.

Pros are that you have something that is actually really useful. There have been many times over the past 5 years I wished that service still existed. It would be much easier for demos to have an HTTP client hitting a NuPIC service than having to run NuPIC locally.


You might also be interested in this open source HTM HTTP server:

And here is a Python client:

I’m not sure what the state of these projects is these days. I would be surprised if they worked with NuPIC 1.0!


Thanks @rhyolight


One of the pros I can see is that, because HTM’s computational side is relatively straightforward compared to other ML algorithms out there, HTM services could be composed into interconnected, long-running systems/apps that could potentially do interesting things that can’t be achieved in a Jupyter notebook. What are these interesting things? I don’t really know, but looking at HTM as an evolutionary system (e.g. cellular automata), there must be some emergent features that would be interesting to study.


These inputs are really good to know. I’m currently studying the HTM code and have started writing apps to test my learning. I’m also presently immersed in building scalable infrastructure/apps, basically on Kubernetes and Docker. My current mindset is heavily influenced by that immersion, and I’m taking advantage of it to build HTM apps in a slightly different way than ML practitioners would. Interestingly, it seems to me that the basic HTM capabilities are very useful in systems work. Take, for example, prediction of system metrics injected into the common Kubernetes-plus-Prometheus setup. If such a service proves useful, it would be extremely valuable to operations engineers, since that software is widely used. Ok, too much talk from me, I gotta get to work. :slight_smile:
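As a hedged sketch of that metrics idea: feed a scraped system metric through an anomaly detector and render the score in Prometheus’s text exposition format, so Prometheus could scrape it like any other exporter. The rolling-mean detector below is a stand-in for a real HTM model, and the metric name is made up.

```python
# Sketch of an "anomaly exporter": score a metric stream and expose the
# result in Prometheus text exposition format. The detector is a simple
# rolling-mean placeholder, not an HTM model; the metric name is invented.
from collections import deque

class RollingAnomalyDetector:
    """Placeholder for an HTM model: scores how far a value sits from the
    recent mean, normalized by the recent spread."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def score(self, value):
        if not self.history:
            self.history.append(value)
            return 0.0
        mean = sum(self.history) / len(self.history)
        spread = max(max(self.history) - min(self.history), 1.0)
        self.history.append(value)
        return min(1.0, abs(value - mean) / (3 * spread))

def exposition(metric, value):
    """Render one gauge sample in Prometheus text exposition format."""
    return (f"# TYPE {metric} gauge\n"
            f"{metric} {value}\n")

detector = RollingAnomalyDetector()
for v in [50, 52, 51, 49, 50, 95]:  # the last value is a spike
    s = detector.score(v)

page = exposition("htm_cpu_anomaly_score", s)
```

In a real deployment the scrape loop would read from the Prometheus API (or a node exporter) and the page would be served over HTTP; the sketch only shows the service boundary: metric in, scrapeable score out.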