Is HTM/NuPIC used in Numenta’s research today?

Apologies for my ignorance - I have yet to go through all the papers, books, etc. Per the title, I'm just curious whether either is being used in their engineering research. Reading the forums a bit, I understand HTM hasn't found its way into very much besides some time-series use cases. The most I've found in recent papers is an extension of the HTM spatial pooler, which seems to be the most common and valuable feature. Good to see some use there, but is the spatial pooler the only "profitable" piece of HTM that keeps showing up in recent research? Is NuPIC at the core of the implementations, or are TF and PyTorch still the main frameworks being used? Is NuPIC used at all, or is it sitting on the sidelines until the time is right or it has matured enough?

It may sound ignorant to question whether the creators of HTM/NuPIC even use the tools in their own research and engineering, but I suppose that's what I'm curious about. Of course, I still need to catch up on all the papers (it's on the to-do list) :slight_smile:

Thanks in advance!

Hey @Gabriel, welcome!

I think the most robust application of NuPIC is NAB (the Numenta Anomaly Benchmark). Numenta tested HTM (Spatial Pooler + Temporal Memory + Anomaly Likelihood) and compared its performance against several other anomaly detectors on roughly 60 datasets with labeled anomalies drawn from different applied domains (Numenta Anomaly Benchmark | Numenta).
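To give a feel for what that detector does, here's a rough sketch of the pipeline: encoder -> Spatial Pooler -> Temporal Memory -> anomaly likelihood. The object and method names below are placeholders, not the exact NuPIC API (the real NAB detector wires this up through NuPIC's model machinery), so treat it as an outline of the flow rather than working code.

```python
def detect_anomalies(stream, encoder, sp, tm, likelihood):
    """Yield an anomaly likelihood for each (timestamp, value) in the stream.

    `encoder`, `sp`, `tm`, and `likelihood` stand in for the real NuPIC
    components; the method names here are illustrative only.
    """
    for timestamp, value in stream:
        encoding = encoder.encode(timestamp, value)   # scalar + time-of-day -> SDR
        active_columns = sp.compute(encoding)         # stable sparse representation
        raw_score = tm.compute(active_columns)        # fraction of columns the TM failed to predict
        # Raw scores are noisy, so NAB works with the likelihood that the current
        # score is anomalous given the recent distribution of scores.
        yield timestamp, likelihood.update(raw_score, value, timestamp)
```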

NuPIC itself has been in maintenance mode for a while and isn't a main focus for Numenta anymore, but the module it builds on, the Network API, is still used for research. It's the generic tool for constructing HTM networks. The standard setup is a single-region SP+TM network, but the more recent papers build and test multi-region networks for tasks beyond anomaly detection, like 3D object recognition!
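If it helps, building a network with the Network API looks roughly like this. I'm writing this from memory of the NuPIC examples, so the region type names and parameters are approximate (they've changed across NuPIC versions), and the example scripts in the NuPIC repo are the authoritative reference.

```python
import json
from nupic.engine import Network  # NuPIC's Network API

# Parameter dicts omitted for brevity -- see the Network API examples in the
# NuPIC repo for complete, working configurations.
sensor_params, sp_params, tm_params = {}, {}, {}

network = Network()
# Region type names vary between NuPIC versions (e.g. py.TPRegion vs. py.TMRegion),
# so check the version you're running.
network.addRegion("sensor", "py.RecordSensor", json.dumps(sensor_params))
network.addRegion("sp", "py.SPRegion", json.dumps(sp_params))
network.addRegion("tm", "py.TMRegion", json.dumps(tm_params))

# Wire the regions into a single SP+TM chain; multi-region networks just add
# more regions and links here.
network.link("sensor", "sp", "UniformLink", "")
network.link("sp", "tm", "UniformLink", "")

network.initialize()
network.run(100)  # feed 100 records from the sensor's data source through the network
```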

I think the most ‘profitable’ and distinctive single piece of HTM is the Temporal Memory. Its Hebbian learning mechanism is what enables the system to differentiate similar inputs based on context so quickly & space-efficiently.
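A quick way to see that context-sensitivity is to train a Temporal Memory on two sequences that share a middle subsequence and check what it predicts. The sketch below assumes an already-constructed TemporalMemory instance `tm` and an `encode()` helper that maps each symbol to a fixed set of active columns; both are left out here, and `compute()`/`reset()`/`getPredictiveCells()` may differ slightly between HTM implementations.

```python
# Two sequences share "B, C" but end differently, so the prediction after
# "B, C" has to depend on how the sequence started -- that's what the per-cell
# context in Temporal Memory buys you.
SEQ_1 = ["A", "B", "C", "D"]
SEQ_2 = ["X", "B", "C", "Y"]

for _ in range(20):                      # repeat so the Hebbian-style learning converges
    for seq in (SEQ_1, SEQ_2):
        tm.reset()                       # clear context between sequences
        for symbol in seq:
            tm.compute(encode(symbol), learn=True)

# Replay one context without learning and inspect the prediction.
tm.reset()
for symbol in ["A", "B", "C"]:
    tm.compute(encode(symbol), learn=False)
predicted = tm.getPredictiveCells()      # should line up with "D"'s columns, not "Y"'s
```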

I see applied HTM like a radish or carrot growing in a garden: most of the vegetable is invisible because it's underground, and you can only see the top. I think the raw learning capacity HTM has shown, both in anomaly detection and the more recent world-navigating tasks, suggests it has immense potential for applied AI. Numenta has developed the theory behind HTM and demonstrated it in a couple of settings (impressively, I'd say), but its big break that'll really grab headlines is yet to come.

** Disclaimer: Anything I said about Numenta’s priorities or intentions is from my general understanding, and could easily be outdated or incomplete! **


Excellent - I appreciate the thorough response and clarification! I’ll definitely be taking a closer look and playing around with it.

Thanks!
