NuPIC vs system requirements (vs embedded)

Hi Forum!

I was curious about the system resource usage when running NuPIC, so I did a very small investigation:
I kicked off the NAB with the command

$ python run.py -d numenta --detect --windowsFile ...

and in a separate terminal ran

$ ps -aux

I noticed that 13 processes were involved, and that their total RSS (Resident Set Size) was ~620 MB.
By adding timestamp printouts I also concluded that each execution took approximately 10 ms.
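For reference, the per-process numbers behind a total like that can be read straight from /proc on Linux instead of eyeballing `ps` output. A minimal sketch (the list of NAB worker PIDs would come from e.g. `pgrep`; here I just measure the current process):

```python
import os

def rss_mb(pid):
    """Resident set size of one process in MB, read from /proc (Linux only)."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # VmRSS is reported in kB
    return 0.0

# Summing rss_mb() over all worker PIDs gives a total like the ~620 MB above;
# as a demo, measure just the current interpreter.
print("%.1f MB" % rss_mb(os.getpid()))
```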

What if I wanted to run NuPIC on an embedded device?
What are the minimum requirements on HW?
NAB uses a RandomDistributedScalarEncoder and a DateEncoder (timeOfDay). How would the HW resource requirements scale if more encoders were added?
I guess that memory size is essential, since HTM mimics the function of the brain. There should also be some relation between the memory required and the length in time (and perhaps also the complexity) of the temporal patterns that can be learned. Is there some documentation on this topic?
Also, how would the execution time scale with less memory and/or fewer CPU cores (running at a lower clock frequency)?
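On the timing side, the per-record figure above can be reproduced with a small wall-clock wrapper instead of ad-hoc printouts. A sketch with a stand-in workload (in NAB you would wrap the detector's `handleRecord(inputData)` call, which isn't part of this self-contained snippet):

```python
import time

def timed(fn, *args):
    """Return (result, elapsed_ms) for a single call to fn."""
    t0 = time.time()
    result = fn(*args)
    return result, (time.time() - t0) * 1000.0

# Stand-in workload; in NAB, wrap each detector.handleRecord(inputData) call
result, ms = timed(sorted, range(100000))
print("call took %.2f ms" % ms)
```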



Hi @snaredrum (Christian),

One thing to note: when you run NAB you’re running more than just NuPIC; you’re running an entire anomaly benchmark apparatus that takes ~20 input files and runs them concurrently (hence the number of processes you detected), and then outputs a whole heckuvalot of information meant for anomaly detection evaluation.

NuPIC itself is a much tighter core of intelligent inferencing software, a small(er) part of that apparatus, and by itself it is very suitable for building applications that run on various devices (depending on the compatibility of its dependencies).

Summary: What you’re looking at is much more than the core NuPIC intelligent prediction software.

Note: this is coming from a non-Numenta employee, mind you…


Hi Cogmission!
Thank you for your reply! My NAB run was actually rather stripped down. What cannot be seen in my post is that I used a windowsFile that pointed to only one data file. Also, I used the --detect option, which tells NAB to do only the following:

  • Pick out model parameters with getScalarMetricWithTimeOfDayAnomalyParams()
  • Repeatedly call handleRecord(inputData), which, for each input record:
    – runs the underlying model on the record
    – computes the anomaly score, anomaly likelihood and logLikelihood
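For what it’s worth, the likelihood step in that per-record loop can be illustrated with a simplified Gaussian stand-in. This is not NuPIC’s actual anomaly_likelihood code (which estimates the distribution over a sliding window of scores), just the core idea: how unusual is the raw anomaly score relative to its recent history?

```python
import math

def anomaly_likelihood(score, history):
    """Simplified stand-in: Gaussian CDF of the raw anomaly score
    relative to the mean/std of recent scores."""
    mean = sum(history) / len(history)
    var = sum((s - mean) ** 2 for s in history) / len(history)
    std = math.sqrt(var) if var > 0 else 1e-6
    z = (score - mean) / (std * math.sqrt(2.0))
    return 1.0 - 0.5 * math.erfc(z)  # P(X <= score) under the fitted Gaussian

history = [0.1, 0.15, 0.05, 0.2, 0.1]     # recent raw anomaly scores
print(anomaly_likelihood(0.9, history))   # a spike: close to 1.0
print(anomaly_likelihood(0.12, history))  # an ordinary score: ~0.5
```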

My run contains no threshold-level optimization, no comparison of results against thresholds, etc.
I suspect that Numenta has also considered the system-requirements aspect of NuPIC, but I haven’t been able to find any info/documentation.



Actually, it’s the “detect” phase that spawns multiple processes (because it runs five or so files per category, of which there are roughly 12–20). From observation, the scoring and normalization phases are single-threaded.

