Bad predictions when detecting CPU usage

Q1) I am trying to run the "example of feeding CPU data into the OPF", and after some time I start getting totally incorrect predictions, mostly zero. Initially I thought of tuning the CLA parameters, so I collected a CPU usage dataset of my own and ran a swarm over it, but the resulting parameters came out almost identical to those in the existing model_params file. I cannot figure out where I am going wrong.

Q2) Is it correct that running the CLA through the OPF supports only a single level of hierarchy, while the Network API supports multiple levels? If so, how much difference would using the Network API make in terms of accuracy?


I’m not in a position to answer the first question, but for the second one, have a look at "H is for Hierarchy, but where is it?". The OPF only supports a single level of hierarchy at the moment.

Thanks @Element, that is helpful. In addition to my first question: the model gives a decent prediction when I raise my CPU usage, but fails when usage is at a normal level. Why is that?

Are you talking about the CPU example in the NuPIC repository at examples/opf/clients/cpu?

If so, I just tried it and it did not work for me, so I pushed a fix. With that fix, it works just fine for me. Here is the output graph:


Just to understand the example: the predicted line basically follows the actual one, shifted by 10 seconds. Is that the expected behaviour of a prediction?

Initially, before the model has had a chance to see much data, its predictions will lag behind actual values. As it sees more data and recognizes more patterns, performance tends to improve. In the example above it has not seen enough data to make good predictions yet.
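The improvement-with-repetition effect described above can be illustrated with a toy sequence learner (a sketch of my own, far simpler than HTM — the `online_predict` helper and its first-order transition counts are illustrative assumptions, not NuPIC code): early predictions on a repeating pattern miss, and once the pattern has been seen, they hit.

```python
from collections import defaultdict, Counter

def online_predict(seq):
    """Toy first-order sequence learner: predict the most frequently
    seen successor of the current symbol so far. Only loosely analogous
    to HTM, but it shows accuracy improving as patterns repeat."""
    counts = defaultdict(Counter)  # symbol -> Counter of next symbols
    hits = []
    for i in range(len(seq) - 1):
        cur, nxt = seq[i], seq[i + 1]
        guess = counts[cur].most_common(1)[0][0] if counts[cur] else None
        hits.append(guess == nxt)
        counts[cur][nxt] += 1  # learn after predicting
    return hits

pattern = [1, 2, 3] * 4  # the same short pattern repeated
hits = online_predict(pattern)
print(hits)  # first pass through the pattern misses, later passes hit
```

On this input the first three predictions fail (nothing has been learned yet) and the remaining eight succeed, which is the same qualitative story as the lagging predicted line in the graph above.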

OK, I have run the example code for half an hour and it still follows that pattern (see capture). Roughly how long should it run?


It depends on the data. Are there patterns in the data that you can discern with your own brain? At what interval are you sending in data? Given the example data you posted in the image above, it does not look very predictable. For example, the sudden drop and recovery in the data… if that pattern has been seen before at a certain time of day, or at a certain interval, HTM might predict it. But it would never predict a sudden change like this with no precedent.

The data is psutil.cpu_percent(interval=1), updated every 2 seconds, basically as in the example in the repo. I manually created some high CPU load with the stress tool, and the picture shows the moment where I switched to a lower CPU load.
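For reference, that sampling loop has roughly this shape (a sketch, not the repo code: the `sample_cpu` helper and the injectable `read_cpu`/`sleep` parameters are my own, with a stub standing in for `psutil.cpu_percent(interval=1)` so it runs anywhere):

```python
import time
from collections import deque

def sample_cpu(read_cpu, n_samples, period=2.0, sleep=time.sleep):
    """Collect n_samples CPU readings, one every `period` seconds.
    In the real example, read_cpu would be something like
    lambda: psutil.cpu_percent(interval=1)."""
    history = deque(maxlen=1000)  # keep a bounded window of readings
    for _ in range(n_samples):
        history.append(read_cpu())
        sleep(period)
    return list(history)

# Stubbed run: a fake CPU source and a no-op sleep.
fake_readings = iter([12.0, 15.5, 90.0, 88.0])
samples = sample_cpu(lambda: next(fake_readings), 4, sleep=lambda s: None)
print(samples)  # [12.0, 15.5, 90.0, 88.0]
```

Each reading would then be fed into the model one record at a time, which is why the effective data rate is one value every ~2 seconds.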

My goal is to put together a demo of host metrics with prediction and anomaly detection that shows visible results within a few minutes. Any advice would be appreciated!


CPU patterns, in our experience, work well when applied to servers running automated processes. It is not so useful when applied directly to your own CPU because human behavior is a bit erratic. Your CPU usage will have patterns, but they will likely be daily or weekly patterns.

I would suggest you aggregate the CPU input into 10-minute intervals and run for a few days, which might not be feasible if you are running on your own work machine. But this does work. There is a product called Grok that runs NuPIC against AWS CloudWatch metrics to provide anomaly indications (and more).
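The suggested roll-up from 2-second samples to 10-minute intervals could be done with a simple mean-per-window pass like the following (a sketch; the `aggregate` helper is my own, not part of NuPIC):

```python
def aggregate(samples, window):
    """Mean-aggregate (timestamp_seconds, value) samples into
    fixed-width windows of `window` seconds."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // window), []).append(v)
    # One (window_start, mean_value) pair per non-empty window.
    return [(k * window, sum(vs) / len(vs)) for k, vs in sorted(buckets.items())]

# 2-second samples rolled up into 10-minute (600 s) windows:
raw = [(0, 10.0), (2, 20.0), (600, 50.0), (602, 70.0)]
print(aggregate(raw, 600))  # [(0, 15.0), (600, 60.0)]
```

Aggregating this way smooths out second-to-second noise so the model sees the slower daily or weekly patterns rather than individual CPU spikes.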


So if the algorithm is unable to detect any pattern, it will just output the input data with a delay?

Yes. Typically, when no patterns have been learned yet and there is no better inference available, the model simply predicts what it has just seen.
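That fallback behaves like a persistence forecast, which is exactly what produces a predicted line that tracks the actual one with a fixed shift (a sketch of the effect; the `persistence_forecast` helper is illustrative, not NuPIC code):

```python
def persistence_forecast(series):
    """Predict each step as the previously observed value --
    the 'predict what it has just seen' fallback described above."""
    return [series[i - 1] for i in range(1, len(series))]

actual = [10, 12, 50, 48, 11]
predicted = persistence_forecast(actual)
print(predicted)  # [10, 12, 50, 48] -- the actual curve, shifted one step
```

So a prediction curve that merely shadows the input with a delay is a sign the model has not yet found any structure it can exploit, not a sign that it is broken.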