Questions about using saved models


Hi everyone,

Right now we’re trying to use previously trained models for anomaly detection. After training with a revised dataset, we disabled learning and saved the model.

model = createAnomalyDetectionModel(dataFrame)

After that we load the model and test it with a real dataset, but we’ve noticed that the first values (around 400 of them) aren’t evaluated; instead they always get the same anomaly likelihood (0.5). It looks like we are not using the saved model and need to train again.

trained_model = ModelFactory.loadFromCheckpoint('/scalarModels/model_params/RequestCount.model')

Does NuPIC need some time to wake up? Maybe I’m wrong, but I thought that anomaly detection based on a trained model would work from the first value.




You’ve found a deficiency in NuPIC. Currently, saving a model in the OPF will not save the state of the anomaly likelihood, because it is a post-process. You could probably pickle the AnomalyLikelihood instance yourself, then manually resurrect it and re-attach it to the model.
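That pickle round-trip could be sketched like this. The stub class below is an assumption standing in for NuPIC’s `AnomalyLikelihood` (which I believe lives in `nupic.algorithms.anomaly_likelihood`) so the sketch runs without NuPIC installed; the real instance pickles the same way:

```python
import pickle

# Stand-in for nupic.algorithms.anomaly_likelihood.AnomalyLikelihood, so this
# sketch runs without NuPIC; the real class is pickled with identical calls.
class AnomalyLikelihoodStub(object):
    def __init__(self):
        self._history = []  # internal state that a model checkpoint would lose

    def anomalyProbability(self, value, rawAnomalyScore, timestamp):
        self._history.append((timestamp, value, rawAnomalyScore))
        return 0.5  # the real class returns a likelihood once warmed up

likelihood = AnomalyLikelihoodStub()
likelihood.anomalyProbability(20.0, 0.1, "2016-01-01 00:00:00")

# Pickle the likelihood helper whenever you checkpoint the model...
blob = pickle.dumps(likelihood)

# ...and resurrect it next to ModelFactory.loadFromCheckpoint() on restart,
# so its accumulated state survives along with the model.
restored = pickle.loads(blob)
print(restored._history)  # same state as before the save
```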


The anomaly service in htmengine (from the numenta-apps repo) deals with this by saving the anomaly state in MySQL as part of its normal processing of many models.



Thanks a lot for your help, I will read the HTM Engine code.

I’m trying to build the following architecture:

json input --> Trained Nupic Docker --> ElasticSearch <-- Kibana

A simpler architecture than the HTM Engine, without RabbitMQ or MySQL.

As a first step I was using the HTM Engine, but I was not able to use a complex model.



Hi @juanhorta, thanks for your help. I have implemented anomaly likelihood almost exactly as the method suggested by @rhyolight. I am pickling one month of data, which includes the timestamp, raw anomaly score, and scalar input value. I feed the pickled values back in when calling the anomalyProbability() function, and I update the pickled data each time I come out of anomalyProbability(). I am able to get results without the cold-start issue for the first 388 values. I still need to verify the results. If you need more information, reach out to me.
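The warm-start pattern described above could look roughly like this. The helper name `warmStart` and the counting stub are illustrative assumptions, not code from the thread; `likelihood` is any object exposing `anomalyProbability(value, rawScore, timestamp)`, such as NuPIC’s AnomalyLikelihood:

```python
import pickle

def warmStart(likelihood, blob):
    """Replay pickled (timestamp, value, rawScore) history records through
    anomalyProbability() so the helper starts warm instead of at 0.5."""
    for timestamp, value, rawScore in pickle.loads(blob):
        likelihood.anomalyProbability(value, rawScore, timestamp)

class CountingLikelihood(object):  # stand-in so the sketch runs without NuPIC
    def __init__(self):
        self.calls = 0

    def anomalyProbability(self, value, rawScore, timestamp):
        self.calls += 1
        return 0.5

# One pickled (timestamp, value, rawScore) record per historical row.
history = [("2016-01-0%d" % day, float(day), 0.1) for day in range(1, 4)]
blob = pickle.dumps(history)

likelihood = CountingLikelihood()
warmStart(likelihood, blob)
print(likelihood.calls)  # 3: one replayed call per historical record
```

After replaying, you would append each new live record to the history and re-pickle it, as the post describes, so the next restart warm-starts from the updated data.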


Hi Taylor, I have reached the save/load stage now, and I have some questions:

  1. Has the ‘deficiency’ mentioned in this topic been repaired? If not, do you mean I should use the ‘pickle’ module? As I am not familiar with this module, could you give me some more information about how to save the likelihood using pickle?

  2. I have found the right place to learn how to save/load models:
    However, I am a little confused by it. Reading the code of htm_prediction_model (and model_factory), I found there are two ways to save/load a model:

  • save/load: internally, these take advantage of the pickle module.

  • readFromCheckpoint/writeToCheckpoint (or loadFromCheckpoint in model_factory, which internally calls readFromCheckpoint): behind these is capnp.

    Both of these have been deprecated, according to the “Serialization” page (i.e., the URL mentioned above). Since the methods are still in the model, does that mean I can still use either of them? If so, what is the difference between these two ways with respect to the saved model?

  3. On the page mentioned above (i.e., the URL above), writeToFile/readFromFile are the recommended methods, but I could not find these functions in the htm_prediction model. Does that mean I can’t use them yet? If they can be used now, do they work similarly to the methods mentioned above? Say I would like to save a model: can I just write model.writeToFile(dir)? The example only illustrates how to save an SP, so I don’t know if it is the same for a model.

  4. I found a topic: “Load/Save model may lose precision?”
    Does this affect the result or the model much?

Lastly, there is another question :blush:
  5. HTM loses its prediction ability
    In that topic I asked a question; could you give an answer?

Thanks :blush:


No, it is not repaired, and we don’t plan on repairing it. You can pickle the Anomaly components of your models yourself. Just look up how to pickle objects with Python.

Yes, you can still use them. The new way to serialize is more granular, meaning you save smaller objects like algorithm instances. The old way using pickle will pickle the entire model object, including the algorithm instances. If you use the old way it should still work.
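For example, the old coarse-grained way is a plain pickle round-trip of the whole model object. The `TinyModel` class below is only a stand-in so the sketch runs without NuPIC; with NuPIC you would pickle your HTMPredictionModel instance the same way:

```python
import os
import pickle
import tempfile

class TinyModel(object):  # stand-in for an HTMPredictionModel instance
    def __init__(self, params):
        self.params = params

model = TinyModel({"inferenceType": "TemporalAnomaly"})

# Old way: one pickle call captures the entire model object, including
# whatever algorithm instances are nested inside it.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f, protocol=pickle.HIGHEST_PROTOCOL)

# Later, restore the whole model in one call.
with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored.params["inferenceType"])  # TemporalAnomaly
```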

Right, that model is not serializable using the new way. Only the algorithm instances themselves are serializable. If you are using HTMPredictionModel in the OPF, you might find it much easier to use the old way, but it will be slower.

I don’t totally understand that issue. You might want to comment there. It does seem like there is a bug there, and I filed a ticket about it, but not sure it will get worked on soon.


Thanks, Taylor, got it. You have, once again, resolved my puzzles. I have saved several models now, and I will try to save the likelihood using pickle. Thanks a lot again, sincerely.