Right now we’re trying to use previously trained models for anomaly detection. So after training with a revised dataset, we disabled learning and saved the model.
model = createAnomalyDetectionModel(dataFrame)
After that we load the model and test it with a real dataset, but we’ve noticed that the first values (around 400) aren’t evaluated; instead they always have the same anomaly likelihood (0.5). It looks like we are not using the saved model and need to train again.
You’ve found a deficiency in NuPIC. Currently, saving a model in the OPF will not save the state of the anomaly likelihood because it is a post-process. You could probably pickle the AnomalyLikelihood instance yourself, then manually resurrect it alongside the model and re-attach it.
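The suggested workaround can be sketched roughly as follows. This is not NuPIC code: `FakeAnomalyLikelihood` is a minimal stand-in for `nupic.algorithms.anomaly_likelihood.AnomalyLikelihood` (the real class maintains a history of raw scores used to estimate the likelihood distribution, which is exactly the state that `model.save()` loses); the file name is arbitrary.

```python
import pickle

# Stand-in for nupic.algorithms.anomaly_likelihood.AnomalyLikelihood.
# The real class accumulates historical (timestamp, value, rawScore)
# records internally; that history is the state worth preserving.
class FakeAnomalyLikelihood(object):
    def __init__(self):
        self.historical_scores = []

    def anomalyProbability(self, value, anomalyScore, timestamp):
        self.historical_scores.append((timestamp, value, anomalyScore))
        return 0.5  # placeholder; the real class computes a likelihood

likelihood = FakeAnomalyLikelihood()
likelihood.anomalyProbability(10.0, 0.02, "2018-01-01 00:00")

# Save the likelihood state alongside the OPF model checkpoint.
with open("anomaly_likelihood.pkl", "wb") as f:
    pickle.dump(likelihood, f)

# Later: restore it, re-attach it to your anomaly pipeline, and keep
# feeding it new raw anomaly scores as the model runs.
with open("anomaly_likelihood.pkl", "rb") as f:
    likelihood = pickle.load(f)
```

The key point is that pickling and unpickling preserves the accumulated score history, so the restored instance does not have to re-learn the score distribution from scratch.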
Hi @juanhorta, thanks for your help. I have implemented anomaly likelihood almost exactly as @rhyolight suggested. I am pickling one month of data, which includes timestamp, raw anomaly score, and scalar input value. I have modified anomaly_likelihood.py to feed in the pickled values when calling anomalyProbability(). I update the pickled values each time I return from anomalyProbability(). I am able to get results without the cold-start issue for the first 388 values. I still need to verify the results. If you need more information, reach me at @firstname.lastname@example.org.
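For anyone following along, the load-before / save-after pattern described above might look something like this. Everything here is a hypothetical sketch: the file name, the `MAX_RECORDS` value (a rough one-month window at minute resolution), and the helper names are all assumptions, not NuPIC API.

```python
import pickle
from collections import deque

HISTORY_FILE = "likelihood_history.pkl"
MAX_RECORDS = 44640  # ~one month of minute-resolution records (assumption)

def load_history():
    """Reload the persisted (timestamp, value, rawScore) records, if any."""
    try:
        with open(HISTORY_FILE, "rb") as f:
            return pickle.load(f)
    except IOError:
        # First run: start with an empty, bounded history.
        return deque(maxlen=MAX_RECORDS)

def save_history(history):
    """Re-pickle the history after each anomalyProbability() call."""
    with open(HISTORY_FILE, "wb") as f:
        pickle.dump(history, f)

# Load before computing the likelihood, append the new record, save after.
history = load_history()
history.append(("2018-01-01 00:00", 10.0, 0.02))
save_history(history)
```

Using a bounded `deque` keeps the pickled file at roughly one month of data automatically, which matches the rolling window described in the post.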
Hi Taylor, I have reached the save/load stage now, and I have some questions:
Has the ‘deficiency’ mentioned in this topic been repaired? If not, do you mean to use the ‘pickle’ module? As I am not familiar with this module, could you give me some more information about how to save the likelihood using pickle?
save/load: inside the code, these take advantage of the pickle module.
readFromCheckpoint/writeToCheckpoint (or loadFromCheckpoint of model_factory, which internally calls readFromCheckpoint): behind these is capnp.
Both of these have been deprecated, according to the “Serialization” page (i.e., the URL mentioned above). Since the methods are still on the model, does that mean I can still use either of them? If so, I wonder what the difference is between these two ways with respect to the saved model.
On the page mentioned above (i.e., the URL above), writeToFile/readFromFile are the recommended methods, but in the HTM prediction model I could not find these functions; does that mean I can’t use them yet? If they can be used now, do they work similarly to the methods mentioned above? Say I would like to save a model, can I just do model.writeToFile(dir)? The example only illustrates how to save an SP, so I don’t know if it is the same for a model.
No, it is not repaired, and we don’t plan on repairing it. You can pickle the anomaly components of your models yourself. Just look up how to pickle objects with Python.
Yes, you can still use them. The new way to serialize is more granular, meaning you save smaller objects like algorithm instances. The old way using pickle will pickle the entire model object, including the algorithm instances. If you use the old way, it should still work.
Right, that model is not serializable using the new way. Only the algorithm instances themselves are serializable. If you are using HTMPredictionModel in the OPF, you might find it much easier to use the old way, but it will be slower.
I don’t totally understand that issue. You might want to comment there. It does seem like there is a bug there, and I filed a ticket about it, but not sure it will get worked on soon.
Hi, sorry for taking the liberty of bothering you. May I ask how you save/reload a trained model with the HTM NuPIC library? I am currently working on a project that needs to store the model state, and I ran into some issues when saving the trained model. When I saw your post on the HTM forum saying that you successfully saved and reloaded the model, I was excited and curious about how you saved it, and what environment or procedure you used. I’d appreciate any instruction/suggestion/reference from you, and thank you so much for your time and patience!
FYI, when I called save() on the trained OPF model, I got the following traceback:
Traceback (most recent call last):
File "test.py", line 413, in <module>
File "/Users/ruzhong/Library/Python/2.7/lib/python/site-packages/nupic/frameworks/opf/model.py", line 360, in save
File "/Users/ruzhong/Library/Python/2.7/lib/python/site-packages/nupic/frameworks/opf/htm_prediction_model.py", line 1429, in _serializeExtraData
File "/Users/ruzhong/Library/Python/2.7/lib/python/site-packages/nupic/engine/__init__.py", line 729, in save
engine_internal.Network.save(self, *args, **kwargs)
File "/Library/Python/2.7/site-packages/nupic/bindings/engine_internal.py", line 1214, in save
return _engine_internal.Network_save(self, *args, **kwargs)
SystemError: NULL result without error in PyObject_Call
Thanks again for your time and patience! I’d appreciate it if you could reply at your convenience.
Hi, may I know how you saved your model? I am currently having some issues with saving the OPF model, as I indicated in my last reply to @vikash0837 in this post. Thanks for your time and patience, and I would appreciate your help!
I’d also appreciate any suggestions from @juanhorta on saving/reloading the model state, since I have been having some issues with saving the model state recently (please refer to my reply with the error trace to @vikash0837). Thanks!