How to update and persist models after a prediction set?

Hello - I am a newbie to HTM interested in trying the anomaly detection code. I was able to get it up and running quickly, and the setup was a breeze! A few questions:
a) After swarming, I use the model to detect anomalies on new incoming streams. The model gets updated as new data arrives. Is there a way to persist the updated model after a prediction set? Assuming we can persist it, would there be any difference between such a persisted model and a re-swarmed model built with ALL the data in hand?
b) I see the maximum number of cores supported for swarming is 32. Is there a way to use multiple machines? Even if it is not automated - if we could run it across different machines somewhat manually and join the results, it would save time.

Thank you! It’s always exciting to try new tools, and I’m glad I was able to set this one up so quickly.


Yes.

Short answer: no.

Swarming produces a best guess at the model params that should be used for a given data sample. When a new model is created with those params, it is a blank slate; as data is passed into the model, it learns and changes. Persisting the model and resurrecting it doesn’t change it at all - it is as if you never saved it, and the state of the model is the same. You can pause a data stream, save the model, bring it back a week later, and restart the stream where it left off without any problems.
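For instance, that pause/save/resume flow looks roughly like the sketch below using the NuPIC OPF API. The `model_params` dict and the `first_batch` / `next_batch` record variables are placeholders (they depend on your swarm output and data), and the import path differs slightly between NuPIC versions:

```python
# Minimal sketch of checkpointing a NuPIC OPF model mid-stream and
# resuming it later. NOTE: older NuPIC releases use
# nupic.frameworks.opf.modelfactory instead of model_factory.
from nupic.frameworks.opf.model_factory import ModelFactory

# 'model_params' is assumed to be the params dict produced by the swarm.
model = ModelFactory.create(model_params)
model.enableInference({"predictedField": "value"})

# Feed part of the stream, then checkpoint the learned state to disk.
for record in first_batch:   # e.g. {"timestamp": ..., "value": ...} dicts
    model.run(record)
model.save("/tmp/htm_checkpoint")

# Days later: resurrect the model and continue where the stream left off.
model = ModelFactory.loadFromCheckpoint("/tmp/htm_checkpoint")
for record in next_batch:
    model.run(record)
```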

Not today, but pull requests are welcome!

I’m happy you had a good setup experience. :slight_smile:

FYI, I just implemented a simple data model to store HTM models in a database. Right now they’re stored in MySQL, but I based the data model class on PeeWee, so you should be able to store the HTM models in whatever DB you wish.

EDIT: I went ahead and uploaded the code to GitHub. You can find it in the repo mellertson/PeeWeeExtension.
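For anyone curious, the general idea looks roughly like the toy sketch below (this is not the actual PeeWeeExtension code). It assumes the HTM model can be pickled and simply stores the bytes in a blob column; swap `SqliteDatabase` for `MySQLDatabase` (or any other peewee backend) as needed:

```python
# Toy sketch of the idea (NOT the actual PeeWeeExtension code): pickle an
# HTM model and store the bytes in a blob column via peewee.
import pickle
from peewee import Model, CharField, BlobField, SqliteDatabase

# Any peewee backend works here; MySQLDatabase(...) is the MySQL equivalent.
db = SqliteDatabase("htm_models.db")

class HTMModel(Model):
    name = CharField(unique=True)   # lookup key for the stored model
    blob = BlobField()              # pickled model bytes

    class Meta:
        database = db

db.connect()
db.create_tables([HTMModel], safe=True)

def save_model(name, htm_model):
    """Serialize a model object and store it under the given name."""
    HTMModel.create(name=name, blob=pickle.dumps(htm_model))

def load_model(name):
    """Load and deserialize a previously stored model."""
    row = HTMModel.get(HTMModel.name == name)
    return pickle.loads(row.blob)
```

NuPIC models also have their own checkpoint serialization, so the actual repo may serialize them differently before writing to the database.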
