Fault Tolerance on HTM Models


I am very new to HTM and have just started reading about HTM and OPF. I have a few basic questions; I'd appreciate some clarification.

In OPF, are the models stored in memory? Below is my use case.

Use case:

I have streams of data coming from external systems, and I am planning to use HTM to detect anomalies in them. An OPF client instantiates the model, reads the stream of data, and gives me an anomaly score. Will that model be held in memory? If I have to reboot that machine for some reason, the patterns learned by the model are lost, right? I'd have to restart the model and replay the old events before consuming new ones, right?

Please correct me if my understanding is wrong.

Is there a way I can take snapshots of the model regularly and store them on a persistent system like NFS?


Hi @Madabhattula_Rajesh. Firstly, I updated the topic of this thread to read “HTM” instead of “CLA” (we don’t use that term anymore to refer to our algorithms).

Yes, they are stored in RAM.

See our FAQ: Can I Save and Restore Models?
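To make the snapshot idea concrete, here is a minimal sketch of periodic checkpointing during a stream. Note the hedges: `AnomalyModelStub`, `save_checkpoint`, and `CHECKPOINT_EVERY` are stand-ins I made up for illustration, and the stub is pickled only so the example is self-contained; an actual NuPIC OPF model would be checkpointed with its own save/restore API (see the FAQ linked above) rather than raw pickle.

```python
import os
import pickle
import tempfile

class AnomalyModelStub:
    """Stand-in for an OPF model (hypothetical; not a NuPIC class).
    It only counts the records it has seen."""
    def __init__(self):
        self.records_seen = 0

    def run(self, record):
        self.records_seen += 1
        return 0.5  # placeholder anomaly score

def save_checkpoint(model, path):
    # Write atomically: dump to a temp file, then rename over the
    # target, so a crash mid-write never leaves a corrupt checkpoint.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(model, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Checkpoint every N records while consuming the stream.
CHECKPOINT_EVERY = 100
checkpoint_path = os.path.join(tempfile.mkdtemp(), "model.ckpt")

model = AnomalyModelStub()
for i, record in enumerate({"value": v} for v in range(250)):
    model.run(record)
    if (i + 1) % CHECKPOINT_EVERY == 0:
        save_checkpoint(model, checkpoint_path)

# After a reboot, restore the last checkpoint and replay only the
# records that arrived after it, instead of the whole history.
restored = load_checkpoint(checkpoint_path)
print(restored.records_seen)  # 200: last checkpoint was at record 200
```

With this pattern, a reboot costs you at most `CHECKPOINT_EVERY` records of replay (here, records 201–250) rather than the whole history; the atomic rename keeps the checkpoint file valid even if the process dies mid-save.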

Thank you for the details and pointers