I am very new to HTM. I just started reading about HTM and the OPF, and I have a few basic questions. Please clarify them for me.
In the OPF, are models stored only in memory? Below is my use case.
I have streams of data coming from external systems in which I want to detect anomalies, and I am planning to use HTM for this. The OPF client instantiates the model, reads the streams of data, and gives me an anomaly score. Will that model live only in memory? If I have to reboot that machine for some reason, the patterns learned by the model are lost, right? Then I have to restart the model and replay old events before I can start consuming new ones, right?
Please correct me if my understanding is wrong.
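To make my setup concrete, here is roughly the loop I have in mind. The model class and scoring logic below are crude stand-ins of my own, not the real NuPIC OPF API:

```python
class ToyAnomalyModel:
    """Stand-in for the real OPF model, just to illustrate the flow.

    Scores each value by its deviation from a running mean; the real
    model would return an HTM anomaly score instead.
    """

    def __init__(self):
        self.mean = 0.0
        self.count = 0

    def run(self, value):
        # Crude stand-in score: relative distance from the running mean.
        score = abs(value - self.mean) / (abs(self.mean) + 1.0)
        # Update the running mean (this is the "learned" state that
        # would be lost on a reboot).
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return score


def consume(stream, model):
    """Feed each record from the stream to the model, collect scores."""
    return [model.run(value) for value in stream]


model = ToyAnomalyModel()
scores = consume([10.0, 10.5, 9.8, 50.0], model)
print(scores)
```

The point is that all learned state lives inside the `model` object in process memory, which is why I am worried about reboots.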
Is there a way I can take snapshots of the model regularly and store them in a persistent system like NFS?
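This is the kind of periodic-snapshot pattern I am after. It is a generic sketch using plain `pickle` with numbered snapshot files, not the actual OPF persistence API (the `snapshot`/`load_latest` helpers are my own names):

```python
import os
import pickle


def snapshot(model, directory, keep=3):
    """Pickle the model to a new numbered file, then prune old snapshots."""
    os.makedirs(directory, exist_ok=True)
    existing = sorted(os.listdir(directory))
    # Number the new snapshot one past the highest existing index so
    # pruned indices are never reused.
    indices = [int(name.split("-")[1].split(".")[0]) for name in existing]
    next_index = max(indices) + 1 if indices else 0
    path = os.path.join(directory, "model-%06d.pkl" % next_index)
    with open(path, "wb") as f:
        pickle.dump(model, f)
    # Keep only the most recent `keep` snapshots.
    for old in sorted(os.listdir(directory))[:-keep]:
        os.remove(os.path.join(directory, old))
    return path


def load_latest(directory):
    """Restore the most recent snapshot after a restart."""
    latest = sorted(os.listdir(directory))[-1]
    with open(os.path.join(directory, latest), "rb") as f:
        return pickle.load(f)
```

On restart I would call `load_latest` on the NFS directory instead of replaying the whole event history, and only replay events that arrived after the last snapshot.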