Can I make OPF model serialization faster?

Hello!

I’m using HTM on a system with hundreds, potentially thousands, of models. I can’t keep all of them in memory at all times, and each model only gets a data point every 30 minutes, so I’m storing them on disk (via model.writeToFile) and loading them when needed.
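Roughly, each update cycle looks like this (a simplified sketch; the path and field name are placeholders for my actual setup, and the import path is the one from nupic 1.0.5):

```python
# One update cycle: load from disk, feed the new data point, store back.
# Simplified sketch; path and field name are placeholders.
from nupic.frameworks.opf.htm_prediction_model import HTMPredictionModel


def update_model(path, row):
    with open(path, "rb") as f:
        model = HTMPredictionModel.readFromFile(f)  # deserialize from disk
    result = model.run(row)                         # feed the new data point
    with open(path, "wb") as f:
        model.writeToFile(f)                        # serialize back to disk
    return result


result = update_model("/models/sensor-42.bin", {"value": 3.14})
```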

My problem is the time serialization takes (the disk write itself is fast): between 6.5 s and 6.7 s per model, which means I can only update about 270 models (1800 s / ~6.6 s) before the next data point comes in. Processing multiple models in parallel is not a viable option on my target production systems.

I think I could save some time by only serializing and storing the parts of a model that have changed (e.g. skipping the SP once it is no longer learning). What do you think?


As far as I know, an HtmPredictionModel instance does not have a writeToFile function; it has a save function. writeToFile is a method of the newer serialization API, which currently uses capnp.

Can you please confirm which object you are calling writeToFile on?

I’m calling writeToFile (and readFromFile) on an HtmPredictionModel object, which seems to work on nupic 1.0.5.

Edit:
I have just tested the save method on my models and it’s much faster: below 600 ms, an order of magnitude faster than writeToFile. The fact that it’s deprecated, though, makes me hesitant to actually use it.
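For reference, this is roughly how I compared the two (simplified; `model` is one of my HtmPredictionModel instances and the paths are placeholders; save() expects an absolute directory path):

```python
# Rough timing comparison of the two serialization APIs.
# Assumes `model` is an existing HtmPredictionModel; paths are placeholders.
import time

start = time.time()
with open("/tmp/model.bin", "wb") as f:
    model.writeToFile(f)             # capnp-based API: ~6.5 s for my models
print "writeToFile: %.2fs" % (time.time() - start)

start = time.time()
model.save("/tmp/model_checkpoint")  # deprecated checkpoint API: <600 ms
print "save: %.2fs" % (time.time() - start)
```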


The fact that an HtmPredictionModel has those functions is strange. Don’t use them.

And yes, the save function still works. You can test it by calling loadFromCheckpoint and continuing processing from the same point. It should not affect predictions or anomaly detection.
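Something along these lines (a sketch; assumes `model` is your HtmPredictionModel, and the path and field name are placeholders):

```python
# Round-trip check: checkpoint, reload, and verify both copies keep
# producing identical results on the same inputs. Sketch only; the path
# and field name are placeholders.
from nupic.frameworks.opf.model_factory import ModelFactory

model.save("/tmp/model_checkpoint")  # absolute directory path
restored = ModelFactory.loadFromCheckpoint("/tmp/model_checkpoint")

for value in (10.0, 20.0, 30.0):
    row = {"value": value}
    assert model.run(row).inferences == restored.run(row).inferences
```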


Ok, thank you. 🙂