So I’m trying to get the SDR classifier working reliably (first time using it). My implementation loads the classifier object from disk and saves it back out at each time step, since it’s learning too.
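For context, the per-step persistence is roughly this pattern (a sketch: the stand-in class below is just a placeholder for the real `SDRClassifier`, and the path/helper names are mine, not from my actual code):

```python
import os
import pickle
import tempfile

# Placeholder standing in for nupic.algorithms.sdr_classifier.SDRClassifier.
# The real object carries its internal weight matrices, which is presumably
# what makes the pickled file grow as learning proceeds.
class FakeClassifier(object):
    def __init__(self):
        self.history = []

CLF_PATH = os.path.join(tempfile.gettempdir(), "classifier.pkl")

def save_classifier(clf, path=CLF_PATH):
    # Overwrite the on-disk copy after every compute() call
    with open(path, "wb") as f:
        pickle.dump(clf, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_classifier(path=CLF_PATH):
    with open(path, "rb") as f:
        return pickle.load(f)

clf = FakeClassifier()
clf.history.append(1)
save_classifier(clf)
restored = load_classifier()
print(restored.history)  # → [1]
```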
The immediate issue I’m hitting is that the classifier objects get very big – over 200 MB for single-field models, and almost 4 GB for a big 33-field model. I know 33-field models aren’t generally best practice; I’m just testing the capacity limits.
Here’s how I’m using the classifier. If you see anything questionable-looking please let me know!
One thing about this implementation is that the HTM models have their inference type set to TemporalAnomaly so that they use the backtracking TM. This means there are no classifier params, and the OPF model.run() outputs no raw-data forecasts, just anomaly scores. So I’m adding this classifier afterwards to get those forecasts too. I think it should be possible, since the classifier’s compute function just needs the encoding bucket index, the actual value, and the active cell indices from the TM – but maybe this is problematic?
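In case it helps to see what I mean, this is roughly the shape of the per-step call I have in mind (a sketch: the stub class below stands in for the real `SDRClassifier` – whose `compute()` takes these same arguments in NuPIC – and `step`, the encoder lambda, and the result-dict layout are illustrative assumptions, not my actual code):

```python
# Stub standing in for nupic.algorithms.sdr_classifier.SDRClassifier.
class StubSDRClassifier(object):
    def compute(self, recordNum, patternNZ, classification, learn, infer):
        # The real classifier returns a dict of per-step predictions plus
        # an "actualValues" list; this stub just echoes the input value.
        return {"actualValues": [classification["actValue"]], 1: [1.0]}

def step(classifier, record_num, model_result, encoder, value):
    # bucketIdx: which encoder bucket the raw scalar falls into
    bucket_idx = encoder(value)
    # patternNZ: indices of active TM cells, pulled from the model's result
    pattern_nz = model_result["activeCells"]
    return classifier.compute(
        recordNum=record_num,
        patternNZ=pattern_nz,
        classification={"bucketIdx": bucket_idx, "actValue": value},
        learn=True,
        infer=True,
    )

clf = StubSDRClassifier()
result = step(clf, 0, {"activeCells": [2, 7, 31]}, lambda v: int(v) % 10, 4.2)
print(result["actualValues"])  # → [4.2]
```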