SDR Classifier (am I using it right?)

Hi all,

So I’m trying to get the SDR classifier working reliably (first time using it). My implementation loads the classifier object from disk and saves it back out at each time step, since it’s learning online as well.

The immediate issue I’m hitting is that the classifier objects get very big: over 200 MB for single-field models, and almost 4 GB for a big 33-field model. I know 33-field models aren’t generally best practice; I’m just testing the capacity limits.

Here’s how I’m using the classifier (sketched from memory below, so exact parameter values and variable names are approximate). If you see anything questionable-looking please let me know!

Import:
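(I’m assuming pickle for the disk round-trip; cPickle since NuPIC is Python 2.7:)

```python
import cPickle as pickle  # NuPIC runs on Python 2.7

from nupic.algorithms.sdr_classifier import SDRClassifier
```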

Initialize:
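(One classifier per predicted field; alpha and actValueAlpha below are the NuPIC defaults, my real values may differ:)

```python
# 1-step-ahead forecasts; alpha/actValueAlpha shown at NuPIC defaults
classifier = SDRClassifier(steps=[1], alpha=0.001, actValueAlpha=0.3, verbosity=0)
```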

Implement:
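(Per timestep, right after model.run(); t, activeCellIndices, bucketIdx and actualValue stand in for my actual variables. Note that learn and infer are both on in the same call:)

```python
# Feed the TM's active cells plus the encoder bucket for the actual value
result = classifier.compute(
    recordNum=t,                          # increments every timestep
    patternNZ=activeCellIndices,          # active cell indices from the TM
    classification={"bucketIdx": bucketIdx, "actValue": actualValue},
    learn=True,
    infer=True,
)

# result["actualValues"][i] is the value associated with bucket i;
# result[1] is the 1-step-ahead likelihood distribution over buckets
predicted = result["actualValues"][result[1].argmax()]
```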

Load:
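(Start of each timestep; the path is illustrative:)

```python
# Restore the classifier state saved on the previous timestep
with open("classifier.pkl", "rb") as f:
    classifier = pickle.load(f)
```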

Save:
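(End of each timestep:)

```python
# Persist the now-updated classifier for the next timestep
with open("classifier.pkl", "wb") as f:
    pickle.dump(classifier, f, protocol=pickle.HIGHEST_PROTOCOL)
```

(Worth noting: with the default pickle protocol 0 the files come out far larger, so HIGHEST_PROTOCOL matters for the size issue above.)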

One thing about this implementation is that the HTM model’s inferenceType is TemporalAnomaly, so it uses the backtrackingTM. This means there are no classifier params, and the OPF model.run() outputs no raw-data forecasts, just anomaly scores. So I’m adding this classifier afterwards to get those forecasts too. I think it should be possible, since the classifier’s compute function just needs the encoding bucket, the actual value, and the active cell indices from the TM, but maybe this is problematic?
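(For reference, here’s roughly how I’m pulling those out of the OPF model; _getTPRegion and _tfdr are NuPIC internals that may vary by version, and encoder/actualValue are placeholders for my actual objects:)

```python
# Active cells from the backtracking TM inside the OPF model
# (private attributes; may differ across NuPIC versions)
tm = model._getTPRegion().getSelf()._tfdr
activeCellIndices = tm.infActiveState["t"].reshape(-1).nonzero()[0]

# Encoding bucket for the actual scalar value, from that field's encoder
bucketIdx = encoder.getBucketIndices(actualValue)[0]
```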


As far as I remember, the learning and inference phases should be activated separately, not in one step. When you are pulling the results, the learning phase should be off! Not quite sure if this will solve your problem, but it is something I noticed in your code.
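Something like this, I mean (sketch, reusing the variable names from above):

```python
# While streaming records in: learn only, no inference
classifier.compute(recordNum=t, patternNZ=activeCellIndices,
                   classification={"bucketIdx": bucketIdx, "actValue": actualValue},
                   learn=True, infer=False)

# Only when you actually pull predictions: inference only, learning off
# (classification can be None when learn is off)
result = classifier.compute(recordNum=t, patternNZ=activeCellIndices,
                            classification=None,
                            learn=False, infer=True)
```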


Hi @sheiser1, I was just getting to this point in my own HTM progress when 2020 hit. This looks just like what I would have tried. When I get back up to speed later this year, I’ll make sure to post here if I get any further.
