Autolearning problem

Hello. I have some questions about the learning model in the hot gym prediction tutorial. Is enableLearning turned on in this example or not? If not, how can I turn it on? Is it enough to call something like model.enableLearning()?

If you create an OPF model, learning is enabled in both the SP and the TM by default. You can turn them both on and off with:
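model.enableLearning()   # turns learning on in both the SP and TM
model.disableLearning()  # turns learning off in both the SP and TM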


And no model.finishLearning() needed, right @rhyolight?

I’m turning off learning after a certain point and leaving it off, so I don’t want to miss any way to optimize inference from there.


Right, only call finishLearning(), as an inference optimization, if you know you won’t ever need to learn again.

Ok great! So I’m trying to call model.finishLearning() right after model.disableLearning() and getting this error:

Traceback (most recent call last):
  File "<input>", line 623, in <module>
  File "<input>", line 465, in do_stream
  File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/frameworks/opf/htm_prediction_model.py", line 306, in finishLearning
    self._getSPRegion().executeCommand(['finishLearning'])
  File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/engine/__init__.py", line 467, in executeCommand
    return self._region.executeCommand(args)
  File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/bindings/engine_internal.py", line 1555, in executeCommand
    return _engine_internal.Region_executeCommand(self, *args, **kwargs)
  File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/bindings/regions/PyRegion.py", line 402, in executeMethod
    raise Exception('Missing command method: ' + methodName)
Exception: Missing command method: finishLearning

Here’s where it’s happening. I’m streaming in multivariate files and maintaining a separate model for each field, so the self.fields_modelsdicts[field]['model'] entries are models created by ModelFactory. This works with disableLearning() on its own, but not once I add finishLearning().

self.fields_modelsdicts[field]['model'].disableLearning()
self.fields_modelsdicts[field]['model'].finishLearning()

This performance gain could add up to a big difference as the system scales to many fields (which is why I’m bugging you about this).

Thanks again @rhyolight!

@sheiser1 Is the model instance you’re using a TemporalMemory or BacktrackingTM?


BacktrackingTM, the one that comes with getScalarMetricWithTimeOfDayAnomalyParams

My suspicion is that you’re using the CPP version of the algorithm, and it doesn’t have a finishLearning function, which probably means it doesn’t need that optimization in C++ (hopefully). Let me know if you notice a speed difference, but I would just not call that function.
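If you want to keep the call around for implementations that do support it, a minimal sketch (assuming your existing fields_modelsdicts structure) would be to treat it as optional:

model = self.fields_modelsdicts[field]['model']
model.disableLearning()
try:
    # finishLearning() is only an inference-time optimization, and the cpp
    # regions may not expose the underlying 'finishLearning' command.
    model.finishLearning()
except Exception:
    # Safe to skip -- learning is already disabled above.
    pass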


Yep, I am:

tmImplementation="cpp"

Ok, so I could try it with "tm_cpp" and .finishLearning(), and see if there’s a speed difference from the current setup.
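Something like this, I’m guessing (just a sketch; the value range below is a placeholder):

from nupic.frameworks.opf.common_models.cluster_params import (
  getScalarMetricWithTimeOfDayAnomalyParams)
from nupic.frameworks.opf.model_factory import ModelFactory

# Same params helper as before, but asking for the TemporalMemory-based
# configuration instead of the default "cpp" (BacktrackingTM) one.
params = getScalarMetricWithTimeOfDayAnomalyParams(
    metricData=[0],            # placeholder; min/max below define the range
    minVal=0.0, maxVal=100.0,  # placeholder value range for this field
    tmImplementation="tm_cpp")

model = ModelFactory.create(modelConfig=params["modelConfig"])
model.enableInference(params["inferenceArgs"])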

Thanks again :smile:

There’s cpp and tm_cpp, which are unfortunately named, especially since I can’t remember which one is the TemporalMemory vs the BacktrackingTM.


The cpp one is the default in getScalarMetricWithTimeOfDayAnomalyParams(); I suspect it’s the BacktrackingTM.


Be aware that tm_cpp will not work as well as cpp on anomaly detection problems.


Do you know why this is or where I could go to find out?

Yes, it is because we added some non-biological hacks into the BacktrackingTM so it could better identify sequences. This helped with anomaly detection, but it did not move us towards a better understanding of the cortex. The TemporalMemory algorithm is not as performant at anomaly detection as the older, non-biological one because it has not been "hand-tuned" to any one task. (The whole point of our work is that we don’t want to hand-tune anything.)
