Hello. I have some questions about the learning model in the hot gym prediction tutorial. Is enableLearning turned on in this example or not? If not, how can I turn it on? Is it enough to call something like model.enableLearning()?
If you create an OPF model, learning is enabled in both the SP and the TM by default. You can turn them both on and off with model.enableLearning() and model.disableLearning().
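For illustration, here is a minimal sketch of that toggle. StubModel is my own stand-in, not a NuPIC class; with real NuPIC you would call the same two methods on the object returned by ModelFactory.create():

```python
class StubModel(object):
    """Stand-in for an OPF model; real NuPIC models expose the same methods."""

    def __init__(self):
        # OPF models start with learning enabled in both the SP and the TM.
        self.learningEnabled = True

    def enableLearning(self):
        self.learningEnabled = True

    def disableLearning(self):
        self.learningEnabled = False


model = StubModel()
model.disableLearning()  # freeze learning; inference still runs
assert not model.learningEnabled
model.enableLearning()   # turn learning back on
assert model.learningEnabled
```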
And no model.finishLearning() needed, right @rhyolight?
I'm turning off learning after a certain set point and leaving it off, so I don't want to miss any way to optimize inference from there.
Right, only call finishLearning() if you know you won't need to learn ever again, as an inference optimization.
Ok great! So I'm trying to call model.finishLearning() right after model.disableLearning() and getting this error:
Traceback (most recent call last):
File "<input>", line 623, in <module>
File "<input>", line 465, in do_stream
File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/frameworks/opf/htm_prediction_model.py", line 306, in finishLearning
self._getSPRegion().executeCommand(['finishLearning'])
File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/engine/__init__.py", line 467, in executeCommand
return self._region.executeCommand(args)
File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/bindings/engine_internal.py", line 1555, in executeCommand
return _engine_internal.Region_executeCommand(self, *args, **kwargs)
File "/Users/samheiserman/Desktop/Dppm.app/Contents/Resources/DPPM_app/myenv_folder/lib/python2.7/site-packages/nupic/bindings/regions/PyRegion.py", line 402, in executeMethod
raise Exception('Missing command method: ' + methodName)
Exception: Missing command method: finishLearning
Here's the line where it's happening. I'm streaming in multivariate files and maintaining separate models for each field, so the self.fields_modelsdicts[field]['model'] entries are model objects created by ModelFactory. This works with disableLearning() on its own, but not once I add finishLearning().
self.fields_modelsdicts[field]['model'].disableLearning()
self.fields_modelsdicts[field]['model'].finishLearning()
This performance gain could add up to a big difference as the system scales to many fields (which is why I'm bugging you about this).
Thanks again @rhyolight!
@sheiser1 Is the model instance you're using a TemporalMemory or a BacktrackingTM?
BacktrackingTM, the one that comes with getScalarMetricWithTimeOfDayAnomalyParams
My suspicion is that you're using the CPP version of the algorithm, and it doesn't have a finishLearning function, which probably means it doesn't need that optimization in CPP (hopefully). Let me know if you notice a speed difference, but I would just not call that function.
Yep, I am: tmImplementation="cpp"
Ok, so I could try it with "tm_cpp" and .finishLearning(), and see if there's a speed difference from the current setup.
Thanks again
There's cpp and tm_cpp, which are unfortunately named, especially since I can't remember which one is the TemporalMemory vs the BacktrackingTM. The cpp is the default in getScalarMetricWithTimeOfDayAnomalyParams(); I suspect it's the BacktrackingTM.
Be aware that tm_cpp will not work as well as cpp on anomaly detection problems.
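To keep the confusingly similar names straight, here is a summary of the mapping as described in this thread (my own notes, not a NuPIC API; double-check against your NuPIC version):

```python
# tmImplementation value -> algorithm it selects, per this thread.
TM_IMPLEMENTATIONS = {
    # Default in getScalarMetricWithTimeOfDayAnomalyParams(); hand-tuned
    # for anomaly detection, but lacks a finishLearning command.
    "cpp": "BacktrackingTM",
    # The biologically-derived TemporalMemory; not as strong on anomaly
    # detection problems.
    "tm_cpp": "TemporalMemory",
}


def algorithm_for(tm_implementation):
    """Return the algorithm name a given tmImplementation string selects."""
    return TM_IMPLEMENTATIONS[tm_implementation]
```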
Do you know why this is or where I could go to find out?
Yes, it is because we added some non-biological hacks into the BacktrackingTM so it could better identify sequences. This helped with anomaly detection, but it did not move us towards a better understanding of the cortex. The TemporalMemory algorithm is not as performant at anomaly detection as the older non-biological one because it has not been "hand-tuned" to any one task. (The whole point of our work being that we don't want to hand-tune anything.)