aviy
March 23, 2020, 8:35pm
I want to know how to specify a training window over a large dataset of server logs containing CPU/memory usage. My understanding is that, during the training window, HTM will learn the patterns in the dataset, and based on that learning it will classify each data point as anomalous or not.
Also, how can I save the trained model and later use it for real-time anomaly detection?
Hey @aviy, welcome!
Here are the docs for the HTMPredictionModel. There's a method called `disableLearning()` which you can call on the model object at any time, so you can let the model learn over your training window and then freeze it.
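A minimal sketch of that training-window pattern. The stub class below stands in for a real HTMPredictionModel so the sketch is self-contained; only `run()` and `disableLearning()` mirror the real API, everything else is illustrative:

```python
class StubModel(object):
    """Stand-in for an HTMPredictionModel so the sketch runs anywhere."""
    def __init__(self):
        self.learning = True

    def disableLearning(self):
        # Mirrors HTMPredictionModel.disableLearning(): freeze adaptation.
        self.learning = False

    def run(self, record):
        # A real model would return inferences / anomaly scores here.
        return (record, self.learning)


def run_with_window(records, model, window):
    """Let the model learn on the first `window` records, then freeze it."""
    results = []
    for i, record in enumerate(records):
        if i == window:
            model.disableLearning()
        results.append(model.run(record))
    return results


out = run_with_window(range(10), StubModel(), window=5)
```

With a real model you would pick the window size so it covers several cycles of your servers' normal daily/weekly load patterns.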
HTMPredictionModel is set as the ‘model’ value in the model params used to initialize the model. A sample params file is here:
```python
MODEL_PARAMS = \
{ 'aggregationInfo': { 'days': 0,
                       'fields': [],
                       'hours': 0,
                       'microseconds': 0,
                       'milliseconds': 0,
                       'minutes': 0,
                       'months': 0,
                       'seconds': 0,
                       'weeks': 0,
                       'years': 0},
  'model': 'HTMPrediction',
  'modelParams': { 'anomalyParams': { u'anomalyCacheRecords': None,
                                      u'autoDetectThreshold': None,
                                      u'autoDetectWaitRecords': None},
                   'clParams': { 'alpha': 0.01962508905154251,
                                 'verbosity': 0,
                                 'regionName': 'SDRClassifierRegion',
                                 'steps': '1'},
                   'inferenceType': 'TemporalAnomaly',
                   # ... (file truncated; see the original for the full params)
```
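In NuPIC you hand this dict to `ModelFactory.create(MODEL_PARAMS)`, which dispatches on the `'model'` key to build the right model class. A toy stand-in for that dispatch (the registry and `create_model` below are illustrative, not NuPIC code):

```python
# Toy registry mimicking how the 'model' key selects the model class.
MODEL_REGISTRY = {
    "HTMPrediction": lambda params: ("HTMPredictionModel", params),
}


def create_model(model_params):
    """Illustrative stand-in for ModelFactory.create(model_params)."""
    kind = model_params["model"]                  # e.g. 'HTMPrediction'
    sub = model_params.get("modelParams", {})     # per-model configuration
    return MODEL_REGISTRY[kind](sub)


name, params = create_model({
    "model": "HTMPrediction",
    "modelParams": {"inferenceType": "TemporalAnomaly"},
})
```

The point is just that `'model': 'HTMPrediction'` is what makes the params file produce an HTMPredictionModel; the rest of the dict configures that model.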
For saving, here's another resource. I just use `model.save(model_path)`, which has worked for me.
# Serialization
NuPIC uses Cap'n Proto and the Python library pycapnp for serialization. See the [pycapnp documentation](http://jparyani.github.io/pycapnp/) for more details on the Python API. The NuPIC algorithms expose this serialization through the following methods:
- [`writeToFile`](../api/support/index.html#nupic.serializable.Serializable.writeToFile) - write the instance to the specified file instance (not the path)
- [`readFromFile`](../api/support/index.html#nupic.serializable.Serializable.readFromFile) - class method that returns a new instance using the state saved to the specified file instance (not the path)
- [`write`](../api/support/index.html#nupic.serializable.Serializable.write) - write the instance to the specified pycapnp builder
- [`read`](../api/support/index.html#nupic.serializable.Serializable.read) - class method that returns a new instance using the state saved to the specified pycapnp reader
## Writing to Files
The typical serialization use case is writing to a file. Here is an example for creating a SpatialPooler instance, writing it to a file, and reading it back into a new instance of SpatialPooler:
```python
from nupic.algorithms.spatial_pooler import SpatialPooler

sp1 = SpatialPooler(inputDimensions=(10,), columnDimensions=(10,))

# Write the instance's state to a file.
with open("out.tmp", "wb") as f:
    sp1.writeToFile(f)

# Read the state back into a new SpatialPooler instance.
with open("out.tmp", "rb") as f:
    sp2 = SpatialPooler.readFromFile(f)
```
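The same save-then-reuse workflow from the question (train once offline, then score live data) can be illustrated generically. Here `pickle` stands in for NuPIC's checkpointing (`model.save(path)` to persist, `ModelFactory.loadFromCheckpoint(path)` to restore), and `TrainedDetector` is a made-up class, not part of NuPIC:

```python
import pickle


class TrainedDetector(object):
    """Illustrative detector; a real deployment would persist an HTM model."""
    def __init__(self, threshold):
        self.threshold = threshold  # "learned" during training

    def is_anomaly(self, score):
        return score > self.threshold


# Train offline, then persist the trained state to disk.
with open("detector.pkl", "wb") as f:
    pickle.dump(TrainedDetector(threshold=0.9), f)

# Later, in the real-time process: reload and score incoming points
# without retraining.
with open("detector.pkl", "rb") as f:
    restored = pickle.load(f)
```

The key property in both cases is that the reloaded object scores new points exactly as the trained one would, so the expensive learning phase happens only once.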