Learning Normal & Ignoring Anomalous Behavior

Hi, I am currently using HTM for anomaly detection on a labeled data set, with each record labeled normal or anomalous. I noticed that during continuous periods of anomalous behavior, HTM learns that behavior as normal. This got me thinking: how reliable would it be to disable learning on input that is detected as anomalous and enable learning on input detected as normal? I did some testing and the results were not good. Does anyone have suggestions for a method of disabling/enabling learning periodically? Could I follow some pattern or heuristic in the anomaly scores, such as: if HTM hasn't detected an anomaly in the past 100 records, enable learning? Anything at all would be an immense help.
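For what it's worth, the "quiet period" heuristic above could be sketched as a small gate that tracks how long it has been since the last high anomaly score. Everything here is an assumption, not part of any HTM API: the anomaly score in [0, 1], the threshold, and the 100-record quiet window are all illustrative parameters.

```python
from collections import deque  # not strictly needed; shown for a windowed variant


class LearningGate:
    """Heuristic gate: re-enable learning only after a quiet period.

    Assumptions (not from HTM itself): one anomaly score in [0, 1] per
    record, a threshold above which a record counts as anomalous, and a
    quiet_period of consecutive sub-threshold records required before
    learning is turned back on.
    """

    def __init__(self, threshold=0.5, quiet_period=100):
        self.threshold = threshold
        self.quiet_period = quiet_period
        # Start as if we've been quiet long enough, so learning begins enabled.
        self.records_since_anomaly = quiet_period

    def update(self, anomaly_score):
        """Return True if learning should be enabled for the next record."""
        if anomaly_score >= self.threshold:
            self.records_since_anomaly = 0  # anomaly seen: freeze learning
        else:
            self.records_since_anomaly += 1
        return self.records_since_anomaly >= self.quiet_period


# Hypothetical usage with a model that accepts a learn flag, e.g.:
#   gate = LearningGate(threshold=0.5, quiet_period=100)
#   learn = gate.update(previous_anomaly_score)
#   model.compute(record, learn=learn)   # `model` and its API are assumed
```

One caveat with this design: the gate reacts one step late (it uses the previous record's score to decide the next record's learn flag), and a single noisy spike freezes learning for a full quiet period, so the threshold and window need tuning against your data.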


It sounds like you have more of a classification problem. HTM anomaly detection is built for detecting unusual patterns in the data. If a pattern happens many times and is learned by the model, then the model will not find it anomalous anymore. That’s just the nature of the problem.

If there are certain behaviors that are “normal” (good) and others that are “anomalous” (bad) and both sets can occur many times, then you probably want to use classification to separate the good and bad learned sequences. Specifically, you want sequence classification. We have experimented with using HTM for this but don’t have a great solution at the moment.

One approach to sequence classification is to feed the active cells from the Temporal Memory to a classifier (like our KNN or SVM classifiers). This may work for some cases but wouldn’t work very well on noisy data where it is hard to keep track of the sequence you’re in. To build a robust solution with HTM, you’d probably need some form of pooling to provide a more stable representation of the sequence you’re in.
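A minimal sketch of that pipeline, under loud assumptions: each timestep's Temporal Memory output is represented as a plain Python set of active cell indices, "pooling" is just the union of those sets over the sequence (a crude stand-in for a real pooling layer), and the classifier is a tiny overlap-based KNN rather than NuPIC's actual KNN implementation.

```python
def pool_sequence(active_cell_sets):
    """Crude pooling: union of active cell indices across a sequence.

    Assumes each timestep yields a set of active Temporal Memory cell
    indices. A real pooling mechanism would produce a more stable,
    sparser representation than a raw union.
    """
    pooled = set()
    for cells in active_cell_sets:
        pooled |= cells
    return pooled


def knn_classify(query, labeled_examples, k=1):
    """k-NN over pooled cell sets, using raw overlap as similarity.

    labeled_examples is a list of (pooled_cell_set, label) pairs.
    """
    ranked = sorted(labeled_examples,
                    key=lambda ex: len(query & ex[0]),
                    reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)


# Toy example: two labeled training sequences and one query sequence.
train = [
    (pool_sequence([{1, 2}, {2, 3}]), "normal"),
    (pool_sequence([{7, 8}, {8, 9}]), "anomalous"),
]
query = pool_sequence([{1, 2}, {3, 4}])
# knn_classify(query, train) -> "normal" (3 cells overlap vs. 0)
```

As the paragraph above notes, a raw union degrades on noisy or long sequences (the pooled set saturates and everything starts to overlap with everything), which is exactly why a proper pooling mechanism would be needed for a robust solution.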
