Abnormal vs new normal

Hi,

An anomaly becomes the new normal after being repeated for a while. So my question is: if a clever hacker tries to desensitize the HTM by repeatedly sending the anomaly, then after a while the HTM no longer alarms on that anomaly. Is there a way to prevent that from happening?

Thanks


Hey @catanit, welcome!

Yes, that could be a problem for any model that is continuously learning. All unfamiliar sequences are first seen as anomalous, and then become known patterns once they repeat enough times, given their length and complexity. So there will be some spikes in the raw anomaly score as the anomaly is ingrained into a pattern. Depending on the noise level in the data, these spikes may or may not be enough to raise the anomaly likelihood value.
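
To make that concrete, here's a minimal sketch of the raw-score-to-likelihood step, in plain Python with a simple Gaussian tail estimate. The window sizes and the distribution model are illustrative, not Numenta's exact implementation:

```python
import math
from collections import deque

class AnomalyLikelihood:
    """Sketch: model the distribution of recent raw anomaly scores,
    then measure how unlikely the short-term average is under it."""

    def __init__(self, history=500, short_window=10):
        self.history = deque(maxlen=history)      # long-term raw scores
        self.recent = deque(maxlen=short_window)  # short-term raw scores

    def update(self, raw_score):
        self.history.append(raw_score)
        self.recent.append(raw_score)
        if len(self.history) < 2:
            return 0.5  # not enough data yet; stay neutral
        mean = sum(self.history) / len(self.history)
        var = sum((s - mean) ** 2 for s in self.history) / len(self.history)
        std = max(math.sqrt(var), 1e-6)
        short_mean = sum(self.recent) / len(self.recent)
        # Likelihood near 1.0 means the recent raw scores are unusually
        # high relative to the long-term distribution.
        z = (short_mean - mean) / std
        return 1.0 - 0.5 * math.erfc(z / math.sqrt(2))
```

Note how a repeated attack pulls the long-term mean and standard deviation up, so later repetitions produce smaller z values and the likelihood drifts back toward 0.5. That is exactly the desensitization being asked about.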

One potential way around this is to turn learning off during deployment: have a learning/training period before deployment, which should contain all kinds of normal (non-hacking) behavior. The idea is that any behavior anomalous to this well-trained model is worth suspicion. This of course relies on the assumption that no new normal patterns emerge during deployment, which would otherwise create false alarms.
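
As a sketch, assuming a hypothetical model object whose compute() consumes a record, optionally learns, and returns a raw anomaly score (htm.core's TemporalMemory.compute() takes a similar learn flag, though it doesn't return a score directly), train-then-freeze looks like:

```python
def alert(record, score):
    # Hypothetical alerting hook; replace with real notification logic.
    print(f"ANOMALY (score={score:.2f}): {record}")

def train_then_deploy(model, training_stream, live_stream, threshold=0.9):
    # Training period: learning on, over data covering all normal behavior.
    for record in training_stream:
        model.compute(record, learn=True)

    # Deployment: learning off, so a repeated attack can't be ingrained.
    for record in live_stream:
        score = model.compute(record, learn=False)
        if score > threshold:
            alert(record, score)
```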


Another option is to run two systems in parallel, one with learning on and one with learning off. The output of the non-learning system can be flagged for examination as needed. Once a supervisor has certified that the flagged anomalies are harmless, the two systems swap roles, like rotating a log file, with the learned patterns updating the non-learning system.
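
A sketch of that rotation, with the same hypothetical model interface as above (copy.deepcopy() standing in for however the model's state would actually be snapshotted):

```python
import copy

class DualHTM:
    """Sketch of the two-system scheme: a learner that continuously
    adapts, and a frozen detector whose alarms are authoritative."""

    def __init__(self, model):
        self.learner = model
        self.detector = copy.deepcopy(model)  # frozen snapshot

    def step(self, record):
        self.learner.compute(record, learn=True)            # keeps adapting
        return self.detector.compute(record, learn=False)   # raises alarms

    def rotate(self):
        # Called once a supervisor certifies recent alarms as harmless:
        # promote the learner's ingrained patterns into the detector,
        # like rotating a log file.
        self.detector = copy.deepcopy(self.learner)
```

Between rotations the frozen detector stays insensitive to whatever the attacker repeats, while the learner quietly absorbs genuinely new normal patterns for the next certified swap.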
