Can anomaly detection be based on history in new sequences when learning is off?

Put simply: does HTM support detecting anomalies that require history from the sequence observed while learning is turned off?

The following previous thread is relevant: Repeated data not flagged as anomalous - why?
Our own previous thread Inability to accumulate minor anomalies? is also related, but we feel it is too specific; in this thread we are concerned with the general use case.

Intuitively, we consider there to be two distinct notions of history. There is the history HTM accumulates as it learns, represented in its internal structure of cells and dendrites. However, there is also the history in the sequence observed after learning has been turned off. This latter notion of history can be very important when looking for anomalies in data where the anomaly is not a transitional change (such as going from known values to completely new and unrelated values).

We know that HTM can learn from its input and thus acquire a form of history in the internal network it builds; that is, the network becomes structured such that an anomaly is detected based only on the history accumulated within the network.

However, consider the following problem, which represents a case of a non-transitional anomaly:

1 is a normal input, observed 50% of the time.
10 is a normal input, observed 50% of the time.

We want to detect when a very long sequence of “1” is observed.

Assume now that learning is turned off after 5000 iterations of learning, and from then on HTM only sees 1 as its input.
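For concreteness, here is a minimal sketch of the input stream we have in mind (Python; the names and the 5000-step split are just the scenario above, nothing HTM-specific):

```python
import random

TRAIN_STEPS = 5000  # learning is on for these iterations

def input_stream():
    """Yield the scenario's stream: a 50/50 mix of 1 and 10, then only 1s."""
    for _ in range(TRAIN_STEPS):
        yield random.choice([1, 10])  # learning phase: both values are normal
    while True:
        yield 1  # learning off: an endless run of the known value 1
```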

At first, this should not be a big anomaly, since HTM has after all seen 1 half of the time, right? But if it continues to see this 1, it should get increasingly anomalous until it eventually reports a full 1.0 as its anomaly score. Furthermore, it should continue to report 1.0 for as long as it continues to see a sequence of 1.
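To make the desired behavior concrete: under the learned 50/50 model, a run of n consecutive 1s has probability 0.5^n, so a score like 1 - 0.5^n starts moderate and ramps toward 1.0 the longer the run lasts. Here is a minimal sketch of such an accumulating score (our own illustration of the desired output, not anything HTM actually computes):

```python
def make_run_length_score():
    """Return a scorer whose anomaly score ramps toward 1.0 on long runs."""
    state = {"value": None, "run": 0}

    def score(x):
        if x == state["value"]:
            state["run"] += 1  # the run of identical values continues
        else:
            state["value"], state["run"] = x, 1  # a new run starts
        # Under a 50/50 model, a run of n equal values has probability
        # 0.5 ** n, so the "surprise" of the run is 1 - 0.5 ** n.
        return 1.0 - 0.5 ** state["run"]

    return score
```

Feeding it 1, 1, 1, ... yields 0.5, 0.75, 0.875, ..., approaching 1.0 and staying there for as long as the run continues, which is exactly the behavior described above.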

Since this type of anomaly is not represented by a sudden change, but rather by seeing a known value for long enough, it appears to be a completely different class of anomaly, one that requires some form of history to be detected.

At least, this is what makes intuitive sense from the perspective of a human observer. Can HTM do this? Can it accumulate this anomaly based on the sequence of input it observes while learning is off?

If HTM can do this but requires a special configuration, please specify it; it would also be helpful to know whether it requires learning to be on or off.

Does a working example of this exist? If anyone has a link to such an example, it would be very useful.


This is my experience. You can actually see this happening in this video at 13:18. In this example, I still have learning turned on, so the “flatline” doesn’t stay anomalous. But if learning were turned off, it certainly would.

But to answer your big questions (I think): no, anomaly detection is not based on history received after learning is turned off. When learning is off, the model is essentially frozen; nothing updates. The winning columns still win, but no permanences are updated and no boosting is applied.
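Here is a self-contained toy (ours, not the real NuPIC/htm.core API) showing what that freezing means for this scenario: the anomaly score is still computed on every step, but the model's memory only grows when learn=True, so a long run of an already-learned value scores 0.0 forever:

```python
import random

class ToySequenceModel:
    """Toy stand-in for a sequence memory (not the real HTM API).
    The 'transitions' set plays the role of learned synapses/permanences."""

    def __init__(self):
        self.transitions = set()
        self.prev = None

    def compute(self, value, learn):
        # The anomaly score is computed whether or not learning is on.
        anomaly = 0.0 if (self.prev, value) in self.transitions else 1.0
        if learn:
            # Only here does the model acquire history; it is frozen otherwise.
            self.transitions.add((self.prev, value))
        self.prev = value  # sequence state still advances either way
        return anomaly

model = ToySequenceModel()
for _ in range(5000):  # learning on: 50/50 mix of 1 and 10
    model.compute(random.choice([1, 10]), learn=True)
scores = [model.compute(1, learn=False) for _ in range(10)]
print(scores)  # all 0.0: the (1, 1) transition was learned, and the frozen
               # model has no way to accumulate the growing length of the run
```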