Understanding AnomalyLikelihood?

Since I believe that AnomalyLikelihood is much more robust for anomaly detection than raw anomaly scores, I am currently testing the AnomalyLikelihood algorithm in NuPIC.

However, I have the feeling that it does not work correctly, at least for my test case, which is as follows:
For the first 240 steps I synthetically generate random raw anomaly scores from a normal distribution (mean: 0.2, variance: 0.1). This represents normal synthetic data with NO anomaly.

After that I simulate an anomaly by generating random raw anomaly scores from a normal distribution (mean: 0.9, variance: 0.1). This represents abnormal synthetic data.
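For reference, the synthetic raw anomaly scores described above can be generated roughly like this (a minimal numpy sketch; the 240-step split and the means are from the description, while the length of the anomalous segment is an assumption, and "variance: 0.1" is interpreted literally as the variance, so numpy's `scale` parameter receives `sqrt(0.1)`):

```python
import numpy as np

rng = np.random.default_rng(42)

N_NORMAL = 240   # steps of normal behaviour (from the description)
N_ANOMALY = 100  # steps of anomalous behaviour (length assumed)

# "variance: 0.1" taken literally: numpy's scale is the standard deviation.
std = np.sqrt(0.1)

normal_scores = rng.normal(loc=0.2, scale=std, size=N_NORMAL)
anomaly_scores = rng.normal(loc=0.9, scale=std, size=N_ANOMALY)

# Raw anomaly scores in NuPIC lie in [0, 1], so clip the samples.
raw_scores = np.clip(np.concatenate([normal_scores, anomaly_scores]), 0.0, 1.0)

print(raw_scores.shape)  # (340,)
```

Each `raw_scores[i]` would then be fed into the likelihood computation step by step, together with the corresponding input value and timestamp.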
In my figure, the raw anomaly scores are shown in red and the anomaly likelihood in blue.

I would expect the AnomalyLikelihood to increase slowly after the anomaly begins (after step 240). Unfortunately, the likelihood suddenly drops to zero and stays there for a very long time.

I'd like to hear your opinions!


After analysing the results, I believe that the formula I used in my test above (copied 100% from the file anomaly.py, lines 145-154)

    elif self._mode == Anomaly.MODE_LIKELIHOOD:
      if inputValue is None:
        raise ValueError("Selected anomaly mode 'Anomaly.MODE_LIKELIHOOD' "
                 "requires 'inputValue' as parameter to compute() method. ")

      probability = self._likelihood.anomalyProbability(
          inputValue, anomalyScore, timestamp)
      # low likelihood -> hi anomaly
      score = 1 - probability

is not correct!
In my understanding, the function `anomalyProbability()` already provides the anomaly score, so the correct solution should be

      score = self._likelihood.anomalyProbability(
          inputValue, anomalyScore, timestamp)

If you do it that way, the results are understandable.
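To make the difference between the two conventions concrete, here is a toy illustration (plain Python with made-up probability values, not actual NuPIC output): if `anomalyProbability()` already returns values near 1 during an anomaly, then the `1 - probability` step in anomaly.py collapses the score to near zero exactly when the anomaly starts, matching what I observed, while using the returned value directly behaves as expected. Whether this matches the intended semantics of `anomalyProbability()` is of course exactly my question.

```python
# Hypothetical outputs of anomalyProbability() around the step-240
# change point: low during normal data, high once the anomaly starts.
probabilities = [0.05, 0.08, 0.04, 0.97, 0.99, 0.98]

# Convention in anomaly.py (lines 145-154 above): invert the value.
scores_inverted = [1 - p for p in probabilities]

# Proposed alternative: use the returned value directly.
scores_direct = list(probabilities)

print(scores_inverted)  # collapses toward 0 once the anomaly starts
print(scores_direct)    # rises toward 1 once the anomaly starts
```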
Am I right here? Thanks!