Clarification on anomaly score and anomaly likelihood

Hi,

Can someone clarify my doubt about the anomaly score and anomaly likelihood?
I got an anomaly score of 0.1 and an anomaly likelihood of 0.999.
Can I consider this an anomaly?

There is a lot of stuff on the forums about anomaly likelihood and anomaly scores. Try out a few searches and see if you find anything useful before starting a new post. See the Before Posting section of the Read this first post.

You should find that the anomaly likelihood is generally more reliable and fluctuates less than the raw anomaly score. You can also find out how it is generated, and what thresholds you might use to indicate anomalies (0.9999 is typical, but you might want more or fewer 9s).

Hi Matt (@rhyolight),
I didn't post the question without going through previous posts and discussions. I got confused by the different responses and clarifications on anomaly likelihood, hence I posted the question.

One piece of documentation says that the anomaly likelihood is the probability or confidence level in the current anomaly score. For example: the anomaly score is 0.3 and the anomaly likelihood is 0.9999. What I understood from that documentation is that the system is 99.99% confident that the anomaly score is 0.3.

As per the documentation below, the system is 99.99% confident that the current score is an anomaly.
AnomalyLikelihood

anomalyProbability(value, anomalyScore, timestamp=None)
Compute the probability that the current value plus anomaly score represents an anomaly given the historical distribution of anomaly scores. The closer the number is to 1, the higher the chance it is an anomaly.
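
For concreteness, here is a hedged sketch of how that documented method is typically called. The import path follows NuPIC's anomaly_likelihood module but may differ across versions, and the metric value and score are made-up examples:

```python
import datetime

from nupic.algorithms.anomaly_likelihood import AnomalyLikelihood

anomaly_likelihood = AnomalyLikelihood()

likelihood = anomaly_likelihood.anomalyProbability(
    22.5,                              # example metric value (made up)
    0.3,                               # example raw anomaly score (made up)
    timestamp=datetime.datetime.now()  # per the signature quoted above
)
print(likelihood)  # closer to 1.0 => more likely a true anomaly
```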

Please let me know which one is correct.

Please confirm my understanding:
We have to use anomaly likelihood when the environment is extremely noisy (there are a lot of fluctuations in the values).
We need to look at the anomaly likelihood score when the anomaly score is high.

Hi @wip_user, I understand your confusion. The best reference code for anomaly detection is here:

In short, you should ignore the anomaly score completely and only use the anomaly likelihood. I recommend a threshold of >= 0.99999. The above code has some other best practices and has been proven to work well in a very wide variety of situations.
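
That recommendation boils down to a one-line decision rule; a minimal sketch, using the threshold value suggested above:

```python
# Ignore the raw anomaly score; flag a record only when the
# likelihood clears a high threshold.
LIKELIHOOD_THRESHOLD = 0.99999

def is_anomalous(likelihood):
    """True when the anomaly likelihood crosses the threshold."""
    return likelihood >= LIKELIHOOD_THRESHOLD
```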

@rhyolight There are a lot of good questions about anomaly detection on the forum. It is hard for me to reply to each one. Perhaps at a future Hacker's Hangout we can cover the various questions in detail and more comprehensively?

3 Likes

Thanks Subutai for the valuable clarification. Apologies for any inconvenience caused.

1 Like

Good idea, I'll plan it!

1 Like

No inconvenience at all! Just hope it helps!

2 Likes

Hey guys,

Super quick here, I just want to make sure I'm interpreting the Anomaly Likelihood algorithm right as it's shown here:

As I understand it (in plain words):

M_total(t) = mean of all anomaly scores so far
M_recent(t) = mean of all anomaly scores in recent time window
Sigma(t) = standard deviation of all anomaly scores so far

Likelihood(t) = Z-score of: ( ( M_total(t) - M_recent(t) ) / Sigma(t) )

I'm taking 'k' to mean the number of inputs seen to date, though I'd also think 'W' is that. Do I have this right? I'm trying to implement it myself. Finally, to that end, is the time window for 'M_recent' generally held at 100 across data sets, as per the 'reestimationPeriod' value in the numenta_detector.py file?

Thanks!!

Hey @rhyolight, would you mind affirming or correcting me on this super quick? Sorry to bother you.

This is not really my forte. Maybe @scott or @subutai could respond.

1 Like

Gotcha! Could I impose on either of you, @scott or @subutai, to shore me up on this? My home-brew TM implementation (from the BAMI pseudocode) yields anomaly scores larger than (but seemingly proportional to) NuPIC's, and I'm very curious whether the likelihood values would therefore fall in line with NuPIC's. Thanks!!

1 Like

In short, Anomaly Score is the fraction of active columns that were not predicted correctly. In contrast, Anomaly Likelihood is the likelihood that a given anomaly score represents a true anomaly. In any dataset, there will be a natural level of uncertainty that creates a certain "normal" number of errors in prediction. Anomaly likelihood accounts for this natural level of error.
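
As a hedged sketch of that definition (the function name and the set-based representation are illustrative, not NuPIC's API):

```python
def raw_anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were not predicted.

    Both arguments are sets of column indices. Returns 0.0 when every
    active column was predicted, 1.0 when none were.
    """
    if not active_columns:
        return 0.0
    unpredicted = active_columns - predicted_columns
    return len(unpredicted) / float(len(active_columns))

# raw_anomaly_score({1, 2, 3, 4}, {2, 3})  # -> 0.5
```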

1 Like

k here should just be W, and W is the number of recent anomaly scores to include in the historical distribution. It will be the entire history initially, but after enough records it will be the most recent scores only. Note that this is NOT the reestimation period, which is simply an optimization where the historical statistics are not recalculated on every record. See the code here for the difference between historicWindowSize (W) and reestimationPeriod:

We don't usually change either value, but they certainly can have an impact. Larger values of historicWindowSize result in slower adaptation to changes in the statistics. The reestimationPeriod shouldn't be increased by much without negative impacts. You could lower it for a minor benefit at the cost of processing time.
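
A hedged sketch of constructing AnomalyLikelihood with the two parameters discussed above; the keyword names and default values follow the NuPIC source referenced in this post and may differ across versions:

```python
from nupic.algorithms.anomaly_likelihood import AnomalyLikelihood

anomaly_likelihood = AnomalyLikelihood(
    historicWindowSize=8640,  # W: scores kept for the historical distribution
    reestimationPeriod=100    # how often the statistics are recomputed
)
```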

I'd recommend this paper for a more up-to-date description of anomaly detection and the formulae:

3 Likes

Hi @sheiser1, sorry for the delayed reply. I think it is best to look at this more recent paper, where we tried to be a bit more careful with the notation. Here's a screenshot of the relevant section.

To answer your specific questions:

This is the mean of anomaly scores over a large window, the last W samples. In the code this is historicWindowSize, and defaults to about a month's worth of data at 5 minute intervals.

Yes, but it is very short. Usually about 10 samples. It is averagingWindow, which is different from reestimationPeriod (the latter is just an optimization hack, and not that important).

Yes, k=W (fixed in the paper).
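
Putting these answers together, here is a minimal from-scratch sketch of the likelihood computation as given in the paper, L_t = 1 - Q((mu_recent - mu) / sigma). The class name is illustrative, the statistics are recomputed on every record rather than every reestimationPeriod records, and the window sizes follow the defaults mentioned above:

```python
import math
from collections import deque

HISTORIC_WINDOW = 8640   # W: about a month of data at 5-minute intervals
AVERAGING_WINDOW = 10    # short-term window for the recent mean

def q_function(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

class AnomalyLikelihoodSketch(object):
    """Simplified relative to NuPIC: recomputes the historical statistics
    on every record instead of once per reestimationPeriod."""

    def __init__(self):
        self.scores = deque(maxlen=HISTORIC_WINDOW)

    def update(self, anomaly_score):
        self.scores.append(anomaly_score)
        n = len(self.scores)
        mu = sum(self.scores) / float(n)
        sigma = math.sqrt(sum((s - mu) ** 2 for s in self.scores) / float(n))
        sigma = max(sigma, 1e-6)  # guard against zero variance early on
        recent = list(self.scores)[-AVERAGING_WINDOW:]
        mu_recent = sum(recent) / float(len(recent))
        # L_t = 1 - Q((mu_recent - mu) / sigma), per the paper's formula
        return 1.0 - q_function((mu_recent - mu) / sigma)
```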

2 Likes

Hi all,
I see the anomaly likelihood (L_t) is calculated with the Q function. Does anyone know what this function is (in the simplest terms)?

Thank you very much
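
For reference (standard math, not specific to NuPIC): Q is the Gaussian tail function, i.e. the probability that a standard normal variable exceeds x. It is the complement of the normal CDF $\Phi$ and can be written with the complementary error function:

$$Q(x) = 1 - \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\,dt = \frac{1}{2}\,\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right)$$

So $L_t = 1 - Q\!\left(\frac{\tilde{\mu}_t - \mu_t}{\sigma_t}\right)$ is simply asking how far the recent mean anomaly score sits above the long-run distribution of scores.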