I have a question regarding NAB scores. I am currently evaluating a few algorithms with my own dataset. I am evaluating them based on their optimized and normalized scores. However, some of the scores are negative values. To me, this implies that these algorithms perform worse than the null detector in that they raise false positives that end up affecting the total score. But, shouldn’t the optimizer find a threshold value that is high enough such that there are no anomalies detected at all, which should ultimately lead to a better score (of 0)? Any advice to help me better understand this behaviour would be greatly appreciated. Thank you in advance.
First, can you confirm that you are running all steps of NAB? The run.py command takes the following arguments to run each of the four steps: --detect --optimize --score --normalize. If you don’t run the optimize step, it could be using a bad threshold.
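For reference, a full run might look like the following (the detector name is a placeholder; substitute whichever detector you registered with NAB):

```
python run.py -d mydetector --detect --optimize --score --normalize
```

If you later set thresholds by hand, you would drop --optimize and rerun with only --score --normalize.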
If you run all of the steps and somehow the threshold isn’t getting set high enough to avoid false positives, then perhaps there is a bug in the optimizer. In that case, you can manually modify the threshold in config/thresholds.json to be larger than one (the max anomaly score) and then rerun the scoring and normalizing steps (make sure not to rerun the optimize step, since it will overwrite your manually entered thresholds). This should guarantee that you get scores no worse than the null detector’s.
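A minimal sketch of that manual edit, assuming thresholds.json nests entries as detector → scoring profile → {"threshold": …, "score": …} (check your copy of the file before relying on this layout). In practice you would load config/thresholds.json with json.load, apply this, and write it back with json.dump:

```python
def disable_detections(thresholds):
    """Raise every threshold above the max anomaly score (1.0) so that
    no detections are made and the score matches the null detector."""
    for profiles in thresholds.values():      # one entry per detector
        for entry in profiles.values():       # one entry per scoring profile
            entry["threshold"] = 1.1          # > 1.0: no score can reach it
    return thresholds

# Example with a made-up detector entry:
sample = {"mydetector": {"standard": {"threshold": 0.9999998, "score": -3.2}}}
updated = disable_detections(sample)
print(updated["mydetector"]["standard"]["threshold"])  # 1.1
```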
Please report back with the results so we know whether there is a bug in NAB!
Hi Scott,
Thank you for your reply.
Yes, I can confirm that all of the options you mentioned were used when running NAB.
I have re-run some experiments to confirm the results and I still get a negative standard score for some algorithms. As expected, once I manually set the threshold to a value greater than 1 (and run NAB without the optimize option), the final score is 0.
Interestingly, a manually set threshold of exactly 1.0 actually led to a positive score in one of the cases. The NAB-optimized threshold that led to a negative value was 0.9999998092651373. I’m not sure whether the optimizer never reached 1.0 because of how its step sizes are chosen or how small they can get. Anyway, I hope this sheds some light on what is going on.
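A small sketch of why those three thresholds behave so differently, assuming a detection fires whenever the anomaly score meets or exceeds the threshold (NAB’s exact comparison convention may differ; treat this as illustrative, with made-up scores):

```python
# Hypothetical per-record anomaly scores from a detector; the point just
# below 1.0 stands in for a would-be false positive.
anomaly_scores = [0.2, 0.99999985, 1.0, 0.5, 1.0]

def num_detections(threshold):
    """Count records flagged as anomalous at the given threshold."""
    return sum(1 for s in anomaly_scores if s >= threshold)

print(num_detections(0.9999998092651373))  # 3 -> also catches 0.99999985
print(num_detections(1.0))                 # 2 -> only scores of exactly 1.0
print(num_detections(1.1))                 # 0 -> null detector behaviour
```

So a threshold a hair below 1.0 can admit false positives that a threshold of exactly 1.0 excludes, while anything above 1.0 guarantees zero detections.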
Thank you again for your help!