False positive vs false negative

In "Unsupervised real-time anomaly detection for streaming data" (Ahmad et al., Neurocomputing 2017), the likelihood threshold is given as Lt >= 1 - ε, which gives an upper bound on false positives. Bounding the false-positive rate is, of course, a standard approach.

However, in certain circumstances you want to find a balance where the false-positive rate is not so high as to render anomaly detection irrelevant, but where the false-negative rate is somehow also bounded. Think, for example, of anomaly detection in critical care, where an anomaly may mean the patient is at imminent risk of a heart attack. You probably don’t want many false negatives in that scenario. At the same time, you don’t want so many false positives that monitoring the patient becomes pointless.

The likelihood threshold is bounded by a very small epsilon (Lt >= 1 - ε). Ahmad et al. found that setting ε to 10^-5 provided a good upper bound on false positives.
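As a minimal sketch of how that rule behaves, here is the thresholding step applied to synthetic anomaly-likelihood scores. The scores here are hypothetical stand-ins for the paper's Lt values (drawn uniformly, purely for illustration), not output from an actual HTM model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anomaly-likelihood scores in [0, 1]; in the paper these
# would come from the HTM anomaly-likelihood post-processing step.
likelihoods = rng.uniform(0.0, 1.0, size=100_000)

epsilon = 1e-5
# A point is flagged as anomalous when Lt >= 1 - epsilon.
flags = likelihoods >= 1.0 - epsilon

# With uniform scores, roughly epsilon * N points get flagged.
print(flags.sum())
```

With N = 100,000 and ε = 10^-5, you'd expect on the order of one flagged point, which is why such a tiny ε keeps false positives rare.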

My question is: what is the “bound-ratio” (for want of a better term) between the size of ε and the probability of a false positive? I.e., how much does the probability of a false positive increase (presumably with a corresponding decrease in false negatives) with each step-change in ε, and how large is that change?
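One back-of-envelope way to explore this: if the Gaussian tail model behind the anomaly likelihood is correct, then under normal data the likelihood values are approximately uniform on [0, 1] (probability integral transform), so P(Lt >= 1 - ε) ≈ ε and the false-positive rate scales roughly linearly with ε. The sketch below checks that scaling empirically on synthetic uniform scores; the uniform assumption is mine, not a claim from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "normal behaviour" likelihood scores, assumed uniform on
# [0, 1] as a stand-in for a well-calibrated anomaly likelihood.
n = 1_000_000
likelihoods = rng.uniform(0.0, 1.0, size=n)

# Each factor-of-10 step in epsilon should change the empirical
# false-positive rate by roughly the same factor.
for epsilon in (1e-2, 1e-3, 1e-4, 1e-5):
    fp_rate = np.mean(likelihoods >= 1.0 - epsilon)
    print(f"epsilon={epsilon:.0e}  empirical FP rate={fp_rate:.2e}")
```

In practice the calibration won't be perfect (the Gaussian tail model is only an approximation of the anomaly-score distribution), so the real ratio may deviate from linear, but this gives a baseline for what "one step in ε" buys you.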
