Evaluating Anomaly Detection in my Application


#1

@rhyolight - I have built a real-time streaming application where anomaly detection runs on sensory inputs coming from patient vitals. In the test dataset I have injected random values at every 1% of the dataset. For example, I am tracking anomaly patterns in the heartbeat. Say I have 2,500 test data-points; then at every 25th data-point I insert 15 random data-points, so data-points 1-25 are actual values, 25-40 are random values, and 40-50 are actual values again. The same process repeats from data-point 50 (2% of 2,500). So in addition to the anomaly scores computed on the actual values, the injected random values should push the anomaly score up above 0.8 or so.

Given that I know the ground truth for the injected random values (they should be detected as true positives), how should I go about creating the confusion matrix (tracking true positives, false positives, etc.) for the entire dataset? In an ideal world, in addition to the anomaly scores from the actual values, there should be high anomaly scores on the random values.

My dilemma is that if Numenta picks up co-occurring patterns from these injected random values, then going forward it might not flag random values as anomalies, because it may have inadvertently learnt a similar random pattern in the past. That sort of situation would throw the confusion matrix for a toss. What is the best strategy to evaluate anomaly detection in this case?
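For context, here is a minimal sketch of what I have in mind for the confusion matrix step (plain Python + NumPy; the variable names, the window layout, and the 0.8 threshold are my own illustrative assumptions, not the actual pipeline):

```python
import numpy as np

def confusion_matrix(anomaly_scores, is_injected, threshold=0.8):
    """Per-record confusion matrix from anomaly scores and a ground-truth mask.

    anomaly_scores : per-record anomaly scores from the model
    is_injected    : boolean mask, True where the record was an injected random value
    threshold      : score above which a record counts as "flagged" (assumed 0.8)
    """
    scores = np.asarray(anomaly_scores, dtype=float)
    truth = np.asarray(is_injected, dtype=bool)
    flagged = scores >= threshold            # model says "anomaly"

    tp = int(np.sum(flagged & truth))        # injected value, flagged
    fp = int(np.sum(flagged & ~truth))       # actual value, flagged
    fn = int(np.sum(~flagged & truth))       # injected value, missed
    tn = int(np.sum(~flagged & ~truth))      # actual value, not flagged
    return tp, fp, fn, tn

# Ground-truth mask built from the injection layout described above
# (25 actual, 15 random, 10 actual, repeating every 50 records).
n = 2500
is_injected = np.zeros(n, dtype=bool)
for start in range(25, n, 50):
    is_injected[start:start + 15] = True
```

I could also score at the window level instead of per record (a whole injected window counts as detected if any score inside it exceeds the threshold), but I am not sure which evaluation is more appropriate here.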