I have a simple prototype of anomaly detection running in Java. It’s getting 1 Hz time-stamped data with measurements (in ms, in this case) that typically sit around 30-40.
When the data first starts flowing, the anomaly scores are (rightfully) all over the map. After 400-500 records they’ve settled down to a pretty even 0.0.
I’ve seen a few bursts of values in the 500+ (even 1000+) range, but these don’t cause even a ripple in the anomaly score. If I were doing a simple rolling mean these values would stand out, but they don’t seem ‘interesting’ to HTM.
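For comparison, this is roughly the rolling-mean baseline I have in mind (a quick self-contained sketch, not my actual pipeline; the window size of 50 and the 3-sigma threshold are arbitrary illustration values):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Rolling-mean z-score detector: flags a sample when it deviates from the
// mean of the recent window by more than some number of standard deviations.
public class RollingZScore {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;

    public RollingZScore(int size) { this.size = size; }

    /** Returns the z-score of value against the current window, then adds it to the window. */
    public double score(double value) {
        double z = 0.0;
        if (window.size() >= 2) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            double var = window.stream()
                               .mapToDouble(v -> (v - mean) * (v - mean))
                               .sum() / (window.size() - 1);
            double sd = Math.sqrt(var);
            z = sd > 0 ? Math.abs(value - mean) / sd : 0.0;
        }
        window.addLast(value);
        if (window.size() > size) window.removeFirst();
        return z;
    }

    public static void main(String[] args) {
        RollingZScore det = new RollingZScore(50);
        // Fill the window with typical readings in the 30-39 ms range.
        for (int i = 0; i < 50; i++) det.score(30 + (i % 10));
        // A 1000 ms burst is hundreds of standard deviations out.
        System.out.println(det.score(1000) > 3.0 ? "flagged" : "normal");
        // A normal reading right after the burst is not flagged (the burst
        // is now in the window, inflating the standard deviation).
        System.out.println(det.score(35) > 3.0 ? "flagged" : "normal");
    }
}
```

Against this baseline, the 500+ bursts would clearly be flagged, which is why their invisibility to the HTM score surprises me.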
In other cases I’ve seen values like 110 (or even 75) produce a score of 0.025.
- Is there some sensitivity value that I need to work with?
- What constitutes a high-value anomaly?
My next goal is to start saving this data to a permanent store so I can graph it and get a better sense of what’s going on. (Speaking of which: is there a field somewhere in the Java .onNext(Inference) callback that contains the latest values passed into the network? I haven’t found it yet. It would help greatly for storing/graphing.)
Edit: after crossing the 1100-record mark, almost every score is a 1.0 (with a few 0.9s and 0.975s), while the raw values are still averaging in the 30-50 ms range.
Clearly something has gone awry.