Yes, it should, as long as the encoders can handle the range change (and the RDSE should be able to).
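For what it's worth, here's a minimal sketch (assuming the Python NuPIC `RandomDistributedScalarEncoder`; the parameter value and sample numbers are just illustrative) of why a resolution-based encoder copes with a range change: bucket representations are created on demand, so values well outside the range seen so far still get encoded without reconfiguring a min/max.

```python
# Minimal sketch, assuming NuPIC's RandomDistributedScalarEncoder (RDSE).
# The resolution value here is illustrative, not a recommendation.
from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder

# No fixed min/max: buckets are created as new values arrive, so a jump in the
# input range (e.g. posts per hour going from ~10 to ~900) still encodes cleanly.
encoder = RandomDistributedScalarEncoder(resolution=1.0)

for postsPerHour in [5, 12, 50, 900]:
    sdr = encoder.encode(postsPerHour)
    print("%d -> active bits %s" % (postsPerHour, sdr.nonzero()[0]))
```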
I’m more comfortable seeing a very high anomaly likelihood used for flagging anomalies, because that is what has worked for us in the past. You might have success with different configurations and methods.
So, for data with a different input range, I tried running the model with the same initial parameters. This is the plot I get:
(This data is also from a similar context: number of posts per hour.)
In this case too, the anomaly likelihood seems to be very high for a lot of points. Is this expected behavior?
If it is, it would lead to a lot of points being classified as anomalies.
Yes, that’s what I’m used to seeing. From my previous post:
Even with a high threshold, I am still getting a lot of anomalies. Apart from increasing the anomaly likelihood threshold, is there a way to decrease the sensitivity of the algorithm so that, overall, it reports fewer anomalies?
That is, so that the model is more tolerant of anomalies.
What is the threshold you’re using? Why can’t you continue to dial it up?
I don’t know of one. Maybe @subutai knows.
I tried a bunch of values in the range 0.85-0.95. The thing is, I could fix it at a high value, but then as new data arrives, I feel it may stop reporting any anomalies at all because of the high threshold.
Typically we use a 0.99999 threshold, and our systems still get decent anomalies after running for months and months.
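To make that flagging rule concrete, here is a minimal sketch assuming NuPIC's `AnomalyLikelihood` helper; the model and stream wiring are omitted, and the function and variable names are my own:

```python
# Minimal sketch, assuming NuPIC's AnomalyLikelihood helper; only the
# thresholding logic is shown, the HTM model itself is omitted.
from nupic.algorithms.anomaly_likelihood import AnomalyLikelihood

ANOMALY_THRESHOLD = 0.99999  # the threshold mentioned above

likelihoodHelper = AnomalyLikelihood()

def isAnomalous(metricValue, rawAnomalyScore, timestamp):
    """Flag a point only when its anomaly likelihood crosses the threshold."""
    likelihood = likelihoodHelper.anomalyProbability(
        metricValue, rawAnomalyScore, timestamp)
    return likelihood >= ANOMALY_THRESHOLD
```

Because the likelihood is computed against a rolling distribution of recent anomaly scores, it keeps adapting to the stream's own noise level, which is part of why systems can still surface anomalies after running for months at such a high threshold.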