I’m calculating the anomaly score for input with two inferred fields, both of which are scalar encoded. One field (field1) ranges from 0 to 250, while the other (field2) ranges from 0 to 12000.
I’ve added both fields to the field encodings, but I don’t understand how the two input fields influence the anomaly score.
To test my code, I deliberately inject anomalies into my input set, in the form of a value of 0. The model detects an anomaly when field2 is 0 and field1 isn’t. However, when field1 is 0 and field2 isn’t, the model has a hard time detecting the anomaly. Can someone explain why this happens?
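For context, here is a simplified, standalone sketch of how I understand scalar encoding to work (this is not my actual NuPIC config; the `n` and `w` values are assumptions I picked for illustration). The point is that with the same encoder width, each bucket of field2 covers a much wider slice of the value range than a bucket of field1:

```python
def scalar_encode(value, minval, maxval, n=400, w=21):
    """Simplified scalar encoder: map a value into an n-bit
    array with a contiguous run of w active bits."""
    # Clip the value into the encoder's range.
    value = max(minval, min(maxval, value))
    # Number of distinct starting positions (buckets).
    buckets = n - w + 1
    # Map the value proportionally to a starting bucket index.
    start = int((value - minval) / (maxval - minval) * (buckets - 1))
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

# Approximate value range covered per bucket, with n=400, w=21:
res_field1 = (250 - 0) / (400 - 21)    # ~0.66 units per bucket
res_field2 = (12000 - 0) / (400 - 21)  # ~31.7 units per bucket
```

So, if my understanding is right, a field2 value of 0 and a field2 value of, say, 20 can end up with nearly identical (or identical) encodings, while field1 distinguishes values much more finely. I’m not sure whether this resolution difference is related to what I’m seeing.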