Hello! I finally started testing my implementation on real data. I took a classic dataset that reflects the energy consumption of houses in a city.
On the graphs below you can see the training stage at the very beginning and then the detected anomalies. But are there too many false "anomalies"?
First screen, from 0 to 1000 steps:
Second screen, from 2000 to 3000 steps:
You can also see the active and predicted columns here. It seems to me that too many columns are being predicted…
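For context on why the number of predicted columns matters: if I understand HTM correctly, the raw anomaly score is the fraction of currently active columns that were *not* predicted at the previous step. So over-prediction tends to push the score toward zero rather than create false anomalies. A minimal sketch of that computation (the column indices are made-up illustration data, not from my run):

```python
def anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were not predicted (raw HTM anomaly score)."""
    active = set(active_columns)
    if not active:
        return 0.0
    unpredicted = active - set(predicted_columns)
    return len(unpredicted) / len(active)

# When many columns are predicted, the overlap with the active set is
# high and the score is low:
print(anomaly_score([1, 2, 3, 4], [2, 3, 4, 5, 6, 7]))  # 0.25
# When nothing was predicted, everything is surprising:
print(anomaly_score([1, 2, 3, 4], []))                  # 1.0
```

So if anything, my suspicion is that too many predictions would *suppress* anomaly scores, which makes the many flagged anomalies in the screenshots even more confusing to me.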
Sorry for the vague, confusing question. I'm just trying to figure out whether the predictions are valid and whether there are too many false anomalies.