I’m trying to implement anomaly detection along the lines of the discussion in supplementary section S4 of the “Unsupervised real-time anomaly detection for streaming data” paper (Pages 6-8 of https://ars.els-cdn.com/content/image/1-s2.0-S0925231217309864-mmc1.pdf)
Let’s say I have two models, which output prediction errors s1_t and s2_t at every time t. The goal of the discussion is to be able to detect when the prediction error of the first model spikes at a different but close time to the second’s (e.g. s1_3 and s2_5 are both spikes). I’m very confused about why they propose including G, a Gaussian convolution kernel, which seems to use x, the input to HTM (the raw value).
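For what it’s worth, my naive reading would have been that the kernel is convolved with the error series themselves, something like

    smoothed_s1_t = sum_k G(k) * s1_{t-k},   with G(k) ∝ exp(-k^2 / (2 * sigma^2))

(sigma here is my notation, not the paper’s), so that two spikes a few time steps apart still overlap after smoothing. The appearance of x inside the kernel is the part I can’t follow.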
Could someone walk me through the math of this section, and if possible, how one would implement it?
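In case it clarifies what I mean, here is a minimal sketch of my current guess, not the paper’s actual method: convolve each error series with a discrete Gaussian G, then multiply the smoothed signals so that spikes at nearby times yield a large joint score. The parameter names sigma and radius are my own:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # Discrete Gaussian G(k) for k in [-radius, radius], normalized to sum to 1.
    k = np.arange(-radius, radius + 1)
    g = np.exp(-k**2 / (2.0 * sigma**2))
    return g / g.sum()

def smooth(errors, sigma=2.0, radius=5):
    # Spread each error spike over neighboring time steps by convolving with G.
    return np.convolve(errors, gaussian_kernel(sigma, radius), mode="same")

# Toy example matching the question: model 1 spikes at t=3, model 2 at t=5.
s1 = np.zeros(20)
s1[3] = 1.0
s2 = np.zeros(20)
s2[5] = 1.0

# After smoothing, the two spikes overlap, so the pointwise product is one
# possible score for "both models spiked at nearby times".
joint = smooth(s1) * smooth(s2)
print(joint.argmax(), joint.max())  # peak lands between t=3 and t=5
```

Is this roughly the right idea, or does G really operate on the raw input x?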