Let an HTM model do anomaly detection on a scalar timeseries x(t), which does not contain very low frequencies and which we can assume samples a physical phenomenon, like temperature. Let's also interpolate x to make y, the same signal resampled at a higher rate (say, k times the original). y therefore carries no extra information about the underlying phenomenon (no extra anomalies), but exhibits the same anomalous behaviour as x, albeit with many extra uninformative transitions.
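For concreteness, here is one way y could be constructed. This is just a minimal numpy sketch under my own assumptions (linear interpolation, an arbitrary upsampling factor k = 4, and a toy temperature-like signal), not part of the question itself:

```python
import numpy as np

# Hypothetical original series x(t): hourly temperature-like samples.
t = np.arange(0.0, 48.0, 1.0)                  # 48 hourly timestamps
x = 20.0 + 5.0 * np.sin(2 * np.pi * t / 24.0)  # smooth daily cycle
x[30] += 8.0                                   # inject one anomalous spike

# Upsample by k = 4 via linear interpolation: y keeps the same anomalies,
# it just adds many uninformative transitions between the original samples.
k = 4
t_dense = np.arange(0.0, t[-1] + 1.0 / k, 1.0 / k)
y = np.interp(t_dense, t, x)

# Each per-step change in y is roughly k times smaller than in x, which is
# one way to see that the extra transitions carry no new information.
print(np.abs(np.diff(x)).mean(), np.abs(np.diff(y)).mean())
```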
Is it reasonable to expect that a sufficiently large HTM model should perform equivalently at predicting anomalies in x and in y? Is this a desirable property? And what would happen in practice?
I would call this property time scale invariance.
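One way to state the property a bit more formally (my notation, not standard): writing a_x(t) and a_y(t) for the anomaly scores the model assigns to x and y, and taking y to sample the same phenomenon k times as densely, time scale invariance would mean

$$ y(kt) = x(t) \quad \Longrightarrow \quad a_y(kt) \approx a_x(t), $$

i.e. the anomaly score at corresponding physical moments should not depend on the sampling rate.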