I’m performing anomaly detection with HTM and trying to find optimal values of n and w for my features, but I’m getting strange results.
My guess was that the more I increase n (keeping w at 2% of n), the better the results should get, since more information should be encoded in the SDR. Currently I’m handling each feature separately (one HTM model per feature).
However, detection quality does not correlate with increasing n.
Is this common behavior? Have you seen this before?
See the attached picture for reference on the “strange behavior”.
The higher the MCC, the better the detection; note that the results are completely unpredictable.
I use the anomaly likelihood with a threshold of 0.8.
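For reference, here is a minimal pure-Python sketch of what n and w mean for a scalar encoder (this is an illustration, not NuPIC’s actual `ScalarEncoder` implementation): a value is mapped to a contiguous run of w active bits within an SDR of n total bits, so the encoding sparsity is w / n.

```python
def encode_scalar(value, minval, maxval, n, w):
    """Toy scalar encoder: map value to a contiguous block of w ones in n bits."""
    buckets = n - w + 1                      # number of possible block positions
    frac = (value - minval) / (maxval - minval)
    start = int(round(frac * (buckets - 1)))
    sdr = [0] * n
    for i in range(start, start + w):
        sdr[i] = 1
    return sdr

n, w = 400, 8                                # w = 2% of n, as in my setup
sdr = encode_scalar(42.0, 0.0, 100.0, n, w)
print(sum(sdr) / n)                          # encoding sparsity = w / n = 0.02
```

Increasing n while holding w at 2% of n keeps the sparsity fixed but gives finer resolution (more distinct bucket positions), which is why I expected results to improve monotonically.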
Don’t try to keep the encoding sparsity this low. You can encode the input space for the SP much more densely; I suggest trying up to 50% sparsity. The SP, if configured properly, will enforce the sparsity as input comes into the system.
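To illustrate the point (a toy k-winners-take-all, not the real Spatial Pooler algorithm, and the column count and sparsity values here are just common defaults, not taken from your setup): the SP activates a fixed fraction of its columns, so its output sparsity stays constant no matter how dense the encoder output is.

```python
import random

random.seed(0)
num_columns = 2048
output_sparsity = 0.02
k = int(num_columns * output_sparsity)       # 40 winning columns

# Fake overlap scores between a (possibly very dense) input and each column.
overlaps = [random.random() for _ in range(num_columns)]

# Active columns = the k columns with the highest overlap.
active = sorted(range(num_columns), key=lambda c: overlaps[c], reverse=True)[:k]
print(len(active) / num_columns)             # ~2%, regardless of input density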