Strange Results on Anomaly Detection

Hi everyone,
I’m performing anomaly detection using HTM, and I’m trying to find optimal values of n and w for my feature encoders, but I get strange results.
My guess was that the larger I make n (keeping w at 2% of n), the better the results should get, since ‘more’ information should be encoded inside the SDR. Currently I’m using a set of features separately (one HTM model for each feature).
However, the detection results are not correlated with the increase of n.
Is this common behavior? Have you seen it before?
See the attached picture as a reference for the “strange behavior”.

  • The higher the MCC, the better the detection; note that the results are completely unpredictable.
  • I use the anomaly likelihood with a threshold of 0.8 (see the sketch after this list).
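For context, this is roughly how I compute the MCC. A minimal sketch; the label and likelihood arrays below are just placeholders, the real ones come from my dataset and the HTM model:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Placeholder inputs: ground-truth anomaly labels and the
# per-record anomaly likelihood produced by the HTM model.
labels = np.array([0, 0, 1, 0, 1, 0, 0, 1])  # 1 = true anomaly
likelihood = np.array([0.1, 0.3, 0.9, 0.2, 0.85, 0.4, 0.1, 0.95])

# Flag an anomaly whenever the likelihood crosses the 0.8 threshold.
predicted = (likelihood >= 0.8).astype(int)

# MCC ranges from -1 to +1; higher means better detection.
print(matthews_corrcoef(labels, predicted))
```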

Don’t try to keep the encoding sparsity this low. You can encode the input space for the SP much more densely; I suggest you try up to 50% sparsity. The SP, if configured properly, should enforce the sparsity as input comes into the system.
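To see what I mean, here is a toy numpy sketch (not the real SP code; the random connection matrix and the sizes are made up). Even with a 50%-dense encoding, a k-winners inhibition step pins the output at ~2% sparsity:

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_columns = 400, 2048
encoding = (rng.random(n_input) < 0.5).astype(float)  # ~50% dense encoding

# Random fixed connections standing in for the SP's proximal synapses.
connections = (rng.random((n_columns, n_input)) < 0.1).astype(float)

overlaps = connections @ encoding       # overlap score per minicolumn
k = int(0.02 * n_columns)               # 2% of minicolumns win
active = np.argsort(overlaps)[-k:]      # k-winners inhibition

# Output sparsity is fixed at ~2%, regardless of input density.
print(len(active) / n_columns)
```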


Additionally, here’s the same example with the prediction error instead of the anomaly likelihood.

I thought that this was the rule of thumb in terms of sparsity, as I keep seeing it everywhere.

See Encoders & Encoding Numbers for examples.
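For a quick illustration of what n and w control, here is a simplified scalar-encoder sketch in numpy (not the library implementation; the value range and parameters are made up):

```python
import numpy as np

def encode_scalar(value, min_val, max_val, n, w):
    """Encode a scalar into an n-bit array with w contiguous active bits."""
    sdr = np.zeros(n, dtype=np.uint8)
    # Map the value onto the available start positions for the w-bit block.
    fraction = (value - min_val) / (max_val - min_val)
    start = int(round(fraction * (n - w)))
    sdr[start:start + w] = 1
    return sdr

# w at 2% of n, as in the original question.
n = 400
w = max(1, int(0.02 * n))
print(encode_scalar(72.5, 0.0, 100.0, n, w).sum())  # -> w active bits
```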

I see. I think I mixed up the notion of sparsity inside the SP with sparsity inside the encoded SDRs.


You can control the sparsity inside the system by changing the number of active minicolumns with respect to the total number of minicolumns in the SP config.

sparsity = (# active minicolumns) / (total # minicolumns)

The “number of active minicolumns” is the k in the k-winners activation function. This formula gives you a sparsity; ours is usually 2% in the SP.
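As a quick worked example (hypothetical numbers, plain Python): with 2048 total minicolumns and k = 41 winners, the formula above gives roughly 2% sparsity:

```python
total_minicolumns = 2048
k = 41  # number of active minicolumns (the k in k-winners)

# sparsity = (# active minicolumns) / (total # minicolumns)
sparsity = k / total_minicolumns
print(f"{sparsity:.1%}")  # -> 2.0%
```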
