Why is the anomaly likelihood so high for the repeated data pattern? HELP

That looks better than before, when the anomaly score and likelihood were consistently erratic even while the signal was calm. HTM systems contain entropy; you can’t get rid of it. But if you dial it in like this, you can find anomalies like you just did!

Is that graphic not what you expected? This looks like a good anomaly indication to me. In both cases:

You got a significant jump in the log likelihood when the amplitude changed, and again after a sudden spike and another amplitude change. You are now at a point where you should run some experiments, or even find some real-world signal to parse. :smiley:
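If you want a quick experiment before hunting down real data, one option is to synthesize a calm sine wave and inject an amplitude change plus a single spike, then feed it to your model as usual. A minimal sketch (the file name, amplitudes, and step indices are arbitrary choices, not anything from this thread; adjust the header rows to whatever your input pipeline expects):

```python
import csv
import math
import random

# Build a calm sine wave, then change the amplitude and inject a single spike,
# roughly mimicking the pattern discussed above.
rows = []
for i in range(2000):
    amplitude = 1.0 if i < 1000 else 3.0       # amplitude change at step 1000
    value = amplitude * math.sin(2 * math.pi * i / 50.0)
    value += random.gauss(0, 0.05)             # a little noise
    if i == 1500:
        value += 10.0                          # sudden spike at step 1500
    rows.append((i, value))

# Write it out as a simple (timestamp, value) CSV.
with open("synthetic_signal.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "value"])
    writer.writerows(rows)
```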


Yes, I expected it :slight_smile: I was referring to the raw score from the TM. Is that behaviour due to the entropy of the system?
Thank you very much!


Yes, the anomaly score is really erratic. Don’t pay attention to it. The anomaly likelihood gives us a better signal. Choose a threshold level for the log likelihood to flag anomalies and have fun!
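For reference, here is a minimal sketch of that thresholding step, assuming NuPIC’s `AnomalyLikelihood` helper; the 0.5 cutoff on the log likelihood is only an illustrative value you would tune for your own data:

```python
from nupic.algorithms.anomaly_likelihood import AnomalyLikelihood

likelihood_helper = AnomalyLikelihood()
LOG_LIKELIHOOD_THRESHOLD = 0.5  # illustrative cutoff; tune for your data

def is_anomalous(value, raw_anomaly_score, timestamp):
    # Convert the erratic raw TM anomaly score into a smoothed likelihood...
    likelihood = likelihood_helper.anomalyProbability(
        value, raw_anomaly_score, timestamp)
    # ...then rescale it to a log likelihood, which spreads out values near 1.0.
    log_likelihood = likelihood_helper.computeLogLikelihood(likelihood)
    return log_likelihood > LOG_LIKELIHOOD_THRESHOLD, log_likelihood
```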


@sheiser1, thank you for the thorough reply! (it took me a while to process)
You verified many of my assumptions.

I was curious about the raw anomaly scores for the case where the TM is not reset, just to understand what determines the spacing between the non-zero results in the plot, since I expected the signal to die down faster.

Indeed, the log likelihood shows relatively low values there, and that is the signal to focus on.

Thanks again!
