Getting anomaly likelihood

I'm modifying the hot gym anomaly code. Is this the correct way to obtain the anomaly likelihood?

# Feed one record into the OPF model.
result = model.run({
  "timestamp": timestamp,
  "kw_energy_consumption": consumption
})

# Grab a value out of the 1-step-ahead bucket likelihoods dict.
likelihood = result.inferences['multiStepBucketLikelihoods'][1].values()[0]

Hi,

I think that I’m struggling with the same issue… I also modified the hot gym code to try to get both an anomaly probability/score and a prediction. Did this approach prove to work correctly for you?

No, the anomaly likelihood is not part of the core HTM computation. It is a post-process that should be implemented outside the compute loop. There are instructions here.
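For reference, here is a minimal sketch of that post-process, assuming NuPIC's anomaly_likelihood helper and reusing the model and field names from the snippet above:

from nupic.algorithms import anomaly_likelihood

# Create the helper once, outside the compute loop.
likelihood_helper = anomaly_likelihood.AnomalyLikelihood()

result = model.run({
  "timestamp": timestamp,
  "kw_energy_consumption": consumption
})

# The raw anomaly score comes from the model (TemporalAnomaly inference type).
anomaly_score = result.inferences["anomalyScore"]

# The anomaly likelihood is computed as a post-process on top of the raw score.
likelihood = likelihood_helper.anomalyProbability(consumption, anomaly_score, timestamp)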

Hi,

I see… but in order to compute the anomaly likelihood, the raw anomaly score is required. However, I found that when I set “inferenceType” in my model params to “TemporalMultiStep”, I only get predictions in the inferences object and the “anomalyScore” is None, and when I set it to “TemporalAnomaly”, the “anomalyScore” is available but “multiStepBestPredictions” is not. Is it possible to use one HTM model to get both? Do I need to run two models in parallel (one for predictions and one for anomaly detection), or is there a better way? Can you please share some example code for that?


This was not true when I created this tutorial years ago. I’m not sure what has changed. Maybe you can look over this and see what might be different in your model?

Hi, thanks a lot for the quick reply. I saw in that tutorial the sentence “You might be wondering how to convert your prediction model into an anomaly model.”

So, a model can be used for prediction OR anomaly detection but not both?

I thought that the TemporalAnomaly model still returned predictions, but I may be wrong.
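One way to check is to run a TemporalAnomaly model and print both inference fields. A rough sketch, assuming model_params is your hot gym params dict with "inferenceType": "TemporalAnomaly" and a classifier enabled (the ModelFactory import path may differ between NuPIC versions):

from nupic.frameworks.opf.model_factory import ModelFactory

model = ModelFactory.create(model_params)  # params using "TemporalAnomaly"
model.enableInference({"predictedField": "kw_energy_consumption"})

result = model.run({
  "timestamp": timestamp,
  "kw_energy_consumption": consumption
})

# If the classifier region is wired up in the params, both fields
# should be populated in the same result.
print(result.inferences.get("anomalyScore"))
print(result.inferences.get("multiStepBestPredictions"))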


In practice, a ‘predicting’ model should be just as capable of anomaly detection, though not necessarily vice versa. Both are based entirely on the current TM state.

Since the anomaly score is just the proportion of bursting to active columns, it’s trivially calculated. The ‘prediction’ functionality, however, is non-trivial, since a given TM state (which can vary a lot in sparsity) has to be mapped back to predicted value(s) in the raw data type’s encoding space.
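For illustration, a minimal sketch of that calculation as a hypothetical helper (not NuPIC's actual implementation):

def raw_anomaly_score(active_columns, predicted_columns):
    # Fraction of currently active columns that were not predicted
    # by the TM on the previous step, i.e. the columns that burst.
    active = set(active_columns)
    if not active:
        return 0.0
    bursting = active - set(predicted_columns)
    return float(len(bursting)) / len(active)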

There have been different implementations of ‘classifiers’, i.e. mapping schemes from TM state back to a raw data value. I think the latest is the SDR Classifier.
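For example, a minimal sketch of SDR Classifier usage, assuming NuPIC's SDRClassifierFactory; the recordNum, patternNZ (active cell indices), bucketIdx and actValue shown here are made-up values standing in for what the encoder/TM step would supply:

from nupic.algorithms.sdr_classifier_factory import SDRClassifierFactory

classifier = SDRClassifierFactory.create()

# Learning step: associate a TM activation pattern (active cell indices)
# with the encoder bucket of the raw value seen on this record.
classifier.compute(recordNum=0, patternNZ=[4, 5, 6],
                   classification={"bucketIdx": 4, "actValue": 34.7},
                   learn=True, infer=False)

# Inference step: ask for the 1-step-ahead likelihood of each bucket.
result = classifier.compute(recordNum=1, patternNZ=[4, 5, 6],
                            classification={"bucketIdx": 4, "actValue": 34.7},
                            learn=False, infer=True)

# result[1] holds the per-bucket likelihoods one step ahead, and
# result["actualValues"] maps bucket indices back to raw values.
best_likelihood, best_value = max(zip(result[1], result["actualValues"]))
print("predicted value %s with likelihood %s" % (best_value, best_likelihood))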