Is there a way to see the confidence of a single prediction?

I’ve been looking through the ModelResult class, trying to see if there’s a way to output the confidence (at least, that’s what I’m calling it for lack of a better term; maybe accuracy probability?) of a single prediction. From the experiments I’ve been running, the predictions made by NuPIC seem quite accurate. But I’m trying to find a way to tell whether a single prediction is more or less likely to be accurate, so I can put more or less weight on it.

I thought I saw something like that output when running a model. But then again, I might be mistaken.

Any thoughts?

Hello @mellertson. The internal representations, synapses, and dynamics do not have any weights, so assigning weights to the outcome would conflict with the learning principles of HTM.

The closest thing to a “confidence” would be the activation overlap on the distal segments of the predictive cells, because some are activated more strongly than others for a given set of predictions. Though I do not think that directly represents confidence. I would say the prediction accuracy comes from the combination of the predictive cells, so isolating one of the predictions may not be as meaningful as you think without the others. A single prediction/predictive cell means it is able to recognize a subsample of an existing activation. So even if you treated its depolarization rate (predictive activation) as its confidence, it would still not tell you how accurately it identifies the actual activation as a whole.

Sorry if I misunderstood your question. Weights kind of conflict with what HTM does, but I would certainly welcome more insights on this.

Are you thinking about this in terms of a Markov chain?


The ModelResult object has a property called inferences, which looks like this:

{
  'multiStepPredictions':{
    1:{
      40.2:0.91409241368304817,
      39.72324776721089:0.0033501887645081565,
      38.9483378705:0.0022740192046981555,
      4.738145870751808:0.0022601530710983958,
      37.03395142903199:0.002925129238754846,
      38.0683314:0.0022796547206743799,
      44.167395299999995:0.0026556539079170645,
      43.22574693799999:0.0025154454074292638
    }
  },
  ...
}

You can find out how confident each prediction is here. For example, this model is predicting one step into the future, so to get the confidences for that prediction, you can access result.inferences["multiStepPredictions"][1]. This tells you that the model is 91.41% confident that the next value (1 step ahead) will be 40.2 (from 40.2:0.91409241368304817). Likewise, the model is only 0.34% confident that 39.72 is the next value.
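
If it helps, here’s a minimal sketch of pulling the most confident prediction out of that structure. It assumes result is the ModelResult returned by model.run(record):

# Sketch: extract the most confident 1-step prediction from a ModelResult.
# Assumes `result` is the ModelResult returned by model.run(record).
predictions = result.inferences["multiStepPredictions"][1]

# Keys are predicted values; values are the model's confidence in each.
best_value = max(predictions, key=predictions.get)
confidence = predictions[best_value]

print("1 step ahead: %s (%.2f%% confident)" % (best_value, confidence * 100))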

If you had asked for 1 and 5 steps ahead, result.inferences["multiStepPredictions"] would contain confidences for both horizons, like this:

{
  'multiStepPredictions':{
    1:{
      23.5:0.0018656716417910447,
      41.5:0.0018656716417910447,
      28.6:0.0018656716417910447,
      47.5:0.0018656716417910447,
      45.61:0.0018656716417910447,
      11.6:0.0018656716417910447,
      5.338860581200769:0.0018656716417910447,
      22.4:0.94029850746269183
    },
    5:{
      23.5:0.0018656716417910447,
      41.5:0.0018656716417910447,
      28.6:0.0018656716417910447,
      47.5:0.0018656716417910447,
      45.61:0.0018656716417910447,
      11.6:0.0018656716417910447,
      5.338860581200769:0.0018656716417910447,
      22.4:0.94029850746269183
    }
  },
  ...
}
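
And since the original question was about putting more or less weight on a single prediction, here’s a rough sketch of how you might use these confidences. The 0.9 cutoff is just a hypothetical threshold, not anything built into NuPIC, and it assumes the model was configured to predict 1 and 5 steps ahead:

# Sketch: only act on predictions above a confidence threshold.
# Assumes `result` comes from a model configured to predict 1 and 5 steps ahead.
CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff; tune it for your data

for steps_ahead in (1, 5):
    predictions = result.inferences["multiStepPredictions"][steps_ahead]
    best_value = max(predictions, key=predictions.get)
    confidence = predictions[best_value]
    if confidence >= CONFIDENCE_THRESHOLD:
        print("%d steps ahead: trust %s (%.2f%% confident)"
              % (steps_ahead, best_value, confidence * 100))
    else:
        print("%d steps ahead: best guess %s, but only %.2f%% confident"
              % (steps_ahead, best_value, confidence * 100))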

Matt, your response is exactly what I was searching for!!

And coincidentally enough, I was stepping through a model this morning, exploring the data structures, and I found an example in my model of what Matt’s talking about.

In my excitement, I jumped on the HTM forum to post my discovery, only to find the community flag bearer had beaten me to it! 🙂

Thanks all for your responses!
