[htm.core] How to use Predictor?

Hi, my HTM app takes several features as input and I would like to predict future values for some of them. Taking the hotgym example and this comment on #1712 as a base, I thought of creating one predictor per feature I want to predict, but Predictor.learn() takes the “bucket index” of the input that resulted in the given SDR. How can I compute that bucket index?

From what I read in the docs in the source code, the Predictor was taken from nupic.core’s SDRClassifier.


Looking at nupic’s CLAModel._handleCLAClassifierMultiStep() I found I can use Encoder.getBucketIndices(inputValue)[0] to get the bucket index for inputValue. But I noticed Predictor accepts only categories, so why is the hotgym example trying to predict consumption, which is a real value? By passing a bucket index, am I predicting ranges, where all values that fall in the same bucket have the same probability of occurring? Also, why does Predictor not take the input value into account while CLAModel does?

Hey @hldev,

I think the simplest way would be to have several models taking in those features, with a different predicted field for each model. As I understand it, any nupic model can only have one predicted field, based on how the classifier works. There may well be a way to hack around this, but it would require some custom logic that hasn’t been extensively unit tested (as the standard functions have). That’s why I’d recommend separate models with different predicted fields.

Alternatively, I suppose you could create separate Predictors (classifier objects) for each field, and run the outputs from one model through all the classifiers. The bucket indices you’d need are held in the encoder objects, but you’ll have to keep track of them separately, since in multi-field models these separate encoders are concatenated into one before input to the SP+TM.


Hi @sheiser1 ,

Digging into the source code, I found that the bucket index is given by floor(inputValue / encoderResolution) — is that right? I also found that htm.core’s Network API does not support Region.getSelf(), so I can’t interact with the wrapped encoder and need to compute the bucket index manually.
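For reference, a minimal sketch of that computation as I understand it (the resolution value here is made up for illustration):

```python
import math

def bucket_index(input_value, encoder_resolution):
    """Compute the bucket index as floor(inputValue / encoderResolution)."""
    return math.floor(input_value / encoder_resolution)

# e.g. with an illustrative resolution of 0.5:
bucket_index(12.3, 0.5)  # -> 24
```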

I can’t see how a model per feature would work, because the predicted value depends on the other features. In the hotgym example there are datetime and power consumption; would it work if I created two models, one for datetime and another for power consumption? Unless there is some way to merge both predictions in another model that’s able to capture the relationship between features.

I am now using multiple predictors, one per feature, but I don’t know how to interpret their predictions. I want to predict feature values that would occur together, like a joint probability. Is it OK to take each predictor’s prediction in isolation? Maybe yes, since the SDR holds the representation of the whole input, which includes all features.

Hi hldev,

The htm.core predictor was originally taken from nupic and then it was cleaned up and simplified.

The Predictor (both nupic & htm.core) can only predict categories (which are encoded as integers). So to predict a real value, it is first converted into an integer category. The hotgym example does this using the variable predictor_resolution, which controls the size of the categories.

In nupic the input’s “bucket index” was used as the category; in htm.core you must calculate that category yourself and give it to the predictor.

In htm.core, the predictor should be able to work with multiple categories, to predict multiple things at once. Everywhere a single category is expected, the code should also accept a list of categories. Simply convert each of your real-valued inputs into a distinct category and pass them to the predictor as a list.
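A rough sketch of that conversion step, with made-up feature names and resolutions (not the real hotgym values):

```python
# Made-up per-feature resolutions, just for illustration.
resolutions = {"power": 0.5, "temperature": 1.0}

def to_category_list(sample):
    """Convert each real-valued feature to its own integer category,
    producing the list of categories you'd hand to the predictor."""
    return [int(sample[name] // res) for name, res in resolutions.items()]

categories = to_category_list({"power": 7.5, "temperature": 21.0})  # -> [15, 21]
```

Each time step, you’d pass a list like this (instead of a single integer) into the predictor’s learn call.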

However, given what you’ve described about your inputs, I think you’re correct to use multiple predictors, one for each input value. You have multiple inputs, one HTM, and multiple predictors (one per input)? That sounds like a reasonable setup.

HTM (and I) do not know anything about “joint probabilities”.

hope this helps


Hi @dmac,

Sorry, I confused joint with conditional probability. So can the output of all the predictors be interpreted as a conditional probability?

I’m not positive but it sounds right to me.

Let’s say in the hot gym example there were two power consumption metrics (power1 & power2) instead of one, plus a timestamp. What I meant is to have two HTM models, which both take power1 & power2 as input features. The only difference between the two models is the predicted field: one predicts power1 and the other power2. That way you’d get predictions for each which are based on both input features, without needing any special hacks.

But if this scenario is supported well by htm.core, I’d say go for it! I’d be curious to see a comparison between this approach and the one I discussed. The outputs should be equal in theory, right @dmac?

If it does work well for you it’d be awesome to have this functionality as part of htm.core example code, so others with the same goal can easily implement it going forward.


IIRC the predictor treats each category as independent. It estimates how likely each category is, and then applies the softmax function to turn those estimates into a probability distribution.
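For concreteness, the softmax step looks roughly like this (a plain-Python sketch of the idea, not htm.core’s actual code):

```python
import math

def softmax(scores):
    """Normalize raw per-category scores into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Higher scores map to higher probabilities, and the result sums to 1.
probs = softmax([1.0, 2.0, 3.0])
```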