Force the usage of a field by swarm

Hey everybody,

I’m doing predictions. To find the model_params I use a swarm.
I want to predict consumption data and have the fields ‘consumption’, ‘temperature’ and ‘datetime’.
Now the results of the swarm are model_params, which do not include the temperature.
But I know (from an analysis) that there is a strong correlation between the consumption and the temperature.
Is there any way to force the swarm to use the temperature and figure out the best params for this field?
I know that I can add it by hand. But I want the best possible params for this field.
And finding them by hand can be very expensive.

Soo can I do this?
Thanks a lot in advance for your help.

You could try running an initial swarm with temperature set as the predicted field just to see what params it chooses for it, then apply those into the other model which forecasts consumption.
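In case it helps, here is a minimal sketch of that two-pass idea, assuming the usual SWARM_DESCRIPTION dict from the NuPIC swarming examples (the field names and structure below are illustrative, not your actual config):

```python
import copy

# Hypothetical base swarm description, shaped like the NuPIC swarming examples.
SWARM_DESCRIPTION = {
    "includedFields": [
        {"fieldName": "datetime", "fieldType": "datetime"},
        {"fieldName": "consumption", "fieldType": "float"},
        {"fieldName": "temperature", "fieldType": "float"},
    ],
    "inferenceType": "TemporalMultiStep",
    "inferenceArgs": {"predictedField": "consumption", "predictionSteps": [1]},
}

# Pass 1: swarm with temperature as the predicted field, only to harvest
# the encoder params the swarm picks for it.
temp_swarm = copy.deepcopy(SWARM_DESCRIPTION)
temp_swarm["inferenceArgs"]["predictedField"] = "temperature"

# Pass 2: run the original description for consumption, then copy the
# temperature encoder params from pass 1 into that model's sensorParams.
```

The deepcopy matters: mutating a shallow copy would also change the original description's nested `inferenceArgs` dict.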

thanks =) I already tried this… but now something super weird is happening:

I have my model_params from the swarm:

 { 'aggregationInfo': { 'days': 0,
                   'fields': [],
                   'hours': 0,
                   'microseconds': 0,
                   'milliseconds': 0,
                   'minutes': 0,
                   'months': 0,
                   'seconds': 0,
                   'weeks': 0,
                   'years': 0},
   'model': 'HTMPrediction',
   'modelParams': { 'anomalyParams': { u'anomalyCacheRecords': None,
                                  u'autoDetectThreshold': None,
                                  u'autoDetectWaitRecords': None},
               'clParams': { 'alpha': 0.08843107003444935,
                             'regionName': 'SDRClassifierRegion',
                             'steps': '1',
                             'verbosity': 0},
               'inferenceType': 'TemporalMultiStep',
               'sensorParams': { 'encoders': { '_classifierInput': { 'classifierOnly': True,
                                                                     'clipInput': True,
                                                                     'fieldname': 'consumption',
                                                                     'maxval': 617.76,
                                                                     'minval': 0.0,
                                                                     'n': 521,
                                                                     'name': '_classifierInput',
                                                                     'type': 'ScalarEncoder',
                                                                     'w': 21},
                                               u'consumption': None,
                                               u'datetime_dayOfWeek': None,
                                                u'datetime_timeOfDay': { 'fieldname': 'datetime',
                                                                         'name': 'datetime',
                                                                         'timeOfDay': ( 21, 1),
                                                                         'type': 'DateEncoder'},
                                               u'datetime_weekend': None,
                                               u'temperature': None},
                                 'sensorAutoReset': None,
                                 'verbosity': 0},
               'spEnable': True,
               'spParams': { 'boostStrength': 0.0,
                             'columnCount': 2048,
                             'globalInhibition': 1,
                             'inputWidth': 0,
                             'numActiveColumnsPerInhArea': 40,
                             'potentialPct': 0.8,
                             'seed': 1956,
                             'spVerbosity': 0,
                             'spatialImp': 'cpp',
                             'synPermActiveInc': 0.05,
                             'synPermConnected': 0.1,
                             'synPermInactiveDec': 0.01904419349817022},
               'tmEnable': True,
               'tmParams': { 'activationThreshold': 15,
                             'cellsPerColumn': 32,
                             'columnCount': 2048,
                             'globalDecay': 0.0,
                             'initialPerm': 0.21,
                             'inputWidth': 2048,
                             'maxAge': 0,
                             'maxSegmentsPerCell': 128,
                             'maxSynapsesPerSegment': 32,
                             'minThreshold': 11,
                             'newSynapseCount': 20,
                             'outputType': 'normal',
                             'pamLength': 5,
                             'permanenceDec': 0.1,
                             'permanenceInc': 0.1,
                             'seed': 1960,
                             'temporalImp': 'cpp',
                             'verbosity': 0},
               'trainSPNetOnlyIfRequested': False},
 'predictAheadTime': None,
 'version': 1}

Now I’m adding the encoding for the temperature:

       u'temperature':{ 'clipInput': True,
                        'fieldname': 'temperature',
                        'maxval': 25.25,
                        'minval': -5.25,
                        'n': 387,
                        'name': 'temperature',
                        'type': 'ScalarEncoder',
                        'w': 21},

But my prediction results are terrible now. I am using a cross-validation because I want to compare the results of different models. And now I have these results for the HTM:

which is basically a naive prediction.
And for another test_set (of the crossvalidation) I have:

So basically a prediction of zero all the time. What can be the reasons for behavior like this? Before adding the temperature I had:

So why this strange behavior when adding information? (And I know that there is a correlation of -0.9 between temperature and consumption… so it is important information.)

I would really appreciate some help.
Thanks a lot in advance.

Hey =)

I have another question.
Sooo I have again the model_params above:

If I use the model just like this (so no added temperature and especially no added consumption), I get results which are very similar to the results of a naive prediction:

For these predictions learning is enabled (in order to compare the results to other models, which also do not learn on the test sets).
But how is this possible? I do not pass the consumption as variable to be encoded… So how does the model know about this? I am confused and would really appreciate some help.
Thanks a lot.

The swarm looks for the set of input fields & params which most accurately forecasts the predicted field – “consumption” in your case. Tho counterintuitive, it is possible that future consumption may be best predicted without past consumption itself – if there is another field with a clearer connection to it.

So when the swarm picks up only ‘datetime_timeOfDay’, it means that the next hour’s consumption is most clearly predicted by simply what time it is. It could be that current consumption is predictive of the next consumption (auto-correlation), and that the temperature has a known correlation too – but it can still be that including those fields adds more noise to the system overall than simple clock time by itself.

It may seem strange tho it could make sense, since people’s energy consumption is naturally tied to what part of the work day it is. It’s a sure bet for example that there’ll be a lull in overall energy use over night (say from 11 PM to 6 AM), then a jump as people make breakfast, another as they make lunch and again with dinner. It reminds me of how something very predictable is described as being “like clockwork”.

Also I saw you say that you don’t have the model anomaly scores since you’re doing forecasting – you actually do get the anomaly scores too. I would recommend plotting them to get a sense of how quickly the TM is learning the sequences. The classifier – which converts the TM state into forecasts in raw data type form (like consumption values) – is a post-processing step on top of the core HTM algorithms. I think the classifier could be yielding a naive-like prediction pattern from either a well-learned or a somewhat confused model – so I’d plot the anomaly scores to get a sense for the state of the model itself.
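For reference, the raw anomaly score the TM produces is just the fraction of currently active columns that were not predicted at the previous timestep – a toy version of the computation:

```python
def anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were NOT predicted one step earlier.

    0.0 means the input was fully predicted; 1.0 means it was a complete
    surprise to the TM.
    """
    active = set(active_columns)
    if not active:
        return 0.0
    unpredicted = active - set(predicted_columns)
    return len(unpredicted) / float(len(active))
```

So a flat score of 0 after training means the TM is anticipating every input column, and spikes mark transitions it hasn’t seen before.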


Thanks for this really helpful answer =)
I’ll try to get my anomaly score and post my results here.

But anyway… What I do not understand is that the system knows the exact consumption values from the time step before. Is there anything internal going on which passes them, even though this variable is not encoded?
Again, thanks a lot for your help :blush:

HTM is a prediction system in that it forms an internal model. At first blush that brings to mind all sorts of sophisticated internal planning like DL, where there are mysterious internal manifolds and unknowable parsed internal meaning, but it is really very much simpler than that.

HTM learns sequences and sequences of sequences.

You see it predicting consumption, but it is tracking what happened in the past as a sequence – the prediction part is assuming things will happen the same as they have happened before.

HTM will not do a good job of predicting a novel transition that it has never seen before; such an event will be seen as an anomaly and learned the first time it happens. HTM learns fast, but it does have to see a transition at least once to predict it in a future stream.
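A toy illustration of that “must see it once” property, using a plain first-order transition table rather than real HTM (just to show the idea):

```python
from collections import defaultdict


class ToySequenceLearner:
    """First-order stand-in for the TM: only predicts transitions it has seen."""

    def __init__(self):
        self.transitions = defaultdict(set)  # prev value -> set of seen next values
        self.prev = None

    def step(self, value):
        # The input is anomalous if we never saw prev -> value before.
        novel = self.prev is not None and value not in self.transitions[self.prev]
        if self.prev is not None:
            self.transitions[self.prev].add(value)  # learn the transition immediately
        self.prev = value
        return novel


learner = ToySequenceLearner()
first_pass = [learner.step(v) for v in "ABCD"]   # every transition is novel
learner.prev = None                               # restart the sequence
second_pass = [learner.step(v) for v in "ABCD"]  # all transitions already learned
```

On the first pass every transition is flagged as novel; on the second pass none are – it learned each one the single time it appeared.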


suuure… But my point is:
The system is just encoding the time of day… nothing else!! (see my model_params from above:)

With these parameters I got results like this:

So to do a prediction at time step t my model just knows the current time of day… but in my understanding NOT the consumption… So how can the model know which value is the current consumption value? In the figure above you see that the high peaks have the exact same value, but with lag 1.
I would understand this behavior, when I pass the current consumption to the model. But in my understanding this is not happening… just the current time of day is known.

I hope my issue is now clearer… So is there anything going on internally that passes the current value of the predicted field to the model, so that it is able to track it?

I would really appreciate some clarification. Thanks a lot in advance. :blush:

So I tried to extract the anomaly score… but it is None all the time.
I do extract it like this:

record = {
    "datetime": timestamp,
    "consumption": consumption,
    "temperature": temperature
}
result = model.run(record)
result = shifter.shift(result)
anomaly_score = result.inferences["anomalyScore"]

(and my prediction comes from here:)
prediction = result.inferences["multiStepBestPredictions"][1]

And even after 5000 lines of training I get this output from print(result.inferences):

{'multiStepPredictions': {1: {407.6458116042465: 0.8741763813255774, 393.12: 0.12582361867442282}}, 'multiStepBucketLikelihoods': {1: {404: 0.12582361867442282}}, 'multiStepBestPredictions': {1: 407.6458116042465}, 'anomalyScore': None}

soo does this mean that my system is super confused? Or is something wrong internally? I never worked with the anomaly score before… so no clue how it should look :see_no_evil: :sweat_smile:

I would recommend learning about the SDR Classifier, which is how the system turns TM-states into raw-value predictions. The TM state is a generic sparse array which doesn’t inherently map to any specific raw data type or application domain – the Classifier is what maps those states back into ‘consumption’ values or whatever is actually being forecasted.
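If it helps to see the mechanics: the SDR Classifier is essentially a single-layer softmax network mapping active-cell indices to value buckets. A stripped-down sketch (the cell/bucket counts and learning rate below are made up for illustration, not NuPIC’s defaults):

```python
import math


class TinySDRClassifier:
    """Single-layer softmax from active TM cells to value buckets."""

    def __init__(self, num_cells, num_buckets, lr=0.1):
        self.weights = [[0.0] * num_buckets for _ in range(num_cells)]
        self.num_buckets = num_buckets
        self.lr = lr

    def infer(self, active_cells):
        # Sum the weights of the active cells per bucket, then softmax.
        sums = [sum(self.weights[c][b] for c in active_cells)
                for b in range(self.num_buckets)]
        exps = [math.exp(s) for s in sums]
        total = sum(exps)
        return [e / total for e in exps]  # probability per bucket

    def learn(self, active_cells, target_bucket):
        # Gradient step toward a one-hot target on the true bucket.
        probs = self.infer(active_cells)
        for b in range(self.num_buckets):
            err = (1.0 if b == target_bucket else 0.0) - probs[b]
            for c in active_cells:
                self.weights[c][b] += self.lr * err


clf = TinySDRClassifier(num_cells=50, num_buckets=4)
for _ in range(200):
    clf.learn([1, 5, 9], target_bucket=2)  # one TM state always precedes bucket 2
probs = clf.infer([1, 5, 9])
```

This is also why the true consumption value enters the picture even when consumption is not encoded: it supplies the `target_bucket` during training, so the classifier learns which raw values tend to follow each TM state.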

I very much think so! The anomaly score shouldn’t be None under any circumstances.

Here you’re passing in all 3 fields, tho the model params from your swarm only use ‘datetime’ right? I’d try dropping the other 2 from your input dict to see if that changes anything.


Hey again,
Thanks for your answers :blush:
And sorry for late replies all the time… I am in Germany, so time difference makes it difficult to respond earlier.
Anyway… back to the topic =)

Sure I did that… and that’s indeed something I am not sure what is going on…
So I know the SDR classifier is a one-layer NN and gets the SDR representing the active cells in every time step. And as I understand it, the target value (so in my case the true consumption value in every time step) is passed to this classifier as well in order to train the weights…
But I thought that the consumption is just passed in order to train… and if I’m not encoding the consumption for the HTM model, this value won’t be used for prediction.

So how can a model predict a value it does not know? Or can this be because of the NN? That the NN is learning to make some kind of naive prediction?

But in which way does this make sense? I mean my consumption is my predicted field… so in order to train the NN I need the consumption.
Anyway… I tried this and I just got an error… so indeed it does not help :sweat_smile:

But a small success :partying_face:. I changed my model_params in the following way (I just post this in case someone has the same issue!):

'inferenceType': 'TemporalAnomaly',

Soo my anomaly score is 1 at the beginning (of course) and 0 all the time at the end!! So my system doesn’t seem to be confused…

The point I don’t get is: in my opinion, the system has just one variable encoded… so it has just ONE piece of information (the time of day). As I explained, I am turning off learning in order to compare prediction results of a cross-validation. So I would expect the SAME prediction for 12 o’clock every day…
But obviously this is not happening.
I’m running out of ideas… the only reason I am able to find in my model_params is this:

'sensorParams': { 'encoders': { '_classifierInput': { 'classifierOnly': True,
                                                                         'clipInput': True,
                                                                         'fieldname': 'consumption',
                                                                         'maxval': 617.76,
                                                                         'minval': 0.0,
                                                                         'n': 490,
                                                                         'name': '_classifierInput',
                                                                         'type': 'ScalarEncoder',
                                                                         'w': 21},

So maybe the information is encoded in some way? But I thought that this is just passed to the NN of the SDR classifier in order to learn.

Just for information: I also tried two other, completely different datasets… and I have the same problem.
Maybe I am doing something wrong while running the model (maybe my usage of the shifter is wrong… but I don’t think that this can be the case).
Anyway, I just post my code here… maybe someone can help me with the problem:

def run_model_with_temperature(model, input_file, skiprows=None,
                               output_file_path='', date_format="%Y-%m-%d %H:%M:%S"):
    data = input_file  # a DataFrame is passed in here (see the call below)
    data_results = pd.DataFrame()

    shifter = InferenceShifter()
    counter = 0
    for index, row in data.iterrows():
        counter += 1
        if counter % 1000 == 0:
            print "Read %i lines ..." % counter

        timestamp = datetime.datetime.strptime(str(index), date_format)
        consumption = float(row[0])
        temperature = float(row[1])
        record = {
            "datetime": timestamp,
            "consumption": consumption,
            "temperature": temperature
        }

        result = model.run(record)
        result = shifter.shift(result)
        prediction = result.inferences["multiStepBestPredictions"][1]

        data_results = data_results.append(
            pd.DataFrame({'consumption': consumption, 'temperature': temperature,
                          'prediction': prediction}, index=[index]))

    data_results.to_csv(output_file_path + '.csv')
    return data_results

And to call all this I’m using:

model = create_model(model_params, predicted_field_name)
data_result = run_model_with_temperature(model, input_file=data, skiprows=[1, 2], 
                                         date_format="%Y-%m-%d %H:%M:%S")

Maybe someone has an idea :blush: :blossom:

Soo and another thing I want to share: I have done a nice plot containing a lot of information:

On the top you see the actual consumption (gray) and the prediction (blue, dashed line).
The second plot is the error and the anomaly score.
The third is the number of columns which contain a predictive cell.
And the last plot shows the total number of active cells…

I am not sure… but it seems that the model is just seeing the time of day and then, every day, one column is bursting (because every day at 2:00 the total number of active cells is, I think, 71… so 39 predictive cells plus one bursting column).

Now I’m wondering… when the system is not able to predict a value that makes sense, does it just fall back to the last value, so that there is such a pass-through regardless of whether the value is encoded?
I think this could be a general explanation for my problem… but I am absolutely not sure about this :sweat_smile:


I suspect this is the case. Also I notice that those predicted vs actual consumption values seem to fall within the range 50–150, though I notice your min/max values for the classifier are 0 and 618. Have you tried condensing that range for the classifier?

Hello =) thanks for your reply… I am running out of ideas :sweat_smile: :see_no_evil:

Well… as I explained, I am doing a cross-validation.
The plots are just the results of ONE test set. Obviously this one just covers the range 0 to 200 (I would say because of the anomaly at the beginning).
But the other data does indeed contain values from 0 to 618. So this encoding is right.

But what I noticed is that my anomaly score for the spatial anomaly at the beginning is 0:

Sooo something is wrong here for sure… but I think it can’t be my code, because I also used it for some other predictions… and I had quite plausible results.

This encoding min/max is the only thing I can think of to play with to try and improve the results. I’ve no doubt your code works in general, but this range can have a major effect on the model, so I think it’s worth playing with.

What I do is sample some data before forming the model, and set the min/max values automatically from that sample – so I don’t have to hard-code those numbers. I set the min/max to some percentiles of the data found for those columns. I think setting the range too wide makes many of the values look identical to the classifier. So if the min/max are 0/600, lots of values right around 100, 105, 95, 100 etc. will look basically identical.
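For what it’s worth, a sketch of that percentile-based range setup (the nearest-rank percentile helper and the cutoff choices here are illustrative, not a specific NuPIC utility):

```python
def encoder_range(sample, low_pct=1.0, high_pct=99.0):
    """Pick encoder min/max from sample percentiles instead of the full range."""
    data = sorted(sample)

    def pct(p):
        # Nearest-rank percentile over the sorted sample.
        k = int(round(p / 100.0 * (len(data) - 1)))
        return data[max(0, min(len(data) - 1, k))]

    return pct(low_pct), pct(high_pct)


sample = list(range(100)) + [1000]  # typical values 0..99 plus one big outlier
minval, maxval = encoder_range(sample)
# The single outlier no longer stretches the encoder max out to 1000.
```

These values can then be dropped straight into the scalar encoder’s `minval`/`maxval` fields when building the model params.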

That being said I’m not sure how much this would improve the results if at all. I have found it helpful to automate the encoding process tho.

okay… well I’m choosing my range automatically as well…
But anyway… maybe I go and try some other encodings and see what happens =)
If I get new results I’ll share them here…maybe I can help someone else =)

Thank you sooo much for spending all that time on my problem. I am really grateful!!

May I ask one last question:

This behavior shouldn’t be like this, right??

The gray is consumption and the blue is the HTM prediction, right? It makes sense to me that the largest error would align with the biggest deviations between the blue and gray lines – so that early error spike looks plausible to me.

I notice that the blue prediction doesn’t have any big spikes/dips like the consumption line, which makes sense since the HTM is using ‘datetime’ only in its encoding, right? The flat anomaly scores with periodic pops suggest a perfectly repeating pattern – as date-time must be, right?

Based on this anomaly score behavior the TM seems to be learning that date-time sequence clearly, as it should (besides the repeating input issue discussed elsewhere). The one other thing I’d look at is the number of predictive cells in the system per timestep – to measure how precisely the TM is predicting the next input. The anomaly score measures TM surprise but not precision.
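A crude way to quantify that precision, assuming you can extract the set of predicted columns per timestep from your model (the model-specific extraction itself is not shown here):

```python
def mean_predicted_columns(predicted_columns_per_step):
    """Average number of predicted columns per timestep.

    Fewer predicted columns means a sharper (more precise) prediction; the
    anomaly score alone can't distinguish a sharp prediction from a model
    that predicts almost everything and is therefore never surprised.
    """
    counts = [len(cols) for cols in predicted_columns_per_step]
    return sum(counts) / float(len(counts))


# Example: three timesteps with 2, 1 and 3 predicted columns respectively.
sharpness = mean_predicted_columns([{1, 2}, {3}, {4, 5, 6}])
```

A model predicting a single column per step is maximally precise; a mean near the full column count means the TM is hedging across nearly everything.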

So it seems the real non-trivial learning happening here is in the Classifier – mapping those consistent TM states (encoding date times) to raw consumption values.