HTM does not perform well when learning a simple function like y=x!

I am a beginner with HTM. When I ran the quick-start example from the website with a dataset I made (I didn't change any parameters), I ran into a problem.

The quick-start example code:

import yaml
from nupic.frameworks.opf.model_factory import ModelFactory

_PARAMS_PATH = "model.yaml"

with open(_PARAMS_PATH, "r") as f:
  modelParams = yaml.safe_load(f)
  print modelParams

model = ModelFactory.create(modelParams)

# This tells the model the field to predict.
model.enableInference({'predictedField': 'consumption'})

import csv
import datetime
# Open the file to loop over each row
with open("realMode.csv") as fileIn:
  reader = csv.reader(fileIn)
  # The first three rows are not data, but we'll need the field names when
  # passing data into the model.
  headers = reader.next()
  reader.next()
  reader.next()

  for record in reader:
    # Create a dictionary with field names as keys, row values as values.
    modelInput = dict(zip(headers, record))
    # Convert string consumption to float value.
    modelInput["consumption"] = float(modelInput["consumption"])
    # Convert timestamp string to Python datetime.
    modelInput["timestamp"] = datetime.datetime.strptime(
      modelInput["timestamp"], "%m/%d/%y %H:%M")
    # Push the data into the model and get back results.
    result = model.run(modelInput)

    #print result
    bestPredictions = result.inferences['multiStepBestPredictions']
    allPredictions = result.inferences['multiStepPredictions']
    oneStep = bestPredictions[1]
    twoStep = bestPredictions[2]
    fiveStep = bestPredictions[5]
    # Confidence values are keyed by prediction value in multiStepPredictions.
    oneStepConfidence = allPredictions[1][oneStep]
    fiveStepConfidence = allPredictions[5][fiveStep]
    twoStepConfidence = allPredictions[2][twoStep]

    result = (oneStep, oneStepConfidence * 100,twoStep, twoStepConfidence*100,
          fiveStep, fiveStepConfidence * 100)
    print "1-step: {:16} ({:4.4}%)\t 2-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)

My dataset:

08/02/10 00:00,0
08/02/10 01:00,1
08/02/10 02:00,2
08/02/10 03:00,3
08/02/10 04:00,4
08/02/10 05:00,5
08/02/10 06:00,6
08/02/10 07:00,7
08/02/10 08:00,8
08/02/10 09:00,9
08/02/10 10:00,10
08/02/10 11:00,11
08/02/10 12:00,12
08/02/10 13:00,13
08/02/10 14:00,14
08/02/10 15:00,15
08/02/10 16:00,16
08/02/10 17:00,17
08/02/10 18:00,18
08/02/10 19:00,19
08/02/10 20:00,20
08/02/10 21:00,21
08/02/10 22:00,22
08/02/10 23:00,23
...
02/26/11 05:00,4997
02/26/11 06:00,4998
02/26/11 07:00,4999

It is a sequence following the function y=x, 5000 rows in total (the middle rows are omitted above).
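For reference, a dataset like this can be generated with a short script. This is just a sketch: the filename and timestamp format match the quick-start code above, though note that the quick-start loop also expects three header rows, which this file would not have.

```python
import csv
import datetime

# Build 5000 rows of (timestamp, value) where value = row index, i.e. the
# function y = x sampled once per hour starting 08/02/10 00:00.
start = datetime.datetime(2010, 8, 2, 0, 0)
rows = [((start + datetime.timedelta(hours=x)).strftime("%m/%d/%y %H:%M"), x)
        for x in range(5000)]

with open("realMode.csv", "w") as fileOut:
    csv.writer(fileOut).writerows(rows)
```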

And the results:

1-step:    4978.66666667 (99.99%)        2-step:    4978.66666667 (99.99%)       5-step:    4978.66666667 (99.99%)
1-step:    4979.66666667 (100.0%)        2-step:    4979.66666667 (100.0%)       5-step:    4979.66666667 (100.0%)
1-step:    4980.66666667 (99.97%)        2-step:    4980.66666667 (99.85%)       5-step:    4980.66666667 (99.97%)
1-step:    4981.66666667 (92.65%)        2-step:    4981.66666667 (92.41%)       5-step:    4981.66666667 (92.26%)
1-step:    4982.66666667 (89.2%)         2-step:    4982.66666667 (89.03%)       5-step:    4982.66666667 (88.6%)
1-step:    4983.66666667 (92.78%)        2-step:    4983.66666667 (92.77%)       5-step:    4983.66666667 (92.77%)
1-step:    4984.66666667 (99.99%)        2-step:    4984.66666667 (99.98%)       5-step:    4984.66666667 (99.99%)
1-step:    4985.66666667 (99.95%)        2-step:    4985.66666667 (99.96%)       5-step:    4985.66666667 (99.96%)
1-step:    4986.66666667 (93.35%)        2-step:    4986.66666667 (92.99%)       5-step:    4986.66666667 (94.28%)
1-step:    4987.66666667 (98.29%)        2-step:    4987.66666667 (98.3%)        5-step:    4987.66666667 (98.45%)
1-step:           4992.0 (49.19%)        2-step:           4992.0 (48.18%)       5-step:           4992.0 (47.94%)
1-step:    4989.66666667 (97.96%)        2-step:    4989.66666667 (97.98%)       5-step:    4989.66666667 (97.96%)
1-step:    4990.66666667 (98.76%)        2-step:    4990.66666667 (98.78%)       5-step:    4990.66666667 (98.78%)
1-step:    4991.66666667 (98.54%)        2-step:    4991.66666667 (98.57%)       5-step:    4991.66666667 (98.54%)
1-step:    4992.66666667 (98.48%)        2-step:    4992.66666667 (98.46%)       5-step:    4992.66666667 (98.44%)
1-step:    4993.66666667 (97.8%)         2-step:    4993.66666667 (97.79%)       5-step:    4993.66666667 (97.82%)
1-step:    4994.66666667 (97.47%)        2-step:    4994.66666667 (97.52%)       5-step:    4994.66666667 (97.49%)
1-step:    4995.66666667 (96.86%)        2-step:    4995.66666667 (96.87%)       5-step:    4995.66666667 (96.87%)

As you can see, the predicted value is always less than the original value (it should equal the original data), and the 1-step, 2-step, and 5-step values are almost the same. So I assume the algorithm didn't learn the function y=x well. But the function is so simple; why can't HTM learn it? I also assumed the function is simple enough that the parameters don't need to change. If I use linear regression from scikit-learn, it performs perfectly! So I don't know what the problem is.
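For comparison, what scikit-learn's LinearRegression computes for a single feature is ordinary least squares, which recovers y=x exactly from this data. A minimal sketch of that closed form, without sklearn:

```python
# Ordinary least squares on y = x:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x)
xs = list(range(5000))
ys = list(range(5000))  # y = x

n = len(xs)
mean_x = sum(xs) / float(n)
mean_y = sum(ys) / float(n)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var = sum((x - mean_x) ** 2 for x in xs)
slope = cov / var
intercept = mean_y - slope * mean_x

# Extrapolation to unseen x is then exact, e.g. the prediction for x=5004
# is slope * 5004 + intercept = 5004.
print(slope, intercept)
```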

I am not very familiar with the NuPIC framework, but what happens if you change the predicted field to an integer type?

I haven't tried yet, but I don't think it will help.

I see. Please try it once. It might be a rounding problem or something related to how the SDRs are formed depending on the dtype.

Thanks, I will try right away, and I would like to restate my problem: I created a dataset that satisfies the law y=x, and I hoped HTM could learn this law, but it does not work well. For example, when the original data is x=4999, HTM predicts the 1-step value as 4995.66666667, the 2-step value as 4995.66666667, and the 5-step value as 4995.66666667, rather than 5000, 5001, and 5004.

I think that for representing functions, you should give two values, x and y.
Then merge them into an SDR. Note that the system doesn't have any understanding of the number system itself.
So train it on the values of x and y (since the function could be x^2 as well, both x and y are needed, I suppose) and then map the predictions.
I think the problem is the type of output HTM gives.
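A toy illustration of the "merge x and y into one SDR" idea (simplified bit arrays, not NuPIC's actual encoders; the bucket scheme here is made up purely for illustration):

```python
# Encode x and y each as a simple "run of active bits" array, then
# concatenate the two encodings into one merged SDR.
def encode_scalar(value, min_val, max_val, n_bits=40, w=5):
    """Activate w consecutive bits at a position determined by the value."""
    buckets = n_bits - w
    frac = (value - min_val) / float(max_val - min_val)
    start = int(round(frac * buckets))
    return [1 if start <= i < start + w else 0 for i in range(n_bits)]

def encode_pair(x, y, min_val=0, max_val=100):
    # The merged SDR is just the concatenation of the two field encodings,
    # analogous to how multi-field inputs are combined before the spatial
    # pooler sees them.
    return encode_scalar(x, min_val, max_val) + encode_scalar(y, min_val, max_val)

sdr = encode_pair(3, 3)
print(len(sdr), sum(sdr))  # 80 bits total, 5 active per field
```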

Currently, HTM can learn that the sequence always increases by 1 each time, as shown below:
1-step: 4978.66666667 (99.99%)
1-step: 4979.66666667 (100.0%)
1-step: 4980.66666667 (99.97%)

But my biggest question is that its prediction is not correct: when x=500, the 1-step, 2-step, and 5-step values should be 501, 502, and 505, but the predictions are 500, 500, 500.

And these are some of my model parameters:

#Classifier parameters. For detailed descriptions of each parameter, see
#the API docs for nupic.algorithms.sdr_classifier.SDRClassifier.
verbosity: 0
regionName: SDRClassifierRegion
alpha: 0.1
steps: '1,2,5'
maxCategoryCount: 1000
implementation: cpp

I see. Maybe it isn't learning the sequences, and the incrementing output predictions you are getting are due to the changing nature of the input itself.
How is the accuracy so high if the next input is not the predicted output?
Are you sure the SDR Classifier region is the right one to use for this particular problem?

My input is a dataset satisfying the law y=x, like:

08/02/10 00:00,0
08/02/10 01:00,1
08/02/10 02:00,2
08/02/10 03:00,3
08/02/10 04:00,4
08/02/10 05:00,5
08/02/10 06:00,6
08/02/10 07:00,7
08/02/10 08:00,8
08/02/10 09:00,9
08/02/10 10:00,10

You can see that field 1 is increasing,

and the predictions have a similar structure:

1-step: 4978.66666667 (99.99%)
1-step: 4979.66666667 (100.0%)
1-step: 4980.66666667 (99.97%)

So I think the SDR Classifier is OK for this question. But the predicted values for 1-step, 2-step, and 5-step are not correct, because they differ from the original data at that moment.

I see. So I guess a single field is fine.
What time input are you giving to check the predictions? What happens if you remove the temporal content from the dataset? That is, only using the incrementing values over time without any actual encoding of time.
I am not yet clear on how the SDR Classifier region works, so I guess it's right.

I think the temporal content is needed. The dataset on the official website also uses it, and the complete model parameter file also includes parameters for the temporal content.

Oh. But do they encode mathematical functions in their datasets? When trying to extract sequences of patterns from unknown, albeit real, time-varying data, time-step encoding is required, but here we already have a definition of the function, and thus of the output we are trying to predict. Is this relevant here?
Did you check what happens for integer-type inputs and predictions?

Yes, it is relevant. When the timestamp value advances to the next step, the value to predict increases by 1:
08/02/10 00:00,0
08/02/10 01:00,1
08/02/10 02:00,2

and this is the meaning of the law y=x.

That is true for the type of dataset you have created, but y=x applies for randomly iterated values of x as well, even though after the entire forward and backward iteration the actual function map ends up being continuous over time.
For example, if you trained the system on random values of x and the corresponding y, using two fields and also time encoding, it should still figure out the linear relationship y=x. That is the kind of training I was thinking about. :thinking:
So if you are using time steps, I would suggest using multi-step increments and decrements in x and y over time. Sorry if this is vague; I am also trying to get a hold of the theory, and I don't know much about NuPIC.

But I have looked at the quick-start code. The HTM model depends on the parameters we give it, and it uses the timestamp value as one encoder input and the other field as a RandomDistributedScalarEncoder (as shown below), so I think the situation you describe does not apply.

Part of the model parameters I use for the quick-start example:
#List of encoders and their parameters.
encoders:
  consumption:
    fieldname: consumption
    name: consumption
    resolution: 0.88
    seed: 1
    type: RandomDistributedScalarEncoder
  timestamp_timeOfDay:
    fieldname: timestamp
    name: timestamp_timeOfDay
    timeOfDay: [21, 1]
    type: DateEncoder
  timestamp_weekend:
    fieldname: timestamp
    name: timestamp_weekend
    type: DateEncoder
    weekend: 21
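One detail worth noting in these parameters: with resolution: 0.88, the RandomDistributedScalarEncoder gives identical encodings to values that fall within the same resolution-wide bucket, and the classifier reports a per-bucket average value, which is likely why predictions like 4978.66666667 are not whole numbers. A toy sketch of the bucketing idea (a hypothetical helper, not NuPIC's actual implementation):

```python
def bucket_index(value, resolution=0.88):
    # Values inside one resolution-width land in the same bucket and thus
    # get the same encoding; distinct values in a bucket get averaged in
    # the classifier's output, producing non-integer predictions.
    return int(value // resolution)

print(bucket_index(4978.0), bucket_index(4978.1))  # same bucket
print(bucket_index(4978.0), bucket_index(4979.0))  # different buckets
```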

Oh, okay. :thinking: My bad. Do tell when it works along with the corrections.


First of all, NuPIC will never predict a sequence like y=x or y=x^2, because those functions don't create repeating sequences. They just go on in one direction forever; there are never any patterns to understand. It does better with a repeating pattern like y=sin(x), but still not well (for other reasons).

Second, you can format the code in your posts and it looks much better. I've already done this for you.


Quick question: would it work if you provided the differential in a separate column?
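The differential idea is easy to prototype: derive a delta column from the raw values and feed that as a field. For y=x the delta is the constant 1, which turns the ever-growing series into a trivially repeating signal. A sketch of the preprocessing, assuming the column layout of the dataset above:

```python
# Compute first differences of the value column; for y = x every delta is 1,
# so the derived column never leaves the range the encoder was trained on.
values = list(range(5000))  # the y = x column
deltas = [b - a for a, b in zip(values, values[1:])]
print(set(deltas))  # every delta is 1
```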

This depends on the encoding, right? If we find a way to encode the property of "increment" (for y=x, with analogous properties for other functions) into SDRs, then will it work? For example, we could activate one bit in an SDR row for every increment, and after reaching the end of the row, wrap around and activate the first bit again. Given proper representations for numerical quantities in the SDR, the layer could hypothetically predict the next number, especially if we use buckets for encoding numbers. I think. :thinking: