SDR Classifier for spatial pooler label


I know this has been asked a number of times in a number of ways, but I'm struggling to get the SDR classifier to actually work as intended.

Has anyone managed to get the SDR classifier to work (at all) using a spatial pooler output (SDR) for an image, i.e. MNIST?

And if so, do you happen to know the best way to work with the classifier? Specifically:

Did you need to increment the recordNum variable monotonically, or can you just set it to 0? I'm not sure why you would need to track each record in the case of MNIST classification.

Can patternNZ be fed the SP active columns, along with the supplied label for the classification={"bucketIdx": key, "actValue": key} argument? This seems to me to be the correct usage.
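For concreteness, here is a sketch (plain Python, no nupic required; the helper name is made up) of how I mean to package one SP output plus its MNIST label into the keyword arguments that SDRClassifier.compute() takes, assuming a 1-to-1 digit-to-bucket mapping:

```python
# Sketch only: packaging an SP output and an MNIST label into the
# arguments SDRClassifier.compute() expects. Helper name is hypothetical.
def make_compute_args(record_num, active_columns, digit, learn=True):
    """active_columns: indices of active SP columns; digit: 0-9 label."""
    return {
        "recordNum": record_num,
        "patternNZ": sorted(active_columns),
        # 1-to-1 mapping: the digit is both the bucket index and the value.
        "classification": {"bucketIdx": digit, "actValue": digit},
        "learn": learn,
        "infer": not learn,
    }

args = make_compute_args(0, {310, 12, 77}, 7)
```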

If I train on an SP SDR and then also try to infer on the same SP SDR, it gives 100% accuracy (which seems odd).

But providing it a set of test sample SDRs gives 90% error, simply because the classifier always outputs the probabilities as [0, 0, 0, 0, 0, 0, 0, 0, 1], which predicts '9', and in the test set 10 out of 100 samples are 9s.
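That error rate is consistent with the arithmetic: a classifier that always answers '9' on a 100-sample test set containing ten 9s gets exactly 10% accuracy (a quick sanity check with stand-in labels, not real data):

```python
# Sanity check: error rate of a constant-'9' classifier over a
# hypothetical 100-sample test set with 10 nines in it.
labels = [9] * 10 + [0] * 90          # stand-in test labels
predictions = [9] * len(labels)       # classifier always says 9
accuracy = sum(p == t for p, t in zip(predictions, labels)) / float(len(labels))
print(accuracy)  # 0.1, i.e. 90% error
```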

Once again this seems odd as I would assume the classifier would have some variability in prediction.

It just doesn’t seem to work, so any help is appreciated.

I am also working on MNIST classification now.
Currently I use encoder + TM + anomaly and get a classification rate of 95%, but I am looking for new ideas with a higher recognition rate.
As far as I understand, the SDRClassifier works well on continuous processes over time, and is not so suitable for discrete tasks like MNIST classification.
The original MNIST experiment and OPF from Numenta used the KNN classifier.

Before we investigate testing the KNN classifier, I would like to ask the Numenta folks @scott, @rhyolight, @mrcslws for their experience: would the KNN classifier still be better than the SDR classifier?
What do you think?

Best thanks


I don't know for sure; I don't think we've done it this way. We have always just used the SP for classification. I would try both, honestly.

Regarding the KNN classifier: I can (possibly) get a simple spatial-pooler-only version of HTM (so no TM) to around 2% error on the MNIST data, but this is lower than I would expect, so I'm a bit cautious about calling it a fair result without further testing.

I can't get the SDR classifier to work at all; it just pumps out NaN values for its classifications / probabilities.

I’ll update later on when I know more.

I'm also going to look at including the TM in the learning process, but it is hard to say whether this will improve things for the KNN. Originally I thought you could scan the MNIST digit / image in specific slices, feed each slice into the KNN, and then aggregate the predictions over the 28 or so slices.

But you still have to train the KNN, and I'm not sure it can handle the variation such an approach may introduce.
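To make the slice idea concrete, here is a rough sketch (plain Python, overlap-based 1-nearest-neighbour on binary SDRs; all names are hypothetical, not nupic's KNN API) of classifying each slice separately and aggregating by majority vote:

```python
from collections import Counter

def overlap(a, b):
    """Overlap score between two SDRs represented as sets of active bits."""
    return len(a & b)

def knn_predict(sdr, training):
    """1-nearest-neighbour by overlap; training is [(sdr, label), ...]."""
    return max(training, key=lambda item: overlap(sdr, item[0]))[1]

def predict_by_slices(slice_sdrs, training):
    """Classify each slice independently, then take a majority vote."""
    votes = Counter(knn_predict(s, training) for s in slice_sdrs)
    return votes.most_common(1)[0][0]
```

The worry above would show up here as the per-slice votes disagreeing, so the majority vote only helps if most slices are individually separable.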


Does this work for you?

from nupic.algorithms.sdr_classifier import SDRClassifier

c = SDRClassifier(steps=[1], alpha=0.1, actValueAlpha=0.1, verbosity=0)

# learning
c.compute(recordNum=0, patternNZ=[1, 5, 9],
          classification={"bucketIdx": 4, "actValue": 34.7},
          learn=True, infer=False)

# inference
result = c.compute(recordNum=1, patternNZ=[1, 5, 9],
                   classification={"bucketIdx": 4, "actValue": 34.7},
                   learn=False, infer=True)

# Print the top three predictions for 1 step out.
topPredictions = sorted(zip(result[1],
                            result["actualValues"]), reverse=True)[:3]
for probability, value in topPredictions:
    print "Prediction of {} has probability of {}.".format(value,
                                                           probability * 100.0)

When I run it, I get this:

Prediction of 34.7 has probability of 20.0.
Prediction of 34.7 has probability of 20.0.
Prediction of 34.7 has probability of 20.0.

Process finished with exit code 0

Yes, unfortunately that works fine.

I use the output columns from the SP, which I believe are the active columns and the correct set of values to use.

Also, recordNum is a bit ambiguous for something like MNIST classification, as it is not temporal. Should I always set it to 0, or should I increment it for every example presented? I wasn't sure, so I have tried both.

Currently I use a 1-to-1 mapping for bucketIdx and actValue, so for the digit 1 I use 1 and 1 respectively during training and inference.

After thinking about this and re-watching this video, I suggest you don’t use the SDRClassifier for this type of data. It depends too much on temporal probability for a spatial recognition task.

Cheers, yeah, it seems better to stick with KNN for now.

Can I ask, if time permits: would it be possible to post the parameters used to achieve the 95% accuracy for the spatial-pooler-only results, and maybe also for the TM version?

I’m curious to see whether my implementation is correct or not.

I've managed to get a setup with topology to work now (i.e. it actually uses a 2D input), but it runs a bit counter to what HTM should be: with a receptive field of 1 it quickly becomes a direct mapping from input columns to output columns. The image shows (left) the synapses / receptive fields for 16 individual active columns, (middle) the active columns, and (right) the actual input (thresholded at a minimum value of 200).

Using the following parameters:

spParamTopologyWithBoostingGlobalInhibition = {
    "inputDimensions": (28, 28),
    "columnDimensions": (28, 28),
    "potentialRadius": 1,
    "potentialPct": 1.0,
    "globalInhibition": True,
    "localAreaDensity": 0.3,
    "numActiveColumnsPerInhArea": -1,
    "wrapAround": True,
    "stimulusThreshold": 1,
    "synPermInactiveDec": 0.5,
    "synPermActiveInc": 0.6,
    "synPermConnected": 1.0,
    "minPctOverlapDutyCycle": 0.001,
    "dutyCyclePeriod": 1000,
    "boostStrength": 100.0,
    "seed": 7777,
    "spVerbosity": 0
}

As you can see it isn’t exactly a sparse distributed representation, but you can increase the potential radius to get one.
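To put a number on that: with global inhibition, a localAreaDensity of 0.3 over 28 x 28 columns works out to roughly 235 active columns per input, i.e. about 30% density rather than the ~2% sparsity typical of HTM SDRs (back-of-the-envelope arithmetic from the parameters above, not measured SP output):

```python
# Rough density implied by the SP parameters above (not measured output).
columns = 28 * 28                     # columnDimensions
local_area_density = 0.3              # localAreaDensity, global inhibition
active = int(columns * local_area_density)
density = active / float(columns)
print(active, density)                # 235 active columns, ~0.3 density
```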

This gives about 8% error give or take.


Actually, I was curious whether Numenta had at hand the parameters you used, if any.

I know the specific 'Vision' package is old and currently being updated, but my assumption was that Numenta had done an MNIST spatial pooler implementation with classification of some kind.

I could be wrong though.

@rhyolight: Currently I do not use SP.

@momiki “Has anyone managed to get the SDR classifier to work (at all) using a spatial pooler output (SDR) for an image ie MNIST.”


Yes, I did that for MNIST and Yalefaces datasets. You can find more details in this paper:
"Neuromemristive Architecture of HTM with On-Device Learning and Neurogenesis"


Can you please provide me with the code for these implementations: SP + SVM (linear kernel), SP + SVM (RBF kernel), and SP + SDR classifier, so that I can better understand?
