Spatial Pooler Implementation for MNIST Dataset

Log boosting seems to create a lot of spurious synapses in the TM. If I’m correct, when the activity of a mini-column is low, the boostOverlaps_ of that mini-column becomes “huge”. That makes the winners for the same input change substantially.
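To make that concrete, here is a toy comparison of boost factors (plain Python, not htm.core code; the log form below is only an assumed shape for illustration, while the exp form mirrors htm.core’s `exp((targetDensity - actualDensity) * boostStrength)`):

```python
import math

def log_boost(duty_cycle, target_density=0.05, eps=1e-9):
    # ASSUMED log-style shape, not the proposal's exact formula:
    # grows without bound as a column's duty cycle approaches zero.
    return math.log(max(duty_cycle, eps)) / math.log(target_density)

def exp_boost(duty_cycle, target_density=0.05, strength=3.0):
    # Mirrors htm.core's exponential boost; bounded above by
    # exp(target_density * strength) for non-negative duty cycles.
    return math.exp((target_density - duty_cycle) * strength)

# A rarely active mini-column gets a much larger multiplier under the
# log form, so its boosted overlap can dominate the inhibition and flip
# the winners for an unchanged input.
for dc in (0.05, 0.01, 0.001):
    print(f"duty={dc:.3f}  log={log_boost(dc):.2f}  exp={exp_boost(dc):.2f}")
```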

As a note of caution, it seems that homeostatic synaptic plasticity (the mechanism behind boosting) is disabled in L4 quite soon after the animal is born [1]. Most of the papers about homeostatic synaptic plasticity discuss the inverse proportion between activation frequency and mEPSC amplitude [2] (which ultimately affects who will win the inhibition).

In any case, in [2] you can read:

“Currently, we know little about the cellular and molecular mechanisms underlying homeostatic plasticity in vivo.”

[1] A. Maffei, S. B. Nelson, and G. G. Turrigiano, “Selective reconfiguration of layer 4 visual cortical circuitry by visual deprivation,” Nat. Neurosci., vol. 7, no. 12, pp. 1353–1359, 2004.

[2] G. Turrigiano, “Homeostatic synaptic plasticity: Local and global mechanisms for stabilizing neuronal function,” Cold Spring Harb. Perspect. Biol., vol. 4, no. 1, pp. 1–17, 2012.


Thank you @vpuente for the interesting biological insight! I’ll need to study those papers.

In application terms,

  • would this render log boosting unsuitable?
  • should the boosting mechanism be left out completely?
    • or do boosting only in the early stages (~“baby animal brain”), and disable it once the SP has somewhat learned?

On a related note, could you please review the PR for “Synaptic competition on a dendrite” and its accompanying forum post?

Hi @momiji, very good point!

I’ve implemented the baseline benchmark, you can try it here

It scores slightly above 90%, which is already quite a good result, and it runs almost instantly!

Note, we don’t have a kNN classifier in htm.core, “only” the SDR classifier, which is a simple logistic regression trained on {input SDR, classification} pairs.
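For intuition, the idea behind that classifier can be sketched in a few lines of plain Python (this is NOT htm.core’s implementation, just a toy softmax regression over active-bit indices):

```python
import math

class TinySDRClassifier:
    """Toy sketch of an SDR classifier: online softmax regression
    trained on (active-bit indices, label) pairs."""

    def __init__(self, n_bits, n_classes, lr=0.1):
        self.w = [[0.0] * n_bits for _ in range(n_classes)]
        self.lr = lr

    def infer(self, active_bits):
        # Score each class by summing the weights of the active bits,
        # then normalize with a softmax.
        scores = [sum(row[i] for i in active_bits) for row in self.w]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def learn(self, active_bits, label):
        # Standard cross-entropy gradient step on the active bits only.
        probs = self.infer(active_bits)
        for c, p in enumerate(probs):
            err = (1.0 if c == label else 0.0) - p
            for i in active_bits:
                self.w[c][i] += self.lr * err

# Toy usage: class 0 always activates low bits, class 1 high bits.
clf = TinySDRClassifier(n_bits=20, n_classes=2)
for _ in range(200):
    clf.learn([0, 1, 2], 0)
    clf.learn([17, 18, 19], 1)
print(clf.infer([0, 1, 2]))  # probability mass should favor class 0
```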

With the SDR class you can easily try any dimensions for your inputs. Note that the SP with global inhibition does NOT support topology yet. If you use local inhibition, I believe the dimensions make some sense, but local inhibition is way too slow.

Many thanks Breznak,

For a minute there I thought I was going mad, or that my implementation was just bad, when I noticed that I had accidentally shut off SP learning but was still getting ~90% accuracy using the kNN.


By chance, are you experimenting with vision on HTM (ideally htm.core)? I’m going to start a couple of experiments on image classification and other vision-related topics, so I’d like to get in touch with people here who are interested.

Too many questions :slight_smile: (for my limited knowledge)

Homeostatic plasticity seems to be a really important thing. It is critical during embryonic cortex development and the early stages of life. My hypothesis is that when the inputs start to come in, it balances the mini-columns’ distal synaptic load across the cortical column. Once the animal has acquired “the base” knowledge, homeostasis progressively fades away because it would do more harm than good in L4.

My hypothesis is that L4/SP at birth is barely connected (with a large potentialPct).

Unfortunately, if instead of a 0.5 probability of being connected in the SP you use 0.01, you will see that every input lands on a very similar output value (and if potentialPct = 1, all on one).
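That collapse is easy to reproduce in a toy model (plain Python, not htm.core; all of the sizes below are made up for illustration):

```python
import random

# Toy model of how the initial connected fraction affects SP overlaps.
# Assumptions: 1000-bit input with 50 active bits, 128 columns, 100
# potential synapses each; a synapse starts connected with probability p.
random.seed(42)
N_IN, N_COLS, N_POT, N_ACTIVE = 1000, 128, 100, 50

def overlap_stats(p, n_inputs=20):
    # Build columns with randomly connected synapses.
    cols = []
    for _ in range(N_COLS):
        potential = random.sample(range(N_IN), N_POT)
        cols.append({i for i in potential if random.random() < p})
    # Collect overlap scores over several random inputs.
    overlaps = []
    for _ in range(n_inputs):
        active = set(random.sample(range(N_IN), N_ACTIVE))
        overlaps += [len(c & active) for c in cols]
    return min(overlaps), sum(overlaps) / len(overlaps), max(overlaps)

for p in (0.5, 0.01):
    lo, mean, hi = overlap_stats(p)
    print(f"p={p}: overlaps min={lo} mean={mean:.2f} max={hi}")
# With p=0.01 nearly every overlap is 0, so inhibition has almost nothing
# to rank: every input lands on a near-identical output value.
```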

Those necessarily initially connected synapses might have an impact on the system’s evolution: if the random initialization is not aligned with the input stream, it could prevent a homogeneous number of connected synapses per mini-column in the TM. Besides, that 0.5 is not good for noise tolerance.

I think a strong boost is necessary during early learning if you use a barely connected SP. Once the number of distal synapses per mini-column is balanced, disabling it progressively seems the right thing to do. My intuition is that a strong initial SP will perform really strong clustering, and boost will “split” the fine detail inside each cluster.

I understand synaptic competition, but it does not look very bio-plausible (at least, I couldn’t find any evidence of it). Heterosynaptic plasticity [1] does something similar, but that is already in the learning algorithm (it is the forgetting of non-active synapses).

[1] W. C. Oh, L. K. Parajuli, and K. Zito, “Heterosynaptic structural plasticity on local dendritic segments of hippocampal CA1 neurons,” Cell Rep., vol. 10, no. 2, pp. 162–169, 2015.


Not directly, no; just a 2D implementation of the MNIST data set, though the approach is somewhat set up to expand into more vision-based work if required.

It is based on NuPIC’s original Python implementation. You’re welcome to have a look if you want, but I believe Numenta’s implementations from a year or so back, on which mine are based, are more fleshed out.

Hi, when I run this code I see the error “No module named ‘sdr’”. Can anyone help me?

I’m not sure what you’re doing to get that error message, but my advice is to use the “HTM.Core” library because it is still actively maintained. https://github.com/htm-community/htm.core/

I have not worked on the “HTM_Experiments” repository which I linked to in the first post in a long time.

Thanks for your guidance :pray:. I think this problem occurred because I could not install the requirements properly; I was only able to install higher versions of them. When I installed the requirements listed in the README I got an error. I even set up a Python 3 virtual environment and used the “sudo pip install numpy==1.13.1” command, but I still could not install the requirements. Can you please advise me on what I need to do to run this code?

You would need to fork the repository and fix the bugs in it. The requirements file in my repo is out of date with the latest versions; in fact, GitHub keeps emailing me about a “security vulnerability” in some dependency of that repo. This was my first attempt at implementing an HTM, and I would consider this code a research prototype.

OR, you could use the “HTM.Core” library, which also implements the same MNIST example and, last I checked, works great without any problems.

Hope this helps


Thanks for your guidance. I was finally able to install “HTM.Core”. I would be grateful if you could guide me on how to change the boost function so that I can compare its different modes: the exp boost function, the log boost function, and no boost function.

Hi Shiva,

HTM.Core currently implements only the “exp” and “none” boosting functions.
In the past, Breznak and I tried to make the boosting function selectable, but that work never made it into the main branch of the project. For personal experimentation, you can modify the C++ function that controls boosting.

The boosting function is located at:
File: github/htm.core/src/htm/algorithms/SpatialPooler.cpp
Line 766:
output[i] = exp((targetDensity - actualDensity[i]) * boost);
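For experimentation, the variants could look like this in Python (the exp form transcribes the C++ line above; the log form is only one possible shape, since htm.core does not ship a log boost, so treat its exact formula as an assumption):

```python
import math

def boost_exp(target_density, actual_density, strength):
    # Transcribes htm.core's C++:
    # output[i] = exp((targetDensity - actualDensity[i]) * boost);
    return math.exp((target_density - actual_density) * strength)

def boost_none(target_density, actual_density, strength):
    # "none": every column's overlap is left unscaled.
    return 1.0

def boost_log(target_density, actual_density, strength, eps=1e-9):
    # HYPOTHETICAL log-shaped variant for comparison only; htm.core
    # does not implement this, so the formula is an assumption.
    return math.log(max(actual_density, eps)) / math.log(target_density)

# Quick look at how each variant treats an under-active column.
for name, fn in (("exp", boost_exp), ("log", boost_log), ("none", boost_none)):
    print(name, round(fn(0.05, 0.005, 3.0), 3))
```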

Hope This Helps


Thank you very much for your valuable information.
I examined the code, but I was confused about which of the three modes (exp boosting, log boosting, or no boosting) leads to more accurate classification. Each time I executed the code, the accuracies of the three methods were close to each other; in fact, each method’s accuracy varied from run to run, although the total accuracy was always close to 95%. Each time, one method was better than the others, but in the next run the ranking changed. I still could not work out which method is better, and why.

I have another question. I decided to classify the data in three different ways: once with spatial pooling only, once with spatial pooling + SVM, and finally with the SVM alone, and then compare their accuracies.
Thank you very much for guiding me again.

To see performance differences between the boosting functions, you should measure the binary entropy of the spatial pooler’s activity. The binary entropy measures how uniform the cells’/mini-columns’ duty cycles are. For example, if all columns have exactly the same duty cycle, the binary entropy is at its highest possible value.

HTM.Core can measure the binary entropy of an SDR.
See the Python class “htm.bindings.sdr.Metrics”.
This class measures the entropy of an SDR and divides it by the theoretical maximum entropy, so the resulting value is between 0 and 1. A higher value is better.

However, I do not think that experimenting with the boosting function will increase the accuracy on the MNIST dataset, especially beyond the 95% you’ve already achieved. In my experience, the only way to get that last 2–3% of accuracy is by systematically fine-tuning all of your parameters.


This is really great. Based on the interesting work you did, I decided, as a hobby, to change the K-winners calculation to something else and compare the result with the baseline. Is it possible to use ReLU instead of K-winners?
What function do you think could replace it?
Please advise me on how I can apply these changes, and point me to the relevant part of the code; I could not find the part related to K-winners.
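For reference, the functional difference between the two can be sketched in plain Python (this is not htm.core’s inhibition code, just the idea):

```python
import random

def k_winners(overlaps, k):
    # k-winners-take-all: exactly k columns become active, so the output
    # sparsity is fixed regardless of the input.
    order = sorted(range(len(overlaps)), key=lambda i: -overlaps[i])
    return sorted(order[:k])

def relu_winners(overlaps, threshold):
    # ReLU-style thresholding: every column above the threshold activates,
    # so the number of active columns varies with the input.
    return [i for i, o in enumerate(overlaps) if o > threshold]

random.seed(1)
overlaps = [random.randint(0, 10) for _ in range(20)]
print(k_winners(overlaps, 4))     # always exactly 4 active columns
print(relu_winners(overlaps, 5))  # count depends on the overlaps
```

Note that k-winners guarantees fixed sparsity, which the SDR properties and the downstream Temporal Memory rely on; a ReLU-style threshold lets the sparsity float with the input.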

Thank you.
Can you please answer these questions too? I decided to classify the data in three different ways: once with spatial pooling only, once with spatial pooling + SVM, and finally with the SVM alone, and then compare their accuracies. Please tell me where to change the code and how to do it. Do such codes already exist?
In the htm.core repository I saw a folder named “svm by mnist_files”, but there was no code in it.
At the end of mnist.py in htm.core, it also says:

baseline: without SP (only Classifier = logistic regression): 90.1%

kNN: ~97%

human: ~98%

But I do not know where the code for these parts is. How can I get it?

Is it cheating to use the MNIST chars in text and then use context to get 100% recognition?

Sounds like something a human would do.


Not to my knowledge, best of luck.
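That said, the SVM half of the comparison is mostly glue code. A rough sketch (assuming scikit-learn is installed; the toy binary vectors below stand in for densified SP output, e.g. the SDR’s dense representation):

```python
import random
from sklearn.svm import LinearSVC

random.seed(0)

def toy_sdr(label, n=64, k=8):
    # Stand-in for SP output: a binary vector whose active bits depend on
    # the class label. In a real run you would feed the SP's active-column
    # SDR (densified) here instead.
    bits = [0] * n
    base = label * 10
    for i in random.sample(range(base, base + 20), k):
        bits[i] = 1
    return bits

X = [toy_sdr(y) for y in (0, 1) * 100]
y = [0, 1] * 100

# A linear SVM is usually sufficient on sparse binary inputs.
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```

For the real comparison, train one such classifier on raw MNIST pixels and another on the SP’s output SDRs, and compare their test accuracies.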