Spatial Pooler Implementation for MNIST Dataset

Yes, I saw in other posts that Anaconda doesn’t work and play well with nupic.core. I’m taking a chance with this because:

  1. A Python3 program showing up here means people are expanding the range of Python configurations for HTM code – and I’d like the option of running other such configurations if more show up (such as Anaconda 2 or 3).
  2. By letting Visual Studio install all of this, I’m hoping its versions of these components will at least be integrated with each other, and will therefore stay out of my builds unless I explicitly ask for them.
  3. I don’t trust MS’s installer to let me change my mind later if I want to add an optional component I skipped in the initial VS install.

Of course, 30 GB is a lot to download over a 3 Mbps WISP out here in rural Iowa.


@rhyolight does nupic work on Python 3? (I’m using Python 2.7 and I have many problems with modules.) If yes, can you give me the steps to install nupic with Python 3?

No it does not. Stay tuned for the community fork.



Hi, great work. May I ask whether the sequence of inputs during training affects the accuracy? In other words, if you randomly pick samples from the training data during training, does each training run yield a different accuracy? I ask because I’m studying the SP’s properties and want to understand it better. Cheers.

My training data is shuffled and iterated through several times. It should not make much of a difference, but be aware that the SP has a natural distribution of accuracies. As with all measurements, if you really want to be certain of any measurement you should measure many times and find the mean and standard deviation.
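(For illustration, a minimal sketch of that procedure — not my actual harness; `train_and_test` below is a hypothetical stand-in for building a fresh SP with a new seed and returning its test accuracy:)

```python
import numpy as np

def train_and_test(seed):
    # Hypothetical stand-in: build an SP + classifier with this seed,
    # shuffle the training data, train several epochs, return test accuracy.
    # The random number below is just a placeholder so the sketch runs.
    return np.random.default_rng(seed).normal(0.95, 0.01)

accuracies = np.array([train_and_test(seed) for seed in range(10)])
print("mean = %.3f, std = %.3f" % (accuracies.mean(), accuracies.std()))
```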

Thanks for mentioning these. Would you mind also sharing the mean and standard deviation of the accuracies? And if it’s not too much trouble, which classifier algorithm did you use?

I do not remember the mean and standard deviation of this experiment. I’ve actually since come up with a different method of local inhibition for the spatial pooler which works better than what’s described here … My latest work in htm-community/nupic.cpp consistently scores >=95%.

For this experiment I used my own SDR-Classifier, which is a clone / re-implementation of the nupic SDR-Classifier.


I see, thanks. I will read your code, as I’m really interested in the algorithm and why it works so well, and will eventually try to fit it into my understanding of the SP.

Hi Scott, do you have a code sample or more information about MNIST with your SP-only implementation?

Just a heads-up:

it is now really easy and convenient to experiment with HTM (SP) on MNIST-like datasets for image classification in htm.core (formerly nupic.cpp).

I’ve tuned the params, so the model is much smaller (fewer columns) and thus faster (~60 s), and it reaches over 95% in a single pass through the data, without any data augmentation (boxing, image rotation, moving around, …).

But the reason I’m reaching out is the log boost function.


It should now be easy to experiment with the different boosting methods: log, exp, or none.

Compared to the original post, once the SP’s other params have been tuned, log boosting performs slightly worse than exp (and is slower, too).
Interestingly, if entropy of the columns were the measure, no boosting performs best (and is the fastest); on the validation set, the best results come from exp boosting, followed by log.
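(To make the comparison concrete, here is a toy numpy illustration of the three boost factors; the exp form follows htm.core’s `exp(boostStrength * (targetDensity - dutyCycle))`, and the log form is my reading of the original post’s `log(dutyCycle) / log(targetDensity)`:)

```python
import numpy as np

# Duty cycles for three mini-columns: rarely, normally, and frequently
# active, against a 2% target sparsity.
duty = np.array([0.0001, 0.02, 0.2])
target = 0.02
strength = 2.0  # illustrative boostStrength value

exp_boost = np.exp(strength * (target - duty))  # htm.core-style exp boosting
log_boost = np.log(duty) / np.log(target)       # log boosting, original post's form
no_boost = np.ones_like(duty)                   # boosting disabled

print(exp_boost)  # ~[1.04, 1.00, 0.70] -- gentle correction
print(log_boost)  # ~[2.35, 1.00, 0.41] -- rare columns get boosted hard
```

Note how the log factor grows without bound as a column’s duty cycle approaches zero, while the exp factor stays bounded for a fixed boostStrength.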


Can I just ask whether, by chance, you are using a 2D column setup?

And if so: if you completely remove learning on the SP and just feed the SDRs it generates into the KNN classifier, what accuracy would you get?

Log boosting seems to create a lot of spurious synapses in the TM. If I’m correct, when the activity of a mini-column is low, the boostOverlaps_ of that mini-column is “huge”. That makes the winners for the same input change substantially.

As a note of caution, it seems that homeostatic synaptic plasticity (the mechanism behind boosting) is disabled in L4 quite soon after the animal is born [1]. Most of the papers about homeostatic synaptic plasticity talk about the inverse proportion between activation frequency and mEPSC voltage [2] (which ultimately affects who will win the inhibition).

In any case, in [2] you can read:

Currently, we know little about the cellular and molecular mechanisms underlying homeostatic plasticity in vivo.

[1] A. Maffei, S. B. Nelson, and G. G. Turrigiano, “Selective reconfiguration of layer 4 visual cortical circuitry by visual deprivation,” Nat. Neurosci., vol. 7, no. 12, pp. 1353–1359, 2004.

[2] G. Turrigiano, “Homeostatic synaptic plasticity: Local and global mechanisms for stabilizing neuronal function,” Cold Spring Harb. Perspect. Biol., vol. 4, no. 1, pp. 1–17, 2012.


Thank you @vpuente for the interesting biological insight! I’ll need to study those papers.

In application terms,

  • would this render log boosting unsuitable?
  • should the boosting mechanism be left out completely?
    • or should boosting be applied only in the early stages (~“baby animal brain”) and disabled once the SP has learned somewhat?

On this related note, could you please review the PR for “Synaptic competition on a dendrite” and its accompanying post?

Hi @momiji, very good point!

I’ve implemented the baseline benchmark; you can try it here.

It scores slightly above 90%, which is already quite a good result! And it runs almost instantly.

Note that we don’t have a kNN classifier in htm.core, “only” the SDR classifier, which is a simple logistic regression trained on {input SDR, classification} pairs.
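(A minimal sketch of that classifier in use, assuming the current htm.core Python API; exact signatures may differ between versions:)

```python
import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import Classifier

# Binarize one 28x28 "image" into an SDR (random placeholder data here).
image = np.random.rand(28, 28)
inp = SDR(dimensions=[28 * 28])
inp.dense = (image.flatten() > 0.5).astype(np.uint8)

clsr = Classifier()
clsr.learn(inp, 7)      # one {input SDR, classification} pair
pdf = clsr.infer(inp)   # probability per label seen so far
print(np.argmax(pdf))   # -> 7
```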

With SDR (the class) you can easily try any dimensionality for your inputs. Note that the SP with global inhibition does NOT support topology yet. If you use local inhibition, I believe the dimensions make some sense, but local is way too slow.
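(For example, a 2D input space is just a matter of the dimensions you give the SDR; a small sketch, same API caveat as above:)

```python
import numpy as np
from htm.bindings.sdr import SDR

image = (np.random.rand(28, 28) > 0.5)  # placeholder binary image

inp = SDR(dimensions=[28, 28])          # 2D input space matching the image
inp.dense = image.astype(np.uint8)
print(inp.sparse[:10])                  # flat indices of the active bits
```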

Many thanks Breznak,

For a minute there I thought I was going mad, or that my implementation was just bad, when I noticed that I had accidentally shut off SP learning but was still getting ~90% accuracy using the KNN.


By chance, are you experimenting with vision on HTM (ideally htm.core)? I’m going to start a couple of image-classification and vision-related experiments, so I’d like to get in touch with people here who are interested.

Too many questions :slight_smile: (for my poor knowledge)

Homeostatic plasticity seems to be a really important thing. It is critical during embryonic cortex development and the early stages of life. My hypothesis is that when the inputs start to come in, it balances the mini-column distal synaptic load across the cortical column. Once the animal has acquired “the base” knowledge, homeostasis progressively fades away, because it would do more harm than good in L4.

My hypothesis is that L4/SP at birth is barely connected (with a large potentialPCT).

Unfortunately, if instead of a 0.5 probability of being connected in the SP you use 0.01, you will see that every input lands on a very similar output value (with PCT = 1, all on one).
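(A toy illustration of that collapse, my own sketch: with 0.01 connectivity the raw overlaps are tiny integers, so many columns tie at the winner cutoff and the inhibition can barely discriminate:)

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_cols, k = 784, 1024, 20                 # k ~ 2% winning columns
x = (rng.random(n_inputs) < 0.2).astype(np.int32)   # 20%-active binary input

for p_connect in (0.5, 0.01):
    connected = (rng.random((n_cols, n_inputs)) < p_connect).astype(np.int32)
    overlaps = connected @ x                        # raw overlap per column
    cutoff = np.sort(overlaps)[-k]
    ties = int((overlaps == cutoff).sum())
    print(f"p={p_connect}: overlaps {overlaps.min()}..{overlaps.max()}, "
          f"{ties} columns tied at the winner cutoff")
```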

Those necessarily-initially-connected synapses might have an impact on the system’s evolution: if the random initialization is not aligned with the input stream, it could prevent a homogeneous number of connected synapses per mini-column in the TM. Besides, that 0.5 is not good for noise tolerance.

I think a strong boost is necessary during early learning if you use a barely connected SP. Once the number of distal synapses per mini-column is balanced, disabling it progressively seems the right thing to do. My intuition is that initially the SP will perform really strong clustering, and boost will “split” the fine detail inside each cluster.
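(One simple way to realize that schedule, sketched under the assumption that the SP exposes a boost-strength setter, as htm.core’s `setBoostStrength()` does as far as I know: start with a strong boost and decay it over training:)

```python
import numpy as np

def boost_schedule(step, b0=10.0, tau=2000.0):
    # Strong homeostatic boosting early on, smoothly fading toward zero.
    return b0 * np.exp(-step / tau)

# Hypothetical training loop ('sp' would be an htm.core SpatialPooler):
#   for step, image in enumerate(training_images):
#       sp.setBoostStrength(boost_schedule(step))
#       sp.compute(input_sdr, True, active_sdr)
print([round(boost_schedule(s), 2) for s in (0, 1000, 5000, 20000)])
# -> [10.0, 6.07, 0.82, 0.0]
```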

I understand synaptic competition, but it does not look very bio-plausible (at least, I couldn’t find any evidence of it). Heterosynaptic plasticity [1] does something like that, but this is already in the learning algorithm (it is the forgetting of non-active synapses).

[1] W. C. Oh, L. K. Parajuli, and K. Zito, “Heterosynaptic structural plasticity on local dendritic segments of hippocampal CA1 neurons,” Cell Rep., vol. 10, no. 2, pp. 162–169, 2015.


Not directly, no; just a 2D implementation of the MNIST dataset, though the approach is somewhat set up to expand into more vision-based stuff if required.

It is based on nupic’s original Python implementation. You’re welcome to have a look if you want, but I believe Numenta’s implementations from a year or so back, on which it was based, are more fleshed out.