Spatial Pooler Implementation for MNIST Dataset

Thank you @vpuente for the interesting biological insight! I’ll need to study those papers.

In application terms,

  • would this render log boosting unsuitable?
  • should the boosting mechanism be left out completely?
    • or apply boosting only during the early stages (~“baby animal brain”), and disable it once the SP has somewhat learned?

On a related note, could you please review the “Synaptic competition on a dendrite” PR and its accompanying forum post?

Hi @momiji, very good point!

I’ve implemented the baseline benchmark; you can try it here.

It scores slightly above 90%, which is already a quite good result, and the runtime is almost instant.

Note that we don’t have a kNN classifier in htm.core, “only” the SDR classifier, which is a simple logistic regression trained on {input SDR, classification} pairs.
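For anyone trying it, usage looks roughly like this (a minimal sketch using htm.core’s Python bindings; the SDR contents below are just stand-ins for a real SP output):

```python
import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import Classifier

clsr = Classifier()                 # the "SDR classifier": log. regression over SDR bits

columns = SDR([1000])               # e.g. the SP's active-column output
columns.sparse = [2, 39, 402, 778]  # stand-in for a real SP result
clsr.learn(columns, 7)              # train on {input SDR, classification} pairs

pdf = clsr.infer(columns)           # probability for each class seen so far
predicted = int(np.argmax(pdf))
```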

With the SDR class you can easily try any dimensionality for your inputs. Note that the SP with global inhibition does NOT support topology yet. If you use local inhibition, I believe the dimensions make some sense, but local inhibition is far too slow.
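For example, a 28×28 MNIST digit can be fed in with its 2D shape kept (a small sketch; mean-thresholding is just one naive way to binarize the greyscale image):

```python
import numpy as np
from htm.bindings.sdr import SDR

image = np.random.randint(0, 256, size=(28, 28))      # stand-in for an MNIST digit

inp = SDR([28, 28])                                   # the SDR keeps the 2D topology
inp.dense = (image >= image.mean()).astype(np.uint8)  # binarize the greyscale values
```

With global inhibition the SP ignores that topology anyway, but the same SDR works unchanged if you later switch to local inhibition.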

Many thanks, Breznak,

For a minute there I thought I was going mad, or that my implementation was simply bad, when I noticed that I had accidentally shut off SP learning but was still getting ~90% accuracy using the kNN.
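In case anyone wants to reproduce that: the learning switch sits in the compute call. An untested sketch, assuming htm.core’s SpatialPooler.compute(input, learn, output) signature; with learn=False the SP acts as a fixed random projection, which evidently already preserves enough structure for the kNN to reach ~90%:

```python
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler

sp = SpatialPooler(inputDimensions=[28, 28], columnDimensions=[1000],
                   globalInhibition=True)
inp = SDR([28, 28])
inp.randomize(0.10)                  # stand-in for an encoded MNIST digit
active = SDR(sp.getColumnDimensions())

sp.compute(inp, False, active)       # learn=False: SP is a fixed random projection
sp.compute(inp, True,  active)       # learn=True: permanences adapt to the input
```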


By chance, are you experimenting with vision on HTM (ideally htm.core)? I’m going to start a couple of image-classification and other vision-related experiments, so I’d like to get in touch with people here who are interested.

Too many questions :slight_smile: (for my limited knowledge)

Homeostatic plasticity seems to be really important. It is critical during embryonic cortex development and the early stages of life. My hypothesis is that when the inputs start to come in, it balances the mini-column distal synaptic load across the cortical column. Once the animal has acquired “the base” knowledge, homeostasis progressively fades away, because it would do more harm than good in L4.

My hypothesis is that L4/SP at birth is barely connected (with a large potentialPct).

Unfortunately, if instead of a 0.5 probability of being connected in the SP you use 0.01, you will see that every input lands on a very similar output value (with potentialPct = 1, they all land on one).

Those necessarily initially connected synapses might have an impact on the system’s evolution: if the random initialization is not aligned with the input stream, it could prevent a homogeneous number of connected synapses per mini-column in the TM. Besides, that 0.5 is not good for noise tolerance.
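To make the effect concrete, here is a small plain-numpy simulation of just the overlap step (no htm.core involved; the sizes are made up). With p = 0.01 most columns tie at an overlap of 0 or 1, and the few densely wired columns win regardless of the input, so unrelated inputs should share far more winners than with p = 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns = 784, 1024
k = n_columns // 50                       # ~2% winning columns

# Two unrelated sparse binary inputs.
x1 = (rng.random(n_inputs) < 0.10).astype(np.int32)
x2 = (rng.random(n_inputs) < 0.10).astype(np.int32)

for p in (0.5, 0.01):                     # initial probability of being connected
    connected = (rng.random((n_columns, n_inputs)) < p).astype(np.int32)
    w1 = set(np.argsort(connected @ x1)[-k:].tolist())   # winners for input 1
    w2 = set(np.argsort(connected @ x2)[-k:].tolist())   # winners for input 2
    print(f"p={p}: shared winners between unrelated inputs: {len(w1 & w2)}/{k}")
```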

I think a strong boost is necessary during early learning if you use a barely connected SP. Once the number of distal synapses per mini-column is balanced, disabling it progressively seems like the right thing to do. My intuition is that initially the SP will perform a really strong clustering, and boosting will then “split out” the fine detail inside each cluster.
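Something like this schedule would be easy to try in htm.core (an untested sketch: I’m assuming the constructor keywords, and that the bindings expose setBoostStrength like the underlying C++ SpatialPooler; the boost and decay values are made up):

```python
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler

sp = SpatialPooler(inputDimensions=[28, 28], columnDimensions=[1000],
                   globalInhibition=True,
                   potentialPct=1.0,       # full potential pool, as suggested above
                   boostStrength=10.0)     # strong boost at "birth" (made-up value)

inp = SDR([28, 28])
active = SDR(sp.getColumnDimensions())

boost, decay = 10.0, 0.999                 # made-up annealing constants
for step in range(1000):                   # stand-in for the real training stream
    inp.randomize(0.10)                    # stand-in for an encoded MNIST digit
    sp.compute(inp, True, active)
    boost *= decay                         # progressively disable boosting...
    sp.setBoostStrength(boost)             # ...as the synaptic load balances out
```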

I understand synaptic competition, but it looks not very bio-plausible (at least, I couldn’t find any evidence of it). Heterosynaptic plasticity [1] does something like that, but this is already in the learning algorithm (it is the forgetting of non-active synapses).

[1] W. C. Oh, L. K. Parajuli, and K. Zito, “Heterosynaptic structural plasticity on local dendritic segments of hippocampal CA1 neurons,” Cell Rep., vol. 10, no. 2, pp. 162–169, 2015.


Not directly, no; just a 2D implementation on the MNIST data set, though the approach is somewhat set up to expand into more vision-based work if required.

It is based on the original nupic Python implementation. You’re welcome to have a look if you want, but I believe Numenta’s implementations from a year or so back, on which mine is based, are more fleshed out.