My reading is that SP is the adaptive part of input encoding. It takes as input a stream of arbitrary bit patterns produced by the various sense organs/devices and, given some parameter settings and no other prior knowledge, outputs a corresponding stream of SDRs that satisfy certain constraints. Although the text uses many biological terms, I don’t see a close match between this algorithm and any biological mechanism.
If I read it correctly, the hotgym example settings include (a sketch of what I mean follows this list):
a predetermined number of input and output bits
several other parameters, chosen so that the algorithm will work, rather than for a priori reasons.
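For concreteness, here is roughly what those settings might look like. The parameter names loosely follow NuPIC’s SpatialPooler, but the values are illustrative, not the actual hotgym configuration:

```python
# Illustrative only: names loosely follow NuPIC's SpatialPooler;
# values are NOT the real hotgym settings.
sp_params = {
    "inputDimensions": (400,),          # predetermined number of input bits
    "columnDimensions": (2048,),        # predetermined number of output bits
    "potentialPct": 0.85,               # fraction of input each column may connect to
    "globalInhibition": True,           # inhibition computed over all columns
    "numActiveColumnsPerInhArea": 40,   # ~2% sparsity: chosen so the algorithm works
    "synPermActiveInc": 0.04,           # permanence increment for active synapses
    "synPermInactiveDec": 0.005,        # permanence decrement for inactive synapses
    "boostStrength": 3.0,               # pressure to use all columns
}
```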
Constraints appear to include (see the sketch after this list):
outputs should be ‘sufficiently’ sparse.
outputs should be ‘similar’ for ‘similar’ inputs, for some definition of ‘similar’.
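And here is a rough way of stating those constraints as checks. This is entirely my own sketch; `sparsity` and `overlap` are names I made up:

```python
import numpy as np

def sparsity(sdr: np.ndarray) -> float:
    """Fraction of on-bits in a binary SDR."""
    return float(sdr.sum()) / sdr.size

def overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Count of on-bits two SDRs share: one working definition of 'similar'."""
    return int(np.logical_and(a, b).sum())

# Constraint 1: sparsity(output) should stay near some target, e.g. ~2%.
# Constraint 2: overlap(output_a, output_b) should be high when the
#               corresponding inputs are 'similar'.
```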
Am I on the right track? If I came up with a completely different algorithm to satisfy the same output constraints, would that be a good thing?
Spatial pooling mimics the action of the receptive fields of the various layers of the cortex, primarily layers L2/3, L5 & L6.
This also incorporates the inhibitory action of the interneurons. That inhibition is simulated with the k-winners-take-all step in the Numenta implementation.
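If it helps, here is a minimal sketch of that k-winner step, assuming each column has already computed an overlap score with the input (illustrative, not Numenta’s actual code):

```python
import numpy as np

def k_winners(overlaps: np.ndarray, k: int) -> np.ndarray:
    """Global inhibition: only the k columns with the highest overlap
    scores stay active; all others are silenced (ties broken arbitrarily)."""
    active = np.zeros(overlaps.shape, dtype=bool)
    active[np.argsort(overlaps)[-k:]] = True   # indices of the top-k scores
    return active
```

Local inhibition replaces the single global top-k with a per-neighborhood version, but the principle is the same.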
You may build whatever system you please, but it will end up doing about the same thing if you are faithful to the biology.
So - to address your point about “what is being implemented” - it’s time to hit the neurobiology textbooks.
The bits you will be looking for are the sparsity of activity, the balance between excitatory and inhibitory cells that controls this sparsity, the general neural signaling methods, and the interrelationships between the various layers. As you dig into this you should start to see some relationship between the biology and BAMI. The Numenta code starts with the rough ideas in BAMI and works on from there.
Thanks for the response, but it’s not all that helpful. And no, the biology and the non-HTM literature are no help; outside HTM, ‘spatial pooling’ usually means something to do with image processing.
Since posting I found this: https://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf, which is by far the best resource so far. The title is misleading; although it’s 2 years old, it really does cover this stuff. See p. 27ff, which sets out the goals: use all columns, maintain desired density, etc., just as I asked. (The ‘use all columns’ goal is met there by boosting; see the sketch below.)
If you’re not familiar with it, highly recommended.
It also covers my next question, which was going to be about TM.
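For the record, here is a minimal sketch of the boosting idea behind ‘use all columns’ as I understand it (my reading of it, not the whitepaper’s exact formula):

```python
import numpy as np

def boost_factors(active_duty_cycles: np.ndarray,
                  target_density: float,
                  strength: float = 3.0) -> np.ndarray:
    """Columns that have been active less often than the target density get
    their overlap scores multiplied up, nudging the pooler to use all
    columns, while inhibition maintains the desired density."""
    return np.exp(strength * (target_density - active_duty_cycles))
```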
Yes! You sound like you’re off to a great start; what you wrote seemed right on the money.
At the least it would be interesting, and might help us to think about old problems in new ways. If your new algorithm was also highly biologically plausible, that could be a very good thing.
I think you underrate it. It’s much the best thing I’ve found so far on the foundations of SP. Perhaps it’s time to update it (or incorporate it in BAMI).
The BAMI section on SP is weak. It covers terminology and the algorithm itself, but completely misses a section on concepts. By comparison, the BAMI section on encoders is excellent and the section on TM is non-existent. Somebody has some work to do.
Sorry, but popular videos don’t do it for me. I have the framework; now I need answers to specific questions to fill in the gaps.
Biological plausibility is a bit of a dark horse. Often we don’t know exactly what the biology does; or we do know, but don’t know why; or we know why, but it turns out to be computationally expensive, so we look for alternatives.
What I would like to see are functional specs for Input Encoding, SP, TM, and a couple of other components, where each spec is deemed biologically plausible and I can get on with writing great code to implement it. Not quite there yet, I fear.
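To make that concrete, the kind of spec I have in mind is an interface like this (the names and signatures are entirely mine, just to illustrate the shape):

```python
from abc import ABC, abstractmethod
import numpy as np

class EncoderSpec(ABC):
    """Turn a raw value into a fixed-width bit pattern with a fixed
    number of on-bits, such that similar values share on-bits."""
    @abstractmethod
    def encode(self, value) -> np.ndarray: ...

class SpatialPoolerSpec(ABC):
    """Map encoder output to an SDR while maintaining target sparsity,
    preserving similarity, and learning online."""
    @abstractmethod
    def compute(self, input_bits: np.ndarray, learn: bool = True) -> np.ndarray: ...
```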
We are doing theoretical neuroscience. We don’t care about the computational expense; we want to know how the brain works. Our research team is currently taking a break from neuroscience research while attempting to apply our models to current Deep Learning architectures, so we’re not focused on this at the moment.
There is a lot of information in our papers. Some of them have mathematical explanations if that is what you are looking for. And BAMI has pseudocode for encoders, SP, and TM. Lots of people have already created HTM implementations using that pseudocode. I’m not saying it is complete, but it can be done with the resources that already exist. I am working on a replacement for BAMI (see Building HTM Systems (WIP Document) - #38 by rhyolight).
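For a flavor of how approachable it is, the encoder pseudocode boils down to something like this minimal scalar encoder (a sketch in the spirit of BAMI, not its exact pseudocode):

```python
import numpy as np

def scalar_encode(value: float, min_val: float, max_val: float,
                  n: int = 100, w: int = 11) -> np.ndarray:
    """Encode a scalar as n bits with a contiguous run of w on-bits whose
    position tracks the value, so that nearby values share on-bits."""
    value = max(min_val, min(value, max_val))              # clamp into range
    bucket = int((value - min_val) / (max_val - min_val) * (n - w))
    bits = np.zeros(n, dtype=bool)
    bits[bucket:bucket + w] = True
    return bits
```

Nearby values overlap in their on-bits, which is exactly the property the SP relies on downstream.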