Can someone explain the motivation behind the Spatial Pooler?

Hello,
I am looking for an explanation of the SP, and the main motivation behind the concept?
If you had only two bullets or paragraphs to describe it, how would you do it?
Thanks

Here is how I’ve tried to introduce it in BHTMS:

Spatial Pooling is a process that extracts semantic information from input to provide a controlled space to perform further operations. Additionally, input is converted into a sparse distributed representation (SDR), which provides further computational benefits (citation needed). Even though information is lost during this transformation, stability is gained and semantics are preserved through redundancy.

Regarding the “motivation” behind it, well, it is a discovery, not an invention. It is motivated by evolution, as a way to take the chaotic input arriving at the cortex from the sensors and sample it in a controlled way.


One sentence:
A format converter to SDR format.



Hi,

I just happened to come across this channel, but:

Does the spatial pooling algorithm really work like that?

I thought the encoder was the format converter to SDR format, and that spatial pooling did something else.

The goal of the SP is to:

 - Have SDR size and sparsity which is comfortable for the input data generated by the encoders.
 - Have SDR size and sparsity required internally by the system.
 - Preserve the similarity of the input data.

Hi,

Thank you for reply.

Could you check whether my understanding is right?

“Have SDR size and sparsity which is comfortable for the input data generated by the encoders.” is because, in the input space, the data is not sparse enough, so we need an algorithm/mechanism to enforce the sparsity.
Question: why do we need the SDR size? Do we need it because we want to create the sparsity?

“Have SDR size and sparsity required internally by the system”
I think this is the same as the first one.

“Preserve the similarity of the input data”
I think the way to make this work is to use the permanence values, right?

Following Questions:
I notice that some of these concepts are reverse-engineered, i.e. we do this because biologists or neuroscientists found it in the brain. Is there a paper or video that summarizes which of these concepts are reverse-engineered from neuroscience, and which were invented by computer scientists to solve specific problems?

Thank you very much.

No … the output of an encoder can be anything; it may not even be sparse. Or it can be a concatenation or mix of multiple encoders. You need to transform it into an SDR suitable for the system…

For example, a number encoder can encode 5 as:

       0000000000000000111111111111111111000000000000.........

it is sparse, but not a very good SDR… the Spatial Pooler may turn it into something like:

       0010000100000000000001001000000000000000000001000000...1..1..

See… but now the problem is how to make 6 so that it overlaps with this new 5 ;) SP to the rescue.
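A minimal sketch of that idea (every name and parameter here is invented for illustration, not taken from any HTM library): a toy block encoder, plus a pooler that activates the k columns whose fixed random connections overlap the input most. That is roughly SP activation with global inhibition, minus the learning:

```python
import random

random.seed(42)

INPUT_SIZE = 100      # bits in the encoder output
NUM_COLUMNS = 200     # columns in the pooler
ACTIVE_COLUMNS = 10   # k winning columns -> fixed 5% output sparsity

def encode(value):
    """Toy scalar encoder: a contiguous block of 1s that slides with the value."""
    start = min(value * 4, INPUT_SIZE - 20)
    return [1 if start <= i < start + 20 else 0 for i in range(INPUT_SIZE)]

# Each column watches a fixed random subset of input bits (no learning in this sketch).
connections = [random.sample(range(INPUT_SIZE), 30) for _ in range(NUM_COLUMNS)]

def pool(bits):
    """Activate the k columns whose connections overlap the input most (global inhibition)."""
    scores = sorted(((sum(bits[i] for i in conns), col)
                     for col, conns in enumerate(connections)), reverse=True)
    return {col for _, col in scores[:ACTIVE_COLUMNS]}

sdr5, sdr6, sdr10 = pool(encode(5)), pool(encode(6)), pool(encode(10))
# Nearby inputs should end up sharing more active columns than distant ones.
print(len(sdr5 & sdr6), len(sdr5 & sdr10))
```

Because 5 and 6 share most of their encoder bits, the columns they drive hardest mostly coincide, so their pooled SDRs should overlap far more than those of 5 and 10.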

Sort of, yes … but it is left to you as the programmer to decide, depending on what you want to achieve (memory, speed, …), so you can choose size 2000 with sparsity 0.02, or 1000/0.05, etc.
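For a back-of-the-envelope feel for that trade-off (using just the two example configurations above, and taking “capacity” to mean the number of distinct patterns with exactly w active bits):

```python
from math import comb

for n, sparsity in [(2000, 0.02), (1000, 0.05)]:
    w = round(n * sparsity)    # number of active bits per SDR
    capacity = comb(n, w)      # distinct patterns with exactly w of n bits set
    print(f"n={n}: w={w} active bits, about 10^{len(str(capacity)) - 1} possible SDRs")
```

Both choices give an astronomically large space of patterns, so in practice the difference is mostly about memory and compute per step rather than representational capacity.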

The idea here is that you have an input SDR and an output SDR… The encoder produces some overlap when generating values, because of the data domain, e.g. numbers, GPS, dates, times, …
You want to preserve that overlap; if you don’t, then the SP is useless…

E.g. 5 is close to 6, but 10 is farther away… so it should have a lower overlap…
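As a quick illustration with a hypothetical block encoder (invented parameters: 100 output bits, a 20-bit block of 1s that shifts 4 positions per unit of value), the overlap falls off with numeric distance:

```python
def encode(value, size=100, block=20, step=4):
    """Toy scalar encoder: a block of 1s whose position slides with the value."""
    start = min(value * step, size - block)
    return set(range(start, start + block))

# Shared active bits: 5 vs 6 overlap heavily, 5 vs 10 not at all here.
print(len(encode(5) & encode(6)), len(encode(5) & encode(10)))  # prints: 16 0
```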

I don’t know which ones specifically… but check the Numenta site. Roughly:

 - Encoder: eyes, ears, touch
 - SPooler: normalization of sensor data for the brain
 - TM: part of a cortical column
 - TP: connections between layers and inter-column connections

Thank you very much! I guess I have to read more papers and intro books on these basic concepts.

Right now I want to implement HTM theory in C++, but I find that some of the basic concepts don’t map together cleanly. Do you have any reading recommendations?

Thank you!


These papers should give you some insight into the properties of SDRs and the relationship to sparsity.

If you need more:
Kanerva, P. (1988). Sparse Distributed Memory. Cambridge, MA: The MIT Press.
but be warned - you will have to buckle down to read this as it is a very demanding text.

When you have SDRs and sparse coding down, this paper shows a practical implementation that discusses the “why” behind it:


:wink:

Check my project: http://www.igrok.site/bbHTM.html (it’s an old implementation).
(Keep in mind it has one logical error: ACTIVE and PAST have to be swapped.)

I’m also currently reimplementing the full stack, much better this time, using indexed SDRs.
I will publish a beta version soon…
