The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding

You may want to look through this thread and see if it helps with understanding the SP operation.

I assume you have read the SP chapter of BAMI?

1 Like

Hey, thanks for merging my questions. I’ve read the SP chapter of BAMI, but there is no detailed explanation of the hypercube or how it is dynamically calculated. I assume it’s the _mapColumn method of the SpatialPooler in NuPIC that gets the center of a column and assumes the potential pool’s geometry, but I would like to know specifically how this is calculated given an input space. I was thinking there might be requirements that must be met to implement this properly, beyond simply finding a center and assuming a square/cube as the potential pool geometry. For example, how are the column centers laid out: must they be contiguous, can they be N cells apart, or can their spacing be randomized? The reason I’d like to be specific about this implementation is that there are many ways to find the center of a column given an input space, and each of them could influence subsequent generations of synapse values.

Thanks! I will have a read of this. Initially though, last night I read a couple of articles on visual cortex receptive fields and I have not encountered anything describing a hypercube as a column’s potential pool geometry (receptive field). What I did find are the on/off center-surround receptive fields, whose boundaries are roughly circular. I would just like to know whether the receptive field in the SP, for example, must be calculated so that certain requirements are met.

I am not sure why they chose to call the input space a “hypercube”.

From the SP paper:
“The synapses for the i-th SP mini-column are located in a hypercube of the input space centered at x_i^c with an edge length of γ. Each SP mini-column has potential connections to a fraction of the inputs in this region. We call these “potential” connections because a synapse is connected only if its synaptic permanence is above the connection threshold. The set of potential input connections for the i-th mini-column is initialized as, …”

I see it as a square area (easy approximation of a circle) centered on the mini-column, with a size given by a side length of γ. So, a square γ x γ centered on the mini-column location, assuming that there is alignment between the input field and the mini-column cell bodies. The apical dendrites will be in the fiber mat in layer one where the cell bodies may be found in L2/3, 4, or 5. This is not an unreasonable assumption as the input/apical dendrites emanate from the cell bodies even though they may be on different layers.
Within this square, some potential connections will be chosen at the creation of the model and this set will not be changed after this initial selection. The strength may be changed by training but the connection positions will not be.
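As a rough illustration only (this is not NuPIC’s actual code; the helper names and parameter values are made up for the example, though the center mapping roughly mirrors what _mapColumn computes), the potential pool for one mini-column in a 2D input space could be built like this: map the column to a center in the input space, take the square around that center, and keep a random fraction of the inputs in it.

```python
import numpy as np

def map_column_center(col_xy, column_dims, input_dims):
    """Map a column's (row, col) position to the nearest input coordinate.

    Hypothetical helper: each column is placed proportionally into the
    input space, roughly (col_coord + 0.5) * input_dim / column_dim.
    """
    return tuple(int((c + 0.5) * (i / float(d)))
                 for c, d, i in zip(col_xy, column_dims, input_dims))

def potential_pool(center, input_dims, radius, potential_pct, rng):
    """Pick the potential synapses for one column.

    The candidate region is the square of side 2*radius + 1 centered on
    `center`, clipped to the input bounds; a random fraction
    `potential_pct` of those inputs becomes the column's potential pool.
    """
    rows = range(max(0, center[0] - radius), min(input_dims[0], center[0] + radius + 1))
    cols = range(max(0, center[1] - radius), min(input_dims[1], center[1] + radius + 1))
    candidates = [(r, c) for r in rows for c in cols]
    n_keep = max(1, int(round(potential_pct * len(candidates))))
    keep = rng.choice(len(candidates), size=n_keep, replace=False)
    return [candidates[i] for i in keep]

rng = np.random.default_rng(42)
center = map_column_center((3, 5), column_dims=(16, 16), input_dims=(32, 32))
pool = potential_pool(center, input_dims=(32, 32), radius=2, potential_pct=0.5, rng=rng)
print(center, len(pool))
```

The pool is chosen once at model creation, as described above; only the permanence values of these synapses change during training.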

I hope this helps.

1 Like

This is because the SP algorithm can be applied to any dimensional space. For a 2D input space it is just a square. Of course in any biologically accurate application you wouldn’t be able to get any higher dimensions than 3 :slight_smile:
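Concretely, the same hypercube definition works unchanged in 1D, 2D or 3D; here is a minimal sketch (the function name is just for illustration) that enumerates the input coordinates inside the hypercube for an input space of any dimensionality:

```python
import itertools

def hypercube_indices(center, gamma, input_dims):
    """All input coordinates inside the hypercube of edge length `gamma`
    centered at `center`, clipped to the input bounds (no wrap-around)."""
    half = gamma // 2
    ranges = [range(max(0, c - half), min(d, c + half + 1))
              for c, d in zip(center, input_dims)]
    return list(itertools.product(*ranges))

# 1D, 2D and 3D all use the same definition:
print(len(hypercube_indices((10,), 5, (32,))))                 # 5
print(len(hypercube_indices((10, 10), 5, (32, 32))))           # 25
print(len(hypercube_indices((10, 10, 10), 5, (32, 32, 32))))   # 125
```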

4 Likes

Thanks for explaining the biology part and sharing your knowledge. Do you have any idea as to the requirements of this receptive field’s geometry per column. I’m a bit hesitant to simply approximate the receptive field without any requirements/constraints as it may extremely deviate from its biological equivalent. Or maybe I expected too much from the current discoveries/knowledge but here are some questions I’d like to know their answers if possible. Hard to prove but I think that these tiny properties such as the geometry of the RF contributes to the emergence of the SP learning.

  • Do adjacent columns tend to have closer centers than non-adjacent ones?
  • Given x & y columns, is there a geometrical relationship between x’s and y’s distance and x’s and y’s center distances?
  • Is there a theory about the overlaps of these RF that affects learning?

Having even these loose rules would also make the implementation less ambiguous.

I can’t answer on the best numbers for getting HTM simulations to work.
As far as the actual biology goes, I have found some papers that are fairly clear about what to expect from mini-columns.

The mini-column and rising axon clusters are on 30 µm centers and the dendrites reach out 250 µm in any direction. That makes the “local neighborhood” for a mini-column about 225 mini-columns.
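A quick back-of-the-envelope sketch of where that number comes from (illustrative Python, not from any HTM codebase):

```python
import math

spacing_um = 30.0   # mini-column center-to-center spacing
reach_um = 250.0    # dendritic reach in any direction

# Radius of the dendritic reach expressed in mini-column spacings
radius_in_columns = reach_um / spacing_um          # ~8.3
# Mini-columns whose centers fall inside that circular reach
neighborhood = math.pi * radius_in_columns ** 2    # ~218, i.e. roughly 225

print(round(radius_in_columns, 1), round(neighborhood))
```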

For a much longer post with more intuitive graphics see this post.

Note that this post offers a very different view of building a spatial pooler. In the hex-grid view, hex coding replaces the SP part of HTM theory. The advantage is that individual hex-grids combine to describe a pattern that can be much larger than could be described with a single SP. Keep in mind that this is NOT standard HTM, and if you play with it you are on your own, as nobody but me is actually doing anything with it. My C/Perl code will be utterly unhelpful to any project you are likely to be considering, as it is a neural simulator model rather than a computational model.

2 Likes

I cannot thank you enough for sharing your knowledge. Yours and gmirey’s posts were more than gold. I’m happy to start from these posts; it might have taken me months just to find these publications and perhaps a year to reach even a simple conclusion about their biological relevance. To be honest, this gave me more questions than answers, which is a good feeling, as I’m now even more interested in implementing the SP and learning more about the biology as well. I will read further and try hard to understand things. I also did a skim read, and when I encountered the neuroscience terms I felt like I was firewalking with faith as my only guarantee; this is probably the reason why HTM is not yet mainstream. I only asked for algorithm requirements and now I’m facing neuroscience full of aliens. I’m not complaining, by the way, just being realistic and thankful.

3 Likes

Hi everyone,
In the Python code for the paper “The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding”, we choose the input. Can you tell me what the output of the Spatial Pooler is? (I want to know which parameter the output of the SP is saved in.)
Thanks

Hi,
I am looking for different algorithms/papers that improve HTM spatial pooling, and I would like to feed the same input to these algorithms and compare their SP outputs.
Can anyone help me get started with these experiments? As a first step I want to work with binary vectors and compare the outputs of the different SP methods, and as a second step I want to work on the MNIST dataset.

I would be grateful if you could help me and send me some papers with code on this topic.
Thanks a lot

Have you read the spatial pooler chapter of BAMI?

1 Like

Yes, thanks.
But it does not have code to run.

Please, can someone help me: what is the output of the SP in the code for this paper?

The output of the code in the SP chapter of BAMI is activeColumns(t), which in the definitions section is defined as “List of column indices that are winners due to bottom-up input.”
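If it helps, here is a minimal sketch (not the BAMI pseudocode itself, just a simplified global-inhibition version with illustrative names) of how activeColumns(t) can be produced from the per-column overlap scores:

```python
import numpy as np

def active_columns(overlaps, num_active, stimulus_threshold=0):
    """Simplified global inhibition: the winners are the `num_active`
    columns with the highest overlap, provided the overlap exceeds
    `stimulus_threshold`. Returns a list of column indices, which is
    what BAMI calls activeColumns(t)."""
    overlaps = np.asarray(overlaps)
    winners = np.argsort(overlaps)[::-1][:num_active]
    return [int(c) for c in winners if overlaps[c] > stimulus_threshold]

print(active_columns([3, 0, 7, 2, 5, 1], num_active=2))  # [2, 4]
```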

1 Like

Thanks Paul. What about the paper “The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding”? In this code, which parameter is the output of the SP? Is the parameter “activeColumnsCurrentEpoch” the output of the SP?

You are asking about the NuPIC Spatial Pooler implementation, correct? Sorry, your reference to the paper is throwing me off (also, I don’t see any reference to “activeColumnsCurrentEpoch” in spatial_pooler.py).

I don’t use NuPIC much myself, so I can only refer you to the API documentation (links to other versions are here). It looks like you create an array to hold the output of the algorithm and pass it to the “compute” method. Someone more familiar with NuPIC can probably give you a better answer, though.
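Something roughly like the following should work, based on my reading of the API docs (the import path differs between NuPIC versions and the parameter values here are just illustrative):

```python
import numpy as np
# Import path for recent NuPIC releases; older versions used
# nupic.research.spatial_pooler instead.
from nupic.algorithms.spatial_pooler import SpatialPooler

sp = SpatialPooler(
    inputDimensions=(1024,),     # size of the binary input vector
    columnDimensions=(2048,),    # number of SP mini-columns
    potentialRadius=16,
    potentialPct=0.5,
    globalInhibition=True,
    numActiveColumnsPerInhArea=40,
)

input_vector = np.random.randint(2, size=1024).astype(np.uint32)
active_array = np.zeros(2048, dtype=np.uint32)   # holds the SP output

# compute() fills active_array in place: 1 for active columns, 0 otherwise.
sp.compute(input_vector, True, active_array)

active_columns = np.nonzero(active_array)[0]     # indices of the winning columns
print(active_columns)
```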

Thanks for your answer, but I read this paper (The HTM Spatial Pooler: a neocortical algorithm for online sparse distributed coding) and ran the code. Then I ran “train_sp.py”, and I would like to know what the SP output is.
The code address is here:
https://github.com/numenta/htmpapers

The Spatial Pooler computes a set of active minicolumns for each input you give it. The train_sp.py script will create an SP model by running lots of input through it. For each input, the SP computes active minicolumns. The result of running that script is a trained model, not the active minicolumns. To get a set of active minicolumns, you must run the compute() function, which in this case is within some imported code you can find here:

Hopefully this function will show you how the outputColumns are constructed. They are created as zeros and populated after the computation.
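In other words, something like this hypothetical helper captures the spirit of what train_sp.py does (it is not the script’s actual code, and it assumes an SP instance like the one sketched earlier in the thread): train with learning on, then call compute() once more with learning off and read the nonzero entries of the output array.

```python
import numpy as np

def train_and_query(sp, training_inputs, test_input, num_columns):
    """Hypothetical helper, not part of train_sp.py: run many inputs
    through an SP with learning on, then query it with learning off and
    return the indices of the active mini-columns (the SP output)."""
    output = np.zeros(num_columns, dtype=np.uint32)
    for x in training_inputs:
        sp.compute(x, True, output)        # learning on: permanences adapt
    output.fill(0)
    sp.compute(test_input, False, output)  # learning off: just read the SDR
    return np.nonzero(output)[0]           # active column indices
```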

1 Like

A post was merged into an existing topic: How Can We Be So Dense? The Benefits of Using Highly Sparse Representations