What is the difference in implementation?

Hi, in the paper "The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding", the authors implement a way to create SDRs from input data. In the paper "A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler", the authors also describe a way to create SDRs from input data. Are the algorithms in these two papers exactly the same? Is there any difference in the method or implementation between them? Can I compare the results of these two papers, or are they too similar?

If you are looking for the Spatial Pooler pseudocode, use this.

Thanks. I am trying to implement the HTM spatial pooling algorithm, and then I want to compare my results with other work. I would like to find some different implementations of spatial pooling to compare against. I found the two papers mentioned above, but I am not sure whether they are different or the same.
Please help me.

Are you looking for functionally different algorithms which output SDRs, or are you looking for optimizations to the spatial pooler algorithm?

If the latter, @sunguralikaan has implemented some optimizations to SP in his thesis (such as merging boosting and bumping to use a common boost value, and a mirror synapse optimization).

In HTM.js, I implemented an optimization to the cell activation process in SP. Minicolumn scores are incremented from the perspective of the active cells in the input space. In pseudocode, this looks something like:

FOREACH cell.axon.synapses AS synapse
    synapse.segment.minicolumn.score++

Then pseudocode for the “winner takes all” function:

FOREACH minicolumns AS minicolumn
    FOR c = 0 TO config.activeMinicolumnCount
        IF !( c IN bestMinicolumns ) OR bestMinicolumns[c].score < minicolumn.score
            bestMinicolumns.splice( c, 0, minicolumn )
            BREAK
bestMinicolumns.length = config.activeMinicolumnCount

Similar to @sunguralikaan’s mirror synapse optimization, this strategy lets you avoid iterating over inactive cells in the input space, or over any synapses on the column dendrites that are not connected to the active input space.
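The same idea could be sketched in Python roughly as follows. This is not HTM.js code; the names (`proximal_targets`, `compute_active_minicolumns`, `k`) are illustrative assumptions, and the "axon view" is modeled as a dict from input cell to target minicolumns:

```python
# Hedged sketch: score minicolumns from the active inputs' perspective,
# then take the top k. Only active inputs (and their targets) are touched.
from collections import defaultdict
import heapq

def compute_active_minicolumns(active_inputs, proximal_targets, k):
    """active_inputs: iterable of active input-cell ids.
    proximal_targets: dict mapping an input cell id to the list of
    minicolumn ids whose proximal segments synapse onto that input.
    Returns the k minicolumn ids with the highest overlap scores."""
    scores = defaultdict(int)
    for cell in active_inputs:                   # iterate active inputs only
        for col in proximal_targets.get(cell, ()):
            scores[col] += 1                     # increment each target's score
    # winner-takes-all: the k highest-scoring minicolumns
    return heapq.nlargest(k, scores, key=scores.get)

targets = {0: [10, 11], 1: [11], 2: [11, 12]}
winners = compute_active_minicolumns([0, 1, 2], targets, 2)
```

Because scoring is driven by the active inputs, inactive cells and unconnected synapses never appear in the inner loop, which mirrors the optimization described above.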

Another optimization to SP that I have proposed (not implemented in HTM.js): rather than creating every minicolumn’s potential pool when instantiating the SP, each input cell’s potential connections to the minicolumns could be established on the fly the first time that cell is activated. This would reduce the startup time, as well as significantly reduce overhead in some cases when boosting is not used.
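A minimal sketch of that lazy-pool idea, assuming a dict-based cache and a hypothetical `potential_pct` parameter (none of these names come from an actual HTM implementation):

```python
# Hedged sketch: potential connections are sampled the first time an
# input cell becomes active, instead of being wired up at SP startup.
import random

class LazyPotentialPools:
    def __init__(self, num_minicolumns, potential_pct=0.5, seed=42):
        self.num_minicolumns = num_minicolumns
        self.potential_pct = potential_pct      # fraction of columns per pool
        self.rng = random.Random(seed)
        self.pools = {}                         # input cell id -> column ids

    def targets_for(self, input_cell):
        # Sample this cell's potential connections only on first activation;
        # later activations reuse the cached pool.
        if input_cell not in self.pools:
            n = int(self.num_minicolumns * self.potential_pct)
            self.pools[input_cell] = self.rng.sample(
                range(self.num_minicolumns), n)
        return self.pools[input_cell]

pools = LazyPotentialPools(100, potential_pct=0.5, seed=1)
first = pools.targets_for(7)
```

Cells that never activate never get a pool allocated, which is where the startup-time and memory savings would come from.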

The fastest HTM implementation I know of is @marty1885’s Etaler.

There is also a consolidated list of other HTM projects here. Definitely worth a look if you are comparing implementations.


Thanks for your prompt answer.
I am looking for both (SP algorithms with different SDR output, and also optimizations to the Spatial Pooler algorithm).
First, I want to know whether these two papers are exactly the same: "The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding" and "A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler". I mean, are their algorithms and so on the same, or are there any differences between them?

Second, can you point me to other optimizations of the Spatial Pooler algorithm in Python?

To my knowledge, the implementations across the papers are the same. I’m not sure about optimizations in Python. All of NuPIC’s optimizations depend on C++.


IMO, the Spatial Pooler (and HTM in general) is quite hard to optimize in Python. You would want to express the SP algorithm as a composition of numpy function calls and avoid raw Python loops at all costs, which is easier said than done. Numpy is good at linear algebra and dense matrices, but dealing with sparse indices is a whole new world.
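As one illustration of the numpy approach, the overlap step can be collapsed into a single matrix-vector product. The shapes, sparsity values, and random connectivity below are arbitrary assumptions for the sketch, not parameters from any real SP:

```python
# Hedged sketch: the SP overlap computation as dense numpy ops
# instead of per-synapse Python loops.
import numpy as np

rng = np.random.default_rng(0)
num_inputs, num_columns, k = 128, 64, 8

# connected[i, j] == 1 iff minicolumn j has a connected synapse to input i
connected = (rng.random((num_inputs, num_columns)) < 0.3).astype(np.int32)
input_sdr = (rng.random(num_inputs) < 0.1).astype(np.int32)  # sparse binary input

overlaps = input_sdr @ connected        # one matvec replaces the Python loops
winners = np.argsort(overlaps)[-k:]     # indices of the k highest overlaps
```

This keeps the hot path inside numpy's C loops, but it is a dense formulation: for very sparse inputs it does redundant work, which is exactly the dense-vs-sparse tension mentioned above.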

Even if you manage it, the resulting code can still be very unoptimized due to how numpy works and a billion computer-architecture details.


Thanks. Is there any other paper about spatial pooling with a different implementation in Python?
I want to compare two different spatial pooling methods and conclude which one is better, but as you say, the results of these two papers are the same. Please help me and point me to some other code in Python.

Not that I’m aware of. Most still-maintained HTM implementations have their core in C++. Others are in JS and Scheme.

@Balladeer has ZHTM in pure Python. But I haven’t seen him around in a while.


I want to compare two spatial pooling algorithms. For this I use some information-theory measures like mutual information and divergence: each time, I compute the divergence between the input space and the output of the spatial pooler (the SDR). I applied this method to those two papers, but the results were exactly the same, so I am searching for other SP algorithms for this computation.
If an algorithm differs from Numenta’s algorithm, then its divergence will differ from Numenta’s as well.
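One way such a comparison could be set up is sketched below. This is not the poster's actual method: the use of per-bit activation frequencies, histogram binning, +1 smoothing, and Jensen-Shannon divergence are all illustrative choices, and the data is random:

```python
# Hedged sketch: compare input-space and SDR-output statistics with a
# divergence measure, as one possible instance of the approach above.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in bits) between two count vectors."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
inputs = rng.random((500, 128)) < 0.2    # 500 input vectors, 128 bits
outputs = rng.random((500, 64)) < 0.02   # corresponding SP output SDRs

p_in = inputs.mean(axis=0)               # per-bit activation frequency
p_out = outputs.mean(axis=0)

# The input and output spaces have different sizes, so one simple option
# is to compare histograms of the activation frequencies (with smoothing).
hist_in, _ = np.histogram(p_in, bins=16, range=(0.0, 1.0))
hist_out, _ = np.histogram(p_out, bins=16, range=(0.0, 1.0))
d = js_divergence(hist_in.astype(float) + 1.0, hist_out.astype(float) + 1.0)
```

Run against two different SP variants on the same inputs, a measure like `d` would differ only if the output statistics differ, which matches the observation that two implementations of the same algorithm give identical results.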

Thanks. Do you think they are different from Numenta’s spatial pooler?
Can you send me the link?

Ahh… I get it now. So if I’m not mistaken, you are asking for different varieties of the SP algorithm. Like there’s the Numenta way and a non-Numenta way, which does the same job of removing redundant information but in a different manner.

Most people here thought you were talking about the implementation of the SP algorithm (as it only makes sense to compare the same algorithm for speed). Which, as expected since they’re the same algorithm, will produce the same result.

Anyway, in this case: I know htm-community has an experimental SP somewhere that attempts to fix the overrunning learning issue. And Bitking has his hex grids, but I don’t know if there’s an implementation of them.

Yes, that is true. I am asking for different varieties of the SP algorithm, but I do not know how I should find them.

You could look over all the HTM implementations and see for yourself? (Not a task to take lightly.)

thanks :pray: :pray: :pray: