What is the difference in implementation?

Are you looking for functionally different algorithms which output SDRs, or are you looking for optimizations to the spatial pooler algorithm?

If the latter, @sunguralikaan has implemented some optimizations to SP in his thesis (such as merging boosting and bumping to use a common boost value, and a mirror synapse optimization).

In HTM.js, I implemented an optimization to the cell activation process in SP: minicolumn scores are incremented from the perspective of the active cells in the input space. In pseudocode, this looks something like:

FOREACH activeInputCells AS cell
    FOREACH cell.axon.synapses AS synapse
        synapse.segment.minicolumn.score++
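
For concreteness, here is a minimal JavaScript sketch of that scoring step. The names (activeInputCells, axon.synapses, segment.minicolumn, score) are illustrative assumptions, not the actual HTM.js API:

// Score minicolumns by walking only the synapses of active input cells.
// Inactive cells and unconnected synapses are never visited.
function scoreMinicolumns( activeInputCells ) {
    for ( const cell of activeInputCells ) {
        for ( const synapse of cell.axon.synapses ) {
            synapse.segment.minicolumn.score++;
        }
    }
}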

Then pseudocode for the “winner takes all” function:

FOREACH minicolumns AS minicolumn
    FOR c = 0 TO config.activeMinicolumnCount
        IF !( c IN bestMinicolumns ) OR bestMinicolumns[c].score < minicolumn.score
            bestMinicolumns.splice( c, 0, minicolumn )
            BREAK
bestMinicolumns.length = config.activeMinicolumnCount
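
And a JavaScript sketch of that selection step, again with illustrative names rather than the exact HTM.js code. It keeps a list sorted by score, inserts each minicolumn at most once, and then truncates to the configured number of active minicolumns:

// Select the top-scoring minicolumns via a bounded insertion sort.
function selectActiveMinicolumns( minicolumns, activeMinicolumnCount ) {
    const bestMinicolumns = [];
    for ( const minicolumn of minicolumns ) {
        for ( let c = 0; c < activeMinicolumnCount; c++ ) {
            if ( c >= bestMinicolumns.length || bestMinicolumns[c].score < minicolumn.score ) {
                bestMinicolumns.splice( c, 0, minicolumn );  // insert in sorted position
                break;                                       // insert each minicolumn only once
            }
        }
    }
    if ( bestMinicolumns.length > activeMinicolumnCount ) {
        bestMinicolumns.length = activeMinicolumnCount;      // keep only the winners
    }
    return bestMinicolumns;
}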

Similar to @sunguralikaan’s mirror synapse optimization, this strategy means you never have to iterate over inactive cells in the input space, or over any synapses on the column dendrites that are not connected to the active input space.

Another optimization to SP that I have proposed (not implemented in HTM.js) is to skip building every minicolumn’s potential pool when the SP is instantiated; instead, each time an input cell is activated for the first time, its potential connections to the minicolumns would be established on the fly. This would reduce startup time, and in some cases significantly reduce overhead when boosting is not used.
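
A rough sketch of how that lazy initialization might look, using hypothetical names (potentialPoolCreated, potentialPercent, proximalSegment) rather than anything that actually exists in HTM.js:

// Establish an input cell's potential connections the first time it activates,
// instead of building every minicolumn's potential pool up front.
function activateInputCell( cell, minicolumns, config ) {
    if ( !cell.potentialPoolCreated ) {
        for ( const minicolumn of minicolumns ) {
            if ( Math.random() < config.potentialPercent ) {
                const synapse = {
                    permanence: Math.random(),            // initial permanence (assumed random)
                    segment: minicolumn.proximalSegment,  // assumed to reference back to its minicolumn
                };
                cell.axon.synapses.push( synapse );
                minicolumn.proximalSegment.synapses.push( synapse );
            }
        }
        cell.potentialPoolCreated = true;
    }
    cell.active = true;
}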

The fastest HTM implementation I know of is @marty1885’s Etaler.

There is also a consolidated list of other HTM projects here. Definitely worth a look if you are comparing implementations.
