K-winner-take-all computation in spatial pooling

Hi everyone,
I have a few questions. Where is the k-winner-take-all computation implemented in the htm.core or NuPIC master Python code, and how can I change the logic behind it? Is it possible to use the ReLU function or the SVM algorithm instead of the k-winner-take-all computation? Does anyone have a suggestion in this regard?
In fact, my goal is to change the Spatial Pooler algorithm in some way and see how it gets better or worse as I change the k-winner logic. Any guidance will definitely be helpful and valuable to me.
Thank you

Hi,
In the SpatialPooler there are two inhibition modes: local and global.

Global inhibition is exactly k-winner-take-all: the columns are sorted by overlap score and the top k win.

Local inhibition, which is biologically more accurate but also slower, is k-winner-take-all "but taking locality into account": each column competes only with its neighbors, so the winners come out spatially spread.
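
If you're looking for where to change it: in NuPIC's Python implementation, if I remember correctly, this logic lives in the _inhibitColumnsGlobal and _inhibitColumnsLocal methods of spatial_pooler.py; in htm.core the SpatialPooler is implemented in C++ (SpatialPooler.cpp), so you would patch it there. To make the idea concrete, here's a minimal numpy sketch of both modes; the function names and the 1-D neighborhood are illustrative, not the libraries' exact internals:

```python
import numpy as np

def global_kwta(overlaps, num_active):
    """Global inhibition: plain k-winner-take-all. Keep the num_active
    columns with the highest overlap scores across the whole region."""
    active = np.zeros(len(overlaps), dtype=bool)
    active[np.argpartition(overlaps, -num_active)[-num_active:]] = True
    return active

def local_kwta(overlaps, num_active, radius):
    """Local inhibition (1-D sketch): a column wins only if fewer than
    num_active columns within +/- radius of it have a higher overlap,
    so the winners come out spread across the region."""
    n = len(overlaps)
    active = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        if np.count_nonzero(overlaps[lo:hi] > overlaps[i]) < num_active:
            active[i] = True
    return active

overlaps = np.array([3, 9, 1, 7, 7, 2, 8, 0])
print(np.flatnonzero(global_kwta(overlaps, 3)))   # the 3 strongest columns overall
print(np.flatnonzero(local_kwta(overlaps, 1, 2))) # local maxima, one per neighborhood
```

Swapping in your own rule would mean replacing a function like these with something that still returns a binary column-activation vector.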

Thanks for your reply.
Can you explain in more detail what I would need to do to make these changes?

I’m not sure implementing ReLU or SVM even makes sense in the context of the SP. The minicolumns all output binary values, not scalars.

Edit: unless you mean using a machine-learning classifier like a ReLU perceptron to learn the outputs of the SP.

Yes, that’s right. In the paper "How Can We Be So Dense? The Benefits of Using Highly Sparse Representations", I saw that the authors used k-winners instead of ReLU in a deep learning network, so I thought about swapping the k-winner step for ReLU to make a new spatial pooler mechanism.
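
For reference, the contrast between the two activations is easy to state in code. Here's a toy numpy sketch of the idea; note that the real KWinners layer used for that paper (in nupic.torch) also adds boosting and duty cycles, which this omits:

```python
import numpy as np

def relu(x):
    # ReLU: zero out negatives; how many units stay active depends on the data
    return np.maximum(x, 0.0)

def k_winners(x, k):
    # k-winners: keep the k largest activations and zero everything else,
    # so the output density is fixed at k/n regardless of the input
    out = np.zeros_like(x)
    winners = np.argpartition(x, -k)[-k:]
    out[winners] = x[winners]
    return out

x = np.array([0.3, -1.2, 2.5, 0.9, -0.1, 1.7])
print(relu(x))          # [0.3 0.  2.5 0.9 0.  1.7]
print(k_winners(x, 2))  # [0.  0.  2.5 0.  0.  1.7]
```

Going the other way (ReLU inside the SP) is where the objection above bites: the SP's column states are binary, so a ReLU over overlap scores would still need a threshold to binarize, and you'd lose the fixed sparsity that k-WTA guarantees.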

Edit: unless you mean using a machine-learning classifier like a ReLU perceptron to learn the outputs of the SP.
How can I do this?

I believe the NuPIC and htm.core libraries include built-in ML classifiers. Similar to scikit-learn objects, they can take a set of SDRs representing minicolumn activity, along with a corresponding set of labels or values, and learn to map between them.
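
For example, treating the SP's output SDRs as plain binary feature vectors, any off-the-shelf classifier works. Here's a rough sketch with scikit-learn; the SDRs below are fabricated stand-ins (in a real pipeline they would come from SpatialPooler.compute(), and htm.core also ships its own Classifier object with learn/infer calls; check its docs for the exact signature):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated stand-in for SP output: binary minicolumn-activity vectors.
# Three hypothetical "classes", each a prototype SDR plus a little noise.
rng = np.random.default_rng(0)
num_columns, active_bits = 1024, 20
prototypes = [rng.choice(num_columns, active_bits, replace=False) for _ in range(3)]

def sample_sdr(label):
    bits = prototypes[label].copy()
    bits[rng.integers(active_bits, size=3)] = rng.integers(num_columns, size=3)  # noise
    vec = np.zeros(num_columns, dtype=np.uint8)
    vec[bits] = 1
    return vec

labels = rng.integers(0, 3, size=200)
X = np.stack([sample_sdr(lbl) for lbl in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print((clf.predict(X) == labels).mean())  # near 1.0 on this toy data
```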
