Does local inhibition in the SP give fixed sparsity?

I converted one MNIST image (28 × 28) to a vector of size 784 and gave it as input to the spatial pooling algorithm. I used the nupic-master code, with sparsity set to 2%. When global inhibition was true, the number of active columns was 15, but when global inhibition was false (i.e., local inhibition), the number of active columns was 18!
My questions are:
Shouldn't the sparsity in the spatial pooling algorithm be fixed at about 2%?
Is this difference in sparsity a bug in the code, or is it normal?
So is it wrong to use local inhibition?
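For reference, the sparsity implied by those column counts can be checked with plain arithmetic (assuming a region of 784 columns, which the numbers in the post suggest):

```python
n_columns = 784  # assumed: one column per input bit, as the counts suggest

target = 0.02 * n_columns          # ~15.7, so 15 winners is roughly 2%
global_sparsity = 15 / n_columns   # global inhibition: 15 active columns
local_sparsity = 18 / n_columns    # local inhibition: 18 active columns

print(round(global_sparsity * 100, 2), round(local_sparsity * 100, 2))
# -> 1.91 2.3
```

So the global-inhibition result sits just under the 2% target, while the local-inhibition result overshoots it slightly.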

I'm not experienced in HTM, but I think 2% is the maximum count of active columns (UPD: as AMZ wrote below, it's just a preferred value).

Global inhibition is a simplification, but for small regions like the standard 2048 columns it's not a problem. Local inhibition is more “biological”, I think.
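To see why local inhibition need not hit the target density exactly, here is a toy sketch (not NuPIC code — a made-up 10-column 1-D example with a hypothetical radius-1 neighborhood). With global inhibition, exactly the top-k columns by overlap win; with local inhibition, each column competes only against its neighbors, so the total number of winners can drift above the target:

```python
import numpy as np

overlaps = np.array([3, 1, 2, 1, 3, 1, 2, 1, 3, 1])
n_cols = len(overlaps)

# Global inhibition: keep exactly the top-k columns overall (10% target here).
k = max(1, int(0.10 * n_cols))
global_active = set(np.argsort(-overlaps, kind="stable")[:k])

# Local inhibition: a column wins if it has the best overlap within its
# neighborhood (radius 1 on a 1-D topology); ties favor the column itself.
radius = 1
local_active = set()
for i in range(n_cols):
    lo, hi = max(0, i - radius), min(n_cols, i + radius + 1)
    if overlaps[i] >= overlaps[lo:hi].max():
        local_active.add(i)

print(len(global_active), len(local_active))  # -> 1 5
```

Every local peak wins its own neighborhood, so five columns fire even though the global target was one. Real implementations dampen this effect but don't eliminate it, which is consistent with 18 instead of 15 active columns.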

2ALL: Correct me if I made a mistake.

Yes, that’s right. I have read in several texts that local inhibition is more like the human brain. But in the example above, the ratio of the number of active columns to the total number of columns is more than 2%, so I doubted whether it was working properly. I also found another implementation of spatial pooling, related to the article “A Mathematical Formalization of Hierarchical Temporal Memory’s Spatial Pooler”, which the authors call mHTM. I gave the same MNIST image to that algorithm and observed that with global inhibition the sparsity is 2%, but with local inhibition the sparsity is 3.8% (in that case the number of active columns was 30). Based on these observations, can it be concluded that Numenta’s spatial pooling algorithm is better than mHTM?


2% is preferred, but you don’t have to stick to it! You can push it up or down to improve the performance of your network on a given task.
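In NuPIC’s SpatialPooler the relevant knobs are numActiveColumnsPerInhArea (a fixed winner count) or localAreaDensity (a fractional density) — only one of the two is used at a time. What a given density target means for the winner count is just arithmetic (a sketch, not a NuPIC call):

```python
n_columns = 2048  # the standard region size mentioned above

# Number of winners implied by a few candidate density targets.
for density in (0.01, 0.02, 0.04):
    k = int(round(density * n_columns))
    print(f"{density:.0%} -> {k} active columns")
# 1% -> 20 active columns
# 2% -> 41 active columns
# 4% -> 82 active columns
```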


I have a question too: which of these two codes is more accurate? As far as I know, the mHTM algorithm did not use the nupic libraries; the authors wrote all the code themselves from scratch.
I would love to know the advantages and disadvantages of mHTM’s code versus Numenta’s code, and I’d like to know some measure by which to compare these methods.

In both codes (Numenta and mHTM) the sparsity was set to 2%, and in global-inhibition mode both methods had a sparsity of 2%. But in local-inhibition mode the sparsities changed. Doesn’t this mean the algorithms are weak? After all, we did not change the sparsity for a specific task.


Why do you think that? Weak in what sense?