First off, to avoid confusion: I’m thinking about how to calculate inhibition in an HTM layer where neurons can be added, deleted, or moved, which is why my algorithms don’t assume fixed arrays. This should still apply to normal HTMs as well, though.
In HTM, local inhibition requires some spatial computation to figure out which neurons are being inhibited. There are many good r-tree algorithms for finding the points closest to a query point, with complexity O(log_m(n)), where m is the maximum number of children per node and n is the number of points. An r-tree could be used to search only nearby points for low excitation, instead of checking every point for both proximity and low enough excitation, which is what seemed to be happening.
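For concreteness, here’s roughly what I mean, using the Python `rtree` package (the `neuron_positions` dict is just a made-up stand-in for however the layer actually stores locations):

```python
from rtree import index  # wraps libspatialindex; pip install Rtree

idx = index.Index()

# Hypothetical stand-in for the layer's neuron locations.
neuron_positions = {0: (1.0, 2.0), 1: (1.5, 2.5), 2: (9.0, 9.0)}

# Points go in as degenerate rectangles.
for nid, (x, y) in neuron_positions.items():
    idx.insert(nid, (x, y, x, y))

# The k nearest neurons to (1.0, 2.0), found in O(log_m(n)):
nearby = list(idx.nearest((1.0, 2.0, 1.0, 2.0), num_results=3))

# Moving a neuron is a delete + re-insert, so the structure still
# works when neurons are added, deleted, or moved.
x, y = neuron_positions.pop(0)
idx.delete(0, (x, y, x, y))
```

Only the neurons returned by `nearest()` would then have their excitation checked.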
Hashmaps could also be used to store x and y locations with O(1) query time, but for finding local points/neurons, the only O(1) algorithm I could think of is storing groups of points in buckets (also implemented with hashmaps). However, the size of each bucket would have to be fixed when the buckets are created, and changing that parameter later would mean rehashing every neuron into new buckets, which in practice starts to look like writing a new r-tree library anyway.
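A minimal sketch of the bucket idea (all the names here are mine; note the bucket size is fixed at construction, which is exactly the limitation I mean):

```python
from collections import defaultdict

class BucketGrid:
    """Spatial hash: neurons live in fixed-size square buckets keyed by
    quantized (x, y), so insert/delete/lookup are O(1) on average."""

    def __init__(self, bucket_size):
        self.bucket_size = bucket_size      # fixed at creation
        self.buckets = defaultdict(set)     # (bx, by) -> neuron ids
        self.positions = {}                 # neuron id -> (x, y)

    def _key(self, x, y):
        return (int(x // self.bucket_size), int(y // self.bucket_size))

    def add(self, nid, x, y):
        self.positions[nid] = (x, y)
        self.buckets[self._key(x, y)].add(nid)

    def remove(self, nid):
        x, y = self.positions.pop(nid)
        self.buckets[self._key(x, y)].discard(nid)

    def neighbors(self, x, y):
        """Everything in the 3x3 block of buckets around (x, y)."""
        bx, by = self._key(x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                yield from self.buckets.get((bx + dx, by + dy), ())
```

Changing `bucket_size` after the fact means rehashing every neuron, which is the problem I described above.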
A third option, also O(1), would be writing that new library: modify r-trees to use a fixed rectangle size and, on query, select a parent rectangle via a hash function with lots of purposeful collisions for nearby spatial values (a “bucket-tree”?). But that would require creating or heavily modifying an r-tree library, which seems like a lot of work, so I planned on saving it for later.
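The only genuinely new piece in that hybrid would be the hash itself; something like this (with `rect_size` as a placeholder parameter), so that selecting the parent rectangle skips the descent from the r-tree root:

```python
def parent_rect(x, y, rect_size):
    """Hash with purposeful collisions: every point inside the same
    rect_size x rect_size square maps to the same parent rectangle,
    making parent selection O(1) instead of an O(log n) descent."""
    return (int(x // rect_size), int(y // rect_size))

# Nearby points collide on purpose:
assert parent_rect(3.2, 4.9, 8) == parent_rect(5.0, 7.1, 8) == (0, 0)
```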
I could also create lists for most recently/commonly inhibited groups, or lists of neurons inhibited by some specific neurons, like the caches in a computer. But that seems like even harder optimization work, so I’ll save that for even later.
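A rough sketch of what I’m picturing, using Python’s `lru_cache` (`neighbors_of` is a placeholder for whichever spatial query ends up being used); the hard part, and the reason I’m deferring it, is invalidation:

```python
from functools import lru_cache

def neighbors_of(nid):
    # Placeholder for the real spatial query (r-tree, buckets, ...).
    return []

@lru_cache(maxsize=4096)
def inhibited_by(nid):
    # Compute a neuron's inhibition neighborhood once, then serve
    # repeat queries from the cache, like a CPU cache would.
    return tuple(neighbors_of(nid))

# Whenever a neuron is added, deleted, or moved, the cached
# neighborhoods go stale and must be dropped:
# inhibited_by.cache_clear()
```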
Right now I’m implementing the first method, since it’s the simplest and seems the most realistic, but would the second method have a significant impact on prediction accuracy?
I read about a similar topic [here], and found the linked blog post [here] (the link in the list was broken). I didn’t see any detailed results, e.g. from vision applications, comparing the separate algorithms, but the visualizations in the blog posts were very helpful. It looks like local inhibition gives better sparsity than global inhibition, and some inputs are detected with local inhibition that are never detected with global inhibition.
So are there any more details on this? What algorithm was decided on for Nupic? Global inhibition? (Sorry, so far I’m working on editing the buffers in the Nupic connections code to save by neuron, so I haven’t thoroughly gone through that section of the code yet.)
In summary, how should local inhibition algorithms be implemented?