R-trees vs arrays/hash-maps vs combined methods for calculating local inhibition

First off, to avoid confusion, I should specify that I’m thinking about how to calculate inhibition in an HTM layer where neurons can be added, deleted, or moved, which is why I’m not including arrays in my algorithms. However, this should apply to normal HTMs as well.

In HTM, local inhibition requires some spatial computation to figure out which neurons are being inhibited. There are many good r-tree algorithms for finding the points closest to a specific point, and the complexity of finding those points is O(log_m(n)), where m is the maximum number of children per node. This could be used to search only the nearby low-excitation points, instead of checking every point to see whether it is nearby and has low enough excitation, which is what seemed to be happening.
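For example, here’s a minimal sketch of that idea using the Python rtree package (the neuron ids, positions, and excitation values are made-up placeholders):

```python
from rtree import index

# Hypothetical neurons: id -> (x, y) position and excitation level.
positions = {0: (1.0, 2.0), 1: (1.5, 2.2), 2: (9.0, 9.0)}
excitation = {0: 0.8, 1: 0.3, 2: 0.9}

idx = index.Index()
for nid, (x, y) in positions.items():
    idx.insert(nid, (x, y, x, y))  # a degenerate rectangle is just a point

def neighbors_within(nid, radius):
    """Ids of neurons inside a square of side 2*radius around neuron nid -
    the only candidates that need to be checked for inhibition."""
    x, y = positions[nid]
    return [i for i in idx.intersection((x - radius, y - radius,
                                         x + radius, y + radius))
            if i != nid]

# Only the nearby candidates get their excitation compared, instead of
# scanning every neuron in the layer.
candidates = neighbors_within(0, radius=1.0)
inhibited = [i for i in candidates if excitation[i] < excitation[0]]
```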

Hash-maps could also be used to store x and y locations with O(1) query time, but to calculate local points/neurons, the only O(1) algorithm I could think of would be storing groups of points in buckets (also implemented with hash-maps). However, if I used buckets, the size of each bucket would have to be specified when the buckets are created, and changing that parameter would require either moving neurons out of their buckets, or deleting them and re-inserting them into other buckets, which in practice could amount to writing a new r-tree library.
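As a rough sketch of the bucket idea (a plain spatial hash built on a Python dict; the bucket size is the parameter that has to be fixed up front):

```python
from collections import defaultdict

BUCKET_SIZE = 2.0   # fixed at creation; changing it means re-bucketing everything

buckets = defaultdict(set)  # (bucket_x, bucket_y) -> set of neuron ids

def bucket_key(x, y):
    return (int(x // BUCKET_SIZE), int(y // BUCKET_SIZE))

def add_neuron(nid, x, y):
    buckets[bucket_key(x, y)].add(nid)

def nearby(x, y):
    """O(1)-ish lookup: candidates are the point's bucket plus its 8 neighbors."""
    bx, by = bucket_key(x, y)
    found = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found |= buckets.get((bx + dx, by + dy), set())
    return found
```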

A third option that would also be O(1) would be making that new r-tree library: modifying r-trees to use a fixed rectangle size and, upon querying, to select a parent rectangle by using a hash function with lots of purposeful collisions for nearby spatial values (a bucket-tree?). But that would require creating/modifying an r-tree library, which seems like a lot of work, so I planned on saving it for later.

I could also create lists for most recently/commonly inhibited groups, or lists of neurons inhibited by some specific neurons, like the caches in a computer. But that seems like even harder optimization work, so I’ll save that for even later.

Right now I’m implementing the first method, because it’s the simplest and the most realistic, but would the second method have a significant impact on the accuracy of predictions?

I read about a similar topic [here], and found the linked blog post [here] (the link in the list was broken). I didn’t really see any super detailed results of things like vision applications using the separate algorithms, but the visualizations in the blog posts were very helpful. It looks like local inhibition results in better sparsity than global inhibition, and it looks like some inputs are detected with local inhibition that are never detected with global inhibition.

So are there any more details on this? What algorithm was decided on for NuPIC? Global inhibition? (Sorry, so far I’m working on editing the buffers in the NuPIC connections code to save by neuron, so I haven’t thoroughly gone through that section of the code yet.)

In summary, how should local algorithms be implemented?

4 Likes

The most comprehensive study on local inhibition that I know of was done by a gentleman named Hideaki Suzuki. I actually have a couple of functions in my implementation named after him to remember his name :slight_smile:

He even had very nice slides showcasing what he had done on local inhibition. You can find the slides at the link below to understand the actual practical effects of local inhibition. Many thanks to him again for sharing his own study on local inhibition back at that time.

http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2013-September/005823.html

2 Likes

Hi,

If your work is aimed at improving the encoding of HTM Theory, one thing to note is that inhibition (as it now stands, at least) is only performed by the SP algorithm - and Neurons are constant, with only dendrites and synapses being added and taken away during TM algorithm processing. Of course how and when this all happens may change, but I doubt you’ll ever get Neurons themselves being added or deleted? Just my two cents…

1 Like

Ooh, thanks for that. It looks like local inhibition is important even on a 32x32 image. That’s good to know.

One thing I didn’t get, though, was why “the number of scan lookups” is (P+A)*(P-A+1). Are the pre-active and active columns unsorted? I don’t know if I’m understanding that big-O calculation correctly.

Actually, I just finished modifying the nupic/research/connections.py code so that flatbuffers are written per neuron, and neurons are stored in a dictionary so that they can be accessed by id and others near them can be removed with no effect. I still need to unit test it and set up something for when a synapse is looking for input from a deleted neuron, but it’s getting close. Oh, before that I also wrote some code for generating a specific number of points or neurons at certain positions, using r-trees, so neurons can be removed and added without changing the spatial information of other neurons, and, later on, the number of neurons receiving input from some portion of the input space can change over time.

My code’s [here] if you want to take a look, with the connections under HTMConnections.py. It’s still pretty messy though.

2 Likes

In NuPIC, the columns are conceptual and only have an index (which implies ordering). In HTM.Java, columns are actual objects and contain an index, but are still ordered. I’m unfamiliar with the big-O calculation you speak of, though - did you find that in one of the papers?

In HTM Theory (and biology), the micro-columns we model are understood to “contain” neurons - not really contained, but designated members of a micro-column by virtue of their vertical arrangement, not by any border or membrane (AFAIK). Proximity of one neuron to another, AFAIK, also does not figure into the algorithm’s calculations - but who knows what the future holds? There was talk at one point of a way to distinguish apical dendrites from distal-lateral dendrites using some form of proximity, but I don’t think that has made its way into any formal algorithm yet… (again, I have to qualify my statements by the fact that I’m not privy to current Numenta research other than what is in the public repo, and don’t work there).

I also don’t mean to constrain the capabilities of the data structure you are inventing because who knows what capabilities may be needed in future HTM implementations? But I just wanted to share what I know now through my brief work with it.

When I get a chance, I’ll take a look at your code. I was very interested when we talked about it before because I like the idea of optimizing the organizational structure of the column/cell matrix - and you have some interesting ideas!

2 Likes

Yes, they should be unsorted, because sorting wouldn’t help. The for loop does the following, as in the slides:

- Choose the column with the highest intensity from the preactive columns globally, and mark it active.

- Apply a penalty to the intensities of the neighboring columns.

So each time you pick the column with the largest intensity and mark it active, it inhibits the neighboring columns that are inside the inhibition radius. Therefore, in every iteration (every time one active column is picked among the pre-actives) you need to recalculate the largest intensity among all the pre-active columns again. Even if you sorted preactiveColumns according to their intensity, the order would be invalid by the next iteration.
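In case it helps, here’s a minimal Python sketch of that loop (the names and the penalty parameter are placeholders, not Suzuki’s actual code):

```python
def local_inhibition(intensity, neighbors, n_active, penalty):
    """intensity : dict column_id -> overlap/intensity of the preactive columns
    neighbors   : dict column_id -> iterable of neighboring column ids
    n_active    : target number of active columns (sparsity)
    penalty     : amount subtracted from each neighbor's intensity"""
    intensity = dict(intensity)     # work on a copy
    preactive = set(intensity)
    active = []
    while preactive and len(active) < n_active:
        # Re-scan all remaining preactive columns each iteration, because
        # the penalties may have changed the ordering.
        best = max(preactive, key=lambda c: intensity[c])
        active.append(best)
        preactive.discard(best)
        for c in neighbors.get(best, ()):
            if c in intensity:
                intensity[c] -= penalty
    return active
```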

3 Likes

Yeah, that was in the PowerPoint in the mailing list discussion linked by sunguralikaan.

Actually, I might be messing up the terminology a bit. Rewriting the code isn’t as helpful for understanding which part of the algorithm represents a column vs a neuron vs a segment, etc. as I thought it would be. I just added spatial locations in an r-tree for the objects called cells in the connections file. Now I’m looking at the whitepaper and re-evaluating my life…

Well, moving the code shouldn’t be too hard. I just need to set things up so columns have spatial locations now, and I’ve already set up code for writing n-dimensional spatial locations to a file.

Edit: I think I’ll test my connections file as is, then implement the spatial pooler once that’s done, moving the r-tree code to the spatial pooler. I just need to add a note in my repository… done.

Edit2: Oh, I figured out what I misunderstood. I was thinking of the learning radius for the temporal pooler algorithm, and thought that included the location of a cell on a column as part of that radius.

Oh, now I get it.

Though, only the columns within the inhibition radius are changed, and only by a certain amount (I’m not expecting huge inhibitions for columns that were on the edge of the inhibition radius), so only those columns would need to be re-sorted by intensity, and the range they would need to be re-sorted over is smaller than the original array. Re-sorting is O(n log n) vs. an unordered search at O(n), but depending on the inhibition radius, re-sorting could be faster or slower than not sorting.
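Something like this is what I have in mind - a sketch (a hypothetical helper, not NuPIC code) that keeps the preactive columns in a sorted list and re-inserts only the penalized neighbors instead of rescanning everything; whether it actually wins over a plain rescan would depend on the inhibition radius:

```python
import bisect

def local_inhibition_sorted(intensity, neighbors, n_active, penalty):
    """Same inputs as the sketch above; keeps a sorted list instead of rescanning."""
    intensity = dict(intensity)
    # Ascending by (intensity, column id); the current winner is the last entry.
    order = sorted((v, c) for c, v in intensity.items())
    active = []
    while order and len(active) < n_active:
        _, best = order.pop()                 # column with the highest intensity
        active.append(best)
        for c in neighbors.get(best, ()):
            if c not in intensity:
                continue
            old = (intensity[c], c)
            i = bisect.bisect_left(order, old)
            if i < len(order) and order[i] == old:   # still waiting to be picked
                order.pop(i)
                intensity[c] -= penalty
                bisect.insort(order, (intensity[c], c))
    return active
```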

2 Likes

I had a “grand” thought, which was maybe to do it in a functionally reactive way. Yeah, I know it’s one of those “trend” words, but what if the columns could react to input (in parallel), and then do their own calculation within, say, an “inhibition group or neighborhood”? Then, by setting each bit on the input vector, the column neighborhoods (having already been preconfigured) would do their neighborhood calculations to find the winning column? Something like that?

So, said another way, each column would be subscribed to a group of bits in the input space and react to the setting of the input bit as an event which would propagate a calculation within a bunch of neighborhoods?
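Just to make the idea concrete, maybe something like this hypothetical sketch (not HTM.Java or NuPIC code) - each input bit keeps a list of subscribed columns and notifies them when it is set, and each preconfigured neighborhood then picks its winner:

```python
class Column:
    def __init__(self, cid):
        self.cid = cid
        self.overlap = 0

    def on_bit_set(self):
        # React to one of the subscribed input bits turning on.
        self.overlap += 1

class InputSpace:
    def __init__(self, n_bits):
        self.subscribers = [[] for _ in range(n_bits)]

    def subscribe(self, bit, column):
        self.subscribers[bit].append(column)

    def set_bit(self, bit):
        # Setting an input bit is the event that propagates to the columns.
        for col in self.subscribers[bit]:
            col.on_bit_set()

def winners(neighborhoods):
    # Once the input vector has been applied, each preconfigured
    # neighborhood finds its own winning column.
    return [max(group, key=lambda c: c.overlap) for group in neighborhoods]
```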

2 Likes

Wait, how would that change the underlying calculation? Would only a subset of the columns react at a time?

I could see that working on a video, for example, where the columns only updated their calculations if a pixel changed its value significantly enough. That’s kind of like how some videos are compressed now.

1 Like

I was looking at interpolation search recently:
http://www.geeksforgeeks.org/interpolation-search/
If you can in some way get, say, a 32-bit (unsigned) hash of some object (neuron), then you can sort the list of those (uniformly distributed) hashes and do interpolation search. You are kind of getting close to minimal perfect hashing. If you can tolerate approximate lookup and you use a power-of-2-sized array, the cost of a lookup is a bit shift and a (maybe off-cache) memory read.
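A small sketch of interpolation search over such a sorted hash list (following the linked article, not any particular library):

```python
def interpolation_search(hashes, target):
    """hashes must be sorted and roughly uniformly distributed."""
    lo, hi = 0, len(hashes) - 1
    while lo <= hi and hashes[lo] <= target <= hashes[hi]:
        if hashes[hi] == hashes[lo]:
            break
        # Estimate the position from the value, assuming a uniform spread.
        pos = lo + (target - hashes[lo]) * (hi - lo) // (hashes[hi] - hashes[lo])
        if hashes[pos] == target:
            return pos
        if hashes[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo < len(hashes) and hashes[lo] == target else -1

# The approximate O(1) variant mentioned above: with a table of size 2**k,
# slot = h >> (32 - k) gives the expected position with a single shift.
```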

1 Like

I believe that Hideaki’s code is here: GitHub - h2suzuki/HTMCLA_Exp1: This is an experimental implementation of Numenta's Hierarchical Temporal Memory Cortical Learning Algorithms (HTM CLA). I don’t think he has joined the forum yet, however.

3 Likes

Hi all,

I’ve read Suzuki’s code and also his explanation, but I still don’t understand how he inhibits the columns. As far as I understand, he first finds the column with the highest overlap in the whole column region, then decreases the overlap of all the columns in that column’s neighborhood. I don’t understand what he does for the next step. For example, if the threshold is 0, all columns would be considered “preactive” columns; after choosing the highest-overlap column and putting it in the active state, how does he work with the other “preactive” columns?

1 Like

We also have problems with the speed of local inhibition and with the quality of global inhibition.
@dmac proposed an improvement to the SpatialPooler (it introduces macrocolumns; within each macrocolumn, global (fast) inhibition happens, and communication outside the macrocolumn is considered too far away, so there is none).

This helps combine the semantic quality of local inhibition with the speed of global inhibition.
It actually helps achieve even better results in our MNIST test.

I would be interested in this implementation and a discussion of its problems, as I propose replacing inhibition (or rather, the neighbourhood search) with proximal connections.
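For reference, a rough sketch of that idea as I understand it (not the actual implementation - the macrocolumn assignment and per-macrocolumn winner count are placeholder parameters):

```python
import numpy as np

def macrocolumn_inhibition(overlaps, macro_of, n_active_per_macro):
    """Run plain global (top-k) inhibition independently inside each
    macrocolumn; columns never inhibit across macrocolumn borders."""
    overlaps = np.asarray(overlaps)
    macro_of = np.asarray(macro_of)          # macrocolumn id for each column
    active = []
    for m in np.unique(macro_of):
        cols = np.flatnonzero(macro_of == m)
        k = min(n_active_per_macro, cols.size)
        if k <= 0:
            continue
        # Top-k columns by overlap within this macrocolumn only.
        active.extend(cols[np.argsort(overlaps[cols])[-k:]].tolist())
    return sorted(active)
```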

It continues by taking the next-highest overlapping column and applying a penalty to its neighbors. This continues until you have enough active columns based on your target sparsity.

Yes, if the threshold is 0, all columns are preactive. After finding the highest-overlapping column and applying the penalty to its neighbors, you set this column active and remove it from the preactive columns. Do this until your target active column count is reached.

1 Like

@sunguralikaan: thank you for your answer; however, I still don’t understand whether only the preactive columns in the neighborhood are penalized, or whether all the neighbor columns of the active column are penalized.

The penalty is applied only to the neighboring columns of the picked/activated column, not to all the preactive columns. The neighborhood for each column is defined beforehand. If a column has 8 neighbors, the penalty is applied to all 8 of them, regardless of whether they are preactive or not.

The idea is that highest overlapping columns sequentially get activated one by one and each time one is activated, it has an inhibitory effect on its local neighbors.

1 Like