Inhibition and its purpose

Hey guys. I’m back in America and I have been pounding out my own code based on the papers for spatial pooling and BAMI.

I’m at the inhibition phase of the spatial pooling algorithm, which is pretty straightforward, but I’m just wondering if anyone has any insight into its purpose. Implementing it is fine and all, but if I don’t know what its purpose is and why I’m doing it, it’s not going to do me any good in terms of contributing extra ideas or thinking about how the whole system is working.

I was wondering if anyone knows whether it is inhibiting like signals or unlike signals. I’m asking because there is a machine learning method I can use, or I could even just use Euclidean distance really, to reorganize the neuron arrangement, but only if I know the purpose of the inhibition.

Also, is radius a biological thing? I know older machine learning methods like Kohonen networks, and Hopfield networks to an extent, use a radius to help organize the neurons, and I suppose CNNs are sort of related to radius w.r.t. any given signal. But in the brain I was under the impression that the inhibitory effects were based on whichever neurons an inhibitory neuron happens to be connected to, in which case wouldn’t it be best to randomize the inhibitory neurons as well?

Just some ponderings I’m having as I program this out. I mainly just need to know if anyone knows the purpose of inhibition and whether I am inhibiting like or unlike signals. Any theories and general thoughts are welcome.


Yes,

This is the point where the responses are winnowed out to enforce the S of SDRs.

The strongest responding column will activate the attached inhibitory inter-neurons and suppress its nearest neighbors. That is where the whole circle thing comes from.
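Roughly, in code (a minimal sketch, assuming a 1D array of columns, a per-column response score, and a fixed inhibition radius; the names are made up for illustration):

```python
import numpy as np

def local_inhibition(overlaps, radius):
    # A column survives only if nothing within +/- radius responds more strongly.
    n = len(overlaps)
    active = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        if overlaps[i] > 0 and overlaps[i] >= overlaps[lo:hi].max():
            active[i] = True
    return active

# Only local peaks stay on, which is what keeps the representation sparse.
print(local_inhibition(np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]), radius=2))
```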

May I suggest you read through this entire thread:


Right, so the inhibition then really doesn’t have anything to do with like or unlike signals? So long as the winning neurons come out on top and are expressed more than the non-winning ones?

That makes a bit of sense. Maybe I’ll experiment with random inhibition, unlike-signal inhibition, and like-signal inhibition. It will be interesting to see what type of results come out of it. I had initially assumed nearby neurons would be responsible for similar signals, but maybe that’s not the case.

I’m becoming less and less convinced that the organization and the layers of cortical columns are important for intelligence, and more convinced they are important for biological and metabolic efficiency. It seems that complex, intelligent natural systems are random and make the randomness important, rather than being built with purpose. Which kind of falls in line with why randomly sparse RNNs do so well as well.

Thanks for the links. I’ll read more into them.


Correct.

I will add that there are good reasons to suspect that topology is very important both locally and globally.
Please check out this thread:


Will do, thanks king.

Ok, so I did training runs on digits from the MNIST dataset. My setup was a 1D array of 784 values from MNIST as the input/signal and a 256-neuron column in the form of a 1D array. Winning-column selection was “most connected” plus top 2 percent. There was also an inhibition radius of 4 neurons on either side of each winning neuron (after seeing its effects I now realize that using both an inhibition radius AND a top-percentile cutoff is redundant). I increment a synapse value by 0.1 when there is a signal underneath it and decrement it by 0.01 when there isn’t. I train on a single label 250 times.
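Here is roughly what that setup looks like as a Python sketch (just an illustration of the steps above; variable and function names are made up, not my actual code):

```python
import numpy as np

N_INPUT, N_COLUMNS = 784, 256      # MNIST pixels in, 1D column array out
SPARSITY = 0.02                    # top 2 percent of columns may win
RADIUS = 4                         # a winner suppresses 4 neighbors on each side
INC, DEC = 0.1, 0.01               # synapse increment / decrement

rng = np.random.default_rng(0)
synapses = rng.random((N_COLUMNS, N_INPUT))   # one weight per column per pixel

def train_step(image):
    """image: flattened MNIST digit, 784 values scaled to [0, 1]."""
    active_input = image > 0.5                # binarize the signal
    overlaps = synapses @ active_input        # "most connected" score per column

    n_winners = max(1, int(SPARSITY * N_COLUMNS))
    inhibited = np.zeros(N_COLUMNS, dtype=bool)
    winners = []
    # Walk columns from strongest to weakest, skipping anything already inhibited.
    for col in np.argsort(overlaps)[::-1]:
        if len(winners) == n_winners:
            break
        if inhibited[col]:
            continue
        winners.append(col)
        lo, hi = max(0, col - RADIUS), min(N_COLUMNS, col + RADIUS + 1)
        inhibited[lo:hi] = True               # suppress the neighborhood

    # Only winning columns learn: strengthen under the signal, weaken elsewhere.
    for col in winners:
        synapses[col, active_input] += INC
        synapses[col, ~active_input] -= DEC
        np.clip(synapses[col], 0.0, 1.0, out=synapses[col])

# Train on a single label 250 times, e.g.:
# for img in images_of_one_digit[:250]: train_step(img)
```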

I think what is happening with the inhibition is that it gives other neurons the chance to capture their own version of a pattern. When I had no inhibition and was training all of the neurons on every signal, all I got was an average blob where the digits showed up the most. When inhibition is active, though, many of the neurons’ synapses don’t change. Once a neuron learns a pattern it stays active and doesn’t allow other neurons to try to learn the pattern. I imagine this is the “over-active neurons” problem Matt talked about in his videos.

When a new pattern sufficiently different from the learned ones is introduced, though, it may be different enough to cause other neurons to become active and thus capture the new pattern. I was training 250 times on a single label to make really prominent visuals within the active neurons’ synapses. I also haven’t implemented the boosting factor yet, which I assume will solve the over-active neuron problem.
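For reference, one common formulation of that boosting (this is the exponential form used in Numenta’s Spatial Pooler implementations, which may differ in detail from what BAMI describes) multiplies each column’s overlap by a factor based on how far its recent activity sits from the target sparsity:

```python
import numpy as np

def boost_factors(active_duty_cycles, target_density=0.02, boost_strength=2.0):
    # Columns that have fired less often than the target density get a factor > 1,
    # chronically active columns get a factor < 1, so under-used columns
    # eventually win the overlap competition.
    return np.exp(boost_strength * (target_density - active_duty_cycles))

# A silent column, a column at target activity, and an over-active column:
print(boost_factors(np.array([0.0, 0.02, 0.5])))   # ~[1.04, 1.00, 0.38]
```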

Regardless, it makes a lot more sense now that I can see the inhibition in action. Also, I don’t think that an actual radius matters. I think randomly selecting neurons to inhibit (where the random selection is the same every time for the same neuron) will do the exact same thing. If the point is to have individual neurons capture patterns to fire on, and the purpose of inhibition is to make sure only a few are doing that, then it doesn’t matter what is being inhibited, just so long as enough are being inhibited.

Hope that helps anyone running across this.


Hey Guys.

Just thought I would share some discoveries using randomly inhibited neurons. First I will give you a rundown of the experiment so you can try it on your own if you want to verify it for yourself.

Start out with a collection of neurons in an array. For this experiment, a neuron is just an array of integers. Each integer is the index of another neuron’s position in the array of neurons; these are the neurons it will inhibit. So if, for example, our winning neuron holds a 5 and a 2, then the neurons at indices 5 and 2 are inhibited.

For every neuron, randomly pick x of these index numbers. In my experiment every neuron had the same number picked for it. I experimented with a pool of 10 neurons with 1 or 2 inhibitory connections, and a pool of 100 neurons with between 2 and 20 inhibitory connections.

Make an array of activity scores of size N, where N is the number of neurons you have. N times, choose a random number between 0 and N-1. With your random number, find the neuron at that index, then for every inhibitory connection that neuron has, give a -1 to those positions in your activity array. When the neuron you randomly selected already has a -1 score, do not score -1s for its inhibitory connections; just skip that neuron.

Do this test T times. I did tests of 100, 1000, and 10000. In a separate array, keep track of every time a neuron did not end a test with a -1 score. So essentially, keep track of every time a neuron was firing and inhibiting other neurons. Then divide each of those counts by T to get a grade for how often that neuron was firing.

And that’s it.
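For anyone who prefers reading code to prose, here is the same procedure re-sketched in Python (just an illustration of the steps above with made-up names, not the actual Processing sketch):

```python
import random

def run_experiment(n_neurons=100, n_inhib=5, n_tests=1000, seed=42):
    rng = random.Random(seed)

    # Each neuron gets a fixed set of n_inhib random targets that it inhibits.
    inhib_targets = [rng.sample(range(n_neurons), n_inhib)
                     for _ in range(n_neurons)]

    fired_counts = [0] * n_neurons
    for _ in range(n_tests):
        activity = [0] * n_neurons               # 0 = free, -1 = inhibited
        for _ in range(n_neurons):
            pick = rng.randrange(n_neurons)      # choose a neuron at random
            if activity[pick] == -1:
                continue                         # already inhibited: skip it
            for target in inhib_targets[pick]:
                activity[target] = -1            # suppress its targets
        for i in range(n_neurons):
            if activity[i] != -1:                # never inhibited = it "fired"
                fired_counts[i] += 1

    # Grade: fraction of test rounds in which each neuron ended up firing.
    return [count / n_tests for count in fired_counts]

print(sorted(run_experiment(), reverse=True)[:10])
```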

I expected the numbers to be randomly distributed, with a mid-range of around 1/N, because that would essentially be randomly choosing neurons to stay on. Instead what I got was VERY different. Most of the time, if a neuron fired, it would ALWAYS fire. Rarely did active neurons fire less than 100% of the time. Sometimes I would get a neuron firing 2 or 3 times in a round of 1000 test runs. But most of the time, with random inhibition, a neuron either fired in every single test (even though the winners were chosen randomly) or it never fired at all.

It’s a pretty weird effect. I’m not exactly sure what is happening, but I’m guessing an incredibly biased firing enforcement like that is something you would want to avoid.

I have the code here if anyone wants to check out the work. It’s a pastebin link and it’s in a language called Processing. You can just throw the whole thing into the Processing environment and push play. After looking it over, of course, because you know running random code from strangers is never a good idea. It’s pretty straightforward, though, if you would like to recreate it from scratch yourself.

Let me know if you see any problems with the experiment, please.


I updated the code. It had some tiny bugs that didn’t affect the random-inhibition outcome. I also added the radial inhibition, which gave the expected results. The random version is still really weird to me. I don’t get it.