Is it algorithmically suitable to implement local inhibition by creating a fixed number of local neighborhoods (based on sparsity or other parameters) and then selecting one winner from each neighborhood when the number of neighborhoods equals the target number of active columns, or selecting the final winners from among the local winners to match the sparsity, instead of creating an inhibition neighborhood for each minicolumn?
Or is that the default approach?
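To make the first variant concrete, here is a minimal sketch (names and the even-split scheme are my own assumptions, not an established implementation): split the layer into a fixed number of neighborhoods, take the top scorer(s) in each, then trim the local winners globally to hit the target sparsity.

```python
import numpy as np

def neighborhood_inhibition(overlaps, num_neighborhoods, num_active):
    """Pick winners from a fixed set of local neighborhoods instead of
    building one inhibition neighborhood per minicolumn.
    overlaps: 1-D array of overlap scores, one per minicolumn."""
    n = len(overlaps)
    bounds = np.linspace(0, n, num_neighborhoods + 1, dtype=int)
    per_hood = max(1, num_active // num_neighborhoods)

    local_winners = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        hood = np.arange(lo, hi)
        # top `per_hood` columns inside this neighborhood
        top = hood[np.argsort(overlaps[lo:hi])[-per_hood:]]
        local_winners.extend(top.tolist())

    # if we over-selected, keep only the globally strongest local winners
    local_winners = np.array(local_winners)
    order = np.argsort(overlaps[local_winners])[::-1]
    return np.sort(local_winners[order[:num_active]])
```

With, say, 16 neighborhoods and 2% sparsity over 2048 columns, this does one cheap partition instead of 2048 per-column neighborhood computations, at the cost of hard neighborhood borders.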
I imagine the inhibitory cells work the same way, connecting to a fixed number of cells in a local area. I haven’t read much about this. What is the ratio of inhibitory to excitatory cells in any given layer of the neocortex? I am really looking for biological frameworks for how local inhibition takes place.
I was thinking about implementing abstract interneurons that would be connected to some neurons in the layer and would inhibit some of them based on local rules. Any references would be appreciated.
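A toy sketch of that interneuron idea (the class, the pool size, and the "silence all but the strongest" rule are all my own assumptions; real interneuron dynamics are far richer): each abstract interneuron watches a random pool of excitatory cells and, when enough of them fire at once, suppresses all but the strongest in its pool.

```python
import random

class AbstractInterneuron:
    """Toy inhibitory unit: watches a random pool of excitatory cells
    and silences all but the strongest one when the pool is too active.
    (Hypothetical local rule, for illustration only.)"""

    def __init__(self, cell_ids, fire_threshold, rng):
        self.pool = rng.sample(cell_ids, k=min(len(cell_ids), 8))
        self.fire_threshold = fire_threshold

    def apply(self, activity):
        """activity: dict cell_id -> activation level (mutated in place)."""
        active = [c for c in self.pool if activity.get(c, 0.0) > 0.0]
        if len(active) < self.fire_threshold:
            return  # not enough local activity: interneuron stays quiet
        winner = max(active, key=lambda c: activity[c])
        for c in active:
            if c != winner:
                activity[c] = 0.0  # inhibited

# a layer would scatter several of these over the excitatory population
rng = random.Random(0)
cells = list(range(20))
layer_inhibition = [AbstractInterneuron(cells, fire_threshold=3, rng=rng)
                    for _ in range(5)]
```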
Also this is interesting: https://hlab.stanford.edu/ssi/ssi_files/image002.jpg
If nothing else, you may wish to look at how nature uses these things as inspiration.
I think the diagram on page 20 should be a meditation focus item. Or page 6 - Your call.
To take a stab at the OP’s question, I’ll start from the purposes of the inhibitory functions:
One is limited in scope and applies to the local column and no further. The winner in the recognition task shuts down the local talent. If nobody wins then they all get an attendance prize.
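That "winner shuts down the local talent, otherwise everyone gets an attendance prize" rule can be sketched in a few lines (thresholds and names are illustrative, not from any particular implementation; the fallback resembles HTM-style column bursting):

```python
def column_winner(cell_scores, win_threshold):
    """Within one minicolumn: if some cell clearly wins the recognition
    task, it alone stays active and inhibits its neighbors; if nobody
    clears the bar, every cell stays active (the 'attendance prize')."""
    best = max(range(len(cell_scores)), key=lambda i: cell_scores[i])
    if cell_scores[best] >= win_threshold:
        return [best]                      # winner-take-all
    return list(range(len(cell_scores)))   # nobody won: all stay active
```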
You are spot on with the shout out to sparsity. But if you are going to “sparsify” anyway - why not use that as a way to clean and group information? Win-win!
Think of the other pattern-sensing column in the next “grid” spot on the “other side” of the pool of inhibition; this column participates in the mutual inhibition of “lesser match” neighbors and helps form a larger repeating grid structure. Calvin points to the reciprocal long-range excitatory proximal-level connections. This is a possible binding driver; the columns work together as long as they both recognize the part of the pattern they are seeing. Look at how well this works algorithmically - if receptive fields overlap, you get voting on the identity of a larger pattern. This grid forms spontaneously and can be of any orientation, any alignment of the grid centers (phasing), and a wide variety of grid spacings - possibly related to the intensity of the stimulus.
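The voting idea can be sketched crudely (everything here is illustrative): each column reports a set of candidate pattern labels for its patch, and columns with overlapping receptive fields keep only the labels they agree on, falling back to a majority count when strict agreement eliminates everything.

```python
from collections import Counter

def vote(column_guesses):
    """column_guesses: list of sets, each holding the candidate pattern
    labels one column assigns to its (overlapping) receptive field.
    Mutual agreement across columns narrows the candidates."""
    consensus = set.intersection(*column_guesses)
    if consensus:
        return consensus
    # no unanimous label: fall back to the most widely supported one(s)
    counts = Counter(label for guesses in column_guesses for label in guesses)
    top = max(counts.values())
    return {label for label, c in counts.items() if c == top}
```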
There is nothing that says the activation pattern(s) have to be in a grid - just sparse. It is clear from the connectome project that multiple tracts can (and usually do) converge and mingle their outputs. The grid receptive-field pattern for each column samples the local area (perhaps 1 degree of visual angle?) and processes just what it can see.
The grid-forming structure uses its 2^many (gazillions?) pattern-storage capacity to remember little bits of the patterns that have come its way. Both spatial and temporal.
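That "2^many" remark can be made concrete with a quick back-of-envelope count: with n columns and w of them active, the number of distinct sparse codes is the binomial coefficient C(n, w). The n = 2048, w = 40 figures below are just a commonly used example size, not anything from this thread.

```python
from math import comb

# number of distinct sparse activation patterns:
# n minicolumns, w of them active (~2% sparsity)
n, w = 2048, 40
patterns = comb(n, w)  # the "2^many"
print(f"{patterns:.3e} ≈ 2^{patterns.bit_length() - 1} distinct codes")
```

Even at this modest size the count is on the order of 10^84, so collisions between unrelated patterns are effectively impossible.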
In an important and related vein: There is a good possibility that more than one pattern could be processed in each cortical map at the same time, and these sub-patterns could communicate through long-range fiber tracts.
There are a fair number of papers that discuss the feed-forward and feedback paths - there is good reason to suspect that the ascending and descending paths have different learning rules, and they may have different inhibitory cell activity because of this.
I still have a great deal of trouble visualizing how information “looks” after it gets encoded in a grid. This is one of those stories I tell myself, again and again, each time trying to fill in the bits that don’t make sense.
The bit where the raw senses get hashed to make grids makes sense from an algorithmic view, and I can point to the parts of the hardware that function to make this happen - but I don’t have a good visualization of the process. Maybe it’s just an artist’s-block thing.