How does the brain create sparsity from our senses?

How does the brain make information sparse? For example, treating the eye as a digital camera (since I think that's easier for me to explain): if an animal's eyes have a pixel dimension of 1000 x 1000 (so 1,000,000 pixels), how does the brain make this data sparse? On top of that, let's say pixels have values of 0 to 255 plus color channels. How would the brain represent all this? Obviously I know the eye is not a digital camera, but I'm sure there is still a lot of information being sent to cortical columns, such as signals from the color cones and rods, the intensity of those cone and rod responses, pre-processing done in the eye, etc.
Also, in this example above, would a cortical column handle just one pixel? And would all the cortical columns handling pixels "vote" for a complete picture?

The retina already outputs a kind of sparse code with its center-surround ganglion cells, which only fire strongly when their receptive field sits over an edge.

V1 probably does further sparsification by using something like a wavelet transform.
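Both steps can be sketched numerically. This is only a toy illustration, not the real retinal or V1 circuitry, and the 1000 x 1000 test image, thresholds, and Gabor parameters are arbitrary choices: a difference-of-Gaussians filter stands in for the center-surround ganglion cells, and a single Gabor kernel stands in for the wavelet-like V1 stage. Most of the output ends up near zero, which is the sparsity in question.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def sparsity(x, eps=1e-3):
    """Fraction of coefficients that are (near) zero."""
    return float(np.mean(np.abs(x) < eps))

# Hypothetical 1000 x 1000 grayscale "camera" image with 0-255 values,
# matching the example in the question: mostly flat, with one vertical edge.
rng = np.random.default_rng(0)
img = np.full((1000, 1000), 128.0)
img[:, 500:] = 200.0
img += rng.normal(0.0, 2.0, img.shape)       # a little sensor noise

# "Retina" stage: center-surround receptive field = difference of Gaussians.
# Output is large only near the edge and near zero over uniform regions.
center   = gaussian_filter(img, sigma=1.0)
surround = gaussian_filter(img, sigma=3.0)
retina = center - surround
retina[np.abs(retina) < 1.0] = 0.0           # weak responses never reach spiking

# "V1" stage: a Gabor kernel, i.e. a wavelet-like oriented band-pass filter.
def gabor(size=21, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr =  x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

v1 = fftconvolve(retina, gabor(theta=0.0), mode="same")
v1[np.abs(v1) < 0.1 * np.abs(v1).max()] = 0.0

print("fraction of near-zero values")
print("  raw image   :", sparsity(img))      # ~0.0, dense
print("  retina (DoG):", sparsity(retina))   # mostly zeros
print("  V1 (Gabor)  :", sparsity(v1))       # mostly zeros, oriented-edge responses only
```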

The brain manifests sparsity as a by-product of the interplay between excitatory and inhibitory neurons. The first excitatory neuron to fire in response to an input signal will typically trigger an inhibitory neuron, which then fires and suppresses the response of most of the other excitatory neurons nearby. Thus only one (or a few) excitatory neurons are allowed to be active within a given region.
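As a toy illustration of that interplay (not a biophysical model; the pool size and the value of k are arbitrary), lateral inhibition acts roughly like a k-winners-take-all rule: of all the excitatory cells in a local pool, only the most strongly driven few remain active.

```python
import numpy as np

def k_winners_take_all(drive, k=2):
    """Crude stand-in for lateral inhibition: within one local pool,
    the k most strongly driven excitatory cells stay active and the
    rest are suppressed (as if an inhibitory cell shut them down)."""
    out = np.zeros_like(drive)
    winners = np.argsort(drive)[-k:]              # indices of the k largest drives
    out[winners] = drive[winners]
    return out

rng = np.random.default_rng(1)
drive = rng.random(20)                            # feedforward input to 20 nearby cells
active = k_winners_take_all(drive, k=2)

print("cells receiving input:", np.count_nonzero(drive))   # 20
print("cells left active:    ", np.count_nonzero(active))  # 2, i.e. sparse
```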

Sparsity is also partly an artifact of the way we try to model the brain as a synchronous, programmed system. Because of the way most programming is done, the model ends up with very few active inputs yet is still computed over everything, so the inputs look "sparse", and relating that back to the brain is somewhat misleading. The brain does not process every sense all of the time (an inactive sense has no "output"). The eyes only "see" changes, and saccades are a way of generating enough change that an internal perception of what reality actually is can be re-created.
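To make the "eyes only see changes" point concrete, here is a minimal frame-differencing sketch (it says nothing about saccades or retinal adaptation; the image size, the moving patch, and the threshold are all made up): when almost nothing in the scene changes between two samples, almost no events are produced.

```python
import numpy as np

rng = np.random.default_rng(2)
frame_t0 = rng.integers(0, 256, size=(1000, 1000)).astype(float)

# Next "glance": the scene is unchanged except for one small moving object.
frame_t1 = frame_t0.copy()
frame_t1[400:420, 600:620] += 50.0

# A change-driven sensor only reports where something changed.
change = frame_t1 - frame_t0
events = np.abs(change) > 5.0                     # arbitrary detection threshold

print("pixels in the frame    :", frame_t0.size)             # 1,000,000
print("pixels reporting change:", np.count_nonzero(events))  # 400
```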

The brain is an asynchronous network, whereby inputs are processed as they happen and traverse different routes at different speeds (myelination, recent synapse events, proximal inhibition effects, etc.). Each input creates a wave, and where the waves overlap they either reinforce or weaken the coincidence of events. Think of throwing a handful of stones into a pond: the peaks where the ripples overlap are a form of sparsity for that event at that particular moment in time. The full rings of the ripples are not part of the picture, just the overlaps, and that is how I view the "sparsity" of the brain.

A 0-255 gradation is not really sparsity; it's more a form of spike encoding of the senses, in terms of pulse amplitude and pulse/burst frequency. You can get, say, a burst of 3 pulses at a frequency of 100 Hz, and that burst recurs, say, every 100 ms, i.e. at roughly 10 Hz (a type of double frequency encoding). So you can have 1-n pulses at X Hz that repeat at Y Hz to give the perception of a value between 0 and W.
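Here is a rough sketch of that double encoding using the numbers above (3 pulses at 100 Hz, bursts repeating at roughly 10 Hz). The mapping from a 0-255 value to the number of pulses per burst is something I made up purely to illustrate the idea.

```python
import numpy as np

def burst_code(value, v_max=255, max_pulses=10,
               burst_hz=100.0, repeat_hz=10.0, duration=1.0):
    """Encode a 0..v_max value as spike times: n pulses at burst_hz inside
    each burst, with bursts repeating at repeat_hz. The value-to-pulse-count
    mapping is an arbitrary illustration, not a measured relationship."""
    n_pulses = max(1, round(value / v_max * max_pulses))
    burst_starts = np.arange(0.0, duration, 1.0 / repeat_hz)   # every 100 ms
    within_burst = np.arange(n_pulses) / burst_hz              # 10 ms apart
    return np.sort((burst_starts[:, None] + within_burst[None, :]).ravel())

spikes = burst_code(76)                      # intensity 76 of 255 -> 3 pulses per burst
print("total spikes in 1 s:", spikes.size)   # 30 = 3 pulses x 10 bursts
print("first burst (s)    :", spikes[:3])    # [0.   0.01 0.02], i.e. 100 Hz
```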

The cortex tends to have a bunch of different kinds of compartments. The ones I'm talking about are specializations, so they're specific to what that cortical region does, and the primary visual cortex is super specialized. It organizes and maps information using these compartments. Scientists mainly identify the compartments by cytochrome oxidase staining, which is a marker of metabolic activity, but a compartment is a functional thing: it has its own patterns of connectivity and other functional properties. The compartments are usually confined to one layer, or even part of a layer, although scientists often find different properties in other layers depending on whether a neuron in those layers is horizontally aligned with a particular type of compartment.

In V1, there are blobs and interblobs. Blobs represent color, and interblobs represent the orientation of lines. I believe the idea of minicolumns originally came from interblobs, where each minicolumn activates in response to a line at a particular orientation.

Cortical columns (macrocolumns, not minicolumns) aren't really a strict rule so much as the idea of local processing (seeing the world through a straw). There aren't always discrete cortical columns. Either way, they'd handle more than one pixel. If you mean a minicolumn, that would also cover multiple pixels, because minicolumns can respond to lines at particular orientations, and a line spans more than one pixel.
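Here's a very loose sketch of that last point (my own toy setup, not actual V1 wiring): each hypothetical "minicolumn" below watches a 20 x 20 patch of the questioner's image rather than a single pixel, stays silent if there is nothing in its patch, and otherwise reports its preferred line orientation; the "vote" is then just the consensus across the columns that responded.

```python
import numpy as np

PATCH = 20  # each toy "minicolumn" watches a 20x20 patch, not a single pixel

# Toy oriented-line templates, one per preferred orientation.
templates = {
    "horizontal": np.zeros((PATCH, PATCH)),
    "vertical":   np.zeros((PATCH, PATCH)),
    "diag \\":    np.eye(PATCH),
    "diag /":     np.fliplr(np.eye(PATCH)),
}
templates["horizontal"][PATCH // 2, :] = 1.0
templates["vertical"][:, PATCH // 2] = 1.0

def column_response(patch):
    """Best-matching orientation for this patch, or None if nothing is there."""
    scores = {name: float((patch * t).sum()) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Hypothetical 100x100 image containing a single "\" diagonal line.
img = np.zeros((100, 100))
img[np.arange(100), np.arange(100)] = 255.0

reports = []
for r in range(0, 100, PATCH):
    for c in range(0, 100, PATCH):
        out = column_response(img[r:r + PATCH, c:c + PATCH])
        if out is not None:
            reports.append(out)

print("columns reporting:", len(reports), "of", (100 // PATCH) ** 2)   # 5 of 25
print("their 'vote':", max(set(reports), key=reports.count))           # diag \
```

Only the columns whose patch actually contains part of the line say anything at all, which is another place the sparsity comes from.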
