Let’s review what we know about inhibitory interneurons. HTM theory says that they’re responsible for running the winner-takes-all competition, and that this competition is what makes the representations sparse. However, HTM theory simplifies things too much. We know a lot about how real inhibitory cells work, but we haven’t really analysed how those details affect the system.
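For reference, the spatial pooler’s simplification amounts to something like a fixed top-k selection over column overlap scores. Here is a toy stand-in (my own sketch, not Numenta’s actual code):

```python
import numpy as np

def k_winners(overlaps, k):
    """Fixed-k winner-take-all: the k columns with the highest overlap
    become active, so the sparsity is constant regardless of the input."""
    active = np.zeros(len(overlaps), dtype=bool)
    active[np.argsort(overlaps)[-k:]] = True
    return active
```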
Forming sparse representations by local anti-Hebbian learning
P. Foldiak 1990
Abstract. How does the brain form a useful representation of its environment? It is shown here that a layer of simple Hebbian units connected by modifiable anti-Hebbian feed-back connections can learn to code a set of patterns in such a way that statistical dependency between the elements of the representation is reduced, while information is preserved. The resulting code is sparse, which is favourable if it is to be used as input to a subsequent supervised associative layer. The operation of the network is demonstrated on two simple problems.
Link: https://redwood.berkeley.edu/wp-content/uploads/2018/08/foldiak90.pdf
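For anyone who wants to play with this, here is a rough sketch of the kind of network the abstract describes: Hebbian feedforward weights, anti-Hebbian (inhibitory) lateral weights, and adaptive thresholds that hold each unit near a target firing probability. It is a simplification written from the abstract, not a reproduction of Földiák’s exact equations, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 16        # input size and number of representation units
p_target = 0.125            # desired probability that any given unit is active

Q = rng.normal(scale=0.1, size=(n_out, n_in))   # Hebbian feedforward weights
W = np.zeros((n_out, n_out))                    # anti-Hebbian lateral (inhibitory) weights
t = np.zeros(n_out)                             # adaptive thresholds

def settle(x, n_iters=30, gain=10.0):
    """Iterate the recurrent network to a (binary) activity pattern."""
    ff = Q @ x                                  # feedforward drive is fixed during settling
    y = np.zeros(n_out)
    for _ in range(n_iters):
        a = np.clip(gain * (ff + W @ y - t), -30.0, 30.0)
        y = 1.0 / (1.0 + np.exp(-a))
    return (y > 0.5).astype(float)

def learn(x, y, lr=0.01):
    global Q, W, t
    # Hebbian rule on feedforward weights (Oja-style, keeps weights bounded)
    Q += lr * y[:, None] * (x[None, :] - Q)
    # Anti-Hebbian rule on lateral weights: units that fire together learn to
    # inhibit each other, which decorrelates the representation
    W -= lr * (np.outer(y, y) - p_target ** 2)
    np.fill_diagonal(W, 0.0)
    W = np.minimum(W, 0.0)                      # lateral weights stay inhibitory
    # Threshold adaptation keeps each unit's firing rate near p_target
    t += lr * (y - p_target)

for _ in range(5000):
    x = (rng.random(n_in) < 0.2).astype(float)  # random binary input pattern
    learn(x, settle(x))
```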
So what is an interneuron, really? Is it a functional distinction or a structural distinction? There are a lot of assumptions in play when neuroscientists mention them in their papers, and I’d like to know what those assumptions are.
The Wikipedia article on interneurons mostly explains them from a spinal cord point of view and gives very unhelpful explanations like:
“interneurons are the central nodes of neural circuits, enabling communication between sensory or motor neurons and the central nervous system”
Not very helpful.
Of course, we’re talking about the cortex, so we’re primarily interested in how a cortical interneuron differs from a cortical pyramidal neuron. The latter is something we have a good intuition for, since all of our artificial neural network methods and the HTM neuron use pyramidal neurons as their inspiration.
So what does an interneuron mean in the cortex?
Is it an elongated neuron that is more dendrite/axon than soma?
Is it the same as a pyramidal cell but with limited input/output responsibilities?
Are the biological/structural differences from pyramidal cells only in dendrite/axon connectivity, or is it a fundamentally different type of neuron with different computational properties?
What do the interneuron connections look like in a neuronal circuit and what are their functions?
It’s just frustrating that I haven’t been able to find straightforward answers about what is and isn’t known about interneurons, or a more focused explanation of what they actually are.
Inhibitory means that the cell uses GABA as its neurotransmitter.
Interneuron means that it is not a pyramidal neuron.
Here is a good review of the subject:
Interneurons of the Neocortical Inhibitory System
Henry Markram, Maria Toledo-Rodriguez, Yun Wang, Anirudh Gupta, Gilad Silberberg and Caizhi Wu
2004
Abstract
Mammals adapt to a rapidly changing world because of the sophisticated cognitive functions that are supported by the neocortex. The neocortex, which forms almost 80% of the human brain, seems to have arisen from repeated duplication of a stereotypical microcircuit template with subtle specializations for different brain regions and species. The quest to unravel the blueprint of this template started more than a century ago and has revealed an immensely intricate design. The largest obstacle is the daunting variety of inhibitory interneurons that are found in the circuit. This review focuses on the organizing principles that govern the diversity of inhibitory interneurons and their circuits.
Link: https://www.researchgate.net/publication/8336946_Interneurons_of_the_neocortical_inhibitory_system
It is also worth mentioning that they vary wildly in shape, electrical behavior, and plasticity, but almost all inhibitory neurons, except for those in the basal ganglia, make only short-range projections.
Off topic, but I find this paper’s abstract very inspiring!
The brain receives a constantly changing array of signals from millions of receptor cells, but what we experience and what we are interested in are the objects in the environment that these signals carry information about. How do we make sense of a particular input when the number of possible patterns is so large that we are very unlikely to ever experience the same pattern twice? How do we transform these high dimensional patterns into symbolic representations that form an important part of our internal model of the environment? According to Barlow (1985) objects (and also features, concepts or anything that deserves a name) are collections of highly correlated properties. For instance, the properties ‘furry’, ‘shorter than a metre’, ‘has tail’, ‘moves’, ‘animal’, ‘barks’, etc. are highly correlated, i.e. the combination of these properties is much more frequent than it would be if they were independent (the probability of the conjunction is higher than the product of individual probabilities of the component features). It is these non-independent, redundant features, the ‘suspicious coincidences’ that define objects, features, concepts, categories, and these are what we should be detecting. While components of objects can be highly correlated, objects are relatively independent of one another. Subpatterns that are very highly correlated, e.g. the right and left-hand sides of faces, are usually not considered as separate objects. Objects could therefore be defined as conjunctions of highly correlated sets of components that are relatively independent from other such conjunctions.
This article explains the math and computer science behind the winner-takes-all competition.
Robust parallel decision-making in neural circuits with nonlinear inhibition
Birgit Kriener, Rishidev Chaudhuri, Ila Fiete (2019)
DOI Link: http://dx.doi.org/10.1101/231753
Abstract
Identifying the maximal element (max,argmax) in a set is a core computational element in inference, decision making, optimization, action selection, consensus, and foraging. Running sequentially through a list of N fluctuating items takes N log(N) time to accurately find the max, prohibitively slow for large N. The power of computation in the brain is ascribed in part to its parallelism, yet it is theoretically unclear whether leaky and noisy neurons can perform a distributed computation that cuts the required time of a serial computation by a factor of N, a benchmark for parallel computation. We show that conventional winner-take-all neural networks fail the parallelism benchmark and in the presence of noise altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or re-scaling as the number of options N varies, the nWTA network converges N times faster than the serial strategy at equal accuracy, saturating the parallelism benchmark. The nWTA network self-adjusts integration time with task difficulty to maintain fixed accuracy without parameter change. Finally, the circuit generically exhibits Hick’s law for decision speed. Our work establishes that distributed computation that saturates the parallelism benchmark is possible in networks of noisy, finite-memory neurons.
This article is not surprising, but the data is useful nonetheless.
Precision of Inhibition: Dendritic Inhibition by Individual GABAergic Synapses on Hippocampal Pyramidal Cells Is Confined in Space and Time
Fiona E. Müllner, Corette J. Wierenga, and Tobias Bonhoeffer (2015)
Abstract
Inhibition plays a fundamental role in controlling neuronal activity in the brain. While perisomatic inhibition has been studied in detail, the majority of inhibitory synapses are found on dendritic shafts and are less well characterized. Here, we combine paired patch-clamp recordings and two-photon Ca2+ imaging to quantify inhibition exerted by individual GABAergic contacts on hippocampal pyramidal cell dendrites. We observed that Ca2+ transients from back-propagating action potentials were significantly reduced during simultaneous activation of individual nearby inhibitory contacts. The inhibition of Ca2+ transients depended on the precise spike-timing (time constant < 5 ms) and declined steeply in the proximal and distal direction (length constants 23–28 μm). Notably, Ca2+ amplitudes in spines were inhibited to the same degree as in the shaft. Given the known anatomical distribution of inhibitory synapses, our data suggest that the collective inhibitory input to a pyramidal cell is sufficient to control Ca2+ levels across the entire dendritic arbor with micrometer and millisecond precision.
Here’s another comprehensive mathematical analysis of WTA.
On the Computational Power of Winner-Take-All
W. Maass (2000)
Published in Neural Computation (MIT Press).
Abstract
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (also referred to as McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our theoretical analysis also provides answers to two basic questions raised by neurophysiologists in view of the well-known asymmetry between excitatory and inhibitory connections in cortical circuits: how much computational power of neural networks is lost if only positive weights are employed in weighted sums and how much adaptive capability is lost if only the positive weights are subject to plasticity.
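To make the terminology concrete, here is what a hard versus a “soft” winner-take-all gate might look like as plain functions. Note that Maass defines his own soft-WTA variants; the softmax below is just a stand-in to illustrate the idea of a graded competition:

```python
import numpy as np

def hard_wta(x):
    """Winner-take-all gate: 1 for the largest input, 0 elsewhere."""
    out = np.zeros(len(x), dtype=float)
    out[np.argmax(x)] = 1.0
    return out

def soft_wta(x, temperature=0.1):
    """A graded competition: outputs sum to 1 and concentrate on the
    largest inputs as the temperature is lowered (softmax stand-in)."""
    x = np.asarray(x, dtype=float)
    e = np.exp((x - x.max()) / temperature)
    return e / e.sum()
```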
Inhibitory Plasticity: Balance, Control, and Codependence
Guillaume Hennequin, Everton J. Agnes, and Tim P. Vogels (2017)
https://doi.org/10.1146/annurev-neuro-072116-031005
Abstract
Inhibitory neurons, although relatively few in number, exert powerful control over brain circuits. They stabilize network activity in the face of strong feedback excitation and actively engage in computations. Recent studies reveal the importance of a precise balance of excitation and inhibition in neural circuits, which often requires exquisite fine-tuning of inhibitory connections. We review inhibitory synaptic plasticity and its roles in shaping both feedforward and feedback control. We discuss the necessity of complex, codependent plasticity mechanisms to build nontrivial, functioning networks, and we end by summarizing experimental evidence of such interactions.
This is a review article; it covers the history of the topic and summarizes the state of the art.
There is a lot of research on how inhibitory plasticity leads to “balanced” or “critical” levels of activity.
Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity
Yotam Luz, Maoz Shamir (2012)
https://doi.org/10.1371/journal.pcbi.1002334
Abstract
It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feed forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike timing dependent plasticity of feed forward excitatory and inhibitory synaptic inputs to a single post-synaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, which contrasts with the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
Discussion
We have studied the computational effect of temporally asymmetric Hebbian plasticity of feed forward inhibition. Hebbian plasticity of inhibition generates negative feedback, in contrast to the positive feedback generated by Hebbian plasticity of excitation. This can be understood by the following intuitive explanation. If the feed forward inhibitory synapse is very strong, then it is less likely that a postsynaptic spike will follow a presynaptic spike. As a result more pre-post spike pairs will fall on the acausal branch of the STDP learning curve than on the causal branch. This, in turn, will depress the strong synapse. On the other hand, if the synapse is weak, then pre and post spike times will be largely uncorrelated and the STDP dynamics will sample uniformly both branches of the STDP curve with equal probability.
[…]
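The negative-feedback intuition in that discussion can be illustrated with a very crude toy model (entirely my own simplification, not the paper’s model): the stronger the inhibitory synapse, the less likely a postsynaptic spike is to follow a presynaptic one, so causal pre-before-post pairs become rarer and the Hebbian rule yields net depression.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_drift(w_inh, n_pre_spikes=20000, a_causal=0.01, a_acausal=0.01,
                   base_rate=0.2):
    """Average weight change per presynaptic spike for an inhibitory synapse
    of strength w_inh under a causal-potentiation / acausal-depression rule."""
    dw = 0.0
    for _ in range(n_pre_spikes):
        # chance of a postsynaptic spike shortly AFTER the presynaptic spike
        # shrinks as the inhibitory synapse gets stronger
        p_post_after = base_rate * np.exp(-w_inh)
        # chance of a postsynaptic spike shortly BEFORE is unaffected
        p_post_before = base_rate
        if rng.random() < p_post_after:    # causal pair -> potentiation
            dw += a_causal
        if rng.random() < p_post_before:   # acausal pair -> depression
            dw -= a_acausal
    return dw / n_pre_spikes

for w in (0.0, 0.5, 1.0, 2.0):
    print(w, expected_drift(w))   # drift ~0 for weak synapses, negative for strong ones
```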
Cortical Circuit Dynamics Are Homeostatically Tuned to Criticality In Vivo
Zhengyu Ma, Gina G. Turrigiano, Ralf Wessel, and Keith B. Hengen (2019)
https://doi.org/10.1016/j.neuron.2019.08.031
Summary
Homeostatic mechanisms stabilize neuronal activity in vivo, but whether this process gives rise to balanced network dynamics is unknown. Here, we continuously monitored the statistics of network spiking in visual cortical circuits in freely behaving rats for 9 days. Under control conditions in light and dark, networks were robustly organized around criticality, a regime that maximizes information capacity and transmission. When input was perturbed by visual deprivation, network criticality was severely disrupted and subsequently restored to criticality over 48 h. Unexpectedly, the recovery of excitatory dynamics preceded homeostatic plasticity of firing rates by >30 h. We utilized model investigations to manipulate firing rate homeostasis in a cell-type-specific manner at the onset of visual deprivation. Our results suggest that criticality in excitatory networks is established by inhibitory plasticity and architecture. These data establish that criticality is consistent with a homeostatic set point for visual cortical dynamics and suggest a key role for homeostatic regulation of inhibition.
This article presents a simplified neural network model and analyses its dynamics mathematically and statistically.
Self-Tuned Critical Anti-Hebbian Networks
Marcelo O. Magnasco, Oreste Piro, and Guillermo A. Cecchi (2009)
https://doi.org/10.1103/PhysRevLett.102.258102
Abstract
It is widely recognized that balancing excitation and inhibition is important in the nervous system. When such a balance is sought by global strategies, few modes remain poised close to instability, and all other modes are strongly stable. Here we present a simple abstract model in which this balance is sought locally by units following ‘‘anti-Hebbian’’ evolution: all degrees of freedom achieve a close balance of excitation and inhibition and become ‘‘critical’’ in the dynamical sense. At long time scales, a complex ‘‘breakout’’ dynamics ensues in which different modes of the system oscillate between prominence and extinction; the model develops various long-tailed statistical behaviors and may become self-organized critical.
P.S. Here is a good review article explaining what “criticality” means:
Being critical of criticality in the brain
John M. Beggs and Nicholas Timme (2012)
https://doi.org/10.3389/fphys.2012.00163
Abstract
Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we will explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counter points are presented in dialog form.
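A quick way to get a feel for “poised between dying out and blowing up” is a branching process toy model (not from the paper, just a standard illustration): each active unit triggers on average sigma units in the next time step, and sigma = 1 is the critical point where avalanche sizes become heavy-tailed.

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(sigma, cap=100_000):
    """Total number of activations triggered by a single seed unit when each
    active unit activates Poisson(sigma) units in the next time step."""
    active, size = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    # sub-critical: small avalanches; critical: heavy tail; super-critical: runaway
    print(sigma, np.mean(sizes), max(sizes))
```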
Another way of thinking about inhibition is as a form of “gain control” as described in this excellent and free article:
A New Mechanism for Neuronal Gain Control (or How the Gain in Brains Has Mainly Been Explained)
Nicholas J. Priebe and David Ferster (2002)
https://doi.org/10.1016/S0896-6273(02)00829-2
One of the more prosaic but necessary features of almost any information processing system is gain control. All such systems must have some way to adjust the relationship between input, which can vary dramatically depending on changes in the environment, and output, which is almost always required to remain within a limited range of amplitudes. While the volume control on a radio or the brightness control on a computer monitor are not the most exciting or highly touted features, imagine such devices without these forms of gain control. Many an engineer can attest to the large effort required to design automatic gain controls in telephones, cameras, and radio transmitters.
The brain is no different in its need for gain control. In the visual system, for example, it seems to occur at every stage. When adjusting between a sunny day and a moonless night, the retina changes the relationship between light level and neuronal output by a factor of more than 10^6, so that the signals sent to later stages of the visual system always remain within a much narrower range of amplitudes. In the visual cortex, which responds not to luminance but to local luminance contrast, neurons constantly adjust their contrast sensitivity according to the mean level of contrast present in the visual environment. Prolonged viewing of an Ansel Adams photograph, for example, often leads to changes in perception as the visual system gradually adapts to low or high contrast portions of the image, allowing subtle shadings to emerge.
The trigger for changes in gain need not always be external. Internally generated changes in attention seem to act through a gain control mechanism as well. For example, neurons in areas V4 and MT are tuned for the orientation or direction of visual stimuli, but the amplitude of their response depends on whether or not the animal is attending to the stimulus (McAdams and Maunsell, 1999; Treue and Martínez-Trujillo, 1999). Similar changes in neural response that can be well described by scaling have been observed throughout sensory and motor cortex (for review, see Salinas and Thier, 2000), so gain control seems to be as important for the brain as it is for man-made machines.
Unlike machines, however, the mechanisms underlying neuronal gain control have not been as readily apparent. One popular mechanistic explanation for gain control has been shunting inhibition. Shunting inhibition refers to a synaptically activated conductance with a reversal potential at or near the resting potential of a neuron. On its own, this conductance does not cause a significant change in membrane potential. But if the conductance of the synaptically activated channels is large enough, activating the shunting synapse will cause a significant decrease in the overall input resistance of the cell, which will in turn lead to an attenuation of the potential changes evoked by excitatory inputs. The attractive feature of shunting in the present context is that all EPSPs are scaled by the same amount (in proportion to the decrease in the input resistance of the cell), exactly what is required for a multiplicative gain control.
[…]
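The divisive effect of a shunting conductance is easy to see in a single-compartment steady-state calculation. The sketch below is generic textbook membrane math with made-up numbers, just to show that a conductance reversing at rest scales EPSP amplitudes down without shifting the resting potential:

```python
# Steady-state membrane potential of a point neuron with a leak conductance,
# an excitatory conductance, and a shunting inhibitory conductance whose
# reversal potential sits at the resting potential.
E_leak, E_exc, E_shunt = -70.0, 0.0, -70.0   # mV; the shunt reverses at rest
g_leak = 10.0                                 # nS (illustrative value)

def epsp_amplitude(g_exc, g_shunt):
    g_total = g_leak + g_exc + g_shunt
    v_ss = (g_leak * E_leak + g_exc * E_exc + g_shunt * E_shunt) / g_total
    return v_ss - E_leak          # depolarization relative to rest, in mV

for g_shunt in (0.0, 10.0, 30.0):
    # doubling the excitatory input roughly doubles the response at every
    # shunt level, i.e. the shunt acts as a divisive gain knob
    print(g_shunt, epsp_amplitude(1.0, g_shunt), epsp_amplitude(2.0, g_shunt))
```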
Consider how the spatial pooler could be improved if it modeled inhibitory interneurons. One possible benefit is improved control over the sparsity of the activity. Currently, the WTA competition activates a fixed number of cells.
However, what is important is not the exact number of cells that activate, but rather that enough cells activate for other areas of the brain to decode the information contained in the activity. The inhibitory cells respond to specific excitatory SDRs using Hebbian learning. So if an inhibitory cell recognizes a pattern in the excitatory activity, then clearly enough excitatory cells have already activated for other areas of the brain to recognize that pattern as well.
Another possible benefit of modeling inhibitory activity is targeted gain control / normalization. Sensory features should be inhibited in proportion to their magnitude: strongly represented features should be strongly inhibited, allowing weaker features to be detected alongside the stronger ones.
When the spatial pooler processes a sensory input, both the excitatory and the inhibitory cell activity form SDRs, and these two SDRs should form synapses between each other through Hebbian learning. When the inhibitory cells respond to a specific sensory feature, they will inhibit all of the corresponding excitatory cells that represent that same feature. But through the magic of SDRs, excitatory cells that do not correspond to that sensory feature would only be inhibited by random chance / stray inhibition.
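Here is a very rough sketch of how that could look in code. Everything here (sizes, learning rate, the idea of picking a few winning inhibitory cells per step) is my own invention for illustration; it is not part of any existing spatial pooler implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

n_exc, n_inh = 512, 64

w_ei = rng.random((n_inh, n_exc)) * 0.1    # excitatory -> inhibitory: pattern detectors
w_ie = np.zeros((n_inh, n_exc))            # inhibitory -> excitatory: targeted suppression

def inhibition_for(exc_sdr, lr=0.02, k_inh=4):
    """exc_sdr: binary vector marking the currently active excitatory cells.
    Returns the amount of inhibition each excitatory cell would receive."""
    drive = w_ei @ exc_sdr
    winners = np.argsort(drive)[-k_inh:]          # inhibitory cells that best match this SDR
    inhibition = w_ie[winners].sum(axis=0)
    # Hebbian learning ties the winning inhibitory cells to this SDR, both on
    # their inputs (recognition) and on their outputs (targeted suppression),
    # so excitatory cells outside the SDR only ever receive stray inhibition.
    w_ei[winners] += lr * (exc_sdr - w_ei[winners])
    w_ie[winners] += lr * exc_sdr
    return inhibition
```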