Inhibitory activity

We can classify neurons into two kinds based on their role: excitation (increasing the likelihood of other neurons firing) or inhibition (decreasing the likelihood of other neurons firing). This seems to be a fundamental aspect of brains.

HTM only has excitatory neurons. The k-winner algorithm inhibits neurons from firing, so I guess this is meant to be the equivalent of inhibitory neurons.

The inhibitory neurons in a brain are “learning”, or adapting, in ways similar to excitatory neurons, e.g. forming and losing synapses.

From a theoretical perspective, what is the reasoning for not modelling inhibitory neurons? Maybe I missed it, but I can’t remember seeing a clear rationale.

Are there ANNs which have both kinds of neurons?

Cheers.

2 Likes

K winner is an attempt to model basket cells.

2 Likes

Do you have a reference for that? I wonder if the rationale is based more on the concept of sparsity than on modelling a particular type of inhibitory neuron. K-winner has no learning (no updating of the k-winner algorithm itself), which is a major issue if it is intended to model neurons.

1 Like

K winner is used in the SP algorithm, which does have learning. The “winning k” minicolumns adjust their synapses such that if a (spatially) semantically similar input occurs in the future, those same minicolumns are more likely to win again. This results in specialization, where each minicolumn becomes best at recognizing particular spatial patterns.
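
To make the learning step concrete, here is a minimal sketch of the idea (my own illustration, not Numenta’s implementation; the array shapes and the k/threshold/inc/dec parameters are assumptions):

    import numpy as np

    def sp_step(input_bits, permanences, k=40, threshold=0.5, inc=0.05, dec=0.01):
        # One simplified Spatial Pooler step: k-winner selection plus learning.
        # input_bits:  binary numpy vector, shape (num_inputs,)
        # permanences: synapse permanences, shape (num_columns, num_inputs)
        connected = permanences >= threshold             # which potential synapses count
        overlaps = (connected * input_bits).sum(axis=1)  # each minicolumn's overlap score
        winners = np.argsort(overlaps)[-k:]              # the k-winner "inhibition" step

        # Learning only on the winning minicolumns: synapses aligned with active
        # input bits are strengthened, the rest weakened, so similar inputs are
        # more likely to activate the same minicolumns next time.
        for c in winners:
            permanences[c] += np.where(input_bits > 0, inc, -dec)
        np.clip(permanences, 0.0, 1.0, out=permanences)
        return winners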

The TM algorithm also models the effect of inhibitory cells (perhaps @Bitking or others can identify which ones specifically – the two I hear thrown around frequently are basket and chandelier cells, but I’m not sure which are which offhand). Anyway, the inhibitory effect modeled in the TM algorithm works like this: a “minicolumn” represents a collection of cells which share a receptive field (i.e. they would all activate for the same or very similar feedforward input) and which are physically close enough that if one or a couple of them fire a little faster than the others, their output drives nearby inhibitory cells to silence the others (thus leading to activity representing an input in high-order context). TM of course also includes learning.
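
The within-minicolumn inhibition can be sketched the same way (again my own simplification; the data layout is an assumption):

    def activate_minicolumns(active_columns, predicted_cells, cells_per_column=32):
        # Simplified TM activation: in each winning minicolumn, cells that were
        # predicted at t-1 fire first and (via the modeled inhibition) silence
        # their neighbours; with no prediction, the whole minicolumn bursts.
        active_cells = set()
        for col in active_columns:
            predicted = {(col, i) for i in range(cells_per_column)} & predicted_cells
            if predicted:
                active_cells |= predicted    # inhibition: only the predicted cells fire
            else:
                active_cells |= {(col, i) for i in range(cells_per_column)}  # burst
        return active_cells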

That said, I assume by learning you are actually referring to the learning algorithm being run by the inhibitory cells’ dendrites and the dendrites of the cells they transmit to (versus downstream effects of learning that happens on the pyramidal neurons). If we assume for a moment that some of the finer details of inhibitory neurons are not functionally important at a macro level, then we definitely have to explain what macro effect those details we glossed over should have.

I believe the abstraction being made is that inhibitory neurons (at least for the two classic HTM algorithms) primarily have an area effect at the macro level (allowing the finer details to be glossed over). Of course that is another judgement call about where to set the magnification level (which can always be revisited later if it turns out they glossed over something important).

3 Likes

I think you are confusing two things:

  1. the SP algorithm for K-winner selection between minicolumns
  2. the learning in the synapse of the cells in the minicolumns

There is no learning in the SP k-winner algorithm itself; only the excitatory neurons are learning.

I am confused by your description of TM. The sequence is predicted by active dendrites. Can you please point to the TM pseudocode in BAMI (for example) to show where it is modelling inhibitory cells?

The idea that inhibitory cells are only working at a macro level seems in contradiction with the neuroscience. They seem important for understanding active dendrites.

1 Like

These are directly related, so be careful when separating them. The k-winner selection is based on minicolumn scores, and minicolumn scores are based on previously learned spatial patterns.

Yes, there is learning in the SP algorithm (and yes, you are correct that it is the excitatory neurons doing the learning). This is most obvious in one of the non-biological abstractions in the SP algorithm that you may not be aware of – that a minicolumn is modeled as having a single proximal dendrite (treated as if it were itself a cell). In fact, in a lot of HTM implementations, the Cell and Minicolumn classes are extensions of the same parent class. This one dendrite learns to align with spatial patterns in the input space (learning is built around who wins the k-winners competition).
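
A bare-bones sketch of that class relationship (the names are my own, not from any particular HTM codebase):

    class DendriteOwner:
        # Shared base: anything that owns dendritic segments and learns on them.
        def __init__(self):
            self.segments = []

    class Cell(DendriteOwner):
        # A cell within a minicolumn; grows distal segments (used by TM).
        pass

    class Minicolumn(DendriteOwner):
        # Treated as if it were a cell with a single proximal dendrite
        # (the SP abstraction described above).
        def __init__(self, num_cells=32):
            super().__init__()
            self.cells = [Cell() for _ in range(num_cells)]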

Now, of course, this abstraction is not necessary – I and others have coded the SP algorithm where each cell has its own proximal dendrite (but that implementation comes with no functional benefit and at 32x the cost in resources). The abstraction just makes it easy to demonstrate where learning is happening in the SP algorithm, which is completely separate from where learning happens in the TM algorithm.

Note that the BAMI algorithm details sections don’t really go very deep into the biology or the justifications for any abstractions of it, but rather focus on the implementation (I would say it is meant primarily for an ML audience). There is certainly better information from a neuroscience perspective, and justifications for the various abstractions, here on the forum.

That said, here are some of the relevant sections from BAMI:

    function activatePredictedColumn(column)
      for segment in segmentsForColumn(column, activeSegments(t-1))
        activeCells(t).add(segment.cell)
        winnerCells(t).add(segment.cell)
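
A runnable rendering of that excerpt might look like this (a sketch under my own assumptions about the data structures, which BAMI leaves abstract):

    def activate_predicted_column(column, active_segments_prev, active_cells, winner_cells):
        # Every cell that had an active distal segment at t-1 becomes active
        # (and a winner) at t; the unpredicted cells in the minicolumn stay
        # silent - this is where the inhibitory macro effect is assumed.
        for segment in active_segments_prev:
            if segment.cell.column == column:
                active_cells.add(segment.cell)
                winner_cells.add(segment.cell)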

2 Likes

I edited my post – I mean the SP k-winner algorithm. The essential point, which we seem to agree on, is that the learning of inhibitory cells is not being modelled.

This is in relation to active (excitatory) cells. I guess we can also agree there are no inhibitory cells being modelled there?

I think we need a much stronger case than the one you made to ignore the learning that could be associated with inhibitory cells.

1 Like

Correct, no inhibitory cells – the algorithm assumes and models their macro effect.

1 Like

This makes no sense to me: you can’t reasonably assume that something that is learning (inhibitory cells) can be replaced with a static algorithm and expect to get anything like the same functionality – or can you?

I think it comes down to needing to draw the line and get a foothold somewhere so you can actually start trying to understand a complex system. My sense is that the SP and TM algorithms are only a tiny piece of what a CC is doing, and for those two specifically, macro-effects of the inhibitory networks were sufficient to explore aspects of learning spatial and temporal patterns.

This is speculation on my part, but I would guess that on the path to gaining a more comprehensive understanding of the CC algorithm, we’ll sometimes need to explicitly model inhibitory cells in various cases. One example that I ran into myself with TU was the question of what “starts” a sequence unfolding – which perhaps can happen organically if you assume neurons “want” to fire, but are being held back by an inhibitory network that itself can bias specific patterns.

1 Like

I think this is where the reverse-engineering approach fails. It can work when you have a complicated system that is composed of simple systems. But complexity (as in complexity science or network science) is not like that.

Here is a quote from The Spike: “Local neurons in cortex that make the inhibitory inputs onto our neuron fire two- to threefold more spikes. And the gaps where these spikes arrive can be four- to fivefold stronger than those of excitatory inputs.”

True – a perfect model would consider all levels of detail (which may be impossible given it is built on a chaotic universe). I like @Bitking’s analogy of a kid taking apart a drum to try and find the piece that makes the noise.

That said, can such an approach make it possible to learn anything useful about the system? I believe it can (and in the case of HTM has). You just have to keep reminding yourself that you have certainly overlooked a lot of emergent properties while zeroing in on a limited aspect, so you don’t forget to keep going back and look at the areas you have glossed over in light of new information that you’ve learned from the focused perspective.

Some inhibitory cells are important for active dendrites, but those aren’t in HTM. The ones HTM models target the cell perisomatically, so they don’t operate on distal dendritic segments, just on the total excitation level from proximal dendrites.

There are many types of interneurons, many of which aren’t included in HTM in any form. The theories aren’t ready for that yet. Right now, they’re mostly driven by constraints, like what the brain must do and whether it could implement a hypothetical process. I expect that’ll change quickly at some point in the future.

Other groups will figure out how the cortex works by modelling everything, if that’s the right approach. I don’t think the tools for experimentation are quite at that point yet. Even though the tools are very advanced, they aren’t good enough to apply strict statistical standards if a scientist wants to publish anything, ever. In neuroscience, results are usually reported if there’s less than a 1 in 20 chance of being a statistical anomaly, whereas in particle physics the bar is about 1 in 3.5 million (the five-sigma standard) to count as a discovery. That’s pretty extreme, but a 1 in 20 chance of falsehood is actually much higher in practice, because if you check a bunch of things you get more false positives: test 20 independent hypotheses at p < 0.05 and the chance of at least one false positive is already 1 − 0.95^20, about 64%. The technology for experimentation is developing pretty quickly, I think, so maybe neuroscience will have the tools to get the information needed for exact modelling in the near future.

Some aspects of the brain are complex systems, but not all of them. For example, an individual neuron isn’t a complex system (I mean, it’s super complicated when you get down to receptors and whatnot, but it’s not a network). What neurons do can say a lot about what the whole thing does. For example, local summation on distal dendrites says there are OR-like processes going on, and something like 90% of synapses between excitatory cortical cells are involved in that kind of local summation (or some large percentage). Other observations are informative too, like connectivity and receptive fields.
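
To unpack the “OR processes” point, the standard HTM-style account of a distal dendrite can be written in a few lines (my own toy rendering; the threshold value is an assumption):

    def cell_is_predicted(segments, active_inputs, segment_threshold=10):
        # Each distal segment is a coincidence detector: a thresholded AND over
        # its synapses. The cell is depolarized (predicted) if ANY segment
        # fires, i.e. an OR across segments.
        # segments: list of sets of presynaptic cell ids
        # active_inputs: set of currently active presynaptic cell ids
        return any(len(seg & active_inputs) >= segment_threshold for seg in segments)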

I think the modeling approach to AI only works if you get all the details right, but neuroscience isn’t there yet. There are exciting new experimental techniques based on genetics, which might develop to the point where that approach works, or might not.

2 Likes

It is not so much about the details. Perhaps it is more about not treating the whole in isolation from the parts.

In HLC @Falco raised a good point regarding the inhibitory aspect of the minicolumn (what we are calling the microcolumn), where only the cells in the predictive state fire, thus modelling inhibition within the minicolumn of the other potentially active cells. This still lacks the dynamics that inhibitory neurons might add when learning. It would be interesting to know the justifications. Maybe it is just too hard to imagine a workable minicolumn circuit with those dynamics.

“all models are wrong, but some are useful”

We do know, from modeling studies, that excitatory cells compete to activate and inhibitory cells control the number of cells that do activate. The K-Winners algorithm implements this competition to activate.

Those same modeling studies demonstrate all sorts of other interesting properties which the K-Winners algorithm will never have. But they also all demonstrate the same basic competition to activate, which the K-Winners does do.
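
As a toy illustration of that equivalence (entirely my own, with made-up parameters), a global inhibitory signal that tracks total activity settles on roughly the same set of cells that the K-Winners shortcut picks in one step:

    import numpy as np

    def inhibitory_competition(excitation, target_active=40, gain=1e-4, steps=500):
        # Toy E/I competition: inhibition rises while too many cells are active
        # and falls while too few are, until ~target_active remain above zero.
        inhibition = 0.0
        active = excitation > 0.0
        for _ in range(steps):
            active = (excitation - inhibition) > 0.0
            inhibition += gain * (active.sum() - target_active)
        return np.flatnonzero(active)

    rng = np.random.default_rng(0)
    scores = rng.random(2048)
    winners = inhibitory_competition(scores)   # settles near the top ~40 scores
    k_winners = np.argsort(scores)[-40:]       # K-Winners computes this directly
    print(len(winners), len(set(winners) & set(k_winners)))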

2 Likes

This is not the suggestion. It is more about noting which assumptions are being made and why they are being made. Assumptions that don’t have a rationale might be the most likely candidates for revision.

Perhaps one of the reasons there is a complexity “science” is that isolating the system into simple parts has proven not to be an effective approach for complex systems. Of course, it might be the best we can do, and it does collect data. Ideally there are principles operating at the level of the system that can be identified, and this then gives another set of constraints for arriving at a coherent theory.

Rather than looking more to the details, there may be value in looking to the system-wide properties. An example would be Buzsáki’s work.

HTM is still a model. A model can be at different levels of abstraction.

1 Like

It would be good to know what those properties are and why they can be ignored while expecting the system to function as if they are present.

I assume that the inhibitory neurons are not only connected locally, but I’m not sure. If the inhibitory neurons have the equivalent of apical dendrites, then it would seem important for dealing with hierarchy…

To start with, this is a good general review of inhibitory neuron types and their basic properties.
It’s paywalled, so you’ll need to go through the hub-of-science.

Interneurons of the Neocortical Inhibitory System
Markram et al., 2004, doi:10.1038/nrn1519


Some of the search terms for the properties are:

  • Excitatory / inhibitory balance
  • Criticality and stability/instability
  • Asynchronous irregular activity

1 Like

An interesting article on the topic, and mercifully short.

Enhanced responsiveness in asynchronous irregular neuronal networks

Zahara Girones and Alain Destexhe

Networks of excitatory and inhibitory neurons display asynchronous irregular (AI) states, where the activities of the two populations are balanced. At the single cell level, it was shown that neurons subject to balanced and noisy synaptic inputs can display enhanced responsiveness. We show here that this enhanced responsiveness is also present at the network level, but only when single neurons are in a conductance state and fluctuation regime consistent with experimental measurements. In such states, the entire population of neurons is globally influenced by the external input. We suggest that this network-level enhanced responsiveness constitutes a low-level form of sensory awareness.

[1611.09089] Enhanced responsiveness in asynchronous irregular neuronal networks


Yes, they often fall into the category of “spiking neural networks”.
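
For a flavour of what such a network looks like, here is a minimal leaky integrate-and-fire sketch with separate excitatory and inhibitory populations (made-up parameters, loosely echoing the stronger inhibitory synapses mentioned in The Spike quote above; not from the Girones and Destexhe paper):

    import numpy as np

    rng = np.random.default_rng(1)
    n_exc, n_inh, steps = 400, 100, 1000
    n = n_exc + n_inh

    weights = (rng.random((n, n)) < 0.1) * 0.4   # sparse random connectivity
    weights[:, n_exc:] *= -4.0                   # inhibitory inputs: negative and a
                                                 # few-fold stronger, as in cortex
    v = np.zeros(n)                              # membrane potentials
    spikes = np.zeros(n, dtype=bool)
    rates = []

    for _ in range(steps):
        drive = weights @ spikes + rng.normal(2.0, 1.0, n)  # recurrent + noisy external input
        v = 0.9 * v + 0.1 * drive                           # leaky integration
        spikes = v > 1.0                                    # threshold crossing
        v[spikes] = 0.0                                     # reset after a spike
        rates.append(spikes.mean())

    print("mean fraction of cells spiking per step:", np.mean(rates))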

2 Likes

Do you have an example in mind? It seems that https://vogelslab.org is a key player in inhibitory plasticity.

Yesterday, in a webinar on neuromorphic computing, the remark was made that DNN-style point neurons allow for negative weights, so this could be seen as explicitly modelling inhibitory plasticity.
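
For instance (a sketch of the general idea with assumed shapes; nothing here is from the webinar), an ordinary layer mixes signs freely, while a Dale’s-law-constrained variant pins each presynaptic unit to one sign, making the excitatory/inhibitory split explicit:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pre, n_post = 64, 32

    # Standard DNN point neurons: any weight can be positive (excitatory-like)
    # or negative (inhibitory-like), and both change freely during learning.
    w_free = rng.normal(0.0, 0.1, (n_post, n_pre))

    # Dale's-law variant: each presynaptic unit is all-excitatory or
    # all-inhibitory; learning adjusts magnitudes, the per-unit sign is fixed.
    sign = np.where(rng.random(n_pre) < 0.8, 1.0, -1.0)  # ~80% excitatory units
    w_dale = np.abs(rng.normal(0.0, 0.1, (n_post, n_pre))) * sign

    x = rng.random(n_pre)
    print((w_free @ x).mean(), (w_dale @ x).mean())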