Silence for thought: Special interneuron networks in the human brain

Helmstaedter and his team have discovered that human cortical networks have evolved a novel neuronal network type that is essentially absent in mice. This neuronal network relies on abundant connections between inhibitory interneurons.
“This suggests to us an almost ten-fold expansion of an interneuron-to-interneuron network”, says Sahil Loomba, one of the studies’ lead authors.

“Interneurons make up about a fourth to a third of cortical nerve cells and behave in a very peculiar way: they are highly active, yet not to activate other neurons but to silence them. Just like kindergarten caretakers, or guards in a museum: their very laborious and highly energy-consuming activity is to keep others peaceful and quiet”, explains Helmstaedter.

Theoretical work has suggested that such networks of silencers can prolong the time over which recent events can be kept in the neuronal network: expand the working memory.
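The "silencing the silencers" idea can be illustrated with a toy rate model. This is only a sketch under my own assumptions: the units, weights, decay constants, and threshold below are illustrative, not taken from the paper. An excitatory unit E is silenced by an interneuron I1; a second interneuron I2 silences I1, which lets a transient input linger in E for longer.

```java
// Toy rate model (illustrative parameters, not from the paper):
// E is excitatory, I1 inhibits E, I2 inhibits I1 (the I-to-I connection).
public class Disinhibition {
    // Number of steps E stays above threshold after a single input pulse,
    // for a given I2 -> I1 weight wII (wII = 0 removes the I-to-I network).
    public static int persistenceSteps(double wII) {
        double e = 1.0, i1 = 0.0, i2 = 0.0; // pulse delivered to E at t = 0
        int steps = 0;
        for (int t = 0; t < 1000 && e > 0.01; t++) {
            double eNext  = Math.max(0, 0.9 * e - 0.5 * i1);           // E decays, inhibited by I1
            double i1Next = Math.max(0, 0.8 * i1 + 0.4 * e - wII * i2); // I1 driven by E, silenced by I2
            double i2Next = Math.max(0, 0.8 * i2 + 0.3 * e);           // I2 driven by E
            e = eNext; i1 = i1Next; i2 = i2Next;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println("without I-to-I: " + persistenceSteps(0.0) + " steps");
        System.out.println("with I-to-I:    " + persistenceSteps(0.6) + " steps");
    }
}
```

With the I-to-I weight switched on, I2 suppresses I1, inhibition on E weakens, and the pulse persists noticeably longer, which is the working-memory intuition in miniature.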

“In fact, it is highly plausible that a longer working memory will help you deal with more complex tasks and expand your ability for reasoning.

And last but not least: none of today’s main AI methods uses such interneuron-to-interneuron networks”, says Helmstaedter.


Closer source article (but still no paper):


Twitter thread by Helmstaedter


https://www.science.org/doi/10.1126/science.abo0924
(Behind paywall)


There is a function that seems to be fundamental to cognition: computing the difference, or gradient, between inputs. Fine-grained discrimination is the essence of intelligence. Excitatory neurons can’t do this on their own, and even conventional neuron-to-interneuron interactions don’t seem to do it well. Maybe this is done by those interneuron-to-interneuron interactions?
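The difference-computing idea has a classic concrete form in lateral inhibition, where each unit subtracts the average of its neighbours, so uniform input cancels and only differences survive. A minimal sketch (the weights and the rectification are my own illustrative choices):

```java
// Sketch of lateral inhibition: each unit subtracts the mean of its two
// neighbours (edges are clamped), then rectifies. Uniform input yields
// zero output; only changes (gradients) in the input survive.
public class LateralInhibition {
    public static double[] respond(double[] x) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double left  = x[Math.max(i - 1, 0)];
            double right = x[Math.min(i + 1, x.length - 1)];
            out[i] = Math.max(0, x[i] - (left + right) / 2.0); // rectified difference
        }
        return out;
    }

    public static void main(String[] args) {
        // A step edge: flat regions give 0, the edge location gives a peak.
        double[] r = respond(new double[] {1, 1, 1, 5, 5, 5});
        System.out.println(java.util.Arrays.toString(r));
    }
}
```

For the step input above, only the unit at the edge responds; the flat regions are silenced, which is exactly the gradient computation the post asks about.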


This could be revolutionary, yet they contain it behind a paywall. Go figure.


This is the (non-paywalled) paper that did the original measurements (Bakken et al., Nature 2021) mentioned in the Twitter posts.


The human brain could potentially be a truly massive associative memory, with 100 trillion synaptic weights available. I don’t think anyone has filled up a massive associative memory with data and then tested how well it can generalize.
It could be tried:
https://archive.org/details/sparse-am
blog
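One simple associative memory that could serve as a starting point for such an experiment is a Willshaw-style sparse binary memory. This is a sketch of the general technique, not the linked project; the sizes and patterns are illustrative. Storing a key-value pair sets a weight for every active (key bit, value bit) combination; recall sums matched cue bits per value bit and thresholds at the cue size.

```java
// Minimal Willshaw-style sparse binary associative memory (a sketch).
public class SparseAM {
    private final boolean[][] w; // w[valueBit][keyBit]

    public SparseAM(int n) { w = new boolean[n][n]; }

    // Hebbian-style storage: clip weights to 0/1.
    public void store(int[] keyBits, int[] valueBits) {
        for (int j : valueBits)
            for (int i : keyBits)
                w[j][i] = true;
    }

    // A value bit turns on only if every active cue bit supports it.
    public boolean[] recall(int[] cueBits) {
        boolean[] out = new boolean[w.length];
        for (int j = 0; j < w.length; j++) {
            int sum = 0;
            for (int i : cueBits) if (w[j][i]) sum++;
            out[j] = sum >= cueBits.length;
        }
        return out;
    }

    public static void main(String[] args) {
        SparseAM m = new SparseAM(16);
        m.store(new int[] {0, 5, 10}, new int[] {1, 6, 11});
        m.store(new int[] {2, 7, 12}, new int[] {3, 8, 13});
        // Recall from a partial cue (two of the three key bits).
        boolean[] r = m.recall(new int[] {0, 5});
        System.out.println(java.util.Arrays.toString(r));
    }
}
```

Even with a partial cue, the full stored value pattern is recovered, while the other stored pair stays silent; scaling this up and measuring generalization is the untested experiment the post describes.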


One idea is that the real world is ultimately of limited complexity, and rather than using deep neural networks to compactly compute that complexity, you simply overwhelm it with memory. Especially if sparsity can help, as some Numenta papers suggest or show. I could have used faster vector look-up sparsity in the associative memory rather than individual-weight sparse look-up, but I had ideas about information retention in the sparsification process and was looking to get useful sparse behavior.
I’ll maybe do some Java code to make things clearer.
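In that spirit, here is a Java sketch of the most common sparsification step, k-winners-take-all, which is the kind of sparsity the Numenta papers discuss: keep the k largest components and zero the rest, so later look-ups only touch a few active entries. The vector and k below are illustrative, and tie-breaking (keep the first k) is my own choice.

```java
import java.util.Arrays;

// Sketch of k-winners-take-all (kWTA) sparsification.
public class KWTA {
    public static double[] sparsify(double[] x, int k) {
        double[] sorted = x.clone();
        Arrays.sort(sorted);
        double cutoff = sorted[x.length - k]; // k-th largest value
        double[] out = new double[x.length];
        int kept = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] >= cutoff && kept < k) { // on ties, keep the first k
                out[i] = x[i];
                kept++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sparsify(new double[] {0.1, 0.9, 0.3, 0.7}, 2)));
    }
}
```

Information retention is the trade-off mentioned above: the zeroed components are lost entirely, which is what a smarter sparsification process would try to mitigate.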


This sounds like an ORCH-OR thing to me. Assuming it is valid, there is a massive quantum computing network (far larger than the cellular neural network of the cortex), not just a memory, buried in the dendritic spines. Bonus: it is linked to consciousness.

Where it takes two neurons to do the XOR tango, just one microtubule can compute it, like a gate.
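For reference, the "two neurons" version of XOR looks like this: a single layer of threshold units cannot compute XOR, but one hidden unit (an AND detector) plus one output unit can. The weights and thresholds below are the standard textbook construction, not anything microtubule-specific.

```java
// XOR with two threshold neurons: a hidden AND detector plus an output
// unit that computes OR minus the AND case.
public class XorNeurons {
    static int step(double v) { return v > 0 ? 1 : 0; }

    public static int xor(int a, int b) {
        int hidden = step(a + b - 1.5);        // fires only for (1, 1)
        return step(a + b - 2 * hidden - 0.5); // OR, vetoed by the AND unit
    }

    public static void main(String[] args) {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                System.out.println(a + " XOR " + b + " = " + xor(a, b));
    }
}
```

The claim in the post is that a single microtubule could replace this two-unit circuit; the sketch just makes explicit what the neural version requires.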


Feynman was heavily involved in the Connection Machine:
https://longnow.org/essays/richard-feynman-connection-machine/
Honestly he should have continued with physics instead of taking time out for that.
Likewise with Penrose.
Anyway, quantum effects by themselves do not imply useful computation. Look at the complexity of current quantum computers to see what is involved in getting useful compute out of such systems.
I suppose you are, to an extent, calling out magical thinking, where I suppose that throwing enough real-world data into a large enough associative memory will carve smooth interpolation pathways and generalization capacity within that memory. All I say is: that is a thing that may be tried.


No. Suppose instead that the cortex is indeed an associative memory that computes.

The architecture of this computer is neural.

Within that computational structure is another computer.

The architecture of this computer is quantum.

The QC gives rise to conscious awareness.

My current theory is that the computational structure and extent of the neural computer is not powerful enough to support conscious awareness. Given the scaling, the QC is orders of magnitude beyond the NC in computational power and has quantum entanglement that permits instantaneous information exchange within the network that effortlessly solves the binding problem. Just seems to make scary sense to me, no magic involved.


For as much as I love those kinds of theories, Penrose’s ORCH-OR still sounds a bit too much like a theory of magic, since it explains no functional aspect.

To me it sounds more like “oh, brains are powerful and do stuff we can’t explain, so it must be QUANTUM™, so that it is able to do magic to get answers”.

I mean, even if you explain how tubulin could store quantum states and run a quantum-state game of life, how does that translate into useful functionality? How can quantum states stored on dendrites affect the timing of spikes in just the right way to make you smarter? And why does it imply consciousness?

And most importantly, how can I use it to cast real magic and predict billions of parallel universe timelines like Doctor Strange?


It’s pure obscurantism, like almost all talk about CONSCIOUSNESS!!!
Some people just want to feel special :).


Any sufficiently advanced technology is indistinguishable from magic.


Can we please ban magic here?


Don’t get me wrong, I love these kinds of out-of-the-box theories, but only if they lead us somewhere. ORCH-OR is a bit far-fetched, but its implications are interesting enough that it deserves attention.

Especially, it needs to be elaborated on. Most theories start handwavy like this, but they can’t stay in that state forever, otherwise they are no better than conspiracy theories.

More magic please, but a skeptically viewed demonstration will be required.


Honestly, I hate the fact that the closest thing to magic I know of is backprop.

In hindsight, the whole HTM thing is a bit like that too: despite being able to store sequences, it has not been elaborated much. The best it can currently do is tell whether a certain scalar value is typical for the day of the week.


Transformers/attention heads are pretty close to magic!
