Error correction abilities of brain-inspired recurrent systems

A general question - are there any neuro/brain-inspired systems that incorporate recurrence? Have they been properly studied? How resistant are SDRs to errors when going through multiple orders of recurrence?

1 Like

I think you are thinking about thalamo-cortical loops, which mediate cortical feedforward and feedback: https://en.wikipedia.org/wiki/Thalamocortical_radiations. Numenta was exploring them, but I don’t think it’s in the code yet. The thalamus is quite complex. But the whole process is so stochastic that I don’t think “error correction” is a valid concept here.

1 Like

To say the least! I’m of the view that most of the brain is evolved dynamic networks with emergent or chaotic behaviour. Which means you can study as much as you like: there is no aha moment, no underlying principle, no point at which you suddenly find ‘the answer’. Just lots of details and lots of asking how does it do that, and why?

But cortex is different, because columns. The internals of columns are one thing, but the fact that columns scale with intelligence implies an organising principle between columns, and HTM is a peek into that world.

3 Likes

Since any input synapse can be triggered by any other cell’s activation at the previous time step, Temporal Memory is itself a brain-inspired recurrent network.

About its error correction abilities… I assume there are quite a few examples.

PS Otherwise, a recent, promising development in RNNs/reservoirs is anything related to Legendre Memory Units, aka LMUs: https://proceedings.neurips.cc/paper/2019/file/952285b9b7e7a1be5aa7849f32ffff05-Paper.pdf

1 Like

You might look into Hopfield networks, which are simple NNs from the '80s, and they’ve been extensively characterized.
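
For reference, here’s a minimal sketch of the classic binary Hopfield recipe (Hebbian outer-product weights, sign-threshold updates) recovering a stored pattern from a corrupted cue. The pattern count, size, and noise level are made up for illustration, and this uses synchronous updates for simplicity where classic analyses often use asynchronous ones:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous sign-threshold updates until the state settles (or steps run out)."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))   # three random 100-unit patterns (well below capacity)
W = train_hopfield(patterns)

cue = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)  # corrupt 15 of the 100 bits
cue[flip] *= -1

recovered = recall(W, cue)
print("bits still wrong:", int(np.sum(recovered != patterns[0])))  # usually 0 at this load
```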

1 Like

Ah, I think I phrased my question badly. I was wondering if anyone had rigorously studied how well SDRs can recover from errors (some encoding mechanisms, like Hamming codes, make it easy to locate errors), and whether anything like that might be implemented in the neocortex.

Errors would arise if the neocortex processes signals recursively, since with each iteration a slight error would be propagated and would have to be handled. So I’m wondering if there’s already some literature covering that…

1 Like

There’s literature about how well SDRs handle noise in general, but I don’t think anything about errors arising from recursion specifically.

SDRs handle noise pretty well, and a little noise shouldn’t propagate much over time, because it’s unlikely to change which neurons (or dendritic segments) are suprathreshold at the next timestep. It doesn’t really matter if there’s a little noise, since neurons are inherently noisy. Maybe noise could build up over time, but that’s probably not an issue if “over time” means over the course of sensory integration, since it’s getting external inputs.
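
As a rough illustration of that point (not any particular published result), here’s a sketch using typical HTM-ish numbers, all of which are assumed: 2048 bits, 40 active, and an overlap threshold of 20. Even after moving a sizeable fraction of the active bits, the overlap with the stored SDR stays above threshold, so the downstream “match” decision doesn’t change:

```python
import numpy as np

rng = np.random.default_rng(1)
n, w = 2048, 40     # SDR size and number of active bits (assumed, typical HTM values)
theta = 20          # overlap threshold for counting two SDRs as a match

stored = set(rng.choice(n, size=w, replace=False).tolist())  # indices of active bits

def add_noise(active, num_moved):
    """Return a copy of `active` with `num_moved` active bits moved to random positions."""
    noisy = set(active)
    dropped = rng.choice(sorted(noisy), size=num_moved, replace=False)
    noisy -= {int(i) for i in dropped}
    while len(noisy) < w:
        noisy.add(int(rng.integers(n)))
    return noisy

for num_moved in (0, 5, 10, 15):
    noisy = add_noise(stored, num_moved)
    overlap = len(stored & noisy)
    print(num_moved, "bits moved -> overlap", overlap, "match:", overlap >= theta)
```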

There are plenty of recurrent connections in the brain, but they’re in some very complicated circuits, so it really depends on what’s going on.

In temporal memory and TBT’s output layer, there are recurrent connections, but they don’t really cause cells to fire. Kinda the opposite actually. They select subsets of cells to fire, whereas the whole set of cells would fire without the recurrent connections.

(The neuroscience reason is that the most strongly activated cells fire earliest, activating inhibitory circuits and preventing the less strongly activated cells from firing.)

In temporal memory, recurrent connections select cells in active minicolumns to fire, usually just one cell per minicolumn. If a minicolumn has no cell receiving the recurrent signal (no cell has a suprathreshold dendritic segment), all cells in the minicolumn fire.
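
A toy sketch of that selection rule (the cell count and names are made up, and the actual Temporal Memory implementations in NuPIC/htm.core are more involved): predicted cells fire, and a minicolumn with no predicted cells bursts.

```python
def activate_cells(active_minicolumns, predicted_cells, cells_per_column=32):
    """Pick which cells fire in each active minicolumn.

    predicted_cells: set of (column, cell) pairs whose recurrent (distal)
    segment crossed threshold on the previous timestep.
    """
    active_cells = set()
    for col in active_minicolumns:
        predicted_here = {(col, c) for c in range(cells_per_column)
                          if (col, c) in predicted_cells}
        if predicted_here:
            # Recurrent input selects a subset (usually one cell per minicolumn).
            active_cells |= predicted_here
        else:
            # No prediction: the minicolumn "bursts" and every cell fires.
            active_cells |= {(col, c) for c in range(cells_per_column)}
    return active_cells

# Minicolumns 3 and 7 are active; only minicolumn 3 has a predicted cell.
print(len(activate_cells({3, 7}, {(3, 12)})))  # 1 predicted cell + 32 bursting cells = 33
```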

In TBT’s output layer, only the cells which fired in the prior timestep will fire as a result of the recurrent connections.

In both cases, there’s a bunch of cells which are going to fire, except the recurrent connections select a subset and only those fire. It’s about reducing ambiguity (ambiguous states are represented by unions of SDRs).
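
Abstractly, both cases boil down to something like this sketch (the indices and names are invented for illustration): the recurrent input intersects the set of cells that were going to fire anyway, and if nothing is supported the whole ambiguous union stays active.

```python
def select_subset(candidate_cells, recurrently_supported):
    """Recurrent connections pick a subset of the cells that were going to fire.

    If no candidate has recurrent support, fall back to the whole
    (ambiguous) candidate set, which is a union of the possible SDRs.
    """
    selected = candidate_cells & recurrently_supported
    return selected if selected else candidate_cells

object_a = {1, 5, 9, 42}          # SDR for interpretation A (made-up indices)
object_b = {2, 5, 17, 88}         # SDR for interpretation B
ambiguous = object_a | object_b   # union: both interpretations still possible

# Recurrent input (e.g. the prior timestep's activity) supports interpretation A.
print(select_subset(ambiguous, object_a))    # -> {1, 5, 9, 42}
print(select_subset(ambiguous, {200, 300}))  # -> the full union (no support, stays ambiguous)
```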

It’s probably different from the kinds of recurrence in machine learning.

4 Likes