Can the brain do backprop?

I’ve been watching Geoff Hinton’s talk at Stanford, “Can the brain do backprop?”. As I assume most of you know, the backpropagation algorithm is the backbone of pretty much all ANN-based machine learning methods: an error signal is propagated backwards through an ANN to change its parameters.

Geoff Hinton believes the brain could be doing backpropagation, and he has an interesting theory on how neurons can simultaneously represent both their output value and an error derivative.

Basically, he says that the output of a neuron is represented by the frequency of its spikes, and that the derivative of this frequency is used for backpropagation. So a neuron can propagate an error for a short time by decreasing or increasing its spike frequency. He also says that representing a real-valued variable with a Poisson distribution of spikes is actually better than standard backprop.
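To make that concrete, here is a minimal sketch of the idea as I understand it (my own toy code, not from the talk; the rates, window, and time step are arbitrary assumptions): a value is encoded as a Poisson firing rate, and a brief rate shift lets a downstream observer read off an error as a change in rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_count(rate_hz, window_s=1.0, dt=0.001):
    """Sample Poisson-like spikes over one window and return the count."""
    n_steps = int(window_s / dt)
    return int((rng.random(n_steps) < rate_hz * dt).sum())

baseline_rate = 50.0  # the neuron's output value, encoded as a firing rate (Hz)
error_signal = -5.0   # hypothetical error to propagate, as a brief rate shift (Hz)

# During the "backward" phase the neuron briefly shifts its rate.
before = spike_count(baseline_rate)
after = spike_count(baseline_rate + error_signal)

# A downstream observer reads the error off as the change in rate.
estimated_error = (after - before) / 1.0  # counts per 1 s window -> Hz
print(f"spike counts {before} -> {after}; estimated error ~ {estimated_error:.0f} Hz")
```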

This fits well with HTM, since we know that a change in a property (like a change in position) is represented with separate neurons, rather than having the same neuron encode both the position and its change. The idea is that the change in spike frequency encodes the propagated error, not a change in the property itself.

If we consider only binary activations (as in current HTM), then we can use the value of a neuron entirely for backprop: a neuron holds a real-valued variable, it is considered active if that value is above a certain threshold, and the value itself is the derivative of the error.
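A toy sketch of that scheme (my interpretation, with made-up numbers): threshold the real value to get the binary HTM-style activation on the forward pass, and reuse the same slot to carry the error derivative on the backward pass, gated by which units were active.

```python
import numpy as np

THRESHOLD = 0.5

# Forward pass: real-valued potentials, binarized for the HTM-style activation.
potentials = np.array([0.2, 0.7, 0.9, 0.4])
active = potentials > THRESHOLD  # binary activations, as in current HTM

# Backward pass: reuse the same value slot to carry the error derivative.
# Hypothetical gating: inactive units pass no error, like a step/ReLU gate.
upstream_error = np.array([0.1, -0.3, 0.05, 0.2])
propagated_error = np.where(active, upstream_error, 0.0)
print(active, propagated_error)
```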

I wonder if there’s biological evidence for this hypothesis, and if so, how critical it is to the CLA (it seems crucial).

2 Likes

I watched this video some time ago (I’ve forgotten a lot of the details). At the time I was closed-minded about the idea that the cortex does back-propagation learning. Now I’m a bit more open.

Do you know if he is talking about the cortex, or another part of the brain?

They try to justify the biological plausibility of back-prop in the cortex here.

1 Like

Only rewards and anti-rewards can be back-propagated in an unsupervised system, because you do not need an outside supervisor to train you that it hurts when someone is cutting off your arm. I am serious about this; it is no joke.
The other form of back-propagation is for adjusting the clarity of the transmission lines of nerves. This involves two transmission lines together: the output of the first is sent back to the beginning of the first transmission line, the input data is compared to the back-propagated data, and then the weights are adjusted until both are equal.
“A transmission GAN”

More of a daisy-chain loop of nerves.

Yes, this is what I was thinking last time I watched it. E.g., if V4 were learning a face feature via back-propagation, then the face feature must already exist somewhere in the brain (in order to generate the error), which makes learning it pointless if it is already known.

As far as I understand it, the cortex learns just fine using unsupervised methods. It does use feedback, but from features the cortex has already learned through feed-forward learning.

The only place where back-prop makes sense is in the motor/action regions. This makes sense because a goal is known (so there is something to compare against to generate an error). Through trial and error the brain compares sensory input with expected output (compares the trial with the goal). This is already a theory of how the cerebellum learns motor control. Maybe that’s what they mean :wink:

I’ll watch it again tonight and there’s probably a lot I’ve missed/forgotten.

@keghn_feem @sebjwallace

He explains in the video how you don’t really need an artificial supervisor signal in order to do backprop. Some examples are autoencoders, GANs, etc., where you generate an error signal from the input data itself rather than from an external supervisor.
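For example, a bare-bones linear autoencoder (a toy sketch; sizes, learning rate, and iteration count are arbitrary) generates its error signal entirely from the input, because the input is also the target:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 4))  # a batch of inputs; the input is also the target

# One-hidden-layer linear autoencoder with toy dimensions.
W_enc = rng.normal(0, 0.1, (4, 2))
W_dec = rng.normal(0, 0.1, (2, 4))

for _ in range(200):
    h = x @ W_enc      # encode
    x_hat = h @ W_dec  # decode
    err = x_hat - x    # the error signal comes from the data itself
    # Plain gradient descent on the reconstruction loss; no external labels.
    grad_dec = h.T @ err / len(x)
    grad_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= 0.1 * grad_dec
    W_enc -= 0.1 * grad_enc

x_hat = (x @ W_enc) @ W_dec
print(float(np.mean((x_hat - x) ** 2)))  # reconstruction error shrinks with training
```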

Seems to me that in HTM we also have an error signal, namely the overlap in SDRs between prediction and reality. You use this information to tweak the model parameters.
I don’t think a learning model can exist without an error signal, and if one exists, backprop is an extremely good way of minimizing it.
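Something like this toy sketch (SDR width and indices are made up): the overlap between the predicted SDR and the actual SDR gives a natural scalar error.

```python
import numpy as np

n = 2048  # SDR width (made-up)
predicted = np.zeros(n, dtype=bool)
actual = np.zeros(n, dtype=bool)
predicted[[10, 42, 99, 512]] = True  # cells the model predicted
actual[[10, 42, 300, 512]] = True    # cells that actually became active

overlap = int(np.sum(predicted & actual))   # how much the prediction matched
mismatch = int(np.sum(predicted ^ actual))  # a crude scalar "error signal"
print(f"overlap={overlap}, mismatch={mismatch}")
```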

Yes, he’s referring to the neocortex.

The difference is that this is not feedback; it is just ordinary local Hebbian learning between pre- and post-synaptic neurons (cheap, local, simple and fast).

Hmm, back-prop would be overkill when learning can be done in a purely feed-forward fashion. The error/delta of a weight can be calculated and applied simply using, e.g., STDP.
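For reference, the classic pair-based STDP rule is entirely local; here is a sketch in its textbook form (amplitudes and time constant are assumed values), where the weight change depends only on the relative timing of one pre/post spike pair:

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012  # potentiation/depression amplitudes (assumed)
TAU_MS = 20.0                  # time constant in ms (assumed)

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pair (classic pair-based STDP)."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:  # pre fires before post: strengthen (causal pairing)
        return A_PLUS * np.exp(-dt / TAU_MS)
    return -A_MINUS * np.exp(dt / TAU_MS)  # post before pre: weaken

w = 0.5
w += stdp_dw(t_pre_ms=10.0, t_post_ms=15.0)  # pre leads post by 5 ms
w += stdp_dw(t_pre_ms=30.0, t_post_ms=22.0)  # post leads pre by 8 ms
print(w)
```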

Although feed-forward does this easily, I am curious what benefit back-prop provides.

Another place where you do not need back-propagation is when you generate a very small piece of data internally, then go looking for it in the data stream. If it exists, a reward is generated and the data is saved.

I understand, and I’m not saying it’s wrong. But there’s the possibility that the brain does backprop, and if it does, it’s crucial for HTM to model it.

Actually, he talks about STDP as positive evidence that the brain does backprop (t=36:45). Basically, STDP acts as a derivative filter.

My intuition is that backprop gives a more efficient credit-assignment rule, i.e. it answers: who is responsible for the error, and by how much?
I believe this is very much required as the size of the model increases.
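In backprop terms, the chain rule is exactly that credit-assignment rule: each weight’s gradient is its share of the blame for the output error. A two-weight toy example (made-up numbers):

```python
# Tiny two-weight chain: y = w2 * (w1 * x). All numbers are made up.
x, target = 1.0, 0.0
w1, w2 = 0.8, 1.5

h = w1 * x
y = w2 * h
err = y - target  # scalar output error

# Chain rule: each weight's gradient is its share of the blame.
grad_w2 = err * h       # w2 is blamed in proportion to the input it scaled
grad_w1 = err * w2 * x  # w1's blame is routed back through w2
print(grad_w1, grad_w2)
```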

1 Like

Neuroevolution can be just as good as back propagation.

Can the brain do neuroevolution? :wink:

3 Likes

How do we teach a machine to program itself? NEAT learning.

I remember Hinton saying once that gradient descent is essentially evolution. It tries many variations of weights, keeps those that are good, changes those that are bad. Search & selection.
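As a toy illustration of that search-and-selection picture (my own sketch, not Hinton’s formulation): perturb a weight at random and keep the change only when it improves the loss, and you still descend, just without gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return (w - 3.0) ** 2  # toy objective: the best weight is 3.0

w = 0.0
for _ in range(500):
    candidate = w + rng.normal(0, 0.1)  # try a variation ("mutation")
    if loss(candidate) < loss(w):       # keep it only if it is better ("selection")
        w = candidate
print(round(w, 2))  # drifts toward 3.0 without ever computing a gradient
```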

They are awesome. Genetic algorithms on neural networks (neuroevolution) can have a lot more going on. My favorite GA work is one where they do generative encoding of the genome (weights) by mimicking the idea of chemical gradients during embryonic development: HyperNEAT User's Page. These guys really went to town on neuro-evolution.

1 Like

???

There are other ways. Like, there could be a big, dense volume of neurons connected every which way, oscillating like a constant storm. A data stream is fed in from the real world, plus another channel that feeds back in. It is also paired with highly organized transmission lines. It works by searching through the well-ordered transmission lines for the answers; it just has to remember where to look, depending on the situation, before outputting into the real world, or back into the dense mass of chaotic nerves.
All the answers are already there. You just need to find them.
Is that what the internet is all about?

Reservoir computing?

1 Like

That doesn’t describe neuroevolution, and neither does reservoir computing. My question wasn’t actually serious (I was just poking fun at your deviation from the question at hand). But if you want a serious answer, then The Cerebral Code is probably the best theory for how a brain could hook into the process of evolution.

1 Like

No neuroscientist I’ve ever talked to has said this. Hinton himself said we should look for another way less than a year ago, didn’t he? Now he’s flipped? I’m confused.

Numenta does not pursue backprop because there’s no evidence of it in the brain (that we have seen, anyway). If we had some experimental neuroscience evidence of this, that’s another story. If anyone has any references to scientific papers providing evidence, please post them!

I don’t think it’s worth the time theorizing how HTM could perform backprop without solid evidence of its biological plausibility. We don’t consider implementing backprop in Numenta’s HTM model because we don’t see a need for it.

6 Likes

It’s a crucial question to ask for anybody who is totally and completely invested in modern-day deep learning, for sure. My problem is that even assuming the existence of a common cortical circuit in the neocortex, the brain consists of a myriad of highly coupled, unique functional and structural pieces that nearly all interact and thus at least influence each other’s function, if they are not inseparable from it. What exactly does Hinton mean to suggest? Does he mean to say cortex could be doing backprop? Where in cortex, exactly? That is to say, where in the cortical column and its connections to other brain regions does he suggest it is happening? Does it arise through a combination of structures? The number of possible manifestations of such a claim seems unwieldy at best.

Or is he suggesting every single neuron in the brain is constantly doing this brand of spike-frequency backprop? In that case, what about the many different functional and structural types of neurons in the CNS? Does that change anything? Also, not all neurons change their spike frequency in the same way. For instance, thalamic relay neurons have a special capability of shifting their mode of firing between tonic and bursting as a property of their T-type calcium channels. This switching is orchestrated by other components in the thalamocortical system, like the thalamic reticular nucleus, which provides inhibitory input to the relay neurons; this hyperpolarizes them and causes the inactivation gate of the T-type calcium channels to open, thus switching the firing mode from tonic to bursting. In contrast, the brainstem modulatory inputs release acetylcholine, which depolarizes the relay neurons and has the opposite effect. Input from cortex and its effect on the mode of firing in relay neurons is more complicated, and I’ll save the long explanation, but through processes known as facilitation and synaptic depression the switch of firing modes is believed to modulate the degree of detail and the manner in which information from the cortex ought to be relayed (tonic firing expresses a linear relationship between firing frequency and signal strength; bursting loses this signal-strength information but helps strongly reinforce relevant synapses quickly). I find it very unlikely that a topic as multi-form and complex as synaptic firing-rate modulation has an answer as simple as “because backprop…backprop everywhere.”

My problem with ANNs and their relationship to neuroscience is that they don’t actually have one. It seems to me the only detail linking ANNs to neuroscience is an extremely high-level concept of distinct computational units connecting to other distinct computational units and passing signals of some kind to each other. No further connection to neuroscience exists, whether in the details of the computational units themselves or in the architecture of their connections. So forgive me if I find searching for backprop in the brain to be almost silly.

Not to mention, there are many aspects of HTM that are also lacking (or even inconsistent) with respect to the neuroscience literature. Most obviously, neocortex comes in 6 layers…standard HTM networks have ventured to explain perhaps the function of layers 2/3. Layer 4 typically receives input from the thalamus, and layers 5 and 6 are widely known to project back to the thalamus and other sub-cortical structures; this functionality cannot be ignored. Moreover, HTM has yet to explain layer 1, which is arguably inseparable from the idea of multiple, distinct cortical areas of the same “brain” unit talking to each other (likely in a hierarchical processing fashion), which has yet to be realized to my knowledge. Different modes of firing are also not modeled in HTM networks. In fact, the whole temporal component of neuron firing rates is ignored, to my knowledge: an HTM neuron has either “fired” or “not fired” for a timestep, and that information is not carried forward with regard to whether it should or should not fire in the next timestep. Boosting could potentially make my statement false, but that is an effort to implement homeostatic excitability control, not realistic temporal firing characteristics, either way.

ANNs of any modern type perform a single function, defined by labeled data, which they approximate through nonlinear optimization techniques (backprop). Every “neuron” in the ANN has dedicated all its representational and computational resources to this function. In such an environment it makes sense that a comprehensible error signal can be generated and used, when your model has a singular, crystal-clear goal in mind (curve fitting). In my experience, it is never so clear-cut in the brain. As I mentioned before, structures in the brain generally have connections going in many directions and do all kinds of different things simultaneously. Goal-driven learning akin to backprop doesn’t make sense to me at this low a level, considering the breadth of different purposes to which each multi-polar neuron potentially contributes. In HTM, each dendritic branch is believed to be an independent pattern detector. HTM neurons (and real-life multi-polar neurons) have a large number of dendritic branches and can thus potentially recognize a large number of different patterns. This stands a chance in the end because of large, sparse pattern encoding. Biological (and HTM) neurons work together, but they do so independently of one another. In contrast, the characteristics (weights) of ANN neurons are dependent on the characteristics of every neuron in the ANN that comes before them. If you flip a weight in an ANN, it will impact the function of every neuron it talks to, which will impact the function of the neurons those talk to, cascading forward. The entire ANN has been decided with regard to the single optimization function. To my knowledge, there is no evidence in neuroscience of such a global supervisory influence on synaptic plasticity (one whose purpose, perhaps, is to perform some kind of optimization), and the existence of one is not consistent with the concept of neurons acting independently.
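To illustrate that cascading dependence with a trivial sketch (random numbers, arbitrary sizes): perturbing a single early weight in a dense network shifts every downstream output, because each layer feeds all of its values forward.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 4))

def forward(W1, W2):
    h = np.tanh(x @ W1)     # layer 1 activations
    return np.tanh(h @ W2)  # layer 2 activations

y_before = forward(W1, W2)
W1_perturbed = W1.copy()
W1_perturbed[0, 0] += 0.5   # change a single early weight
y_after = forward(W1_perturbed, W2)
print(np.abs(y_after - y_before))  # every downstream output shifts
```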

5 Likes

The video I saw was from 2016; I didn’t know he had changed his mind (I’d be glad to see a video/paper).

I fully agree: HTM is a biological theory and must follow empirical biological facts. I didn’t claim I had evidence for it, only that Hinton was theorizing about how the brain might do backprop and that he was not satisfied with the standard arguments against it.

2 Likes

Great discussion. Yes, Blake and Hinton are saying the brain does backprop. I myself will say there is no such thing as unsupervised learning. The brain learns to predict. The learning signal, the feedback, is the constant stream of input from the environment. I.e., if I do this, will my arm move? Holy cow, yes! Or: darn, it did not move, try something else.

I am not clear if we predict up the hierarchy or down the hierarchy. Isn’t prediction part of Jeff’s book On Intelligence?