Genetically determined connections

I’ve gotten so used to the idea from ML that synapses between neurons are weighted. After learning from Numenta’s research (and others’) that pyramidal neurons actually have binary weights, it got me thinking about non-cortical neurons and their synapses.

For all the genetically determined neural networks that do not implement learning, are their synapses weighted? I’ve always assumed they are, but a precise weight seems far too fragile for a brain that is both noisy and robust. Then again, I can’t imagine every synapse having a weight of 1.

From what I’ve read about the nervous system and the more primitive parts of the brain, I haven’t seen anything about the synapses being weighted. Is computation between neurons performed via synapses in some way other than weights?

Anyway, I was hoping someone might be able to shed some light on this for me.

If you look at the original STDP paper (Bi & Poo 1998), or any paper that measures postsynaptic potentials, it’s pretty clear that synapses can be, and often are, weighted. They also come in a huge diversity of sizes, and size has some relationship to the evoked synaptic potential. This is true for pyramidal neurons too (Bi & Poo recorded from hippocampal pyramidal neurons); it’s just that we don’t know whether, or to what degree, the weight matters computationally.
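
For reference, the pair-based exponential STDP window commonly fit to data like Bi & Poo’s looks roughly like this (a minimal sketch; the amplitudes and time constant are illustrative, not the paper’s fitted values):

```python
import numpy as np

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair separated by dt_ms.

    dt_ms > 0: pre fires before post -> potentiation (LTP)
    dt_ms < 0: post fires before pre -> depression (LTD)
    Amplitudes and time constant are illustrative placeholders.
    """
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_delta_w(dt), 5))
```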

You get a lot of power even just from binary weights, as Numenta, Kanerva, and others have shown. So graded weights may be a sort of epiphenomenon emerging from the need for a permanence value.
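
For readers unfamiliar with the permanence idea: in HTM, each synapse stores a scalar permanence that learning nudges up and down, but only the thresholded binary connected/not-connected state enters the computation. A minimal sketch (the threshold and increments here are illustrative values):

```python
import numpy as np

CONNECTED_THRESHOLD = 0.5   # illustrative value

# One scalar permanence per potential synapse; the "weight" the
# computation actually sees is just the binary connected state.
permanences = np.array([0.1, 0.45, 0.55, 0.9])
connected = permanences >= CONNECTED_THRESHOLD   # [False, False, True, True]

def reinforce(perm, active, inc=0.05, dec=0.02):
    """Hebbian-style learning nudges permanence, not an analog weight."""
    return np.clip(perm + np.where(active, inc, -dec), 0.0, 1.0)

active_inputs = np.array([True, True, False, True])
permanences = reinforce(permanences, active_inputs)
print(permanences, permanences >= CONNECTED_THRESHOLD)
```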

However, a great many computational neuroscience models of pretty much every part of the brain make a lot of use of graded synaptic weights, and they do appear to be present in the brain, so I’d be surprised if evolution didn’t find a use for them.

We debate this often. Many synapses do exhibit learning that looks like weight change, yet there is a lot of data suggesting the weights are not that important.

First, when Hebbian learning was originally proposed, no one believed that new synapses were forming. It is now known that up to 40% of the synapses on a pyramidal cell can change on a daily basis; some synapses are permanent, while others come and go.

Second, individual synapses are stochastic: the amount of neurotransmitter released varies dramatically, and sometimes none is released at all. This says to me that any theory of cortical function can’t rely on the precision of a synapse’s weight. I once read a paper that studied Hebbian plasticity (sorry, I don’t recall the author); one of its observations was that Hebbian learning seemed to increase the “permanence” of the synapse more than the amount of transmitter it released. Combining all of these led to our model.
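
To make the stochastic-release point concrete, here is a toy single-synapse simulation (the release probability and quantal parameters are made-up numbers, not measurements): even with a fixed “weight”, the per-spike response is mostly failures and highly variable amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsp_amplitudes(n_spikes, p_release=0.3, quantal_mean=1.0, quantal_cv=0.3):
    """Per-spike response of one stochastic synapse (toy model).

    Each presynaptic spike releases a vesicle with probability p_release;
    when it does, the postsynaptic response is drawn from a noisy quantal
    distribution. All parameters are illustrative.
    """
    released = rng.random(n_spikes) < p_release
    amplitude = rng.normal(quantal_mean, quantal_cv * quantal_mean, n_spikes)
    return np.where(released, np.maximum(amplitude, 0.0), 0.0)

amps = epsp_amplitudes(10_000)
print(f"failures: {np.mean(amps == 0):.0%}")   # most spikes release nothing
print(f"mean amp: {amps.mean():.2f}, cv: {amps.std() / amps.mean():.2f}")
```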

Synapse weight may play a role, but from a modeling point of view I haven’t seen a need for it.

Might neurotrophins play the role of weight? My understanding is that, although neurotrophin chemical signaling is fundamental in embryological development (guiding the axonal growth cone), neurotrophins also play a key role in the adult brain (especially BDNF). My crazy hypothesis is that this is just a fault-tolerance mechanism: a particularly active synapse might induce axonal growth and the formation of “parallel” synapses. But a synapse-“permanence” weight also seems plausible.

I remember you saying this ages ago in a video, and to me it makes perfect sense. When I look at a weight matrix and see these very specific weights (e.g., 0.4582), I just think the brain is way too messy and sloppy for such delicate specificity. Especially in the case of genetically determined synapses: does the genome really encode a specific weight for every synapse? That seems very unlikely.

The only way I can imagine weighting occurring in non-plastic systems is if each synapse is binary but the number of synapses represents the weight. If neuron A has 10 synaptic connections to neuron C while neuron B has 5, then of course A’s influence on C is stronger than B’s.

Just an idea: perhaps the reason there are so many synapses (with small amplitudes) on all non-plastic neurons is that it makes communication flexible and robust. If A→B had only 2 synapses and 1 were destroyed, the effective weight would drop by 50%. However, if A→B had 20 synapses, losing 1 would diminish the weight by only 5%.
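
The arithmetic of both points is easy to make explicit: if the effective weight between two cells is just the count of binary synapses, the fractional damage from losing one synapse is 1/n (a trivial sketch):

```python
def effective_weight(n_synapses):
    # Binary synapses: the A->B "weight" is just the synapse count.
    return n_synapses

for n in (2, 20):
    before = effective_weight(n)
    after = effective_weight(n - 1)          # one synapse destroyed
    loss = (before - after) / before
    print(f"{n} synapses: lose one -> weight drops {loss:.0%}")
# 2 synapses: lose one -> weight drops 50%
# 20 synapses: lose one -> weight drops 5%
```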

Keep in mind that even the “hardwired” connections develop through a combination of self-organizing mechanisms, from chemical gradients (see reaction-diffusion systems) to early plasticity. Even “non-plastic” synapses were almost certainly plastic during some critical period, given the fundamental limits of developmental precision.
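
As a toy illustration of the reaction-diffusion idea, here is a minimal 1D Gray–Scott simulation (all parameters are illustrative, taken from commonly used demo values): a uniform chemical field plus a small perturbation self-organizes into a stable spatial pattern, the kind of positional signal that could guide “hardwired” wiring.

```python
import numpy as np

# Minimal 1D Gray-Scott reaction-diffusion sketch (illustrative parameters).
n, steps = 200, 10_000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060
u = np.ones(n)
v = np.zeros(n)
v[n // 2 - 5 : n // 2 + 5] = 0.5   # local perturbation seeds the pattern

def laplacian(x):
    # Discrete Laplacian with periodic boundaries.
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(v.round(2))   # a stable non-uniform pattern emerges from near-uniform start
```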

Somewhere (I can’t remember where) there was a discussion of whether multiple synapses between the same two cells {A, B} have any significant computational effect. The temporal memory pseudocode in BAMI 0.5 assumes they don’t, and constrains the growth of new synapses to target only new cells.
How certain is this conclusion? Is there debate about it within Numenta?
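
For concreteness, here is a minimal, self-contained sketch of the kind of constraint being described, with made-up names (`Synapse`, `Segment`, `grow_synapses`); it illustrates the idea only and is not the actual BAMI 0.5 pseudocode:

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    presynaptic_cell: int
    permanence: float

@dataclass
class Segment:
    synapses: list = field(default_factory=list)

def grow_synapses(segment, winner_cells, n_desired, initial_permanence=0.21):
    """Grow up to n_desired new synapses, skipping presynaptic cells the
    segment already connects to, so no cell pair gets a duplicate synapse.
    (Illustrative sketch, not the actual BAMI pseudocode.)"""
    existing = {syn.presynaptic_cell for syn in segment.synapses}
    for cell in [c for c in winner_cells if c not in existing][:n_desired]:
        segment.synapses.append(Synapse(cell, initial_permanence))

seg = Segment([Synapse(3, 0.6)])
grow_synapses(seg, winner_cells=[1, 2, 3, 4], n_desired=2)
print([s.presynaptic_cell for s in seg.synapses])   # [3, 1, 2] -- cell 3 skipped
```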
