Experimental observations suggest that learning is performed mainly by neural dendrite trees, rather than solely through modifying the strength of synapses, as previously believed.
What kind of learning? I would have thought learning to avoid touching really hot things takes only one trial and is almost instantaneous, which seems too fast for rewiring dendrites.
And I would have thought learning a sequence (like a musical tune or dance steps) takes sequence memory like HTM, which doesn’t translate well into dendrites either. So what does?
The direct paper:
Synaptic plasticity is a long-lasting core hypothesis of brain learning that suggests local adaptation between two connecting neurons and forms the foundation of machine learning.
Brain learning or machine learning? What are they on about? And it gets worse, with ongoing confusion between biological function and software function.
I think I can skip this one.
They seem to have picked up on one property of dendritic modulation dynamics, maybe taken it in the wrong direction, and run up against biological implausibility. That said, at least they have recognised that dendritic modulation dynamics have a role to play that is maybe currently overlooked…
While the paper’s authors emphasize the “new” role played by the dendrite, I don’t think that completely eliminates the role of the individual synapses.
I proposed the basic mechanism outlined in the paper as “residue of experience” in prior posts, as an interaction between the synapse and the dendrite that facilitates learning:
" Here is where I will strike out on my own with a proposal based on various hints I have seen in the literature; as the learning occurs I propose that the metabolites accumulate in the cell body, more than likely in the dendrite. This is what I call the ROE (residue of experience) - or a chemical memory independent of direct modifications of the synaptic connection nodes. This is not essential to the proposed process but a possible enhancement."
Here is the study that they’re basing their claims on.
It was published by several of the same authors, in 2018.
Basically, they found that dendrites can learn.
Sorry, I find it hard to read. Do they know how they learn? Is it something asymmetrical about branching points?
The engram paper was quite interesting in relation to this.
Thought it was just me. Had to read it a few times, between sleeps, lol.
Their 0.047 error rate is achieved with a structure (10 separate 1:49:784 trees) that has the equivalent of 6.25% sparsity at the input layer (due to 16 non-overlapping encodings), vs 0.018 for a single fully connected structure (10:100:784). The number of weights in their 10-tree network is 8,330, while the fully connected one is nearly 10x larger. Not that different a result from the fly-hash encoder accuracy… Basically it’s a rationale that can do something.
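The weight counts can be sanity-checked with quick arithmetic (layer sizes taken from the figures above; the 16-inputs-per-hidden-unit split follows from the non-overlapping encoding):

```python
# Parameter counts for the two architectures compared in the paper,
# reconstructed from the layer sizes quoted above.

# Tree architecture: 10 separate trees, each 784 inputs -> 49 hidden -> 1 output,
# with non-overlapping input encoding (784 / 49 = 16 inputs per hidden unit).
inputs, hidden, trees = 784, 49, 10
per_tree = hidden * (inputs // hidden) + hidden  # 49*16 input weights + 49 output weights
tree_total = trees * per_tree
print(tree_total)  # 8330, matching the figure above

# Fully connected architecture: 784 inputs -> 100 hidden -> 10 outputs.
fc_total = 784 * 100 + 100 * 10
print(fc_total)  # 79400, nearly 10x more weights
```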
Their interesting bit is the nonlinear dendritic segment modulation, and the effect that synaptic proximity of pairs has on the gain function of a segment when they fire together. Judging from the training seen in 4c, the pair firing changes the timing dynamics and slows/delays the response (gain-channel narrowing), but also allows a weaker signal to fire (increase of the ion pool). Thus it may take only a close-proximity pair of synapses to fire a neuron… and the process is slowed down, which raises an offhand question: if this process is not slowed down, is that how one type of epilepsy occurs?
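One way to picture that pair effect is a toy segment model (purely illustrative; every number, name, and threshold here is invented, not taken from the paper): co-active synapses closer than some proximity radius add extra drive, so a weaker input can still fire the segment, but each such pair also adds delay to the response.

```python
def segment_response(active_positions, threshold=2.5, pair_radius=5.0,
                     pair_boost=1.0, base_delay_ms=1.0, pair_delay_ms=4.0):
    """Toy dendritic segment (illustrative numbers only).

    active_positions: positions (um) of co-active synapses on the segment.
    Adjacent synapses closer than pair_radius add pair_boost to the drive
    (a weaker input can fire) but also add pair_delay_ms to the latency
    (the response is slowed).
    Returns (fired, delay_ms).
    """
    pos = sorted(active_positions)
    pairs = sum(1 for a, b in zip(pos, pos[1:]) if b - a <= pair_radius)
    drive = len(pos) + pairs * pair_boost
    delay = base_delay_ms + pairs * pair_delay_ms
    return drive >= threshold, delay

# A close pair of synapses fires the segment, but slowly:
print(segment_response([10.0, 12.0]))  # (True, 5.0)
# The same two synapses far apart do not fire it:
print(segment_response([10.0, 60.0]))  # (False, 1.0)
```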
The interesting bit on the “training” is that the 4 ms (4d) to me looks like the effect that would occur in the 2-3 pulse burst in spindle waves… was this timing selection on purpose or just coincidence?
Biology creates and adapts the mesh (more like a tree with lots of ivy growing up it) structure dynamically, which also has other non-temporal loops involved.
They buried that in the methods section.
They modeled the dendrite learning using Hebbian learning and spike-timing-dependent-plasticity, using the same formula as synaptic learning.
They based this formula on measurements from pyramidal neurons grown in-vitro.
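For reference, the textbook pair-based STDP rule they build on has the exponential form below (the constants are illustrative defaults, not the values fitted from the in-vitro pyramidal-neuron measurements):

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change as a function of spike timing.

    dt_ms = t_post - t_pre. Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, with exponential decay in |dt|.
    Constants are illustrative, not the paper's fitted values.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(10.0))   # positive: potentiation
print(stdp_dw(-10.0))  # negative: depression
```

The point in the paper is that the same kind of update is applied to dendritic (anisotropic) adaptation, not just to individual synaptic weights.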
Thanks, but do they know what chemistry or structural changes are responsible?
They don’t confirm any chemical changes, but their hypothesis is that structurally it’s more about the distance between synapses within a dendritic segment, which allows a particular architecture to occur.
The real question that arises for me: if a given dendrite has, say, 10 potential locations to create a new synapse, why, how, or by what biological mechanism (potentiation) does the selection process prefer the segment that has just fired from another synapse? This is part of the timing effects, which are interlinked with the likes of spindle waves and memory compression/hierarchical formation.
To me, part of what’s covered in the paper is how the hierarchy is formed: it’s all about a particular aspect of timing, driven by the HC/EC phasing that occurs within sequences in the short-term buffer and by spindle waves when we sleep. It breaks all standard language patterns, but mathematically it seems to work.
Then it’s really just another form of synaptic learning, by relative location vs. simple number of synapses. They seem to have language problems…
I put Bkaz’s questions to the paper’s researcher, Ido Kanter:
Do you know what chemistry or structural changes are responsible for this dendritic learning?
The reply came today:
Thanks for your interest in our work. Some people speculate the reported phenomena are related to microtubules, however, it is beyond our experimental capabilities and expertise.
Like almost anything about the brain, once you know what to look for you find that there is research ready to be read. You just have to know to ask the question.
“For some time, it was thought that dendritic spines were devoid of dynamic MTs, and that actin was the main regulator of spine morphology and dynamics associated with synaptic plasticity. However, within the last decade, the use of new visualization techniques has revealed that MT dynamics do play an essential role in dendritic spine development (Yau et al., 2016; Dent, 2017). In concert with MT transport, MT polymerization actively occurs and contributes to the development of the dendritic branches.”
“Long-term potentiation (LTP), a stimulation protocol that mimics memory formation ex vivo and can modify memories in living mice (Nabavi et al., 2014), is affected by changes in MT dynamics.”
I hope this doesn’t mean we need to model MTs to use neurons as a functional model layer?
It may mean that adding dendrite structure allows HTM to outperform DL systems?
Just as we usually don’t need to model the chemistry and size of a synapse (we have a single weight variable), we may not need to model the internal dendritic structure, only the effects it may have on related synapses on a single dendrite.
I have always thought that dendrite paths could be an important part of a biologically faithful system, and have anticipated that with this post:
Note the mention of distance along the dendrite as a possible part of processing each dendrite structure.
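A minimal sketch of that idea, assuming an HTM-style segment where only synapses that are co-active within some distance window along the dendrite sum together (the class, window size, and threshold are all invented for illustration, not from the paper or Numenta’s code):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One dendritic segment: synapses are (position_um, weight) pairs.

    Instead of modeling internal chemistry, only the proximity effect is
    kept: co-active synapses within window_um of each other pool their
    weights; distant ones do not. Parameters are illustrative.
    """
    synapses: list = field(default_factory=list)  # (position_um, weight)
    window_um: float = 40.0
    threshold: float = 1.5

    def active(self, active_positions):
        # Slide a window along the dendrite; the segment fires if any
        # window of co-active synapses carries enough summed weight.
        act = sorted((p, w) for p, w in self.synapses if p in active_positions)
        for i, (p0, _) in enumerate(act):
            total = sum(w for p, w in act[i:] if p - p0 <= self.window_um)
            if total >= self.threshold:
                return True
        return False

seg = Segment(synapses=[(10.0, 0.8), (30.0, 0.8), (200.0, 0.8)])
print(seg.active({10.0, 30.0}))   # True: clustered pair pools past threshold
print(seg.active({10.0, 200.0}))  # False: same weights, too far apart
```

The design choice mirrors the single-weight treatment of synapses: distance along the dendrite becomes one extra coordinate per synapse, rather than a full structural model.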
I am guessing those MTs facilitate traffic in dendrites, maybe by making them straighter/shorter?
Or poking out new branches in active areas - like Numenta adding segments?