It seems that LTD (long-term depression) and LTP (long-term potentiation) are the most common mechanisms used by dendritic terminals to learn, and that these mechanisms are well understood in neuroscience.
Nevertheless, the learning rules used by HTM do not seem to follow this, especially LTD (which states that if the presynaptic neuron fires and the postsynaptic neuron doesn't, AMPA receptors are "reabsorbed" by the postsynaptic cell, reducing the "permanence" of the synapse). In HTM this is mostly ignored; perhaps distal punishment models it somehow. In proximal dendrites, however, there is no LTD at all. If my understanding is correct, the rule used is the opposite: weaken the synapses of cells that fire when the presynaptic cell doesn't. This gives better results for SP/L2/… (a wider separation between SDRs) but leaves many unused synapses scattered across the system. Under "scarce" memory conditions, being able to prune such useless synapses might be interesting.
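To make the contrast concrete, here is a minimal sketch of the two weakening rules for a single synapse. The function names and the increment/decrement constants are my own illustrative assumptions, not Numenta's actual parameters; the point is just which (pre, post) activity combination each rule punishes.

```python
# Hypothetical permanence increments/decrements (names assumed for
# illustration; not the actual SP parameter values).
PERM_INC = 0.05
PERM_DEC = 0.02

def clip(p):
    """Keep permanence in [0, 1]."""
    return min(1.0, max(0.0, p))

def htm_sp_update(perm, pre_active, post_active):
    """SP-style rule: only synapses on an ACTIVE (winning) column are
    touched; active inputs are strengthened, inactive ones weakened."""
    if not post_active:
        return perm  # losing columns are left untouched
    return clip(perm + (PERM_INC if pre_active else -PERM_DEC))

def ltd_update(perm, pre_active, post_active):
    """LTD-style rule: a presynaptic spike with no postsynaptic
    response weakens the synapse (LTP case included for contrast)."""
    if pre_active and not post_active:
        return clip(perm - PERM_DEC)  # LTD
    if pre_active and post_active:
        return clip(perm + PERM_INC)  # LTP
    return perm

# The two rules disagree exactly on the mixed activity cases:
print(htm_sp_update(0.5, pre_active=False, post_active=True))   # weakened
print(ltd_update(0.5, pre_active=False, post_active=True))      # unchanged
print(htm_sp_update(0.5, pre_active=True, post_active=False))   # unchanged
print(ltd_update(0.5, pre_active=True, post_active=False))      # weakened
```

As the last four lines show, the SP rule weakens the (pre inactive, post active) case that LTD ignores, and ignores the (pre active, post inactive) case that LTD weakens, which is why synapses onto permanently losing cells never decay under the SP rule.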
My question is: why is LTD not used by HTM? I guess someone has already looked into this. It seems that LTD has a high computational cost (though not in biology, since it is a local process), but I don't know whether there are other hidden problems.