Backwards learning rule?

I have an idea for an algorithm and just want to know if my suspicion is on the right track.

So there's Hebbian learning, where if the postsynaptic cell fires shortly after the presynaptic cell, the synapse gets strengthened.

But is there any set of neurons in the brain that learns via a backwards form of synaptic plasticity? That is, the synapse gets strengthened when the postsynaptic neuron fires first and the presynaptic neuron fires second?

The idea is not fleshed out enough yet for me to give details, but I have a feeling that this could be happening somewhere in the brain.
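To make the question concrete, here is a rough sketch of the standard pair-based STDP window next to the backwards version I have in mind (the amplitudes, time constant, and `reversed_rule` flag are arbitrary illustrative choices, not fitted to any circuit):

```python
import math

# Illustrative pair-based STDP constants (assumed values, not from data)
A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes
TAU = 20.0                     # timing-window time constant in ms

def stdp_dw(t_pre, t_post, reversed_rule=False):
    """Weight change for one pre/post spike pair.

    Standard (Hebbian) STDP: pre-before-post potentiates.
    reversed_rule=True mirrors the window, so post-before-pre
    potentiates instead -- the "backwards" rule asked about above.
    """
    dt = t_post - t_pre          # positive when pre fires first
    if reversed_rule:
        dt = -dt                 # flip the timing window
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)    # potentiation
    return -A_MINUS * math.exp(dt / TAU)       # depression

# Pre at 0 ms, post at 5 ms: strengthened under the standard rule,
# weakened under the reversed one.
print(stdp_dw(0.0, 5.0))                      # > 0
print(stdp_dw(0.0, 5.0, reversed_rule=True))  # < 0
```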

http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity#Diversity_of_STDP


As described by Jeff Hawkins, pyramidal cells are put into a predictive state before they actually fire. If the cell then fires due to feed-forward input (on its proximal dendrites), the synaptic connections on the distal dendritic segment(s) that made the prediction are reinforced via a back-propagating action potential, and they are weakened if no input arrives after a while.
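A minimal sketch of that rule as I understand it (this is my own simplification, not Numenta's actual implementation; the permanence deltas are arbitrary):

```python
# Simplified sketch of the rule described above: permanences on a distal
# segment grow when the cell's prediction is confirmed by proximal input,
# and decay when the prediction goes unconfirmed.
PERM_INC, PERM_DEC = 0.05, 0.02  # illustrative permanence deltas

def update_segment(permanences, active_presyn, predicted, fired):
    """permanences: dict mapping presynaptic cell id -> permanence in [0, 1].
    active_presyn: set of presynaptic cells active on the previous step.
    predicted: this cell was put in the predictive state by the segment.
    fired: the cell then actually fired from proximal input.
    """
    for cell, perm in permanences.items():
        if predicted and fired and cell in active_presyn:
            # confirmed prediction: reinforce the contributing synapses
            permanences[cell] = min(1.0, perm + PERM_INC)
        elif predicted and not fired:
            # prediction never confirmed: weaken the segment
            permanences[cell] = max(0.0, perm - PERM_DEC)
    return permanences
```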

I don’t know whether the above replaces Hebbian learning as a more complete description of what happens, or whether the two coexist.

Apart from that, I personally think the brain may do more than just prediction, at least in some other cells, with actual cell firing even without (sensory?) input. Example: if we close our eyes and cover our ears so there is no external input, we can still have thoughts, and they are not just “predictions”; there is lots of activity (including brain waves, etc.) without sensory input.

The current HTM system and similar sequence-memory implementations only process data when there is input… (correct me if I am wrong). But I understand that they are just a first step on a longer journey.

I think that problem can be solved if you allow a small portion of the predictive cells to “leak” and activate spontaneously.
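As a toy sketch of what I mean (the leak probability and the set-based cell representation are just illustrative assumptions):

```python
import random

# With no feed-forward input, a small fraction of predictive cells fires
# anyway, so activity (and further prediction) can continue without
# sensory drive. LEAK_P is an arbitrary illustrative value.
LEAK_P = 0.02

def step(predictive_cells, feedforward_active):
    if feedforward_active:
        # normal operation: predicted cells that also receive input fire
        return predictive_cells & feedforward_active
    # no input: let a few predictive cells activate spontaneously
    return {c for c in predictive_cells if random.random() < LEAK_P}
```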

Oh, so that is a yes.