Brains@Bay Meetup: Hebbian Learning in Neural Networks

This event is in Santa Clara, CA this Wednesday night at 6:30 PM. Please RSVP if you are showing up in person.

I am not certain this live-stream is going to happen because of several unknown complicating factors. But I am going to try my hardest.


I won’t be able to attend this event, but I would be interested in hearing your thoughts on anti-Hebbian learning in addition to classic Hebbian learning. It could make for a side discussion during the meetup :wink:

This paper offers a global overview of Hebbian and anti-Hebbian Spike Timing Dependent Plasticity (STDP):

I think that anti-Hebbian learning rules will be necessary to model the role of the apical dendrites of pyramidal neurons (not yet modeled in Numenta’s work, if I am correct). This kind of learning has been observed at L2/3 synapses onto L5 pyramidal neurons in the cortex.

It may reflect the fact that the L2/3 inputs onto those L5 cells play a modulatory role. Because those inputs are only modulatory, it is expected that they are not highly correlated with postsynaptic spikes. Anti-Hebbian learning preserves those synapses despite that weak correlation, whereas classic Hebbian learning would tend to eliminate them.
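To make the contrast concrete, here is a minimal sketch (my own illustration with made-up parameter values, not taken from the paper above) of a pair-based STDP window applied to uncorrelated pre/post spike timings, as you would expect from a purely modulatory input:

```python
import math
import random

def stdp_dw(dt, a_plus=0.010, a_minus=0.012, tau=20.0, anti=False):
    # dt = t_post - t_pre in ms. Classic (Hebbian) STDP: pre-before-post
    # (dt > 0) potentiates, post-before-pre (dt < 0) depresses.
    # The anti-Hebbian rule simply flips the sign of the window.
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau)
    else:
        dw = -a_minus * math.exp(dt / tau)
    return -dw if anti else dw

random.seed(0)
# Uncorrelated pre/post timing, as for a purely modulatory input:
# dt is equally likely to be positive or negative.
dts = [random.uniform(-50.0, 50.0) for _ in range(100000)]

hebbian = sum(stdp_dw(dt) for dt in dts) / len(dts)
anti    = sum(stdp_dw(dt, anti=True) for dt in dts) / len(dts)

# With a slightly depression-dominant window (a_minus > a_plus), the
# Hebbian rule weakens the uncorrelated synapse on average, while the
# anti-Hebbian rule strengthens (i.e. preserves) it.
print(f"mean Hebbian dw:      {hebbian:+.5f}")   # negative
print(f"mean anti-Hebbian dw: {anti:+.5f}")      # positive
```

The depression-dominant window is the standard way to get uncorrelated synapses pruned under classic STDP, which is exactly the fate the anti-Hebbian version avoids.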

More on this at the end of this paper:


I will need to pull in @FFiebig to help answer this.

I’ll be there! :smiley:


I’m in San Francisco for work this week so very excited that it lines up. Hoping to finish early enough to get down there on time.


Anti-Hebbian learning is interesting and somewhat peculiar, but evidently useful for certain kinds of computations such as independent component analysis, source separation, and noise suppression. As always, the brain is never just one thing or the other; the correct answer is practically always “it depends”, which means we need to learn a lot of neuroscience before drawing conclusions. This review paper by Feldman seems worthwhile, so I’ll give it a read and get back to you in the coming days.
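As a toy example of the decorrelation side of this (my own sketch in the spirit of Földiák-style anti-Hebbian lateral inhibition, not something from Feldman's review): correlated firing strengthens an inhibitory lateral connection until two output units become decorrelated, which is the basic building block behind anti-Hebbian rules in ICA and source separation.

```python
import random

random.seed(1)
lr = 0.01   # learning rate (illustrative value)
w = 0.0     # lateral inhibitory weight from unit 1 onto unit 2

for _ in range(20000):
    # Two inputs sharing a common source plus private noise -> correlated.
    s = random.gauss(0, 1)
    x1 = s + 0.3 * random.gauss(0, 1)
    x2 = s + 0.3 * random.gauss(0, 1)
    y1 = x1
    y2 = x2 - w * y1          # unit 2 is inhibited by unit 1's output
    # Anti-Hebbian update: joint activity strengthens the inhibition,
    # pushing the outputs toward zero correlation.
    w += lr * y1 * y2

# After learning, w settles near E[x1*x2] / E[x1**2], at which point the
# two outputs y1, y2 are approximately decorrelated.
print(f"learned lateral weight: {w:.3f}")
```

Note the sign convention: the update looks Hebbian, but because w enters y2 with a minus sign, correlated activity weakens the effective connection, which is what makes the rule anti-Hebbian.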


Live now!
