Generalized Hebbian learning as soft centroid clustering

As per the title, I think that if we generalize Hebbian and anti-Hebbian learning to include continuous-output neurons, it becomes essentially a grey-scale version of centroid clustering, where the centroid is the normalized output of a neuron. That is, the input on each synapse is compared to the subsequent output of its neuron, and the synapse is reinforced or weakened in proportion to how far some similarity measure computed by that comparison sits above or below its average.

Normal centroid clustering is binary: elements are either included in or excluded from a cluster. This neural version is grey-scale: synapses are scaled in proportion to similarity, although there is also an extreme binary option: forming or pruning the synapse. Has anyone heard of such an interpretation before? A rough sketch of the grey-scale update I have in mind follows.
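
Here is a minimal Python sketch of that update. The choice of cosine similarity as the neuron's output, the running average of that similarity, and the renormalization of the weights are illustrative assumptions on my part, not a fixed recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_hebbian_step(w, x, y_avg, lr=0.05, avg_rate=0.02):
    """One grey-scale Hebbian / anti-Hebbian step for a single neuron.

    The weight vector w acts as a soft centroid.  The output y is a
    similarity between the input x and w; synapses are pulled toward x
    (Hebbian) when y is above its running average, and pushed away
    (anti-Hebbian) when it is below.
    """
    y = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + 1e-9)  # cosine similarity as output
    w = w + lr * (y - y_avg) * (x - w)              # grey-scale update toward/away from x
    w /= np.linalg.norm(w) + 1e-9                   # keep the centroid normalized
    y_avg = (1 - avg_rate) * y_avg + avg_rate * y   # running average of the similarity
    return w, y_avg

# Toy run: the centroid drifts toward the dominant input direction.
w = rng.normal(size=8)
w /= np.linalg.norm(w)
y_avg = 0.0
for _ in range(500):
    x = rng.normal(loc=1.0, scale=0.5, size=8)
    w, y_avg = soft_hebbian_step(w, x, y_avg)
```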


If you have a vector-to-vector, randomly initialized associative memory with continuous outputs, one kind of clustering you can do is to process a vector from the training set, then force by training each output vector element to be +1 or -1, whichever is closest to the observed output for that input vector. I don't think you would want to set the training rate very high.
That would give, I suppose, an interesting Hamming-space clustering effect, and it would be a form of unsupervised learning. I can't investigate every idea I can think of, though.
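
A minimal sketch of that scheme, for what it's worth; the plain linear memory, the sign snapping, and the delta-rule nudge toward the binary target are illustrative assumptions, not a worked-out design:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 16, 8
W = rng.normal(scale=0.1, size=(n_out, n_in))   # randomly initialized associative memory

def hamming_cluster_step(W, x, lr=0.01):
    """One unsupervised step of the scheme described above.

    The memory's continuous output is snapped element-wise to the
    nearest of +1 / -1, and the weights are nudged (with a small
    training rate) toward producing that binary vector for this
    input, which tends to pull similar inputs onto the same corner
    of the hypercube, i.e. a Hamming-space clustering effect.
    """
    y = W @ x                            # continuous output of the memory
    target = np.sign(y)                  # nearest of +1 / -1 per element
    target[target == 0] = 1.0            # break ties toward +1
    W += lr * np.outer(target - y, x)    # delta-rule nudge toward the binary target
    return W, target

# Toy run over a small training set.
X = rng.normal(size=(100, n_in))
for epoch in range(20):
    for x in X:
        W, _ = hamming_cluster_step(W, x)
```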
