As per the title: if we generalize Hebbian + anti-Hebbian learning to include continuous-output neurons, it seems identical to a grey-scale version of centroid clustering, where the centroid is the normalized output of a neuron. That is, the input per synapse is compared to the subsequent output of its neuron, and the synapse is reinforced or weakened in proportion to how far above or below average some similarity measure (computed by that comparison) falls.
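A minimal sketch of what I mean. Everything here is illustrative: cosine similarity stands in for "some similarity measure", and the specific update rule (pulling the weight vector toward the input, scaled by a signed continuous output) is just one way to realize the reinforce/weaken-in-proportion idea:

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 4
w = rng.random(size=dim)   # synaptic weights of one neuron = candidate centroid
lr = 0.1                   # learning rate (arbitrary choice)

def similarity(x, w):
    # one possible similarity measure: cosine similarity of input and weights
    return float(x @ w / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12))

def hebbian_step(x, w, baseline=0.0):
    """Continuous Hebbian/anti-Hebbian update: reinforce when similarity is
    above baseline, weaken when below -- a grey-scale centroid pull."""
    y = similarity(x, w) - baseline   # signed, continuous "output"
    return w + lr * y * (x - w)       # move weights toward x, scaled by y

# feed noisy inputs drawn around a true centre; w should drift toward it,
# just as an online centroid update would
centre = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(500):
    x = centre + 0.1 * rng.normal(size=dim)
    w = hebbian_step(x, w)

print(np.round(w, 2))  # w should end up near [1, 1, 0, 0]
```

With a binary winner-take-all output this collapses to ordinary online k-means (each input updates exactly one centroid); with the continuous output above, every input nudges the weights in proportion to similarity, which is the grey-scale behavior I'm describing.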
Normal centroid clustering is binary: elements are either included in or excluded from a cluster. This neural version is grey-scale: synapses are scaled in proportion to similarity, although there is also an extreme binary option of forming or pruning the synapse outright. Has anyone come across this interpretation before?