Gradient Clusteron Visualization

It looks like you treated synaptic distance as the synapses' post-synaptic positions on the dendrite, rather than their pre-synaptic positions in the input space. That was something I missed on my first read of the paper.

Mechanically, this seems to be a way of magnifying the correlated response of a set of inputs nonlinearly without requiring large synaptic weights. It lets the neuron catch the subtle but important co-firing of the inputs for a particular pattern, while the response drops off sharply as those inputs start to disappear. That drop-off is nonlinear, rather than the linear drop-off you would get from a traditional ANN.
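To make that concrete, here is a toy sketch of my own (not code from the paper), using the original clusteron activation rule in which each synapse's contribution is gated by the co-active synapses in its dendritic neighborhood. The neighborhood `radius` and the pattern layout are illustrative assumptions:

```python
import numpy as np

def clusteron_response(x, radius=4):
    """Each synapse contributes x_i * (sum of inputs within `radius`
    positions on the dendrite), so co-active, co-located synapses
    amplify each other multiplicatively rather than just summing."""
    n = len(x)
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        total += x[i] * np.sum(x[lo:hi])
    return total

def linear_response(x):
    """Traditional ANN-style response with unit weights."""
    return float(np.sum(x))

# A pattern whose synapses are clustered on positions 0-7 of the dendrite.
full    = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
partial = full.copy()
partial[4:8] = 0  # half of the pattern's inputs stop firing

for name, x in [("full", full), ("half", partial)]:
    print(f"{name:5s}  linear={linear_response(x):4.1f}  "
          f"clusteron={clusteron_response(x):5.1f}")

# The linear response retains 50% of its value, while the clusteron response
# falls much further (approaching the quadratic ~25% as the neighborhood
# covers the whole cluster), because each lost input also removes the
# multiplicative boost it gave to its neighbors.
```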

I had been thinking of the clusteron in purely spatial clustering terms (like the visual field) and couldn't see the benefit. But if you want to isolate and select out particular firing correlations, the clusteron can do it in a way I'm not sure other approaches can.

It’s interesting that treating the dendrite as a first-class computational object opens up algorithmic possibilities not available to traditional ANNs. In BrainBlocks, we treat a single dendrite as a hypothesis, with its synapses aligned to the expected pattern. Each neuron can hold a limited number of dendrites (hypotheses), each trained to a particular pattern, and these dendrites are created as needed up to that limit.
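Roughly, the idea looks like the following toy illustration. This is not the actual BrainBlocks API; the class, threshold, and overlap measure are all assumptions made just to show the dendrite-as-hypothesis scheme with on-demand allocation:

```python
import numpy as np

class HypothesisNeuron:
    """Toy illustration: each dendrite stores one expected pattern, and new
    dendrites are allocated on demand up to a per-neuron limit."""

    def __init__(self, max_dendrites=4, match_threshold=0.75):
        self.max_dendrites = max_dendrites
        self.match_threshold = match_threshold
        self.dendrites = []  # each entry is a binary expected pattern

    def overlap(self, dendrite, x):
        # Fraction of the dendrite's expected inputs that are currently active.
        return np.sum(dendrite * x) / max(1.0, np.sum(dendrite))

    def respond(self, x):
        # Neuron response = best match across its dendrites (hypotheses).
        if not self.dendrites:
            return 0.0
        return max(self.overlap(d, x) for d in self.dendrites)

    def learn(self, x):
        # If no existing hypothesis explains the input well, grow a new
        # dendrite aligned to it, up to the per-neuron limit.
        if self.respond(x) < self.match_threshold and len(self.dendrites) < self.max_dendrites:
            self.dendrites.append(x.copy())
```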

In the clusteron approach, by contrast, a single dendrite can represent multiple hypotheses by clustering synapses together for each expected pattern. The clusteron gives you that nonlinear response to a detected pattern, whereas in BrainBlocks I think we would only get a linear response to partial patterns. In practice this is usually not a problem, because the neurons fire in a winner-take-all (WTA) manner, so a 50% response to a pattern is usually discarded in favor of something with a higher response.
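A quick worked example of that last point (the patterns and overlap measure are illustrative assumptions, not BrainBlocks internals):

```python
import numpy as np

# Two neurons, each holding one dendrite/hypothesis.
pattern_a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)  # neuron A's hypothesis
pattern_b = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)  # neuron B's hypothesis

# Input fully matches A and only half-matches B.
x = np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=float)

def overlap(expected, x):
    # Linear partial-pattern response: fraction of expected inputs active.
    return np.sum(expected * x) / np.sum(expected)

scores = {"A": overlap(pattern_a, x), "B": overlap(pattern_b, x)}
winner = max(scores, key=scores.get)  # winner-take-all across the neurons
print(scores, "->", winner)           # {'A': 1.0, 'B': 0.5} -> A

# B's 50% linear response is discarded by WTA in favor of the stronger match,
# which is why the missing nonlinear drop-off rarely matters in practice.
```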
