[Paper] Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

Abstract
Can the powerful backpropagation of error (backprop) learning algorithm be formulated in a manner suitable for implementation in neural circuitry? The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global (error) signals, as in orthodox backprop. Recently, several algorithms for approximating backprop using only local signals have been proposed. However, these algorithms typically impose other requirements which challenge biological plausibility: for example, requiring complex and precise connectivity schemes (predictive coding), or multiple sequential backwards phases with information being stored across phases (equilibrium prop). Here, we propose a novel local algorithm, Activation Relaxation (AR), which is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. Our algorithm converges robustly and exactly to the correct backpropagation gradients, requires only a single type of computational unit, utilises only a single backwards phase, and can perform credit assignment on arbitrary computation graphs. We illustrate these properties by training deep neural networks on visual classification tasks, and we describe simplifications to the algorithm which remove further obstacles to neurobiological implementation (for example, the weight-transport problem, and the use of nonlinear derivatives), while preserving performance.
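
To make the idea concrete, here is a minimal sketch of the relaxation dynamics described in the abstract, for a small tanh MLP in NumPy. After an ordinary forward pass, the output units are set to the loss gradient and each hidden layer is relaxed towards the fixed point x_l = (dx_{l+1}/dx_l)^T x_{l+1}, which at equilibrium equals the backprop gradient dL/dx_l. This is only an illustration based on the abstract, not the authors' code; the layer sizes, loss, step size, and step count are arbitrary choices.

```python
import numpy as np

def f(z):            # tanh nonlinearity
    return np.tanh(z)

def f_prime(z):      # derivative of tanh
    return 1.0 - np.tanh(z) ** 2

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                                   # illustrative layer widths
Ws = [rng.normal(0.0, 0.5, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

x_in = rng.normal(size=sizes[0])
target = np.eye(sizes[-1])[1]                          # arbitrary one-hot target

# Forward pass: store pre-activations (pre[l] = W_l x_l) and activations.
acts, pre = [x_in], []
for W in Ws:
    pre.append(W @ acts[-1])
    acts.append(f(pre[-1]))

# Relaxation phase: a second set of unit values carries error information.
# The output layer is clamped to dL/dx_L for L = 0.5 * ||x_L - target||^2.
xs = [np.zeros(s) for s in sizes]
xs[-1] = acts[-1] - target

n_steps, step = 200, 0.1
for _ in range(n_steps):
    for l in range(len(Ws) - 1, 0, -1):
        # Local drive: (dx_{l+1}/dx_l)^T x_{l+1} = W_l^T (f'(W_l x_l) * x_{l+1})
        drive = Ws[l].T @ (f_prime(pre[l]) * xs[l + 1])
        # Leaky dynamics dx_l/dt = -x_l + drive, integrated with an Euler step.
        xs[l] += step * (-xs[l] + drive)

# Sanity check against an explicit backprop sweep.
bp = [np.zeros(s) for s in sizes]
bp[-1] = acts[-1] - target
for l in range(len(Ws) - 1, 0, -1):
    bp[l] = Ws[l].T @ (f_prime(pre[l]) * bp[l + 1])

for l in range(1, len(sizes) - 1):
    print(f"layer {l}: max |AR - backprop| = {np.max(np.abs(xs[l] - bp[l])):.2e}")
```

If the relaxation is run long enough, the printed differences should be near numerical precision, which is the sense in which the paper claims exact convergence to backprop gradients using only local, per-layer signals.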


Seems like the actual algorithm is just a very long-winded way of doing traditional backpropagation.

We need something where losses are generated locally based on a model of the world.

This just takes the traditional error and trickles it down the network very slowly.
