Signal Propagation: A Framework for Learning and Inference In a Forward Pass

Adam Kohan, Edward A. Rietman, Hava T. Siegelmann

“Abstract—We propose a new learning framework, signal propagation (sigprop), for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation. In sigprop, there is only the forward path for inference and learning. So, there are no structural or computational constraints necessary for learning to take place, beyond the inference model itself, such as feedback connectivity, weight transport, or a backward pass, which exist under backpropagation-based approaches. That is, sigprop enables global supervised learning with only a forward path. This is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. Sigprop by construction has better compatibility with models of learning in the brain and in hardware than backpropagation, including alternative approaches relaxing learning constraints. We also demonstrate that sigprop is more efficient in time and memory than they are. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals in context to backpropagation. To further support relevance to biological and hardware learning, we use sigprop to train continuous time neural networks with Hebbian updates, and train spiking neural networks with only the voltage or with biologically and hardware compatible surrogate functions.”
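As I read the abstract, the trick is that the label itself is turned into a signal that travels the same forward path as the data, and each layer is updated by a purely local loss, so nothing ever has to flow backward through the network. A minimal PyTorch sketch under that reading (the layer sizes, the label projection, and the plain L2 local loss are my own illustrative choices, not the paper's exact method):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Forward-only, layer-local training in the spirit of sigprop (illustrative).
# The one-hot label is projected into input space and forwarded like data;
# each layer is trained to pull the input's features toward the features of
# its own label signal. No gradient ever crosses a layer boundary.
layers = nn.ModuleList([nn.Linear(784, 256), nn.Linear(256, 256)])
label_proj = nn.Linear(10, 784)   # embeds the label into input space (assumed)
opt = torch.optim.SGD(
    list(layers.parameters()) + list(label_proj.parameters()), lr=1e-2)

def train_step(x, y_onehot):
    h_x = x
    h_s = label_proj(y_onehot)        # the "learning signal", forwarded like data
    for layer in layers:
        h_x = torch.relu(layer(h_x))
        h_s = torch.relu(layer(h_s))
        loss = F.mse_loss(h_x, h_s)   # local loss for this layer only
        opt.zero_grad()
        loss.backward()               # reaches only this layer (and label_proj once)
        opt.step()
        h_x, h_s = h_x.detach(), h_s.detach()   # block any cross-layer gradient
    return h_x

x = torch.rand(32, 784)
y = F.one_hot(torch.randint(0, 10, (32,)), 10).float()
feats = train_step(x, y)
```

A real version would presumably also push apart mismatched input/label pairs so the layers cannot collapse to a constant representation, but the sketch shows the point: the learning signal arrives through the forward path.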

A fatal blow to backpropagation?

3 Likes

Another (different) alternative from an ML godfather:

The Forward-Forward Algorithm: Some Preliminary Investigations
Geoffrey Hinton

Abstract
“The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth serious investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.”
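The per-layer objective from the abstract is simple enough to write down directly. A minimal single-layer sketch (the threshold value, layer sizes, and the softplus loss on goodness minus a threshold are my assumptions, not necessarily the paper's exact choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One Forward-Forward style layer (illustrative). Goodness = sum of squared
# activities; push it above a threshold for positive (real) data and below
# the threshold for negative data. No backward pass crosses layers.
layer = nn.Linear(784, 500)
opt = torch.optim.SGD(layer.parameters(), lr=0.03)
theta = 2.0                                   # goodness threshold (assumed)

def goodness(h):
    return (h ** 2).sum(dim=1)                # sum of squared activities

def ff_step(x_pos, x_neg):
    h_pos = torch.relu(layer(x_pos))          # forward pass on positive data
    h_neg = torch.relu(layer(x_neg))          # forward pass on negative data
    # softplus(z) = log(1 + exp(z)): minimizing it drives goodness above theta
    # for positives and below theta for negatives
    loss = F.softplus(torch.cat([theta - goodness(h_pos),
                                 goodness(h_neg) - theta])).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    # a subsequent layer would train on these detached activities
    return h_pos.detach(), h_neg.detach()

x_pos = torch.rand(32, 784)                   # "positive" (real) examples
x_neg = torch.rand(32, 784)                   # "negative" examples (e.g. generated)
h_pos, h_neg = ff_step(x_pos, x_neg)
```

Stacking layers just repeats this on the detached activities, which is why nothing ever needs to be stored for a later backward pass.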

3 Likes

Yes, these two are interesting. An important advantage over backpropagation is that learning no longer depends on having smoothly differentiable activation functions, which allows for all kinds of experiments.
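For instance, a layer with a hard step activation (no usable derivative at all) can still be trained with a forward-only, gradient-free local rule. A toy numpy sketch (the Hebbian-style update modulated by a positive/negative label is my own illustrative rule, not either paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(128, 784)).astype(np.float32)

def step(v):
    # hard threshold: not differentiable, so backprop could not be used here
    return (v > 0).astype(np.float32)

def local_update(W, x, positive, lr=0.01):
    h = step(W @ x)                       # forward pass only
    sign = 1.0 if positive else -1.0      # raise or lower this layer's activity
    W += sign * lr * np.outer(h, x)       # Hebbian-style update, no derivatives
    return h

x_real = rng.random(784, dtype=np.float32)
x_fake = rng.random(784, dtype=np.float32)
local_update(W, x_real, positive=True)
local_update(W, x_fake, positive=False)
```

Nothing here ever asks for the slope of the activation, which is exactly what opens the door to spiking units, hard thresholds, and other non-smooth experiments.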

2 Likes

Or have two randomly initialized neural networks and get their layers to agree on a response to a particular input.
Probably the amount of “smarts” you can get from a neural network ranks evolution > backpropagation > forward methods.

2 Likes