Signal Propagation: A Framework for Learning and Inference In a Forward Pass

Another (different) alternative to backpropagation, this one from an ML godfather:

The Forward-Forward Algorithm: Some Preliminary Investigations
Geoffrey Hinton

Abstract
“The aim of this paper is to introduce a new learning procedure for neural networks
and to demonstrate that it works well enough on a few small problems to be worth
serious investigation. The Forward-Forward algorithm replaces the forward and
backward passes of backpropagation by two forward passes, one with positive
(i.e. real) data and the other with negative data which could be generated by the
network itself. Each layer has its own objective function which is simply to have
high goodness for positive data and low goodness for negative data. The sum of the
squared activities in a layer can be used as the goodness but there are many other
possibilities, including minus the sum of the squared activities. If the positive and
negative passes can be separated in time, the negative passes can be done offline,
which makes the learning much simpler in the positive pass and allows video to
be pipelined through the network without ever storing activities or stopping to
propagate derivatives.”
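To make the idea concrete, here is a minimal sketch (not Hinton's implementation; the layer size, threshold `theta`, and learning rate are illustrative assumptions) of one layer trained with its own local objective: goodness is the sum of squared activities, pushed above a threshold for positive data and below it for negative data, with no backward pass through other layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FFLayer:
    """One Forward-Forward layer with a purely local objective:
    high goodness (sum of squared activities) for positive data,
    low goodness for negative data. Illustrative sketch only."""

    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        # theta is the goodness threshold, lr the step size
        # (both hypothetical choices, not from the paper).
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.theta = theta
        self.lr = lr

    def forward(self, x):
        return relu(self.W @ x)

    def update(self, x, positive):
        """One local gradient step on P(positive) = sigmoid(g - theta)."""
        z = self.W @ x
        h = relu(z)
        g = np.sum(h ** 2)                     # goodness of this layer
        # d(loss)/dg: push g up for positive data, down for negative.
        dL_dg = sigmoid(g - self.theta) - (1.0 if positive else 0.0)
        dL_dz = dL_dg * 2.0 * h * (z > 0)      # chain rule through relu
        self.W -= self.lr * np.outer(dL_dz, x) # local update, no backprop
        return g

# Usage: after training, a positive input should score higher
# goodness than a negative one.
layer = FFLayer(8, 16)
x_pos = rng.normal(size=8)   # stands in for "real" data
x_neg = rng.normal(size=8)   # stands in for negative data
for _ in range(200):
    layer.update(x_pos, positive=True)
    layer.update(x_neg, positive=False)
g_pos = np.sum(layer.forward(x_pos) ** 2)
g_neg = np.sum(layer.forward(x_neg) ** 2)
```

Because each layer's objective depends only on its own activities, layers can be trained greedily and, as the abstract notes, the negative pass can be separated in time from the positive one.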