Learning in the Machine:
Random Backpropagation and the Deep Learning Channel
(Extended Abstract)
Abstract
Random backpropagation (RBP) is a variant of the
backpropagation algorithm for training neural networks, where the transposes of the forward matrices
are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both
because of its effectiveness, in spite of using random matrices to communicate error information,
and because it completely removes the requirement
of maintaining symmetric weights in a physical
neural system. To better understand RBP, we compare different algorithms in terms of the information available locally to each neuron. In the process, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP),
sparse RBP, and study their behavior through simulations. These simulations show that many variants
are also robust deep learning algorithms, but that
the derivative of the transfer function is important
in the learning rule. Finally, we prove several mathematical results including the convergence to fixed
points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with
decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer
and convergence in special cases, and the convergence to fixed points of non-linear chains, when the
derivative of the activation functions is included.
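The core substitution described above can be sketched in a few lines of NumPy: in standard backpropagation the output error is sent backward through the transpose of the forward weight matrix, whereas in RBP it is sent through a fixed random matrix, while the derivative of the transfer function is retained. This is an illustrative sketch, not the paper's experimental setup; the network sizes, tanh non-linearity, squared-error loss, learning rate, and random data are all assumptions made for the example.

```python
import numpy as np

# Minimal RBP sketch: one hidden layer, tanh transfer function,
# squared-error loss. All hyperparameters here are illustrative.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random backward matrix,
                                         # used in place of W2.T

X = rng.normal(size=(100, n_in))         # illustrative inputs
T = rng.normal(size=(100, n_out))        # illustrative regression targets
lr = 0.05

def loss():
    return np.mean((np.tanh(X @ W1.T) @ W2.T - T) ** 2)

l0 = loss()
for _ in range(200):
    H = np.tanh(X @ W1.T)                # hidden activations
    Y = H @ W2.T                         # linear output
    E = Y - T                            # output error
    # RBP step: propagate E through the fixed random matrix B rather
    # than W2.T, keeping the transfer-function derivative (1 - H**2),
    # which the abstract notes is important for the learning rule.
    D = (E @ B.T) * (1 - H ** 2)
    W2 -= lr * (E.T @ H) / len(X)
    W1 -= lr * (D.T @ X) / len(X)
```

Note that the only change relative to standard backpropagation is the single line computing `D`: replacing `B` with `W2` there recovers the usual algorithm, which is what makes RBP attractive for physical neural systems that cannot maintain symmetric forward and backward weights.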