Convergence and Alignment of Gradient Descent with Random Backpropagation Weights

06/10/2021
by   Ganlin Song, et al.

Stochastic gradient descent with backpropagation is the workhorse of artificial neural networks. It has long been recognized that backpropagation fails to be a biologically plausible algorithm. Fundamentally, it is a non-local procedure – updating one neuron's synaptic weights requires knowledge of synaptic weights or receptive fields of downstream neurons. This limits the use of artificial neural networks as a tool for understanding the biological principles of information processing in the brain. Lillicrap et al. (2016) propose a more biologically plausible "feedback alignment" algorithm that uses random and fixed backpropagation weights, and show promising simulations. In this paper we study the mathematical properties of the feedback alignment procedure by analyzing convergence and alignment for two-layer networks under squared error loss. In the overparameterized setting, we prove that the error converges to zero exponentially fast, and also that regularization is necessary in order for the parameters to become aligned with the random backpropagation weights. Simulations are given that are consistent with this analysis and suggest further generalizations. These results contribute to our understanding of how biologically plausible algorithms might carry out weight learning in a manner different from Hebbian learning, with performance that is comparable with the full non-local backpropagation algorithm.
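For a concrete picture of the update rule being analyzed, here is a minimal NumPy sketch of feedback alignment for a two-layer ReLU network trained on squared error loss. The dimensions, step size, nonlinearity, and variable names (W1, W2, B) are illustrative assumptions rather than code from the paper; the essential point is that the first-layer update uses a fixed random feedback matrix B where backpropagation would use the transpose of the second-layer weights W2.

# Minimal sketch of feedback alignment for a two-layer network under
# squared error loss. All sizes, the ReLU choice, and the step size are
# assumptions for illustration, not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)
d, p, k = 10, 100, 1            # input dim, hidden width, output dim (assumed)
n = 50                          # number of training points (assumed)

X = rng.normal(size=(n, d))
y = rng.normal(size=(n, k))

W1 = rng.normal(size=(p, d)) / np.sqrt(d)   # first-layer weights (trained)
W2 = rng.normal(size=(k, p)) / np.sqrt(p)   # second-layer weights (trained)
B  = rng.normal(size=(k, p)) / np.sqrt(p)   # fixed random feedback weights

relu = lambda z: np.maximum(z, 0.0)
lr = 1e-2

for t in range(1000):
    H = relu(X @ W1.T)          # hidden activations, n x p
    out = H @ W2.T              # network output, n x k
    err = out - y               # residuals under squared loss

    # Backpropagation would propagate err through W2; feedback alignment
    # uses the fixed random matrix B instead, so updating W1 requires no
    # knowledge of the downstream weights W2 (no "weight transport").
    delta = (err @ B) * (H > 0)     # n x p
    grad_W1 = delta.T @ X / n       # same shape as W1
    grad_W2 = err.T @ H / n         # same shape as W2

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final mean squared error:", np.mean(err ** 2))

The only change from ordinary backpropagation is the single line computing delta: the non-local term involving W2 is replaced by the fixed random matrix B, which is what makes the procedure biologically plausible and is the mechanism whose convergence and alignment the paper studies.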


Related research

07/15/2019: Iterative temporal differencing with random synaptic feedback weights support error backpropagation for deep learning
This work shows that a differentiable activation function is not necessa...

11/17/2017: Deep supervised learning using local errors
Error backpropagation is a highly effective mechanism for learning high-...

04/08/2022: Biologically-inspired neuronal adaptation improves learning in neural networks
Since humans still outperform artificial neural networks on many tasks, ...

01/30/2019: Direct Feedback Alignment with Sparse Connections for Local Learning
Recent advances in deep neural networks (DNNs) owe their success to trai...

12/12/2018: Feedback alignment in deep convolutional networks
Ongoing studies have identified similarities between neural representati...

03/22/2022: Constrained Parameter Inference as a Principle for Learning
Learning in biological and artificial neural networks is often framed as...

06/02/2023: Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
In the quest to enhance the efficiency and bio-plausibility of training ...
