Gradient-trained Weights in Wide Neural Networks Align Layerwise to Error-scaled Input Correlations

06/15/2021
by Akhilan Boopathy, et al.

Recent work has examined how deep neural networks, which can solve a variety of difficult problems, incorporate the statistics of their training data to achieve this success. However, existing results have been established only in limited settings. In this work, we derive the layerwise weight dynamics of infinite-width neural networks with nonlinear activations trained by gradient descent. We show theoretically that weight updates are aligned with error-scaled input correlations from intermediate layers, and demonstrate empirically that the result also holds in wide finite-width networks. This alignment result allows us to formulate backpropagation-free learning rules, named Align-zero and Align-ada, that theoretically achieve the same alignment as backpropagation. Finally, we test these learning rules on benchmark problems in feedforward and recurrent neural networks and show that, in wide networks, they perform comparably to backpropagation.
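
The alignment claim lends itself to a direct empirical check. Below is a minimal, hypothetical sketch (not the authors' code) in PyTorch: it trains a wide MLP by gradient descent on random data and reports, per layer, the cosine similarity between the accumulated weight change and the error-scaled input correlation, taken here as the batch-averaged outer product of the backpropagated error with the layer's input, measured at initialization. The architecture, dataset, and hyperparameters are illustrative assumptions; under the lazy-training reading of the result, the cosine should stay close to 1 in wide networks.

```python
# Hypothetical sketch (not the authors' code): check that in a wide network
# trained by gradient descent, each layer's accumulated weight change stays
# aligned with the error-scaled input correlation E[delta_l a_{l-1}^T]
# measured at initialization.
import torch
import torch.nn as nn

torch.manual_seed(0)
width, n_samples, steps, lr = 2048, 256, 200, 1e-2  # illustrative values

X = torch.randn(n_samples, 32)
y = torch.randn(n_samples, 1)

net = nn.Sequential(
    nn.Linear(32, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, 1),
)
loss_fn = nn.MSELoss()

def weight_grads(model):
    # Gradient of the loss w.r.t. each weight matrix: the batch average of
    # delta_l a_{l-1}^T, i.e. the error-scaled input correlation.
    model.zero_grad()
    loss_fn(model(X), y).backward()
    return [p.grad.clone() for p in model.parameters() if p.dim() == 2]

W0 = [p.detach().clone() for p in net.parameters() if p.dim() == 2]
G0 = weight_grads(net)  # error-scaled input correlations at initialization

opt = torch.optim.SGD(net.parameters(), lr=lr)
for _ in range(steps):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# Cosine similarity between each layer's total weight change and the
# descent direction given by the initial error-scaled input correlation.
weights = [p for p in net.parameters() if p.dim() == 2]
for l, (w, w0, g0) in enumerate(zip(weights, W0, G0)):
    dw = (w.detach() - w0).flatten()
    target = -g0.flatten()
    cos = torch.dot(dw, target) / (dw.norm() * target.norm())
    print(f"layer {l}: cos(dW, error-scaled input corr.) = {cos.item():.3f}")
```

In this reading, one would expect the alignment to weaken as the width shrinks, which offers a simple way to probe where the infinite-width prediction breaks down.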

Related research

- ZORB: A Derivative-Free Backpropagation Algorithm for Neural Networks (11/17/2020)
  Gradient descent and backpropagation have enabled neural networks to ach...

- Infinite-dimensional Folded-in-time Deep Neural Networks (01/08/2021)
  The method recently introduced in arXiv:2011.10115 realizes a deep neura...

- A Rainbow in Deep Network Black Boxes (05/29/2023)
  We introduce rainbow networks as a probabilistic model of trained deep n...

- Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks (01/11/2021)
  Infinite width limits of deep neural networks often have tractable forms...

- Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align? (06/04/2023)
  Feedback alignment algorithms are an alternative to backpropagation to t...

- Learning to learn with backpropagation of Hebbian plasticity (09/08/2016)
  Hebbian plasticity is a powerful principle that allows biological brains...

- The dynamics of learning with feedback alignment (11/24/2020)
  Direct Feedback Alignment (DFA) is emerging as an efficient and biologic...
