Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks

06/02/2023
by Zachary Robertson, et al.

In the quest to enhance the efficiency and bio-plausibility of training deep neural networks, Feedback Alignment (FA), which replaces the backward-pass weights with fixed random matrices, has emerged as an alternative to traditional backpropagation. While the appeal of FA lies in its avoidance of the weight-transport problem and its biological plausibility, the theoretical understanding of this learning rule remains partial. This paper uncovers a set of conservation laws underpinning the learning dynamics of FA, revealing intriguing parallels between FA and Gradient Descent (GD). Our analysis shows that FA harbors implicit biases akin to those exhibited by GD, challenging the prevailing narrative that these learning algorithms are fundamentally different. Moreover, we demonstrate that these conservation laws yield sufficient conditions for layer-wise alignment with feedback matrices in ReLU networks. We further show that this implies that over-parameterized two-layer linear networks trained with FA converge to minimum-norm solutions. The implications of our findings offer avenues for developing more efficient and biologically plausible alternatives to backpropagation through an understanding of the principles governing learning dynamics in deep networks.
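To make the mechanism concrete, the sketch below trains a two-layer linear network with FA: the forward pass is standard, but the hidden-layer error signal is carried by a fixed random matrix B instead of the transpose of the output weights. This is a minimal illustration under assumed dimensions and a simple cosine-similarity alignment metric; it is not the paper's exact setup or its conserved quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 10, 32, 1, 200

# Synthetic linear regression task (illustrative teacher-student setup).
X = rng.normal(size=(d_in, n))
w_true = rng.normal(size=(d_out, d_in))
Y = w_true @ X

W1 = rng.normal(size=(d_hidden, d_in)) * 0.1
W2 = rng.normal(size=(d_out, d_hidden)) * 0.1
B = rng.normal(size=(d_hidden, d_out))  # fixed random feedback matrix (replaces W2.T)

lr = 1e-3
for step in range(2000):
    H = W1 @ X            # hidden activations
    Y_hat = W2 @ H        # network output
    E = Y_hat - Y         # output error

    # Backpropagation would propagate the error via W2.T @ E;
    # Feedback Alignment uses the fixed random matrix B instead.
    dW2 = E @ H.T / n
    dH = B @ E            # feedback-aligned hidden error
    dW1 = dH @ X.T / n

    W2 -= lr * dW2
    W1 -= lr * dW1

# Layer-wise alignment: cosine similarity between the forward weights W2.T
# and the feedback matrix B; under FA the forward weights tend to rotate
# toward the feedback matrix during training.
align = np.sum(W2.T * B) / (np.linalg.norm(W2) * np.linalg.norm(B))
print(f"loss={np.mean(E**2):.4f}  alignment(W2.T, B)={align:.3f}")
```

Note that B is never updated: any growth in the alignment score comes from the forward weights adapting to the fixed feedback pathway, which is the phenomenon the paper's layer-wise alignment conditions concern.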



Related research

11/24/2020 · The dynamics of learning with feedback alignment
Direct Feedback Alignment (DFA) is emerging as an efficient and biologic...

06/10/2021 · Convergence and Alignment of Gradient Descent with Random Back Propagation Weights
Stochastic gradient descent with backpropagation is the workhorse of art...

06/04/2023 · Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
Feedback alignment algorithms are an alternative to backpropagation to t...

10/26/2022 · Scaling Laws Beyond Backpropagation
Alternatives to backpropagation have long been studied to better underst...

07/13/2021 · Tourbillon: a Physically Plausible Neural Architecture
In a physical neural system, backpropagation is faced with a number of o...

02/10/2023 · Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization
"Forward-only" algorithms, which train neural networks while avoiding a ...

06/19/2018 · Contrastive Hebbian Learning with Random Feedback Weights
Neural networks are commonly trained to make predictions through learnin...
