Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?

06/04/2023
by Dominique Chu, et al.

Feedback alignment algorithms are an alternative to backpropagation for training neural networks, in which some of the partial derivatives required to compute the gradient are replaced by fixed random terms. This essentially transforms the update rule into a random walk in weight space. Surprisingly, learning still works with these algorithms, including the training of deep neural networks. This is generally attributed to an alignment of the update of the random walker with the true gradient, the eponymous gradient alignment, which drives an approximate gradient descent. The mechanism that leads to this alignment remains unclear, however. In this paper, we use mathematical reasoning and simulations to investigate gradient alignment. We observe that the feedback alignment update rule has fixed points, which correspond to extrema of the loss function, and we show that gradient alignment is a stability criterion for those fixed points. It is a necessary, but not a sufficient, criterion for algorithm performance. Experimentally, we demonstrate that high levels of gradient alignment can coincide with poor algorithm performance and that the alignment is not always what drives the gradient descent.
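To make the update rule concrete, here is a minimal sketch of feedback alignment on a two-layer tanh network with MSE loss. The network size, toy task, and all variable names are illustrative assumptions, not taken from the paper; the backward pass simply replaces the transposed weight matrix with a fixed random feedback matrix, and the cosine similarity printed at the end is the gradient alignment the abstract refers to.

```python
# Minimal feedback alignment sketch (assumptions: two-layer tanh network,
# MSE loss, toy sin(x) regression task; all names here are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on scalar inputs.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

n_in, n_hidden, n_out = 1, 32, 1
W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_out))

# Fixed random feedback matrix: plays the role of W2.T in the backward pass.
B = rng.normal(0, 0.5, (n_out, n_hidden))

lr = 0.01
for step in range(2001):
    # Forward pass.
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    err = y_hat - Y                       # dL/dy_hat for MSE (up to a constant)

    # Backpropagation would use:  delta = (err @ W2.T) * (1 - h**2)
    # Feedback alignment uses the fixed random matrix B instead:
    delta_fa = (err @ B) * (1 - h**2)
    grad_W1_fa = X.T @ delta_fa / len(X)
    grad_W2 = h.T @ err / len(X)

    # True first-layer gradient, computed for comparison only.
    delta_bp = (err @ W2.T) * (1 - h**2)
    grad_W1_bp = X.T @ delta_bp / len(X)

    W1 -= lr * grad_W1_fa
    W2 -= lr * grad_W2

    if step % 500 == 0:
        # Gradient alignment: cosine similarity between the feedback
        # alignment update and the true gradient for W1.
        cos = np.sum(grad_W1_fa * grad_W1_bp) / (
            np.linalg.norm(grad_W1_fa) * np.linalg.norm(grad_W1_bp) + 1e-12)
        print(f"step {step}: loss={np.mean(err**2):.4f}, alignment={cos:.3f}")
```

In runs of sketches like this, the printed alignment typically grows above zero as training proceeds, which is the empirical observation the paper sets out to explain.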


Related research

The dynamics of learning with feedback alignment (11/24/2020)
Direct Feedback Alignment (DFA) is emerging as an efficient and biologic...

Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks (06/02/2023)
In the quest to enhance the efficiency and bio-plausibility of training ...

Iterative temporal differencing with random synaptic feedback weights support error backpropagation for deep learning (07/15/2019)
This work shows that a differentiable activation function is not necessa...

Gradient-trained Weights in Wide Neural Networks Align Layerwise to Error-scaled Input Correlations (06/15/2021)
Recent works have examined how deep neural networks, which can solve a v...

Reducing the Computational Burden of Deep Learning with Recursive Local Representation Alignment (02/10/2020)
Training deep neural networks on large-scale datasets requires significa...

Principled Training of Neural Networks with Direct Feedback Alignment (06/11/2019)
The backpropagation algorithm has long been the canonical training metho...

Photonic Differential Privacy with Direct Feedback Alignment (06/07/2021)
Optical Processing Units (OPUs), low-power photonic chips dedicated to ...
