Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks

10/20/2021
by Manuela Girotti, et al.

We theoretically analyze the Feedback Alignment (FA) algorithm, an efficient alternative to backpropagation for training neural networks. We provide convergence guarantees with rates for deep linear networks, for both continuous and discrete dynamics. Additionally, we study incremental learning phenomena for shallow linear networks. Interestingly, under certain specific initializations, negligible components are learned before the principal ones, which can degrade the effectiveness of the learning algorithm, a phenomenon we classify as implicit anti-regularization. We also provide initialization schemes under which the components of the problem are learned approximately in decreasing order of importance, thus providing a form of implicit regularization.
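The abstract describes FA only in words; as a rough illustration, here is a minimal NumPy sketch of the FA update rule for a two-layer linear network on a least-squares task. The key difference from backpropagation is that the error is routed backward through a fixed random feedback matrix B instead of the transposed weight matrix. All dimensions, the learning rate, and the initialization scales below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression problem: targets produced by a linear teacher.
# (Dimensions and scales here are illustrative, not from the paper.)
d_in, d_hid, d_out, n = 5, 8, 3, 100
X = rng.standard_normal((d_in, n))
T = rng.standard_normal((d_out, d_in)) @ X  # teacher targets

# Two-layer linear student network: output = W2 @ W1 @ x.
W1 = 0.1 * rng.standard_normal((d_hid, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hid))

# Fixed random feedback matrix: FA's replacement for W2.T in the
# backward pass. It is drawn once and never updated.
B = rng.standard_normal((d_hid, d_out))

lr = 0.01
for _ in range(5000):
    H = W1 @ X          # hidden activations (no nonlinearity)
    Y = W2 @ H          # network outputs
    E = Y - T           # output error
    # Backpropagation would propagate the error with W2.T @ E;
    # Feedback Alignment uses the fixed matrix B instead.
    W2 -= lr * (E @ H.T) / n
    W1 -= lr * ((B @ E) @ X.T) / n

loss = np.mean((W2 @ (W1 @ X) - T) ** 2)
```

In the linear setting studied by the paper, the forward weight W2 tends to align with B.T over training, which is why the random feedback direction still produces a descent-like update for W1.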


Related research

04/30/2019 · Implicit Regularization of Discrete Gradient Dynamics in Deep Linear Neural Networks
When optimizing over-parameterized models, such as deep neural networks,...

05/12/2021 · Convergence Analysis of Over-parameterized Deep Linear Networks, and the Principal Components Bias
Convolutional Neural networks of different architectures seem to learn t...

08/03/2020 · Implicit Regularization in Deep Learning: A View from Function Space
We approach the problem of implicit regularization in deep learning from...

07/13/2023 · Implicit regularization in AI meets generalized hardness of approximation in optimization – Sharp results for diagonal linear networks
Understanding the implicit regularization imposed by neural network arch...

08/31/2022 · Incremental Learning in Diagonal Linear Networks
Diagonal linear networks (DLNs) are a toy simplification of artificial n...

10/22/2021 · The Equilibrium Hypothesis: Rethinking implicit regularization in Deep Neural Networks
Modern Deep Neural Networks (DNNs) exhibit impressive generalization pro...

07/13/2020 · Regularized linear autoencoders recover the principal components, eventually
Our understanding of learning input-output relationships with neural net...
