Deep Linear Networks Dynamics: Low-Rank Biases Induced by Initialization Scale and L2 Regularization

06/30/2021
by Arthur Jacot, et al.

For deep linear networks (DLNs), various hyperparameters alter the dynamics of training dramatically. We investigate how the rank of the linear map found by gradient descent is affected by (1) the initialization norm and (2) the addition of L_2 regularization on the parameters. For (1), we study two regimes: (1a) the linear/lazy regime, for large initialization norm, and (1b) a saddle-to-saddle regime, for small initialization norm. In setting (1a), the dynamics of a DLN of any depth are similar to those of a standard linear model, without any low-rank bias. In setting (1b), we conjecture that throughout training, gradient descent approaches a sequence of saddles, each corresponding to a linear map of increasing rank, until it reaches a minimal-rank global minimum. We support this conjecture with a partial proof and numerical experiments. For (2), we show that adding L_2 regularization on the parameters corresponds to adding an L_p-Schatten (quasi)norm penalty on the linear map, with p=2/L for a depth-L network, leading to a stronger low-rank bias as L grows. The effect of L_2 regularization on the loss surface depends on the depth: for shallow networks, all critical points are either strict saddles or global minima, whereas for deep networks, some local minima appear. We observe numerically that in some settings these local minima can generalize better than the global ones.
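As an illustrative sketch of the saddle-to-saddle regime (1b), the following minimal NumPy experiment (not the authors' code; the dimensions, depth, initialization scale, and learning rate are arbitrary choices for illustration) trains a depth-L deep linear network by gradient descent from a small initialization on a rank-2 target, and tracks the singular values of the end-to-end map. Under the conjectured picture, the singular values should activate a few at a time, with the learned map passing near saddles of increasing rank.

```python
# Sketch: gradient descent on a depth-L deep linear network with small
# initialization, tracking the singular values of the product W_L ... W_1.
import numpy as np

rng = np.random.default_rng(0)

d, L = 5, 3          # input/output dimension, network depth (assumed values)
sigma0 = 1e-2        # small initialization scale (regime 1b)
lr, steps = 0.05, 20000

# Rank-2 target, so the minimal-rank global minimum has rank 2.
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
A_star = U[:, :2] @ np.diag([3.0, 1.0]) @ U[:, :2].T

Ws = [sigma0 * rng.standard_normal((d, d)) for _ in range(L)]

def product(Ws):
    """End-to-end linear map W_L ... W_1."""
    P = np.eye(d)
    for W in Ws:
        P = W @ P
    return P

for t in range(steps):
    E = product(Ws) - A_star            # gradient of 0.5*||P - A*||_F^2 w.r.t. P
    grads = []
    for i in range(L):
        # dLoss/dW_i = (W_L ... W_{i+1})^T E (W_{i-1} ... W_1)^T
        left = np.eye(d)
        for W in Ws[i + 1:]:
            left = W @ left
        right = np.eye(d)
        for W in Ws[:i]:
            right = W @ right
        grads.append(left.T @ E @ right.T)
    for W, g in zip(Ws, grads):
        W -= lr * g
    if t % 2000 == 0:
        sv = np.linalg.svd(product(Ws), compute_uv=False)
        print(f"step {t:6d}  singular values: {np.round(sv, 3)}")
```

With a small sigma0, the printed singular values typically stay near zero for a long plateau, then the largest one grows, followed later by the second, matching the incremental-rank behavior described above.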
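To make the correspondence in (2) concrete, the following identity (a standard balanced-factorization argument; the exact constant L is our assumption here, not a quote of the paper's statement) shows how a parameter-space L_2 penalty turns into a Schatten quasi-norm penalty on the end-to-end map A = W_L ⋯ W_1 with singular values s_i(A):

```latex
\min_{W_L \cdots W_1 = A} \ \sum_{\ell=1}^{L} \|W_\ell\|_F^2
  \;=\; L \, \|A\|_{S_{2/L}}^{2/L}
  \;=\; L \sum_i s_i(A)^{2/L},
\qquad\text{so}\qquad
\min_{W_L \cdots W_1 = A} \Big( C(A) + \lambda \sum_{\ell} \|W_\ell\|_F^2 \Big)
  \;=\; C(A) + \lambda L \, \|A\|_{S_{2/L}}^{2/L}.
```

For L=1 this reduces to ridge regression, for L=2 it is nuclear-norm regularization, and as L grows the penalty approaches a rank penalty, which is the sense in which the low-rank bias strengthens with depth.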


