Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

02/16/2018
by Peter L. Bartlett, et al.

We analyze algorithms for approximating a function f(x) = Φ x mapping ℝ^d to ℝ^d using deep linear neural networks, i.e., algorithms that learn a function h parameterized by matrices Θ_1, ..., Θ_L and defined by h(x) = Θ_L Θ_{L-1} ⋯ Θ_1 x. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the optimum, in the case where the initial hypothesis Θ_1 = ... = Θ_L = I has loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for Φ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If Φ is symmetric positive definite, we show that an algorithm that initializes Θ_i = I learns an ϵ-approximation of f using a number of updates polynomial in L, the condition number of Φ, and log(d/ϵ). In contrast, we show that if the target Φ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that Φ satisfies u^⊤ Φ u > 0 for all u, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant u^⊤ Θ_L Θ_{L-1} ⋯ Θ_1 u > 0 for all u, and another that "balances" Θ_1, ..., Θ_L so that they have the same singular values.
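
The following is a minimal sketch, not the paper's exact algorithm, of the setting the abstract describes: gradient descent with identity initialization on a deep linear network, targeting a symmetric positive definite Φ. It uses the standard fact that for isotropic inputs (E[x x^⊤] = I) the population quadratic loss reduces to (1/2) ‖Θ_L ⋯ Θ_1 − Φ‖_F², and optimizes that directly. The dimension d, depth L, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
# Sketch: gradient descent with identity initialization on a deep linear net,
# minimizing (1/2) * ||Theta_L ... Theta_1 - Phi||_F^2 (the population loss
# under isotropic inputs). Hyperparameters below are illustrative assumptions.
import numpy as np

d, L = 4, 6                               # input dimension and network depth
rng = np.random.default_rng(0)

# Symmetric positive definite target Phi, chosen close to the identity.
A = rng.standard_normal((d, d))
Phi = np.eye(d) + 0.1 * (A + A.T) / 2

thetas = [np.eye(d) for _ in range(L)]    # identity initialization

def product(mats):
    """Return Theta_k ... Theta_1 for mats = [Theta_1, ..., Theta_k]."""
    P = np.eye(d)
    for Th in mats:
        P = Th @ P
    return P

eta = 0.01                                # illustrative step size
for step in range(2000):
    resid = product(thetas) - Phi         # gradient of the loss w.r.t. the end-to-end map
    grads = []
    for i in range(L):
        below = product(thetas[:i])       # Theta_i ... Theta_1
        above = product(thetas[i + 1:])   # Theta_L ... Theta_{i+2}
        grads.append(above.T @ resid @ below.T)   # chain rule for layer i+1
    for Th, g in zip(thetas, grads):
        Th -= eta * g

final_loss = 0.5 * np.linalg.norm(product(thetas) - Phi, 'fro') ** 2
print("final loss:", final_loss)
```

With a target this close to the identity the loss shrinks toward zero; starting instead from a Φ far from the identity, or one with a negative eigenvalue, is where the paper's negative results apply.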



Related research

11/02/2019 · Global Convergence of Gradient Descent for Deep Linear Residual Networks
We analyze the global convergence of gradient descent for deep linear re...

07/09/2020 · Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK
We consider the dynamic of gradient descent for learning a two-layer neu...

02/25/2022 · An initial alignment between neural network and target is needed for gradient descent to learn
This paper introduces the notion of "Initial Alignment" (INAL) between a...

10/04/2018 · A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
We analyze speed of convergence to global optimum for gradient descent t...

12/26/2017 · Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations
We show that the (stochastic) gradient descent algorithm provides an imp...

05/09/2021 · Directional Convergence Analysis under Spherically Symmetric Distribution
We consider the fundamental problem of learning linear predictors (i.e.,...

02/05/2019 · Exponentiated Gradient Meets Gradient Descent
The (stochastic) gradient descent and the multiplicative update method a...
