Global Convergence Analysis of Deep Linear Networks with a One-Neuron Layer

01/08/2022
by Kun Chen, et al.

In this paper, we build on Eftekhari's work to give a non-local convergence analysis of deep linear networks. Specifically, we consider optimizing deep linear networks that have a layer with a single neuron under quadratic loss. Under gradient flow, we characterize the convergence point of trajectories from arbitrary starting points, including paths that converge to one of the saddle points or to the origin. We also establish stagewise convergence rates for trajectories that converge to the global minimizer. To obtain these results, we mainly extend the machinery in Eftekhari's work to provably identify the rank-stable set and the global-minimizer-convergent set, and we give concrete examples demonstrating that our definitions are necessary. Crucially, to the best of our knowledge, our results appear to be the first non-local global analysis of linear neural networks from arbitrary initializations, in contrast to the lazy training regime that has dominated the neural network literature and the restricted benign initializations in Eftekhari's work. Extending our results to general linear networks without the one-neuron assumption remains a challenging open problem.
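To make the setting concrete, here is a minimal numerical sketch (not from the paper) of the problem the abstract describes: gradient flow on a deep linear network whose factors W_L, ..., W_1 include one layer with a single neuron, trained under quadratic loss toward a target matrix. Gradient flow is approximated by explicit Euler steps; the layer widths, target `Phi`, step size, and initialization scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative architecture: four factors, one single-neuron layer.
widths = [4, 4, 1, 4, 3]              # the third layer has exactly one neuron
depth = len(widths) - 1
Ws = [0.5 * rng.standard_normal((widths[i + 1], widths[i]))
      for i in range(depth)]          # arbitrary (not specially chosen) start

Phi = rng.standard_normal((widths[-1], widths[0]))  # target of quadratic loss

def prod(factors, dim):
    """Multiply the factors in network order: returns W_k ... W_1 on R^dim."""
    P = np.eye(dim)
    for W in factors:
        P = W @ P
    return P

def loss(Ws):
    """Quadratic loss 0.5 * ||W_L ... W_1 - Phi||_F^2."""
    return 0.5 * np.linalg.norm(prod(Ws, widths[0]) - Phi) ** 2

def grads(Ws):
    """Gradient of the quadratic loss with respect to each factor W_i."""
    E = prod(Ws, widths[0]) - Phi
    return [prod(Ws[i + 1:], widths[i + 1]).T @ E @ prod(Ws[:i], widths[0]).T
            for i in range(depth)]

# Explicit Euler discretization of gradient flow: dW_i/dt = -grad_i.
eta, steps = 1e-2, 20000
for t in range(steps):
    Ws = [W - eta * g for W, g in zip(Ws, grads(Ws))]

# The one-neuron layer caps the end-to-end rank at one, so the global minimum
# of the quadratic loss is the error of the best rank-one approximation of Phi.
s = np.linalg.svd(Phi, compute_uv=False)
print(f"final loss {loss(Ws):.6f}   rank-one optimum {0.5 * np.sum(s[1:]**2):.6f}")
```

Depending on the (arbitrary) initialization, such a trajectory may approach the rank-one global minimizer or a saddle point / the origin, which is exactly the distinction the paper's analysis resolves.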


