Leveraging the two-timescale regime to demonstrate convergence of neural networks

04/19/2023
by Pierre Marion, et al.

We study the training dynamics of shallow neural networks in a two-timescale regime, in which the stepsizes for the inner layer are much smaller than those for the outer layer. In this regime, we prove convergence of the gradient flow to a global optimum of the non-convex optimization problem in a simple univariate setting. The number of neurons need not be asymptotically large for our result to hold, which distinguishes our approach from popular recent ones such as the neural tangent kernel or mean-field regimes. An experimental illustration is provided, showing that stochastic gradient descent behaves according to our description of the gradient flow and thus converges to a global optimum in the two-timescale regime, but can fail outside of it.
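To make the regime concrete, below is a minimal sketch of two-timescale stochastic gradient descent for a shallow univariate ReLU network, in which the inner-layer stepsize is smaller than the outer-layer stepsize by a factor eps. The network width, target function, stepsizes, and the ReLU activation are illustrative assumptions, not the exact setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shallow univariate network: f(x) = sum_j a[j] * relu(w[j] * x + b[j]).
m = 20                       # number of neurons; need not be large
w = rng.normal(size=m)       # inner-layer weights
b = rng.normal(size=m)       # inner-layer biases
a = np.zeros(m)              # outer-layer weights

eta = 0.05                   # outer-layer (fast) stepsize
eps = 1e-2                   # timescale separation: inner stepsize = eps * eta

def target(x):
    # Hypothetical univariate regression target (assumption for illustration).
    return np.sin(3.0 * x)

for step in range(20_000):
    x = rng.uniform(-1.0, 1.0)           # one-sample SGD
    pre = w * x + b                      # pre-activations
    act = np.maximum(pre, 0.0)           # ReLU activations
    resid = a @ act - target(x)          # prediction error
    # Gradients of the pointwise squared loss 0.5 * resid**2.
    grad_a = resid * act
    grad_w = resid * a * (pre > 0) * x
    grad_b = resid * a * (pre > 0)
    a -= eta * grad_a                    # fast update: outer layer
    w -= eps * eta * grad_w              # slow update: inner layer
    b -= eps * eta * grad_b
```

Taking eps toward 0 approaches the two-timescale limit described in the abstract, where the outer layer equilibrates between moves of the inner layer; setting eps close to 1 leaves this regime, and the paper's experiments indicate that training can then fail to reach a global optimum.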


Related research

02/01/2022
Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks
Despite the non-convex optimization landscape, over-parametrized shallow...

05/19/2022
Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates
We consider optimizing two-layer neural networks in the mean-field regim...

02/02/2023
Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning
We consider the optimisation of large and shallow neural networks via gr...

12/31/2020
Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis
We propose the particle dual averaging (PDA) method, which generalizes t...

10/06/2021
On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
Finding the optimal configuration of parameters in ResNet is a nonconvex...

01/05/2019
Analysis of a Two-Layer Neural Network via Displacement Convexity
Fitting a function by using linear combinations of a large number N of `...

02/06/2023
Rethinking Gauss-Newton for learning over-parameterized models
Compared to gradient descent, Gauss-Newton's method (GN) and variants ar...
