A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions

08/10/2021
by Arnulf Jentzen, et al.

Gradient descent (GD) type optimization methods are the standard instrument to train artificial neural networks (ANNs) with rectified linear unit (ReLU) activation. Despite the great success of GD type optimization methods in numerical simulations for the training of ANNs with ReLU activation, it remains - even in the simplest situation of the plain vanilla GD optimization method with random initializations and ANNs with one hidden layer - an open problem to prove (or disprove) the conjecture that the risk of the GD optimization method converges in the training of such ANNs to zero as the width of the ANNs, the number of independent random initializations, and the number of GD steps increase to infinity. In this article we prove this conjecture in the situation where the probability distribution of the input data is equivalent to the continuous uniform distribution on a compact interval, where the probability distributions for the random initializations of the ANN parameters are standard normal distributions, and where the target function under consideration is continuous and piecewise affine linear. Roughly speaking, the key ingredients in our mathematical convergence analysis are (i) to prove that suitable sets of global minima of the risk functions are twice continuously differentiable submanifolds of the ANN parameter spaces, (ii) to prove that the Hessians of the risk functions on these sets of global minima satisfy an appropriate maximal rank condition, and, thereafter, (iii) to apply the machinery in [Fehrman, B., Gess, B., Jentzen, A., Convergence rates for the stochastic gradient descent method for non-convex objective functions. J. Mach. Learn. Res. 21(136): 1–48, 2020] to establish convergence of the GD optimization method with random initializations.
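The following is a minimal, self-contained sketch (in Python/NumPy, not the authors' code) of the training setup the abstract analyzes: plain vanilla GD with i.i.d. standard normal initializations for a one-hidden-layer ReLU network, inputs drawn uniformly from a compact interval, and a continuous piecewise affine linear target function. The particular target, network width, learning rate, step count, and number of initializations below are illustrative assumptions, and the empirical risk on a fixed sample stands in for the true risk under the input distribution; in the conjecture proved in the article, the width, the number of independent random initializations, and the number of GD steps all tend to infinity.

```python
# Sketch of the setting in the abstract (assumed illustrative values throughout).
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # A continuous, piecewise affine linear target on [0, 1] (assumed example).
    return np.where(x < 0.5, 2.0 * x, 1.0 - 2.0 * (x - 0.5))

def risk_and_grads(w, b, v, c, x, y):
    # Empirical L2 risk of the shallow ReLU network
    #   N(x) = c + sum_k v_k * relu(w_k * x + b_k)
    # together with its gradients with respect to all parameters.
    pre = np.outer(x, w) + b           # (n, width) pre-activations
    act = np.maximum(pre, 0.0)         # ReLU activation
    pred = act @ v + c                 # network outputs, shape (n,)
    err = pred - y
    risk = np.mean(err ** 2)
    n = x.shape[0]
    dpred = 2.0 * err / n              # derivative of the risk w.r.t. the outputs
    dv = act.T @ dpred
    dc = dpred.sum()
    dact = np.outer(dpred, v) * (pre > 0.0)
    dw = x @ dact                      # sum over samples of x_i * dact_i
    db = dact.sum(axis=0)
    return risk, dw, db, dv, dc

def train_once(width=50, steps=5000, lr=0.05, n=512):
    # One plain GD run from a fresh standard normal initialization.
    x = rng.uniform(0.0, 1.0, size=n)  # inputs ~ Unif([0, 1])
    y = target(x)
    w, b = rng.standard_normal(width), rng.standard_normal(width)
    v, c = rng.standard_normal(width), rng.standard_normal()
    for _ in range(steps):
        risk, dw, db, dv, dc = risk_and_grads(w, b, v, c, x, y)
        w, b, v, c = w - lr * dw, b - lr * db, v - lr * dv, c - lr * dc
    return risk

# Several independent random initializations; keep the best run, mirroring
# the role of the number of random initializations in the convergence statement.
best = min(train_once() for _ in range(5))
print(f"best final empirical risk over 5 initializations: {best:.3e}")
```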


Related research

02/19/2021: A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Gradient descent optimization algorithms are the standard ingredients th...

07/17/2016: Piecewise convexity of artificial neural networks
Although artificial neural networks have shown great promise in applicat...

07/09/2021: Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Gradient descent (GD) type optimization schemes are the standard methods...

08/18/2021: Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
The training of artificial neural networks (ANNs) with rectified linear ...

12/17/2021: On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks
In this article we study fully-connected feedforward deep ReLU ANNs with...

02/28/2023: On the existence of minimizers in shallow residual ReLU neural network optimization landscapes
Many mathematical convergence results for gradient descent (GD) based al...

12/27/2018: A discrete version of CMA-ES
Modern machine learning uses more and more advanced optimization techniq...
