A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions

02/19/2021
by   Patrick Cheridito, et al.

Gradient descent optimization algorithms are the standard tools used to train artificial neural networks (ANNs). Even though numerous numerical simulations indicate that gradient descent optimization methods do indeed converge in the training of ANNs, to this day there is no rigorous theoretical analysis that proves (or disproves) this conjecture. In particular, even in the case of the most basic variant of gradient descent optimization algorithms, the plain vanilla gradient descent method, it remains an open problem to prove or disprove the conjecture that gradient descent converges in the training of ANNs. In this article we solve this problem in the special situation where the target function under consideration is a constant function. More specifically, in the case of constant target functions we prove that, in the training of rectified fully-connected feedforward ANNs with one hidden layer, the risk of the gradient descent method does indeed converge to zero. Our mathematical analysis strongly exploits the fact that the rectifier (ReLU) function is the activation function used in the considered ANNs. A key contribution of this work is the explicit specification of a Lyapunov function for the gradient flow system of the ANN parameters. This Lyapunov function is the central tool in our convergence proof of the gradient descent method.
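To make the setting concrete, the following is a minimal, self-contained sketch (not the article's own notation or proof apparatus): plain vanilla gradient descent for a one-hidden-layer ReLU network trained towards a constant target function on [0, 1]. The constant value, network width, learning rate, and sampling of training points are all illustrative assumptions; the printed risk values should decay towards zero, consistent with the convergence statement.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact setting):
# plain vanilla gradient descent for a one-hidden-layer ReLU network
# N_theta(x) = sum_i v_i * relu(w_i * x + b_i) fitted to the constant
# target f(x) = c, with the empirical mean-squared risk as loss.

rng = np.random.default_rng(0)
c = 1.0                                # constant target value (assumption)
width = 16                             # number of hidden neurons (assumption)
xs = rng.uniform(0.0, 1.0, size=200)   # training inputs on [0, 1]

# Parameters theta = (w, b, v): inner weights, biases, outer weights.
w = rng.normal(size=width)
b = rng.normal(size=width)
v = rng.normal(size=width) / width

def risk(w, b, v):
    """Empirical risk: mean of (N_theta(x) - c)^2 over the sample."""
    hidden = np.maximum(np.outer(xs, w) + b, 0.0)   # ReLU activations
    return np.mean((hidden @ v - c) ** 2)

lr = 0.05                              # step size (assumption)
for step in range(5001):
    pre = np.outer(xs, w) + b          # (n, width) pre-activations
    hidden = np.maximum(pre, 0.0)      # ReLU activation
    err = hidden @ v - c               # residual N_theta(x) - c
    act = (pre > 0.0).astype(float)    # ReLU derivative (a.e.)
    n = len(xs)
    # Exact gradients of the empirical risk with respect to (w, b, v).
    grad_v = 2.0 * hidden.T @ err / n
    grad_w = 2.0 * ((err[:, None] * act * v).T @ xs) / n
    grad_b = 2.0 * np.sum(err[:, None] * act * v, axis=0) / n
    w -= lr * grad_w
    b -= lr * grad_b
    v -= lr * grad_v
    if step % 1000 == 0:
        print(f"step {step:5d}  risk {risk(w, b, v):.3e}")
```

The sketch only demonstrates the empirical behaviour that the article analyzes rigorously; the article's actual argument works with the gradient flow of the ANN parameters and an explicitly specified Lyapunov function, not with a numerical experiment.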
