Non-convergence of stochastic gradient descent in the training of deep neural networks

06/12/2020
by Patrick Cheridito et al.

Deep neural networks have been trained successfully with stochastic gradient descent in a variety of application areas. However, there is no rigorous mathematical explanation of why this works so well. The training of neural networks with stochastic gradient descent involves four different discretization parameters: (i) the network architecture; (ii) the size of the training data; (iii) the number of gradient steps; and (iv) the number of randomly initialized gradient trajectories. While it can be shown that the approximation error converges to zero if all four parameters are sent to infinity in the right order, we demonstrate in this paper that stochastic gradient descent fails to converge for rectified linear unit (ReLU) networks whose depth is much larger than their width when the number of random initializations does not increase to infinity fast enough.
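The phenomenon can be illustrated with a short sketch (a toy experiment in PyTorch, not the paper's construction or proof; the depth, width, learning rate, and target function below are illustrative assumptions): a deep, narrow ReLU network is trained with plain SGD from several random initializations, and even the best trajectory tends to stall at a non-zero risk.

```python
import torch
import torch.nn as nn

# Toy illustration (not the paper's proof construction): a ReLU network
# whose depth is much larger than its width, trained with plain SGD.
torch.manual_seed(0)

def make_net(depth=20, width=2):
    """Fully connected ReLU network with `depth` linear layers of `width` units."""
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

# (ii) training data: samples of a simple 1-D target function
x = torch.rand(256, 1)
y = torch.sin(6.0 * x)

best_loss = float("inf")
for trial in range(5):                     # (iv) random initializations
    net = make_net()                       # (i) architecture
    opt = torch.optim.SGD(net.parameters(), lr=1e-2)
    for step in range(2000):               # (iii) gradient steps
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()  # empirical L2 risk
        loss.backward()
        opt.step()
    best_loss = min(best_loss, loss.item())
    print(f"trial {trial}: final loss {loss.item():.4f}")

print(f"best loss over 5 trials: {best_loss:.4f}")
```

In this depth-much-larger-than-width regime, many trajectories collapse to a nearly constant function (for instance through inactive ReLU units), so the printed losses typically remain bounded away from zero unless the number of random initializations is increased substantially.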


Related research

12/22/2022 · Langevin algorithms for Markovian Neural Networks and Deep Stochastic Control
Stochastic Gradient Descent Langevin Dynamics (SGLD) algorithms, which a...

06/06/2020 · Frank-Wolfe optimization for deep networks
Deep neural networks are today one of the most popular choices in classif...

03/03/2020 · Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
In spite of the accomplishments of deep learning based algorithms in num...

02/12/2019 · Towards moderate overparameterization: global convergence guarantees for training shallow neural networks
Many modern neural network architectures are trained in an overparameter...

02/28/2019 · A block-random algorithm for learning on distributed, heterogeneous data
Most deep learning models are based on deep neural networks with multipl...

12/05/2018 · Uncertainty Sampling is Preconditioned Stochastic Gradient Descent on Zero-One Loss
Uncertainty sampling, a popular active learning algorithm, is used to re...

05/20/2017 · Stabilizing Adversarial Nets With Prediction Methods
Adversarial neural networks solve many important problems in data scienc...
