Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions

12/13/2021
by Martin Hutzenthaler, et al.

In many numerical simulations, stochastic gradient descent (SGD) type optimization methods perform very effectively in the training of deep neural networks (DNNs), but to this day it remains an open research problem to provide a mathematical convergence analysis that rigorously explains this success. In this work we study SGD type optimization methods in the training of fully connected feedforward DNNs with rectified linear unit (ReLU) activation. We first establish general regularity properties for the risk functions and their generalized gradient functions appearing in the training of such DNNs. Thereafter, we investigate the plain vanilla SGD optimization method in the training of such DNNs under the assumption that the target function under consideration is a constant function. Specifically, we prove that if the learning rates (the step sizes of the SGD optimization method) are sufficiently small but not L^1-summable, and if the target function is a constant function, then the expectation of the risk of the considered SGD process converges to zero as the number of SGD steps increases to infinity.
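To make the setting concrete, the following minimal sketch (not taken from the paper; the architecture, hyperparameters, and all names are hypothetical, and a single hidden layer is used for brevity although the paper treats deep fully connected ReLU networks) trains a ReLU network with plain vanilla SGD to approximate a constant target function, using step sizes gamma_n = c/(n+1), which are small but not L^1-summable.

```python
# Illustrative sketch (not the paper's exact construction): plain-vanilla SGD
# training a one-hidden-layer ReLU network to approximate a constant target
# function. The step sizes gamma_n = c/(n+1) tend to zero but are not
# l^1-summable. All sizes and constants below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

d_in, width = 2, 16          # input dimension and hidden width (assumed)
xi = 1.0                     # constant target function f(x) = xi
c = 0.05                     # base learning-rate constant (assumed small)

# Network parameters: one hidden ReLU layer, scalar output.
W1 = rng.normal(scale=1.0 / np.sqrt(d_in), size=(width, d_in))
b1 = np.zeros(width)
w2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)
b2 = 0.0

def forward(x):
    """Return the network output at x together with the hidden pre-activations."""
    z = W1 @ x + b1
    h = np.maximum(z, 0.0)   # ReLU activation
    return w2 @ h + b2, z, h

for n in range(20000):
    gamma_n = c / (n + 1)             # non-summable step sizes: sum_n gamma_n = infinity
    x = rng.uniform(-1.0, 1.0, d_in)  # training sample from the input distribution
    y, z, h = forward(x)
    err = y - xi                      # residual with respect to the constant target

    # Generalized gradient of the squared loss (subgradient 0 at the ReLU kink).
    relu_grad = (z > 0.0).astype(float)
    grad_w2 = err * h
    grad_b2 = err
    grad_W1 = np.outer(err * w2 * relu_grad, x)
    grad_b1 = err * w2 * relu_grad

    # Plain-vanilla SGD update.
    W1 -= gamma_n * grad_W1
    b1 -= gamma_n * grad_b1
    w2 -= gamma_n * grad_w2
    b2 -= gamma_n * grad_b2

# Monte Carlo estimate of the risk E[(DNN(X) - xi)^2] after training.
xs = rng.uniform(-1.0, 1.0, (1000, d_in))
risk = np.mean([(forward(x)[0] - xi) ** 2 for x in xs])
print(f"estimated risk after training: {risk:.2e}")
```

The choice gamma_n = c/(n+1) is one example of learning rates that vanish yet sum to infinity, matching the "sufficiently small but not L^1-summable" condition of the convergence statement; the estimated risk printed at the end should be close to zero in this toy run.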
