A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions

04/01/2021
by Arnulf Jentzen et al.

In this article we study the stochastic gradient descent (SGD) optimization method in the training of fully-connected feedforward artificial neural networks with ReLU activation. The main result of this work proves that the risk of the SGD process converges to zero if the target function under consideration is constant. The convergence result covers artificial neural networks consisting of one input layer, one hidden layer, and one output layer (with d ∈ ℕ neurons in the input layer, H ∈ ℕ neurons in the hidden layer, and one neuron in the output layer). The learning rates of the SGD process are assumed to be sufficiently small, and the input data used to train the artificial neural networks are assumed to be independent and identically distributed.
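To make the setting concrete, the following is a minimal numerical sketch, not the paper's proof technique and not code from the article: plain SGD training of a one-hidden-layer ReLU network on i.i.d. inputs with a constant target function. The input distribution, the dimensions d and H, the target value c, the initialization scheme, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, H = 3, 16     # input dimension and hidden-layer width (illustrative)
c = 2.0          # constant target function f(x) = c (assumed value)
eta = 1e-2       # small, constant learning rate (assumed)
steps = 20_000

# One-hidden-layer ReLU network with He-style initialization (assumed scheme)
W = rng.normal(0.0, np.sqrt(2.0 / d), size=(H, d))  # hidden weights
b = np.zeros(H)                                     # hidden biases
v = rng.normal(0.0, 1.0 / np.sqrt(H), size=H)       # output weights
a = 0.0                                             # output bias

def forward(x):
    z = W @ x + b             # hidden pre-activations
    h = np.maximum(z, 0.0)    # ReLU activation
    return z, h, v @ h + a    # realization of the network at x

for t in range(steps):
    x = rng.uniform(-1.0, 1.0, size=d)   # i.i.d. input sample (assumed law)
    z, h, y = forward(x)
    err = y - c                          # residual against the constant target
    # Gradients of the squared loss (y - c)^2 / 2 with respect to all parameters
    grad_v = err * h
    grad_a = err
    relu_grad = (z > 0).astype(float)
    grad_b = err * v * relu_grad
    grad_W = np.outer(grad_b, x)
    # Plain SGD step (batch size one)
    v -= eta * grad_v
    a -= eta * grad_a
    W -= eta * grad_W
    b -= eta * grad_b

# Monte Carlo estimate of the risk E[(N(X) - c)^2] after training
X = rng.uniform(-1.0, 1.0, size=(10_000, d))
preds = np.maximum(X @ W.T + b, 0.0) @ v + a
print("estimated risk:", np.mean((preds - c) ** 2))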

Related research

12/01/2021 · Asymptotic properties of one-layer artificial neural networks with sparse connectivity
A law of large numbers for the empirical distribution of parameters of a...

02/06/2023 · Stochastic Gradient Descent-induced drift of representation in a two-layer neural network
Representational drift refers to over-time changes in neural activation ...

04/04/2022 · Training Fully Connected Neural Networks is ∃ℝ-Complete
We consider the algorithmic problem of finding the optimal weights and b...

11/03/2020 · Geometry Perspective Of Estimating Learning Capability Of Neural Networks
The paper uses statistical and differential geometric motivation to acqu...

09/23/2013 · Implementation of a language driven Backpropagation algorithm
Inspired by the importance of both communication and feedback on errors ...

07/09/2021 · Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Gradient descent (GD) type optimization schemes are the standard methods...
