On the existence of minimizers in shallow residual ReLU neural network optimization landscapes

02/28/2023
by Steffen Dereich, et al.

Many mathematical convergence results for gradient descent (GD) based algorithms rely on the assumption that the GD process is (almost surely) bounded, and also in concrete numerical simulations, divergence of the GD process may slow down, or even completely prevent, convergence of the error function. In practically relevant learning problems, it therefore seems advisable to design the ANN architecture so that GD optimization processes remain bounded. The boundedness of GD processes for a given learning problem appears, however, to be closely related to the existence of minimizers in the optimization landscape; in particular, GD trajectories may escape to infinity if the infimum of the error function (objective function) is not attained in the optimization landscape. This naturally raises the question of the existence of minimizers in the optimization landscape. In the situation of shallow residual ANNs with multi-dimensional input layers and multi-dimensional hidden layers with the ReLU activation, the main result of this work answers this question affirmatively for a general class of loss functions and all continuous target functions. In the proof of this statement, we propose a kind of closure of the search space, whose limits we call generalized responses, and we then provide sufficient criteria on the loss function and the underlying probability distribution which ensure that all additional artificial generalized responses are suboptimal; this finally allows us to conclude the existence of minimizers in the optimization landscape.
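To make the setting concrete, the following minimal sketch is my own illustration, not code from the paper: it trains a shallow residual ReLU network with plain gradient descent and reports the parameter norm, which is the quantity whose boundedness the abstract discusses. The parametrization f_theta(x) = A x + b + V ReLU(W x + c), with an affine skip connection bypassing the ReLU layer, is an assumption for illustration; the paper's exact residual architecture may differ. If the infimum of the error function were not attained, the printed parameter norm could grow without bound while the loss still decreases.

```python
# Minimal sketch (not code from the paper): a shallow residual ReLU network
# f_theta(x) = A x + b + V ReLU(W x + c), trained with plain gradient descent
# on a continuous target function, tracking loss and the parameter norm.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, n = 2, 8, 256

# Synthetic data: a continuous target function on [-1, 1]^d_in.
X = rng.uniform(-1.0, 1.0, size=(n, d_in))
y = np.sin(X[:, :1]) + 0.5 * np.abs(X[:, 1:2])           # shape (n, 1)

# Parameters: hidden layer (W, c), readout V, affine skip connection (A, b).
W = rng.normal(scale=0.5, size=(d_hidden, d_in))
c = np.zeros((d_hidden, 1))
V = rng.normal(scale=0.5, size=(1, d_hidden))
A = np.zeros((1, d_in))
b = np.zeros((1, 1))

lr = 1e-2
for step in range(2001):
    pre = W @ X.T + c                                     # (d_hidden, n)
    hid = np.maximum(pre, 0.0)                            # ReLU
    pred = (A @ X.T + b + V @ hid).T                      # (n, 1)
    err = pred - y                                        # residual of 1/2-MSE loss

    # Backpropagation written out by hand for this tiny model.
    grad_V = (err.T @ hid.T) / n                          # (1, d_hidden)
    delta = (V.T @ err.T) * (pre > 0)                     # grad at pre-activations
    grad_W = delta @ X / n
    grad_c = delta.mean(axis=1, keepdims=True)
    grad_A = err.T @ X / n
    grad_b = err.mean(axis=0, keepdims=True)

    V -= lr * grad_V; W -= lr * grad_W; c -= lr * grad_c
    A -= lr * grad_A; b -= lr * grad_b

    if step % 500 == 0:
        theta_norm = np.sqrt(sum((p ** 2).sum() for p in (W, c, V, A, b)))
        print(f"step {step:4d}  loss {np.mean(err**2):.4f}  |theta| {theta_norm:.2f}")
```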


