On the Omnipresence of Spurious Local Minima in Certain Neural Network Training Problems

02/23/2022
by Constantin Christof, et al.

We study the loss landscape of training problems for deep artificial neural networks with a one-dimensional real output whose activation functions contain an affine segment and whose hidden layers have width at least two. It is shown that such problems possess a continuum of spurious (i.e., not globally optimal) local minima for all target functions that are not affine. In contrast to previous works, our analysis covers all sampling and parameterization regimes, general differentiable loss functions, arbitrary continuous nonpolynomial activation functions, and both the finite- and infinite-dimensional settings. It is further shown that the appearance of the spurious local minima in the considered training problems is a direct consequence of the universal approximation theorem and that the underlying mechanisms also cause, e.g., Lp-best approximation problems to be ill-posed in the sense of Hadamard for all networks that do not have a dense image. The latter result also holds without the assumption of local affine linearity and without any conditions on the hidden layers. The paper concludes with a numerical experiment which demonstrates that spurious local minima can indeed affect the convergence behavior of gradient-based solution algorithms in practice.
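The following is a minimal numerical sketch of the phenomenon described above, not the paper's construction or its experiment. It uses a width-2 ReLU network (ReLU contains an affine segment) with scalar input and output, the illustrative non-affine target x -> x^2 on [0, 1], and the squared loss; all of these choices are assumptions made for illustration. The parameters are placed so that every hidden pre-activation is strictly positive on the sample, i.e., inside the affine segment of ReLU, and so that the network realizes the best affine fit of the target. At such a point the loss gradient vanishes, yet the same architecture can realize a piecewise-linear fit with a kink inside [0, 1] that attains a strictly smaller loss, so the stationary point cannot be globally optimal. (The paper proves that points of this kind are in fact genuine spurious local minima.)

```python
# Minimal sketch (assumed setup, not the paper's experiment): a spurious
# stationary point of a width-2 ReLU network on the non-affine target x^2.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = x ** 2                                   # non-affine target

# Least-squares affine fit  y ~ a*x + c  on the sample.
A = np.column_stack([x, np.ones_like(x)])
(a, c), *_ = np.linalg.lstsq(A, y, rcond=None)

# Network f(x) = w2 . relu(w1*x + b1) + b2, hidden width 2, chosen so that
# w1*x + b1 >= 2 on [0, 1] (affine regime of ReLU) and f(x) = a*x + c exactly.
w1 = np.array([1.0, 1.0])
b1 = np.array([2.0, 2.0])
w2 = np.array([a / 2, a / 2])
b2 = c - 2.0 * a                             # since w2 . b1 = 2a

def loss_and_grads(w1, b1, w2, b2):
    """Squared loss and its gradients for the tiny two-layer ReLU model."""
    pre = np.outer(x, w1) + b1               # pre-activations, shape (n, 2)
    hid = np.maximum(pre, 0.0)               # ReLU
    out = hid @ w2 + b2
    res = out - y
    g_out = 2.0 * res / len(x)               # d(MSE)/d(out)
    g_hid = np.outer(g_out, w2) * (pre > 0)  # backprop through ReLU
    grads = (
        (g_hid * x[:, None]).sum(axis=0),    # dL/dw1
        g_hid.sum(axis=0),                   # dL/db1
        hid.T @ g_out,                       # dL/dw2
        g_out.sum(),                         # dL/db2
    )
    return np.mean(res ** 2), grads

loss, grads = loss_and_grads(w1, b1, w2, b2)
grad_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
print("loss at the affine stationary point:", loss)
print("gradient norm there (approximately 0):", grad_norm)

# A piecewise-linear fit with a kink at x = 0.5 (interpolating x^2 at 0, 0.5
# and 1) is realizable by the same width-2 architecture, e.g. with
# w1 = [1, 1], b1 = [0, -0.5], w2 = [0.5, 1], b2 = 0, and has a strictly
# smaller loss, so the stationary point above is not a global minimum.
out_kink = 0.5 * x + np.maximum(x - 0.5, 0.0)
print("loss of a kinked fit:", np.mean((out_kink - y) ** 2))
```

In the affine regime the network output depends on the parameters only through the effective slope and offset, so the residual of the best affine fit is orthogonal to every parameter direction and the gradient vanishes there; this is the mechanism the sketch makes visible, while the paper establishes the stronger statement that such points form a continuum of spurious local minima.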


