Neural Networks with Finite Intrinsic Dimension Have No Spurious Valleys

02/18/2018
by Luca Venturi, et al.

Neural networks provide a rich class of high-dimensional, non-convex optimization problems. Despite this non-convexity, gradient-descent methods often optimize these models successfully, which has motivated a recent surge of research attempting to characterize the properties of their loss surfaces that may be responsible for this success. In particular, several authors have observed that over-parametrization appears to act as a remedy for non-convexity. In this paper, we address this phenomenon by studying key topological properties of the loss, such as the presence or absence of "spurious valleys", defined as connected components of sub-level sets that do not contain a global minimum. Focusing on a class of two-layer neural networks defined by smooth (but generally non-linear) activation functions, our main contribution is to prove that as soon as the hidden-layer size matches the intrinsic dimension of the reproducing space, defined as the linear functional space generated by the activations, no spurious valleys exist, thus guaranteeing the existence of non-increasing paths from any point to a global minimum. Our setting covers smooth activations such as polynomials, for both the empirical and population risk, as well as generic activations for the empirical risk.
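To make the notion of intrinsic dimension concrete, the following is a minimal numerical sketch, not taken from the paper: the activation, input dimension, and sample sizes are all illustrative choices. The reproducing space of an activation sigma is span{ x -> sigma(w^T x) : w in R^n }, and its dimension can be estimated as the numerical rank of a feature matrix sampled at random weights and random inputs. For the quadratic activation sigma(t) = t^2, this span is the space of quadratic forms in n variables, of dimension n(n+1)/2.

    import numpy as np

    # Illustrative sketch (not from the paper): estimate the intrinsic
    # dimension of span{ x -> sigma(w^T x) : w in R^n } as the rank of a
    # feature matrix evaluated at random weights and random inputs.
    rng = np.random.default_rng(0)

    n = 4                                      # input dimension (assumed)

    def sigma(t):
        return t ** 2                          # quadratic activation (assumed)

    num_units, num_points = 200, 200           # oversample both directions
    W = rng.standard_normal((num_units, n))    # random hidden weights w_i
    X = rng.standard_normal((num_points, n))   # random inputs x_j

    A = sigma(W @ X.T)                         # A[i, j] = sigma(<w_i, x_j>)
    print(np.linalg.matrix_rank(A))            # quadratic forms in 4 variables:
                                               # expect n*(n+1)/2 = 10

Under the paper's result, a hidden layer whose width matches this rank (here, 10) is enough to rule out spurious valleys for this activation.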

Related research

06/10/2017 · Recovery Guarantees for One-hidden-layer Neural Networks
In this paper, we consider regression problems with one-hidden-layer neu...

10/15/2021 · Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
Many supervised machine learning methods are naturally cast as optimizat...

03/27/2020 · Piecewise linear activations substantially shape the loss surfaces of neural networks
Understanding the loss surface of a neural network is fundamentally impo...

06/06/2018 · Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
Several recently proposed architectures of neural networks such as ResNe...

02/18/2018 · Local Geometry of One-Hidden-Layer Neural Networks for Logistic Regression
We study the local geometry of a one-hidden-layer fully-connected neural...

11/04/2016 · Topology and Geometry of Half-Rectified Network Optimization
The loss surface of deep neural networks has recently attracted interest...

02/04/2021 · Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization
Dimension is an inherent bottleneck to some modern learning tasks, where...
