The Goldilocks zone: Towards better understanding of neural network loss landscapes

07/06/2018
by Stanislav Fort et al.

We explore the loss landscape of fully-connected neural networks using random, low-dimensional hyperplanes and hyperspheres. Evaluating the Hessian, H, of the loss function on these hypersurfaces, we observe 1) an unusual excess of the number of positive eigenvalues of H, and 2) a large value of Tr(H) / ||H|| at a well-defined range of configuration space radii, corresponding to a thick, hollow, spherical shell we refer to as the Goldilocks zone. We observe this effect for fully-connected neural networks over a range of network widths and depths on MNIST and CIFAR-10 with the ReLU non-linearity. The effect is not observed for the tanh(x) non-linearity. Using our observations, we demonstrate a close connection between the Goldilocks zone, measures of local convexity/prevalence of positive curvature, and the suitability of a network initialization. We show that the high and stable accuracy reached when optimizing on random, low-dimensional hypersurfaces is directly related to the overlap between the hypersurface and the Goldilocks zone. We note that common initialization techniques initialize neural networks in this particular region of unusually high convexity, and offer a geometric intuition for their success. We take steps towards an analytic description of the general features of the loss function geometry, exploring its anisotropy and strong radial dependence, and support our theoretical results with experiments. Furthermore, we demonstrate that initializing a neural network at a number of points and selecting for high measures of local convexity such as Tr(H) / ||H||, the number of positive eigenvalues of H, or low initial loss leads to statistically significantly faster training on MNIST. Based on our observations, we hypothesize that the Goldilocks zone contains a high density of suitable initialization configurations.
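
As a concrete illustration of the probing procedure described above, the following sketch restricts a small fully-connected ReLU network to a random low-dimensional hyperplane in configuration space and evaluates, over a sweep of radii, the two convexity measures mentioned in the abstract: the fraction of positive Hessian eigenvalues and Tr(H) / ||H||. This is a minimal PyTorch sketch, not the authors' code; the network size, the synthetic data standing in for MNIST, the subspace dimension d, the radius grid, and helper names such as loss_at and convexity_measures are all illustrative assumptions.

import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative stand-ins for the paper's setup: a tiny fully-connected ReLU
# network and synthetic data in place of MNIST (sizes chosen for speed).
D_IN, HIDDEN, N_CLASSES, N_SAMPLES = 20, 32, 10, 256
x = torch.randn(N_SAMPLES, D_IN)
y = torch.randint(0, N_CLASSES, (N_SAMPLES,))

# Parameter shapes of a one-hidden-layer ReLU network, handled functionally so
# the loss can be differentiated with respect to a single flat parameter vector.
shapes = [(HIDDEN, D_IN), (HIDDEN,), (N_CLASSES, HIDDEN), (N_CLASSES,)]
sizes = [math.prod(s) for s in shapes]
D = sum(sizes)  # total number of parameters

def loss_at(theta):
    w1, b1, w2, b2 = [c.reshape(s) for c, s in zip(torch.split(theta, sizes), shapes)]
    hidden = F.relu(x @ w1.t() + b1)
    return F.cross_entropy(hidden @ w2.t() + b2, y)

# A random low-dimensional hyperplane in configuration space:
# theta(z) = r * theta0_hat + P z, with P a D x d matrix with orthonormal columns.
d = 10
theta0_hat = F.normalize(torch.randn(D), dim=0)  # random unit direction
P, _ = torch.linalg.qr(torch.randn(D, d))

def convexity_measures(r):
    """Fraction of positive Hessian eigenvalues and Tr(H) / ||H|| for the loss
    restricted to the random d-dimensional hyperplane at radius r."""
    f = lambda z: loss_at(r * theta0_hat + P @ z)
    H = torch.autograd.functional.hessian(f, torch.zeros(d))  # d x d restricted Hessian
    eigenvalues = torch.linalg.eigvalsh(H)
    return (eigenvalues > 0).float().mean().item(), (H.trace() / H.norm()).item()

# Sweep configuration-space radii; a Goldilocks-zone-like band would show up as
# radii with an excess of positive eigenvalues and a large Tr(H) / ||H||.
for r in [0.1, 0.5, 1.0, 2.0, 5.0, 20.0]:
    frac_pos, tr_over_norm = convexity_measures(r)
    print(f"r = {r:5.1f}   positive eigenvalue fraction = {frac_pos:.2f}   Tr(H)/||H|| = {tr_over_norm:+.3f}")

In the setting reported in the abstract (ReLU networks of various widths and depths on MNIST and CIFAR-10), these measures peak over a well-defined band of configuration-space radii, the Goldilocks zone, which is also where common initialization schemes place the network.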
