Essentially No Barriers in Neural Network Energy Landscape

03/02/2018
by Felix Draxler et al.

Training neural networks involves finding minima of a high-dimensional non-convex loss function. Knowledge of the structure of this energy landscape is sparse. Relaxing from linear interpolations, we construct continuous paths between minima of recent neural network architectures on CIFAR10 and CIFAR100. Surprisingly, the paths are essentially flat in both the training and test landscapes. This implies that neural networks have enough capacity for structural changes, or that these changes are small between minima. Also, each minimum has at least one vanishing Hessian eigenvalue in addition to those resulting from trivial invariance.
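
The path construction can be illustrated with a minimal nudged-elastic-band sketch (the paper automates this idea as AutoNEB on full network weights). The band starts as the linear interpolation between two minima and is then relaxed: each pivot follows the loss gradient perpendicular to the band, while spring forces along the band keep the pivots evenly spaced. The 2D toy loss below, the number of pivots, the spring constant k, and the step size lr are illustrative assumptions, not values from the paper.

import numpy as np

# Toy 2D "loss landscape": a zero-loss valley along y = 1 - x^2 connects the
# two endpoint minima at (-1, 0) and (1, 0), while the straight line between
# them (y = 0) crosses a barrier of height 4 at x = 0. This surface and all
# hyperparameters are illustrative assumptions, not taken from the paper.
def loss(p):
    x, y = p
    return 4.0 * (y - (1.0 - x**2))**2

def grad(p, eps=1e-5):
    # Central-difference gradient, keeping the sketch autograd-free.
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    return g

n_pivots = 16
path = np.linspace([-1.0, 0.0], [1.0, 0.0], n_pivots)  # start from the linear interpolation
print("max loss, linear interpolation:", max(loss(p) for p in path))

k, lr = 4.0, 0.05  # spring constant and step size (assumed values)
for _ in range(500):
    new_path = path.copy()
    for i in range(1, n_pivots - 1):  # endpoints (the two minima) stay fixed
        tau = path[i + 1] - path[i - 1]
        tau /= np.linalg.norm(tau)  # local tangent of the band
        g = grad(path[i])
        g_perp = g - g.dot(tau) * tau  # loss force: keep only the part perpendicular to the band
        spring = k * (np.linalg.norm(path[i + 1] - path[i])
                      - np.linalg.norm(path[i] - path[i - 1])) * tau  # spring force along the band
        new_path[i] -= lr * (g_perp - spring)
    path = new_path

print("max loss, relaxed band:        ", max(loss(p) for p in path))

On this toy surface the relaxed band bends around the barrier into the valley, so its maximum loss drops from 4 to nearly 0. The paper reports the analogous effect for minima of deep networks on CIFAR10 and CIFAR100, where the relaxed paths are essentially flat.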

Related research

04/20/2023  Interpolation property of shallow neural networks
We study the geometry of global minima of the loss landscape of overpara...

06/26/2023  Black holes and the loss landscape in machine learning
Understanding the loss landscape is an important problem in machine lear...

06/12/2019  Semi-flat minima and saddle points by embedding neural networks to overparameterization
We theoretically study the landscape of the training error for neural ne...

06/21/2017  The energy landscape of a simple neural network
We explore the energy landscape of a simple neural network. In particula...

04/26/2018  The loss landscape of overparameterized neural networks
We explore some mathematical features of the loss landscape of overparam...

02/07/2022  Deep Networks on Toroids: Removing Symmetries Reveals the Structure of Flat Regions in the Landscape Geometry
We systematize the approach to the investigation of deep neural network ...

02/20/2021  Learning Neural Network Subspaces
Recent observations have advanced our understanding of the neural networ...
