Sharp Minima Can Generalize For Deep Nets

03/15/2017
by Laurent Dinh, et al.

Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient-based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and cannot be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow reparametrization of a function, the geometry of its parameters can change drastically without affecting its generalization properties.
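To make the symmetry argument concrete, here is a minimal numerical sketch of the rescaling the abstract refers to, assuming a bias-free two-layer ReLU network (the layer shapes, random seed, and scale factor below are arbitrary illustrative choices, not taken from the paper): multiplying the first layer's weights by any α > 0 and dividing the second layer's weights by the same α yields the identical function, even though the two weight settings sit at very different points of parameter space.

```python
import numpy as np

# Non-negative homogeneity of ReLU: relu(a * z) == a * relu(z) for a > 0.
# Hence rescaling one layer by alpha and the next by 1/alpha leaves the
# network's output unchanged while moving the parameters elsewhere in
# weight space. Shapes and seed below are illustrative assumptions.

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.standard_normal((5, 3))   # first-layer weights (hypothetical sizes)
W2 = rng.standard_normal((2, 5))   # second-layer weights
x = rng.standard_normal(3)         # an arbitrary input

def net(W1, W2, x):
    # Bias-free two-layer rectifier network.
    return W2 @ relu(W1 @ x)

alpha = 1e3  # any positive scale works; it can be made arbitrarily large
y_original = net(W1, W2, x)
y_rescaled = net(alpha * W1, W2 / alpha, x)

# The two parameterizations define the same function:
assert np.allclose(y_original, y_rescaled)
print(np.abs(y_original - y_rescaled).max())
```

Because α can be made arbitrarily large, curvature-based flatness measures (e.g. the Hessian's spectrum) at a minimum can be driven to arbitrarily sharp values without changing the function being computed, which is the core of the paper's objection to flatness as an explanation of generalization.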

Related research

06/30/2017 · Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
It is widely observed that deep learning models with learned parameters ...

10/16/2017 · Generalization in Deep Learning
This paper explains why deep learning can generalize well, despite large...

06/07/2019 · Understanding Generalization through Visualizations
The power of neural networks lies in their ability to generalize to unse...

05/21/2018 · SmoothOut: Smoothing Out Sharp Minima for Generalization in Large-Batch Deep Learning
In distributed deep learning, a large batch size in Stochastic Gradient ...

02/14/2023 · The Geometry of Neural Nets' Parameter Spaces Under Reparametrization
Model reparametrization – transforming the parameter space via a bijecti...

02/06/2019 · A Scale Invariant Flatness Measure for Deep Network Minima
It has been empirically observed that the flatness of minima obtained fr...

06/16/2017 · A Closer Look at Memorization in Deep Networks
We examine the role of memorization in deep learning, drawing connection...
