Theory IIIb: Generalization in Deep Networks

06/29/2018
by Tomaso Poggio, et al.

A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of "overfitting", defined in this paper as follows: the expected error does not get worse when the number of neurons or the number of gradient-descent iterations is increased. This is surprising given the large capacity of DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution to the puzzle for linear networks used in binary classification. They prove that minimizing loss functions such as the logistic, cross-entropy, and exponential losses yields asymptotic, "slow" convergence to the maximum-margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the weight matrix at each layer of a deep network converges to a minimum norm solution up to a scale factor (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for ranking the generalization performance of different zero minimizers of the empirical loss.
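As a concrete statement of the linear result the abstract builds on (a sketch of the implicit-bias theorem of Soudry, Gunasekar, Srebro and coauthors for the logistic loss on a linearly separable dataset; the notation below is ours, not taken from this paper): gradient descent on \(L(w) = \sum_i \log\!\big(1 + e^{-y_i w^\top x_i}\big)\) produces iterates whose norm grows without bound while their direction converges to the hard-margin solution,

\[
\frac{w(t)}{\|w(t)\|} \;\longrightarrow\; \frac{\hat{w}}{\|\hat{w}\|},
\qquad
\hat{w} \;=\; \operatorname{arg\,min}_{w} \|w\|^{2}
\quad \text{s.t.} \quad y_i\, w^\top x_i \ge 1 \;\; \text{for all } i,
\]

with \(\|w(t)\|\) growing only like \(\log t\), which is the "slow" convergence the abstract refers to. The present paper's nonlinear result extends this picture layerwise: near a zero minimizer of an exponential-type loss, each weight matrix converges, up to a scale factor, to a minimum norm solution.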

