Deep learning: a statistical viewpoint

03/16/2021
by Peter L. Bartlett, et al.

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.

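The implicit regularization the abstract refers to can be seen in the simplest overparametrized setting: gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-norm interpolating solution. The sketch below is an illustration of that fact, not code from the paper; the dimensions, step size, iteration count, and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100  # overparametrized: far more parameters than samples

X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Gradient descent on the squared loss, started from zero.
w = np.zeros(d)
lr = 0.01
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y) / n

# The minimum-norm interpolant, computed via the pseudoinverse.
w_min_norm = np.linalg.pinv(X) @ y

print(np.max(np.abs(X @ w - y)))       # ~0: training data fit perfectly
print(np.linalg.norm(w - w_min_norm))  # ~0: GD recovered the min-norm solution
```

Every gradient lies in the row space of X, so the iterates never leave that subspace; the unique interpolant within it is exactly the minimum-norm solution given by the pseudoinverse.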

Related research

06/26/2019 · Benign Overfitting in Linear Regression
The phenomenon of benign overfitting is one of the key mysteries uncover...

02/01/2023 · Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression
In deep learning, often the training process finds an interpolator (a so...

06/01/2022 · Realistic Deep Learning May Not Fit Benignly
Studies on benign overfitting provide insights for the success of overpa...

06/10/2022 · The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon
The grokking phenomenon as reported by Power et al. (arXiv:2201.02177)...

12/30/2017 · Theory of Deep Learning III: explaining the non-overfitting puzzle
A main puzzle of deep networks revolves around the absence of overfittin...

01/27/2022 · The Implicit Bias of Benign Overfitting
The phenomenon of benign overfitting, where a predictor perfectly fits n...

01/30/2023 · Deep networks for system identification: a Survey
Deep learning is a topic of considerable current interest. The availabil...
