Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models

10/26/2020
by Jason W. Rocks et al.

The bias-variance trade-off is a central concept in supervised learning. In classical statistics, increasing the complexity of a model (e.g., the number of parameters) reduces bias but also increases variance. Until recently, it was commonly believed that optimal performance is achieved at intermediate model complexities which strike a balance between bias and variance. Modern deep learning methods flout this dogma, achieving state-of-the-art performance using "over-parameterized models" where the number of fit parameters is large enough to perfectly fit the training data. As a result, understanding bias and variance in over-parameterized models has emerged as a fundamental problem in machine learning. Here, we use methods from statistical physics to derive analytic expressions for bias and variance in three minimal models for over-parameterization (linear regression and two-layer neural networks with linear and nonlinear activation functions), allowing us to disentangle properties stemming from the model architecture and random sampling of data. All three models exhibit a phase transition to an interpolation regime where the training error is zero, with linear neural networks possessing an additional phase transition between regimes with zero and nonzero bias. The test error diverges at the interpolation transition for all three models. However, beyond the transition, it decreases again for the neural network models due to a decrease in both bias and variance with increasing model complexity. We also show that over-parameterized models can overfit even in the absence of noise. We synthesize these results to construct a holistic understanding of generalization error and the bias-variance trade-off in over-parameterized models.
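The interpolation transition and the subsequent drop in test error described above can be reproduced qualitatively in a few lines of NumPy. The sketch below is an illustration, not the authors' code: it fits a two-layer network with random, fixed hidden weights (a tanh activation chosen here for concreteness) and a minimum-norm least-squares readout, then sweeps the number of hidden features past the number of training samples. Training error drops to zero once the model can interpolate, while test error peaks near the interpolation threshold and then decreases again. All sizes, the noise level, and the activation are arbitrary choices for illustration.

```python
# Minimal double-descent sketch (illustrative only, not the authors' code).
# A two-layer network with a random, untrained hidden layer and a linear
# readout fit by minimum-norm least squares. Training error reaches zero
# once the number of hidden features reaches the number of samples
# (interpolation); test error spikes near that threshold, then falls again.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d = 100, 1000, 50      # samples, test points, input dimension
beta = rng.normal(size=d) / np.sqrt(d)  # teacher (ground-truth) weights
noise = 0.1                             # label noise strength

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ beta + noise * rng.normal(size=n)
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:   # number of hidden features
    W = rng.normal(size=(d, p)) / np.sqrt(d)       # random, fixed hidden layer
    Z_train = np.tanh(X_train @ W)                 # nonlinear random features
    Z_test = np.tanh(X_test @ W)
    # Minimum-norm least-squares readout; pinv handles the p > n_train case.
    a = np.linalg.pinv(Z_train) @ y_train
    train_mse = np.mean((Z_train @ a - y_train) ** 2)
    test_mse = np.mean((Z_test @ a - y_test) ** 2)
    print(f"p = {p:5d}  train MSE = {train_mse:8.4f}  test MSE = {test_mse:8.4f}")
```

Averaging the test error over many draws of the data and the random hidden weights would further separate the bias and variance contributions, which is the decomposition the paper computes analytically.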
