Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition

11/04/2020
by Ben Adlam et al.

Classical learning theory suggests that the optimal generalization performance of a machine learning model should occur at an intermediate model complexity, with simpler models exhibiting high bias and more complex models exhibiting high variance of the predictive function. However, such a simple trade-off does not adequately describe deep learning models that simultaneously attain low bias and variance in the heavily overparameterized regime. A primary obstacle in explaining this behavior is that deep learning algorithms typically involve multiple sources of randomness whose individual contributions are not visible in the total variance. To enable fine-grained analysis, we describe an interpretable, symmetric decomposition of the variance into terms associated with the randomness from sampling, initialization, and the labels. Moreover, we compute the high-dimensional asymptotic behavior of this decomposition for random feature kernel regression, and analyze the strikingly rich phenomenology that arises. We find that the bias decreases monotonically with the network width, but the variance terms exhibit non-monotonic behavior and can diverge at the interpolation boundary, even in the absence of label noise. The divergence is caused by the interaction between sampling and initialization and can therefore be eliminated by marginalizing over samples (i.e. bagging) or over the initial parameters (i.e. ensemble learning).
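
To make the decomposition concrete, here is a minimal, illustrative sketch (not the authors' code) of the idea above: a Monte Carlo, ANOVA-style split of the test-time variance of random-feature ridge regression into contributions from data sampling, weight initialization, and their interaction, alongside the squared bias. The sizes, the ReLU features, the noiseless linear teacher, and the ridge penalty are all illustrative assumptions, and label noise is omitted, matching the abstract's point that the interesting behavior appears even without it:

# Illustrative sketch (assumptions noted above); requires only NumPy.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, width, n_test = 20, 100, 80, 200   # input dim, train size, feature count, test size
ridge = 1e-4                                   # small ridge penalty (near interpolation)
n_data, n_init = 30, 30                        # Monte Carlo draws for each randomness source

beta_star = rng.standard_normal(d) / np.sqrt(d)     # noiseless linear teacher (an assumption)
X_test = rng.standard_normal((n_test, d))
y_test = X_test @ beta_star

def fit_predict(X_train, y_train, W):
    """Ridge regression on random ReLU features sigma(X W / sqrt(d)); returns test predictions."""
    Phi = np.maximum(X_train @ W / np.sqrt(d), 0.0)
    Phi_test = np.maximum(X_test @ W / np.sqrt(d), 0.0)
    coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(width), Phi.T @ y_train)
    return Phi_test @ coef

# preds[i, j, :] holds test predictions for training set i and initialization j.
datasets = [rng.standard_normal((n_train, d)) for _ in range(n_data)]
inits = [rng.standard_normal((d, width)) for _ in range(n_init)]
preds = np.empty((n_data, n_init, n_test))
for i, X in enumerate(datasets):
    y = X @ beta_star                           # clean labels: no label-noise term
    for j, W in enumerate(inits):
        preds[i, j] = fit_predict(X, y, W)

mean_pred = preds.mean(axis=(0, 1))                   # predictor averaged over both sources
bias2 = np.mean((mean_pred - y_test) ** 2)
total_var = preds.var(axis=(0, 1)).mean()
var_data = preds.mean(axis=1).var(axis=0).mean()      # variance of the init-averaged predictor
var_init = preds.mean(axis=0).var(axis=0).mean()      # variance of the data-averaged predictor
var_interaction = total_var - var_data - var_init     # residual: sampling/initialization interaction

print(f"bias^2           = {bias2:.4f}")
print(f"var(sampling)    = {var_data:.4f}")
print(f"var(init)        = {var_init:.4f}")
print(f"var(interaction) = {var_interaction:.4f}")

Sweeping width across n_train traces out the double-descent curve in this toy setup, and averaging predictions over the n_init initializations before computing the error (ensembling) removes var_init and var_interaction, which is the mechanism by which the paper's divergence at the interpolation boundary is eliminated.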

Related research

01/31/2022 · Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension
From the sampling of data to the initialisation of parameters, randomnes...

10/11/2020 · What causes the test error? Going beyond bias-variance via ANOVA
Modern machine learning methods are often overparametrized, allowing ada...

10/26/2020 · Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
The bias-variance trade-off is a central concept in supervised learning....

03/17/2021 · Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition
Adversarially trained models exhibit a large generalization gap: they ca...

10/06/2020 · Kernel regression in high dimension: Refined analysis beyond double descent
In this paper, we provide a precise characterization of generalization prope...

02/08/2022 · Understanding the bias-variance tradeoff of Bregman divergences
This paper builds upon the work of Pfau (2013), which generalized the bi...

09/09/2019 · Bias-Variance Games
Firms engaged in electronic commerce increasingly rely on machine learni...
