The generalization error of random features regression: Precise asymptotics and double descent curve

08/14/2019
by Song Mei, et al.

Deep learning methods operate in regimes that defy the traditional statistical mindset. Neural network architectures often contain more parameters than training samples, and are so rich that they can interpolate the observed labels even if the latter are replaced by pure noise. Despite their huge complexity, the same architectures achieve small generalization error on real data. This phenomenon has been rationalized in terms of a so-called "double descent" curve. As model complexity increases, the generalization error first follows the usual U-shaped curve, decreasing and then peaking around the interpolation threshold (where the model achieves vanishing training error). However, it descends again as model complexity exceeds this threshold. The global minimum of the generalization error is found in this overparametrized regime, often when the number of parameters is much larger than the number of samples. Far from being a peculiar property of deep neural networks, elements of this behavior have been demonstrated in much simpler settings, including linear regression with random covariates. In this paper we consider the problem of learning an unknown function over the d-dimensional sphere S^{d-1}, from n i.i.d. samples (x_i, y_i) ∈ S^{d-1} × R, i ≤ n. We perform ridge regression on N random features of the form σ(w_a^T x), a ≤ N. This can be equivalently described as a two-layer neural network with random first-layer weights. We compute the precise asymptotics of the generalization error in the limit N, n, d → ∞ with N/d and n/d fixed. This provides the first analytically tractable model that captures all the features of the double descent phenomenon.
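To make the setup concrete, here is a minimal sketch (not the authors' code) of random features ridge regression in this setting, using NumPy. The ReLU activation, the linear target function, the noise level, and the ridge penalty lam are illustrative assumptions, not choices made in the paper.

# Minimal sketch of random features ridge regression on the sphere.
# Illustrative assumptions: sigma = ReLU, a noisy linear target, fixed ridge penalty lam.
import numpy as np

rng = np.random.default_rng(0)

d, n, N = 100, 300, 600            # dimension, samples, random features (N/d, n/d stay of order one)
lam = 1e-3                         # ridge regularization strength (hypothetical value)

def sphere(m, d, rng):
    """Sample m points uniformly on the sphere of radius sqrt(d) in R^d."""
    z = rng.standard_normal((m, d))
    return np.sqrt(d) * z / np.linalg.norm(z, axis=1, keepdims=True)

# Training data: covariates on the sphere, noisy linear target (illustrative choice).
X = sphere(n, d, rng)
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random first-layer weights w_a (unit norm); features sigma(w_a^T x).
W = sphere(N, d, rng) / np.sqrt(d)          # N x d, rows w_a with ||w_a|| = 1
sigma = lambda t: np.maximum(t, 0.0)        # ReLU activation (one possible sigma)
Z = sigma(X @ W.T)                          # n x N feature matrix

# Ridge regression on the random features (second-layer weights only).
a_hat = np.linalg.solve(Z.T @ Z + lam * n * np.eye(N), Z.T @ y)

# Test error estimate on fresh samples.
X_test = sphere(2000, d, rng)
y_test = X_test @ beta
y_pred = sigma(X_test @ W.T) @ a_hat
print("test MSE:", np.mean((y_pred - y_test) ** 2))

Sweeping the ratio N/d (or N/n) while recording the test error is one way to trace out, numerically, the double descent curve that the paper characterizes analytically.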
