Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression

12/08/2021
by   Lijia Zhou, et al.

We study a localized notion of uniform convergence known as an "optimistic rate" (Panchenko 2002; Srebro et al. 2010) for linear regression with Gaussian data. Our refined analysis avoids the hidden constants and logarithmic factors in existing results, which are known to matter in high-dimensional settings, especially for understanding interpolation learning. As a special case, our analysis recovers the guarantee from Koehler et al. (2021), which tightly characterizes the population risk of low-norm interpolators under the benign overfitting conditions. Unlike that special case, however, our optimistic rate bound also covers predictors with arbitrary training error. This allows us to recover classical statistical guarantees for ridge and LASSO regression under random designs, and to obtain a precise understanding of the excess risk of near-interpolators in the over-parameterized regime.

