On the Universality of the Double Descent Peak in Ridgeless Regression

10/05/2020
by David Holzmüller et al.

We prove a non-asymptotic distribution-independent lower bound for the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result to the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and feature maps given by random deep neural networks with analytic activation functions like sigmoid, tanh, softplus or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytic results.
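To make the behavior around the interpolation threshold concrete, the following minimal sketch (not taken from the paper; the dataset sizes, target function, noise level, and feature counts are illustrative assumptions) fits ridgeless, i.e. minimum-norm, linear regression on random Fourier features with noisy labels and reports the noiseless test error as the number of features p sweeps past the number of training points n. The test error typically spikes near p = n, which is the double descent peak the lower bound refers to.

# Minimal sketch (not from the paper): double descent peak of ridgeless
# regression with random Fourier features and noisy labels.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Map inputs X of shape (n, d) to cos(X W + b), one column per feature."""
    return np.cos(X @ W + b)

def ridgeless_fit_predict(Phi_train, y_train, Phi_test):
    """Minimum-norm least squares via the pseudoinverse (ridgeless regression)."""
    theta = np.linalg.pinv(Phi_train) @ y_train
    return Phi_test @ theta

n_train, n_test, d, noise_std = 100, 1000, 5, 0.5
X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
f = lambda X: np.sin(X @ np.ones(d))          # noiseless target function
y_train = f(X_train) + noise_std * rng.standard_normal(n_train)  # noisy labels
y_test = f(X_test)                            # evaluate the noiseless test error

for p in [20, 50, 90, 100, 110, 150, 300, 1000]:   # number of random features
    W = rng.standard_normal((d, p))
    b = rng.uniform(0, 2 * np.pi, p)
    Phi_train = random_fourier_features(X_train, W, b)
    Phi_test = random_fourier_features(X_test, W, b)
    mse = np.mean((ridgeless_fit_predict(Phi_train, y_train, Phi_test) - y_test) ** 2)
    print(f"p = {p:4d}   test MSE = {mse:.3f}")   # peak expected near p = n_train

Here np.linalg.pinv gives the minimum-norm least-squares solution, which is the ridgeless (ridge penalty tending to zero) limit that the paper analyzes; in the overparameterized regime p > n it interpolates the noisy training labels exactly.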


Related research

12/17/2013  Compact Random Feature Maps
Kernel approximation using randomized feature maps has recently gained a...

03/24/2023  Random sampling and unisolvent interpolation by almost everywhere analytic functions
We prove a.s. (almost sure) unisolvency of interpolation by continuous r...

06/05/2020  Triple descent and the two kinds of overfitting: Where & why do they appear?
A recent line of research has highlighted the existence of a double desc...

05/30/2018  On the Spectrum of Random Features Maps of High Dimensional Data
Random feature maps are ubiquitous in modern statistical machine learnin...

05/31/2022  Optimal Activation Functions for the Random Features Regression Model
The asymptotic mean squared test error and sensitivity of the Random Fea...

05/15/2021  Universality and Optimality of Structured Deep Kernel Networks
Kernel based methods yield approximation models that are flexible, effic...

02/28/2017  Deep Semi-Random Features for Nonlinear Function Approximation
We propose semi-random features for nonlinear function approximation. Th...
