
Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting

07/14/2022
by Neil Mallinar, et al.

The practical success of overparameterized neural networks has motivated the recent scientific study of interpolating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks, can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from statistical learning theory. Aiming to explain this, a body of recent work has studied benign overfitting, a phenomenon where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural networks do not fit benignly: modest noise in the training set causes nonzero (but finite) excess risk at test time, implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this intermediate regime tempered overfitting, and we initiate its systematic study. We first explore this phenomenon in the context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and kernel eigenspectrum under which KR exhibits each of the three behaviors. We find that kernels with power-law spectra, including Laplace kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
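As a rough illustration of the setup the abstract describes (not the authors' code), the numpy-only sketch below fits kernel ridge regression with a Laplace kernel, whose spectrum decays as a power law, to noisy labels, then measures excess risk against the noiseless target. The sin target, the bandwidth, the sample sizes, and the ridge values are all illustrative assumptions; near-zero ridge corresponds to (near-)interpolation, where the excess risk is expected to plateau at a noise-dependent, finite level, i.e. the tempered regime.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_kernel(X, Z, bandwidth=1.0):
    # K(x, z) = exp(-|x - z| / bandwidth). Laplace kernels have
    # power-law eigenspectra, the family the paper places in the
    # tempered regime at interpolation.
    return np.exp(-np.abs(X[:, None] - Z[None, :]) / bandwidth)

def fit_kernel_ridge(X_train, y_train, ridge):
    # Solve (K + ridge * I) alpha = y; ridge -> 0 interpolates the data.
    K = laplace_kernel(X_train, X_train)
    return np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)

def predict(alpha, X_train, X_test):
    return laplace_kernel(X_test, X_train) @ alpha

# Clean 1-D target plus additive label noise (illustrative choices).
n_train, noise_std = 200, 0.5
X_train = rng.uniform(-3, 3, n_train)
y_train = np.sin(2 * X_train) + noise_std * rng.normal(size=n_train)

X_test = rng.uniform(-3, 3, 2000)
y_test = np.sin(2 * X_test)  # excess risk is measured against the noiseless target

# Tiny ridge (1e-8) stands in for exact interpolation while keeping
# the linear solve numerically stable.
for ridge in [1e-8, 1e-2, 1.0]:
    alpha = fit_kernel_ridge(X_train, y_train, ridge)
    excess = np.mean((predict(alpha, X_train, X_test) - y_test) ** 2)
    print(f"ridge={ridge:g}: excess risk ~ {excess:.3f}")
```

Sweeping noise_std in such a sketch probes the taxonomy directly: catastrophic fitting would blow up with noise, benign fitting would drive excess risk toward zero, and tempered fitting leaves a nonzero, finite gap that grows with the noise level.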

