Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate

06/13/2018
by Mikhail Belkin, et al.

Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting and random forests. Their performance is robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allow for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and weighted k-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems. These schemes have an inductive bias that benefits from higher dimension, a kind of "blessing of dimensionality". Finally, connections to kernel machines, random forests, and adversarial examples in the interpolated regime are discussed.
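The weighted k-nearest neighbor schemes mentioned in the abstract interpolate because the weight placed on a neighbor grows without bound as its distance to the query shrinks, so the fitted function reproduces every training label exactly while still averaging over k neighbors away from the data. Below is a minimal sketch of that idea, assuming a power-law weight w(d) = d^(-delta); the function name, the choices of k and delta, and the specific weight function are illustrative and not the paper's exact construction.

import numpy as np

def interpolating_knn_predict(X_train, y_train, x_query, k=5, delta=2.0):
    # Distances from the query to every training point
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nn = np.argsort(dists)[:k]          # indices of the k nearest neighbors
    d = dists[nn]
    if np.any(d == 0.0):
        # Exact hit on a training point: return its label, so the rule interpolates
        return float(y_train[nn[d == 0.0][0]])
    w = d ** (-delta)                   # singular weights diverge near training points
    return float(np.dot(w, y_train[nn]) / w.sum())

# Toy usage: a noisy 1-D regression problem; the rule reproduces every training label
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.normal(size=50)
print(interpolating_knn_predict(X, y, X[0]))              # equals y[0]: zero training error
print(interpolating_knn_predict(X, y, np.array([0.5])))   # smooth weighted average away from the data

Despite fitting the noisy labels perfectly, such singularly weighted rules can remain consistent or near-consistent because the singular weights affect predictions only in shrinking neighborhoods of the training points.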


Related research

12/31/2021  Benign Overfitting in Adversarially Robust Linear Classification
"Benign overfitting", where classifiers memorize noisy training data yet...

01/18/2023  Strong inductive biases provably prevent harmless interpolation
Classical wisdom suggests that estimators should avoid fitting noise to ...

02/05/2018  To understand deep learning we need to understand kernel learning
Generalization performance of classifiers in deep learning has recently ...

02/08/2022  Is interpolation benign for random forests?
Statistical wisdom suggests that very complex models, interpolating trai...

10/05/2018  Statistical Optimality of Interpolated Nearest Neighbor Algorithms
In the era of deep learning, understanding over-fitting phenomenon becom...

03/19/2022  Deep Learning Generalization, Extrapolation, and Over-parameterization
We study the generalization of over-parameterized deep networks (for ima...

05/25/2019  Global Minima of DNNs: The Plenty Pantry
A common strategy to train deep neural networks (DNNs) is to use very la...
