Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension

05/23/2023
by Moritz Haas, et al.

The success of over-parameterized neural networks trained to near-zero training error has sparked great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, the current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and to a broader class of kernels, showing that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
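The spiky-smooth idea from the abstract can be illustrated with a minimal sketch: a kernel built as a smooth component (which carries the signal) plus a small, very narrow "spike" component (which absorbs label noise locally around each training point), used for near-interpolating kernel regression. All bandwidths, amplitudes, and the `spiky_smooth_relu` helper below are illustrative assumptions for exposition, not the paper's exact construction or parameters.

```python
import numpy as np

def gauss_kernel(X, Y, bandwidth):
    # Gaussian (RBF) kernel matrix between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def spiky_smooth_kernel(X, Y, bw_smooth=1.0, bw_spike=1e-3, rho=0.1):
    # Smooth component + small-amplitude, very narrow spike component.
    # bw_spike and rho are illustrative values, not taken from the paper.
    return gauss_kernel(X, Y, bw_smooth) + rho * gauss_kernel(X, Y, bw_spike)

def spiky_smooth_relu(x, eps, omega):
    # Hypothetical "spiky-smooth" activation in the spirit of the abstract:
    # ReLU plus a small high-frequency fluctuation (illustrative form only).
    return np.maximum(x, 0.0) + eps * np.sin(x / omega)

# Near-ridgeless kernel regression on noisy samples of a smooth target.
rng = np.random.default_rng(0)
n = 40
X = np.linspace(-1.0, 1.0, n).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(n)

K = spiky_smooth_kernel(X, X)
alpha = np.linalg.solve(K + 1e-10 * np.eye(n), y)  # tiny jitter for stability

X_test = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
f_test = spiky_smooth_kernel(X_test, X) @ alpha

# The predictor essentially interpolates the noisy labels ...
train_fit = spiky_smooth_kernel(X, X) @ alpha
print(np.max(np.abs(train_fit - y)))  # near 0
```

Away from the narrow spikes, the predictor is governed by the smooth component, so it can track the underlying signal despite (nearly) fitting the noise; this is the mechanism the abstract refers to as benign overfitting via large derivatives concentrated in spikes.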


