Noisy Interpolation Learning with Shallow Univariate ReLU Networks

07/28/2023
by Nirmit Joshi et al.

We study the asymptotic overfitting behavior of interpolation with minimum-norm (ℓ_2 norm of the weights) two-layer ReLU networks for noisy univariate regression. We show that overfitting is tempered under the L_1 loss, and more generally under any L_p loss with p < 2, but catastrophic for p ≥ 2.
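
To make the setting concrete, below is a minimal, hypothetical sketch (not the paper's analysis or construction): it approximates a small-norm interpolant by training a wide two-layer ReLU network with a tiny weight-decay penalty on noisy univariate data, then measures the L_1 and L_2 test losses against the noiseless target. The width, learning rate, noise level, and the use of weight decay as a rough stand-in for exact minimum-ℓ_2-norm interpolation are all illustrative assumptions.

```python
# Hypothetical sketch: approximate a small-norm interpolant of noisy 1D data
# and compare L_1 vs. L_2 test losses. Weight decay is only a crude proxy for
# exact minimum-ell_2-norm interpolation; all hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, width = 32, 2000

# Noisy univariate regression data: y = f*(x) + noise, with f* = 0 for
# simplicity, so the L_p test loss of a predictor f is just E|f(x)|^p.
x_train = torch.linspace(-1.0, 1.0, n).unsqueeze(1)
y_train = 0.5 * torch.randn(n, 1)  # pure label noise around the zero target

model = nn.Sequential(nn.Linear(1, width), nn.ReLU(), nn.Linear(width, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)

loss = torch.tensor(float("inf"))
for step in range(20_000):
    opt.zero_grad()
    loss = ((model(x_train) - y_train) ** 2).mean()
    loss.backward()
    opt.step()
    if loss.item() < 1e-6:  # stop once (near-)interpolation is reached
        break

# Estimate the population L_1 and L_2 losses against f* = 0 on a fine grid.
x_test = torch.linspace(-1.0, 1.0, 2000).unsqueeze(1)
with torch.no_grad():
    pred = model(x_test)
print(f"train MSE {loss.item():.2e} | "
      f"L1 test loss {pred.abs().mean().item():.3f} | "
      f"L2 test loss {(pred ** 2).mean().item():.3f}")
```

Note that tempered versus catastrophic overfitting is an asymptotic statement about the population L_p risk of the exact minimum-norm interpolant as the sample size grows; a single finite run like this only illustrates how the two losses are measured, not the dichotomy itself.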

Related research:

- 11/10/2021 · Tight bounds for minimum l1-norm interpolation of noisy data
  We provide matching upper and lower bounds of order σ^2/log(d/n) for the...
- 02/14/2023 · Interpolation Learning With Minimum Description Length
  We prove that the Minimum Description Length learning rule exhibits temp...
- 06/18/2019 · Gradient Dynamics of Shallow Univariate ReLU Networks
  We present a theoretical and empirical study of the gradient dynamics of...
- 06/16/2023 · Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?
  We study benign overfitting in two-layer ReLU networks trained using gra...
- 03/02/2023 · Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization
  Linear classifiers and leaky ReLU networks trained by gradient flow on t...
- 05/24/2023 · From Tempered to Benign Overfitting in ReLU Neural Networks
  Overparameterized neural networks (NNs) are observed to generalize well ...
