A law of adversarial risk, interpolation, and label noise

07/08/2022
by Daniel Paleka, et al.

In supervised learning, it has been shown that, in many circumstances, label noise in the data can be interpolated without any penalty to test accuracy. We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem relating adversarial risk to label noise in terms of the data distribution. Our results are almost sharp without accounting for the inductive bias of the learning algorithm. We also show that inductive bias makes the effect of label noise much stronger.
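The core claim, that a classifier forced to fit (interpolate) mislabeled points becomes easy to attack near those points, can be made concrete in a toy experiment. The sketch below is a hypothetical illustration of that phenomenon, not the paper's construction or proof; the 1-nearest-neighbor interpolator, the Gaussian mixture data, the noise rate, and the perturbation radius eps are all assumptions made for the example.

# Toy illustration (assumed setup, not the paper's construction): a 1-NN
# classifier interpolates flipped labels, keeps high clean test accuracy,
# yet points near the mislabeled training examples become attackable with
# a small perturbation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, d, noise_rate, eps = 2000, 2, 0.05, 0.3  # illustrative choices

# Two well-separated Gaussian classes.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 3.0 * y[:, None]

# Flip a small fraction of labels and interpolate them (zero training error).
y_noisy = y.copy()
flip = rng.random(n) < noise_rate
y_noisy[flip] ^= 1
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y_noisy)

# Clean test accuracy is barely hurt by the interpolated noise...
y_test = rng.integers(0, 2, size=n)
X_test = rng.normal(size=(n, d)) + 3.0 * y_test[:, None]
print("clean test accuracy:", clf.score(X_test, y_test))

# ...but a correctly classified test point can often be flipped by moving it
# a distance eps toward its nearest mislabeled training point.
mislabeled = X[flip]

def attacked(x, true_label):
    diffs = mislabeled - x
    v = diffs[np.argmin(np.linalg.norm(diffs, axis=1))]
    x_adv = x + eps * v / (np.linalg.norm(v) + 1e-12)
    return clf.predict(x_adv[None])[0] != true_label

ok = clf.predict(X_test) == y_test
rate = np.mean([attacked(x, t) for x, t in zip(X_test[ok], y_test[ok])])
print(f"fraction of correctly classified test points flipped at eps={eps}: {rate:.3f}")

If the classifier is smoothed so that it no longer interpolates the flipped labels (for example, a larger n_neighbors), the attack in this sketch loses its targets; the example is only meant to make the interpolation-to-vulnerability link concrete, not to reproduce the paper's bounds.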


Related research

03/07/2022  Fast rates for noisy interpolation require rethinking the effects of inductive bias
Good generalization performance on high-dimensional data crucially hinge...

03/09/2023  Efficient Testable Learning of Halfspaces with Adversarial Label Noise
We give the first polynomial-time algorithm for the testable learning of...

01/18/2023  Strong inductive biases provably prevent harmless interpolation
Classical wisdom suggests that estimators should avoid fitting noise to ...

07/08/2020  How benign is benign overfitting?
We investigate two causes for adversarial vulnerability in deep neural n...

12/17/2020  Characterizing the Evasion Attackability of Multi-label Classifiers
Evasion attack in multi-label learning systems is an interesting, widely...

10/07/2021  Double Descent in Adversarial Training: An Implicit Label Noise Perspective
Here, we show that the robust overfitting shall be viewed as the early p...

04/26/2021  An Exploration into why Output Regularization Mitigates Label Noise
Label noise presents a real challenge for supervised learning algorithms...
