Robust learning under clean-label attack

03/01/2021
by Avrim Blum, et al.

We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) correctly-labeled examples into the training set to fool the algorithm into making mistakes on specific test instances. The learning goal is to minimize the attackable rate (the probability mass of attackable test instances), which is more difficult than optimal PAC learning. As we show, any robust algorithm with a diminishing attackable rate achieves the optimal dependence on ϵ in its PAC sample complexity, i.e., O(1/ϵ). On the other hand, the attackable rate can be large even for some optimal PAC learners, e.g., SVM for linear classifiers. Furthermore, we show that the class of linear hypotheses is not robustly learnable when the data distribution has zero margin, and is robustly learnable in the case of positive margin but requires a sample complexity exponential in the dimension. For a general hypothesis class with bounded VC dimension, if the attacker is limited to adding at most t > 0 poison examples, the optimal robust learning sample complexity grows almost linearly with t.
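To make the threat model concrete, here is a minimal toy sketch (not from the paper) of a clean-label attack on a 1-D threshold learner. The learner, the data, and the target point are all illustrative assumptions: the learner outputs the midpoint between the closest oppositely-labeled training examples, and the attacker injects a single honestly-labeled point that drags the learned threshold past a chosen test instance.

```python
# Toy clean-label poisoning illustration (hypothetical setup, not the
# paper's construction). True concept: f(x) = 1 iff x >= 0.5.

def learn_threshold(data):
    """A natural ERM rule for 1-D thresholds: return the midpoint
    between the largest negative and smallest positive example."""
    neg = max(x for x, y in data if y == 0)
    pos = min(x for x, y in data if y == 1)
    return (neg + pos) / 2

clean = [(0.1, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
target = 0.45                 # test point; its true label is 0

h = learn_threshold(clean)    # midpoint of 0.3 and 0.8, i.e. ~0.55
assert target < h             # prediction 0: correct on the target

# Attack: inject ONE correctly-labeled positive example just above the
# true threshold 0.5. The label is honest (clean-label), yet it pulls
# the learned threshold below the target point.
poisoned = clean + [(0.51, 1)]
h_atk = learn_threshold(poisoned)   # midpoint of 0.3 and 0.51, ~0.405
assert target >= h_atk        # prediction flips to 1: mistake on target
```

The target point here is "attackable": a small set of honestly-labeled examples suffices to make this particular learner err on it, which is exactly the quantity (the attackable rate, in expectation over test instances) that the paper's learning goal minimizes.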


