Lower Bounds for Adversarially Robust PAC Learning

06/13/2019
by Dimitrios I. Diochnos, et al.

In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to misclassify the adversarially perturbed sample point x̃, i.e., to achieve h(x̃) ≠ c(x̃), where c is the ground-truth concept and h is the learned hypothesis. Previous work on PAC learning of adversarial examples has modeled adversarial examples as corrupted inputs, in which the goal of the adversary is to achieve h(x̃) ≠ c(x), where x is the original untampered instance. These two definitions of adversarial risk coincide for many natural distributions, such as images, but are incomparable in general. We first prove that, for many theoretically natural input spaces of high dimension n (e.g., the isotropic Gaussian in dimension n under ℓ_2 perturbations), if the adversary is allowed to apply a sublinear o(||x||) amount of perturbation to the test instances, then PAC learning requires sample complexity that is exponential in n. This is in contrast with results proved in the corrupted-input framework, in which the sample complexity of robust learning is only polynomially larger. We then formalize hybrid attacks, in which the evasion attack is preceded by a poisoning attack. This is perhaps reminiscent of "trapdoor attacks," which also involve a poisoning phase; however, the evasion phase here uses the error-region definition of risk, which aims at misclassifying the perturbed instances. In this case, we show that PAC learning is sometimes impossible altogether, even when it is possible without the attack (e.g., due to bounded VC dimension).
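For concreteness, the two notions of adversarial risk contrasted above can be written as follows. This is a standard formalization with perturbation budget δ and perturbation ball B_δ(x) around a test point x drawn from the distribution D; the paper's exact definitions may differ in minor details.

Risk_ER(h, c) = Pr_{x ∼ D}[ ∃ x̃ ∈ B_δ(x) : h(x̃) ≠ c(x̃) ]   (error region, the definition used in this work)
Risk_CI(h, c) = Pr_{x ∼ D}[ ∃ x̃ ∈ B_δ(x) : h(x̃) ≠ c(x) ]    (corrupted instance, as in prior work)

The two quantities agree whenever the ground-truth label c is constant on B_δ(x), which holds for many natural distributions; in general, however, neither notion dominates the other.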


