Adversarial examples from computational constraints

05/25/2018
by Sébastien Bubeck, et al.

Why are classifiers in high dimension vulnerable to "adversarial" perturbations? We show that it is likely not due to information-theoretic limitations, but rather could be due to computational constraints. First, we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust classifier is computationally intractable. More precisely, we construct a binary classification task in high-dimensional space which is (i) information-theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet not efficiently robustly learnable, even for small perturbations, by any algorithm in the statistical query (SQ) model. This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.
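To build intuition for point (ii) versus (iii), the following toy sketch (not the paper's construction; the classifier and data here are illustrative assumptions) shows why a linear separator in high dimension is fragile: an L-infinity perturbation of size on the order of 1/sqrt(d) per coordinate is enough to flip its prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # high dimension

# Toy linear classifier sign(<w, x>); w and x are illustrative choices,
# not the construction from the paper.
w = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)  # unit-norm weight vector
x = 2.0 * w                                       # classified +1, margin <w, x> = 2
assert np.sign(w @ x) == 1.0

# Perturbing every coordinate by -eps * sign(w_i) shifts the score by
# -eps * ||w||_1 = -eps * sqrt(d), so eps ~ 1/sqrt(d) already flips the label.
eps = 3.0 / np.sqrt(d)            # 0.03 per coordinate for d = 10_000
x_adv = x - eps * np.sign(w)
assert np.sign(w @ x_adv) == -1.0

print(np.max(np.abs(x_adv - x)))  # per-coordinate change is eps = 0.03
```

The linear separator is easy to learn, yet the perturbation budget needed to fool it shrinks as the dimension grows, which is the non-robustness that the paper's SQ lower bound shows cannot be fixed efficiently for its constructed task.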


research
11/15/2018

Adversarial Examples from Cryptographic Pseudo-Random Generators

In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we arg...
research
01/02/2019

Adversarial Robustness May Be at Odds With Simplicity

Current techniques in machine learning are so far unable to learn cl...
research
02/09/2015

Analysis of classifiers' robustness to adversarial perturbations

The goal of this paper is to analyze an intriguing phenomenon recently d...
research
02/04/2019

Computational Limitations in Robust Classification and Win-Win Results

We continue the study of computational limitations in learning robust cl...
research
05/28/2019

Adversarially Robust Learning Could Leverage Computational Hardness

Over recent years, devising classification algorithms that are robust to...
research
09/09/2018

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

Many modern machine learning classifiers are shown to be vulnerable to a...
research
05/28/2021

Towards optimally abstaining from prediction

A common challenge across all areas of machine learning is that training...
