Adversarially Robust Learning Could Leverage Computational Hardness

05/28/2019
by Sanjam Garg, et al.

Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural networks (DNNs) appear to be susceptible to small, imperceptible changes to test instances. In this work, we study whether there is any learning task for which it is possible to design classifiers that are robust only against polynomial-time adversaries. Indeed, numerous cryptographic tasks (e.g., encryption of long messages) can only be secure against computationally bounded adversaries, and are provably impossible against computationally unbounded attackers. Thus, it is natural to ask whether the same strategy could help robust learning. We show that the computational limitations of attackers can indeed be useful in robust learning, by demonstrating a classifier for a learning task in which computational and information-theoretic adversaries of bounded perturbations have very different power. Namely, while computationally unbounded adversaries can attack successfully and find adversarial examples with small perturbation, polynomial-time adversaries are unable to do so unless they can break standard cryptographic hardness assumptions. Our results therefore indicate that an approach similar to cryptography's (relying on computational hardness) holds promise for achieving computationally robust machine learning. We also show that the existence of such a learning task, in which computational robustness beats information-theoretic robustness, implies the existence of (average-case) hard problems in NP.
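
To make the flavor of such a separation concrete, the following is a minimal toy sketch (our illustration, not the paper's actual construction) of a classifier whose positive class is certified by a digital signature: turning a rejected input into an accepted one amounts to forging a signature, which a computationally unbounded attacker can always do (e.g., by recovering the signing key), while a polynomial-time attacker cannot under standard assumptions. The sketch assumes the Python `cryptography` package and uses Ed25519 as the underlying hardness assumption.

```python
# Toy sketch: the label of an input (message, signature) is 1 iff the
# signature verifies under a fixed public key. This illustrates the
# abstract's idea, not the construction from the paper itself.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Setup: a signing key held by the task designer; the classifier only
# needs the public verification key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def classify(message: bytes, signature: bytes) -> int:
    """Return 1 iff `signature` is a valid Ed25519 signature on `message`."""
    try:
        verify_key.verify(signature, message)
        return 1
    except InvalidSignature:
        return 0

# A correctly signed instance is accepted.
msg = b"test instance"
sig = signing_key.sign(msg)
assert classify(msg, sig) == 1

# Flipping one bit of the signature yields a rejected input; perturbing it
# back into the accepted class requires forging a fresh valid signature.
# An unbounded adversary can do this (e.g., by brute-forcing the signing
# key), while a polynomial-time adversary cannot, assuming Ed25519 is
# existentially unforgeable.
bad_sig = bytes([sig[0] ^ 1]) + sig[1:]
assert classify(msg, bad_sig) == 0
```

This sketch only conveys the asymmetry between bounded and unbounded attackers in one attack direction; the learning task studied in the paper is engineered so that finding any small label-changing perturbation reduces to breaking the cryptographic assumption.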


Related research

10/02/2018 · Can Adversarially Robust Learning Leverage Computational Hardness?
Making learners robust to adversarial perturbation at test time (i.e., e...

02/04/2019 · Computational Limitations in Robust Classification and Win-Win Results
We continue the study of computational limitations in learning robust cl...

11/15/2018 · Adversarial Examples from Cryptographic Pseudo-Random Generators
In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we arg...

07/14/2020 · Adversarial Examples and Metrics
Adversarial examples are a type of attack on machine learning (ML) syste...

05/25/2018 · Adversarial examples from computational constraints
Why are classifiers in high dimension vulnerable to "adversarial" pertur...

11/12/2019 · On Robustness to Adversarial Examples and Polynomial Optimization
We study the design of computationally efficient algorithms with provabl...

05/31/2019 · Human-Usable Password Schemas: Beyond Information-Theoretic Security
Password users frequently employ passwords that are too simple, or they ...
