Limitations of adversarial robustness: strong No Free Lunch Theorem

10/08/2018
by Elvis Dohmatob, et al.

This manuscript presents new results on adversarial robustness in machine learning, an important yet largely open problem. We show that if, conditioned on the class label, the data distribution satisfies the generalized Talagrand transportation-cost inequality (a condition met, for example, when the conditional distribution has a log-concave density), then any classifier can be adversarially fooled with high probability once the perturbations are slightly larger than the natural noise level in the problem. We call this result the Strong "No Free Lunch" Theorem, since several recent results on the subject (Tsipras et al. 2018, Fawzi et al. 2018, etc.) are recovered as special cases. Our theoretical bounds are demonstrated on both simulated and real data (MNIST), and they extend readily to distributional robustness (with the 0/1 loss). We conclude the manuscript with some speculation on possible future research directions.
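To make the flavor of the statement concrete, here is a minimal numerical sketch (not code from the paper) of the Gaussian special case, where the bound follows from Gaussian isoperimetry: any classifier with clean error err(0) on a class-conditional N(mu, sigma^2 I) distribution suffers l2-adversarial error at least Phi(Phi^{-1}(err(0)) + eps/sigma) at perturbation budget eps. The dimension, the linear classifier, and all variable names below are illustrative assumptions, not the paper's experimental setup.

# Gaussian special case of the "no free lunch" bound: for class-conditional
# N(mu, sigma^2 I) data, any classifier with clean error err(0) has l2-adversarial
# error at least Phi(Phi^{-1}(err(0)) + eps / sigma). A half-space classifier
# attains this bound with equality, which the simulation below checks empirically.
import numpy as np
from scipy.stats import norm

sigma = 1.0          # natural noise level of the class-conditional distribution
d = 100              # input dimension (illustrative)
n = 20000            # number of test samples drawn from the positive class
rng = np.random.default_rng(0)

# Positive class centered at mu; a linear classifier sign(<w, x> - b).
mu = np.zeros(d); mu[0] = 1.5
X_pos = rng.normal(mu, sigma, size=(n, d))
w = np.zeros(d); w[0] = 1.0
b = 0.0

scores = X_pos @ w - b
clean_err = np.mean(scores <= 0)          # err(0): misclassified clean points
print(f"clean error: {clean_err:.3f}")

for eps in [0.5, 1.0, 1.5]:
    # Optimal l2 attack on a linear classifier: move eps along -w / ||w||,
    # i.e. a point is fooled whenever its margin is at most eps * ||w||.
    adv_err_empirical = np.mean(scores - eps * np.linalg.norm(w) <= 0)
    # Isoperimetric lower bound on the adversarial error of ANY classifier
    # with the same clean error.
    adv_err_bound = norm.cdf(norm.ppf(clean_err) + eps / sigma)
    print(f"eps={eps:.1f}  empirical adversarial error={adv_err_empirical:.3f}  "
          f"lower bound={adv_err_bound:.3f}")

Running this, the empirical adversarial error of the half-space tracks the lower bound, which illustrates the theorem's message: once eps exceeds a small multiple of sigma, the adversarial error of any classifier is driven toward one.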


