Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption

06/17/2020
by Elvis Dohmatob, et al.

We theoretically analyse the limits of robustness to test-time adversarial and noisy examples in classification. Our work focuses on deriving bounds which apply uniformly to all classifiers (i.e., all measurable functions from features to labels) for a given problem. Our contributions are threefold. (1) In the classical framework of adversarial attacks, we use optimal transport theory to derive variational formulae for the Bayes-optimal error a classifier can make on a given classification problem, subject to adversarial attacks. The optimal adversarial attack is then an optimal transport plan for a certain binary cost function induced by the specific attack model, and can be computed via a simple algorithm based on maximal matching on bipartite graphs. (2) We derive explicit lower bounds on the Bayes-optimal error in the case of the popular distance-based attacks. These bounds are universal in the sense that they depend on the geometry of the class-conditional distributions of the data, but not on any particular classifier. Our results stand in sharp contrast with the existing literature, wherein adversarial vulnerability of classifiers is derived as a consequence of nonzero ordinary test error. (3) Finally, we study robustness to random noise corruption, wherein the attacker (or nature) is allowed to inject random noise into examples at test time. We establish nonlinear data-processing inequalities induced by such corruptions, and use them to obtain lower bounds on the Bayes-optimal error for the noisy problem.
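To make contribution (1) concrete, here is a minimal sketch of the matching-based computation, assuming equal-size samples from the two class-conditional distributions and an l2 threat model of radius eps. The pairing rule (two points are "confusable" when an eps-bounded attacker can move both onto a common point) and the function name are illustrative assumptions, not the paper's exact construction; the matched fraction is the quantity that drives the lower bound.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import maximum_bipartite_matching

    def matched_mass_lower_bound(X0, X1, eps):
        # Binary cost: an edge is present iff an eps-bounded l2 attacker
        # can push the two points to a common point, i.e. ||x0 - x1|| <= 2*eps.
        dists = np.linalg.norm(X0[:, None, :] - X1[None, :, :], axis=-1)
        adjacency = csr_matrix(dists <= 2 * eps)
        # Size of a maximum matching on the bipartite confusability graph;
        # unmatched rows are marked with -1.
        match = maximum_bipartite_matching(adjacency, perm_type='column')
        return (match != -1).sum() / len(X0)

    # Toy usage: two Gaussian blobs whose confusable mass grows with eps.
    rng = np.random.default_rng(0)
    X0 = rng.normal(loc=-1.0, size=(200, 2))
    X1 = rng.normal(loc=+1.0, size=(200, 2))
    print(matched_mass_lower_bound(X0, X1, eps=0.5))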
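For contribution (3), the following snippet illustrates the general shape of such a bound, using the classical (linear) total-variation data-processing inequality with Dobrushin coefficient eta(K) as a stand-in; the paper's inequalities are nonlinear and can be tighter. Here P_0, P_1 are the class-conditional distributions of a balanced binary problem and K is the noise channel applied at test time.

    \[
    \mathrm{err}^\star(P_0, P_1) = \tfrac{1}{2}\bigl(1 - \mathrm{TV}(P_0, P_1)\bigr),
    \qquad
    \mathrm{TV}(P_0 K,\, P_1 K) \le \eta(K)\,\mathrm{TV}(P_0, P_1),
    \]
    \[
    \text{hence}\quad
    \mathrm{err}^\star(P_0 K,\, P_1 K) \ge \tfrac{1}{2}\bigl(1 - \eta(K)\,\mathrm{TV}(P_0, P_1)\bigr).
    \]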


Related research

Lower Bounds on Adversarial Robustness from Optimal Transport (09/26/2019)
While progress has been made in understanding the robustness of machine ...

Adversarial Risk via Optimal Transport and Optimal Couplings (12/05/2019)
The accuracy of modern machine learning algorithms deteriorates severely...

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker (02/21/2023)
Finding classifiers robust to adversarial examples is critical for their...

Are Generative Classifiers More Robust to Adversarial Attacks? (02/19/2018)
There is a rising interest in studying the robustness of deep neural net...

A PAC-Bayes Analysis of Adversarial Robustness (02/19/2021)
We propose the first general PAC-Bayesian generalization bounds for adve...

Optimal Clustering under Uncertainty (06/02/2018)
Classical clustering algorithms typically either lack an underlying prob...

From Soft-Minoration to Information-Constrained Optimal Transport and Spiked Tensor Models (05/14/2023)
Let P_Z be a given distribution on ℝ^n. For any y∈ℝ^n, we may interpret ...
