Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker

02/21/2023
by Sihui Dai, et al.

Finding classifiers robust to adversarial examples is critical for their safe deployment. Determining the robustness of the best possible classifier under a given threat model for a given data distribution, and comparing it to the robustness achieved by state-of-the-art training methods, is thus an important diagnostic tool. In this paper, we derive achievable, information-theoretic lower bounds on the 0-1 loss of multi-class classifiers on any discrete dataset in the presence of a test-time attacker. We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and the adversarial constraints. We further define variants of the attacker-classifier game that bound the range of the optimal loss more efficiently than the full hypergraph construction. Our evaluation provides, for the first time, an analysis of the gap to optimal robustness for classifiers in the multi-class setting on benchmark datasets.
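To make the abstract's framework concrete, the sketch below illustrates the conflict-graph idea in its simplest, pairwise form; it is a simplification of my own, not the paper's full construction. It assumes an l_inf attacker with budget eps on a small discrete dataset, and the helper names (`build_conflict_edges`, `optimal_loss_lower_bound`) are hypothetical. Because it omits the higher-order hyperedges of the full conflict hypergraph, the resulting linear program yields only a lower bound on the optimal robust 0-1 loss.

```python
# A minimal sketch, NOT the authors' implementation: pairwise conflict
# edges plus an LP relaxation, assuming an l_inf attacker with budget eps.
# Helper names are hypothetical.
import itertools

import numpy as np
from scipy.optimize import linprog


def build_conflict_edges(X, y, eps):
    """Return pairs (i, j) with different labels whose eps-balls overlap.

    If ||x_i - x_j||_inf <= 2*eps, the attacker can move both points to a
    common perturbed input, so no classifier can be correct on both.
    """
    edges = []
    for i, j in itertools.combinations(range(len(X)), 2):
        if y[i] != y[j] and np.max(np.abs(X[i] - X[j])) <= 2 * eps:
            edges.append((i, j))
    return edges


def optimal_loss_lower_bound(X, y, eps):
    """LP lower bound on the optimal robust 0-1 loss.

    Maximize sum_i q_i with q_i in [0, 1] and q_i + q_j <= 1 for every
    conflict edge, where q_i is the probability that example i is
    classified correctly under attack. The optimal loss is then at
    least 1 - (LP value) / n; adding the higher-order hyperedges of the
    paper's full construction would tighten this further.
    """
    n = len(X)
    edges = build_conflict_edges(X, y, eps)
    if not edges:
        return 0.0  # no conflicts: perfect robust accuracy is feasible
    A_ub = np.zeros((len(edges), n))
    for k, (i, j) in enumerate(edges):
        A_ub[k, i] = A_ub[k, j] = 1.0
    res = linprog(c=-np.ones(n),  # linprog minimizes, so negate
                  A_ub=A_ub, b_ub=np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return 1.0 - (-res.fun) / n
```

Note that the number of hyperedges in the full construction grows combinatorially with the dataset size and number of classes, which is consistent with the abstract's motivation for defining game variants that bound the optimal loss more efficiently.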
