Computational Asymmetries in Robust Classification

06/25/2023
by Samuele Marro, et al.

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is NP-hard, ensuring their robustness at training time is Σ^p_2-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is NP-hard, while attacking it is Σ^p_2-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
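The idea of using attacks for certification can be illustrated with a minimal sketch: a successful attack within a perturbation budget ε is a sound proof that a point is *not* robust, while failure is in general inconclusive. The sketch below uses a toy linear classifier (all names and parameters here are illustrative, not from the paper); for linear models the L∞ attack shown is in fact exact, so the check is a complete certificate in this special case.

```python
import numpy as np

# Toy linear classifier f(x) = sign(w·x + b); weights are illustrative.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return int(np.sign(w @ x + b))

def attack_linear(x, eps):
    """Strongest L-inf perturbation of size eps against a linear model:
    push every coordinate by eps in the direction that reduces the margin.
    Returns an adversarial example, or None if no label flip occurs."""
    label = predict(x)
    x_adv = x - label * eps * np.sign(w)
    return x_adv if predict(x_adv) != label else None

def certify_non_robust(x, eps):
    """Counter-Attack-style use of an attack as a certificate:
    a successful attack soundly proves NON-robustness at radius eps;
    failure is inconclusive in general (but exact for linear models)."""
    return attack_linear(x, eps) is not None

x = np.array([1.0, 0.2])           # margin = w·x + b = 1.1
print(certify_non_robust(x, 0.3))  # 1.1 / ||w||_1 ≈ 0.37 > 0.3 → False
print(certify_non_robust(x, 0.5))  # 0.5 > 0.37 → True
```

For a ReLU network the same falsification logic applies, but the attack step is NP-hard rather than closed-form, which is exactly the asymmetry the abstract describes.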


