Analysis of classifiers' robustness to adversarial perturbations

02/09/2015
by Alhussein Fawzi, et al.

The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on the families of linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework further suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers relative to the difficulty of the classification task (captured by the distinguishability). Moreover, we show a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor proportional to √(d) (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high-dimensional problems, which was empirically observed in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks. Our analysis is complemented by experimental results on controlled and real-world data.
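As a rough illustration of the √(d) gap for linear classifiers (a minimal sketch, not taken from the paper: the dimension d, the random weights w, and the sample x below are illustrative assumptions), the following NumPy snippet compares the minimal adversarial perturbation of a linear classifier f(x) = w·x + b with the typical perturbation needed to flip the decision along a random direction:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                          # signal dimension (illustrative choice)
w = rng.standard_normal(d)        # weights of a linear classifier f(x) = w.x + b
b = 0.0
x = rng.standard_normal(d)        # an arbitrary sample point
f_x = w @ x + b

# Minimal (adversarial) perturbation flipping the sign of f:
# move orthogonally to the decision boundary, distance |f(x)| / ||w||.
adv_dist = abs(f_x) / np.linalg.norm(w)

# Perturbation along a random direction v flips the sign at distance
# |f(x)| / |w.v|; we look at the typical (median) value over random
# unit vectors v, which grows on the order of sqrt(d) times adv_dist.
num_trials = 10_000
v = rng.standard_normal((num_trials, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)
rand_dist = np.median(abs(f_x) / np.abs(v @ w))

print(f"adversarial distance   : {adv_dist:.4f}")
print(f"random-noise distance  : {rand_dist:.4f}")
print(f"ratio                  : {rand_dist / adv_dist:.1f}  (sqrt(d) = {np.sqrt(d):.1f})")
```

Running this sketch, the ratio between the random-direction distance and the adversarial distance scales with √(d) (up to a constant), matching the qualitative separation described in the abstract.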


