Are adversarial examples inevitable?

09/06/2018
by Ali Shafahi, et al.

A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.
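As a rough, hypothetical illustration of the dimensionality argument sketched in the abstract (this is not code from the paper): for points drawn uniformly on the unit sphere and any fixed linear decision boundary through the origin, the typical distance from a point to the boundary shrinks roughly like 1/sqrt(d). So as dimension grows, most correctly classified points sit within a tiny perturbation of the boundary. The NumPy sketch below estimates that median distance empirically; the hyperplane through the origin, sample sizes, and chosen dimensions (including 3072, the dimension of a CIFAR-10 image) are illustrative assumptions.

```python
# Hedged sketch, not the paper's code: empirical version of the
# "small perturbations suffice in high dimensions" argument.
import numpy as np

def median_distance_to_boundary(d, n_samples=2000, seed=0):
    """Median Euclidean distance from points uniform on the unit sphere
    S^{d-1} to a fixed hyperplane through the origin (a stand-in for a
    hypothetical linear classifier's decision boundary)."""
    rng = np.random.default_rng(seed)
    # Sample uniformly on the sphere by normalizing Gaussian vectors.
    x = rng.normal(size=(n_samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # Fixed unit normal w; distance to the hyperplane {z : w.z = 0} is |w.x|.
    w = np.zeros(d)
    w[0] = 1.0
    return float(np.median(np.abs(x @ w)))

for d in [16, 256, 3072]:
    print(f"d={d:>5}  median distance to boundary ~ {median_distance_to_boundary(d):.4f}")
# The printed distances fall off roughly as 1/sqrt(d): in high dimensions
# most points lie very close to the boundary, so a tiny adversarial
# perturbation is enough to change the predicted label.
```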

Related research

11/22/2017 · MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples
MagNet and "Efficient Defenses..." were recently proposed as a defense t...

05/05/2019 · Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
A large body of recent work has investigated the phenomenon of evasion a...

10/26/2021 · A Frequency Perspective of Adversarial Robustness
Adversarial examples pose a unique challenge for deep learning systems. ...

02/25/2020 · Gödel's Sentence Is An Adversarial Example But Unsolvable
In recent years, different types of adversarial examples from different ...

06/21/2021 · Delving into the pixels of adversarial samples
Despite extensive research into adversarial attacks, we do not know how ...

11/12/2017 · Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples
Recently, researchers have discovered that the state-of-the-art object c...

09/06/2020 · Detection Defense Against Adversarial Attacks with Saliency Map
It is well established that neural networks are vulnerable to adversaria...
