Synthesizing Robust Adversarial Examples

07/24/2017
by Anish Athalye, et al.

Neural network-based classifiers match or exceed human-level accuracy on many common tasks and are used in practical systems. Yet neural networks are susceptible to adversarial examples: carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. Adversarial examples generated with standard techniques assume complete control over the direct input to the classifier, which is impossible in many real-world systems, and they do not consistently fool a classifier in the physical world, where viewpoint shifts, camera noise, and other natural transformations intervene. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.
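The key idea behind such an algorithm is to optimize the perturbation against the *expected* loss over a distribution of transformations, rather than against a single fixed input. A minimal sketch of that idea follows, using a toy linear classifier and a scale-plus-noise transformation family in place of a trained network and the paper's perceptual transformations; all names, shapes, and parameters here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained classifier: a linear model over 16 features
# with 3 classes. (The paper attacks deep networks; a linear model
# keeps the sketch short and its gradients exact.)
W = rng.normal(size=(3, 16))

def sample_transform():
    """Sample t ~ T: a random scaling plus additive noise, standing in
    for the paper's rotations, lighting changes, and camera noise."""
    scale = rng.uniform(0.8, 1.2)
    shift = rng.normal(scale=0.05, size=16)
    return scale, shift

def eot_attack(x, target, source, steps=100, lr=0.1, eps=3.0, n_samples=10):
    """Expectation-over-transformation sketch: ascend the expected
    (target - source) logit margin, averaged over sampled transforms,
    projecting the perturbation into an L2 ball of radius eps."""
    delta = np.zeros_like(x)
    direction = W[target] - W[source]
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            scale, _ = sample_transform()
            # For t(x) = scale * x + shift, the margin gradient w.r.t. x
            # is scale * (W[target] - W[source]).
            grad += scale * direction
        delta += lr * grad / n_samples
        norm = np.linalg.norm(delta)
        if norm > eps:  # projection keeps x_adv perceptually close to x
            delta *= eps / norm
    return x + delta
```

Averaging the gradient over freshly sampled transformations at every step is what makes the resulting perturbation effective in expectation rather than only for one exact input, which is the property the physical-world objects need.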

