Synthesizing Robust Adversarial Examples

07/24/2017 · Anish Athalye et al.

Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.


1 Introduction

The existence of adversarial examples for neural networks (Szegedy et al., 2013; Biggio et al., 2013) was initially largely a theoretical concern. Recent work has demonstrated the applicability of adversarial examples in the physical world, showing that adversarial examples on a printed page remain adversarial when captured using a cell phone camera in an approximately axis-aligned setting (Kurakin et al., 2016). But while minute, carefully-crafted perturbations can cause targeted misclassification in neural networks, adversarial examples produced using standard techniques fail to fool classifiers in the physical world when the examples are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise (Luo et al., 2016; Lu et al., 2017). These results indicate that real-world systems may not be at risk in practice because adversarial examples generated using standard techniques are not robust in the physical world.

  classified as turtle       classified as rifle       classified as other

Figure 1: Randomly sampled poses of a 3D-printed turtle adversarially perturbed to be classified as a rifle at every viewpoint. See https://youtu.be/YXy6oX1iNoA for a video where every frame is fed through the ImageNet classifier: the turtle is consistently classified as a rifle. An unperturbed model is classified correctly as a turtle nearly 100% of the time.

We show that neural network-based classifiers are vulnerable to physical-world adversarial examples that remain adversarial over a wide range of viewpoints. We introduce a new algorithm for synthesizing adversarial examples that are robust over a chosen distribution of transformations, which we apply to reliably produce robust adversarial images as well as physical-world adversarial objects. Figure 1 shows an example of an adversarial object constructed using our approach: a 3D-printed turtle that is consistently classified as a rifle (a target class selected at random) by an ImageNet classifier. In this paper, we demonstrate the efficacy and generality of our method, showing conclusively that adversarial examples are a practical concern in real-world systems.

1.1 Challenges

Methods for transforming ordinary two-dimensional images into adversarial examples, including techniques such as the L-BFGS attack (Szegedy et al., 2013), FGSM (Goodfellow et al., 2015), and the CW attack (Carlini & Wagner, 2017c), are well-known. While adversarial examples generated through these techniques can transfer to the physical world (Kurakin et al., 2016), the techniques have limited success in affecting real-world systems where the input may be transformed before being fed to the classifier. Prior work has shown that adversarial examples generated using these standard techniques often lose their adversarial nature once subjected to minor transformations (Luo et al., 2016; Lu et al., 2017).

Prior techniques attempting to synthesize adversarial examples robust over any chosen distribution of transformations in the physical world have had limited success (Evtimov et al., 2017). While some progress has been made, concurrent efforts have demonstrated a small number of data points on nonstandard classifiers, and only in the two-dimensional case, with no clear generalization to three dimensions (further discussed in Section 4).

Prior work has focused on generating two-dimensional adversarial examples, even for the physical world (Sharif et al., 2016; Evtimov et al., 2017), where “viewpoints” can be approximated by affine transformations of an original image. However, 3D objects must remain adversarial in the face of complex transformations not applicable to 2D physical-world objects, such as 3D rotation and perspective projection.

1.2 Contributions

We demonstrate the existence of robust adversarial examples and adversarial objects in the physical world. We propose a general-purpose algorithm for reliably constructing adversarial examples robust over a chosen distribution of transformations, and we demonstrate the efficacy of this algorithm in both the 2D and 3D cases. We succeed in computing and fabricating physical-world 3D adversarial objects that are robust over a large, realistic distribution of 3D viewpoints, demonstrating that the algorithm produces three-dimensional objects that remain adversarial in the physical world. Specifically, our contributions are as follows:

  • We develop Expectation Over Transformation (EOT), the first algorithm that produces robust adversarial examples: single adversarial examples that are simultaneously adversarial over an entire distribution of transformations.

  • We consider the problem of constructing 3D adversarial examples under the EOT framework, viewing the 3D rendering process as part of the transformation, and we show that the approach successfully synthesizes adversarial objects.

  • We fabricate the first 3D physical-world adversarial objects and show that they fool classifiers in the physical world, demonstrating the efficacy of our approach end-to-end and showing the existence of robust physical-world adversarial objects.

2 Approach

First, we present the Expectation Over Transformation (EOT) algorithm, a general framework allowing for the construction of adversarial examples that remain adversarial over a chosen transformation distribution T. We then describe our end-to-end approach for generating adversarial objects using a specialized application of EOT in conjunction with differentiating through the 3D rendering process.

2.1 Expectation Over Transformation

When constructing adversarial examples in the white-box case (that is, with access to the classifier and its gradient), we know in advance a set of possible classes Y and a space of valid inputs X to the classifier; we have access to the function P(y | x) and its gradient ∇_x P(y | x) for any class y ∈ Y and input x ∈ X. In the standard case, adversarial examples are produced by maximizing the log-likelihood of the target class y_t over an ε-radius ball around the original image x (which we represent as a vector of d pixels each in [0, 1]):

    argmax_{x'} log P(y_t | x')
    subject to ||x' − x||_p < ε,  x' ∈ [0, 1]^d

This approach has been shown to be effective at generating adversarial examples. However, prior work has shown that these adversarial examples fail to remain adversarial under image transformations that occur in the real world, such as angle and viewpoint changes (Luo et al., 2016; Lu et al., 2017).

To address this issue, we introduce Expectation Over Transformation (EOT). The key insight behind EOT is to model such perturbations within the optimization procedure. Rather than optimizing the log-likelihood of a single example, EOT uses a chosen distribution T of transformation functions t, each taking an input x' controlled by the adversary to the “true” input t(x') perceived by the classifier. Furthermore, rather than simply taking the norm of x' − x to constrain the solution space, given a distance function d(·, ·), EOT instead aims to constrain the expected effective distance between the adversarial and original inputs, which we define as:

    δ(x', x) = E_{t∼T}[ d(t(x'), t(x)) ]

We use this definition because we want to minimize the (expected) perceived distance as seen by the classifier. This is especially important in cases where t has a different domain and codomain; e.g., when x is a texture and t(x) is a rendering corresponding to that texture, we care to minimize the visual difference between t(x') and t(x) rather than the distance in texture space.

Thus, we have the following optimization problem:

    argmax_{x'} E_{t∼T}[ log P(y_t | t(x')) ]
    subject to E_{t∼T}[ d(t(x'), t(x)) ] < ε,  x' ∈ [0, 1]^d

In practice, the distribution T can model perceptual distortions such as random rotation, translation, or addition of noise. However, the method generalizes beyond simple transformations; transformations in T can perform operations such as 3D rendering of a texture.

We maximize the objective via stochastic gradient descent. We approximate the gradient of the expected value through sampling transformations independently at each gradient descent step and differentiating through the transformation.
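To make the procedure concrete, below is a minimal sketch of one EOT gradient step. It assumes hypothetical callables sample_transform() (draws t ∼ T) and loss_and_grad(x, t, y_target) (returns log P(y_target | t(x)) and its gradient with respect to x, e.g. obtained from an automatic-differentiation framework); it is an illustration under those assumptions, not the authors' implementation.

import numpy as np

def eot_step(x_adv, sample_transform, loss_and_grad, y_target,
             n_samples=30, step_size=0.01):
    """One stochastic gradient step on the EOT objective.

    The expectation over transformations is approximated by averaging
    gradients over `n_samples` transformations drawn from the chosen
    distribution T at this step; `loss_and_grad` is assumed to return
    log P(y_target | t(x_adv)) and its gradient with respect to x_adv.
    """
    grad_estimate = np.zeros_like(x_adv)
    for _ in range(n_samples):
        t = sample_transform()                       # t ~ T
        _, grad = loss_and_grad(x_adv, t, y_target)  # d/dx log P(y_target | t(x))
        grad_estimate += grad
    grad_estimate /= n_samples

    # Gradient ascent on the expected log-likelihood, then projection
    # back onto the set of valid inputs.
    return np.clip(x_adv + step_size * grad_estimate, 0.0, 1.0)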

2.2 Choosing a distribution of transformations

Given its ability to synthesize robust adversarial examples, we use the EOT framework for generating 2D examples, 3D models, and ultimately physical-world adversarial objects. Within the framework, however, there is a great deal of freedom in the actual method by which examples are generated, including the choice of the distribution T, the distance metric d, and the optimization method.

2.2.1 2D case

In the 2D case, we choose T to approximate a realistic space of possible distortions involved in printing out an image and taking a natural picture of it. This amounts to a set of random transformations t (rescaling, rotation, brightness adjustment, Gaussian noise, and translation), which are more thoroughly described in Section 3. These random transformations are easy to differentiate, allowing for a straightforward application of EOT.

2.2.2 3D case

We note that the domain and codomain of t need not be the same. To synthesize 3D adversarial examples, we consider textures (color patterns) corresponding to some chosen 3D object (shape), and we choose a distribution of transformation functions t that take a texture and render a pose of the 3D object with the texture applied. The transformation functions map a texture to a rendering of the object, modeling lighting, rotation, translation, and perspective projection. Finding textures that are adversarial over a realistic distribution of poses allows for transfer of adversarial examples to the physical world.

To solve this optimization problem, EOT requires the ability to differentiate through the 3D rendering function with respect to the texture. Given a particular pose and choices for all other transformation parameters, a simple 3D rendering process can be modeled as a matrix multiplication and addition: every pixel in the rendering is some linear combination of pixels in the texture (plus some constant term). Given a particular choice of parameters, the rendering of a texture x can be written as t(x) = Mx + b for some coordinate map M and background b.

Standard 3D renderers, as part of the rendering pipeline, compute the texture-space coordinates corresponding to on-screen coordinates; we modify an existing renderer to return this information. Then, instead of differentiating through the renderer, we compute M and b and differentiate through t(x) = Mx + b. We must re-compute M and b using the renderer for each pose, because EOT samples new poses at each gradient descent step.
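As an illustration of why this makes the rendering differentiable, the sketch below (not the authors' renderer) reconstructs a rendering from a hypothetical per-pixel coordinate map: each screen pixel is either a lookup into the texture or a background constant, so the output is linear in the texture, exactly the Mx + b form above. Real renderers interpolate between texels; nearest-texel lookup is used here for simplicity.

import numpy as np

def render_from_coordinate_map(texture, coord_map, background):
    """Express a rendering as a linear function of the texture.

    texture:    (H_tex, W_tex, 3) array of texture pixels.
    coord_map:  (H_img, W_img, 2) integer array; coord_map[i, j] holds the
                texture-space (row, col) sampled by screen pixel (i, j),
                or (-1, -1) where the object does not cover the pixel.
    background: (H_img, W_img, 3) array of background colors.

    Each output pixel is either a texture pixel or a background constant,
    so the output is linear in the texture and gradients with respect to
    the texture flow straight through the gather.
    """
    rows, cols = coord_map[..., 0], coord_map[..., 1]
    covered = rows >= 0
    out = background.copy()
    out[covered] = texture[rows[covered], cols[covered]]
    return out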

2.3 Optimizing the objective

Once EOT has been parameterized, i.e. once a distribution T is chosen, the issue of actually optimizing the induced objective function remains. Rather than solving the constrained optimization problem given above, we use the Lagrangian-relaxed form of the problem, as Carlini & Wagner (2017c) do in the standard single-viewpoint case:

    argmax_{x'} E_{t∼T}[ log P(y_t | t(x')) ] − λ E_{t∼T}[ d(t(x'), t(x)) ]

In order to encourage visual imperceptibility of the generated images, we set d(x', x) to be the ℓ2 norm in the LAB color space, a perceptually uniform color space where Euclidean distance roughly corresponds to perceptual distance (McLaren, 1976). Using distance in LAB space as a proxy for human perceptual distance is a standard technique in computer vision. Note that the expected LAB distance E_{t∼T}[ ||LAB(t(x')) − LAB(t(x))||_2 ] can be sampled and estimated in conjunction with E_{t∼T}[ log P(y_t | t(x')) ]; in general, the Lagrangian formulation gives EOT the ability to constrain the search space (in our case, using LAB distance) without computing a complex projection. Our optimization, then, is:

    argmax_{x'} E_{t∼T}[ log P(y_t | t(x')) − λ ||LAB(t(x')) − LAB(t(x))||_2 ]

We use projected gradient descent to maximize the objective, and clip to the set of valid inputs (e.g. [0, 1]^d for images).
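A minimal sketch of a Monte Carlo estimate of this relaxed objective is shown below, assuming caller-supplied helpers log_prob_target (classifier log-likelihood of the target class) and rgb_to_lab (RGB-to-LAB conversion); both names are placeholders. In practice the same computation would be written in an automatic-differentiation framework so that its gradient with respect to x_adv can be taken directly, with each projected gradient step followed by clipping to [0, 1].

import numpy as np

def relaxed_objective(x_adv, x_orig, transforms, log_prob_target,
                      rgb_to_lab, lam=0.1):
    """Monte Carlo estimate of the Lagrangian-relaxed EOT objective:

        E_t[ log P(y_target | t(x_adv)) - lam * ||LAB(t(x_adv)) - LAB(t(x_orig))||_2 ]

    `transforms` is a list of transformation functions sampled from T,
    `log_prob_target(img)` returns the classifier's log-likelihood of the
    target class, and `rgb_to_lab(img)` converts RGB to LAB coordinates.
    """
    total = 0.0
    for t in transforms:
        rendered_adv, rendered_orig = t(x_adv), t(x_orig)
        perceptual = np.linalg.norm(rgb_to_lab(rendered_adv) - rgb_to_lab(rendered_orig))
        total += log_prob_target(rendered_adv) - lam * perceptual
    return total / len(transforms)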

3 Evaluation

First, we describe our procedure for quantitatively evaluating the efficacy of EOT for generating 2D, 3D, and physical-world adversarial examples. Then, we show that we can reliably produce transformation-tolerant adversarial examples in both the 2D and 3D case. We show that we can synthesize and fabricate 3D adversarial objects, even those with complex shapes, in the physical world: these adversarial objects remain adversarial regardless of viewpoint, camera noise, and other similar real-world factors. Finally, we present a qualitative analysis of our results and discuss some challenges in applying EOT in the physical world.

3.1 Procedure

In our experiments, we use TensorFlow’s standard pre-trained InceptionV3 classifier (Szegedy et al., 2015), which has 78.0% top-1 accuracy on ImageNet. In all of our experiments, we use randomly chosen target classes, and we use EOT to synthesize adversarial examples over a chosen distribution. We measure the ℓ2 distance per pixel between the original and adversarial example (in LAB space), and we also measure classification accuracy (percent of randomly sampled viewpoints classified as the true class) and adversariality (percent of randomly sampled viewpoints classified as the adversarial class) for both the original and adversarial example. When working in simulation, we evaluate over a large number of transformations sampled randomly from the distribution; in the physical world, we evaluate over a large number of manually-captured images of our adversarial objects taken over different viewpoints.

Given a source object x, a set of correct classes {y_1, …, y_n}, a target class y_adv ∉ {y_1, …, y_n}, and a robust adversarial example x', we quantify the effectiveness of the adversarial example over a distribution of transformations T as follows. Let C(x, y) be a function indicating whether the image x was classified as the class y:

    C(x, y) = 1 if x is classified as y, and 0 otherwise.

We quantify the effectiveness of a robust adversarial example by measuring adversariality, which we define as:

    E_{t∼T}[ C(t(x'), y_adv) ]

This is equal to the probability that the example is classified as the target class for a transformation t sampled from the distribution T. We approximate the expectation by sampling a large number of values from the distribution at test time.
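This estimator is simple enough to sketch directly; the snippet below assumes hypothetical classify and sample_transform callables and is meant only to show how the metric is computed.

def estimate_adversariality(x_adv, classify, sample_transform,
                            y_target, n_samples=100):
    """Estimate E_{t~T}[ C(t(x_adv), y_target) ] by Monte Carlo sampling.

    `classify(img)` returns the top-1 predicted class and
    `sample_transform()` draws a transformation t from the evaluation
    distribution T; the result is the fraction of sampled viewpoints
    classified as the adversarial target class.
    """
    hits = sum(classify(sample_transform()(x_adv)) == y_target
               for _ in range(n_samples))
    return hits / n_samples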

3.2 Robust 2D adversarial examples

In the 2D case, we consider the distribution of transformations that includes rescaling, rotation, lightening or darkening by an additive factor, adding Gaussian noise, and translation of the image.
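For illustration, a sampler for a distribution of this kind might look like the sketch below. The parameter ranges are placeholders rather than the values from Table 4, and scipy.ndimage is used for convenience even though the attack itself requires differentiable implementations of these transformations.

import numpy as np
from scipy.ndimage import rotate, shift, zoom

def sample_2d_transform(rng=None):
    """Sample one transformation from a 2D distribution of this kind:
    rescaling, rotation, additive brightness change, Gaussian noise,
    and translation. Parameter ranges are illustrative placeholders,
    not the values from Table 4.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = rng.uniform(0.9, 1.4)
    angle = rng.uniform(-30.0, 30.0)        # degrees
    brightness = rng.uniform(-0.05, 0.05)   # additive lighten/darken
    noise_std = rng.uniform(0.0, 0.1)
    dy, dx = rng.uniform(-10, 10, size=2)   # pixels
    noise_seed = int(rng.integers(2**31))   # fix the noise so t is deterministic

    def t(img):
        out = zoom(img, (scale, scale, 1), order=1)                    # rescale H and W
        out = rotate(out, angle, axes=(0, 1), reshape=False, order=1)  # rotate in-plane
        out = shift(out, (dy, dx, 0), order=1)                         # translate
        noise = np.random.default_rng(noise_seed).normal(0.0, noise_std, size=out.shape)
        return np.clip(out + brightness + noise, 0.0, 1.0)

    return t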

We take the first 1000 images in the ImageNet validation set, randomly choose a target class for each image, and use EOT to synthesize an adversarial example that is robust over the chosen distribution. We use a fixed λ in our Lagrangian to constrain visual similarity. For each adversarial example, we evaluate over 1000 random transformations sampled from the distribution at evaluation time. Table 1 summarizes the results. The adversarial examples have a mean adversariality of 96.4%, showing that our approach is highly effective in producing robust adversarial examples. Figure 2 shows one synthesized adversarial example. See the appendix for more examples.

              Classification Accuracy    Adversariality       Distance
Images        mean        stdev          mean        stdev    mean
Original      70.0%       36.4%          0.01%       0.3%     0
Adversarial   0.9%        2.0%           96.4%       4.4%
Table 1: Evaluation of 2D adversarial examples with random targets. We evaluate each example over 1000 randomly sampled transformations to calculate classification accuracy and adversariality (percent classified as the adversarial class).
Original (Persian cat): 97% / 0%, 99% / 0%, 19% / 0%, 95% / 0%
Adversarial (jacamar): 0% / 91%, 0% / 96%, 0% / 83%, 0% / 97%
Figure 2: A 2D adversarial example showing classifier confidence in true / adversarial classes over randomly sampled poses.

3.3 Robust 3D adversarial examples

We produce 3D adversarial examples by modeling the 3D rendering as a transformation under EOT. Given a textured 3D object, we optimize the texture such that the rendering is adversarial from any viewpoint. We consider a distribution that incorporates different camera distances, lighting conditions, translation and rotation of the object, and solid background colors. We approximate the expectation over transformations by taking the mean loss over batches of size 40; furthermore, due to the computational expense of computing new poses, we reuse up to 80% of the batch at each iteration, but enforce that each batch contains at least 8 new poses. The parameters of the distribution we use are specified in the appendix, sampled as independent continuous random variables (uniform except for the Gaussian noise). We searched over several λ values in our Lagrangian for each example / target class pair. In our final evaluation, we used the example with the smallest λ that still maintained >90% adversariality over 100 held-out random transformations.
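The pose-batch reuse heuristic described above could be organized as in the following sketch, where sample_pose() is a hypothetical function drawing pose and transformation parameters from T; it only illustrates the 40-pose batch with at least 8 fresh poses per iteration.

import random

def refresh_pose_batch(batch, sample_pose, batch_size=40, min_new=8):
    """Refresh the batch of rendering poses between EOT iterations.

    Computing coordinate maps for new poses is expensive, so up to 80%
    of the previous batch is kept and at least `min_new` freshly sampled
    poses are added each iteration. `sample_pose()` is assumed to draw
    pose (and other transformation) parameters from T.
    """
    keep = random.sample(batch, k=min(len(batch), batch_size - min_new))
    new = [sample_pose() for _ in range(batch_size - len(keep))]
    return keep + new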

We consider 10 3D models, obtained from 3D asset sites, that represent different ImageNet classes: barrel, baseball, dog, orange, turtle, clownfish, sofa, teddy bear, car, and taxi.

We choose 20 random target classes per 3D model, and use EOT to synthesize adversarial textures for the 3D models with minimal parameter search (four pre-chosen λ values were tested across each (3D model, target) pair). For each of the 200 adversarial examples, we sample 100 random transformations from the distribution at evaluation time. Table 2 summarizes the results, and Figure 3 shows renderings of drawn samples along with classification probabilities. See the appendix for more examples.

The adversarial objects have a mean adversariality of 83.4% with a long left tail, showing that EOT usually produces highly adversarial objects. See the appendix for a plot of the distribution of adversariality over the 200 examples.

              Classification Accuracy    Adversariality       Distance
Images        mean        stdev          mean        stdev    mean
Original      68.8%       31.2%          0.01%       0.1%     0
Adversarial   1.1%        3.1%           83.4%       21.7%
Table 2: Evaluation of 3D adversarial examples with random targets. We evaluate each example over 100 randomly sampled poses to calculate classification accuracy and adversariality (percent classified as the adversarial class).
Original (turtle): 97% / 0%, 96% / 0%, 96% / 0%, 20% / 0%
Adversarial (jigsaw puzzle): 0% / 100%, 0% / 99%, 0% / 99%, 0% / 83%
Figure 3: A 3D adversarial example showing classifier confidence in true / adversarial classes over randomly sampled poses.

3.4 Physical adversarial examples

In the case of the physical world, we cannot capture the “true” distribution unless we perfectly model all physical phenomena. Therefore, we must approximate the distribution and perform EOT over the proxy distribution. We find that this works well in practice: we produce objects that are optimized for the proxy distribution, and we find that they generalize to the “true” physical-world distribution and remain adversarial.

Beyond modeling the 3D rendering process, we need to model physical-world phenomena such as lighting effects and camera noise. Furthermore, we need to model the 3D printing process: in our case, we use commercially available full-color 3D printing. With the 3D printing technology we use, we find that color accuracy varies between prints, so we model printing errors as well. We approximate all of these phenomena by a distribution of transformations under EOT: in addition to the transformations considered for 3D in simulation, we consider camera noise, additive and multiplicative lighting, and per-channel color inaccuracies.
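As a sketch of these additional physical-world effects layered on top of the simulated 3D transformations, the function below applies additive and multiplicative lighting, per-channel color error, and Gaussian camera noise to a rendered image; the ranges are illustrative placeholders, not the values from Table 6.

import numpy as np

def apply_physical_proxies(img, rng=None):
    """Layer approximate physical-world effects on a rendered image:
    additive/multiplicative lighting, per-channel color error (a crude
    model of 3D-printing inaccuracy), and Gaussian camera noise.
    Ranges are illustrative placeholders, not the values from Table 6.
    """
    if rng is None:
        rng = np.random.default_rng()
    add_light = rng.uniform(-0.15, 0.15)
    mul_light = rng.uniform(0.8, 1.2)
    channel_add = rng.uniform(-0.10, 0.10, size=3)
    channel_mul = rng.uniform(0.9, 1.1, size=3)
    noise = rng.normal(0.0, 0.05, size=img.shape)

    out = img * mul_light + add_light          # scene lighting
    out = out * channel_mul + channel_add      # per-channel print/color error
    return np.clip(out + noise, 0.0, 1.0)      # camera noise, clip to valid range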

We evaluate physical adversarial examples over two 3D-printed objects: one of a turtle (where we consider any of the 5 turtle classes in ImageNet as the “true” class), and one of a baseball. The unperturbed 3D-printed objects are correctly classified as the true class with 100% accuracy over a large number of samples. Figure 4 shows example photographs of unperturbed objects, along with their classifications.

Figure 4: A sample of photos of unperturbed 3D prints. The unperturbed 3D-printed objects are consistently classified as the true class.

We choose target classes for each of the 3D models at random — “rifle” for the turtle, and “espresso” for the baseball — and we use EOT to synthesize adversarial examples. We evaluate the performance of our two 3D-printed adversarial objects by taking 100 photos of each object over a variety of viewpoints. (Although the viewpoints were simply the result of walking around the objects, moving them up/down, etc., we do not call them “random” since they were not generated numerically or sampled from a concrete distribution, in contrast with the rendered 3D examples.) Figure 5 shows a random sample of these images, along with their classifications. Table 3 gives a quantitative analysis over all images, showing that our 3D-printed adversarial objects are strongly adversarial over a wide distribution of transformations. See the appendix for more examples.

Figure 5: Random sample of photographs of the two 3D-printed adversarial objects. The 3D-printed adversarial objects are strongly adversarial over a wide distribution of viewpoints.
Object Adversarial Misclassified Correct
Turtle 82% 16% 2%
Baseball 59% 31% 10%
Table 3: Quantitative analysis of the two adversarial objects, over 100 photos of each object over a wide distribution of viewpoints. Both objects are classified as the adversarial target class in the majority of viewpoints.

3.5 Discussion

Our quantitative analysis demonstrates the efficacy of EOT and confirms the existence of robust physical-world adversarial examples and objects. Now, we present a qualitative analysis of the results.

Perturbation budget.

The perturbation required to produce successful adversarial examples depends on the chosen distribution of transformations. Generally, the larger the distribution, the larger the required perturbation: making an adversarial example robust to a limited range of rotations, for example, requires less perturbation than making an example robust to rotation, translation, and rescaling. Similarly, constructing robust 3D adversarial examples generally requires a larger perturbation to the underlying texture than constructing 2D adversarial examples does.

Modeling perception.

The EOT algorithm as presented in Section 2 gives a general method to construct adversarial examples over a chosen perceptual distribution, but notably provides no guarantees for observations of the image outside of the chosen distribution. In constructing physical-world adversarial objects, we use a crude approximation of the rendering and capture process, and this suffices to ensure robustness in a diverse set of environments; see, for example, Figure 6, which shows the same adversarial turtle in vastly different lighting conditions. When a stronger guarantee is needed, a domain expert may opt to model the perceptual distribution more precisely in order to better constrain the search space.

Figure 6: Three pictures of the same adversarial turtle (all classified as “rifle”), demonstrating the need for a wide distribution and the efficacy of EOT in finding examples robust across wide distributions of physical-world effects like lighting.
Error in printing.

We find significant error in the color accuracy of even state of the art commercially available color 3D printing; Figure 7 shows a comparison of a 3D-printed model along with a printout of the model’s texture, printed on a standard laser color printer. Still, by modeling this color error as part of the distribution of transformations in a coarse-grained manner, EOT was able to overcome the problem and produce robust physical-world adversarial objects. We predict that we could have produced adversarial examples with smaller perturbation with a higher-fidelity printing process or a more fine-grained model incorporating the printer’s color gamut.

Semantically relevant misclassification.

Interestingly, for the majority of viewpoints where the adversarial target class is not the top-1 predicted class, the classifier also fails to correctly predict the source class. Instead, we find that the classifier often classifies the object as an object that is semantically similar to the adversarial target; while generating the adversarial turtle to be classified as a rifle, for example, the second most popular class (after “rifle”) was “revolver,” followed by “holster” and then “assault rifle.” Similarly, when generating the baseball to be classified as an espresso, the example was often classified as “coffee” or “bakery.”

Breaking defenses.

The existence of robust adversarial examples implies that defenses based on randomly transforming the input are not secure: adversarial examples generated using EOT can circumvent these defenses. Athalye et al. (2018) investigates this further and circumvents several published defenses by applying Expectation over Transformation.

Limitations.

There are two possible failure cases of the EOT algorithm. As with any adversarial attack, if the attacker is constrained to too small an ε-ball, EOT will be unable to create an adversarial example. The other failure case is when the chosen distribution of transformations is too “large”: as a simple example, it is impossible to make an adversarial example robust to a transformation that replaces each pixel with a value drawn uniformly at random from the valid range [0, 1].

Imperceptibility.

Note that we consider a “targeted adversarial example” to be an input that has been perturbed so that it is misclassified as a selected target class, that satisfies the imposed constraint bound, and that can still be clearly identified as the original class. While many of the generated examples are truly imperceptible from their corresponding original inputs, others exhibit noticeable perturbations. In all cases, however, the visual constraint (the ℓ2 metric in LAB space) maintains identifiability as the original class.

Figure 7: A side-by-side comparison of a 3D-printed model (left) along with a printout of the corresponding texture, printed on a standard laser color printer (center) and the original digital texture (right), showing significant error in color accuracy in printing.

4 Related Work

4.1 Adversarial examples

State-of-the-art neural networks are vulnerable to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013). Researchers have proposed a number of methods for synthesizing adversarial examples in the white-box setting (with access to the gradient of the classifier), including L-BFGS (Szegedy et al., 2013), the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al., 2016b), a Lagrangian relaxation formulation (Carlini & Wagner, 2017c), and DeepFool (Moosavi-Dezfooli et al., 2015), all for what we call the single-viewpoint case where the adversary directly controls the input to the neural network. Projected Gradient Descent (PGD) can be seen as a universal first-order adversary (Madry et al., 2017). A number of approaches find adversarial examples in the black-box setting, with some relying on the transferability phenomenon and making use of substitute models (Papernot et al., 2017, 2016a) and others applying black-box gradient estimation (Chen et al., 2017).

Moosavi-Dezfooli et al. (2017) show the existence of universal (image-agnostic) adversarial perturbations, small perturbation vectors that can be applied to any image to induce misclassification. Their work solves a different problem than we do: they propose an algorithm that finds perturbations that are universal over images; in our work, we give an algorithm that finds a perturbation to a single image or object that is universal over a chosen distribution of transformations. In preliminary experiments, we found that universal adversarial perturbations, like standard adversarial perturbations to single images, are not inherently robust to transformation.

4.2 Defenses

Some progress has been made in defending against adversarial examples in the white-box setting, but a complete solution has not yet been found. Many proposed defenses (Papernot et al., 2016c; Hendrik Metzen et al., 2017; Hendrycks & Gimpel, 2017; Meng & Chen, 2017; Zantedeschi et al., 2017; Buckman et al., 2018; Ma et al., 2018; Guo et al., 2018; Dhillon et al., 2018; Xie et al., 2018; Song et al., 2018; Samangouei et al., 2018) have been found to be vulnerable to iterative optimization-based attacks (Carlini & Wagner, 2016, 2017c, 2017b, 2017a; Athalye et al., 2018).

Some of these defenses can be viewed as “input transformation” defenses; such defenses are circumvented through application of EOT.

4.3 Physical-world adversarial examples

In the first work on physical-world adversarial examples, Kurakin et al. (2016) demonstrate the transferability of FGSM-generated adversarial misclassification on a printed page. In their setup, a photo is taken of a printed image with QR code guides, and the resultant image is warped, cropped, and resized to become a square of the same size as the source image before being classified. Their results show the existence of 2D physical-world adversarial examples for approximately axis-aligned views, demonstrating that adversarial perturbations produced using FGSM can transfer to the physical world and are robust to camera noise, rescaling, and lighting effects. However, Kurakin et al. (2016) do not synthesize targeted physical-world adversarial examples, they do not evaluate other real-world 2D transformations such as rotation, skew, translation, or zoom, and their approach does not translate to the 3D case.

Sharif et al. (2016) develop a real-world adversarial attack on a state-of-the-art face recognition algorithm, where adversarial eyeglass frames cause targeted misclassification in portrait photos. The algorithm produces robust perturbations by optimizing over a fixed set of inputs: the attacker collects a set of images and finds a perturbation that minimizes cross-entropy loss over the set. The algorithm solves a different problem than we do in our work: it produces adversarial perturbations universal over portrait photos taken head-on from a single viewpoint, while EOT produces 2D/3D adversarial examples robust over transformations. Their approach also includes a mechanism for enhancing perturbations’ printability using a color map to address the limited color gamut and color inaccuracy of the printer. Note that this differs from our approach to achieving printability: rather than creating a color map, we find an adversarial example that is robust to color inaccuracy. Our approach has the advantage of working in settings where color accuracy varies between prints, as was the case with our 3D printer.

Concurrently with our work, Evtimov et al. (2017) proposed a method for generating robust physical-world adversarial examples in the 2D case by optimizing over a fixed set of manually-captured images. However, the approach is limited to the 2D case, with no clear translation to 3D, where there is no simple mapping between what the adversary controls (the texture) and the observed input to the classifier (an image). Furthermore, the approach requires taking and preprocessing a large number of photos in order to produce each adversarial example, which may be expensive or even infeasible for many objects.

Brown et al. (2017) apply our EOT algorithm to produce an “adversarial patch”, a small image patch that can be applied to any scene to cause targeted misclassification in the physical world.

Real-world adversarial examples have also been demonstrated in contexts other than image classification/detection, such as speech-to-text (Carlini et al., 2016).

5 Conclusion

Our work demonstrates the existence of robust adversarial examples, adversarial inputs that remain adversarial over a chosen distribution of transformations. By introducing EOT, a general-purpose algorithm for creating robust adversarial examples, and by modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial objects. With access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are classified as a chosen target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier. Our results suggest that adversarial examples and objects are a practical concern for real world systems, even when the examples are viewed from a variety of angles and viewpoints.

Acknowledgments

We wish to thank Ilya Sutskever for providing feedback on early parts of this work, and we wish to thank John Carrington and ZVerse for providing financial and technical support with 3D printing. We are grateful to Tatsu Hashimoto, Daniel Kang, Jacob Steinhardt, and Aditi Raghunathan for helpful comments on early drafts of this paper.

References

Appendix A Distributions of Transformations

Under the EOT framework, we must choose a distribution of transformations, and the optimization produces an adversarial example that is robust under the distribution of transformations. Here, we give the specific parameters we chose in the 2D (Table 4), 3D (Table 5), and physical-world case (Table 6).

Appendix B Robust 2D Adversarial Examples

We give a random sample out of our 1000 2D adversarial examples in Figures 8 and 9.

Appendix C Robust 3D Adversarial Examples

We give a random sample out of our 200 3D adversarial examples in Figures 10, 11, and 12. We give a histogram of adversariality (percent classified as the adversarial class) over all 200 examples in Figure 13.

Appendix D Physical Adversarial Examples

Figure 14 gives all 100 photographs of our adversarial 3D-printed turtle, and Figure 15 gives all 100 photographs of our adversarial 3D-printed baseball.

Transformation Minimum Maximum
Scale
Rotation
Lighten / Darken
Gaussian Noise (stdev)
Translation any in-bounds
Table 4: Distribution of transformations for the 2D case, where each parameter is sampled uniformly at random from the specified range.
Transformation Minimum Maximum
Camera distance
X/Y translation
Rotation any
Background (0.1, 0.1, 0.1) (1.0, 1.0, 1.0)
Table 5: Distribution of transformations for the 3D case when working in simulation, where each parameter is sampled uniformly at random from the specified range.
Transformation Minimum Maximum
Camera distance
X/Y translation
Rotation any
Background (0.1, 0.1, 0.1) (1.0, 1.0, 1.0)
Lighten / Darken (additive)
Lighten / Darken (multiplicative)
Per-channel (additive)
Per-channel (multiplicative)
Gaussian Noise (stdev)
Table 6: Distribution of transformations for the physical-world 3D case, approximating rendering, physical-world phenomena, and printing error.
Original (European fire salamander): 93% / 0%, 91% / 0%, 93% / 0%, 93% / 0%
Adversarial (guacamole): 0% / 99%, 0% / 99%, 0% / 96%, 0% / 95%
Original (caldron): 75% / 0%, 83% / 0%, 54% / 0%, 80% / 0%
Adversarial (velvet): 0% / 94%, 0% / 94%, 1% / 91%, 0% / 100%
Original (altar): 87% / 0%, 38% / 0%, 59% / 0%, 2% / 0%
Adversarial (African elephant): 0% / 93%, 0% / 87%, 3% / 73%, 0% / 92%
Figure 8: A random sample of 2D adversarial examples, showing true / adversarial class confidence over randomly sampled poses.
Original (barracouta): 91% / 0%, 95% / 0%, 92% / 0%, 92% / 0%
Adversarial (tick): 0% / 88%, 0% / 99%, 0% / 98%, 0% / 95%
Original (tiger cat): 85% / 0%, 91% / 0%, 69% / 0%, 96% / 0%
Adversarial (tiger): 32% / 54%, 11% / 84%, 59% / 22%, 14% / 79%
Original (speedboat): 14% / 0%, 1% / 0%, 1% / 0%, 1% / 0%
Adversarial (crossword puzzle): 3% / 91%, 0% / 100%, 0% / 100%, 0% / 100%
Figure 9: A random sample of 2D adversarial examples, showing true / adversarial class confidence over randomly sampled poses.
Original (barrel): 96% / 0%, 99% / 0%, 96% / 0%, 97% / 0%
Adversarial (guillotine): 1% / 10%, 0% / 95%, 0% / 91%, 3% / 4%
Original (baseball): 100% / 0%, 100% / 0%, 100% / 0%, 100% / 0%
Adversarial (green lizard): 0% / 66%, 0% / 94%, 0% / 87%, 0% / 94%
Original (turtle): 94% / 0%, 98% / 0%, 90% / 0%, 97% / 0%
Adversarial (Bouvier des Flandres): 1% / 1%, 0% / 6%, 0% / 21%, 0% / 84%
Figure 10: A random sample of 3D adversarial examples, showing true / adversarial class confidence over randomly sampled poses.
Original (baseball): 100% / 0%, 100% / 0%, 100% / 0%, 100% / 0%
Adversarial (Airedale): 0% / 94%, 0% / 6%, 0% / 96%, 0% / 18%
Original (orange): 73% / 0%, 29% / 0%, 20% / 0%, 85% / 0%
Adversarial (power drill): 0% / 89%, 4% / 75%, 0% / 98%, 0% / 84%
Original (dog): 1% / 0%, 32% / 0%, 12% / 0%, 0% / 0%
Adversarial (bittern): 0% / 97%, 0% / 91%, 0% / 98%, 0% / 97%
Figure 11: A random sample of 3D adversarial examples, showing true / adversarial class confidence over randomly sampled poses.
Original (teddy bear): 90% / 0%, 1% / 0%, 98% / 0%, 5% / 0%
Adversarial (sock): 0% / 99%, 0% / 99%, 0% / 98%, 0% / 99%
Original (clownfish): 46% / 0%, 14% / 0%, 2% / 0%, 65% / 0%
Adversarial (panpipe): 0% / 100%, 0% / 1%, 0% / 12%, 0% / 0%
Original (sofa): 15% / 0%, 73% / 0%, 1% / 0%, 70% / 0%
Adversarial (sturgeon): 0% / 100%, 0% / 100%, 0% / 100%, 0% / 100%
Figure 12: A random sample of 3D adversarial examples, showing true / adversarial class confidence over randomly sampled poses.
Figure 13: A histogram of adversariality (percent of 100 samples classified as the adversarial class) across the 200 3D adversarial examples.

  classified as turtle       classified as rifle       classified as other

Figure 14: All 100 photographs of our physical-world 3D adversarial turtle.

  classified as baseball       classified as espresso       classified as other

Figure 15: All 100 photographs of our physical-world 3D adversarial baseball.