Bridging Adversarial Robustness and Gradient Interpretability

03/27/2019 ∙ by Beomsu Kim et al. ∙ KAIST Department of Mathematical Sciences ∙ Satrec Initiative Co., Ltd.

Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples. Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from standard DNNs. Although this phenomenon is interesting, only a few works have offered an explanation. In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability. To this end, we identified that loss gradients from adversarially trained DNNs align better with human perception because adversarial training restricts gradients closer to the image manifold. We then demonstrated that adversarial training causes loss gradients to be quantitatively meaningful. Finally, we showed that under the adversarial training framework, there exists an empirical trade-off between test accuracy and loss gradient interpretability and proposed two potential approaches to resolving this trade-off.


1 Introduction

An adversarial attack is an imperceptible perturbation to the input image which causes a deep neural network (DNN) to misclassify the perturbed input image with high confidence (Goodfellow et al., 2015). Such perturbed inputs are called adversarial examples. Numerous defence approaches have been proposed to create adversarially robust DNNs that are resistant to adversarial attacks. One of the most common and successful defence methods is adversarial training, which augments the training dataset with adversarial examples (Szegedy et al., 2013).

Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from standard DNNs, i.e., DNN trained on natural images (Ross & Doshi-Velez, 2018; Tsipras et al., 2018; Zhang et al., 2019). These studies have used the visual interpretability of gradients as evidence that adversarial training causes DNNs to learn meaningful feature representations which align well with salient data characteristics or human perception.

However, asking whether a gradient visualization is meaningful to a human may be entirely different from determining whether it is an accurate description of the internal representation of the DNN. For instance, consider the DNN decision interpretation methods Deconvolution (Zeiler & Fergus, 2014) and Guided Backpropagation (Springenberg et al., 2015). They visualize the importance of each input pixel in the DNN decision process with heat maps produced by calculating imputed versions of the gradient. Although they produce sharp visualizations, they have been shown to perform partial image recovery which is unrelated to DNN decisions (Nie et al., 2018).

In light of such studies, judging the interpretability of a DNN with the visual quality of its loss gradient runs the risk of being incorrect. There have been few attempts to analyze adversarial training in the context of DNN interpretability. For example, Chalasani et al. (2018) have investigated the effect of adversarial training on network weights. However, to the best of our knowledge, there is no work which provides thorough analyses of the effect of adversarial training on the visual and quantitative interpretability of DNN loss gradients.

In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability through a series of experiments that address the following questions: “why do loss gradients from adversarially trained networks align better with human perception?”, “is there a relation between the strength of the adversary used for training and the perceptual quality of gradients?” and, most importantly, “does adversarial training really cause loss gradients to better reflect the internal representation of the DNN?”. (All code for the experiments in this paper is public at https://github.com/1202kbs/Robustness-and-Interpretability.) We then ended the paper by identifying a trade-off between test accuracy and gradient interpretability and proposing two potential approaches to resolving this trade-off. Specifically, we make the following three contributions:

  1. Visual Interpretability. We showed that loss gradients from adversarially trained networks align better with human perception because adversarial training causes loss gradients to lie closer to the image manifold. We also provided a conjecture for this phenomenon and showed its plausibility with a toy dataset.

  2. Quantitative Interpretability. We showed that loss gradients from adversarially trained networks are quantitatively meaningful. To this end, we established a formal framework for quantifying how accurately gradients reflect the internal representation of the DNN. We then verified whether gradients from adversarially trained networks are quantitatively meaningful using this framework.

  3. Accuracy and Interpretability Trade-off. We showed with CNNs trained on CIFAR-10 that under the adversarial training framework, there exists an empirical trade-off between test accuracy and loss gradient interpretability. Then based on the experiment results, we proposed two potential approaches to resolving this trade-off.

2 Visual Interpretability of Loss Gradients

We start by answering why adversarially trained DNNs have loss gradients that align better with human perception. All experiment settings in this paper can be found in the Appendix.

2.1 Adversarial Training Confines Gradient to Image Manifold

To identify why adversarial training enhances the perceptual quality of loss gradients, we hypothesized that adversarial examples from adversarially trained networks lie closer to the image manifold. Note that the adversarial perturbation, i.e., the difference between the adversarial image and the original image, follows the direction of the loss gradient. Hence if the adversarial image lies closer to the image manifold, the loss gradient should align better with human perception.

Following previous works (Tsipras et al., 2018; Stutz et al., 2018), we first trained DNNs against adversarial attacks which maximize the training loss / cross entropy (XEnt) loss or the CW surrogate objective proposed by Carlini & Wagner (2017). These objectives are maximized using Projected Gradient Descent (PGD) (Kurakin et al., 2016) such that the adversarial examples stay within ℓ∞- or ℓ2-distance ε of the original image. We say an adversary, or an adversarial attack, is stronger if its ε is larger. For consistency of observations, we trained the DNNs on three datasets: MNIST (LeCun et al., 1998), FMNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky & Hinton, 2009). Specific adversarial training procedures are described in Appendix B.1.
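The attack generation described above can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' code) of a PGD attack under an ℓ∞ constraint, with a toy linear binary classifier standing in for a DNN; the paper's experiments use CNNs and 40 PGD iterations.

```python
import numpy as np

def xent_loss_and_grad(w, x, y):
    # Binary cross-entropy for a linear classifier: loss = log(1 + exp(-y * w.x)).
    margin = y * np.dot(w, x)
    loss = np.log1p(np.exp(-margin))
    grad_x = -y * w / (1.0 + np.exp(margin))  # d loss / d x
    return loss, grad_x

def pgd_linf(w, x, y, eps=0.3, alpha=0.05, iters=40):
    """PGD: repeatedly ascend the loss, then project back into the l_inf eps-ball
    around the original image x. (Clipping to the valid pixel range is omitted.)"""
    x_adv = x.copy()
    for _ in range(iters):
        _, g = xent_loss_and_grad(w, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # l_inf ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection into the eps-ball
    return x_adv
```

Replacing `np.sign(g)` with a normalized gradient step and projecting onto an ℓ2-ball would give the ℓ2-bounded variant.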

Then, we trained a VAE-GAN (Larsen et al., 2016) on each dataset. Using its encoder E and decoder D, we projected an image or adversarial example x to the approximated manifold by π(x) = D(E(x)). (We can also define the projection as π(x) = D(z*) where z* = argmin_z ‖x − D(z)‖₂, but either definition led to the same results.) Next, we computed ‖x − π(x)‖₂ to quantify how close x is to the image manifold. Note that this concept of using a generative model to obtain an image’s projection to the manifold has also been applied frequently in the context of adversarial defense (Song et al., 2017; Meng & Chen, 2017; Samangouei et al., 2018).
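The projection-distance computation can be sketched as follows; the rank-1 linear autoencoder below is a hypothetical stand-in for the trained VAE-GAN encoder and decoder.

```python
import numpy as np

def manifold_distance(x, encode, decode):
    """||x - pi(x)||_2 where pi(x) = decode(encode(x)) is the projection of x
    onto the (approximate) image manifold learned by the generative model."""
    proj = decode(encode(x))
    return np.linalg.norm(x - proj)

# Toy stand-ins for a trained encoder/decoder: a rank-1 linear autoencoder
# whose "manifold" is the 1-D subspace spanned by u (a hypothetical choice).
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
encode = lambda x: np.dot(u, x)
decode = lambda z: z * u

on_manifold = manifold_distance(np.array([1.0, 1.0]), encode, decode)    # ~0
off_manifold = manifold_distance(np.array([1.0, -1.0]), encode, decode)  # ~sqrt(2)
```

A point on the subspace reconstructs perfectly, while a point orthogonal to it incurs the full distance, mirroring how off-manifold adversarial examples show large ‖x − π(x)‖₂.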

Figure 1 compares distributions of ‖x − π(x)‖₂ for the test images and their adversarial examples from standard or adversarially trained DNNs. We only analyzed ℓ2-bounded attacks maximizing the XEnt loss, since ℓ2-bounded attacks preserve the original direction of the loss gradient while ℓ∞-bounded attacks modify the gradient through clipping. Attacks which failed to change the prediction are removed since they are likely to be zero matrices due to loss function saturation.

It can be observed from Figure 1 that, for all datasets, adversarial examples for adversarially trained DNNs lie closer to the image manifold than those for standard DNNs. This suggests adversarial training restricts loss gradients to the image manifold. Hence gradients from adversarially trained DNNs are more visually interpretable. Interestingly, it can also be observed that using stronger attacks during training reduces the distance between adversarial examples and their projections even further. That is, the adversarial examples from more robust DNNs look more natural, as shown in Figure 2. We now provide a conjecture for these phenomena.

(a) MNIST
(b) FMNIST
(c) CIFAR-10
Figure 1: Distributions of ‖x − π(x)‖₂ of adversarial examples for standard and adversarially trained DNNs. We also show distributions of ‖x − π(x)‖₂ of test images for reference. The subcaptions indicate the dataset used. The adversaries that the DNNs trained against are denoted in the legend by norm, ε and objective. Best viewed in electronic format (zoomed in).
Figure 2: Visualization of adversarial examples for standard and adversarially trained DNNs. The adversaries that the DNNs trained against are denoted in the figure title by norm, ε and objective.

2.2 A Conjecture Under the Boundary Tilting Perspective

In this section, we propose a conjecture for why adversarial training confines the gradient to the manifold. Our conjecture is based on the boundary tilting perspective on the phenomenon of adversarial examples (Tanay & Griffin, 2016). Figure 3(a) illustrates the boundary tilting perspective. Adversarial examples exist because the decision boundary of a standard DNN is “tilted” along directions of low variance in the data (standard decision boundary). Under certain situations, the decision boundary will lie close to the data such that a small perturbation directed toward the boundary will cause the data to cross the boundary. Moreover, since the decision boundary is tilted, the perturbed data will leave the image manifold.

(a) Illustration of the boundary tilting perspective.
(b) Results on a 2-dimensional toy dataset.
Figure 3:

(a) An illustration of the boundary tilting perspective. Each arrow indicates an adversarial perturbation against the decision boundary of the corresponding color. (b) Experiment results on a 2-dimensional toy dataset. The blue and red regions indicate points which the network classified as class 1 and class 2, respectively. The adversarial attacks are taken against points in class 2. The bottom left and right figures show the instances where the network is trained against a weak and a strong adversary, respectively.

Under the boundary tilting perspective and the observations of Section 2.1, we present a new conjecture on the relationship between adversarial training and visual interpretability of loss gradients: adversarial training removes tilting along directions of low variance in the data (robust decision boundary). Intuitively, this makes sense because a network is robust when only large-ε attacks are able to cause a nontrivial drop in accuracy, and this happens when data points of different classes are mirror images of each other with respect to the decision boundary. Since the loss gradient is generally perpendicular to the decision boundary, it will be confined to the image manifold, and thus adversarial examples stay within the image manifold.

As a sanity check, we tested our conjecture on a 2-dimensional toy dataset. Specifically, we trained three two-layer ReLU networks to classify points from two distinct bivariate Gaussian distributions. The first network is trained on the original data and the latter two networks are trained against weak and strong adversaries. We then compared the resulting decision boundaries and the distributions of adversarial examples. Specific adversarial training procedures are described in Appendix B.2.

The training data and the results are shown in Figure 3(b). The decision boundary of the standard network is tilted along directions of low variance in the data. Naturally, the adversarial examples leave the data manifold. On the other hand, adversarial training removes the tilt. Hence adversarial perturbations move data points to the manifold of the other class. We also observed that training against a stronger adversary removes the tilt to a larger degree. This causes adversarial examples to align better with the data manifold. Hence the decision boundary tilting perspective may also account for why adversarial training with stronger attacks reduces the distance between an adversarial example and its projection even further (Figure 1). We leave more theoretical justification and deeper experimental validation of our observations and hypothesis for future work.

If our conjecture is true, adversarial training prevents the decision boundary from tilting. Hence adversarial examples are restricted to the image manifold, and thus loss gradients align better with human perception. However, as we discussed in Section 1, visual sharpness of gradients does not imply that they accurately represent the features learned by the DNN. We address this issue in the next section by quantitatively evaluating the interpretability of loss gradients.

3 Interpretability of Adversarially Trained Networks

To the best of our knowledge, there are no works on quantifying the interpretability of loss gradients. However, the quantitative interpretability of logit gradients and their variants has been thoroughly studied in the context of attribution methods (Bach et al., 2015; Samek et al., 2017; Adebayo et al., 2018; Hooker et al., 2018). Attribution methods are DNN decision interpretation methods which assign a signed attribution value to each input feature (pixel); an attribution value quantifies the amount of influence of the corresponding feature on the final decision, and since each pixel is assigned an attribution value, we can visualize the attributions by arranging them to have the same shape as the input. Since the loss gradient highlights the input features which affect the loss most strongly, and thus are important for the DNN’s prediction, we may also treat it as an attribution method. This allows us to extend the reasoning and techniques in works on attribution method evaluation to the loss gradient. (Since the loss gradient is a linear combination of logit gradients, we can reasonably expect most observations for loss gradients to hold for logit gradients as well, and vice versa. However, loss gradients have the nice property of being generally perpendicular to the decision boundary, which gives us more insights such as those in Section 2.2. Hence we have chosen to examine loss gradients, not logit gradients.)

3.1 Formal Description of the Quantitative Evaluation Framework

We denote vectors or vector-valued functions by boldface letters. Let F, A and E denote families of DNNs, attribution methods and attribution method evaluation metrics, respectively. A DNN f ∈ F maps an image x ∈ ℝ^D to a vector of class logits f(x) ∈ ℝ^C, where D is the input dimension and C is the number of classes. Then an attribution method g ∈ A maps the tuple (f, x) to a vector of attribution scores g(f, x) ∈ ℝ^D. Finally, an attribution method evaluation metric e ∈ E assigns to each tuple (f, g) a scalar e(f, g) indicating how accurately g reflects the internal representation of f. A higher value of e(f, g) indicates that the attribution method better describes the internal representation of the DNN.

Here, note that the value of the evaluation metric depends on both the DNN and the attribution method. Previous works have focused on improving the metric for a fixed DNN by developing better and more complex attribution methods (Bach et al., 2015; Sundararajan et al., 2017; Smilkov et al., 2017; Shrikumar et al., 2017). However, we can also improve the metric for a fixed attribution method by varying the DNN through changing the network topology, training scheme, etc. It is only recently that the latter approach has started to receive attention. In the next section, we investigate this approach in the context of adversarial training.

3.2 Effect of Adversarial Training on Loss Gradient Interpretability

Here we experimentally evaluate whether loss gradients from adversarially trained DNNs are truly more interpretable than those from standard DNNs. Using the definitions from the previous section, we can rephrase this goal as follows: let F_std be the family of standard DNNs and let F_adv be the family of adversarially trained DNNs, all of the same architecture. Given an attribution method g and an evaluation metric e, we want to verify whether e(f_adv, g) > e(f_std, g) for f_adv ∈ F_adv and f_std ∈ F_std. In particular, we are interested in the case when g(f, x) = ∇_x L(f(x), y), where L is the XEnt loss. We denote this attribution method by Grad. We also evaluate a variant, Gradient ⊙ Input (Shrikumar et al., 2017): g(f, x) = x ⊙ ∇_x L(f(x), y). We denote Gradient ⊙ Input using the XEnt loss by Grad ⊙ Input.
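For concreteness, the two attribution methods can be sketched as below, with a central finite-difference gradient standing in for backpropagation; the toy loss function used in the test is hypothetical.

```python
import numpy as np

def loss_gradient(loss_fn, x, eps=1e-5):
    """The 'Grad' attribution: the gradient of the loss w.r.t. the input,
    approximated here by central finite differences on a flat input vector."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * eps)
    return g

def grad_times_input(loss_fn, x):
    """The 'Grad (Hadamard) Input' attribution: the input elementwise-multiplied
    with the loss gradient, giving a marginal-effect (global) attribution."""
    return x * loss_gradient(loss_fn, x)
```

Both functions return one signed score per input feature, so the scores can be reshaped to the image shape for visualization, as described above.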

Figure 4: Effect of adversarial training on the interpretability of Grad and Grad ⊙ Input. The x-axis indicates the ε (attack strength) used during training. The values of ε are scaled such that ℓ∞-bounded attacks and ℓ2-bounded attacks are comparable. Note that ε = 0 is standard training. The y-axis indicates quantitative interpretability, as explained in the text. We also show the linear correlation coefficient and Spearman’s rank correlation coefficient for each combination of adversarial training setting (norm and objective) and attribution method (G for Grad and GX for Grad ⊙ Input).

We quantified the interpretability of each method via two attribution method evaluation metrics, Remove and Retrain (ROAR) and Keep and Retrain (KAR) (Hooker et al., 2018). Specifically, we measured how the performance of the classifier changed as features were occluded based on the ordering assigned by the attribution method. For ROAR, we replaced a fraction of all CIFAR-10 pixels estimated to be most important with a constant value. For KAR, we replaced the pixels estimated to be least important. We then retrained a CNN on the modified dataset and measured the change in test accuracy. We trained three CNNs per attribution method for each fraction and measured test accuracy as the average of these three CNNs.

Since ROAR removes the most important input features, a better attribution method should cause more accuracy degradation. Conversely, since KAR removes the least important features, a better attribution method should cause less accuracy degradation. For reference, we also evaluated the random baseline, which assigns random attribution values to input features. We then defined the interpretability scores under ROAR and KAR relative to this random baseline.

Figure 4 shows the ROAR and KAR interpretability scores for each adversarial training setting. Specific adversarial training procedures are described in Appendix B.3. First, we observed that there is a strong positive correlation between the strength of the attack used during adversarial training and interpretability, with a single exception in KAR. This result is significant because it shows adversarial training indeed causes the gradient to better reflect the internal representation of the DNN. It implies that training with an appropriate “interpretability regularizer” may be enough to produce DNNs that can be interpreted with simple attribution methods such as the gradient or Gradient ⊙ Input. However, it does not imply we no longer need to develop complex attribution methods to interpret DNNs. This topic will be dealt with further in Section 3.3, in the context of the trade-off between accuracy and loss gradient interpretability.

We also observed that Gradient ⊙ Input performs better than the loss gradient. We believe this is because the former is a global attribution method while the latter is a local attribution method. (Local attribution methods return vectors that maximize the value which the attribution is taken with respect to; global attribution methods visualize the marginal effect of each feature on that value. For further details on the difference between local and global attribution methods, we refer the reader to Section 3.2 of Ancona et al. (2018).) Since both ROAR and KAR evaluate attribution methods based on feature occlusion, Gradient ⊙ Input should theoretically show better performance.

Finally, we remark that if our conjecture in Section 2.2 is true, there may be a close connection between gradient interpretability and the degree to which the gradient is confined to the data manifold. In other words, DNNs with less tilted decision boundaries may yield loss gradients that are more visually and quantitatively meaningful.

3.3 Accuracy and Loss Gradient Interpretability Trade-off

Previous works have observed that there may be a trade-off between accuracy and adversarial robustness (Tsipras et al., 2018; Su et al., 2018). As we have shown in the previous section, there exists a positive correlation between the strength of the adversarial attack used in the training process and gradient interpretability. Hence it is highly likely that there exists a negative correlation between network accuracy and gradient interpretability. To verify this, we trained CNNs on CIFAR-10 under various adversarial attack settings and evaluated their gradient interpretability. More detailed experiment settings used in this subsection can be found in Appendix B.3.

Figure 5: Relation between test accuracy and the interpretability of Grad and Grad ⊙ Input under the adversarial training framework. The x-axis indicates test accuracy on natural images. The y-axis indicates quantitative interpretability, as explained in the text. We also show the linear correlation coefficient and Spearman’s rank correlation coefficient for each combination of adversarial training setting (norm and objective) and attribution method (G for Grad and GX for Grad ⊙ Input).
(a) MNIST
(b) FMNIST
(c) CIFAR-10
Figure 6: Visualization of the loss gradient for standard and adversarially trained DNNs. To display the gradients, we summed the gradients along the color channel and then capped low outlying values to the 0.5th percentile and high outlying values to the 99.5th percentile.

Figure 5 shows the relation between test accuracy and loss gradient interpretability. Indeed, there is a near-monotonic decreasing relation between interpretability and accuracy under both ROAR and KAR. We note that the only exception to this trend again occurs in KAR. These results imply that adversarial training itself is not a perfect method for attaining gradient interpretability without sacrificing test accuracy.

We also observed that attributions from ℓ∞-trained networks are more resistant to this trade-off in ROAR. On the other hand, attributions from ℓ2-trained networks were more resistant in KAR. This implies attributions from ℓ∞-trained DNNs are more effective at emphasizing important features, while attributions from ℓ2-trained DNNs are better at identifying less important features. This is somewhat consistent with the visual characteristics of loss gradients: as shown in Figure 6, gradients from ℓ∞-trained networks are very sparse but discontinuous, while gradients from ℓ2-trained networks are smooth but less sparse. Analyzing the effect of the norm used to constrain the adversary on the neural network’s decision boundary and the gradient may also be an interesting line of research.

From these results, we see two potential approaches to resolving this trade-off. First, since the global attribution method Grad ⊙ Input performs better than the local attribution method Grad, we can explore combinations of adversarial training with other global attribution methods such as Layer-wise Relevance Propagation (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017) or Integrated Gradients (Sundararajan et al., 2017). Second, since there is a large performance gain in using ℓ2-training over ℓ∞-training in KAR while there is only a slight gain in using ℓ∞-training over ℓ2-training in ROAR, we can seek better ways of applying ℓ2-training.

4 Conclusion

Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples. Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from DNNs trained on natural images. Although this phenomenon is interesting, only a few works have offered an explanation. In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability. To this end, we identified that loss gradients from adversarially trained DNNs align better with human perception because adversarial training restricts loss gradients closer to the image manifold. We also provided a conjecture for this phenomenon and verified its plausibility with a toy dataset. We then demonstrated, with two attribution method evaluation metrics, that adversarial training indeed causes gradients to be quantitatively meaningful. Finally, we showed with CNNs trained on CIFAR-10 that under the adversarial training framework, there exists an empirical trade-off between test accuracy and gradient interpretability. Based on these experiment results, we proposed two potential approaches to resolving this trade-off.

References

  • Adebayo et al. (2018) Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Neural Information Processing Systems, 2018.
  • Ancona et al. (2018) Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations, 2018.
  • Bach et al. (2015) Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):1–46, 2015. ISSN 19326203. doi: 10.1371/journal.pone.0130140.
  • Carlini & Wagner (2017) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, 2017.
  • Chalasani et al. (2018) Prasad Chalasani, Somesh Jha, Aravind Sadagopan, and Xi Wu. Adversarial learning and explainability in structured datasets. arXiv preprint arXiv:1810.06583, 2018.
  • Goodfellow et al. (2015) Ian J. Goodfellow, Jonathan Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
  • Hooker et al. (2018) Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. Evaluating feature importance estimates. In ICML Workshop on Human Interpretability in Machine Learning, 2018.
  • Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
  • Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2016.
  • Larsen et al. (2016) A. B. Lindbo Larsen, S. Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. International Conference on Machine Learning, 2016.
  • LeCun et al. (1998) Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits. 1998.
  • Meng & Chen (2017) Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In ACM Conference on Computer and Communications Security (CCS), 2017.
  • Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
  • Nie et al. (2018) Weili Nie, Yang Zhang, and Ankit Patel. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International Conference on Machine Learning, 2018.
  • Ross & Doshi-Velez (2018) Andrew S. Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In AAAI Conference on Artificial Intelligence, 2018.
  • Samangouei et al. (2018) Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations, 2018.
  • Samek et al. (2017) Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, 28(11):2660–2673, 2017.
  • Shrikumar et al. (2017) Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning, 2017.
  • Smilkov et al. (2017) Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. In ICML Workshop on Visualization for Deep Learning, 2017.
  • Song et al. (2017) Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations, 2017.
  • Springenberg et al. (2015) Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. International Conference on Learning Representations Workshop, 2015.
  • Stutz et al. (2018) David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. arXiv preprint arXiv:1812.00740, 2018.
  • Su et al. (2018) Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy? – a comprehensive study of robustness of 18 deep image classification models. In ECCV, 2018.
  • Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, 2017.
  • Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • Tanay & Griffin (2016) Thomas Tanay and Lewis Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
  • Tsipras et al. (2018) Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Mądry. Robustness may be at odds with accuracy. In International Conference on Learning Representations, 2018.
  • Xiao et al. (2017) Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
  • Zeiler & Fergus (2014) Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014.
  • Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.

Appendix A Experiment Settings

A.1 Datasets

The toy dataset is comprised of two classes. Each class consists of 3000 points sampled from a bivariate Gaussian distribution. The two distributions share the same covariance matrix and differ only in their means. All four datasets (the toy dataset, MNIST, FMNIST and CIFAR-10) were normalized to a common range.
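A sketch of how such a toy dataset can be sampled; the mean and covariance values below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: two Gaussians with low variance along the second axis,
# so that a "tilted" decision boundary along that axis is easy to visualize.
mean1, mean2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
cov = np.array([[1.0, 0.0], [0.0, 0.01]])

class1 = rng.multivariate_normal(mean1, cov, size=3000)
class2 = rng.multivariate_normal(mean2, cov, size=3000)
X = np.vstack([class1, class2])                      # (6000, 2) training points
y = np.concatenate([np.ones(3000), np.zeros(3000)])  # binary class labels
```

The low-variance axis plays the role of the "directions of low variance" along which boundary tilting is examined in Section 2.2.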

A.2 Classification Networks

We trained all the models with Adam with default settings. For the toy dataset, we trained a two-layer ReLU DNN for 10 epochs. For MNIST and FMNIST, we trained a ReLU CNN for 5 epochs each. For CIFAR-10, we trained a ReLU CNN for 20 epochs. The architectures for the classification models are given in the tables below. For dense layers, we write “Dense (number of units)”. For convolution layers, we write “Conv 2D (filter size, stride, number of filters)”. For max-pooling layers, we write “Max-pooling (window size, stride)”.

Toy Dataset DNN
Dense (128) ReLU
Dense (2)
MNIST / FMNIST CNN
Conv 2D (, 1, ) ReLU
Conv 2D (, 1, ) ReLU
Max-pooling (, 2)
Dense (1024) ReLU
Dense (10)
CIFAR-10 CNN
Conv 2D (, 1, ) ReLU
Conv 2D (, 1, ) ReLU
Max-pooling (, 2)
Conv 2D (, 1, ) ReLU
Conv 2D (, 1, ) ReLU
Max-pooling (, 2)
Dense (256) ReLU
Dense (10)

A.3 VAE-GANs

We used a common encoder structure for MNIST, FMNIST and CIFAR-10. We set one latent dimension for MNIST and FMNIST and another for CIFAR-10. In contrast to Larsen et al. (2016), for the reconstruction loss we use the pixel-wise distance, not the discriminator’s features. The architectures for the VAE-GANs are given in the tables below. We reuse the notation for classification networks. Additionally, BN indicates batch normalization, and for transposed convolution layers, we write “Deconv 2D (filter size, stride, number of filters)”.

Encoder
Conv 2D (, 2, ) BN ReLU
Conv 2D (, 2, ) BN ReLU
Conv 2D (, 2, ) BN ReLU
Dense () BN

MNIST / FMNIST. We used the decoder and discriminator structure given in Larsen et al. (2016). We trained the network with Adam, with 1 discriminator update per encoder and decoder update, for 30 epochs for MNIST and 60 epochs for FMNIST.

Decoder
Dense (1024) BN ReLU
Deconv 2D (, 2, ) BN ReLU
Deconv 2D (, 1, ) BN ReLU
Deconv 2D (, 2, ) BN ReLU
Deconv 2D (, 2, ) TanH
Discriminator
Conv 2D (, 2, ) ReLU
Conv 2D (, 2, ) BN ReLU
Conv 2D (, 2, ) BN ReLU
Conv 2D (, 2, ) BN ReLU
Dense (512) BN ReLU
Dense (1) Sigmoid

CIFAR-10. We used the decoder and discriminator structure given in Miyato et al. (2018). In the discriminator, we used spectral normalization (SN) with leaky ReLU (lReLU) activation functions with slopes set to 0.1. We trained the network with Adam, with 5 discriminator updates per encoder and decoder update, for 150 epochs.

Decoder
Dense (8192)
Deconv 2D (, 2, ) BN ReLU
Deconv 2D (, 2, ) BN ReLU
Deconv 2D (, 2, ) BN ReLU
Conv 2D (, 1, ) TanH
Discriminator
Conv 2D (, 1, ) SN lReLU
Conv 2D (, 2, ) SN lReLU
Conv 2D (, 1, ) SN lReLU
Conv 2D (, 2, ) SN lReLU
Conv 2D (, 1, ) SN lReLU
Conv 2D (, 2, ) SN lReLU
Conv 2D (, 1, ) SN lReLU
Dense (1) Sigmoid

Appendix B Adversarial Attack and Adversarial Training Settings

All adversarial attacks in this paper were optimized to maximize the cross entropy loss or the CW surrogate objective with 40 iterations of PGD. Following previous works (Tsipras et al., 2018; Stutz et al., 2018), during adversarial training we trained on adversarial images only; that is, we did not mix natural and adversarial images. We describe the adversarial training procedure and attack settings used in each section.

B.1 Section 2.1.

For MNIST, FMNIST and CIFAR-10, we trained DNNs against both ℓ∞-bounded and ℓ2-bounded attacks, with dataset-specific ε. The adversarial examples used for analysis are ℓ2-bounded attacks which maximize the cross entropy loss.

B.2 Section 2.2.

We trained the networks against bounded attacks with a small ε (weak adversary) and a large ε (strong adversary). We visualized attacks which maximize the cross entropy loss.

B.3 Sections 3.2 and 3.3.

We trained DNNs against ℓ∞- or ℓ2-bounded attacks of varying ε. For both ℓ∞-bounded and ℓ2-bounded attacks, we linearly interpolated ε over a range of values with a fixed step size.