on ImageNet. On the other hand, it is well known that even state-of-the-art DNNs [23, 65, 11, 18] are highly vulnerable to imperceptible, carefully crafted perturbations, called adversarial examples. The existence of adversarial examples is an obstacle to the real-world use of DNNs from a security standpoint, and analysis of this phenomenon is imperative.
Figure 1: We clarify the difference between the representations obtained by adversarially trained and standard trained models through various experiments. (a) Sensitivity map visualization of a standard trained (middle) and an adversarially trained (right) ResNet-50 by guided backpropagation. While both models are sensitive to strong edges, the adversarially trained one does not react to fine textures. (b) Layer visualization of the standard trained (top row) and the adversarially trained (bottom row) ResNet-50. Each column visualizes the same depth. The visualization results suggest that the adversarially trained model tries to capture larger-scale patterns than the standard trained model.
Due to this strong demand, many works have addressed the problem from various points of view [55, 22, 56, 46, 21], and many interesting findings about DNNs have accumulated in this area. One of them is that training models to be robust to adversarial perturbations leads to a reduction of standard accuracy [29, 31, 32, 53, 59]. This phenomenon is contrary to the simple expectation [57, 22, 34] that training on adversarial examples, as an ultimate form of data augmentation, can improve generalization performance in standard classification. Tsipras et al. suggested that this trade-off is caused by the fact that adversarially trained models obtain fundamentally different feature representations than standard trained models. Therefore, analyzing the features acquired by adversarially trained models and clarifying which features are used for their decision making is useful not only for designing stronger defense methods but also for a better understanding of the phenomenon of adversarial perturbations. However, the nature of the feature representations acquired by adversarially trained models has not been sufficiently elucidated yet. In this work, we reveal properties of adversarially robust features through comprehensive experiments and visualizations (Figure 1).
Our contributions are summarized as follows:
We reveal, through various visualization methods and quantitative evaluations in which we expose models to various artificial transformations, that adversarially robust models look at things at a larger scale than standard models and pay less attention to fine textures.
We conduct a comprehensive study on various datasets to reveal how adversarially robust features are beneficial for improving standard accuracy, and show that they are useful as pre-trained models and have an especially positive effect on accuracy on low-resolution datasets.
To promote reproducible research, we will release our code, including reimplementations of multiple adversarial attacks and tools for visualizing CNNs.
2 Related Work
2.1 Trade-off Between Robustness and Accuracy
So far, various defense methods [22, 44, 12, 33, 34, 58, 38] have been proposed against attack methods [22, 37, 43, 8, 30, 36, 32, 51, 3, 35]. However, most of them were subsequently shown to be ineffective [7, 2, 60], and adversarial training [22, 32], which is the simplest one, seems to be the most effective so far. Unfortunately, it has been reported in many works that standard accuracy has to be sacrificed to train adversarially robust models [29, 17, 15, 62, 53, 32, 64]. Fawzi et al. proved upper bounds on the adversarial robustness of classifiers and exhibited the trade-off between standard accuracy and adversarial robustness for specific classifier families on a synthetic task. Su et al. conducted a comprehensive experiment on 18 network architectures submitted to the ImageNet Challenge and empirically showed that the higher the standard accuracy of a model, the more vulnerable it is to adversarial perturbations. Tsipras et al. exhibited that the feature representations needed to achieve high adversarial robustness differ from those needed to achieve high standard accuracy, and suggested that this difference is the main cause of the trade-off between adversarial robustness and standard accuracy.
2.2 Interpretation of Deep Neural Networks
Explaining classification decisions could potentially shed light on the underlying mechanisms of black-box systems like DNNs. Due to this potential, various types of visualization and analysis methods have been proposed [16, 66, 52, 4, 54, 50, 48, 47].
The simplest idea is to visualize each layer of a trained CNN directly. While the first layer can be visualized straightforwardly, deeper layers are hard to visualize because of their high dimensionality. Erhan et al. calculated the input that maximizes the activation of the layer of interest by gradient ascent. Zeiler et al. projected feature activations back to the input space using a deconvolutional network, which consists of unpooling and deconvolution.
Instead of visualizing layers directly, another approach creates a map showing which regions of the input image contribute to the classifier's decision. Simonyan et al. proposed the "sensitivity map", computed by propagating the output of the target class backward, but this simple backpropagation leads to noisy results. To make the sensitivity map clearer, Springenberg et al. proposed the guided backpropagation technique, which imposes restrictions on the gradient, and Smilkov et al. proposed SmoothGrad, which averages the results over multiple noise-added inputs. These pixel-space gradient visualization methods are high-resolution and highlight fine-grained details in the image, but are not class-discriminative. To localize class-specific discriminative regions at the cost of resolution, various methods to generate a "class activation map" have been proposed (e.g., CAM, Grad-CAM and Grad-CAM++). The sensitivity map and the class activation map are often used together to compensate for each other's drawbacks.
In contrast to visualization-based analysis, some works try to elucidate the decision process of networks by investigating their reaction to architecture modifications or peculiar inputs. Brendel et al. proposed a variant of the ResNet-50 architecture called BagNet, which imitates traditional bag-of-features (BoF) classifiers, and claimed that there is not much difference between the decision-making mechanism of modern DNN classifiers and that of traditional BoF classifiers, on the basis that BagNet achieved accuracy comparable to state-of-the-art deep neural networks. Geirhos et al. generated images whose texture is replaced by another's, while maintaining the composition and outline, using style transfer, and revealed that typical CNN classifiers' decisions are biased towards texture information.
3.1 Adversarial Examples
Although various types of adversarial examples have been proposed as described in the previous section, hereinafter adversarial examples refer to perturbation-based ones unless otherwise noted. Given a $K$-class classifier $f: \mathbb{R}^d \to \mathbb{R}^K$ and a pair of a data point $x \in \mathbb{R}^d$ and a ground-truth label $y \in \{1, \dots, K\}$, the predicted label is obtained by $\hat{y}(x) = \arg\max_k f_k(x)$, where $f_k$ is the $k$-th component of $f$ that corresponds to the $k$-th class. A set of adversarial perturbations is defined as follows:
$$\mathcal{A}(x, y) = \{\delta \in \mathcal{S} : \hat{y}(x + \delta) \neq y\},$$
where $\mathcal{S}$ is a set of allowed perturbations. Practically, an adversarial perturbation is given by the following optimization problem:
$$\delta^{*} = \arg\max_{\delta \in \mathcal{S}} \mathcal{L}(f(x + \delta), y),$$
where $\mathcal{L}$ is a loss function, e.g., the cross-entropy loss. In this paper, we focus on the case when $\mathcal{S}$ is the set of $\ell_\infty$-bounded perturbations, i.e., $\mathcal{S} = \{\delta \in \mathbb{R}^d : \|\delta\|_\infty \leq \epsilon\}$.
3.2 Adversarial Attack Methods
Here, we introduce two simple first-order methods that we use for adversarial training and robustness evaluation in this paper.
Fast Gradient Sign Method. One of the simplest methods to generate adversarial examples is the fast gradient sign method (FGSM), a fast single-step attack that maximizes the loss function under a linear approximation. The perturbation by FGSM is calculated as follows:
$$\delta = \epsilon \, \mathrm{sign}(\nabla_x \mathcal{L}(f(x), y)).$$
Applying this step iteratively with a projection yields the second method, projected gradient descent (PGD):
$$x^{t+1} = \Pi_{\epsilon}\left(x^{t} + \alpha \, \mathrm{sign}(\nabla_x \mathcal{L}(f(x^{t}), y))\right),$$
where $\alpha$ denotes a single step size and $\Pi_{\epsilon}$ denotes the projection onto the $\ell_\infty$-ball of radius $\epsilon$.
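To make the attack mechanics concrete, the sign-gradient step and the $\ell_\infty$ projection can be sketched in a few lines of NumPy. This is a toy sketch on a hand-written logistic loss, not the implementation used in the paper; the model, names and hyperparameters (`w`, `loss_grad_fn`, `eps`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def linf_project(delta, eps):
    """Project a perturbation onto the l-infinity ball of radius eps."""
    return np.clip(delta, -eps, eps)

def fgsm(grad_x, eps):
    """Single-step FGSM perturbation: eps times the sign of the input gradient."""
    return eps * np.sign(grad_x)

def pgd(x, y, loss_grad_fn, eps, alpha, steps):
    """Iterative variant: take sign-gradient steps of size alpha and project
    the accumulated perturbation back onto the eps-ball after each step."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = loss_grad_fn(x + delta, y)
        delta = linf_project(delta + alpha * np.sign(g), eps)
    return x + delta

# Toy example: logistic loss of a linear classifier w.x with label y in {-1, +1}.
w = np.array([1.0, -2.0, 0.5])

def loss_grad_fn(x, y):
    # d/dx log(1 + exp(-y * w.x)) = -y * sigmoid(-y * w.x) * w
    z = -y * w.dot(x)
    return -y * (1.0 / (1.0 + np.exp(-z))) * w

x = np.array([0.2, 0.1, -0.3])
x_fgsm = x + fgsm(loss_grad_fn(x, 1.0), eps=0.1)
x_adv = pgd(x, y=1.0, loss_grad_fn=loss_grad_fn, eps=0.1, alpha=0.03, steps=10)
print(np.abs(x_adv - x).max())  # stays within the eps ball
```

Both attacks increase the loss while keeping every coordinate of the perturbation within the budget `eps`.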
[Table 1 — columns: Architecture | Setting | Top-1 Acc [%] | Top-5 Acc [%] | Robustness Acc [%]]
3.3 Adversarial Training
One of the most popular defense approaches is adversarial training [22, 32], which adds adversarial examples generated from the original training data during the training stage. Let us consider a classification task with underlying data distribution $\mathcal{D}$ over pairs $(x, y)$. The goal of standard training is often formulated as finding parameters $\theta$ of the classifier $f_\theta$ that minimize the expectation of the loss function:
$$\min_\theta \, \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[\mathcal{L}(f_\theta(x), y)\right].$$
In contrast, the goal of adversarial training is formulated as minimizing the expectation of the adversarial loss:
$$\min_\theta \, \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[\max_{\delta \in \mathcal{S}} \mathcal{L}(f_\theta(x + \delta), y)\right].$$
Although adversarial training is effective, it leads to an increase in training time, since adversarial examples must be generated at every update. Therefore, more efficient adversarial attacks and adversarial defenses are being sought.
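To make the min–max objective concrete, here is a minimal NumPy sketch of adversarial training for a linear classifier with a logistic loss, where the inner maximization over the $\ell_\infty$-ball has the closed form $x - \epsilon y \,\mathrm{sign}(w)$. Everything here (the synthetic data, `eps`, the learning rate) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: labels y in {-1, +1}, separable along the first axis.
X = rng.normal(size=(200, 2)) + np.array([2.0, 0.0])
y = np.ones(200)
X2 = rng.normal(size=(200, 2)) + np.array([-2.0, 0.0])
X = np.vstack([X, X2]); y = np.concatenate([y, -np.ones(200)])

w = np.zeros(2)
eps, lr = 0.5, 0.1
for _ in range(100):
    # Inner maximization (closed form for a linear model): the worst-case
    # input within the eps-ball is x - eps * y * sign(w).
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    # Outer minimization: gradient step on the logistic loss at those points.
    margins = y * X_adv.dot(w)
    grad = -(y * (1 / (1 + np.exp(margins))))[:, None] * X_adv
    w -= lr * grad.mean(axis=0)

acc = ((X.dot(w) * y) > 0).mean()
print(acc)  # clean accuracy of the adversarially trained classifier
```

For deep networks the inner maximization has no closed form, which is why FGSM/PGD attacks are run inside the loop and training time grows accordingly.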
3.4 Visualization Methods
In this section, we introduce several visualization methods that we use in our experiments. All these methods use gradient information obtained via backpropagation, exploiting the fact that neural networks are differentiable. The difference from the training stage is that the optimization is performed on the input while the classifier's parameters are fixed.
Vanilla Gradient. Vanilla gradient (Vanilla Grad) is the simplest sensitivity map, calculated as the gradient of the classifier output for the ground-truth label with respect to the input:
$$\nabla_x f_y(x).$$
The sensitivity map by Vanilla Grad tends to be noisy as shown in Figure 2.
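For intuition, the following NumPy sketch computes the Vanilla Grad quantity for a toy linear "classifier" and verifies it with finite differences. The model `W` and input `x` are illustrative assumptions; for a real network the same quantity is obtained with one backward pass.

```python
import numpy as np

# Toy differentiable "classifier": logits f(x) = W @ x (purely illustrative).
W = np.array([[1.0, -0.5],
              [0.3,  2.0],
              [-1.0, 0.7]])  # 3 classes, 2 input dimensions

def vanilla_grad(x, y):
    """Gradient of the ground-truth logit f_y w.r.t. the input x.
    For f(x) = W @ x this is simply the y-th row of W."""
    return W[y]

def numerical_grad(x, y, h=1e-6):
    """Central finite-difference check of the same quantity."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = ((W @ (x + e))[y] - (W @ (x - e))[y]) / (2 * h)
    return g

x = np.array([0.4, -0.2])
print(np.allclose(vanilla_grad(x, 1), numerical_grad(x, 1)))  # -> True
```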
Loss Gradient. Loss gradient (Loss Grad), used in prior work, calculates the gradient of the loss function instead of the classifier output in Vanilla Grad:
$$\nabla_x \mathcal{L}(f(x), y).$$
In the classification task in particular, the cross-entropy loss is often used as the loss function, in which case Equation 8 becomes:
$$\nabla_x \mathcal{L}(f(x), y) = -\nabla_x f_y(x) + \sum_{k=1}^{K} \sigma_k(f(x)) \, \nabla_x f_k(x),$$
where $\sigma$ is the softmax function. The first term is the negative of Vanilla Grad, and the second term contains information about classes other than the ground-truth label.
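The decomposition of the cross-entropy input gradient into the negative Vanilla Grad plus a softmax-weighted sum over all class gradients can be verified numerically on a toy linear model (the matrix `W` and input `x` below are illustrative assumptions, not from the paper):

```python
import numpy as np

W = np.array([[1.0, -0.5], [0.3, 2.0], [-1.0, 0.7]])  # logits f(x) = W @ x

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def loss_grad(x, y):
    """Gradient of the cross-entropy w.r.t. the input: W^T (p - onehot_y)."""
    p = softmax(W @ x)
    onehot = np.eye(W.shape[0])[y]
    return W.T @ (p - onehot)

def decomposition(x, y):
    """-VanillaGrad + sum_k softmax_k * grad f_k, for the linear model."""
    p = softmax(W @ x)
    return -W[y] + W.T @ p

x = np.array([0.4, -0.2])
print(np.allclose(loss_grad(x, 1), decomposition(x, 1)))  # -> True
```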
Guided Backpropagation. [Footnote 1: Adebayo et al. pointed out that Guided Backprop is misleading because of its independence of the model parameters, but we could not confirm this phenomenon. We discuss this problem in detail in the supplementary materials.] Compared to Vanilla Grad and Loss Grad, guided backpropagation (Guided Backprop) sets all negative gradients to 0, like the ReLU function, in order to focus on what kinds of features the neurons detect rather than on what they do not detect. As a result of this operation, the sensitivity map becomes clearer than those of Vanilla Grad and Loss Grad, as shown in Figure 2.
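At a single ReLU layer, the commonly stated Guided Backprop rule passes a gradient back only where both the forward pre-activation and the incoming gradient are positive (combining the ordinary backprop mask with the gradient-sign mask). A minimal NumPy sketch of that one-layer rule, with illustrative values:

```python
import numpy as np

def relu_backward_guided(grad_out, pre_act):
    """Guided backprop rule for one ReLU layer: propagate a gradient only
    where BOTH the forward pre-activation and the incoming gradient are
    positive; everything else is zeroed out."""
    return np.where((pre_act > 0) & (grad_out > 0), grad_out, 0.0)

pre_act = np.array([1.2, -0.5, 0.8, 2.0])   # forward-pass pre-activations
grad_out = np.array([0.7, 0.9, -0.3, 0.4])  # gradient arriving from above
print(relu_backward_guided(grad_out, pre_act))  # keeps only 0.7 and 0.4
```

Zeroing the negative incoming gradients is what suppresses the "what the neuron does not detect" signal and makes the resulting map visually cleaner.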
Deeper Layer Visualization. Erhan et al. proposed a method to synthesize an input that maximizes the activation of a layer of interest, without searching for an input pattern in the datasets. Given the $i$-th filter of the $l$-th layer, the synthesized input is given by solving the following optimization problem via gradient ascent:
$$x^{*} = \arg\max_x a_{l,i}(x),$$
where $a_{l,i}(x)$ is the sum of the activations of the $i$-th filter of the $l$-th layer.
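This gradient-ascent-on-the-input procedure can be sketched on a toy "filter" whose activation is a dot product; the renormalization to the unit sphere is a common regularizer, and all names and values below are illustrative assumptions.

```python
import numpy as np

def activation_maximization(act_grad_fn, x0, lr=0.5, steps=200):
    """Gradient ascent on the input to maximize a chosen activation,
    renormalizing x each step to keep it on the unit sphere."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        x = x + lr * act_grad_fn(x)
        x = x / np.linalg.norm(x)
    return x

# Toy "filter": activation a(x) = w . x, so the input gradient is w.
# The maximizer on the unit sphere is w / ||w||: the synthesized input
# recovers the filter pattern itself.
w = np.array([3.0, -1.0, 2.0])
x_star = activation_maximization(lambda x: w, np.array([1.0, 1.0, 1.0]))
print(np.allclose(x_star, w / np.linalg.norm(w), atol=1e-3))  # -> True
```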
First, we conducted standard and adversarial training several times from scratch for AlexNet (with batch normalization) and ResNet-50 on ImageNet. We used momentum stochastic gradient descent (SGD) for 90 epochs, with the momentum set to 0.9, the batch size to 256, and the initial learning rate to 0.1; the learning rate is multiplied by 0.1 every 30 epochs. In adversarial training, we used the $\ell_\infty$-FGSM attack.
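The step learning-rate schedule described above can be written as a small helper (a sketch of the stated hyperparameters only, not the actual training script):

```python
def lr_at_epoch(epoch, base_lr=0.1, drop=0.1, every=30):
    """Step schedule from the text: start at 0.1 and multiply
    the learning rate by 0.1 every 30 epochs (90 epochs total)."""
    return base_lr * drop ** (epoch // every)

schedule = [lr_at_epoch(e) for e in (0, 29, 30, 59, 60, 89)]
print(schedule)  # 0.1 for epochs 0-29, 0.01 for 30-59, 0.001 for 60-89
```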
Hereafter, standard training and adversarial training are abbreviated as STD and ADV, respectively, in tables and figures. Table 1 shows the standard accuracy and robustness accuracy (defense success rate against adversarial attacks) on ImageNet under the standard and adversarial training settings. We can confirm the trade-off between standard accuracy and adversarial robustness, in line with prior works [53, 59].
4.1 What Do Adversarially Robust Features See?
To visualize the differences between standard accurate and adversarially robust features, we calculated the sensitivity maps of both the standard trained and the adversarially trained models by Vanilla Grad, Loss Grad and Guided Backprop; the results for ResNet-50 are shown in Figure 2. The Vanilla Grad and Loss Grad maps of the adversarially trained model are more perceptible to humans, as previous works reported. On the other hand, in the Guided Backprop results, both models correctly capture the features of the main objects in the input images, but the adversarially trained model does not respond to fine texture (e.g., the wing of the insect and the hair of the dog).
Next, we performed layer visualization; the results are shown in Figure 3 and Figure 4. We can confirm that the values of the first layers of the adversarially trained models are smaller than those of the standard trained ones, making them insensitive to weak edges and textures. The deeper layers of the standard trained model show small color clutter that cannot be observed in those of the adversarially trained model. Furthermore, the adversarially trained model clearly captures larger structures than the standard trained model.
To confirm this quantitatively, we applied artificial transformations to the ImageNet validation set and investigated the changes in accuracy of the standard trained and the adversarially trained models. As artificial transformations, we applied Gaussian blur and a median filter to reduce spatial information, color reversal to modify color information, and stylization by style transfer to change texture while keeping edge and silhouette information, as Geirhos et al. did. In the Gaussian blur setting, we fixed the kernel size and increased the standard deviation in regular increments. In the median filter setting, we fixed the kernel size and varied the number of times the median filter is applied.
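The intuition behind this choice of filters — a median filter removes weak, impulse-like texture while keeping strong edges sharp, whereas blurring softens all edges — can be checked on a one-dimensional toy signal. This NumPy sketch uses a box blur as a stand-in for the Gaussian; all values are illustrative.

```python
import numpy as np

def median_filter_1d(x, k=3):
    """Sliding-window median: edge-preserving, removes impulse noise."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def box_blur_1d(x, k=3):
    """Simple linear smoothing (stand-in for Gaussian blur): softens all edges."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.convolve(xp, np.ones(k) / k, mode='valid')

# A step edge (strong edge) plus one impulse ("weak texture") on the flat part.
signal = np.array([0., 0., 0., 1., 0., 0., 5., 5., 5., 5.])
med = median_filter_1d(signal)
blur = box_blur_1d(signal)
# The median removes the impulse but keeps the 0 -> 5 step perfectly sharp;
# the blur smears the step across the boundary.
print(med)
print(blur)
```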
Example images and the results of the Gaussian blur and median filter settings are shown in Figure 5 and Figure 6, respectively. In the Gaussian blur setting, the accuracy of both models decreases while the gap between them is maintained. On the other hand, in the median filter setting, the adversarially trained model outperforms the standard trained one as the number of median filter applications increases. This tendency can be confirmed for both AlexNet and ResNet-50. These results suggest that the adversarially trained model relies on stronger edges, such as object silhouettes, rather than detailed texture information, since Gaussian blur blurs all edges equally while the median filter removes only weak edges and noise.
[Table 2 — columns: Architecture | Setting | Metric | Variants of ImageNet; rows: AlexNet and ResNet-50, each under STD and ADV, reporting Top-1 Acc [%] and Top-5 Acc [%]]
[Table 3 — columns: Dataset | # Pixel | # Class | # Train; rows include Tiny ImageNet and Stanford Dogs]
Next, Table 2 shows the results of applying color reversal and style transfer. Although the same range of accuracy reduction is observed for both models in the color reversal setting, the accuracy of the adversarially trained model exceeds that of the standard trained one on Stylized ImageNet. Since Stylized ImageNet alters the texture inside objects, this suggests that adversarially trained models make shape-based rather than texture-based decisions compared to standard trained models. Given that humans can achieve high accuracy using only silhouette information, as reported by Geirhos et al., these results support the report of Tsipras et al. that adversarially trained models align better with human perception than standard trained ones.
4.2 Benefits of Adversarially Robust Features
As described above, standard accurate and adversarially robust models obtain different sets of features. This result leads to our next simple question: "Can adversarially robust features be utilized to improve accuracy by combining them with standard accurate features?" To answer this question, we performed ensemble learning with combinations of standard trained and adversarially trained models as pre-trained models, three times for each of 10 datasets (Table 3). We employed a late fusion scheme that merges the two classifiers at the last layer. We then trained the models for 30 epochs with a learning rate of 0.001, keeping the other parameters as described above. For fair comparison, we also trained not only plain models without ensemble learning (STD, ADV) but also ensembles of two different standard trained models (STD+STD) and of two different adversarially trained models (ADV+ADV).
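The late-fusion scheme — freeze two feature extractors, concatenate their outputs, and train only a final linear head — can be sketched in NumPy. The "feature extractors" below are stand-ins for the frozen pre-trained models, and the data, split of views, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen pre-trained extractors: each "model" exposes a
# different, partially informative view of the input (purely illustrative).
def features_std(x):   # e.g., a texture-like view
    return x[:, :2]

def features_adv(x):   # e.g., a shape-like view
    return x[:, 2:]

n = 300
X = rng.normal(size=(n, 4))
# The label depends on both views, so neither extractor alone suffices.
y = ((X[:, 0] + X[:, 3]) > 0).astype(float)

# Late fusion: concatenate both feature sets and train only the final
# linear layer (logistic regression via gradient descent).
F = np.hstack([features_std(X), features_adv(X)])
w = np.zeros(F.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - y) / n

acc = ((F @ w > 0) == (y > 0.5)).mean()
print(acc > 0.9)  # the fused head can exploit both views
```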
Figure 7 shows the results of ensemble learning for AlexNet and ResNet-50. As can be seen, the STD+ADV models outperform the plain STD and plain ADV models on most datasets. On the other hand, the STD+STD models tend to be more accurate on high-resolution datasets (e.g., STL-10, Food-101, Flower-102, Stanford Dogs, CUB-200 and ImageNet) and the ADV+ADV models on low-resolution datasets (e.g., SVHN, CIFAR-10, CIFAR-100, Tiny ImageNet). While the accuracy of the STD+STD and ADV+ADV models fluctuates depending on the resolution of the datasets, the STD+ADV models show accuracy comparable to the most accurate models on all datasets. These results indicate that the STD+ADV models obtain resolution-invariant features and see the inputs at various scales. Although adversarial training has been considered to be at odds with accuracy, we clarified that standard accurate and adversarially robust features can be combined to improve accuracy.
Through various experiments, we confirmed that adversarially trained models pay attention to larger-scale patterns and stronger edges rather than fine textures. Here, we would like to further consider what the visualization results of adversarially trained models suggest. In the first layer visualization in Figure 4, the standard trained and adversarially trained models basically obtain similar patterns, but there are two crucial differences. The first is, as mentioned in Section 4.1, that the values of the adversarially trained models are smaller than those of the standard trained ones. The other noticeable difference is that grid-like or checkerboard-like patterns appear in the adversarially trained models (see the enlarged images at the bottom left corners in Figure 4). The patterns that appear in ResNet-50 and ResNet-101 look similar, although those of AlexNet are different. These observations suggest that these patterns depend to some extent on the model architecture, and may provide a clue to explain why adversarial examples transfer well within the same architecture family, as previously reported. As future work, we will analyze this result in detail and apply it to construct more effective defense methods or strong perturbations with high transferability.
In this paper, we addressed an open question: "What do adversarially robust models look at?" and provided visual and quantitative evidence that adversarially robust models look at things at a larger scale than standard accurate models. Furthermore, we showed empirically that both standard trained and adversarially trained features are useful for improving accuracy, contrary to previous reports that adversarial robustness decreases accuracy. From this result, we believe that adversarially robust features may also be useful for improving accuracy on tasks other than classification (e.g., surface normal estimation and keypoint detection), and that analyzing them may give clues not only for devising effective defense methods against adversarial perturbations but also for clarifying the black-box mechanisms of DNNs. Exploring the potential of adversarially robust features is our future work.
-  J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim. Sanity checks for saliency maps. In Neural Information Processing Systems (NIPS), pages 9525–9536, 2018.
-  A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv:1802.00420, 2018.
-  A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. Synthesizing robust adversarial examples. In International Conference on Machine Learning (ICML), 2018.
-  S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.
-  L. Bossard, M. Guillaumin, and L. V. Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision (ECCV), 2014.
-  W. Brendel and M. Bethge. Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. International Conference on Learning Representations (ICLR), 2019.
-  N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In ACM Workshop on Artificial Intelligence and Security (AISec), pages 3–14. ACM, 2017.
-  N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security & Privacy, 2017.
-  M. Caron, P. Bojanowski, A. Joulin, and M. Douze. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision (ECCV), 2018.
-  A. Chattopadhay, A. Sarkar, P. Howlader, and V. N. Balasubramanian. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving robustness to adversarial examples. In International Conference on Machine Learning (ICML), pages 854–863. JMLR. org, 2017.
-  A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
-  K. Dvijotham, R. Stanforth, S. Gowal, T. Mann, and P. Kohli. A dual approach to scalable verification of deep networks. arXiv:1803.06567, 2018.
-  D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009.
-  A. Fawzi, O. Fawzi, and P. Frossard. Analysis of classifiers' robustness to adversarial perturbations. Machine Learning, 107(3):481–508, 2018.
-  F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations (ICLR), 2019.
-  A. Ghorbani, A. Abid, and J. Zou. Interpretation of neural networks is fragile. arXiv:1710.10547, 2017.
-  J. Gilmer, L. Metz, F. Faghri, S. Schoeholz, M. Raghu, M. Wattenberg, and I. Goodfellow. Adversarial spheres. In International Conference on Learning Representations Workshop (ICLRW), 2018.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), 2015.
-  A. Khosla, N. Jayadevaprakash, B. Yao, and F.-F. Li. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization (FGCV), 2011.
-  P.-J. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt, S. Dähne, D. Erhan, and B. Kim. The (un)reliability of saliency methods. arXiv:1711.00867, 2017.
-  A. Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997, 2014.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv:1611.01236, 2016.
-  A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In International Conference on Learning Representations (ICLR)W, 2017.
-  A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial machine learning at scale. In International Conference on Learning Representations (ICLR), 2017.
-  A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
-  J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On detecting adversarial perturbations. International Conference on Learning Representations (ICLR), 2017.
-  T. Miyato, S.-I. Maeda, S. Ishii, and M. Koyama. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), July 2018.
-  A. Modas, S.-M. Moosavi-Dezfooli, and P. Frossard. SparseFool: a few pixels make a big difference. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 86–94, 2017.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2574–2582, 2016.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, J. Uesato, and P. Frossard. Robustness via curvature regularization, and vice versa. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In Neural Information Processing Systems (NIPS) Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
-  W. Nie, Y. Zhang, and A. B. Patel. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. International Conference on Machine Learning (ICML), 2018.
-  M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer VIsion, Graphics and Image Processing, 2008.
-  M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision (ECCV), 2016.
-  N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE, 2016.
-  N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security & Privacy, pages 582–597. IEEE, 2016.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, December 2015.
-  P. Samangouei, M. Kabkab, and R. Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations (ICLR), 2018.
-  R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV), pages 618–626, 2017.
-  A. Shrikumar, P. Greenside, and A. Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning (ICML), pages 3145–3153. JMLR. org, 2017.
-  K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv:1312.6034, 2013.
-  D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg. SmoothGrad: removing noise by adding noise. arXiv:1706.03825, 2017.
-  Y. Song, R. Shu, N. Kushman, and S. Ermon. Constructing unrestricted adversarial examples with generative models. In Neural Information Processing Systems (NIPS), 2018.
-  J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. International Conference on Learning Representations Workshop (ICLRW), 2015.
-  D. Su, H. Zhang, H. Chen, J. feng, Y.-Y. Chen, and Y. Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In European Conference on Computer Vision (ECCV), 2018.
-  M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning (ICML), pages 3319–3328. JMLR. org, 2017.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
-  T. Tanay and L. Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv:1608.07690, 2016.
-  M. A. Torkamani and D. Lowd. On robustness and regularization of structural support vector machines. In International Conference on Machine Learning (ICML), pages 577–585, 2014.
-  F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018.
-  D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations (ICLR), 2019.
-  J. Uesato, B. O’Donoghue, A. v. d. Oord, and P. Kohli. Adversarial risk and the dangers of evaluating against weak attacks. arXiv:1802.05666, 2018.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
-  E. Wong, F. Schmidt, J. H. Metzen, and J. Z. Kolter. Scaling provable adversarial defenses. In Neural Information Processing Systems (NIPS), pages 8410–8419, 2018.
-  J. Wu, Q. Zhang, and G. Xu. Tiny ImageNet Challenge. http://cs231n.stanford.edu/reports/2017/pdfs/930.pdf, 2017.
-  K. Y. Xiao, V. Tjeng, N. M. Shafiullah, and A. Madry. Training for faster adversarial robustness verification via inducing relu stability. International Conference on Learning Representations (ICLR), 2019.
-  S. Zagoruyko and K. Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference (BMVC), 2016.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision (ECCV), 2014.
-  M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level features learning. In IEEE International Conference on Computer Vision (ICCV), 2011.
-  B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
A Sanity Checks of Model-Parameter Dependence
In the main paper, we used several visualization methods to visually confirm the difference between the feature representations of standard trained models and those of adversarially trained ones. While visualization is one of the important ways to explain the predictions of DNNs, some recent works have claimed that some of these methods lack theoretical grounding and do not correctly explain the decisions of the network.
In particular, guided backpropagation, one of the visualization methods we used, has been shown to lack class-discriminability [68, 47, 40]. However, this lack of class dependence does not matter for our goal of clarifying the difference between what the models look at. On the other hand, Adebayo et al. (their GitHub page, https://github.com/adebayoj/sanity_checks_saliency, exists but contains no implementations) pointed out that sensitivity maps produced by guided backpropagation are independent of the model parameters, and this problem would be critical for our goal. Thus, in this section, we conduct a sanity check of this claimed model-parameter independence for various visualization methods, and our experimental results show that guided backpropagation does depend on the model parameters, contrary to their results. We release our code (https://github.com/anonymous-author-iccv2019/sanity_checks) for the reproducibility of our experiments.
Besides this, other desirable characteristics of attribution methods have been discussed in many works [68, 26, 20, 47, 54, 40]. Note that we agree with their view that reliance solely on visual assessment can be misleading, and we also confirmed that our claim is correct via several quantitative experiments in the main paper.
A.1 Mathematical Formulation
In this section, we introduce the visualization methods that we use in our sanity checks. Some parts overlap with the main paper, but they are repeated here to make this supplementary material self-contained. The element-wise product of the input and the gradient (converted to gray-scale) is often used to reduce visual illegibility, but we use the raw gradients here because the color information of the gradients can also be important. We therefore omit the element-wise product in the formulations below.
Given a $K$-class classifier $f$ and a pair of a data point $x$ and a ground-truth label $y$, the predicted label $\hat{y}$ is obtained by $\hat{y} = \operatorname{argmax}_k f_k(x)$, where $f_k(x)$ is the $k$-th component of $f(x)$ that corresponds to the $k$-th class.
Vanilla Gradient. Vanilla gradient calculates the gradient of the classifier output of the ground-truth label with respect to the input as follows:
$$s_{\mathrm{vanilla}}(x) = \frac{\partial f_y(x)}{\partial x}.$$
Loss Gradient. Loss gradient calculates the gradient of the loss $\mathcal{L}$ (e.g., cross-entropy) with respect to the input as follows:
$$s_{\mathrm{loss}}(x) = \frac{\partial \mathcal{L}(f(x), y)}{\partial x}.$$
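The two gradients above can be sketched in closed form for a toy linear softmax classifier; for such a model the vanilla gradient of the class-$y$ score is simply the $y$-th row of the weight matrix, and the loss gradient follows from the standard softmax cross-entropy derivative. The names and the linear model below are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def vanilla_gradient(W, x, y):
    """Gradient of the class-y score f_y(x) = (Wx)_y w.r.t. the input x.
    For a linear classifier this is simply the y-th row of W."""
    return W[y].copy()

def loss_gradient(W, x, y):
    """Gradient of the softmax cross-entropy loss w.r.t. the input x."""
    p = softmax(W @ x)   # predicted class probabilities
    p[y] -= 1.0          # dL/dz = p - onehot(y)
    return W.T @ p       # chain rule through z = W x

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))   # 3 classes, 5 input features
x = rng.standard_normal(5)
s_vanilla = vanilla_gradient(W, x, 1)
s_loss = loss_gradient(W, x, 1)
```

For a deep network the same quantities would be obtained by automatic differentiation rather than in closed form.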
Guided Backpropagation.  Guided backpropagation sets all negative gradients to 0, analogously to the ReLU function. We consider the $i$-th ReLU activation in the $l$-th layer with its input $x_i^l$ and its output $x_i^{l+1}$, and denote the ReLU activation by $\phi$. In the gradient calculation, the forward ReLU is formally defined as follows:
$$x_i^{l+1} = \phi(x_i^l) = x_i^l \cdot \mathbb{1}[x_i^l > 0],$$
where $\mathbb{1}[\cdot]$ is the indicator function, and the backward ReLU is formally defined as follows:
$$\phi^b(R_i^{l+1}) = R_i^{l+1} \cdot \mathbb{1}[R_i^{l+1} > 0],$$
where $R_i^{l+1}$ is the top gradient before activation, i.e., the gradient of the output score with respect to $x_i^{l+1}$. Then, the (modified) gradient after activation $R_i^l$, i.e., the gradient of the output score with respect to $x_i^l$, is given as follows:
$$R_i^l = \mathbb{1}[x_i^l > 0] \cdot \phi^b(R_i^{l+1}) = \mathbb{1}[x_i^l > 0] \cdot \mathbb{1}[R_i^{l+1} > 0] \cdot R_i^{l+1}.$$
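The double masking above can be sketched for a one-hidden-layer ReLU network, where the top gradient of the score with respect to the hidden activations is known in closed form. This toy network is illustrative and not the architecture used in the paper.

```python
import numpy as np

def guided_backprop(W1, w2, x):
    """Guided backpropagation through a one-hidden-layer ReLU network.

    Forward:  h = relu(W1 x),  score s = w2 . h
    Backward through the ReLU, the top gradient R^{l+1} (= w2 here) is
    masked by BOTH the forward activation sign and its own sign:
        R^l = 1[x^l > 0] * 1[R^{l+1} > 0] * R^{l+1}
    """
    a = W1 @ x                                    # pre-activation x^l
    top_grad = w2                                 # ds/dh = R^{l+1}
    guided = (a > 0) * (top_grad > 0) * top_grad  # doubly masked gradient
    return W1.T @ guided                          # propagate to the input

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 6))
w2 = rng.standard_normal(4)
x = rng.standard_normal(6)
g = guided_backprop(W1, w2, x)
```

In a real network the same rule is applied at every ReLU during backpropagation, typically via backward hooks.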
SmoothGrad.  In order to avoid noisy results, SmoothGrad averages the gradients with respect to multiple noise-added inputs as follows:
$$s_{\mathrm{smooth}}(x) = \frac{1}{n}\sum_{j=1}^{n} s_{\mathrm{vanilla}}\left(x + \mathcal{N}(0, \sigma^2)\right),$$
where $n$ is the number of samples and $\mathcal{N}(0, \sigma^2)$ represents Gaussian noise with standard deviation $\sigma$.
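The averaging step is independent of the underlying gradient computation, so it can be sketched as a wrapper around any gradient function. The toy ReLU-network gradient below is an illustrative stand-in for the true model gradient.

```python
import numpy as np

def smoothgrad(grad_fn, x, n=50, sigma=0.1, seed=0):
    """SmoothGrad: average the sensitivity map over n noisy copies of x.

        s_smooth(x) = (1/n) * sum_j grad_fn(x + N(0, sigma^2))
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for _ in range(n):
        noise = rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(x + noise)
    return total / n

# toy gradient of a ReLU-net score s(x) = w2 . relu(W1 x)
rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 6))
w2 = rng.standard_normal(4)
grad_fn = lambda z: W1.T @ ((W1 @ z > 0) * w2)
x = rng.standard_normal(6)
s = smoothgrad(grad_fn, x, n=100, sigma=0.05)
```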
Integrated Gradient.  Integrated gradient accumulates the gradients at all points along the straight-line path between the input $x$ and the baseline $\bar{x}$ as follows:
$$s_{\mathrm{IG}}(x) = (x - \bar{x}) \odot \int_0^1 \frac{\partial f_y\left(\bar{x} + \alpha(x - \bar{x})\right)}{\partial x}\, d\alpha.$$
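In practice the path integral is approximated by a Riemann sum. A minimal sketch with a toy ReLU-network score (illustrative, not the paper's model) is below; a useful sanity property is completeness, i.e., the attributions sum approximately to $f_y(x) - f_y(\bar{x})$.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=200):
    """Integrated gradients via a Riemann-sum approximation:

        IG(x) = (x - x_bar) * (1/m) sum_k grad_fn(x_bar + (k/m)(x - x_bar))
    """
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += grad_fn(point)
    return (x - baseline) * total / steps

# toy score function s(x) = w2 . relu(W1 x) and its gradient
rng = np.random.default_rng(2)
W1 = rng.standard_normal((4, 6))
w2 = rng.standard_normal(4)
f = lambda z: w2 @ np.maximum(W1 @ z, 0.0)
grad_fn = lambda z: W1.T @ ((W1 @ z > 0) * w2)
x = rng.standard_normal(6)
baseline = np.zeros(6)
ig = integrated_gradients(grad_fn, x, baseline, steps=1000)
```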
A.2 Experimental Results and Discussion
We applied the aforementioned visualization methods to randomly initialized, standard trained, and adversarially trained ResNet-50 and ResNet-101 models, as shown in Figures 8–12. As can be seen in Figure 10, guided backpropagation produces very different visualization results when the model parameters differ, just like the other visualization methods. Therefore, it can be concluded that guided backpropagation does depend on the model parameters, and this phenomenon has also been confirmed in a previous report.
In addition, we find that guided backpropagation is best suited to determining whether the models look at fine textures. This is likely due to the strong ability of guided backpropagation to generate clear outputs. Nie et al. proved theoretically that guided backpropagation recovers the input image on randomly initialized models and highlights more relevant pixels when the model parameters are strongly biased by training, as can be confirmed in Figure 10. We believe that it is necessary to compare against an appropriate baseline rather than judge each result independently, especially when using powerful visualization methods like guided backpropagation. The need for a baseline is also stated in existing work. In fact, comparing the results of models with different parameters provided helpful clues for assessing their characteristics.
Some readers may find it strange that the results on randomly initialized models are human-interpretable for almost all visualization methods. This phenomenon can be attributed to the local connectivity of CNNs. Noroozi et al. reported that AlexNet can achieve 12% accuracy on ImageNet (where the chance rate is 0.1%) by training only the fully connected layers, even though the parameters of all convolutional layers are fixed at their random initialization (drawn from a Gaussian distribution). Building on this strong prior, Caron et al. proposed an unsupervised learning method that iteratively clusters deep features and uses the cluster assignments as pseudo-labels to learn the parameters of the convolutional part. Therefore, this phenomenon is not the result of defects in the visualization methods but stems from the inherent potential of CNNs.
B Additional Sensitivity Map Visualization
We provide additional visualization results of the sensitivity maps of standard trained and adversarially trained networks (AlexNet and ResNet-50). The input images are randomly chosen from the ImageNet dataset. As shown, all the results (Figures 13 and 14) are consistent with our earlier empirical observation that adversarially trained models look at things at a larger scale than standard trained models and pay less attention to fine textures.
C Additional Deeper Layer Visualization
In this section, we provide additional deeper-layer visualizations of standard trained and adversarially trained models (Figures 15–19). All results are randomly selected from the indicated convolutional layers. As can be seen, the deeper layers of adversarially trained models capture clearly larger structures than those of the standard trained models.
D Example Images of Variants on ImageNet
Here we provide some example images of the ImageNet variants used in our experiments. In Figures 20 and 21, we present examples of stylized variants of ImageNet generated in the same way as Stylized-ImageNet. As can be seen, after applying style transfer, local texture cues are no longer highly predictive of the target class, although global shapes and strong edges tend to be retained. Next, Figure 22 provides sample images processed with Gaussian blur and a median filter. While Gaussian blur blurs all edges equally, the median filter removes only weak edges and noise. Figure 22 also includes example images of the color-reversed variant of ImageNet.
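The blurred, median-filtered, and color-reversed variants described above can be sketched with standard image filters; the function name and filter parameters below are illustrative and not the exact settings used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def make_variants(img, sigma=2.0, size=5):
    """Generate blurred, median-filtered, and color-reversed variants of an
    8-bit RGB image of shape (H, W, 3), filtering each channel independently."""
    # sigma/size of 0/1 on the last axis: no filtering across channels
    blurred = gaussian_filter(img.astype(float), sigma=(sigma, sigma, 0))
    median = median_filter(img, size=(size, size, 1))
    reversed_img = 255 - img
    return blurred.astype(np.uint8), median.astype(np.uint8), reversed_img

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
blurred, median, reversed_img = make_variants(img)
```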