Heat and Blur: An Effective and Fast Defense Against Adversarial Examples

03/17/2020 ∙ by Haya Brama, et al. ∙ Ariel University

The growing incorporation of artificial neural networks (NNs) into many fields, and especially into life-critical systems, is restrained by their vulnerability to adversarial examples (AEs). Some existing defense methods can increase NNs' robustness, but they often require special architectures or training procedures and are irrelevant to already trained models. In this paper, we propose a simple defense that combines feature visualization with input modification, and can therefore be applied to various pre-trained networks. By reviewing several interpretability methods, we gain new insights regarding the influence of AEs on NNs' computation. Based on that, we hypothesize that information about the "true" object is preserved within the NN's activity, even when the input is adversarial, and present a feature visualization version that can extract that information in the form of relevance heatmaps. We then use these heatmaps as a basis for our defense, in which the adversarial effects are corrupted by massive blurring. We also provide a new evaluation metric that can capture the effects of both attacks and defenses more thoroughly and descriptively, and demonstrate the effectiveness of the defense and the utility of the suggested metric with VGG19 results on the ImageNet dataset.







1 Introduction

Artificial neural networks (NNs) have recently been behind several breakthroughs in artificial intelligence tasks. However, their computational process usually involves millions of parameters and was initially considered inaccessible and uninterpretable to humans. This unintuitive nature of the computation may lead to unpredicted ("Clever Hans") mistakes, and may also give rise to different kinds of adversarial attacks. Such attacks produce adversarial examples (AEs) in which small malicious changes, usually undetectable by humans, lead NNs to output wrong decisions with high confidence [18]. In this work we suggest using visualization techniques, originally created as tools for NN interpretability, in ways that: 1) provide a new perspective on the nature of AEs' impact; and 2) can be utilized for a new and simple defense technique against adversarial attacks. Additionally, in order to evaluate our defense, we introduce an evaluation metric that enables a more informative analysis of the quality of both attacks and defenses.

1.1 AE Explanations and Defenses

Common explanations of the AE phenomenon focus on the failure of the models to correctly map the entire input space to the output space [19]. This failure has been attributed to the linear nature of the computation, where small initial perturbations accumulate through the layers into significant deviations [8]. Other researchers instead blame the non-linear processes or the high dimensionality of the input [3], while still others argue that AEs occur where the decision boundary of the model lies close to the data manifold [20]. A different approach relates the existence of AEs to non-robust features that exist in standard datasets [11].

Alongside the attempts to explain AEs, many defense strategies have been proposed in order to increase the robustness of NNs against such attacks. Akhtar and Mian akhtar2018threat sorted those defenses along three main directions: 1) using modified training or modified input; 2) modifying the network by adding special layers or changing the loss/activation functions; and 3) using external models as network add-ons during classification. Though effective, these and other methods do not provide a complete defense against AEs. Many of them have been evaluated with only a very specific attack type or under particular parameters, and fail when encountering different or stronger AEs [5], or when considering the worst case of limited- or even perfect-knowledge attacks [4]. Moreover, many of the proposed defenses require extra computation and special training or architecture. Therefore, even the relatively successful defenses may not be applicable to already trained models, or to tasks that require quick decisions or complex and sophisticated modeling.

In the current research, we seek a simple and fast method, in terms of both implementation and computational resources, to detect AEs. In order for this method to be applicable to different types of NNs, the defense should not require a special architecture or changes to the (already trained) network's parameters. A natural direction is to modify the input during testing, in a way that cancels out the subtle adversarial perturbations. Previous such methods, like feature squeezing [22], have been shown to be highly successful in black-box scenarios where the attacker is unaware of the defense. However, they can be broken easily when the input modification is taken into account while attacking [4]. Of course, the defender can respond by increasing the protective distortion level of the defense, but this will eventually degrade the performance of the network in general, since benign inputs will become unrecognizable as well.
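As a concrete illustration of this kind of test-time input modification, a bit-depth reduction in the spirit of feature squeezing [22] can be sketched as follows (a hypothetical re-implementation for illustration, not the original code; pixel values are assumed to lie in [0, 1]):

```python
import numpy as np

def bit_depth_squeeze(x, bits=4):
    """Quantize pixel values in [0, 1] to 2**bits levels, so that
    perturbations smaller than half a quantization step are rounded away.
    This is an illustrative input-modification defense, not the paper's."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# Two pixels that differ by a sub-quantization perturbation collapse
# to the same squeezed value.
x = np.array([0.50, 0.52, 0.40])
sq = bit_depth_squeeze(x, bits=4)
assert sq[0] == sq[1]
```

Lowering `bits` strengthens the squeezing but, as noted above, eventually makes benign inputs unrecognizable as well.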

1.2 Feature Visualization

At this point we take advantage of the progress made in a parallel line of research, called feature (or network) visualization [16], which aims to understand NN computation and make NNs more interpretable. Specifically, several methods have been proposed for highlighting the input features which contribute to the network’s decisions. Three main approaches to this end are: 1) input modification methods, which observe how changing the input to a NN affects the output; 2) class activation mapping methods, which combine activation patterns of a specific (usually high-level) layer with additional information such as the output and the gradients; and 3) back-propagation based methods, which are at the focus of the current work.

Back-propagation based methods aim to reverse the inference process of the network. For example, Layer-wise Relevance Propagation (LRP) [2] propagates the final prediction backwards to the input in order to single out the most influential components (e.g. pixels). Namely, the classification decision is decomposed into pixel-wise relevance scores, according to a conservation principle which forces the relevance quantity to be preserved between layers. Different reverse-propagation rules have been suggested, as reflected in different variants of LRP such as LRP-0, LRP-ε and LRP-γ [14]. However, the most basic attribution rule can be defined as:

R_i^(l) = Σ_j ( a_i w_ij / Σ_{i'} a_{i'} w_{i'j} ) R_j^(l+1)    (1)

where a_i and w_ij denote activations and weights, and R^(l) is the relevance of the l-th layer. When restricted to the positive input space, this rule can be interpreted and theoretically explained as deep Taylor decomposition (DTD) of the ReLU activations [15].
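The rule above can be sketched for a single dense layer as follows (a minimal NumPy illustration with made-up activations and weights; the small stabilizer `eps` is an implementation convenience for numerical safety, not part of the stated rule):

```python
import numpy as np

def lrp0_dense(a, W, R_next, eps=1e-9):
    """Propagate relevance from a dense layer's output back to its input
    using the basic LRP rule: input neuron i receives a share of each R_j
    proportional to its contribution a_i * w_ij to pre-activation z_j."""
    z = a @ W                             # forward pre-activations, shape (out,)
    s = R_next / (z + eps * np.sign(z))   # stabilized ratio R_j / z_j
    return a * (W @ s)                    # R_i = a_i * sum_j w_ij * R_j / z_j

a = np.array([1.0, 2.0, 0.5])             # hypothetical input activations
W = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [-0.5, 0.6]])               # hypothetical weights, shape (3, 2)
R_out = np.array([0.7, 0.3])              # relevance at the layer's output
R_in = lrp0_dense(a, W, R_out)

# The conservation principle: total relevance is preserved across the layer.
assert np.isclose(R_in.sum(), R_out.sum(), atol=1e-6)
```

The conservation check holds by construction, since summing R_i over i reproduces Σ_j R_j term by term.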
Though robust against gradient shattering and able to produce fine-detailed heatmaps, the DTD method was shown to be instance-specific but not class-discriminative [10]: the generated maps recognize foreground objects, but those maps are almost identical regardless of the target class. Following this, CLRP [10] and SGLRP [12] have been developed in order to better specify the input features which trigger different output neurons. According to Gu et al. gu2018understanding, the patterns of positively activated neurons are determined by (and hence also back-propagated to) the pixels on salient edges of the image, independently of the target class. The reverse propagation of different categories differs, however, in the exact relevance values that are assigned to the neurons. Based on that, the CLRP method utilizes the relative difference in relevance values between LRP maps of the same image for different categories as a tool for locating discriminative input features. SGLRP [12] differs from CLRP in the way it determines and distributes the initial relevance scores: the post-softmax probabilities are used to weight non-target neurons in proportion to their contribution to the propagated heatmap, rather than weighting them with a fixed value. Both CLRP and SGLRP result in class-discriminative features that mainly consist of contour lines relevant to the target class.

We aim to harness the expressive nature of the above techniques for a better understanding of the influence of AEs on the network computational process. If these methods can reveal the input features that contribute to the decision made by the network, what will those features look like when the input is adversarial?

1.3 Visualization Techniques and AEs

Several studies have investigated the effect of AEs on feature visualization. Xu et al. xu2019interpreting used different visualization techniques to provide pixel-wise analysis of the adversarial effects. Xiao et al. xiao2018spatially used CAM to show how the attention of the network shifts to irrelevant regions of an AE, and interpreted this as an indication of the attack's effectiveness. Dong et al. dong2017towards used discrepancy maps to show how AEs differ from benign input, and proposed an adversarial training scheme which incorporates a consistency loss between true and adversarial maps in the objective function. The difference between adversarial and benign saliency maps was also explored by Gu and Tresp gu2019saliency. Finally, Chalasani et al. chalasani2018concise show that adversarially trained networks tend to produce sparser and more concise explanations when using the Integrated Gradients method.

2 LRP-based Methods and AEs

We present herein results of LRP-based methods on AEs, starting with some new insights achieved when using existing techniques, followed by the introduction of a novel LRP-based version.

2.1 Class Discriminative Heatmaps of AEs

When applying class-discriminative methods to AEs, the effect of the attacks on the network's decision making can be clearly visualized [7, 21, 9]. However, previous works have either only stated that the focus of the heatmap changes as a result of an attack [21, 9], or showed that for misclassification attacks, where the target class is very similar to the originally classified class, the heatmap focuses more on features shared by both classes [7]. We show here that, surprisingly, the same phenomenon can be observed also for strong attacks where the target class is very different from the original one. Figure 1 presents three such examples, where the heatmaps of the adversarial images resemble objects from the adversarial target classes. These findings are not trivial: one could speculate that for weak attacks, where misclassification is sufficient by itself, the attacks would enhance available features of an image that correspond to non-primary classes, whereas strong target-class attacks should require a different deceiving strategy, since the target class' features are not present in the original image. Explanations for such attacks, as mentioned in Subsection 1.1, usually involve more abstract and general aspects of NN computation such as "unmapped islands" in the input space. The current results suggest that for at least some of the strong attacks, the same strategy of enhancing humanly recognizable features of non-primary classes still applies.

Figure 1: Visualization of the adversarial effect on the network's decision neuron. The left column presents AEs created using the Carlini-Wagner (CW) attack (first row) and the LBFGS attack (second and third rows). In the middle column are heatmaps produced for the predicted (adversarial) categories, and in the right column are examples of the adversarial objects from the ImageNet validation set.

As was discussed by Iwana et al. iwana2019explaining, the above methods do not always provide heatmaps as expected. Nevertheless, for the given examples, the adversarial effect can be interpreted as simply shifting the network's attention from the principal features of the true object to the (more peripheral) target features. A more fundamental insight from this discussion is that the information about the different classes and their corresponding features exists in the inference process of both benign and adversarial inputs. The difference in the final decisions is a result of the way these features are weighted and prioritized.

2.2 Absolute Value Heatmaps for Primary Class Specification

Motivated by the above observations, we hypothesize that the network preserves the knowledge of the "true" classification even when its actual output is incorrect. As stressed by Gu et al. gu2018understanding,gu2019saliency, the class-specific information can be extracted from the saliency/relevance values assigned to the pixels rather than from the filtering effect itself. Previous works utilized this fact to generate class-discriminative heatmaps by comparing relevance values between heatmaps of different categories. However, the fact that the discriminative pixels have relatively higher values suggests that information about classes also hides within the absolute relevance values.

Specifically, we observe that the heatmap pixels with the highest absolute values belong to the primary object in the input image, no matter which output neuron the heatmap was reverse-propagated for. This holds even when the image is adversarial; when reverse-propagating the relevance of the target class, the pixels with the highest absolute relevance scores in the resulting heatmap still belong to the (true) primary object in that image. Figure 2 presents how such primary-class heatmaps are almost identical for different output neurons given some input image, as well as for the target neurons of different AEs which are based on that same image. The heatmaps were created by first reverse-propagating the one-hot encoding of the desired output neuron, using the standard DTD method [15], and then binarizing the pixels of the heatmap H so that only the top 5% of the pixels are preserved:

B_ij = 1 if H_ij > μ + c·σ, and B_ij = 0 otherwise    (2)

where μ and σ are the heatmap pixels' mean and standard deviation, respectively, and the multiplier c is chosen so that the top 5% of pixels survive.
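The binarization step of Equation 2 can be sketched as follows (the exact multiplier of the standard deviation that yields the top 5% is empirical; `n_std` here is a hypothetical knob, not a value reported in the paper):

```python
import numpy as np

def binarize_heatmap(H, n_std=2.0):
    """Keep only the strongest heatmap pixels: a pixel survives if its
    relevance exceeds mean + n_std * std of the heatmap (Equation 2).
    n_std is tuned so that roughly the top few percent of pixels remain."""
    mu, sigma = H.mean(), H.std()
    return (H > mu + n_std * sigma).astype(np.uint8)
```

For approximately normal relevance distributions, `n_std=2.0` keeps on the order of 2-3% of pixels; the threshold would be tuned per heatmap statistics to hit the 5% target.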

Figure 2: Our new method produces similar heatmaps for both benign and adversarial categories. First column: AEs created using untargeted CW and targeted LBFGS attacks. Second and third columns: heatmaps for the original (benign) image's top-1 prediction and for another randomly selected prediction among the top-5 predictions, respectively. Fourth and fifth columns: heatmaps for the AE's top-1 prediction and for another randomly selected prediction among the top-5 predictions, respectively.

3 Heat and Blur

Following the previous section, we now have a tool to highlight the pixels of the "true" objects in both benign images and AEs. We suggest using this knowledge in the following "heat and blur" defense procedure, which is summarized in Algorithm 1.

Input: An image x
Parameter: Blurring factor σ
Output: A "cleaned" image x_c

1:  Input the image x to the model and get the output y.
2:  Obtain the heatmap H = LRP(x, y).
3:  Use Equation 2 to get the binarized heatmap B.
4:  Blur the original image x using a Gaussian filter with σ to obtain x_b.
5:  Restore the original pixels of x that correspond to the nonzero entries of B and plant them in the blurred image x_b to obtain x_c.
6:  return x_c
Algorithm 1 Heat and Blur

In this method, we first input a suspicious image to a pre-trained model in order to obtain a heatmap of the primary object's outlines, as described in Section 2.2. Then we significantly modify the image (in this case, using a Gaussian filter) to guarantee the cancellation of adversarial perturbations, which are by definition restricted in amplitude in order to remain below the human detection level. The resulting image is no longer adversarial, but may still be misclassified due to the massive distortion. We therefore restore the original values of the pixels that belong to the binarized heatmap. In this way, the contour lines of the "true" object are sharp and that object stands out, while the background remains blurry. It is important to preserve the background to some degree, since its texture and colors contribute to the decision making of the network, but its fine details are insignificant for the classification task. The modified image is then used as a new input for the final decision (see Figure 3 for a visual demonstration).
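The blur-and-restore steps of Algorithm 1 can be sketched as follows (using SciPy's Gaussian filter; `hmap_mask` stands for the binarized heatmap of Equation 2, computed elsewhere):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_and_blur(image, hmap_mask, sigma=1.0):
    """Sketch of the Heat and Blur step: blur the whole image to wash out
    low-amplitude adversarial perturbations, then restore the original
    pixels wherever the binarized heatmap marks the 'true' object."""
    # Blur each color channel independently with a Gaussian filter.
    blurred = np.stack([gaussian_filter(image[..., c], sigma=sigma)
                        for c in range(image.shape[-1])], axis=-1)
    # Plant the untouched original pixels back where the heatmap is active.
    cleaned = blurred.copy()
    keep = hmap_mask.astype(bool)
    cleaned[keep] = image[keep]
    return cleaned
```

Larger `sigma` cancels stronger perturbations at the cost of a heavier distortion of the background, which is exactly the trade-off controlled by the blurring parameter below.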

Figure 3: Demonstration of the Heat and Blur method. From left to right: a) AE where ‘Rhodesian ridgeback’ is misclassified as ‘Egyptian cat’; b) the blurred AE with ; c) the recovered pixels according to the binarized heatmap; d) the ‘cleaned’ image is now classified as ‘Rhodesian ridgeback’

We find that the above method is very efficient in "clearing" images of adversarial effects, at the cost of a moderate decrease in the network's performance on benign images. The balance between accuracy on benign (or cleaned) images and susceptibility to AEs is controlled by the parameter σ, which determines the degree of blurring (Figure 4). The procedure requires two forward passes and one backward pass, which are computationally efficient on platforms designated for deep learning. However, we recognize that the traditional top-1 accuracy measure is not sufficiently informative as an evaluation metric.

Figure 4: Effect of the blurring parameter σ on the accuracy on benign inputs (solid blue line), on LBFGS AEs with respect to the true label (red dotted line) and to the adversarial target category (black dashed line), and on the distance from the original input (green dash-dotted line) in the normalized [0, 1] space. The region of distances below the commonly used threshold of 0.031 is marked with a green background.

4 Evaluation Measurements

Even though it is currently the most common measurement of the success of both attacks and defenses, top-1 accuracy answers the very limited question of "did the NN classify the input as the exact correct label?". This way, misclassifying 'tiger shark' as 'white shark' is counted as being just as wrong as misclassifying 'tiger shark' as 'broom', for example. Even the top-5 accuracy measure does not always capture the whole classification picture, since it only informs of whether the correct label appeared among the top-5 predictions, but does not indicate where it is located within those predictions, or how far it is from their semantic neighbourhood. There are many applications where this distinction is important; for example, if one wants to visually identify weapons in an image for security reasons, and an attacker masks weapons as animals, it does not matter whether a defense strategy turns the adversarial 'animal' into one kind of weapon or another, as long as the object is now identified as a threat. Alternatively, many attacks only switch between the top-k (where k is small) predictions of the model, which in many cases is not necessarily a wrong classification per se. Many images consist of several objects, for example an image labelled as 'harvester' in which 'hay' also appears. A top-1 classification of that image as 'hay' provides true information regarding the content of that image, even though the correct label of 'harvester' only appears as the second-highest prediction due to the attack. On the other hand, other attacks may entirely change the semantic neighborhood of the top predictions, but this dramatic and more fundamental change is currently unaccounted for. To summarize, the current top-k accuracy metric does not express the semantic and conceptual gap between the correct and adversarial predictions, while this may be valuable information in many use-cases.

We therefore suggest adapting a measure of ranking quality that is widely used in the Information Retrieval literature, called Normalized Discounted Cumulative Gain (NDCG) [6]. This measure was originally designed to apply graded relevance scores to the results of search engines, based on the position of each element in the results list. We propose to use the pre- and post-softmax predictions of the model for a benign input x as a source of relevance scores and as an ideal grading, by which attack and defense results should be normalized. Specifically, a set of post-softmax predictions for a specific benign input x will be scored as follows:

rel_i = z_i / Σ_{j∈S} z_j  if i ∈ S,  and rel_i = 0 otherwise

where z_i is the pre-softmax value of output neuron i, and S is the set of highest predictions whose post-softmax probabilities sum up to c, a parameter chosen as the prediction coverage. For example, for c = 0.9, only the highest predictions that sum up to 0.9 will be scored according to their original normalized pre-softmax values, and all the other predictions will be zero. These scores will be used as relevance scores for the AE x_adv (or alternatively the cleaned image x_c) that was created on the basis of x:

rel_i^adv = rel_π(i)

where π(i) is the category of the i-th highest prediction within the output for x_adv and rel_π(i) is the benign score that corresponds to that category (note that this category is not necessarily located in the i-th place of the benign ranking), and the same coverage parameter c is used.

The discounted cumulative gain of the p highest predictions for an image is given by:

DCG_p = Σ_{i=1}^{p} rel_i / log_2(i + 1)

and the normalized DCG is defined as:

NDCG_p = DCG_p / IDCG_p

where we define IDCG_p as the ideal DCG given by the predictions for the benign image. Therefore the normalized DCG for an AE is calculated as:

NDCG_p(x_adv) = DCG_p(x_adv) / IDCG_p(x).
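The metric can be sketched with toy relevance scores as follows (the mapping from model outputs to `benign_rel` is assumed to follow the coverage rule above; the numbers are illustrative only):

```python
import numpy as np

def dcg(rel):
    """Discounted cumulative gain: relevance scores discounted by rank."""
    ranks = np.arange(1, len(rel) + 1)
    return float(np.sum(np.asarray(rel) / np.log2(ranks + 1)))

def ndcg(adv_rel, benign_rel):
    """NDCG of the AE's ranked relevance scores, normalized by the ideal
    DCG obtained from the benign image's own (descending) scores."""
    ideal = dcg(sorted(benign_rel, reverse=True))
    return dcg(adv_rel) / ideal

# Toy benign relevance scores for three categories, already sorted.
benign = [0.6, 0.3, 0.1]
unchanged = ndcg([0.6, 0.3, 0.1], benign)  # AE keeps the benign ranking
swapped = ndcg([0.1, 0.3, 0.6], benign)    # AE reverses the ranking
assert abs(unchanged - 1.0) < 1e-9
assert swapped < 1.0
```

An AE that merely swaps nearby top predictions thus scores close to 1, while one that buries the benign top categories scores near 0, which is the graded behaviour top-1 accuracy cannot express.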
5 Experimental Evaluation

Figure 5: Average NDCG scores for the 5 highest predictions of the model, compared between AEs (‘Adv’), AEs after our method was applied (‘Cleaned’) and cleaned benign images (‘Control’). Top-1 accuracy results for each group are presented in the legends.

The experiments were conducted using the VGG19 architecture with the ImageNet validation dataset. They include only benign images which were correctly classified by the model, and successfully applied AEs. For each randomly selected image, an adversarial counterpart was created using the Foolbox toolbox [17] (https://github.com/bethgelab/foolbox). The applied attacks consisted of bounded targeted and untargeted Carlini-Wagner (CW), targeted LBFGS, and untargeted boundary attacks. For the targeted attacks, the target category was randomly selected, and the distance threshold was set to 0.02. The suggested defense procedure was based on the LRP method using the iNNvestigate toolbox [1] (https://github.com/albermax/innvestigate), and the threshold for the heatmap mask was empirically chosen to be the top 5% of the LRP heatmap pixels. The blurring of the rest of the image was achieved using a Gaussian filter with σ between 0.8 and 1.2. Each experimental group consisted of 500-800 pairs of benign and adversarial images, except for the boundary attack group with 175 pairs, and the coverage parameter c in the NDCG method was set to 0.99 for both benign and adversarial images. The heatmaps in Figure 1 were created using the code provided by [12] (https://github.com/uchidalab/softmaxgradient-lrp).

5.1 Evaluation of Heat and Blur as a Defense

We used the NDCG measure to evaluate the effectiveness of our suggested defense procedure against four different attack types. As can be observed in Figure 5, the different attacks have varying effects on the predictions of the model. For example, the top-1 accuracy of both targeted and untargeted AEs is 0, but these groups' top-category NDCG scores are very different: the targeted attacks cause the model to classify the AE as a random (and usually unrelated) class, and hence the NDCG score is very close to 0, whereas the untargeted attacks often direct the model to a category within the top-k predictions of the original benign counterpart, as reflected in the NDCG score of 0.5. This higher score captures the fact that the misclassification is not always so dramatic and sometimes not even wrong, especially in cases where the input consists of several objects or admits several possible interpretations. The sharp increase of the untargeted AEs' graphs is another indication of an attack strategy that often only switches the location of the top category, while the slow and incomplete rise of the targeted attacks' graph shows how these attacks completely change the relationships and semantic neighborhood of the top-k categories. The effectiveness of our defense also varies, though it can be observed that it successfully recovers the correct context of the predictions, even when the top-1 accuracy is limited. Also note that for the benign control group, the NDCG scores are higher and relatively consistent across the different experimental groups, whereas the top-1 accuracy measure is more sensitive to noise.

5.2 Adaptive Attacks

Since some of the pixels remain untouched, one could argue that an adaptive attack is still possible, where perturbations are only applied to the heatmap pixels of the "true" object. In this way, the defense would fail to clear the adversarial effect and the network would still misclassify the image. In order to explore this possibility, we created a set of new attacks that indeed consist of perturbations to the top-5% pixels alone (see Algorithm 2 for the detailed procedure). We used several attack types such as CW and PGD, and found that constraining the possible perturbations to such a small portion of the image pixels has a dramatic impact on the attack process. If the perturbation size is not strictly bounded, the resulting perturbations are above the human detection threshold. When trying to apply attacks with L2- or L∞-norm bounded perturbations, the algorithm usually fails to find an attack (we succeeded in less than 1% of the attempts). In addition, all the AEs that we did manage to produce were (wrongly) classified by the network with low confidence and into a semantically related class. We believe that further refinements of our suggested defense, combined with a more thorough analysis of the vulnerable pixels that belong to the heatmap, will help circumvent even the few currently successful attacks.

Input: An image x, an attack method Att(·)
Parameter: Number of iterations n
Output: An adaptive AE x_adv, or failure

1:  x_adv = x
2:  for n iterations do
3:     x' = Att(x_adv)
4:     hmap_mask = binarized_heatmap(x')
5:     x_adv[hmap_mask] = x'[hmap_mask]
6:     x_adv = clip(x_adv) to be in the allowed pixel range
7:     if x_adv is adversarial then
8:        return x_adv
9:     end if
10: end for
11: return failure
Algorithm 2 Adaptive Attack
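The adaptive attack can be sketched as follows, with `attack`, `heatmap_mask_fn` and `is_adversarial` as stand-ins for an off-the-shelf attack step, the top-5% heatmap binarization, and the model's misclassification check (all assumptions, supplied by the caller):

```python
import numpy as np

def adaptive_attack(x, attack, heatmap_mask_fn, is_adversarial, n_iter=10):
    """Sketch of Algorithm 2: repeatedly run an unconstrained attack, but
    keep only the perturbations that fall inside the binarized heatmap
    mask, so the pixels restored by the defense carry the adversarial
    signal."""
    x_adv = x.copy()
    for _ in range(n_iter):
        x_pert = attack(x_adv)                      # unconstrained attack step
        mask = heatmap_mask_fn(x_pert).astype(bool)
        x_adv[mask] = x_pert[mask]                  # perturb masked pixels only
        x_adv = np.clip(x_adv, 0.0, 1.0)            # stay in valid pixel range
        if is_adversarial(x_adv):
            return x_adv
    return None                                     # attack failed
```

Restricting each step to the mask is what makes the search so much harder: the attack must concentrate all of its budget on a small fraction of the pixels.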

6 Discussion and Conclusions

The above results exemplify how interpretation methods such as LRP can be used for defense against AEs. We chose to focus on LRP since this method is computationally efficient and conceptually straightforward, and does not require many assumptions or much hyperparameter tuning. Using a Gaussian filter for the defensive distortion is not essential, and other types of randomized noise or input transformations may give comparable or even better results, as the connection between adversarial and corruption robustness was thoroughly explored by Ford et al. ford2019adversarial. The choice of the 5% highest pixels for localization of the primary object, though empirically effective, has some limitations. The relevance heatmaps are sometimes rather sparse, especially in cases where the model predictions are vague, and as a result the top 5% of pixels may be too diluted. A more sophisticated determination of the percentage of pixels to restore may improve the classification accuracy in those cases. Overall, the simplicity of the suggested method could enhance its robustness, as will be discussed below, and could also facilitate integration with other defenses, for example scanning the restored pixels for statistically atypical values, or defending against possible attacks on the interpretation method itself.

Recent studies have demonstrated that interpretability methods may themselves be vulnerable to adversarial attacks. For example, it was shown that saliency methods are sometimes sensitive to input transformations such as adding a constant to the input [13]; however, those results are less relevant to our defense, since the highest-valued pixels of the attacked heatmaps seem to remain intact. In the work of Zhang et al. zhang2018interpretable, it was shown that pixel manipulation can indeed produce wrong interpretation heatmaps, but their method was not tested against LRP-based methods. The authors did, however, show that the transferability of AEs across different interpreters is low, a fact that makes black-box attacks more difficult than standard classification attacks; they also presented defense methods against such attacks. Ghorbani et al. ghorbani2019interpretation showed that several interpretation methods are fragile to small perturbations which change the produced heatmaps while maintaining the same predictions. They explain this vulnerability as a result of the high dimensionality and non-linearities in deep networks, which affect gradient-based interpretations: intuitively, two sets of classes can be separated similarly using different boundaries, so that the gradients may change and lead to misleading interpretations. Since their attack method was only tested against gradient-based interpretations where the prediction of the model was intact, it is left for future research to determine whether methods like our LRP version, which are not based directly on gradients, are vulnerable as well. If so, then new intuition and explanations are required for understanding the effect of such attacks, as well as new defense strategies.


This research was supported by the Ministry of Science & Technology, Israel, and by the Ariel Cyber Innovation Center in conjunction with the Israel National Cyber directorate in the Prime Minister’s Office.


  • [1] M. Alber, S. Lapuschkin, P. Seegerer, M. Hägele, K. T. Schütt, G. Montavon, W. Samek, K. Müller, S. Dähne, and P. Kindermans (2019) iNNvestigate neural networks!. Journal of Machine Learning Research 20 (93), pp. 1–8. Cited by: §5.
  • [2] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7), pp. e0130140. Cited by: §1.2.
  • [3] A. N. Bhagoji, D. Cullina, and P. Mittal (2017) Dimensionality reduction as a defense against evasion attacks on machine learning classifiers. arXiv:1704.02654. Cited by: §1.1.
  • [4] N. Carlini and D. Wagner (2017) Adversarial examples are not easily detected: bypassing ten detection methods. In ACM Workshop on Artificial Intelligence and Security, pp. 3–14. Cited by: §1.1, §1.1.
  • [5] N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In SP, pp. 39–57. Cited by: §1.1.
  • [6] W. B. Croft, D. Metzler, and T. Strohman (2010) Search engines: information retrieval in practice. Vol. 520, Addison-Wesley Reading. Cited by: §4.
  • [7] Y. Dong, H. Su, J. Zhu, and F. Bao (2017) Towards interpretable deep neural networks by leveraging adversarial examples. arXiv:1708.05493. Cited by: §2.1.
  • [8] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv:1412.6572. Cited by: §1.1.
  • [9] J. Gu and V. Tresp (2019) Saliency methods for explaining adversarial attacks. arXiv:1908.08413. Cited by: §2.1.
  • [10] J. Gu, Y. Yang, and V. Tresp (2018) Understanding individual decisions of CNNs via contrastive backpropagation. In ACCV, pp. 119–134. Cited by: §1.2.
  • [11] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry (2019) Adversarial examples are not bugs, they are features. arXiv:1905.02175. Cited by: §1.1.
  • [12] B. K. Iwana, R. Kuroki, and S. Uchida (2019) Explaining convolutional neural networks using softmax gradient layer-wise relevance propagation. arXiv:1908.04351. Cited by: §1.2, §5.
  • [13] P. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt, S. Dähne, D. Erhan, and B. Kim (2019) The (un)reliability of saliency methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 267–280. Cited by: §6.
  • [14] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K. Müller (2019) Layer-wise relevance propagation: an overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 193–209. Cited by: §1.2.
  • [15] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. Müller (2017) Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition 65, pp. 211–222. Cited by: §1.2, §2.2.
  • [16] C. Olah, A. Mordvintsev, and L. Schubert (2017) Feature visualization. Distill 2 (11), pp. e7. Cited by: §1.2.
  • [17] J. Rauber, W. Brendel, and M. Bethge (2017) Foolbox: a Python toolbox to benchmark the robustness of machine learning models. arXiv:1707.04131. Cited by: §5.
  • [18] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv:1312.6199. Cited by: §1.
  • [19] P. Tabacof and E. Valle (2016) Exploring the space of adversarial images. In IJCNN, pp. 426–433. Cited by: §1.1.
  • [20] T. Tanay and L. Griffin (2016) A boundary tilting persepective on the phenomenon of adversarial examples. arXiv:1608.07690. Cited by: §1.1.
  • [21] C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song (2018) Spatially transformed adversarial examples. arXiv:1801.02612. Cited by: §2.1.
  • [22] W. Xu, D. Evans, and Y. Qi (2018) Feature squeezing: detecting adversarial examples in deep neural networks. In NDSS, Cited by: §1.1.