White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks

04/06/2021
by   Meghna P Ayyar, et al.

In recent years, deep learning has become prevalent to solve applications from multiple domains. Convolutional Neural Networks (CNNs) in particular have demonstrated state-of-the-art performance for the task of image classification. However, the decisions made by these networks are not transparent and cannot be directly interpreted by a human. Several approaches have been proposed to explain and understand the reasoning behind a prediction made by a network. In this paper, we propose a topology for grouping these methods based on their assumptions and implementations. We focus primarily on white box methods that leverage the information of the internal architecture of a network to explain its decision. Given the task of image classification and a trained CNN, this work aims to provide a comprehensive and detailed overview of a set of methods that can be used to create explanation maps for a particular image, which assign an importance score to each pixel of the image based on its contribution to the decision of the network. We also propose a further classification of the white box methods based on their implementations to enable better comparisons and help researchers find methods best suited for different scenarios.


1 Introduction

Deep learning approaches, which form part of what we today call Artificial Intelligence (AI), have become indispensable for a wide range of applications requiring the analysis and understanding of large amounts of data. They produce promising results that can outperform human capabilities in various decision tasks relating to visual content classification and understanding, such as face detection [16], object detection and segmentation [10], image denoising [48], and video-based tasks like sports action recognition [25] and saliency detection [38], amongst others. The success of deep learning-based systems in these tasks has also paved the way for applications in a variety of medical diagnosis tasks, such as cancer detection [49] and Alzheimer's disease detection [1] on different imaging modalities, to name a few. Along with the usefulness of these tools, the trustworthiness and reliability of such systems are also being questioned.

Though the results of deep learning models have been exemplary, they are not perfect: they can produce errors, are sensitive to noise in the data, and often lack the transparency needed to verify the decisions they make. A specific example relates to the visual task of object classification from an image. The study by Ribeiro et al. [33] showed that a trained network performing supervised image classification used the presence of snow as the distinguishing feature between the "Wolf" and "Husky" classes present in the dataset. Such limitations raise ethical and reliability concerns that need to be addressed before such systems can be deployed and adopted on a wider scale. The objective of explainable AI/deep learning is to design and develop methods that can be used to understand how these systems produce their decisions.

The behaviour described in the case of the wolf/husky classification has been termed the problem of a trained classifier behaving like a "Clever Hans" predictor [24]. Explanation methods aid in unmasking such spurious correlations and biases in the model or data, and also in understanding the failure cases of the system. If we can comprehend the reasoning behind the decision of a model, it could also help uncover associations that had previously gone unobserved, which could guide future research. It is important to mention that explainability focuses on the attribution of the output to the input. It does not answer questions about the causality of the features or factors that have led to a decision. That is, the explainers are only correlation-based (input-output) and do not make causal inferences.

The current study focuses on the task of supervised image classification using a specific family of deep neural network architectures, i.e. Convolutional Neural Networks (CNNs). CNNs have become one of the most successful architectures for AI tasks relating to images, and hence the methods presented in the subsequent sections focus on finding the relation between the predicted output classes and the input features, that is, the pixels of the image. In the remainder of the paper, the topology of the methods is presented in Sec. 2 and the detailed problem definition in Sec. 3. The other sections present the different explanation methods in detail, and Sec. 7 provides analyses and discussions of the methods covered in this study.

2 Topology of Explanation Methods for Image Classification Tasks

In the book by Samek et al. [36], the authors present recent trends in explainable AI research and some directions for future exploration. They present a topology for the various explanation methods, such as meta-explainers, surrogate/sampling-based, occlusion-based and propagation-based methods, to name a few. However, with the addition of newer methods and their adaptation to different types of neural networks and datasets, we propose to update the topology based on the domain to which the methods are applied and their inherent design. Comparing recent studies, two major types of explanation methods exist: i) black box methods and ii) white box methods. In this review, for both cases, we mainly focus on the explanation of decisions of trained Deep Neural Network (DNN) classifiers. This means that, for each sample of the data, the methods we review explain the decision of the network. This is why they are called "sample-based" methods [32]. In the following, we briefly explain the "black box" methods and then focus on "white box" methods for image classification tasks.

2.1 Black Box Methods

Black box refers to an opaque system: the internal functioning of the model is kept hidden or is not accessible to the user. Only the input and the output of the system are accessible, and such methods are termed black box methods as they are model agnostic.

There are multiple ways to examine what a black-box model has learned [20]. A prominent group of methods focuses on explaining the model as a whole by approximating the black-box model, e.g. a neural network, with an inherently interpretable model. One such example is the use of decision trees [5]. Decision trees are human-interpretable as the output is based on a sequence of decisions starting from the input data. To approximate a black-box trained network, Frosst et al. [13] have used multiple input-output pairs generated by the network to train a soft decision tree that mimics the network's behaviour. Each node i makes a binary decision and learns a filter w_i and a bias b_i, and the probability of the right branch of the tree being selected is given by Eq. (1), where σ is the sigmoid logistic function and x is the input.

p_i(x) = σ(x w_i + b_i)     (1)

The leaf nodes learn a simple distribution over the different classes present in the dataset. This method can be qualified as a dataset-based explanation, as the decision tree is built for the whole dataset of input-output pairs.

Sample-based black-box methods deal with explaining a particular output of the model. These methods are not focused on understanding the internal logic of the model for all classes as a whole but are restricted to explaining the prediction for a single input. The Local Interpretable Model-agnostic Explanations (LIME) method [33] is one such approach that derives explanations for individual predictions. It generates multiple perturbed samples of the same input data and the corresponding outputs from the black box, and trains an inherently interpretable model, such as a decision tree or a linear regressor, on this combination to provide explanations.

Taking into account human understanding of visual scenes, such as the attraction to meaningful objects in visual understanding tasks, the regions of an image where objects are present should contribute more strongly to the prediction in image classification tasks. Based on this logic, some methods occlude different parts of the image iteratively using a sliding window mask [51]. Figure 1 illustrates how a gray-valued window is slid across the image to occlude different parts of it. By observing the change in the prediction of the classifier when different regions are hidden, the importance of each region for the final decision is calculated.
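The following minimal sketch illustrates this occlusion procedure, assuming a generic PyTorch classifier `model` and a preprocessed input tensor; the function name, window size and stride are illustrative, not part of the original method description.

```python
import torch

def occlusion_map(model, image, target_class, window=32, stride=16, fill=0.5):
    """Slide a gray-valued window over the image and record the drop in the
    target class score; larger drops mark more important regions."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base_score = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heatmap = torch.zeros(((h - window) // stride + 1, (w - window) // stride + 1))
    for i, top in enumerate(range(0, h - window + 1, stride)):
        for j, left in enumerate(range(0, w - window + 1, stride)):
            occluded = image.clone()
            occluded[:, top:top + window, left:left + window] = fill  # gray patch
            with torch.no_grad():
                score = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heatmap[i, j] = base_score - score  # importance = score drop
    return heatmap
```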

Figure 1: Gray-value sliding window used to occlude different parts of the image. Image taken from ImageNet [11]

Fong et al. [12] also build explanations of which region contributes most to the DNN decision by masking. Instead of using a constant gray-value mask, they formulate the explanation as a search for the minimal mask that changes the classification score for the given image the most. The mask applies a meaningful transformation that models the image acquisition process, such as blurring. They find the mask by minimizing the expectation of the output classification score of the network on the image perturbed with the blurred mask. Instead of using a single mask to perform the search, they apply the perturbation mask stochastically to the image. In addition, L1 and total variation regularization are used to ensure that the final mask deletes the smallest subset of the image and has a regular structure.

Nevertheless, these methods only help identify whether the network bases its prediction on a non-intuitive region of the image. The explanations are not useful for identifying which layers or filters in a DNN classifier cause these wrong correlations between input image regions and the prediction. Thus, they cannot be used to improve the network's performance. Hence the white box methods, which allow for analyzing the internal layers of the network, are more interesting.

2.2 White Box Methods

The term "white box" refers to a clear box, symbolizing the ability to see into the inner workings of the model, i.e. its architecture and parameters. Due to the extensive research on DNNs, they are no longer unknown architectures, and studies such as Yosinski et al. [50] have been able to show the types of features that are learnt at the different layers of a DNN. Therefore, multiple methods aim to exploit the available knowledge of the network itself to create a better understanding of the prediction and of the internal logic of the network, thus allowing for further optimization of the architecture and hyperparameters of the model.

In this review, we deal with the specific case of deep neural network classifiers such as Convolutional Neural Networks (CNNs). We propose the following taxonomy for existing "white box" methods based on the approach used for generating explanations: i) methods based on linearization of the deep CNN, ii) methods based on the network structure, and iii) methods based on an adversarial approach. Due to the rapidly expanding research in the field, we do not claim our taxonomy to be complete, but we believe it addresses the main trends.

Figure 2: Architecture of a standard CNN: convolutional layers with non-linear activations, pooling layers and a perceptron at the end for classification

3 Problem Definition

This section provides the basic terminology and the definitions required to understand the type of network that we will be focusing on, the notations used and how the results are to be visualized.

3.1 Network Definition

The problem under consideration is the image classification task. To define the task, first consider a convolutional neural network (CNN). A simple AlexNet-like [23] CNN is illustrated in Fig. 2. The network consists of a series of convolutional layers, non-linear activation layers and pooling layers that form the convolution (conv) block, as illustrated in Fig. 3. The conv blocks are followed by fully connected (FC) layers that are simple feed-forward neural networks [17]. The Rectified Linear Unit (ReLU) activation shown in Eq. (2) and Max Pooling are the most commonly used while building CNN classifiers [52]. The last layer of the network has the same dimension as the number of classes in the problem; in the example in Fig. 2 it is 10, implying there are 10 categories of objects to recognize.

ReLU(x) = max(0, x)     (2)

3.2 Notations

Consider a CNN f that takes as input an image x of size H × W, expressed as x ∈ R^{H×W×3}, and whose classification output is a C-dimensional vector S. Here C represents the number of classes and the image x represents the input features of the network. The score S_c(x) is the output classification score of the image x for the class c. The network thus models a mapping f: R^{H×W×3} → R^C. The output score vector is usually normalized to approximate a probability; thus each S_c is restricted to the interval [0, 1] and the score vector sums to 1.

Figure 3: Structure of a convolution block as proposed by Goodfellow et al. [17]

The problem of explanation consists in assigning to each pixel x_{ij} an importance score with respect to its contribution to the output S_c; in other words, in producing a relevance score map R for the pixels and/or the features of internal convolution layers of the network with respect to the output S_c, where the class c can either be the correct label class or a different class whose explanation can be used to analyze the cause of that classification.

To "explain" pixel importance to the user, a visualization of the scores in R is usually performed by computing "saliency/heat maps" and superimposing them on the original image.

3.3 Saliency/ Heat maps

A saliency/heat map is the visualization of the relevance score map R using colour look-up tables (LUTs) that map the scores onto a colour scale from blue to red, as illustrated in Fig. 4. This form of visualization is necessary for the user to understand and glean insights from the results of the explanation methods. In the current illustration, we have used the "jet" colour map, which has a linear transition mapping the maximum value to red, the middle values to yellow-green, and the lowest values to blue. Other LUTs can also be used for the visualization of heat maps, but we have chosen "jet" as it is one of the more popular colour maps and is intuitively understandable for a human observer.
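As a small illustrative sketch, such a visualization can be produced with matplotlib and the "jet" colour map; the function name and the alpha blending value are illustrative choices, not part of any particular method.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_heatmap(image, relevance, alpha=0.5):
    """Overlay a relevance score map on the image with the 'jet' colour map.
    `image` is an HxWx3 array in [0, 1]; `relevance` is an HxW score map."""
    # Normalize the relevance scores to [0, 1] before applying the LUT
    r = relevance - relevance.min()
    r = r / (r.max() + 1e-12)
    plt.imshow(image)
    plt.imshow(r, cmap="jet", alpha=alpha)  # blue = low, red = high importance
    plt.axis("off")
    plt.show()
```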

Given this kind of network classifier and problem formulation, several methods have been proposed that can be employed for the visualization of relevance score maps given a particular image.

Figure 4: A saliency map visualization for a sample image. Sample image taken from the ImageNet dataset [11]

4 Methods based on Linearization of the Deep-CNN

A (convolutional) neural network is a non-linear classifier. It can be defined as a mapping f from the input (feature) space to the output score space. The methods based on the linearization of a CNN produce explanations by approximating the non-linear mapping f. One of the commonly used approximations is the linear approximation

S_c(x) ≈ w^T x + b     (3)

where w are the weights, x is the input and b is the bias term of the approximation. Different methods employ different ways to calculate the weight and bias parameters of this approximation and thus produce different explanations.

4.1 Deconvolution Network based method

The Deconvolution Network (DeconvNet) proposed by Zeiler et al. [51] is a network that reverses the mapping of a CNN. It builds a mapping from the output score space back to the space of the input pixels. It does not require retraining and directly uses the learned filters of the CNN. Starting with the input image x, a full forward pass through the CNN is done to compute the feature activations throughout the layers. To visualize the features of a particular layer, the corresponding feature maps from that CNN layer are passed to the DeconvNet. In the DeconvNet, the three steps i) unpooling, ii) rectification and iii) filtering are performed at each layer iteratively until the input feature layer is reached.

  • Unpooling: The Max Pooling operation in a CNN is non-invertible. Hence, to approximately reverse it, at each layer the locations of the maxima are saved during the forward pass in a matrix called the Max location switches. During unpooling, the values from the previous layers are mapped only to the locations of the maxima, and the remaining positions are assigned 0 (a minimal code sketch of this mechanism is given at the end of this subsection).

  • Rectification: After applying the unpooling, a ReLU function (Eq. (2)) is applied to the maps to ensure that only positive influences on the output are backpropagated.

  • Filtering: This operation is the inverse operation to the convolution in the forward pass. To achieve this the DeconvNet convolves the rectified maps with the vertically and horizontally flipped version of the filter that has been learned by that layer in the CNN. The authors show that filters thus defined from learnt CNN filters are the deconvolution filters. We show the mathematical derivation of this in Appendix A.

Performing these operations iteratively from the layer of our choice down to the input pixel layer helps reconstruct the regions of the input image x that correspond to the features of that layer. The importance of pixels is then expressed with a heat map.
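The unpooling step with Max location switches can be illustrated with PyTorch's pooling utilities; this is only a toy example on a random map, not the DeconvNet implementation itself.

```python
import torch
import torch.nn.functional as F

# Minimal illustration of the "max location switches": the forward max-pooling
# records the positions of the maxima, and unpooling places values back only at
# those positions, filling the rest with zeros.
feature_map = torch.randn(1, 1, 4, 4)           # a toy activation map
pooled, switches = F.max_pool2d(feature_map, kernel_size=2, return_indices=True)
unpooled = F.max_unpool2d(pooled, switches, kernel_size=2)
print(unpooled)  # non-zero only where the maxima were located in the forward pass
```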

4.2 Gradient Backpropagation

The gradient backpropagation method [40] was proposed to explain the prediction of a model based on its locally evaluated gradient. The local gradient of the output classification score S_c with respect to the input, evaluated at a particular image x_0, is used to calculate the weights w of Eq. (3). This means that the linear approximation of the non-linear mapping is formulated as a first-order Taylor expansion of S_c in the vicinity of the particular image x_0. The weights are thus calculated as in Eq. (4):

w = ∂S_c/∂x |_{x_0}     (4)

The partial derivative of the output classification score S_c with respect to the input corresponds to the gradient calculation of a single backpropagation pass for a particular input image x_0. It is equivalent to the backpropagation step performed during training, which usually operates on a batch of images; the notation of the gradient at x_0 indicates that the backpropagation is for just one image. Also, during the training of a CNN, the backpropagation step stops at the second layer of the network for efficiency, as the aim is not to change the input values. With this method, however, the backpropagation is performed up to the input layer to inspect which pixels affect the output the most.

The final heat map relevance score for a particular pixel (i, j) of the input 2D image is calculated as shown in Eq. (5) in the case of a gray-scale image:

R_{ij} = |w_{ij}|     (5)

For an RGB image, the final map is calculated as the maximum absolute weight of that pixel over the weight matrices of the three channels, as shown in Eq. (6), where ch corresponds to the different channels of the image:

R_{ij} = max_{ch} |w_{ij}(ch)|     (6)

These gradients can also be used to perform a type of sensitivity analysis. The magnitude of the calculated derivatives indicates the input pixels to which the output classification score is most sensitive: a large gradient value corresponds to pixels that need to be changed the least to affect the final class score the most.
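A minimal PyTorch sketch of this gradient saliency computation, assuming a trained classifier `model` and a preprocessed 3-channel input tensor; the max over channels follows Eq. (6).

```python
import torch

def gradient_saliency(model, image, target_class):
    """Backpropagate the class score to the input and take the per-pixel
    maximum absolute gradient over the colour channels (Eq. (6))."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)  # 1 x 3 x H x W
    score = model(x)[0, target_class]                    # S_c(x_0), before softmax
    score.backward()                                     # single backprop pass to the input
    weights = x.grad[0]                                  # 3 x H x W gradient "weights"
    return weights.abs().max(dim=0).values               # H x W relevance map
```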

Simonyan et al. [40] have also shown that gradient backpropagation is a generalization of the DeconvNet (Sec. 4.1). Indeed, this can be shown by comparing the three operations that the DeconvNet performs with the gradient calculation.

  • Unpooling: During basic backpropagation at a max-pooling layer the gradients are backpropagated to only those positions that had the max values during the forward pass. This is exactly the same operation that is achieved by the use of the Max location switches matrix in the DeconvNet, see Sec. 4.1.

  • Rectification: For a CNN, if the output of a convolution layer is Y, the application of the ReLU activation is performed as Z = max(0, Y), where Z is then the input of the next layer in the network. During gradient backpropagation, the rectification applied to the gradient map is based on the forward input, i.e. the gradient is kept only at those positions where Y > 0. In the DeconvNet, on the other hand, the rectification is applied to the unpooled maps themselves and hence corresponds to the condition that the backpropagated quantity is positive. Figures 5(b) and 5(c) show the difference in the calculation of the two maps resulting from this difference in the operations.

  • Filtering: As shown in Appendix A, the vertically and horizontally flipped filter used during the filtering step of the DeconvNet corresponds to the gradient calculation of the convolution with respect to its input. This is the same step that the gradient backpropagation method performs, and hence this step is equivalent for the two methods.

Except for the rectification step, the two methods are equivalent in their calculations and therefore, the gradient backpropagation method can be seen as a generalization of the DeconvNet.

4.3 Guided Backpropagation

Computing a saliency map based on gradients gives an idea of the various input features (pixels) that have contributed to the neuron responses in the output layer of the network. The primary idea proposed by Springenberg et al. [43] is to prevent the backpropagation of the negative gradients found in the deconvolution approach, as they decrease the activation of the higher-layer unit we aim to visualize. This is achieved by combining the rectification operation performed by the DeconvNet with that of gradient backpropagation. As shown in Fig. 5(d), guided backpropagation restricts the flow of gradients that have a negative value during backpropagation, as well as of gradients at positions that had a negative value during the forward pass. This nullification of negative gradient values is called the guidance. Using the guidance step results in sharper visualizations of the descriptive regions in the input image [43]. Figure 6 shows the heat maps generated by the gradient backpropagation and guided backpropagation methods for a network with the ResNet34 architecture [28]. It can be seen that the guidance step reduces the number of pixels that receive a high importance score and hence produces a slightly sharper visualization.
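A sketch of the guidance step using PyTorch backward hooks on the ReLU modules of an assumed model; clamping the incoming gradient to be non-negative removes the negative gradient flow, while the forward-pass condition is already enforced by the ReLU derivative itself. Combined with the gradient saliency sketch above, this yields guided backpropagation maps.

```python
import torch
import torch.nn as nn

def add_guidance_hooks(model):
    """Register hooks so that every ReLU passes back only positive gradients."""
    def clamp_negative_grads(module, grad_input, grad_output):
        # keep only positive gradients flowing through the ReLU (the "guidance" step)
        return (torch.clamp(grad_input[0], min=0.0),)
    handles = []
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            handles.append(module.register_full_backward_hook(clamp_negative_grads))
    return handles  # call h.remove() on each handle to restore normal backprop
```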

Figure 5: The ReLU operation during (a) Forward Pass (b) Backpropagation (c) Backpropagation with DeconvNet (d) Guided Backpropagation

4.4 Integrated Gradients

Sundararajan et al. [45] proposed to calculate the saliency map as an integration of the gradients over a set of images created by transforming a baseline image x' into the input image x. They propose a black image as the baseline x', and the series of images is produced by the linear transformation x' + α(x − x'), with α ∈ [0, 1]. Thus, if we denote by x_i the value of the i-th feature of x, Eq. (7) shows the calculation of the integrated gradients for the network with output classification score S_c for the class c. This forms the map of relevance scores with a corresponding score for each input pixel.

IG_i(x) = (x_i − x'_i) · ∫_0^1 [∂S_c(x' + α(x − x')) / ∂x_i] dα     (7)

The parameter α varies in [0, 1], and the term within the partial derivative goes from the baseline image x' to the final input x as we integrate over α, as shown in Fig. 7. In practice, the integral is approximated by a summation over a fixed number of samples, i.e. a Riemann approximation.
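A sketch of this Riemann approximation of Eq. (7), assuming the same generic `model` and input tensor as in the previous sketches; the number of steps is an illustrative choice.

```python
import torch

def integrated_gradients(model, image, target_class, steps=50):
    """Riemann approximation of Eq. (7) with a black baseline image."""
    model.eval()
    baseline = torch.zeros_like(image)                   # black baseline x'
    total_grad = torch.zeros_like(image)
    for k in range(1, steps + 1):
        alpha = k / steps
        x = (baseline + alpha * (image - baseline)).unsqueeze(0).requires_grad_(True)
        score = model(x)[0, target_class]
        score.backward()
        total_grad += x.grad[0]
    avg_grad = total_grad / steps
    return (image - baseline) * avg_grad                 # per-pixel attribution
```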

Figure 6: Samples showing the saliency maps for (a) the sample image, given by (b) gradient backpropagation and (c) guided backpropagation. Sample taken from the MexCulture Architectural Styles dataset [31]
Figure 7: Transformation of the baseline image for integrated gradients for 7 values of α. Image taken from the ImageNet database [11]

The authors observed that if the pixel values of the image changed slightly, such that visually the image did not appear to have changed, the gradients calculated by gradient backpropagation showed large fluctuations. For a small amount of noise added to the image, the visualization produced by gradient backpropagation differed from that of the original image. They argue that, because integrated gradients average over a set of images, the final relevance maps are less sensitive to these fluctuating gradient values than those of the other gradient-based methods.

4.5 SmoothGrad

An alternative method to circumvent the issue of noisy saliency maps, called SmoothGrad, was proposed by Smilkov et al. [42]. The idea of this method is to obtain a smoother map with sharper visualizations by averaging over multiple noisy maps. To achieve this, the authors propose to add small noise vectors sampled from a Gaussian distribution N(0, σ²), where σ is the standard deviation, to the input image x. Thus, they create n samples of the input image with a small amount of noise added to its pixels. The relevance score maps are calculated for each of these images, and the average of these maps gives the final relevance score map for the image, as shown in Eq. (8). SmoothGrad is not a standalone method; rather, it can be used as an extension of other gradient-based methods to reduce the visual noise of the saliency maps. The authors observe that adding about 10% to 20% noise to the sampled images produced sharper maps. The noise level was chosen such that σ/(x_max − x_min) lies in this range, where x_max and x_min refer to the maximum and minimum pixel values of the image.

R̂(x) = (1/n) Σ_{k=1}^{n} R(x + N(0, σ²))     (8)
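A sketch of Eq. (8) using the plain gradient saliency as the base map; any other gradient-based map could be substituted, and the number of samples and noise fraction are illustrative.

```python
import torch

def smoothgrad(model, image, target_class, n=25, noise_frac=0.15):
    """Average gradient saliency maps over n noisy copies of the input (Eq. (8)).
    `noise_frac` sets sigma as a fraction of the image value range."""
    sigma = noise_frac * (image.max() - image.min())
    accumulated = torch.zeros_like(image)
    for _ in range(n):
        noisy = (image + sigma * torch.randn_like(image)).unsqueeze(0).requires_grad_(True)
        score = model(noisy)[0, target_class]
        score.backward()
        accumulated += noisy.grad[0]
    return (accumulated / n).abs().max(dim=0).values     # H x W smoothed map
```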

4.6 Gradient-Class Activation Mapping (Grad-CAM)

Gradient Class Activation Mapping (Grad-CAM) [37] is a post-hoc explanation method that visualizes the class-discriminative activations of a network. Similar to the gradient-based methods, Grad-CAM leverages the structure of the CNN to produce a heat map of the input image pixels that contribute to the prediction of a particular class.

A key observation that Grad-CAM relies on is that the deeper convolutional layers of a CNN act as high-level feature extractors [4]. The feature maps of the last convolution layer of the network therefore contain the structural spatial information of the objects in the image. Hence, instead of propagating the gradient all the way to the input layer like the other gradient-based methods, Grad-CAM propagates it from the output only to the last convolutional layer of the network.

The feature maps of the last convolution layer cannot be used directly, as they contain information regarding all the classes present in the dataset. Assuming that the last convolution layer of the network has K feature maps A^k, the Grad-CAM method determines an importance value of each map with respect to the class c predicted by the network. This value is calculated as the global average pooling of the gradient of the classification score S_c with respect to the activation values of that feature map. As shown in Eq. (9), α_k^c is the importance value of the k-th feature map, and there are K such weights to compute:

α_k^c = (1/Z) Σ_i Σ_j ∂S_c / ∂A^k_{ij}     (9)

Here Z = u × v, where u and v correspond to the height and width dimensions of each feature map.

The weights α_k^c are then used to weight the corresponding feature maps, and the weighted maps are combined. This gives the relevance score map, and a ReLU (rectification) is applied to this map, see Eq. (10), to nullify the negative features and retain only the values that have a positive influence. At this stage, the relevance map L^c is a 2-D map with the same spatial dimension as the feature maps of the last convolution layer. To obtain a correspondence with the input image x, L^c is upsampled to the spatial dimension of x using interpolation methods and scaled to the interval [0, 1] to visualize the final heat map.

L^c = ReLU( Σ_k α_k^c A^k )     (10)

Grad-CAM is a generalization of the CAM method previously proposed by Zhu et al. [53], which requires squeezing the feature maps of the last conv layer by average pooling to form the input of the FC layer of the network. Grad-CAM, on the contrary, can be applied to all deep CNN architectures.
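A sketch of Eqs. (9) and (10) using forward and backward hooks; `last_conv` stands for the last convolutional module of an assumed PyTorch model and is passed in by the user.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, last_conv, image, target_class):
    """Grad-CAM sketch: weight the last-layer feature maps by the global-average-
    pooled gradients (Eq. (9)) and combine them under a ReLU (Eq. (10))."""
    feats, grads = {}, {}
    h1 = last_conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    h1.remove(); h2.remove()
    alpha = grads["g"].mean(dim=(2, 3), keepdim=True)            # (1, K, 1, 1) weights
    cam = F.relu((alpha * feats["a"]).sum(dim=1, keepdim=True))  # weighted combination + ReLU
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    cam = cam[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-12)   # scaled to [0, 1]
```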

4.6.1 Guided Grad-CAM

The heat maps produced by Grad-CAM are coarse, unlike those of the other gradient-based methods [37]. As the feature maps of the last convolutional layer have a smaller resolution than the input image x, Grad-CAM maps lack the fine-grained details generally seen in other gradient-based methods. To refine the maps, a variant called Guided Grad-CAM has been proposed, which combines Grad-CAM and guided backpropagation through an element-wise multiplication of the two maps. The heat map obtained by this operation has been observed to have a higher resolution [37]. We illustrate maps obtained by Grad-CAM and Guided Grad-CAM in Fig. 8.

Figure 8: Samples showing the saliency maps for (a) the sample image by (b) Grad-CAM and (c) Guided Grad-CAM. Image taken from the MexCulture Architectural Styles dataset [31]

5 Methods based on network structure

This category of methods integrates the architecture of the network while explaining the output. Starting from an output neuron, they employ different local redistribution rules to propagate the prediction to the input layer to obtain the relevance score maps. In this section, we present the details of the methods that belong to this category and differ in the rules that they use for the redistribution process.

5.1 Layer-wise Relevance Propagation (LRP)

Layer-wise Relevance Propagation (LRP) is an explanation method proposed by Bach et al. [2] that explains the decision of a network for a particular image by redistributing the classification score S_c for a class c backwards through the network. The method does not use gradient calculations but defines the activation of the output neuron (either the predicted class or another class under consideration) as the relevance value, together with a set of local rules for the redistribution of this relevance score backwards to the input, layer by layer. The first rule they propose is that of relevance conservation. Let the neurons in the different layers of the network be denoted by i, j, k, etc., and let S_c(x) be the classification score of the input image x for the class c. Then, according to the relevance conservation rule, the sum of the relevance scores of all the neurons in each layer is a constant and equals S_c(x), as shown in Eq. (11):

Σ_i R_i = … = Σ_j R_j = Σ_k R_k = … = S_c(x)     (11)

Let l and l+1 be two consecutive layers in the network, and let j, k denote neurons belonging to these layers respectively. The relevance of neuron k in layer l+1 is written as R_k. If neuron j is connected to neuron k, then it is assigned a share of the relevance R_k weighted by the activation a_j of the neuron and the weight w_{jk} of the connection between the two neurons. Similarly, neuron j receives a relevance value from all the neurons that it is connected to in the next layer l+1. The sum of all the relevance contributions that neuron j receives from the neurons of the next layer is the final relevance value R_j assigned to neuron j, as shown in Eq. (12). The denominator term in Eq. (12) is the normalization value that ensures the relevance conservation rule of Eq. (11). This rule is termed the LRP-0 rule [26]:

R_j = Σ_k [ a_j w_{jk} / Σ_{0,j} a_j w_{jk} ] R_k     (12)

In this equation, the summation in the denominator runs over all the neurons of the lower layer, including the bias neuron of the network. The activation of the bias neuron is taken as a_0 = 1 and the weight of its connection is denoted as w_{0k} = b_k. For the relevance propagation, the bias neuron is considered only in this term and is not assigned relevance elsewhere. Note that the authors propose these rules only for the specific case of rectifier networks, i.e. networks with ReLU as the non-linearity. The relevance of the output neuron is taken as its activation before the Softmax layer.

Similarly, there exist a few other rules that improve on the LRP-0 rule for the propagation of relevance, as presented in the following list. A minimal code sketch of the LRP-0 and LRP-ε propagation steps is given after the list.

  • Epsilon rule (LRP-ε): To improve the stability of the LRP-0 rule, a small positive term ε is added to the denominator, as shown in Eq. (13). The ε term also reduces the flow of relevance if the activation of the neuron is very small or the connection between the two neurons is weak. If the value of ε is increased, it helps ensure that only the stronger connections receive the redistributed relevance [26].

    R_j = Σ_k [ a_j w_{jk} / (ε + Σ_{0,j} a_j w_{jk}) ] R_k     (13)

  • Gamma rule (LRP-γ): The parameter γ was introduced to boost the contributions of the connections with positive weights, as shown in Eq. (14). The function (·)^+ = max(0, ·), so the neurons with a positive weight connection receive a higher relevance share during propagation.

    R_j = Σ_k [ a_j (w_{jk} + γ w_{jk}^+) / Σ_{0,j} a_j (w_{jk} + γ w_{jk}^+) ] R_k     (14)

  • LRP-αβ rule: Two parameters α and β are used to control separately the positive and negative contributions to the relevance propagation. Here (a_j w_{jk})^+ = max(0, a_j w_{jk}) and (a_j w_{jk})^- = min(0, a_j w_{jk}), and the parameters are constrained by the rule α − β = 1.

    R_j = Σ_k [ α (a_j w_{jk})^+ / Σ_{0,j} (a_j w_{jk})^+  −  β (a_j w_{jk})^- / Σ_{0,j} (a_j w_{jk})^- ] R_k     (15)
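As a concrete illustration, the following sketch applies the LRP-0 / LRP-ε redistribution of Eqs. (12)-(13) for a single fully connected layer; it is a minimal example under these notations, not a full layer-by-layer implementation.

```python
import torch

def lrp_linear(a, w, b, relevance_upper, eps=1e-6):
    """Redistribute relevance from layer l+1 back to layer l for one FC layer.
    a: (J,) activations of the lower layer, w: (J, K) weights, b: (K,) biases,
    relevance_upper: (K,) relevance of the upper-layer neurons."""
    z = a @ w + b                      # denominator terms, including the bias "neuron"
    z = z + eps * torch.sign(z)        # LRP-epsilon stabilisation (eps = 0 gives LRP-0)
    s = relevance_upper / z            # share of relevance per upper-layer neuron
    return a * (w @ s)                 # relevance R_j assigned to each lower-layer neuron
```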

5.1.1 LRP as Deep Taylor Decomposition (DTD)

Montavon et al. [27] propose a framework that connects a rule-based method like LRP with the Taylor decomposition method, as a way to theoretically justify the choice of the relevance propagation rules [2]. They propose a method called Deep Taylor Decomposition (DTD), which treats LRP as consecutive Taylor expansions applied locally at each layer and neuron. The main idea DTD uses is that a deep network can be written as a set of subfunctions that relate the neurons of two consecutive layers. Instead of treating the whole network as a single function f, DTD expresses LRP as a series of mappings from the activations a = (a_j) of the neurons at a layer l to the relevance R_k of a neuron k in layer l+1.

The Taylor expansion of the relevance score R_k can be expressed as a function of the activations at some root point ã in the space of activations, as shown in Eq. (16):

R_k(a) = R_k(ã) + Σ_j (a_j − ã_j) · [∂R_k/∂a_j]_{ã} + higher-order terms     (16)

The first-order term in this expansion can be used to determine how much of the relevance is redistributed to the neurons of the lower layer. The main challenges in the computation of the Taylor expansion are, in this case, finding an appropriate root point ã and computing the local gradients.

To compute the function R_k(a), the authors propose to substitute it with a relevance model that is simpler to analyze. From the relevance propagation rules of LRP (Sec. 5.1), it can be seen that the relevance score of a neuron can be written as a function of its activations as R_k = a_k c_k, where, in the case of the LRP-0 rule of Eq. (12), the term c_k can be treated as approximately constant. As the LRP rules are described for deep rectifier networks, the relevance function is expressed based on the ReLU activation as:

R_k(a) = max(0, Σ_{0,j} a_j w_{jk}) · c_k     (17)

A Taylor expansion of this function gives:

R_k(a) = R_k(ã) + Σ_{0,j} (a_j − ã_j) w_{jk} c_k     (18)

Due to the linearity of the ReLU function on the domain of positive activations, the higher-order terms of the expansion are zero. The choice of the root point ensures that the zero-order term can be made small. The first-order term computation is straightforward and identifies how much of the relevance value should be redistributed to the neurons of the lower layer. The different LRP rules presented previously can be derived from Eq. (18) based on the choice of the root point ã. For instance, the LRP-0 rule shown in Eq. (12) can be derived by choosing the root point at the origin, ã = 0, and the LRP-ε rule of Eq. (13) by a slightly different choice of root point.

5.2 Deep Learning Important FeaTures (DeepLIFT)

The primary idea of DeepLIFT, a method proposed by Shrikumar et al. [39], is similar to that of the LRP method explained in Sec. 5.1. The major difference between the two methods is that DeepLIFT establishes the importance of the neurons in each layer in terms of the difference between their response to the input and their response to a reference state. The reference state is either a default image or an image chosen based on domain-specific knowledge. The reference could be an image that has the specific property against which the differences in the explanations are meant to be calculated. For example, it could be a black image in the case of the MNIST dataset, as the backgrounds of the images in that dataset are all black. DeepLIFT aims to explain the difference between the output produced by the input image and the output of the reference state, based on the difference between the input image and the chosen reference image.

For the output classification score S_c(x) of the input image x and the output score S_c(x^0) of the reference state x^0, the difference term is defined as Δt = S_c(x) − S_c(x^0). For a neuron i belonging to a layer, the relevance (contribution) is denoted by C_{Δx_i Δt}, and Δx_i denotes the difference between the activations of neuron i for the input image and for the reference state. Similar to the LRP method, DeepLIFT has the summation-to-delta rule, where the sum of the contributions of the neurons at each layer is constant and equal to Δt, as shown in Eq. (19):

Σ_i C_{Δx_i Δt} = Δt     (19)

In order to explain the propagation rules, the authors define a term called the multiplier, m_{Δx Δt}, which is the contribution of the difference Δx between the reference and input activations to the difference Δt in the output prediction, divided by Δx, as shown in Eq. (20):

m_{Δx Δt} = C_{Δx Δt} / Δx     (20)

The multiplier is a term similar to a partial derivative but defined over finite differences [39]. The authors also define the chain rule for multipliers, similar to the chain rule used with derivatives, as shown in Eq. (21), where y_j denotes the neurons of the intermediate layers between the neurons x_i and the output t:

m_{Δx_i Δt} = Σ_j m_{Δx_i Δy_j} · m_{Δy_j Δt}     (21)

Similar to LRP, the authors also separate the relevance values into positive and negative terms, as they can then be treated differently if required. For each neuron x_i, the two terms Δx_i^+ and Δx_i^- are the positive and negative components respectively. These components are found by grouping the positive and negative terms that contribute to the calculation of Δx_i. Based on this idea, the difference in the neuron activations between the input image and the reference state, and the relevance contribution, can be written as shown in Eq. (22):

Δx_i = Δx_i^+ + Δx_i^-,    C_{Δx_i Δt} = C_{Δx_i^+ Δt} + C_{Δx_i^- Δt}     (22)

Using these terms, DeepLIFT proposes three rules that can be applied to the different layers of a network to propagate the relevance from the output to the input layer. A minimal code sketch of the Linear and Rescale rules is given after this list.

  • Linear Rule: The linear rule is applied to the FC and convolution layers (not to the non-linearity layers). Considering the function y = Σ_i w_i x_i + b, where y is the activation of the neuron in the next layer, the x_i are the activations of the neurons of the previous layer and the w_i are the weights of the connections, we have Δy = Σ_i w_i Δx_i, based on the difference with the activations of the reference-state neurons. The relevance contribution is then written as C_{Δx_i Δy} = w_i Δx_i. The multiplier in this case is given by m_{Δx_i Δy} = w_i.

  • Rescale Rule: The Rescale rule is applied to layers with non-linearities like the ReLU. Consider the neuron y to be the non-linear transformation of x, i.e. y = f(x); in the case of the ReLU function (Eq. (2)), y = max(0, x). Considering the summation-to-delta property, the relevance contribution is C_{Δx Δy} = Δy, as there is only one input x. Hence, the multiplier in this case is m_{Δx Δy} = Δy / Δx.

  • RevealCancel Rule: The RevealCancel rule treats the positive and negative contributions to the relevance values separately. The impacts of the positive and negative components of Δx, given as Δx^+ and Δx^-, on the components of Δy, given by Δy^+ and Δy^-, are calculated separately. Instead of a straightforward calculation, the value of Δy^+ is computed as the average of two terms. The first term is the impact of adding only Δx^+ on the output of the non-linearity f. With x^0 as the value of the reference state at that neuron, the impact of Δx^+ is calculated by comparing the difference in the function value when it is included on top of x^0, and is given as f(x^0 + Δx^+) − f(x^0). The second term computes the impact of Δx^+ after the negative term has been included, i.e. it measures the impact of Δx^+ when both the reference and the negative term are present. This computation is shown in Eq. (23):

    Δy^+ = ½ [ f(x^0 + Δx^+) − f(x^0) ] + ½ [ f(x^0 + Δx^- + Δx^+) − f(x^0 + Δx^-) ]     (23)

    Similarly, for the calculation of Δy^-, the individual term is first computed in the absence of the positive term, and then another term with the inclusion of Δx^+ is added to obtain the total impact, as shown in Eq. (24):

    Δy^- = ½ [ f(x^0 + Δx^-) − f(x^0) ] + ½ [ f(x^0 + Δx^+ + Δx^-) − f(x^0 + Δx^+) ]     (24)

    Thus, the two multipliers computed with this rule are as shown in Eq. (25), where Δy^+ and Δy^- are calculated using Eqs. (23) and (24), and Δx^+ and Δx^- correspond to the sums of the positive and negative terms of Δx:

    m_{Δx^+ Δy} = Δy^+ / Δx^+,    m_{Δx^- Δy} = Δy^- / Δx^-     (25)

    The authors propose that the relevance scores assigned to Δx^+ and Δx^- are then distributed to the input features using the Linear Rule.
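A minimal sketch, assuming a single linear unit followed by a ReLU, of how the Linear and Rescale rules turn activation differences into contributions; it ignores the positive/negative separation of the RevealCancel rule, and all names are illustrative.

```python
import torch

def deeplift_linear_rescale(x, x_ref, w, b):
    """Contributions of the inputs to the ReLU output of one linear unit, using
    the Linear rule (multiplier = w) and the Rescale rule (multiplier = dy/dz)."""
    y, y_ref = torch.relu(x @ w + b), torch.relu(x_ref @ w + b)
    dx = x - x_ref                              # input differences from the reference
    dz = (x - x_ref) @ w                        # pre-activation difference (Linear rule)
    dy = y - y_ref                              # output difference
    rescale = dy / (dz + 1e-12)                 # Rescale rule multiplier for the ReLU
    return w * dx * rescale                     # per-input contribution; sums to dy
```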

5.3 Feature based Explanation Method (FEM)

Feature based Explanation Method (FEM) proposed by Fuad et al.[14], similar to Grad-CAM, employs the observation that deeper convolutional layers of the network act as high-level feature extractors.

Let us consider a CNN that comprises a single Gaussian filter at each convolution layer. Then the consecutive convolutions of the input image with the Gaussian filters, followed by pooling (downsampling), would be the same operations as those performed to create a multi-resolution Gaussian pyramid. In this pyramid, the image at the last level retains only the spatial information of the main objects present in the image. In a standard CNN, the learned filters at the deeper convolution layers behave similarly to high-pass, i.e. derivative, filters applied on top of a Gaussian pyramid (some examples are given in [52]). This implies that the information contained in the feature maps of the last convolution layer corresponds to the main object that the network has detected in the given input image x.

Hence, FEM proposes that the contribution of the input pixels to the network decision can be directly inferred from the features detected at the last conv layer of the network. FEM further proposes that the final decision is influenced by the strong features of the maps of the last conv layer. FEM supposes that the feature maps of the last convolutional layer follow a Gaussian distribution; in this case, the strong features of these maps correspond to the rare features. The authors propose a K-sigma filtering rule to identify these rare and strong features. Each feature map A^k of the last layer is thresholded with the K-sigma rule to create a binary map B^k of the same spatial dimensions, as shown in Eq. (26). The mean μ_k and standard deviation σ_k are calculated for each feature map, followed by the thresholding that creates the binary maps B^k. K is the parameter that controls the threshold value and is set to 1 by the authors in their work.

B^k(i, j) = 1 if A^k(i, j) ≥ μ_k + K σ_k, and 0 otherwise     (26)

The hyperparameters of a DNN, such as the number of filters to train at each layer, are often set arbitrarily, as hyperparameter optimization is a heavy computational problem. Hence, channel attention mechanisms have been proposed for DNNs [21] to improve classification accuracy. These mechanisms select important feature channels (maps) in an end-to-end training process. Inspired by these models, the authors hypothesize that not all feature maps are equally important for the classification, where importance is understood as the magnitude of the positive features in the channels. Hence, a weight term equal to the mean of the initial feature map is assigned to each binary map B^k. The importance map is computed as the linear combination of all the weighted binary maps and then normalized to the interval [0, 1]. Finally, the importance map is upsampled to the spatial resolution of the input image x by interpolation.
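A compact sketch of this computation under the above notation, assuming the last-layer feature maps are given as a tensor of shape (K, H, W) taken after the ReLU and that `image_size` is the (height, width) of the input; the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def fem_map(feature_maps, image_size, k_sigma=1.0):
    """FEM sketch: K-sigma thresholding of the last conv-layer feature maps,
    channel weighting by the mean activation, and upsampling to the image size."""
    K, H, W = feature_maps.shape
    mu = feature_maps.view(K, -1).mean(dim=1)             # per-channel mean
    sigma = feature_maps.view(K, -1).std(dim=1)           # per-channel std
    thresh = (mu + k_sigma * sigma).view(K, 1, 1)
    binary = (feature_maps >= thresh).float()             # rare/strong features (Eq. (26))
    weighted = binary * mu.view(K, 1, 1)                  # weight each binary map by its channel mean
    importance = weighted.sum(dim=0)                      # linear combination over channels
    importance = importance / (importance.max() + 1e-12)  # normalize to [0, 1]
    return F.interpolate(importance[None, None], size=image_size,
                         mode="bilinear", align_corners=False)[0, 0]
```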

FEM eliminates the need to calculate gradients from the output neuron and provides a faster and simpler way to obtain an importance score for the input pixels based only on the features that have been extracted by the network. It does not examine the classification part of the network but uses only the feature extraction part of the CNN to explain the important input pixels that the network has extracted to produce its decision. The method is applicable both to 2D and 3D images or video, considered as a 2D+t volume. We now illustrate it on the problem of image classification on the ImageNet database performed with VGG16 [41]. We invite the reader to visually compare the heat maps presented in Fig. 9, obtained using different LRP rules [2] and FEM.

It can be seen from the figure that the LRP heat maps depend on the rule that is used. In this case, LRP-ε assigns equal weight to both positive and negative features, and in Fig. 9(b) most of the input pixels are assigned a high relevance score. LRP-ε without bias, shown in Fig. 9(c), results in a heat map whose importance is concentrated in a small region near the top of the tiger; without the added bias term, the relevance scores are not distributed properly to the other regions. The LRP-αβ rule with α = 1 and β = 0 only considers the features with a positive influence. As illustrated in the figure, this rule only highlights the contours with higher contrast in the image. Though FEM also considers only positive features (the features are taken after the ReLU), its heat map is more holistic and highlights the important regions of the image.

Figure 9: Explanation heat maps obtained for (a) the sample image by (b) LRP-ε, (c) LRP-ε ignoring the bias, (d) LRP-αβ with α = 1, β = 0, and (e) FEM

6 Methods based on Adversarial approach

Many recent works have used adversarial attacks on CNNs to demonstrate the susceptibility of the networks to simple perturbations that can lead them to make completely wrong predictions [46]. Different adversarial attacks on the network can be used to interpret its behaviour [47]. Sample images that produce adversarial results give hints about the behaviour of the network [34, 22]. For example, the one-pixel attack proposed by Su et al. [44] showed that the network can produce a completely wrong prediction when just one pixel of the input image is changed. By studying these adversarial attacks, we can interpret the regions of the image on which the network focuses to make a decision.

In addition to the adversarial-attack-based interpretation of the network, we highlight a recent adversarial-learning-based explanation method proposed by Charachon et al. [8] that uses a model based on a Generative Adversarial Network (GAN) [18]. A GAN is a type of network architecture with two components, i) a generator and ii) a discriminator, that simultaneously trains the generator (G) to learn the data distribution and the discriminator (D) to estimate whether a sample belongs to the dataset or has been generated by G.

For the case of a binary classification task on medical images, the authors use, along with their CNN classifier, two generator networks, i) a similar image generator g_s and ii) an adversarial image generator g_a, to produce explanations. The network g_s is trained to generate an image that receives the same output from the classifier as the input image x. The network g_a is trained to produce an image whose prediction is opposite to that of x (adversarial). Consequently, the authors propose that the difference between these two generated images forms the explanation of the output of the network. For a given image x, the explanation of the classifier is then given as shown in Eq. (27):

E(x) = | g_s(x) − g_a(x) |     (27)

The approach of using just one network to generate an adversarial image g_a(x) and taking the difference between the input image x and g_a(x) as the explanation was observed to produce noisy and non-intuitive features. To improve on this, the authors use the two-generator approach and train g_s and g_a to sample from the same space, so that they have minimal differences in their learnt parameters yet produce images with opposite classifications by the CNN.

7 Discussion

A saliency map is representative of what the network has learnt, and it is not guaranteed that the explanations match human intuition. However, it is observed that a network with a higher classification accuracy generally produces more intuitive maps [35]. A desirable property of every explanation map is that it is non-random and highlights only the relevant regions in the image and no more. Many of the methods use a qualitative assessment based on human inspection to evaluate and compare different maps. To evaluate the Grad-CAM maps, the authors conducted user surveys to find which maps and models the users find reliable. Although human assessments of the quality of these maps are useful, they are time-consuming, can introduce bias and can lead to inaccurate evaluations [6]. Another way to compare the generated explanation/saliency maps is to use the different metrics proposed in the long-standing research on the prediction of visual attention in images and video [7]. For FEM [14], the authors compare the saliency maps obtained by FEM with those of gradient-based methods. They show that the most similar explanations, in terms of the usual metrics for comparing saliency maps such as the Pearson Correlation Coefficient and similarity, are given by the Grad-CAM method presented in Sec. 4.6.

Also, methods based on the computation of gradients, like gradient backpropagation, guided backpropagation and DeconvNet, suffer from gradient shattering [3]: as the depth of the network increases, the gradients progressively resemble white noise. This causes the importance scores to have high-frequency variations and makes them highly sensitive to small variations in the input. Galli et al. [15] apply adversarial perturbations to their input images using the fast gradient method [19] and DeepFool [29]. They compare the explanation maps of Guided Grad-CAM for the image and its perturbed variants using the Dice similarity coefficient. They observe that perturbations of the image strongly affect the saliency maps: the maps of the images before and after perturbation are different, even though they note that these differences are not easily perceived by a human viewer. LRP, DeepLIFT and FEM are not sensitive to this problem as they do not compute gradients.

Cruciani et al. [9] demonstrate the usefulness of LRP for visualizing relevant features in brain magnetic resonance imaging (MRI) for multiple sclerosis classification. It is observed that the LRP heat maps are sparse and not always intuitive, which is also illustrated by Fig. 9. It is therefore important to consider the user who will be using the maps when choosing a method to explain a network. For the domain of medical images, the specialist might require a map that provides holistic explanations in order to interpret and trust the network decision.

Computation times of the different methods also limit their application when dealing with larger sets of images. Fuad et al. [14] observe that gradient-based methods, including Grad-CAM, have longer computation times and that FEM is faster in comparison.

Selvaraju et al. use datasets with manually annotated bounding boxes on the objects in the image and compare the saliency map with the human annotation using Intersection over Union to determine the quality of their explanations. Muddamsetty et al. [30] also create a dataset comprising user saliency maps, in the form of eye-tracking data of medical experts, on retinal images. They compare the two saliency maps based on metrics like the Area Under the Curve (AUC) and the Kullback-Leibler Divergence (KL-Div) [7] and show that the generated saliency maps closely align with those of the human experts.

These are some of the first attempts that directly compare the maps generated by a method with those of a human expert to determine which method is best suited for a given user application. The methods presented in this paper focus on creating explanations aimed at humans, and involving human feedback and intuition is thus a necessity for the choice and evaluation of these methods.

8 Conclusion

In this paper, we proposed a topology of explanation methods comprising two types: black box and white box methods. We focused on white box methods as they can leverage the extra information available from the knowledge of the architecture of the network. Though there are multiple domains where different DNNs have been successful, we restricted our discussion to explanations of image classification tasks using CNNs, as the results have a more intuitive interpretation and extensive applications across different image modalities.

The methods we have discussed focus on explaining the decision for a single input image by creating saliency maps that attribute an importance score to each of its pixels based on its contribution to the final output. Multiple approaches have been used to calculate this contribution and, based on recent works, we proposed a categorization of methods to group similar approaches and compare their performance with one another. Based on our study, we can see that the choice of a method depends on the user for whom the explanation map is created. Evaluating the methods by simple human inspection might not always be the best way, but some form of user saliency must be considered to determine whether the explanation method represents human intuition and is easily interpretable.

9 Disclosures

The authors declare that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

10 Acknowledgement

This research was supported by University of Bordeaux/LaBRI.

Appendix A Filtering operation in DeconvNet and Gradient calculation

Figure 10: Convolution operation of a neuron in a CNN

Consider a convolution neuron as shown in Fig. 10, where the operation performed is O = Y * F, with Y the input, F the convolution layer filter and O the output. In a standard CNN, during backpropagation of the loss L, the neuron receives the partial gradient ∂L/∂O, and the gradients to be calculated are ∂L/∂Y and ∂L/∂F.

According to the chain rule, the calculation of the partial gradient is given by Eq. (28):

∂L/∂Y = ∂L/∂O · ∂O/∂Y     (28)

To go through a step-by-step calculation of these gradients, we suppose that the input Y is a 3 × 3 matrix, the filter F is a 2 × 2 matrix, and the convolution is performed with a stride of 1. Then the corresponding matrices and the loss gradient backpropagated from the following layer are as shown in Eq. (29):

Y = | y11 y12 y13 |     F = | f11 f12 |     ∂L/∂O = | ∂L/∂o11  ∂L/∂o12 |
    | y21 y22 y23 |         | f21 f22 |             | ∂L/∂o21  ∂L/∂o22 |
    | y31 y32 y33 |                                                       (29)

The calculation of the convolution during the forward pass yields the expressions of the outputs o_{ij} shown in Eq. (30):

o11 = y11 f11 + y12 f12 + y21 f21 + y22 f22
o12 = y12 f11 + y13 f12 + y22 f21 + y23 f22
o21 = y21 f11 + y22 f12 + y31 f21 + y32 f22
o22 = y22 f11 + y23 f12 + y32 f21 + y33 f22     (30)

So, to calculate the partial gradient of the loss w.r.t. the input, the first quantity needed is ∂o11/∂y11. A single calculation of this expression, based on Eqs. (30), is shown in Eq. (31); the rest of the terms can be calculated similarly:

∂o11/∂y11 = f11     (31)

Subsequently, the partial gradient of the loss w.r.t. the input is given by Eqs. (32):

∂L/∂y11 = ∂L/∂o11 f11
∂L/∂y12 = ∂L/∂o11 f12 + ∂L/∂o12 f11
∂L/∂y13 = ∂L/∂o12 f12
∂L/∂y21 = ∂L/∂o11 f21 + ∂L/∂o21 f11
∂L/∂y22 = ∂L/∂o11 f22 + ∂L/∂o12 f21 + ∂L/∂o21 f12 + ∂L/∂o22 f11
∂L/∂y23 = ∂L/∂o12 f22 + ∂L/∂o22 f12
∂L/∂y31 = ∂L/∂o21 f21
∂L/∂y32 = ∂L/∂o21 f22 + ∂L/∂o22 f21
∂L/∂y33 = ∂L/∂o22 f22     (32)

Thus, the partial gradient ∂L/∂Y, when calculated using the chain rule, can be written as a full convolution (where the loss gradient matrix ∂L/∂O is zero-padded to allow the full convolution operation) of the inverted filter, i.e. the filter matrix F flipped vertically and then horizontally, with the loss gradient matrix ∂L/∂O, as shown in Eq. (33):

∂L/∂Y = full_conv( flip(F), ∂L/∂O )     (33)

DeconvNet (Sec. 4.1) uses the same operation at the filtering step. Thus, the deconvolution step of the DeconvNet and the calculation of the gradient with respect to the input at a convolution layer are equivalent.
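The equivalence can also be checked numerically; the following sketch compares the autograd input gradient of a small convolution with the full convolution of the output gradient and the flipped filter, using the same 3 × 3 input and 2 × 2 filter shapes as the example above.

```python
import torch
import torch.nn.functional as F

# Verify: d(loss)/d(input) of a convolution equals the full convolution of the
# backpropagated gradient with the 180-degree flipped filter.
Y = torch.randn(1, 1, 3, 3, requires_grad=True)   # input
filt = torch.randn(1, 1, 2, 2)                    # convolution filter F
O = F.conv2d(Y, filt)                             # 2 x 2 output, stride 1
grad_O = torch.randn_like(O)                      # stand-in for dL/dO
O.backward(grad_O)

flipped = torch.flip(filt, dims=[2, 3])           # flip vertically and horizontally
full_conv = F.conv2d(F.pad(grad_O, (1, 1, 1, 1)), flipped)   # zero-pad for "full" mode
print(torch.allclose(Y.grad, full_conv, atol=1e-6))          # True
```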

References

  • [1] K. Aderghal, K. Afdel, J. Benois-Pineau, G. Catheline, A. D. N. Initiative, et al. (2020) Improving Alzheimer's stage categorization with convolutional neural network using transfer learning and different magnetic resonance imaging modalities. Heliyon 6 (12), pp. e05652. Cited by: §1.
  • [2] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7), pp. 1–46. Cited by: §5.1.1, §5.1, §5.3.
  • [3] D. Balduzzi, M. Frean, L. Leary, J. Lewis, K. W. Ma, and B. McWilliams (2017) The shattered gradients problem: if resnets are the answer, then what is the question?. In Proceedings of the International Conference on Machine Learning, pp. 342–350. Cited by: §7.
  • [4] Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE Trans. on pattern analysis and machine intelligence 35 (8), pp. 1798–1828. Cited by: §4.6.
  • [5] O. Boz (2002) Extracting decision trees from trained neural networks. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002, pp. 456–461. External Links: Link, Document Cited by: §2.1.
  • [6] Z. Buçinca, P. Lin, K. Z. Gajos, and E. L. Glassman (2020) Proxy tasks and subjective measures can be misleading in evaluating explainable ai systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces, pp. 454–464. Cited by: §7.
  • [7] Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, and F. Durand (2018) What do different evaluation metrics tell us about saliency models?. IEEE Trans. on Pattern Analysis and Machine Intelligence 41 (3), pp. 740–757. Cited by: §7, §7.
  • [8] M. Charachon, C. Hudelot, P. Cournède, C. Ruppli, and R. Ardon (2020) Combining similarity and adversarial learning to generate visual explanation: application to medical image classification. arXiv:2012.07332. Note: To be published in ICPR 2020 Cited by: §6.
  • [9] F. Cruciani, L. Brusini, M. Zucchelli, G. R. Pinheiro, F. Setti, et al. (2021) Explainable 3d-cnn for multiple sclerosis patients stratification. In Proceedings of the ICPR 2020 Workshops Explainable Deep Learning-AI (EDL-AI), LNCS, Vol. 12663, pp. 103–114. External Links: Document, Link Cited by: §7.
  • [10] P. P. de San Roman, J. Benois-Pineau, J. Domenger, F. Paclet, D. Cattaert, and A. de Rugy (2017) Saliency driven object recognition in egocentric videos with deep CNN: toward application in assistance to neuroprostheses. Comput. Vis. Image Underst. 164, pp. 82–91. Cited by: §1.
  • [11] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. Cited by: Figure 1, Figure 4, Figure 7.
  • [12] R. Fong and A. Vedaldi (2019) Explanations for attributing deep neural network predictions. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K. Müller (Eds.), Lecture Notes in Computer Science, Vol. 11700, pp. 149–167. External Links: Link, Document Cited by: §2.1.
  • [13] N. Frosst and G. E. Hinton (2017) Distilling a neural network into a soft decision tree. In Comprehensibility and Explanation in AI and ML (CEX), AI*IA, CEUR Workshop Proceedings, Vol. 2071. External Links: Link Cited by: §2.1.
  • [14] K. A. A. Fuad, P. Martin, R. Giot, R. Bourqui, J. Benois-Pineau, and A. Zemmari (2020) Features understanding in 3d cnns for actions recognition in video. In Proceedings of the Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. Cited by: §5.3, §7, §7.
  • [15] A. Galli, S. Marrone, V. Moscato, and C. Sansone (2021) Reliability of explainable artificial intelligence in adversarial perturbation scenarios. In Proceedings of the ICPR 2020 Workshops Explainable Deep Learning-AI (EDL-AI), LNCS, Vol. 12663, pp. 243–256. External Links: Document, Link Cited by: §7.
  • [16] C. Garcia and M. Delakis (2004) Convolutional face finder: a neural architecture for fast and robust face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence 26 (11), pp. 1408–1423. Cited by: §1.
  • [17] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT press Cambridge. Note: ISBN: 9780262035613 Cited by: Figure 3, §3.1.
  • [18] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems, NeurIPS 2014, pp. 2672–2680. External Links: Link Cited by: §6.
  • [19] I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In Procedings of the 3rd International Conference on Learning Representations, ICLR, External Links: Link Cited by: §7.
  • [20] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi (2018) A survey of methods for explaining black box models. ACM computing surveys (CSUR) 51 (5), pp. 1–42. Cited by: §2.1.
  • [21] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132–7141. Cited by: §5.3.
  • [22] A. Ignatiev, N. Narodytska, and J. Marques-Silva (2019) On relating explanations and adversarial examples. In Proceedings of Advances in Neural Information Processing Systems, NeurIPS 2019, pp. 15857–15867. External Links: Link Cited by: §6.
  • [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25, pp. 1097–1105. Cited by: §3.1.
  • [24] S. Lapuschkin, S. Wäldchen, A. Binder, G. Montavon, W. Samek, and K. Müller (2019) Unmasking clever hans predictors and assessing what machines really learn. Nature communications 10 (1), pp. 1–8. Cited by: §1.
  • [25] P. Martin, J. Benois-Pineau, R. Péteri, and J. Morlier (2020) Fine grained sport action recognition with twin spatio-temporal convolutional neural networks. Multim. Tools Appl. 79 (27-28), pp. 20429–20447. Cited by: §1.
  • [26] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K. Müller (2019) Layer-wise relevance propagation: an overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K. Müller (Eds.), Lecture Notes in Computer Science, Vol. 11700, pp. 193–209. External Links: Link, Document Cited by: 1st item, §5.1.
  • [27] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. Müller (2017) Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition 65, pp. 211–222. Cited by: §5.1.1.
  • [28] A. Montoya Obeso, J. Benois-Pineau, M. S. García Vázquez, and A. Á. Ramírez Acosta (2019) Organizing cultural heritage with deep features. In Proceedings of the 1st Workshop on Structuring and Understanding of Multimedia Heritage Contents, pp. 55–59. Cited by: §4.3.
  • [29] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 2574–2582. Cited by: §7.
  • [30] S. M. Muddamsetty, M. N. S. Jahromi, and T. B. Moeslund (2021) Expert level evaluations for explainable ai (xai) methods in the medical domain. In Proceedings of the ICPR 2020 Workshops Explainable Deep Learning-AI (EDL-AI), LNCS, Vol. 12663, pp. 35–46. External Links: Document, Link Cited by: §7.
  • [31] A. M. Obeso, L. M. A. Reyes, M. L. Rodriguez, M. H. M. Cruz, M. S. G. Vázquez, J. Benois-Pineau, L. M. Z. Fuentes, E. C. Martinez, J. A. F. Secundino, J. L. R. Martinez, et al. (2016) Image annotation for mexican buildings database. In SPIE Optical Engineering+ Applications, Vol. 9970, pp. 99700Y–99700Y–8. Cited by: Figure 6, Figure 8.
  • [32] D. Petkovic, A. Alavi, D. Cai, and M. Wong (2021) Random forest model and sample explainer for non-experts in machine learning – two case studies. In Proceedings of the ICPR 2020 Workshops Explainable Deep Learning-AI (EDL-AI), LNCS, Vol. 12663, pp. 62–75. External Links: Document, Link Cited by: §2.
  • [33] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Why should i trust you? explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Cited by: §1, §2.1.
  • [34] A. Ross and F. Doshi-Velez (2018) Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. Cited by: §6.
  • [35] W. Samek, A. Binder, G. Montavon, S. Lapuschkin, and K. Müller (2016) Evaluating the visualization of what a deep neural network has learned. IEEE Trans. on neural networks and learning systems 28 (11), pp. 2660–2673. Cited by: §7.
  • [36] W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K. Müller (2019) Explainable ai: interpreting, explaining and visualizing deep learning. Vol. 11700, Springer Nature. Cited by: §2.
  • [37] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618–626. Cited by: §4.6.1, §4.6.
  • [38] J. Shang, Y. Liu, H. Zhou, and M. Wang (2021) Moving object properties-based video saliency detection. Journal of Electronic Imaging 30 (2), pp. 023005. Cited by: §1.
  • [39] A. Shrikumar, P. Greenside, and A. Kundaje (2017) Learning important features through propagating activation differences. In Proceedings of International Conference on Machine Learning, pp. 3145–3153. Cited by: §5.2, §5.2.
  • [40] K. Simonyan, A. Vedaldi, and A. Zisserman (2014) Deep inside convolutional networks: visualising image classification models and saliency maps. In Proceedings of 2nd International Conference on Learning Representations, ICLR 2014, Workshop Track Proceedings, External Links: Link Cited by: §4.2, §4.2.
  • [41] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, pp. 1–14. Cited by: §5.3.
  • [42] D. Smilkov, N. Thorat, B. Kim, F. B. Viégas, and M. Wattenberg (2017) SmoothGrad: removing noise by adding noise. CoRR abs/1706.03825, pp. 1–10. External Links: Link, 1706.03825 Cited by: §4.5.
  • [43] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller (2015) Striving for simplicity: the all convolutional net. In Proceedings of 3rd International Conference on Learning Representations, ICLR 2015, Workshop Track Proceedings, External Links: Link Cited by: §4.3.
  • [44] J. Su, D. V. Vargas, and K. Sakurai (2019) One pixel attack for fooling deep neural networks. IEEE Trans. on Evolutionary Computation 23 (5), pp. 828–841. Cited by: §6.
  • [45] M. Sundararajan, A. Taly, and Q. Yan (2017) Axiomatic attribution for deep networks. In Proceedings of International Conference on Machine Learning, pp. 3319–3328. Cited by: §4.4.
  • [46] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In ICLR (Poster), pp. 1–10. Cited by: §6.
  • [47] G. Tao, S. Ma, Y. Liu, and X. Zhang (2018) Attacks meet interpretability: attribute-steered detection of adversarial samples. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 7728–7739. Cited by: §6.
  • [48] Z. Wen, H. Wang, Y. Gong, and J. Wang (2021) Denoising convolutional neural network inspired via multi-layer convolutional sparse coding. Journal of Electronic Imaging 30 (2), pp. 023007. External Links: Document Cited by: §1.
  • [49] N. Wu, J. Phang, J. Park, Y. Shen, Z. Huang, M. Zorin, S. Jastrzebski, T. Févry, J. Katsnelson, E. Kim, et al. (2019) Deep neural networks improve radiologists’ performance in breast cancer screening. IEEE transactions on medical imaging 39 (4), pp. 1184–1194. Cited by: §1.
  • [50] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In Proceedings of the Annual Conference on Neural Information Processing Systems, NeurIPS 2014, pp. 3320–3328. External Links: Link Cited by: §2.2.
  • [51] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In Proceedings of European Conference on Computer Vision, pp. 818–833. Cited by: §2.1, §4.1.
  • [52] A. Zemmari and J. Benois-Pineau (2020) Deep learning in mining of visual content. Springer. Note: ISBN: 9783030343750 Cited by: §3.1, §5.3.
  • [53] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929. Cited by: §4.6.