[ICCV 2017] Torch code for Grad-CAM
We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insight into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model, and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention-based models can localize inputs. Finally, we conduct human studies to measure whether Grad-CAM explanations help users establish trust in predictions from deep networks, and show that Grad-CAM helps untrained users successfully discern a "stronger" deep network from a "weaker" one. Our code is available at https://github.com/ramprs/grad-cam. A demo and a video of the demo can be found at http://gradcam.cloudcv.org and youtu.be/COjUB9Izk6E.
Convolutional Neural Networks (CNNs) and other deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks, from image classification [27, 18] to object detection, semantic segmentation, image captioning [47, 7, 13, 23], and more recently, visual question answering [3, 15, 36, 41]. While these deep neural networks enable superior performance, their lack of decomposability into intuitive and understandable components makes them hard to interpret. Consequently, when today’s intelligent systems fail, they fail spectacularly, disgracefully, without warning or explanation, leaving a user staring at an incoherent output, wondering why.
Interpretability Matters. In order to build trust in intelligent systems and move towards their meaningful integration into our everyday lives, it is clear that we must build ‘transparent’ models that explain why they predict what they predict. Broadly speaking, this transparency is useful at three different stages of Artificial Intelligence (AI) evolution. First, when AI is significantly weaker than humans and not yet reliably ‘deployable’ (e.g. visual question answering), the goal of transparency and explanations is to identify failure modes [1, 19], thereby helping researchers focus their efforts on the most fruitful research directions. Second, when AI is on par with humans and reliably ‘deployable’ (e.g. image classification on a set of categories trained on sufficient data), the goal is to establish appropriate trust and confidence in users. Third, when AI is significantly stronger than humans (e.g. chess or Go), the goal of explanations is in machine teaching, i.e. a machine teaching a human how to make better decisions.
(b) Guided Backpropagation: highlights all contributing features. (c, f) Grad-CAM (ours): localizes class-discriminative regions. (d) Combining (b) and (c) gives Guided Grad-CAM, which gives high-resolution class-discriminative visualizations. Interestingly, the localizations achieved by our Grad-CAM technique (c) are very similar to results from occlusion sensitivity (e), while being orders of magnitude cheaper to compute. (f, l) are Grad-CAM visualizations for ResNet-18. Note that in (d, f, i, l), red regions correspond to a high score for the class, while in (e, k), blue corresponds to evidence for the class. Figure best viewed in color.
There typically exists a trade-off between accuracy and simplicity or interpretability. Classical rule-based or expert systems  are highly interpretable but not very accurate (or robust). Decomposable pipelines where each stage is hand-designed are thought to be more interpretable as each individual component assumes a natural intuitive explanation. By using deep models, we sacrifice interpretable modules for uninterpretable ones that achieve greater performance through greater abstraction (more layers) and tighter integration (end-to-end training). Recently introduced deep residual networks (ResNets)  are over 200-layers deep and have shown state-of-the-art performance in several challenging tasks. Such complexity makes these models hard to interpret. As such, deep models are beginning to explore the spectrum between interpretability and accuracy.
Zhou et al. recently proposed a technique called Class Activation Mapping (CAM) for identifying discriminative regions used by a restricted class of image classification CNNs which do not contain any fully-connected layers. In essence, this work trades off model complexity and performance for more transparency into the workings of the model. In contrast, we make existing state-of-the-art deep models interpretable without altering their architecture, thus avoiding the interpretability vs. accuracy trade-off. Our approach is a generalization of CAM and is applicable to a significantly broader range of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. VQA) or reinforcement learning.
What makes a good visual explanation? Consider image classification  – a ‘good’ visual explanation from the model justifying a predicted class should be (a) class-discriminative (localize the target category in the image) and (b) high-resolution (capture fine-grained detail).
Fig. 1 shows outputs from a number of visualizations for the ‘tiger cat’ class (top) and ‘boxer’ (dog) class (bottom). Pixel-space gradient visualizations such as Guided Backpropagation and Deconvolution are high-resolution and highlight fine-grained details in the image, but are not class-discriminative (Figs. 1b and 1h are very similar).
In contrast, localization approaches like CAM or our proposed method, Gradient-weighted Class Activation Mapping (Grad-CAM), are highly class-discriminative (the ‘cat’ explanation exclusively highlights the ‘cat’ regions but not the ‘dog’ regions in Fig. 1c, and vice versa in Fig. 1i).
In order to combine the best of both worlds, we show that it is possible to fuse existing pixel-space gradient visualizations with Grad-CAM to create Guided Grad-CAM visualizations that are both high-resolution and class-discriminative. As a result, important regions of the image which correspond to any decision of interest are visualized in high-resolution detail even if the image contains evidence for multiple possible concepts, as shown in Figures 1d and 1j. When visualized for ‘tiger cat’, Guided Grad-CAM not only highlights the cat regions, but also highlights the stripes on the cat, which is important for predicting that particular variety of cat.
To summarize, our contributions are as follows:
(1) We propose Grad-CAM, a class-discriminative localization technique that can generate visual explanations from any CNN-based network without requiring architectural changes or re-training. We evaluate Grad-CAM for localization (sec:localization), pointing (sec:pointing_game), and faithfulness to model (sec:occ), where it outperforms baselines.
(2) We apply Grad-CAM to existing top-performing classification, captioning (sec:nic), and VQA (sec:vqa) models. For image classification, our visualizations help identify dataset bias (sec:bias) and lend insight into failures of current CNNs (sec:diagnose), showing that seemingly unreasonable predictions have reasonable explanations. For captioning and VQA, our visualizations expose the somewhat surprising insight that common CNN + LSTM models are often good at localizing discriminative image regions despite not being trained on grounded image-text pairs.
(3) We visualize ResNets  applied to image classification and VQA (sec:vqa). Going from deep to shallow layers, the discriminative ability of Grad-CAM significantly reduces as we encounter layers with different output dimensionality.
(4) We conduct human studies (sec:human_evaluation) that show Guided Grad-CAM explanations are class-discriminative and not only help humans establish trust, but also help untrained users successfully discern a ‘stronger’ network from a ‘weaker’ one, even when both make identical predictions.
Our work draws on recent work in CNN visualizations, model trust assessment, and weakly-supervised localization.
Visualizing CNNs. A number of previous works [44, 46, 49, 14] have visualized CNN predictions by highlighting ‘important’ pixels (i.e. pixels whose change in intensity has the most impact on the prediction score). Specifically, Simonyan et al. visualize partial derivatives of predicted class scores w.r.t. pixel intensities, while Guided Backpropagation and Deconvolution make modifications to ‘raw’ gradients that result in qualitative improvements. These approaches are compared in . Despite producing fine-grained visualizations, these methods are not class-discriminative: visualizations with respect to different classes are nearly identical (see Figures 1b and 1h).
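Guided Backpropagation's gradient modification can be sketched concisely. The idea, as described above, is that in the backward pass through each ReLU, gradients are additionally zeroed wherever they are negative, on top of the ReLU's usual gating by its input. A minimal, illustrative PyTorch helper (not the method's original implementation):

```python
# Hedged sketch of Guided Backpropagation's ReLU gradient rule.
# For a ReLU, the standard backward pass already zeros gradients where the
# input was negative; guided backprop additionally clamps the resulting
# gradient to be non-negative. Helper name is illustrative.
import torch
import torch.nn as nn

def add_guided_relu_hooks(model):
    """Install guided-backprop hooks on every ReLU; returns removable handles."""
    def hook(module, grad_input, grad_output):
        # grad_input[0] is already grad_output * (input > 0); clamp it at 0
        return (torch.clamp(grad_input[0], min=0.0),)
    return [m.register_full_backward_hook(hook)
            for m in model.modules() if isinstance(m, nn.ReLU)]
```

With these hooks installed, backpropagating a class score to the input image produces the sharper, less noisy pixel-space visualization the text refers to; removing the handles restores ordinary gradients.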
Other visualization methods synthesize images to maximally activate a network unit [44, 12] or invert a latent representation [35, 11]. Although these can be high-resolution and class-discriminative, they visualize a model overall and not predictions for specific input images.
Assessing Model Trust. Motivated by notions of interpretability  and assessing trust in models , we evaluate Grad-CAM visualizations in a manner similar to  via human studies to show that they can be important tools for users to evaluate and place trust in automated systems.
Weakly supervised localization. Another relevant line of work is weakly supervised localization in the context of CNNs, where the task is to localize objects in images using only whole image class labels [8, 38, 39, 51].
Most relevant to our approach is the Class Activation Mapping (CAM) approach to localization. This approach modifies image classification CNN architectures, replacing fully-connected layers with convolutional layers and global average pooling, thus achieving class-specific feature maps. Others have investigated similar methods using global max pooling and log-sum-exp pooling.
A drawback of CAM is that it requires feature maps to directly precede softmax layers, so it is only applicable to a particular kind of CNN architecture that performs global average pooling over convolutional maps immediately prior to prediction (conv feature maps → global average pooling → softmax layer). Such architectures may achieve inferior accuracies compared to general networks on some tasks (e.g. image classification) or may simply be inapplicable to other tasks (e.g. image captioning or VQA). We introduce a new way of combining feature maps using the gradient signal that does not require any modification to the network architecture. This allows our approach to be applied to any CNN-based architecture, including those for image captioning and visual question answering. For a fully-convolutional architecture, Grad-CAM reduces to CAM. Thus, Grad-CAM is a generalization of CAM.
Other methods approach localization by classifying perturbations of the input image. Zeiler and Fergus perturb inputs by occluding patches and classifying the occluded image, typically resulting in lower classification scores for relevant objects when those objects are occluded. This principle is applied for localization in . Oquab et al. classify many patches containing a pixel, then average the patches' class-wise scores to provide the pixel's class-wise score. Unlike these, our approach achieves localization in one shot; it only requires a single forward and a partial backward pass per image and thus is typically an order of magnitude more efficient. In recent work, Zhang et al. introduce contrastive Marginal Winning Probability (c-MWP), a probabilistic Winner-Take-All formulation for modelling top-down attention in neural classification models, which can highlight discriminative regions. This is slower than Grad-CAM and, like CAM, it only works for image classification CNNs. Moreover, its quantitative and qualitative results are worse than those of Grad-CAM (see Sec. 4.1 and supplementary sec:localization).
Furthermore, convolutional features naturally retain spatial information which is lost in fully-connected layers, so we can expect the last convolutional layers to have the best compromise between high-level semantics and detailed spatial information. The neurons in these layers look for semantic class-specific information in the image (say, object parts). Grad-CAM uses the gradient information flowing into the last convolutional layer of the CNN to understand the importance of each neuron for a decision of interest. Although our technique is very generic and can be used to visualize any activation in a deep network, in this work we focus on explaining decisions the network can possibly make.
As shown in Fig. 2, in order to obtain the class-discriminative localization map Grad-CAM $L^c_{\text{Grad-CAM}} \in \mathbb{R}^{u \times v}$ of width $u$ and height $v$ for any class $c$, we first compute the gradient of the score for class $c$, $y^c$ (before the softmax), with respect to feature maps $A^k$ of a convolutional layer, i.e. $\frac{\partial y^c}{\partial A^k}$. These gradients flowing back are global-average-pooled to obtain the neuron importance weights $\alpha_k^c$:

$$\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k} \tag{1}$$

This weight $\alpha_k^c$ represents a partial linearization of the deep network downstream from $A$, and captures the ‘importance’ of feature map $k$ for a target class $c$.
We perform a weighted combination of forward activation maps, and follow it by a ReLU to obtain

$$L^c_{\text{Grad-CAM}} = \text{ReLU}\Big(\sum_k \alpha_k^c A^k\Big) \tag{2}$$
Notice that this results in a coarse heat-map of the same size as the convolutional feature maps ($14 \times 14$ in the case of the last convolutional layers of the VGG and AlexNet networks). We apply a ReLU to the linear combination of maps because we are only interested in the features that have a positive influence on the class of interest, i.e. pixels whose intensity should be increased in order to increase $y^c$. Negative pixels are likely to belong to other categories in the image. As expected, without this ReLU, localization maps sometimes highlight more than just the desired class and achieve lower localization performance. Figures 1c, 1f and 1i, 1l show Grad-CAM visualizations for ‘tiger cat’ and ‘boxer’ (dog) respectively. Ablation studies and more Grad-CAM visualizations can be found in the supplementary. In general, $y^c$ need not be the class score produced by an image classification CNN; it could be any differentiable activation, including words from a caption or the answer to a question.
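The two equations above translate directly into a few lines of autograd code. A minimal PyTorch sketch (the released code is in Torch/Lua; this helper and its hook-based structure are illustrative, not the authors' implementation):

```python
# Minimal Grad-CAM sketch in PyTorch: capture feature maps A^k and their
# gradients via hooks, global-average-pool the gradients to get alpha_k^c
# (Eq. 1), then ReLU the weighted combination (Eq. 2).
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, class_idx):
    """Return a coarse heat-map for `class_idx` from `conv_layer` activations."""
    acts, grads = {}, {}

    def fwd_hook(_, __, output):
        acts["A"] = output                 # feature maps A^k, shape (1, K, u, v)

    def bwd_hook(_, __, grad_out):
        grads["dA"] = grad_out[0]          # dy^c / dA^k

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        scores = model(image)              # pre-softmax scores y
        model.zero_grad()
        scores[0, class_idx].backward()    # backprop the single score y^c
    finally:
        h1.remove(); h2.remove()

    # alpha_k^c: global-average-pool the gradients over spatial dims (Eq. 1)
    alpha = grads["dA"].mean(dim=(2, 3), keepdim=True)     # (1, K, 1, 1)
    # ReLU over the weighted combination of forward activations (Eq. 2)
    cam = F.relu((alpha * acts["A"]).sum(dim=1))           # (1, u, v)
    cam = cam / (cam.max() + 1e-8)                         # normalize to [0, 1]
    return cam.squeeze(0).detach()
```

For a torchvision VGG-16, `conv_layer` would be the last convolutional module and `class_idx` the ImageNet class of interest; the returned map is then up-sampled to the input resolution for display.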
Grad-CAM as a generalization of CAM. Recall that CAM produces a localization map for an image classification CNN with a specific kind of architecture where global average pooled convolutional feature maps are fed directly into softmax. Specifically, let the penultimate layer produce $K$ feature maps, $A^k \in \mathbb{R}^{u \times v}$. These feature maps are then spatially pooled using Global Average Pooling (GAP) and linearly transformed to produce a score $y^c$ for each class $c$:

$$y^c = \sum_k w_k^c \; \frac{1}{Z} \sum_i \sum_j A_{ij}^k$$

To produce the localization map for modified image classification architectures such as the above, the order of summations can be interchanged to obtain $L^c_{\text{CAM}}$:

$$y^c = \frac{1}{Z} \sum_i \sum_j \; \underbrace{\sum_k w_k^c A_{ij}^k}_{L^c_{\text{CAM}}}$$
Note that this modification of architecture necessitates re-training, because not all architectures have weights $w_k^c$ connecting feature maps to outputs. When Grad-CAM is applied to these architectures, $\alpha_k^c = w_k^c$ (up to a proportionality constant), making Grad-CAM a strict generalization of CAM (see sec:sup_generalization for details).
The above generalization also allows us to generate visual explanations from CNN-based models that cascade convolutional layers with much more complex interactions. Indeed, we apply Grad-CAM to “beyond classification” tasks and models that utilize CNNs for image captioning and Visual Question Answering (VQA) (Sec. 8.2).
Guided Grad-CAM. While Grad-CAM visualizations are class-discriminative and localize relevant image regions well, they lack the ability to show fine-grained importance like pixel-space gradient visualization methods (Guided Backpropagation and Deconvolution). For example, in Figure 1c Grad-CAM can easily localize the cat region; however, it is unclear from the low resolution of the heat-map why the network predicts this particular instance as ‘tiger cat’. In order to combine the best aspects of both, we fuse Guided Backpropagation and Grad-CAM visualizations via point-wise multiplication ($L^c_{\text{Grad-CAM}}$ is first up-sampled to the input image resolution using bi-linear interpolation). fig:approach bottom-left illustrates this fusion. This visualization is both high-resolution (when the class of interest is ‘tiger cat’, it identifies important ‘tiger cat’ features like stripes, pointy ears and eyes) and class-discriminative (it shows the ‘tiger cat’ but not the ‘boxer’ (dog)). Replacing Guided Backpropagation with Deconvolution in the above gives similar results, but we found Deconvolution to have artifacts (and Guided Backpropagation visualizations were generally less noisy), so we chose Guided Backpropagation over Deconvolution.
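The fusion step described above is a simple up-sample-and-multiply. A self-contained NumPy sketch (the bilinear resize is a minimal stand-in for a library call; names are illustrative):

```python
# Sketch of the Guided Grad-CAM fusion: up-sample the coarse Grad-CAM map to
# the input resolution with bilinear interpolation, then fuse it with the
# Guided Backpropagation map by point-wise multiplication.
import numpy as np

def upsample_bilinear(cam, out_h, out_w):
    """Bilinearly resize a 2-D map (minimal implementation, no SciPy needed)."""
    in_h, in_w = cam.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = cam[np.ix_(y0, x0)] * (1 - wx) + cam[np.ix_(y0, x1)] * wx
    bot = cam[np.ix_(y1, x0)] * (1 - wx) + cam[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def guided_grad_cam(guided_backprop, cam):
    """Point-wise product of a pixel-space gradient map and an up-sampled CAM."""
    cam_up = upsample_bilinear(cam, *guided_backprop.shape[:2])
    if guided_backprop.ndim == 3:          # broadcast over RGB channels
        cam_up = cam_up[..., None]
    return guided_backprop * cam_up
```

The product inherits its resolution from the Guided Backpropagation map and its class-discriminativeness from the Grad-CAM mask.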
In this section, we evaluate the localization capability of Grad-CAM in the context of image classification. The ImageNet localization challenge
requires competing approaches to provide bounding boxes in addition to classification labels. As in classification, evaluation is performed for both the top-1 and top-5 predicted categories. Given an image, we first obtain class predictions from our network, then generate Grad-CAM maps for each of the predicted classes and binarize them with a threshold of 15% of the max intensity. This results in connected segments of pixels, and we draw our bounding box around the single largest segment.
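The binarize-and-box procedure above can be sketched in a few lines. A pure-NumPy version with a simple flood-fill connected-components pass (function name and the 4-connectivity choice are illustrative; the paper does not specify the component-labeling details):

```python
# Sketch of the localization step: binarize the Grad-CAM map at 15% of its
# max, find the largest connected segment, and return its bounding box.
import numpy as np
from collections import deque

def cam_to_bbox(cam, thresh_frac=0.15):
    """Return (x1, y1, x2, y2) around the largest 4-connected segment."""
    mask = cam >= thresh_frac * cam.max()
    labels = np.zeros(cam.shape, dtype=int)
    sizes, cur = {}, 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        cur += 1
        q = deque([(sy, sx)]); labels[sy, sx] = cur; sizes[cur] = 0
        while q:                                   # flood fill one segment
            y, x = q.popleft(); sizes[cur] += 1
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if 0 <= ny < cam.shape[0] and 0 <= nx < cam.shape[1] \
                        and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = cur; q.append((ny, nx))
    best = max(sizes, key=sizes.get)               # single largest segment
    ys, xs = np.nonzero(labels == best)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

In practice the Grad-CAM map would be up-sampled to image resolution before this step so the box is in image coordinates.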
We evaluate the pretrained off-the-shelf VGG-16 model from the Caffe Model Zoo. Following the ILSVRC-15 evaluation, we report both top-1 and top-5 localization error on the val set in Table 1. Grad-CAM localization errors are significantly lower than those achieved by c-MWP and by Simonyan et al. for the VGG-16 model, the latter of which uses GrabCut to post-process image-space gradients into heat maps. Grad-CAM also achieves better top-1 localization error than CAM, which requires a change in the model architecture, necessitates re-training, and thereby achieves worse classification errors (a 2.98% increase in top-1), whereas Grad-CAM makes no compromise on classification performance.
| Method | Top-1 loc error | Top-5 loc error | Top-1 cls error | Top-5 cls error |
|---|---|---|---|---|
| Backprop on VGG-16 | 61.12 | 51.46 | 30.38 | 10.89 |
| c-MWP on VGG-16 | 70.92 | 63.04 | 30.38 | 10.89 |
| Grad-CAM on VGG-16 (ours) | 56.51 | 46.41 | 30.38 | 10.89 |
| VGG-16-GAP (CAM) | 57.20 | 45.14 | 33.40 | 12.20 |
Weakly-supervised Segmentation. We use Grad-CAM localization as weak-supervision to train the segmentation architecture from SEC . We provide more details along with qualitative results in the supplementary sec:sup_segmentation.
Zhang et al. introduced the Pointing Game experiment to evaluate the discriminativeness of different attention maps for localizing target objects in scenes. Their evaluation protocol cues each competing visualization technique with the ground-truth object label, extracts the maximum point on the generated heatmap, and checks whether it lies in one of the annotated instances of the cued object category, counting a hit or a miss accordingly. The localization accuracy is then calculated as $Acc = \frac{\#Hits}{\#Hits + \#Misses}$. However, this evaluation only measures the precision aspect of the visualization technique. Hence we modify the protocol to also measure recall as follows. We compute the visualization for the top-5 class predictions from the CNN classifiers (we use the GoogLeNet CNN finetuned on COCO provided in ) and evaluate them using the pointing game setup, with the additional option that a visualization may reject any of the top-5 predictions from the model if the max value in the visualization is below a threshold; if the visualization correctly rejects a prediction that is absent from the ground-truth categories, that counts as a hit. We find that our approach, Grad-CAM, outperforms c-MWP by a significant margin (70.58% vs. 60.30%). Qualitative examples comparing c-MWP and Grad-CAM on COCO, ImageNet, and PASCAL categories can be found in supplementary sec:sup_pointing (c-MWP highlights arbitrary regions for predicted but non-existent categories, unlike Grad-CAM maps, which seem more reasonable).
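The standard (precision-only) Pointing Game scoring described above is simple to state in code. A minimal sketch, assuming heatmaps and boolean ground-truth instance masks of matching shape (names are illustrative):

```python
# Sketch of the Pointing Game protocol: a visualization scores a hit when its
# maximum point falls inside an annotated instance mask of the cued category.
import numpy as np

def pointing_game_accuracy(heatmaps, gt_masks):
    """heatmaps: list of 2-D maps; gt_masks: list of boolean masks (same shape).
    Returns Acc = #Hits / (#Hits + #Misses)."""
    hits = 0
    for hm, mask in zip(heatmaps, gt_masks):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)  # max point of the map
        hits += bool(mask[y, x])
    return hits / len(heatmaps)
```

The recall-aware variant from the paper adds a rejection option before this check (reject a cued class when the map's max value falls below a threshold), which slots in as a single extra comparison per heatmap.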
Our first human study evaluates the main premise of our approach: are Grad-CAM visualizations more class-discriminative than previous techniques? Having established that, we turn to understanding whether it can lead an end user to trust the visualized models appropriately. For these experiments, we compare VGG-16 and AlexNet CNNs finetuned on PASCAL VOC 2007 train set and use the val set to generate visualizations.
In order to measure whether Grad-CAM helps distinguish between classes, we select images from the VOC 2007 val set that contain exactly two annotated categories and create visualizations for each of them. For both VGG-16 and AlexNet CNNs, we obtain category-specific visualizations using four techniques: Deconvolution, Guided Backpropagation, and the Grad-CAM versions of each of these methods (Deconvolution Grad-CAM and Guided Grad-CAM). We show visualizations to 43 workers on Amazon Mechanical Turk (AMT) and ask them “Which of the two object categories is depicted in the image?” as shown in fig:human_studies.
Intuitively, a good prediction explanation is one that produces discriminative visualizations for the class of interest. The experiment was conducted using all 4 visualizations for 90 image-category pairs (360 visualizations); 9 ratings were collected for each image, evaluated against the ground truth and averaged to obtain the accuracy. When viewing Guided Grad-CAM, human subjects can correctly identify the category being visualized in 61.23% of cases (compared to 44.44% for Guided Backpropagation; thus, Grad-CAM improves human performance by 16.79%). Similarly, we also find that Grad-CAM helps make Deconvolution more class-discriminative (from 53.33% to 61.23%). Guided Grad-CAM performs the best among all the methods. Interestingly, our results seem to indicate that Deconvolution is more class discriminative than Guided Backpropagation, although Guided Backpropagation is more aesthetically pleasing than Deconvolution. To the best of our knowledge, our evaluations are the first to quantify this subtle difference.
Given two prediction explanations, we want to evaluate which seems more trustworthy. We use AlexNet and VGG-16 to compare Guided Backpropagation and Guided Grad-CAM visualizations, noting that VGG-16 is known to be more reliable than AlexNet, with an accuracy of 79.09 mAP vs. 69.20 mAP on PASCAL classification. In order to tease apart the efficacy of the visualization from the accuracy of the model being visualized, we consider only those instances where both models made the same prediction as the ground truth. Given a visualization from AlexNet and one from VGG-16, and the predicted object category, 54 AMT workers were instructed to rate the reliability of the models relative to each other on a scale of clearly more/less reliable (+/-2), slightly more/less reliable (+/-1), and equally reliable (0). This interface is shown in fig:human_studies. To eliminate any biases, VGG and AlexNet were assigned to be model1 with approximately equal probability. Remarkably, we find that human subjects are able to identify the more accurate classifier (VGG over AlexNet) despite viewing identical predictions from the two, simply from the different explanations they generate. With Guided Backpropagation, humans assign VGG an average score of 1.00, meaning it is slightly more reliable than AlexNet, while Guided Grad-CAM achieves a higher score of 1.27, which is closer to the option saying that VGG is clearly more reliable. Thus our visualization can help users place trust in a model that generalizes better, based only on individual prediction explanations.
Faithfulness of a visualization to a model is its ability to accurately explain the function learned by the model. Naturally, there exists a trade-off between the interpretability and faithfulness of a visualization: a more faithful visualization is typically less interpretable, and vice versa. In fact, one could argue that a fully faithful explanation is the entire description of the model, which in the case of deep models is neither interpretable nor easy to visualize. We have verified in previous sections that our visualizations are reasonably interpretable. We now evaluate how faithful they are to the underlying model. One expectation is that our explanations should be locally accurate, i.e., in the vicinity of the input data point, our explanation should be faithful to the model.
For comparison, we need a reference explanation with high local-faithfulness. One obvious choice for such a visualization is image occlusion , where we measure the difference in CNN scores when patches of the input image are masked. Interestingly, patches which change the CNN score are also patches to which Grad-CAM and Guided Grad-CAM assign high intensity, achieving rank correlation 0.254 and 0.261 (0.168, 0.220 and 0.208 achieved by Guided Backpropagation, c-MWP and CAM, respectively) averaged over 2510 images in PASCAL 2007 val set. This shows that Grad-CAM visualizations are more faithful to the original model compared to all existing methods. Through localization, pointing, segmentation, and human studies, we see that Grad-CAM visualizations are more interpretable, and through correlation with occlusion maps we see that Grad-CAM is more faithful to the model, which are two important characteristics of a visualization technique.
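The faithfulness check above has two ingredients: an occlusion map (the drop in class score when each patch is masked) and a rank correlation against the visualization. A hedged sketch, with `score_fn` standing in for a CNN's class-score function (names are illustrative; the Spearman computation here breaks ties arbitrarily, which suffices for a sketch):

```python
# Sketch of the occlusion-based faithfulness evaluation: slide a masking
# patch over the image, record the class-score drop per patch, then
# rank-correlate the resulting map with a visualization.
import numpy as np

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Score drop per occluded patch; larger drop = more important region."""
    base = score_fn(image)
    h, w = image.shape[:2]
    occ = np.zeros((h // patch, w // patch))
    for i in range(occ.shape[0]):
        for j in range(occ.shape[1]):
            masked = image.copy()
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = fill
            occ[i, j] = base - score_fn(masked)
    return occ

def rank_correlation(a, b):
    """Spearman correlation between two flattened maps."""
    ra = np.argsort(np.argsort(a.ravel())).astype(float)
    rb = np.argsort(np.argsort(b.ravel())).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

In the paper's setup the occlusion map plays the role of the locally-faithful reference, and each visualization technique is scored by its rank correlation with that reference, averaged over the dataset.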
We use Guided Grad-CAM to analyze failure modes of the VGG-16 CNN on ImageNet classification. In order to see what mistakes the network is making, we first obtain a list of examples that the network (VGG-16) fails to classify correctly. For these misclassified examples, we use Guided Grad-CAM to visualize both the correct and the predicted class. A major advantage of Guided Grad-CAM over other methods is that its high resolution and strong class-discriminativeness make this kind of analysis possible. As seen in fig:failures, some failures are due to ambiguities inherent in ImageNet classification. We can also see that seemingly unreasonable predictions have reasonable explanations, an observation also made in HOGgles.
Goodfellow et al. demonstrated the vulnerability of current deep networks to adversarial examples: slight, imperceptible perturbations of input images which fool the network into misclassifying them with high confidence. We generate adversarial images for the ImageNet-trained VGG-16 model such that it assigns a high probability (>0.9999) to a category that is absent in the image and a very low probability to categories that are present. We then compute Grad-CAM visualizations for the categories that are present. We can see from Fig. 5 that, in spite of the network being completely certain about the absence of these categories (tiger cat and boxer), Grad-CAM visualizations can correctly localize them. This shows the robustness of Grad-CAM to adversarial noise.
In this section we demonstrate another use of Grad-CAM: identifying and thus reducing bias in training datasets. Models trained on biased datasets may not generalize to real-world scenarios, or worse, may perpetuate biases and stereotypes (gender, race, age, etc.) [6, 37]. We finetune an ImageNet-trained VGG-16 model for the task of classifying “doctor” vs. “nurse”. We built our training dataset using the top 250 relevant images (for each class) from a popular image search engine. The trained model achieves good accuracy on validation images from the search engine, but at test time the model did not generalize as well (82%).
Grad-CAM visualizations of the model's predictions revealed that the model had learned to look at the person's face and hairstyle to distinguish nurses from doctors, thus learning a gender stereotype. Indeed, the model was misclassifying several female doctors as nurses and male nurses as doctors. Clearly, this is problematic. It turns out the image search results were gender-biased (78% of the images for doctors were men, and 93% of the images for nurses were women).
Through this intuition gained from our visualization, we reduced the bias from the training set by adding in male nurses and female doctors to the training set, while maintaining the same number of images per class as before. The re-trained model now generalizes better to a more balanced test set (90%). Additional analysis along with Grad-CAM visualizations from both models can be found in the supplementary. This experiment demonstrates that Grad-CAM can help detect and remove biases in datasets, which is important not just for generalization, but also for fair and ethical outcomes as more algorithmic decisions are made in society.
We propose a new explanation modality: counterfactual explanations. Using a slight modification to Grad-CAM, we obtain these counterfactual explanations, which highlight support for the regions that would make the network change its decision. Removing the concepts occurring in those regions would make the model more confident about the given target decision.
Specifically, we negate the gradient of $y^c$ (the score for class $c$) with respect to the feature maps $A^k$ of a convolutional layer. Thus the importance weights $\alpha_k^c$ now become

$$\alpha_k^c = \frac{1}{Z} \sum_i \sum_j -\frac{\partial y^c}{\partial A_{ij}^k}$$
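Relative to ordinary Grad-CAM, the counterfactual variant is a one-line change: negate the gradients before the global average pooling. An illustrative NumPy sketch with the gradients assumed precomputed (function names are hypothetical):

```python
# Minimal sketch of the counterfactual Grad-CAM weighting: identical to the
# standard weights, except the gradient is negated before pooling.
import numpy as np

def grad_cam_weights(dA, counterfactual=False):
    """dA: gradients dy^c/dA^k of shape (K, u, v) -> weights alpha_k^c, (K,)."""
    g = -dA if counterfactual else dA
    return g.mean(axis=(1, 2))             # global average pooling over i, j

def cam_map(A, alpha):
    """ReLU(sum_k alpha_k A^k): with negated gradients this highlights regions
    whose removal would increase the model's confidence in the target class."""
    return np.maximum((alpha[:, None, None] * A).sum(axis=0), 0.0)
```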
Grad-CAM localizations from a holistic captioning model for captions generated by a dense captioning model for the three bounding box proposals marked on the left. The Grad-CAM localizations (right) agree with those bounding boxes, even though neither the captioning model nor Grad-CAM uses any bounding box annotations.
Finally, we apply our Grad-CAM technique to the image captioning [7, 23, 47] and Visual Question Answering (VQA) [3, 15, 36, 41] tasks. We find that Grad-CAM leads to interpretable visual explanations for these tasks as compared to baseline visualizations which do not change noticeably across different predictions. Note that existing visualization techniques are either not class-discriminative (Guided Backpropagation, Deconvolution), or simply cannot be used for these tasks or architectures, or both (CAM or c-MWP).
In this section, we visualize spatial support for an image captioning model using Grad-CAM. We build on top of the publicly available ‘neuraltalk2’ implementation (https://github.com/karpathy/neuraltalk2) that uses a finetuned VGG-16 CNN for images and an LSTM-based language model. Note that this model does not have an explicit attention mechanism. Given a caption, we compute the gradient of its log probability w.r.t. units in the last convolutional layer of the CNN ( for VGG-16) and generate Grad-CAM visualizations as described in sec:approach. See Fig. 6(a). In the first example, the Grad-CAM maps for the generated caption localize every occurrence of both the kites and the people in spite of their relatively small size. In the next example, notice how Grad-CAM correctly highlights the pizza and the man, but ignores the woman nearby, since ‘woman’ is not mentioned in the caption. More qualitative examples can be found in supplementary sec:sup_experiments.
Comparison to dense captioning. Johnson et al. recently introduced the Dense Captioning (DenseCap) task that requires a system to jointly localize and caption salient regions in a given image. Their model consists of a Fully Convolutional Localization Network (FCLN) and an LSTM-based language model that produces both bounding boxes for regions of interest and associated captions in a single forward pass. Using the DenseCap model, we generate region-specific captions. Next, we visualize Grad-CAM localizations for these region-specific captions using the holistic captioning model described earlier (neuraltalk2). Interestingly, we observe that Grad-CAM localizations correspond to the regions in the image that the DenseCap model described, even though the holistic captioning model was not trained with any region or bounding-box level annotations (see Fig. 6(b)).
Typical VQA pipelines [3, 15, 36, 41] consist of a CNN to model images and an RNN language model for questions. The image and the question representations are fused to predict the answer, typically with a 1000-way classification. Since this is a classification problem, we pick an answer (the score $y^c$ in Eq. 3) and use its score to compute Grad-CAM to show image evidence that supports the answer. Despite the complexity of the task, involving both visual and language components, the explanations of the VQA model shown in Fig. 8 are surprisingly intuitive and informative.
Comparison to Human Attention. Das et al. collected human attention maps for a subset of the VQA dataset. These maps have high intensity where humans looked in the image in order to answer a visual question. We compare human attention maps to Grad-CAM visualizations for the VQA model on 1374 val question–image (QI) pairs, using the rank correlation evaluation protocol developed by Das et al. Grad-CAM and human attention maps have a correlation of 0.136, which is statistically higher than chance or random attention maps (zero correlation). This shows that, despite not being trained on grounded image-text pairs, even non-attention-based CNN + LSTM VQA models are surprisingly good at localizing discriminative regions required to output a particular answer.
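For illustration, a simplified version of such a rank-correlation comparison (a numpy sketch of Spearman correlation that ignores ties; not the exact protocol of Das et al.) might look like:

```python
import numpy as np

def rank_correlation(map_a, map_b):
    """Spearman rank correlation between two flattened attention maps.
    A simplified stand-in for the evaluation protocol; ties are broken
    arbitrarily by argsort rather than averaged."""
    # argsort twice converts values to their ranks
    a = map_a.ravel().argsort().argsort().astype(float)
    b = map_b.ravel().argsort().argsort().astype(float)
    a -= a.mean()
    b -= b.mean()
    # Pearson correlation of the ranks
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Identical maps score 1.0, perfectly reversed maps score -1.0, and unrelated random maps hover near the zero-correlation chance baseline mentioned above.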
Visualizing ResNet-based VQA model with attention. Lu et al. use a 200-layer ResNet to encode the image, and jointly learn a hierarchical attention mechanism on the question and image. Fig. 7(b) shows Grad-CAM visualizations for this network. As we visualize deeper layers of the ResNet, we see small changes in Grad-CAM for most adjacent layers and larger changes between layers that involve dimensionality reduction. Visualizations for various layers in ResNet can be found in the supplementary material. To the best of our knowledge, we are the first to visualize decisions made by ResNet-based architectures.
In this work, we proposed a novel class-discriminative localization technique—Gradient-weighted Class Activation Mapping (Grad-CAM)—for making any
CNN-based models more transparent by producing visual explanations. Further, we combined our Grad-CAM localizations with existing high-resolution visualizations to obtain high-resolution, class-discriminative Guided Grad-CAM visualizations. Our visualizations outperform all existing approaches on weakly-supervised localization, pointing, and faithfulness to the original model. Extensive human studies reveal that our visualizations can discriminate between classes more accurately, better reveal the trustworthiness of a classifier, and help identify biases in datasets. Finally, we showed the broad applicability of Grad-CAM to various off-the-shelf architectures for tasks including image classification, image captioning, and VQA, providing faithful visual explanations for possible model decisions. We believe that a true AI system should not only be intelligent, but also be able to reason about its beliefs and actions for humans to trust it. Future work includes explaining the decisions made by deep networks in domains such as reinforcement learning, natural language processing, and video applications.
Is object localization for free? – Weakly-supervised learning with convolutional neural networks. In CVPR, 2015.
Learning Deep Features for Discriminative Localization. In CVPR, 2016.
In this section we formally prove that Grad-CAM is a generalization of CAM, as mentioned in Section 3 in the main paper.
Recall that the CAM architecture consists of fully-convolutional CNNs, followed by global average pooling and a linear classification layer with softmax.
Let the final convolutional layer produce $K$ feature maps $A^k$, with each element indexed by $(i, j)$. So $A^k_{ij}$ refers to the activation at location $(i, j)$ of the feature map $A^k$.

CAM computes a global average pooling (GAP) on $A^k$. Let us define $F^k$ to be the global average pooled output,

$$F^k = \frac{1}{Z}\sum_{i}\sum_{j} A^k_{ij} \tag{5}$$

CAM computes the final scores by,

$$Y^c = \sum_{k} w^c_k \, F^k \tag{6}$$

where $w^c_k$ is the weight connecting the $k$-th feature map with the $c$-th class.

Taking the gradient of the score for class $c$ ($Y^c$) with respect to the feature map $F^k$ we get,

$$\frac{\partial Y^c}{\partial F^k} = \frac{\partial Y^c / \partial A^k_{ij}}{\partial F^k / \partial A^k_{ij}} \tag{7}$$

Taking the partial derivative of (5) with respect to $A^k_{ij}$, we get that $\frac{\partial F^k}{\partial A^k_{ij}} = \frac{1}{Z}$. Substituting this into (7),

$$\frac{\partial Y^c}{\partial F^k} = \frac{\partial Y^c}{\partial A^k_{ij}} \cdot Z \tag{8}$$

From (6) we get that $\frac{\partial Y^c}{\partial F^k} = w^c_k$. Hence,

$$w^c_k = Z \cdot \frac{\partial Y^c}{\partial A^k_{ij}} \tag{9}$$

Now, we can sum both sides of (9) over all pixels $(i, j)$ to get:

$$\sum_{i}\sum_{j} w^c_k = Z \sum_{i}\sum_{j} \frac{\partial Y^c}{\partial A^k_{ij}} \tag{10}$$

Note that $Z$ is the number of pixels in the feature map (i.e., $Z = \sum_i \sum_j 1$), and that $Z$ and $w^c_k$ do not depend on $(i, j)$. Thus, we can re-order terms and see that:

$$w^c_k = \sum_{i}\sum_{j} \frac{\partial Y^c}{\partial A^k_{ij}} \tag{11}$$

We can see that, up to a proportionality constant ($\frac{1}{Z}$) that is normalized out during visualization, the expression for $w^c_k$ is identical to the $\alpha^c_k$ used by Grad-CAM (as described in the main paper).
Thus Grad-CAM is a generalization of CAM to arbitrary CNN-based architectures, while maintaining the computational efficiency of CAM.
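The derivation above can be checked numerically on a toy linear CAM model. The sketch below (numpy, with analytic gradients and names of our choosing; not code from the paper) builds random feature maps and CAM weights, then verifies that the Grad-CAM map equals the CAM map up to the constant $1/Z$:

```python
import numpy as np

# Toy CAM "network": feature maps A^k, GAP, and a linear classifier w_k^c.
rng = np.random.default_rng(0)
K, H, W = 8, 7, 7
Z = H * W                              # number of pixels in a feature map
A = rng.standard_normal((K, H, W))     # feature maps A^k
w = rng.standard_normal(K)             # CAM weights w_k^c for one class c

# CAM localization map: sum_k w_k^c * A^k
cam_map = np.einsum('k,khw->hw', w, A)

# For this linear model the gradient is dY^c/dA^k_ij = w_k^c / Z
# (computed analytically), so alpha_k^c = GAP(gradient) = w_k^c / Z.
grads = np.broadcast_to((w / Z)[:, None, None], (K, H, W))
alpha = grads.mean(axis=(1, 2))        # equals w / Z
gradcam_map = np.einsum('k,khw->hw', alpha, A)

# The two maps are identical up to the constant 1/Z,
# which normalization removes during visualization.
assert np.allclose(Z * gradcam_map, cam_map)
```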
In this section we provide more qualitative results for Grad-CAM and Guided Grad-CAM applied to the task of image classification, image captioning and VQA.
We use Grad-CAM and Guided Grad-CAM to visualize the regions of the image that provide support for a particular prediction. The results reported in Fig. A1 correspond to the VGG-16  network trained on ImageNet.
Fig. A1 shows randomly sampled examples from COCO  validation set. COCO images typically have multiple objects per image and Grad-CAM visualizations show precise localization to support the model’s prediction.
Guided Grad-CAM can even localize tiny objects. For example, our approach correctly localizes the predicted class “torch” (Fig. A1.a) in spite of its size and odd location in the image. Our method is also class-discriminative – it places attention only on the “toilet seat” even when a popular ImageNet category “dog” exists in the image (Fig. A1.e).
We also visualized Grad-CAM, Guided Backpropagation (GB), Deconvolution (DC), GB + Grad-CAM (Guided Grad-CAM), DC + Grad-CAM (Deconvolution Grad-CAM) for images from the ILSVRC13 detection val set that have at least 2 unique object categories each. The visualizations for the mentioned class can be found in the following links.
“computer keyboard, keypad” class: http://i.imgur.com/QMhsRzf.jpg
“sunglasses, dark glasses, shades” class: http://i.imgur.com/a1C7DGh.jpg
We use the publicly available Neuraltalk2 code and model (https://github.com/karpathy/neuraltalk2) for our image captioning experiments. The model uses VGG-16 to encode the image. The image representation is passed as input at the first time step to an LSTM that generates a caption for the image. The model is trained end-to-end, along with CNN finetuning, on the COCO Captioning dataset. We forward the image through the image captioning model to obtain a caption. We then use Grad-CAM to get a coarse localization and combine it with Guided Backpropagation to get a high-resolution visualization that highlights regions in the image that provide support for the generated caption.
We use Grad-CAM and Guided Grad-CAM to explain why a publicly available VQA model  answered what it answered.
The VQA model by Lu et al. uses a standard CNN followed by a fully connected layer to transform the image to 1024 dimensions to match the LSTM
embeddings of the question. The transformed image and LSTM embeddings are then pointwise multiplied to get a combined representation of the image and question, and a multi-layer perceptron is trained on top to predict one among 1000 answers. We show visualizations for the VQA model trained with 3 different CNNs: AlexNet, VGG-16, and VGG-19. Even though the CNNs were not finetuned for the task of VQA, it is interesting to see how our approach can serve as a tool to understand these networks better by providing a localized, high-resolution visualization of the regions the model is looking at. Note that these networks were trained with no explicit attention mechanism enforced.
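A schematic of this fusion step, with hypothetical names and numpy arrays standing in for the actual CNN and LSTM features, could look like:

```python
import numpy as np

def vqa_fuse(img_feat, ques_feat, answer_weights):
    """Pointwise-multiplication fusion sketch for the VQA baseline described
    above. img_feat and ques_feat are assumed already projected to a common
    dimension D; answer_weights is a (num_answers, D) classifier matrix
    standing in for the trained multi-layer perceptron. All names are
    hypothetical."""
    joint = img_feat * ques_feat          # elementwise product, shape (D,)
    logits = answer_weights @ joint       # scores over the answer vocabulary
    return int(np.argmax(logits))         # index of the predicted answer
```

Picking one answer's score (rather than the argmax) and backpropagating it to the conv layer is exactly what Grad-CAM needs, so no architectural change is required.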
Notice in the first row of Fig. A3, for the question, “Is the person riding the waves?”, the VQA model with AlexNet and VGG-16 answered “No”, as they concentrated on the person mainly, and not the waves. On the other hand, VGG-19 correctly answered “Yes”, and it looked at the regions around the man in order to answer the question. In the second row, for the question, “What is the person hitting?”, the VQA model trained with AlexNet answered “Tennis ball” just based on context without looking at the ball. Such a model might be risky when employed in real-life scenarios. It is difficult to determine the trustworthiness of a model just based on the predicted answer. Our visualizations provide an accurate way to explain the model’s predictions and help in determining which model to trust, without making any architectural changes or sacrificing accuracy. Notice in the last row of Fig. A3, for the question, “Is this a whole orange?”, the model looks for regions around the orange to answer “No”.
In this section we provide qualitative examples showing the explanations from the two models trained for distinguishing doctors from nurses: model1, which was trained on images (with an inherent bias) from a popular search engine, and model2, which was trained on a more balanced set of images from the same search engine.
As shown in Fig. A4, Grad-CAM visualizations of the model predictions show that the model had learned to look at the person’s face / hairstyle to distinguish nurses from doctors, thus learning a gender stereotype.
Using the insights gained from the Grad-CAM visualizations, we balanced the dataset and retrained the model. The new model, model2 not only generalizes well to a balanced test set, it also looks at the right regions.
In this section we provide details of the ablation studies we performed.
Fig. 1 (e, k) of the main paper shows the results of occlusion sensitivity for the “cat” and “dog” classes. We compute this occlusion map by repeatedly masking regions of the image and forward propagating each masked image. At each location of the occlusion map we store the difference between the original score for the particular class and the score obtained after forward propagating the masked image. We experimented with a range of square mask sizes, and we zero-pad the images so that the resultant occlusion map is of the same size as the original image. The resultant occlusion maps can be found in Fig. A5. Note that blue regions correspond to a decrease in score for a particular class (“tiger cat” in the case of Fig. A5) when the region around that pixel is occluded; hence they serve as evidence for the class. The red regions correspond to an increase in score as the region around that pixel is occluded; hence these regions might indicate the existence of other, confusing classes. We observe that an intermediate mask size gives a good trade-off between sharp results and a smooth appearance.
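The masking loop can be sketched as follows (a numpy sketch with names of our choosing; `score_fn` is a placeholder for a forward pass of the network, and the boundary handling here clips the mask at image edges rather than exactly reproducing the zero-padding described above):

```python
import numpy as np

def occlusion_map(image, score_fn, mask_size=3, stride=1):
    """Slide a zero mask over the image and record the drop in class score.

    Positive map values mean occluding that region lowered the score
    (evidence for the class); negative values mean occlusion raised it,
    possibly indicating a confusing class.
    """
    H, W = image.shape[:2]
    pad = mask_size // 2
    out = np.zeros((H, W))
    base = score_fn(image)                     # unmasked class score
    for y in range(0, H, stride):
        for x in range(0, W, stride):
            masked = image.copy()
            masked[max(0, y - pad):y + pad + 1,
                   max(0, x - pad):x + pad + 1] = 0
            out[y, x] = base - score_fn(masked)
    return out
```

Larger `mask_size` values smooth the map at the cost of sharpness, which is the trade-off discussed above.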
We show results of applying Grad-CAM for the “Tiger-cat” category on different convolutional layers in AlexNet and VGG-16 CNN. As expected, the results from Fig. A6 show that localization becomes progressively worse as we move to shallower convolutional layers. This is because the later convolutional layers capture high-level semantic information and at the same time retain spatial information, while the shallower layers have smaller receptive fields and only concentrate on local features that are important for the next layers.
| Design choice | Top-1 localization error (%) |
| Grad-CAM without ReLU in Eq. 1 | 74.98 |
| Grad-CAM with absolute gradients | 58.19 |
| Grad-CAM with GMP gradients | 59.96 |
| Grad-CAM with Deconv ReLU | 83.95 |
| Grad-CAM with Guided ReLU | 59.14 |
We evaluate design choices via top-1 localization error on the ILSVRC15 val set .
Instead of global average pooling (GAP) the incoming gradients to the convolutional layer, we tried global max pooling (GMP) them. We observe that using GMP lowers the localization ability of our Grad-CAM technique. An example can be found in Fig. A9 below, and this observation is also summarized in Table A1. This may be because the max is statistically less robust to noise than the averaged gradient.
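The two pooling choices differ only in how the gradients are reduced over space. A small sketch (our names, numpy) makes the noise-sensitivity argument concrete:

```python
import numpy as np

def channel_weights(grads, pooling="gap"):
    """Pool gradients of shape (K, H, W) into per-channel weights alpha_k.

    'gap' averages over space, as in Grad-CAM; 'gmp' takes the spatial max,
    the alternative compared in Table A1.
    """
    if pooling == "gap":
        return grads.mean(axis=(1, 2))
    if pooling == "gmp":
        return grads.max(axis=(1, 2))
    raise ValueError(f"unknown pooling: {pooling}")
```

A single spiked gradient value dominates the GMP weight entirely but only nudges the GAP weight, illustrating why averaging is more robust to noise.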
Effect of Guided-ReLU:
Springenberg et al. introduced Guided Backprop, where they modified the backward pass of ReLU to pass only positive gradients to regions with positive activations. Applying this change to the computation of our Grad-CAM maps introduces a drop in the class-discriminative ability of Grad-CAM, as can be seen in fig:relu, but it gives a slight improvement in localization on the ILSVRC'14 localization challenge (see Table A1).
Effect of Deconv-ReLU:
Zeiler and Fergus, in their Deconvolution work, introduced a slight modification to the backward pass of ReLU to pass only the positive gradients from higher layers. Applying this modification to the computation of our Grad-CAM gives worse results, as shown in fig:relu.
In recent work, Kolesnikov et al. introduced a new loss function for training weakly-supervised image segmentation models. Their loss function is based on three principles: (1) seed with weak localization cues, (2) expand object seeds to regions of reasonable size, and (3) constrain segmentations to object boundaries. They showed that their proposed loss function leads to better segmentation.
They showed that their algorithm is very sensitive to the seed loss, without which the segmentation network fails to localize objects correctly. In their work, they used CAM for weakly localizing foreground classes. We replaced CAM with Grad-CAM and show results in Fig. A11. The last row shows two failure cases. In the bottom-left image, the clothes of the two people are not highlighted correctly. This could be because the most discriminative parts are their faces, so the Grad-CAM maps highlight only those, resulting in a segmentation that covers only the faces of the two people. In the bottom-right image, the bicycles, being extremely thin, are not highlighted. This could be because the resolution of the Grad-CAM maps is low, which makes it difficult to capture thin areas.
In earlier work, the pointing game was set up to evaluate the discriminativeness of different attention maps for localizing ground-truth categories. In a sense, this evaluates the precision of a visualization: how often does the attention map intersect the segmentation map of the ground-truth category? It does not evaluate how often the visualization technique produces maps which do not correspond to the category of interest. For example, this evaluation does not penalize the visualization in Fig. A13 (top-left) for highlighting a zebra when visualizing the bird category.
Hence we propose a modification to the pointing game to evaluate visualizations of the top-5 predicted categories. In this setting, the visualizations are given an additional option to reject any of the top-5 predictions from the CNN classifier. For each of the two visualizations, Grad-CAM and c-MWP, we choose a threshold on the max value of the visualization that can be used to determine if the category being visualized exists in the image.
We compute the maps for the top-5 categories, and based on the maximum value in the map, we try to classify if the map is of the GT label or a category that is absent in the image. As mentioned in Section 4.2 of the main paper, we find that our approach Grad-CAM outperforms c-MWP by a significant margin (70.58% vs 60.30%). Fig. A13 shows the maps computed for the top-5 categories using c-MWP and Grad-CAM.
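The decision rule for this modified pointing game can be sketched as follows (hypothetical names; a scalar threshold on the map maximum, as described above):

```python
import numpy as np

def hit_or_reject(saliency_map, gt_mask, threshold):
    """Modified pointing game sketch.

    If the map's maximum is below `threshold`, the visualization 'rejects'
    the category (treating it as predicted but absent). Otherwise it is a
    hit when the argmax location falls inside the ground-truth mask, and a
    miss when it does not. Names and the scalar threshold are our
    simplification of the protocol.
    """
    if saliency_map.max() < threshold:
        return "reject"
    y, x = np.unravel_index(saliency_map.argmax(), saliency_map.shape)
    return "hit" if gt_mask[y, x] else "miss"
```

Sweeping the threshold trades off rejecting absent categories against rejecting genuine detections, which is why a single threshold per method is calibrated before scoring.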
We compare Grad-CAM, CAM and c-MWP visualizations from ImageNet trained VGG-16 models finetuned on PASCAL VOC 2012 dataset. While Grad-CAM and c-MWP visualizations can be directly obtained from existing models, CAM requires an architectural change, and requires re-training, which leads to loss in accuracy. Also, unlike Grad-CAM, c-MWP and CAM can only be applied for image classification networks. Visualizations for the ground-truth categories can be found in Fig. A12.
We compare Grad-CAM and c-MWP visualizations from ImageNet-trained VGG-16 models finetuned on the COCO dataset. Visualizations for the top-5 predicted categories can be found in Fig. A13. It can be seen that c-MWP highlights arbitrary regions for predicted but non-existent categories, unlike Grad-CAM, whose maps seem much more reasonable. We quantitatively evaluate this through the pointing experiment.
In this section, we perform Grad-CAM on Residual Networks (ResNets). In particular, we analyze the 200-layer architecture trained on ImageNet (we use the 200-layer ResNet architecture from https://github.com/facebook/fb.resnet.torch).
Current ResNets typically consist of residual blocks. One set of blocks uses identity skip connections (shortcut connections between two layers having identical output dimensions). These sets of residual blocks are interspersed with downsampling modules that alter the dimensions of the propagating signal. As can be seen in Fig. A14, our visualizations applied on the last convolutional layer can correctly localize the cat and the dog. Grad-CAM can also visualize the cat and dog correctly in the residual blocks of the last set. However, as we go towards earlier sets of residual blocks with different spatial resolutions, we see that Grad-CAM fails to localize the category of interest (see the last row of Fig. A14). We observe similar trends for other ResNet architectures (18- and 50-layer ResNets).