Today, many deep learning models perform well in tasks such as object detection, speech recognition, and machine translation. Breakthroughs are evident in several computer vision tasks, including image classification, object detection, semantic segmentation, image captioning, and visual question answering. As much as the performance of these complex models has improved, conventional intelligent models remain limited in effectiveness because they lack the ability to explain their decisions to human users. This is a non-trivial issue in risk-averse domains such as security, health, and autonomous navigation.
Artificial intelligent agents are weaker than humans and not yet completely reliable. Thus, transparency and explainability are key in neural network models for identifying failure modes. Narrowing down to image classification, a few techniques have been proposed to understand the decisions of image classification models. A common approach, usually called saliency (sensitivity or pixel attribution), is to find the regions or subset of pixels of an image that were particularly influential to the final classification by the model [11, 12, 13]. This approach, including the general Class Activation Map (CAM), underperforms when localizing multiple occurrences of the same class. Also, gradient-based CAM (Grad-CAM) does not capture the entire object completely when used on single-object images, which affects performance on recognition tasks. Although the Grad-CAM++ technique addresses these limitations, improvements are still required in terms of class object capturing, localization, and visual appeal. In addition, the implementations of the current visualization techniques do not consider visualizing a single neuron or a subset of neurons in a feature map; they stop at the feature map level.
In this paper, we introduce gradient smoothening into Grad-CAM++. The resulting technique makes provision for visualizing a convolutional layer, a subset of feature maps, or a subset of neurons in a feature map with improved visual appeal, localization, and class object capturing. Smoothening entails adding noise to copies of the sample image of interest and taking the average of all gradient matrices generated from the noised images. Grad-CAM++ performs pixel-wise weighting of the gradients of the output with respect to each spatial position in the final convolutional feature map of the CNN. This provides a measure of the importance of each pixel in a feature map towards the overall decision of the CNN.
In this section, we discuss previous efforts put into understanding outputs from CNNs. One of the earliest efforts in understanding deep CNNs is the deconvolution approach called Deconvnet. In this method, data flows from a neuron activation in the higher layers back to the lower layers, and the parts of the image that highly influence that neuron are highlighted in the process. This led to the guided backpropagation idea: a new variant of the “deconvolution approach” for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches. A visualization toolbox was later presented to synthesize the input image that causes a specific unit in a neural network to have a high activation, which helps in visualizing the functionality of the unit. The toolbox could show activations for input images from a webcam or an image file, and gives intuition about what each filter is doing in each layer. Class-specific saliency maps, generated by performing gradient ascent in pixel space to reach a maximum, were also proposed. This proves to be a more guided approach to synthesizing input images that maximally activate a neuron and helps to give better explanations of how a given CNN modeled a class. Other interesting approaches were proposed, such as Local Interpretable Model-Agnostic Explanations (LIME), DeepLIFT, and Contextual Explanation Networks (CENs).
More recent visualization techniques are the Class Activation Map (CAM), the Gradient-Weighted Class Activation Map (Grad-CAM), and a generalization of Grad-CAM called Grad-CAM++. The CAM work demonstrated that a CNN with a Global Average Pooling (GAP) layer has remarkable localization ability despite being trained on image-level labels. CAM works for modified image classification CNNs that do not contain fully connected layers. Grad-CAM is a generalization of CAM for any CNN-based architecture: while CAM is limited to a narrow class of CNN models, Grad-CAM is broadly applicable and needs no re-training. The Grad-CAM technique computes the gradient of the class score $Y^c$ with respect to the feature maps $A^k$ of the last convolutional layer. The gradients flowing back are global-average-pooled to obtain the weights

$$\alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial Y^c}{\partial A_{ij}^k}, \tag{1}$$

where $\alpha_k^c$ captures the importance of feature map $k$ for a target class $c$ and $Z$ is the number of spatial locations. The Grad-CAM heatmap is a weighted combination of feature maps followed by a ReLU,

$$L^c = \mathrm{ReLU}\!\left(\sum_k w_k^c A^k\right), \tag{2}$$

with $w_k^c = \alpha_k^c$ in the Grad-CAM case.
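The two Grad-CAM steps above (global-average-pooled gradient weights, then a ReLU over the weighted feature-map sum) can be sketched in a few lines of NumPy. The function name and the (H, W, K) tensor layout are our own illustrative choices, not part of any published API:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Grad-CAM saliency from feature maps A^k and the gradients
    dY^c/dA^k of the class score, both shaped (H, W, K)."""
    # Global-average-pool the gradients: one weight per feature map.
    alpha = grads.mean(axis=(0, 1))                        # shape (K,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.tensordot(feature_maps, alpha, axes=([2], [0]))
    return np.maximum(cam, 0)                              # shape (H, W)
```

In a real setting the gradient tensor would come from the framework's automatic differentiation; here it is just an array.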
In Grad-CAM++, an algorithm was proposed that uses a weighted combination of the positive partial derivatives of the last convolutional layer's feature maps with respect to a specific class score as weights, in order to generate a visual explanation for the class label under consideration. The weights were formulated as

$$w_k^c = \sum_i\sum_j \alpha_{ij}^{kc}\,\mathrm{ReLU}\!\left(\frac{\partial Y^c}{\partial A_{ij}^k}\right), \tag{3}$$

which capture the importance of a particular activation map through

$$\alpha_{ij}^{kc} = \frac{\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}}{2\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2} + \sum_a\sum_b A_{ab}^k \dfrac{\partial^3 Y^c}{(\partial A_{ij}^k)^3}}, \tag{4}$$

where $\alpha_{ij}^{kc}$ captures the importance of location $(i, j)$ in activation map $A^k$ for target class $c$. Replacing $w_k^c$ in the fundamental assumption of CAM (not shown here), taking the partial derivative with respect to $A_{ij}^k$, and rearranging terms yields this importance of each location in each activation map.
SMOOTHGRAD was introduced as a simple method that can visually sharpen gradient-based sensitivity maps by taking random samples in a neighborhood of an input $x$ and averaging the resulting sensitivity maps. Formally, this means calculating

$$\hat{M}_c(x) = \frac{1}{n}\sum_{1}^{n} M_c\big(x + \mathcal{N}(0, \sigma^2)\big), \tag{5}$$

where $n$ is the number of samples, $M_c$ is the sensitivity map for class $c$, and $\mathcal{N}(0, \sigma^2)$ represents Gaussian noise with standard deviation $\sigma$. With the motivation of providing enhanced visualization maps, we apply this smoothening technique to the gradient computations involved in Grad-CAM++. The resulting gradients are then used in the Grad-CAM++ algorithm, which yields better maps (in terms of visual appeal, localization, and object capturing) for deep CNNs.
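The SMOOTHGRAD average can be sketched as follows. The names are our own, and the sensitivity map is a toy analytic one ($M_c(x) = 2x$, the gradient of $\sum x^2$) so the averaged result is easy to check against the exact map:

```python
import numpy as np

def smooth_grad(x, grad_fn, n=25, std_dev=0.15, seed=0):
    """Average the sensitivity maps of n noised copies of x.
    grad_fn(x) must return dY/dx with the same shape as x."""
    rng = np.random.default_rng(seed)
    # Noise scale relative to the input's value range, as in SMOOTHGRAD.
    sigma = std_dev * (x.max() - x.min())
    maps = [grad_fn(x + rng.normal(0.0, sigma, x.shape)) for _ in range(n)]
    return np.mean(maps, axis=0)

# Toy check with f(x) = sum(x**2), whose exact sensitivity map is 2x.
x = np.linspace(0.0, 1.0, 16).reshape(4, 4)
m = smooth_grad(x, lambda z: 2.0 * z, n=500)
```

With many samples the averaged map converges toward the noiseless map while suppressing the sample-to-sample jitter that makes raw gradient maps look grainy.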
In this section, we discuss the smoothening process (noise over input) and gradient averaging. We also discuss how the API was used to generate our results and how it can be used subsequently by other users. This involves model, convolutional layer, feature map, and neuron selection.
3.1 Noise Over Input
We set the number of noised sample images to be generated by adding Gaussian noise to the original input. The standard deviation from the mean value of the input is also set; it controls the degree of noise added. We provide an API for interacting with the algorithm. By default, the API sets the number of noised sample images to 0, which means the gradients of the original input are used with no noise. The default standard deviation is 0.15. These values can be varied until a satisfactory visual map is produced.
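A sketch of this sampling step under the defaults just described. The helper name is hypothetical, and we assume (as in SMOOTHGRAD) that the standard deviation is interpreted relative to the input's value range:

```python
import numpy as np

def noised_samples(img, nsamples=0, std_dev=0.15, seed=0):
    """Inputs whose gradients will be averaged. With nsamples=0
    (the default) only the original image is used, i.e. no noise."""
    if nsamples == 0:
        return [img]
    rng = np.random.default_rng(seed)
    sigma = std_dev * (img.max() - img.min())  # noise relative to value range
    return [img + rng.normal(0.0, sigma, img.shape) for _ in range(nsamples)]
```

Gradients are then computed once per returned sample and averaged, as described in the next subsection.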
3.2 Gradients Averaging
We take the average of the first-, second-, and third-order partial derivatives over all noised inputs and use the resulting averaged derivatives to compute $\alpha_{ij}^{kc}$ and $w_k^c$.

Let $D_1^k$, $D_2^k$, and $D_3^k$ denote the matrices of first-, second-, and third-order partial derivatives, respectively, for feature map $k$. We compute them as

$$D_m^k = \frac{1}{n}\sum_{t=1}^{n} \frac{\partial^m Y^c(x_t)}{(\partial A^k)^m}, \qquad m \in \{1, 2, 3\},$$

where $x_t$ is the $t$-th noised input and $n$ is the number of noised samples. Substituting the averaged gradients into Equation 3, the Grad-CAM++ weights become

$$w_k^c = \sum_i\sum_j \alpha_{ij}^{kc}\,\mathrm{ReLU}\big(D_{1,ij}^k\big), \qquad \alpha_{ij}^{kc} = \frac{D_{2,ij}^k}{2\,D_{2,ij}^k + \sum_a\sum_b A_{ab}^k\, D_{3,ij}^k}.$$

When $w_k^c$ is substituted into Equation 2, we get the final class-discriminative saliency matrix, which can be plotted with matplotlib or any other image plotting library. This serves as the final saliency map; this modified Grad-CAM++ is what we call Smooth Grad-CAM++.
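Putting the pieces together, the substitution of the averaged derivatives into Equations 3 and 2 can be sketched in NumPy. The function name and the (H, W, K) layout are our own; $D_1$, $D_2$, $D_3$ are assumed to be the derivative tensors already averaged over the noised inputs (e.g. via np.mean over the per-sample tensors):

```python
import numpy as np

def smooth_grad_cam_pp(feature_maps, d1, d2, d3, eps=1e-8):
    """Smooth Grad-CAM++ saliency from the averaged derivative tensors
    D1, D2, D3 and the feature maps A^k, all shaped (H, W, K)."""
    # Denominator of alpha: 2*D2 + (sum over spatial positions of A^k) * D3.
    global_sum = feature_maps.sum(axis=(0, 1), keepdims=True)  # (1, 1, K)
    denom = 2.0 * d2 + global_sum * d3
    alpha = d2 / np.where(denom != 0.0, denom, eps)
    # Weights w_k^c: alpha-weighted positive smoothed gradients, pooled.
    w = (alpha * np.maximum(d1, 0)).sum(axis=(0, 1))           # (K,)
    # Final class-discriminative saliency map: weighted sum, then ReLU.
    return np.maximum(np.tensordot(feature_maps, w, axes=([2], [0])), 0)
```

The small `eps` guard is a practical choice we add to avoid division by zero when a denominator entry vanishes.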
3.3 Choosing a Model
Any trained deep CNN model can be chosen for visualization. In this paper, we used a pre-trained VGG-16 model and explored its last convolutional layer.
3.4 Choosing Layer
At each instance, only one convolutional layer can be visualized. The name of the layer to be visualized is passed to the API. Names by default follow a specific convention; however, viewing the summary of the trained model will reveal the name or unique identifier of each convolutional layer. Each layer contains feature maps, usually of dimension

$$\left(\frac{H - k_h + 2P}{s_h} + 1\right) \times \left(\frac{W - k_w + 2P}{s_v} + 1\right),$$

where $H$ is the input height, $W$ is the input width, $P$ is the amount of zero padding added, $k_h$ is the kernel height, $k_w$ is the kernel width, and $s_h$ and $s_v$ are the horizontal and vertical strides of the convolution, respectively. If no padding is supplied, the dimension would be

$$\left\lceil \frac{H - k_h + 1}{s_h} \right\rceil \times \left\lceil \frac{W - k_w + 1}{s_v} \right\rceil,$$

where $\lceil \cdot \rceil$ rounds up to the nearest integer.
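As a quick sanity check, the output size can be computed with a small helper (ours, not part of the paper's API):

```python
def conv_output_size(h, w, kh, kw, sh=1, sv=1, p=0):
    """Spatial size of a convolution's output feature maps:
    floor((dim - kernel + 2*padding) / stride) + 1 per axis; for p = 0
    this equals the ceil((dim - kernel + 1) / stride) form in the text."""
    return (h - kh + 2 * p) // sh + 1, (w - kw + 2 * p) // sv + 1
```

For example, a 3x3 kernel with stride 1 and padding 1 preserves a 224x224 input, as in the VGG-16 convolution blocks.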
3.5 Choosing Feature Maps
To specify the feature maps to be visualized, the filter parameter must be set. The filter parameter is a list of integers specifying the indices of the feature maps to be visualized in the specified convolutional layer. If values are set for the filter parameter, a map is generated for each listed feature map; the values of $k$ in the respective equations are then bounded to the supplied indices.
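Bounding $k$ amounts to restricting the weighted feature-map sum to the listed indices, which can be sketched as follows (an illustrative function, assuming (H, W, K) feature maps and precomputed per-map weights):

```python
import numpy as np

def cam_for_filters(feature_maps, weights, filters=None):
    """Weighted feature-map combination with k bounded to the chosen
    filter indices; filters=None uses every feature map."""
    if filters is None:
        filters = range(feature_maps.shape[2])
    cam = sum(weights[k] * feature_maps[:, :, k] for k in filters)
    return np.maximum(cam, 0)
```

Passing a single index visualizes one feature map on its own, which is how per-map figures like those in our results can be produced.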
Recall that the number of feature maps corresponds to the number of kernels.
3.6 Choosing Neurons
One of the main contributions of this work is the capability of our technique to visualize subsets of neurons in a feature map. Visualizing neurons is useful when individual neuron activations are of interest. For instance, a subset scanning algorithm has been used to identify anomalous activations in a convolutional neural network; Smooth Grad-CAM++ will be useful in providing explanations at the neuron level. Our API provides an option to visualize the region of neurons within a specified coordinate boundary when the region parameter is set to true. When the region parameter is set to false and a subset of coordinates is provided, only the neurons at those coordinates are visualized while other activations are clipped at zero. Smooth Grad-CAM++ could thus be a very flexible tool for debugging CNN models.
3.7 API Call
Necessary arguments are passed during API calls as shown below:
grads = grad_cam_plus_smooth(model, img, layer_name='block5_conv3', nsamples=5, std_dev=0.3, filters=, region=True, subset=[(0,10),(12, 12)])
If the region parameter is set to true, the two coordinates in the subset are treated as bounds for the neurons to be visualized; hence, all neurons within the bounds are visualized. If region is false, each coordinate specified in the subset list is visualized.
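The region/subset behaviour can be sketched as masking the saliency map, with every activation outside the selection clipped at zero (an illustrative helper, not the actual API internals):

```python
import numpy as np

def mask_neurons(saliency, subset, region=True):
    """Keep only the chosen neurons of a saliency map. With region=True
    the two coordinates in `subset` bound a rectangle; otherwise each
    listed coordinate is kept individually."""
    mask = np.zeros_like(saliency)
    if region:
        (r0, c0), (r1, c1) = subset
        mask[r0:r1 + 1, c0:c1 + 1] = 1.0
    else:
        for (r, c) in subset:
            mask[r, c] = 1.0
    return saliency * mask
```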
From Figure 1, Smooth Grad-CAM++ gives a clearer explanation of the particular features the model learned. For instance, Smooth Grad-CAM++ was able to highlight a larger portion of the water-bird's legs in Figure 1. Also, Smooth Grad-CAM++ captures a larger portion of the class object (as seen in the dog image in Figure 1) and does good localization. Figure 2 shows the visual maps for 3 randomly selected feature maps, namely feature maps 10, 32, and 3. Each feature map learns special features, and some may be blank, as seen in Figure 2. Figures 4 and 5 show saliency maps at the neuron level for specific feature maps, as captioned in the labels. This technique is a step towards gaining insight into what CNN models actually learn.
An enhanced visual saliency map can help increase our understanding of the internal workings of trained deep convolutional neural network models at the inference stage. In this paper, we proposed Smooth Grad-CAM++, an enhanced visual map for deep convolutional neural networks. Our results showed improvements in the generated visual maps when compared to existing methods. These maps were generated by averaging the gradients (i.e., the derivatives of the class score with respect to the input) from many small perturbations of a given image and applying the resulting gradients in the generalized Grad-CAM algorithm (Grad-CAM++). Smooth Grad-CAM++ performs well in object localization and also on multiple occurrences of objects of the same class. It is able to create maps for specific layers, subsets of feature maps, and neurons of interest. This will provide machine learning researchers with better insights into model explainability. Future work entails further investigations to extend this technique to handle multiple-class scenarios and network architectures other than CNNs.
-  Chattopadhay, Aditya, Anirban Sarkar, Prantik Howlader, and Vineeth N. Balasubramanian. ”Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks.” In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 839-847. IEEE, 2018.
-  Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. ”Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In ICCV, pp. 618-626. 2017.
-  Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. ”Imagenet classification with deep convolutional neural networks.” In Advances in neural information processing systems, pp. 1097-1105. 2012.
-  Vinyals, Oriol, Alexander Toshev, Samy Bengio, and Dumitru Erhan. ”Show and tell: A neural image caption generator.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3156-3164. 2015.
-  Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. ”Vqa: Visual question answering.” In Proceedings of the IEEE international conference on computer vision, pp. 2425-2433. 2015.
-  He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. ”Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
-  Mahendran, Aravindh, and Andrea Vedaldi. ”Visualizing deep convolutional neural networks using natural pre-images.” International Journal of Computer Vision 120, no. 3 (2016): 233-255.
-  Smilkov, Daniel, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. ”Smoothgrad: removing noise by adding noise.” arXiv preprint arXiv:1706.03825 (2017).
-  Lou, Yin, Rich Caruana, and Johannes Gehrke. ”Intelligible models for classification and regression.” In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 150-158. ACM, 2012.
-  Hoiem, Derek, Yodsawalai Chodpathumwan, and Qieyun Dai. ”Diagnosing error in object detectors.” In European conference on computer vision, pp. 340-353. Springer, Berlin, Heidelberg, 2012.
-  Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. ”Learning deep features for discriminative localization.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921-2929. 2016.
-  Zintgraf, Luisa M., Taco S. Cohen, and Max Welling. ”A new method to visualize deep neural networks.” CoRR, abs/1603.02518, 2016. URL http://arxiv.org/abs/1603.02518.
-  Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. ”Axiomatic attribution for deep networks.” arXiv preprint arXiv:1703.01365 (2017).
-  Zeiler, Matthew D., and Rob Fergus. ”Visualizing and understanding convolutional networks.” In European conference on computer vision, pp. 818-833. Springer, Cham, 2014.
-  Springenberg, Jost Tobias, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. ”Striving for simplicity: The all convolutional net.” arXiv preprint arXiv:1412.6806 (2014).
-  Yosinski, Jason, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. ”Understanding neural networks through deep visualization.” arXiv preprint arXiv:1506.06579 (2015).
-  Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. ”Deep inside convolutional networks: Visualising image classification models and saliency maps.” arXiv preprint arXiv:1312.6034 (2013).
-  Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. ”Why should i trust you?: Explaining the predictions of any classifier.” In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144. ACM, 2016.
-  Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. ”Learning important features through propagating activation differences.” arXiv preprint arXiv:1704.02685 (2017).
-  Al-Shedivat, Maruan, Avinava Dubey, and Eric P. Xing. ”Contextual explanation networks.” arXiv preprint arXiv:1705.10301 (2017).
-  Speakman, Skyler, Srihari Sridharan, Sekou Remy, Komminist Weldemariam, and Edward McFowland. ”Subset Scanning Over Neural Network Activations.” arXiv preprint arXiv:1810.08676 (2018).