Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value

08/07/2022
by   Quan Zheng, et al.

Explaining deep convolutional neural networks has recently drawn increasing attention, since it helps to understand the networks' internal operations and why they make certain decisions. Saliency maps, which highlight salient regions closely tied to the network's decision making, are one of the most common ways to visualize and analyze deep networks in the computer vision community. However, saliency maps produced by existing methods cannot faithfully represent the information in images: the weights they assign to activation maps rest on unproven assumptions, lack a solid theoretical foundation, and fail to account for the relations between pixels. In this paper, we develop a novel post-hoc visual explanation method, Shap-CAM, based on class activation mapping. Unlike previous gradient-based approaches, Shap-CAM removes the dependence on gradients by obtaining the importance of each pixel through its Shapley value. We demonstrate that Shap-CAM achieves better visual performance and fairness in interpreting the decision-making process, and it outperforms previous methods on both recognition and localization tasks.
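The abstract does not give the paper's exact estimation procedure, but the idea of scoring each pixel (or region) by its Shapley value can be sketched with a standard Monte Carlo approximation: sample random orderings of the players and average each player's marginal contribution to the coalition built so far. Here `value_fn` is a hypothetical stand-in for the model's class score on an image with only a given set of regions revealed; all names are illustrative, not the authors' implementation.

```python
import random

def shapley_values(players, value_fn, num_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values.

    players:  list of hashable player ids (e.g. pixel or region indices)
    value_fn: maps a frozenset of players to a scalar payoff; in a
              Shap-CAM-style setting this would be the class score for
              the image with only those regions revealed (hypothetical)
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(num_samples):
        perm = players[:]
        rng.shuffle(perm)          # one random ordering of the players
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for p in perm:
            coalition.add(p)
            cur = value_fn(frozenset(coalition))
            phi[p] += cur - prev   # marginal contribution of p
            prev = cur
    # average marginal contributions over all sampled orderings
    return {p: v / num_samples for p, v in phi.items()}
```

For an additive game (the payoff is a sum of per-player weights) every ordering yields the same marginal contribution, so the estimate recovers the weights exactly; for a real CNN score the estimate converges as `num_samples` grows, and the efficiency property guarantees the values sum to the difference between the full-image score and the empty-baseline score.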


Related research

- Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks (05/20/2020)
- Score-CAM: Improved Visual Explanations Via Score-Weighted Class Activation Mapping (10/03/2019)
- Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks (07/08/2022)
- SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective (03/01/2023)
- MetaCAM: Ensemble-Based Class Activation Map (07/31/2023)
- TSG: Target-Selective Gradient Backprop for Probing CNN Visual Saliency (10/11/2021)
- Hierarchical Dynamic Masks for Visual Explanation of Neural Networks (01/12/2023)
