
Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks

by Qinglong Zhang, et al., Nanjing University

In this paper, we propose an efficient saliency map generation method, called Group score-weighted Class Activation Mapping (Group-CAM), which adopts the "split-transform-merge" strategy to generate saliency maps. Specifically, for an input image, the class activations are first split into groups. In each group, the sub-activations are summed and de-noised to form an initial mask. The initial masks are then transformed with meaningful perturbations and applied to preserve sub-pixels of the input (i.e., masked inputs), which are fed into the network to calculate confidence scores. Finally, the initial masks are summed, weighted by the confidence scores produced by their masked inputs, to form the final saliency map. Group-CAM is efficient yet effective: it requires only dozens of queries to the network while producing target-related saliency maps. As a result, Group-CAM can serve as an effective data augmentation trick for fine-tuning networks. We comprehensively evaluate the performance of Group-CAM on commonly used benchmarks, including deletion and insertion tests on ImageNet-1k and pointing game tests on COCO2017. Extensive experimental results demonstrate that Group-CAM achieves better visual performance than current state-of-the-art explanation approaches. The code is available at
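The split-transform-merge pipeline described above can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' implementation: the de-noising step is approximated with a percentile threshold, the input is grayscale for brevity, and `score_fn` stands in for a forward pass through the network returning the target-class confidence.

```python
import numpy as np

def group_cam(image, activations, score_fn, num_groups=4):
    """Sketch of Group-CAM's split-transform-merge strategy.

    image:       (H, W) input image (grayscale for simplicity)
    activations: (C, H, W) class activation maps for the target class
    score_fn:    callable mapping a masked input to a target-class
                 confidence score (stands in for a network query)
    """
    groups = np.array_split(activations, num_groups, axis=0)

    masks, scores = [], []
    for g in groups:
        # Split: sum the sub-activations within the group.
        m = g.sum(axis=0)
        # De-noise: a simple percentile threshold stands in for the
        # paper's de-noising function (assumption).
        m = np.where(m >= np.percentile(m, 70), m, 0.0)
        # Normalize to [0, 1] so the mask preserves sub-pixels of the input.
        rng = m.max() - m.min()
        m = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        # Transform + query: score the masked input with the network.
        scores.append(score_fn(image * m))
        masks.append(m)

    # Merge: sum the initial masks, weighted by their confidence scores.
    saliency = np.tensordot(np.asarray(scores), np.stack(masks), axes=1)
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

# Toy usage: random activations and a mean-intensity "confidence".
acts = np.random.rand(8, 16, 16)
img = np.random.rand(16, 16)
sal = group_cam(img, acts, score_fn=lambda x: float(x.mean()))
```

Because each group contributes a single mask and a single network query, the number of forward passes equals `num_groups` (dozens, per the abstract), rather than one per activation channel as in Score-CAM.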

