Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation

04/11/2021
by   Tomasz Szandała, et al.

We propose a technique that enhances Class Activation Mapping methods, such as Grad-CAM or Excitation Backpropagation, which present visual explanations of decisions from CNN-based models. Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap by sharpening its output. Instead of producing a coarse localization map that highlights the important predictive regions in the image, our method outputs the specific shape that contributes most to the model's output, thereby improving the accuracy of saliency maps. This effect is achieved by gradually propagating the crude map obtained at a deep layer through all preceding layers, weighting it by their activations. In validation tests on a selected set of images, the proposed method significantly improved the localization of the networks' attention. Furthermore, it is applicable to any deep neural network model.
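The core idea described above, propagating a coarse deep-layer heatmap back through earlier layers and modulating it by their activations, can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' reference implementation: the function name `gradual_extrapolation`, the channel-averaging of activations, the nearest-neighbor upscaling, and the per-step normalization are all assumptions made for clarity, and the sketch assumes each layer's spatial resolution is an integer multiple of the previous one.

```python
import numpy as np

def gradual_extrapolation(heatmap, activations):
    """Hypothetical sketch of Gradual Extrapolation.

    heatmap: coarse saliency map of shape (H, W), e.g. from Grad-CAM
        at a deep convolutional layer.
    activations: list of feature maps, each of shape (C_i, H_i, W_i),
        taken from the preceding layers, ordered deepest-first, with
        increasing spatial resolution.
    """
    refined = heatmap.astype(np.float64)
    for act in activations:
        # Collapse channels into a single spatial mask (an assumption;
        # other aggregations such as max could be used).
        mask = act.mean(axis=0)
        # Upscale the current map to this layer's resolution
        # (nearest-neighbor, assuming integer scale factors).
        h, w = mask.shape
        refined = np.repeat(refined, h // refined.shape[0], axis=0)
        refined = np.repeat(refined, w // refined.shape[1], axis=1)
        # Modulate by the layer's activations so finer shapes emerge.
        refined = refined * mask
        # Normalize to [0, 1] before the next propagation step.
        rng = refined.max() - refined.min()
        refined = (refined - refined.min()) / (rng + 1e-8)
    return refined
```

Used with, say, a 2x2 Grad-CAM map and activations at 4x4 and 8x8 resolution, the function returns an 8x8 saliency map whose high values concentrate where both the coarse map and the intermediate activations agree.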


