CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency

Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input. However, class-insensitivity of the earlier layers in a network only allows saliency computation with low-resolution activation maps of the deeper layers, resulting in compromised image saliency. Remedying this can lead to sanity failures. We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors and while preserving the map sanity. Our method systematically performs multi-scale accumulation and fusion of the activation maps and backpropagated gradients to compute precise saliency maps. From accurate image saliency to articulation of the relative importance of input features for different models, and precise discrimination between model perception of visually similar objects, our high-resolution mapping offers multiple novel insights into black-box deep visual models, which are presented in the paper. We also demonstrate the utility of our saliency maps in an adversarial setup by drastically reducing the norm of attack signals, focusing them on the precise regions identified by our maps. Our method also inspires new evaluation metrics and a sanity check for this developing research direction. Code is available at https://github.com/VisMIL/CAMERAS
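The multi-scale accumulation and fusion described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation (see the linked repository for that); it only assumes that, at each input scale, a last-layer activation map and a backpropagated gradient map have already been captured, and it shows one plausible way to upsample, accumulate, and fuse them into a high-resolution saliency map. The function names (`upsample_nn`, `cameras_fuse`) and the nearest-neighbor upsampling are illustrative choices, not part of the published method.

```python
import numpy as np

def upsample_nn(x, size):
    """Nearest-neighbor upsample a 2D map to (size, size)."""
    h, w = x.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return x[np.ix_(rows, cols)]

def cameras_fuse(activations, gradients, out_size=224):
    """Accumulate activation and gradient maps over input scales,
    then fuse them into a single saliency map.

    activations, gradients: lists of 2D arrays, one per scale
    (hypothetical stand-ins for the last-conv activations and the
    backpropagated gradients of the class score at each scale)."""
    acc_act = np.zeros((out_size, out_size))
    acc_grad = np.zeros((out_size, out_size))
    for a, g in zip(activations, gradients):
        acc_act += upsample_nn(a, out_size)
        acc_grad += upsample_nn(g, out_size)
    acc_act /= len(activations)
    acc_grad /= len(gradients)
    sal = np.maximum(acc_act * acc_grad, 0)  # ReLU of the fused product
    if sal.max() > 0:
        sal /= sal.max()                     # normalize to [0, 1]
    return sal

# Example: maps captured at three hypothetical scales.
rng = np.random.default_rng(0)
acts = [rng.random((s, s)) for s in (7, 14, 28)]
grads = [rng.random((s, s)) for s in (7, 14, 28)]
saliency = cameras_fuse(acts, grads, out_size=56)
```

In the actual method, each scale corresponds to a resized copy of the input image passed through the network, so coarse and fine spatial evidence both contribute to the accumulated maps before fusion.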


