Explaining decision of model from its prediction

06/15/2021
by Dipesh Tamboli, et al.

This document summarizes different visual explanation methods, grouped into three families: saliency-based methods (CAM, Grad-CAM, and localization using Multiple Instance Learning); adversarial methods (Saliency-driven Class-Impressions and muting pixels in the input image); and feature-based methods (activation visualization and convolution-filter visualization). We also show the results produced by the different methods and compare CAM, Grad-CAM, and Guided Backpropagation.
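To make the Grad-CAM comparison concrete, here is a minimal NumPy sketch of the core Grad-CAM computation: class-specific gradients are global-average-pooled into per-channel weights, the weighted sum of the last convolutional layer's activation maps is taken, and a ReLU keeps only positively contributing regions. The function name and the synthetic tensors are illustrative; in practice the activations and gradients would be captured from a real network (e.g. via framework hooks).

```python
import numpy as np

def grad_cam(activations, gradients):
    """Sketch of Grad-CAM for one image and one target class.

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps.
    Both are assumed captured from a trained network; here they are synthetic.
    """
    # alpha_k: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted combination of activation maps, then ReLU
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # normalize to [0, 1] for visualization (guard against an all-zero map)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random positive tensors standing in for real captures
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))    # 8 feature maps of spatial size 7x7
grads = rng.random((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)            # (7, 7)
```

The resulting low-resolution heatmap is typically upsampled to the input image size and overlaid on it; CAM is the special case where the weights come from a final global-average-pooling classifier rather than from gradients.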


