Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison

01/26/2021
by Lukas Brunke, et al.

Input perturbation methods occlude parts of an input to a function and measure the change in the function's output. Recently, input perturbation methods have been applied to generate and evaluate saliency maps for convolutional neural networks. In practice, the occlusion uses a neutral baseline image, chosen so that the baseline itself has minimal impact on the classification probability. However, in this paper we show that even such supposedly neutral baseline images still influence the generated saliency maps and their evaluation with input perturbations. We also demonstrate that many hyperparameter choices cause the saliency maps generated by input perturbations to diverge. Our experiments reveal inconsistencies among a selection of input perturbation methods, and we find that they lack robustness both for generating saliency maps and for evaluating saliency maps when used as saliency metrics.
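To make the mechanism concrete, below is a minimal sketch of occlusion-based input perturbation with a baseline image, assuming a PyTorch image classifier. The function name occlusion_saliency, the default patch size and stride, and the mid-gray baseline are illustrative assumptions, not the paper's exact setup; these are exactly the kinds of hyperparameter and baseline choices whose influence the paper examines.

import torch

def occlusion_saliency(model, image, baseline, target_class, patch=16, stride=8):
    """Slide a baseline patch over the image and record the drop in the target
    class probability; larger drops mark regions the classifier relies on."""
    model.eval()
    _, height, width = image.shape
    with torch.no_grad():
        p_orig = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    saliency = torch.zeros(height, width)
    counts = torch.zeros(height, width)
    for top in range(0, height - patch + 1, stride):
        for left in range(0, width - patch + 1, stride):
            occluded = image.clone()
            # Replace the patch with the corresponding region of the baseline image.
            occluded[:, top:top + patch, left:left + patch] = \
                baseline[:, top:top + patch, left:left + patch]
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            saliency[top:top + patch, left:left + patch] += p_orig - p
            counts[top:top + patch, left:left + patch] += 1
    # Average overlapping contributions so the map does not favor the stride pattern.
    return saliency / counts.clamp(min=1)

# Example usage with a mid-gray baseline, often assumed to be "neutral":
# baseline = torch.full_like(image, 0.5)
# saliency_map = occlusion_saliency(model, image, baseline, target_class=label)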

Related research

07/13/2022 · Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations
It is known that deep neural networks (DNNs) classify an input image by ...

06/08/2021 · Investigating sanity checks for saliency maps with image and text classification
Saliency maps have shown to be both useful and misleading for explaining...

09/29/2020 · Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
Convolutional neural networks (CNNs) are commonly used for image classif...

07/28/2021 · Evaluating the Use of Reconstruction Error for Novelty Localization
The pixelwise reconstruction error of deep autoencoders is often utilize...

09/04/2023 · Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images
As AI models are increasingly deployed in critical applications, ensurin...

12/31/2020 · iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
The black-box nature of the deep networks makes the explanation for "why...

12/22/2021 · Comparing radiologists' gaze and saliency maps generated by interpretability methods for chest x-rays
The interpretability of medical image analysis models is considered a ke...
