Revisiting Sanity Checks for Saliency Maps

10/27/2021
by Gal Yona, et al.

Saliency methods are a popular approach to model debugging and explainability. However, in the absence of ground truth for what the correct maps should be, evaluating and comparing different approaches remains a long-standing challenge. The sanity checks methodology of Adebayo et al. [NeurIPS 2018] sought to address this challenge: they argue that certain popular saliency methods should not be used for explainability purposes, since the maps they produce are not sensitive to the underlying model being explained. Through a causal re-framing of their objective, we argue that their empirical evaluation does not fully establish these conclusions, owing to a form of confounding introduced by the tasks they evaluate on. Through experiments on simple custom tasks, we demonstrate that some of their conclusions may be artifacts of those tasks rather than a genuine criticism of the saliency methods themselves. More broadly, our work challenges the utility of the sanity check methodology and highlights that evaluating saliency maps beyond ad hoc visual examination remains a fundamental challenge.
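To make the sanity-checks setting concrete, here is a minimal sketch of the model-parameter randomization test from Adebayo et al., using plain "vanilla gradient" saliency in PyTorch. The toy CNN, stand-in inputs, and cosine-similarity comparison are illustrative assumptions rather than the paper's exact setup; the paper compares maps using rank correlation and structural similarity across progressively randomized layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_saliency(model, x):
    # |d(max logit)/d(input)|: the simplest "vanilla gradient" saliency map.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs()

# Toy CNN standing in for the model under test (an assumption, not the
# paper's architecture).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
x = torch.randn(4, 1, 28, 28)  # stand-in inputs

maps_trained = gradient_saliency(model, x)

# The sanity check: randomize the final layer's parameters. A saliency
# method that is sensitive to the model should now produce different maps.
with torch.no_grad():
    for p in model[-1].parameters():
        p.normal_()
maps_randomized = gradient_saliency(model, x)

# Cosine similarity is a simple stand-in comparison; similarity near 1
# after randomization would be evidence the method fails the check.
sim = F.cosine_similarity(
    maps_trained.flatten(1), maps_randomized.flatten(1)
).mean()
print(f"mean similarity, trained vs. randomized: {sim:.3f}")
```

The paper's critique, as summarized in the abstract, is that low sensitivity in such a test may reflect properties of the task as much as of the saliency method itself.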


Related research

07/20/2021
Saliency for free: Saliency prediction as a side-effect of object recognition
Saliency is the perceptual capacity of our visual system to focus our at...

05/04/2023
Evaluating Post-hoc Interpretability with Intrinsic Interpretability
Despite Convolutional Neural Networks having reached human-level perform...

08/04/2016
Saliency Integration: An Arbitrator Model
Saliency integration approaches have aroused general concern on unifying...

07/20/2021
Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment
Saliency methods – techniques to identify the importance of input featur...

12/09/2021
Evaluating saliency methods on artificial data with different background types
Over the last years, many 'explainable artificial intelligence' (xAI) ap...

06/27/2023
xAI-CycleGAN, a Cycle-Consistent Generative Assistive Network
In the domain of unsupervised image-to-image transformation using genera...

11/05/2022
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Saliency methods compute heat maps that highlight portions of an input t...
