
Revisiting Sanity Checks for Saliency Maps

10/27/2021
by Gal Yona, et al.

Saliency methods are a popular approach for model debugging and explainability. However, in the absence of ground-truth data for what the correct maps should be, evaluating and comparing different approaches remains a long-standing challenge. The sanity checks methodology of Adebayo et al. [NeurIPS 2018] has sought to address this challenge. They argue that some popular saliency methods should not be used for explainability purposes, since the maps they produce are not sensitive to the underlying model that is to be explained. Through a causal re-framing of their objective, we argue that their empirical evaluation does not fully establish these conclusions, due to a form of confounding introduced by the tasks they evaluate on. Through various experiments on simple custom tasks, we demonstrate that some of their conclusions may indeed be artifacts of the tasks rather than genuine shortcomings of the saliency methods themselves. More broadly, our work challenges the utility of the sanity check methodology and further highlights that saliency map evaluation beyond ad hoc visual examination remains a fundamental challenge.
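For readers unfamiliar with the methodology the abstract refers to, the sketch below illustrates the spirit of the model-parameter randomization sanity check: compute a saliency map from a trained model and from a copy whose weights have been re-randomized, then compare the two maps. This is a minimal illustration, not the authors' setup; the toy model, the plain-gradient saliency method, the normal re-initialization, and the Spearman rank-correlation comparison are all assumptions made for the example.

# A minimal sketch (not the authors' code) of the model-parameter randomization
# sanity check: a saliency method is considered model-sensitive only if its maps
# change substantially when the trained weights are re-randomized.
import copy
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def gradient_saliency(model, x):
    """Absolute input gradient of the top logit (one simple saliency method)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits[0].argmax()].backward()
    return x.grad.abs().squeeze(0)

def randomize_weights(model):
    """Copy the model and re-initialize every parameter at random."""
    randomized = copy.deepcopy(model)
    for p in randomized.parameters():
        nn.init.normal_(p, std=0.05)  # arbitrary re-initialization scale
    return randomized

# Toy stand-ins for a trained classifier and an input image (assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 1, 28, 28)

map_trained = gradient_saliency(model, x)
map_random = gradient_saliency(randomize_weights(model), x)

# A high rank correlation between the two maps indicates the saliency method is
# largely insensitive to the model, i.e. it fails this sanity check.
rho, _ = spearmanr(map_trained.flatten().numpy(), map_random.flatten().numpy())
print(f"Spearman rank correlation (trained vs. randomized): {rho:.3f}")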


Related Research

07/20/2021
Saliency for free: Saliency prediction as a side-effect of object recognition
Saliency is the perceptual capacity of our visual system to focus our at...

05/04/2023
Evaluating Post-hoc Interpretability with Intrinsic Interpretability
Despite Convolutional Neural Networks having reached human-level perform...

08/04/2016
Saliency Integration: An Arbitrator Model
Saliency integration approaches have aroused general concern on unifying...

07/20/2021
Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment
Saliency methods – techniques to identify the importance of input featur...

12/09/2021
Evaluating saliency methods on artificial data with different background types
Over the last years, many 'explainable artificial intelligence' (xAI) ap...

02/23/2023
The Generalizability of Explanations
Due to the absence of ground truth, objective evaluation of explainabili...

06/27/2021
Crowdsourcing Evaluation of Saliency-based XAI Methods
Understanding the reasons behind the predictions made by deep neural net...