Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods

06/07/2022
by Angie Boggust, et al.

Saliency methods calculate how important each input feature is to a machine learning model's prediction, and are commonly used to understand model reasoning. "Faithfulness", or how fully and accurately the saliency output reflects the underlying model, is an oft-cited desideratum for these methods. However, explanation methods must necessarily sacrifice certain information in service of user-oriented goals such as simplicity. To that end, and akin to performance metrics, we frame saliency methods as abstractions: individual tools that provide insight into specific aspects of model behavior and entail tradeoffs. Using this framing, we describe a framework of nine dimensions to characterize and compare the properties of saliency methods. We group these dimensions into three categories that map to different phases of the interpretation process: methodology, or how the saliency is calculated; sensitivity, or relationships between the saliency result and the underlying model or input; and perceptibility, or how a user interprets the result. As we show, these dimensions give us a granular vocabulary for describing and comparing saliency methods: for instance, they allow us to develop "saliency cards" as a form of documentation, and help downstream users understand tradeoffs and choose a method for a particular use case. Moreover, by situating existing saliency methods within this framework, we identify opportunities for future work, including filling gaps in the landscape and developing new evaluation metrics.
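To make the framework's structure concrete, the three categories above could be modeled as fields of a "saliency card" record. The following is a minimal sketch, not the paper's implementation: the category names (methodology, sensitivity, perceptibility) come from the abstract, while the `SaliencyCard` class, the example method name, and the individual dimension entries are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SaliencyCard:
    """Hypothetical documentation record for one saliency method,
    grouping its properties into the paper's three categories."""
    method_name: str
    # How the saliency is calculated (category from the abstract;
    # the dimension keys used below are illustrative, not the paper's).
    methodology: dict = field(default_factory=dict)
    # Relationships between the saliency result and the model or input.
    sensitivity: dict = field(default_factory=dict)
    # How a user interprets the result.
    perceptibility: dict = field(default_factory=dict)

# Example card for a made-up gradient-based method.
card = SaliencyCard(
    method_name="ExampleGradientMethod",            # hypothetical method
    methodology={"determinism": "deterministic"},   # placeholder dimension
    sensitivity={"model_dependence": "high"},       # placeholder dimension
    perceptibility={"granularity": "per-pixel"},    # placeholder dimension
)
print(card.method_name)
```

A comparison between two methods then reduces to comparing their cards category by category, which is one way a downstream user might weigh tradeoffs for a particular use case.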
