The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

09/23/2020
by Oana-Maria Camburu, et al.

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions. Recently, an increasing number of works have focused on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the model's decision-making process. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subset explainers, target fundamentally different types of ground-truth explanations, despite what appears to be an implicit assumption that explainers should look for one specific feature-based explanation. These findings bring an additional dimension to consider in both developing and choosing explainers.
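To make the contrast concrete, below is a minimal sketch on a toy two-feature OR model, evaluated on the input (1, 1) with a zero baseline for "absent" features. This toy model is an assumption chosen for illustration and is not necessarily the paper's own example: Shapley values split the credit evenly between the two features, while each minimal sufficient subset singles out just one of them, so the two explainer classes report different ground-truth explanations for the same trivial model.

```python
from itertools import chain, combinations

# Toy model: a two-feature OR over binary inputs (illustrative assumption).
def model(x1, x2):
    return int(x1 or x2)

BASELINE = (0, 0)   # values used for "absent" features
INSTANCE = (1, 1)   # instance being explained; model(1, 1) == 1

def value(subset):
    """Model output when only the features in `subset` keep their
    instance values and the remaining features are set to the baseline."""
    x = [INSTANCE[i] if i in subset else BASELINE[i] for i in range(2)]
    return model(*x)

def shapley():
    """Exact Shapley values: average marginal contribution of each
    feature over all orderings (only two orderings for two features)."""
    phi = [0.0, 0.0]
    orderings = [(0, 1), (1, 0)]
    for order in orderings:
        present = set()
        for i in order:
            before = value(present)
            present.add(i)
            phi[i] += (value(present) - before) / len(orderings)
    return phi

def minimal_sufficient_subsets():
    """Smallest feature sets that, with all other features at the
    baseline, preserve the model's prediction on the instance."""
    target = value({0, 1})
    sufficient = [set(s) for s in chain.from_iterable(
                      combinations(range(2), r) for r in range(3))
                  if value(set(s)) == target]
    return [s for s in sufficient
            if not any(t < s for t in sufficient)]

print("Shapley values:            ", shapley())                     # [0.5, 0.5]
print("Minimal sufficient subsets:", minimal_sufficient_subsets())  # [{0}, {1}]
```

In this sketch the Shapley explanation attributes 0.5 to each feature, whereas there are two minimal sufficient subsets, {0} and {1}, each of which alone preserves the prediction; neither kind of output on its own conveys the full "either feature suffices" behavior of the model.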


