The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

09/23/2020
by Oana-Maria Camburu, et al.

For neural models to garner widespread public trust and ensure fairness, their predictions must come with human-intelligible explanations. Recently, a growing number of works have focused on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the model's decision-making process. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations, despite the implicit assumption that explainers should look for one specific feature-based explanation. These findings bring an additional dimension to consider in both developing and choosing explainers.
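To make the contrast concrete, the following is a minimal sketch (illustrative only, not code from the paper; the OR model, the zero baseline used for masking, and all variable names are our own assumptions). On the trivial model f(x1, x2) = x1 OR x2 evaluated at (1, 1), exact Shapley values split the credit equally between the two features, while subset enumeration finds two distinct minimal sufficient subsets, {x1} and {x2}, matching the point above that more than one ground-truth explanation can exist.

```python
from itertools import chain, combinations
from math import factorial

def f(x):
    # Trivial model: logical OR of two binary features.
    return x[0] or x[1]

x = (1, 1)          # instance to explain: f(x) = 1
baseline = (0, 0)   # features outside a subset are reset to this baseline

def masked(subset):
    """Evaluate f keeping only the features in `subset` from x."""
    return f(tuple(x[i] if i in subset else baseline[i] for i in range(len(x))))

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

n = len(x)

# Exact Shapley values: weighted average marginal contribution of each feature.
shapley = []
for i in range(n):
    others = [j for j in range(n) if j != i]
    value = 0.0
    for S in powerset(others):
        S = set(S)
        weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
        value += weight * (masked(S | {i}) - masked(S))
    shapley.append(value)
print("Shapley values:", shapley)  # [0.5, 0.5]: credit is split equally

# Minimal sufficient subsets: smallest feature sets that alone preserve f(x).
target = f(x)
sufficient = [set(S) for S in powerset(range(n)) if masked(set(S)) == target]
minimal = [S for S in sufficient if not any(T < S for T in sufficient)]
print("Minimal sufficient subsets:", minimal)  # [{0}, {1}]: two ground truths
```

Note that masking against a fixed baseline is itself a modeling choice; the two explainer families can disagree under any such choice, which is the tension the abstract describes.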



Related Research

10/04/2020 · Explaining Deep Neural Networks
Deep neural networks are becoming more and more popular due to their rev...

03/04/2022 · Evaluating Local Model-Agnostic Explanations of Learning to Rank Models with Decision Paths
Local explanations of learning-to-rank (LTR) models are thought to extra...

12/21/2020 · On Relating 'Why?' and 'Why Not?' Explanations
Explanations of Machine Learning (ML) models often address a 'Why?' ques...

06/16/2021 · Best of both worlds: local and global explanations with human-understandable concepts
Interpretability techniques aim to provide the rationale behind a model'...

05/21/2021 · Probabilistic Sufficient Explanations
Understanding the behavior of learned classifiers is an important task, ...

03/10/2017 · Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
Neural networks are among the most accurate supervised learning methods ...

06/15/2020 · Explaining reputation assessments
Reputation is crucial to enabling human or software agents to select amo...