The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

by Oana-Maria Camburu, et al.

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions. Recently, an increasing number of works have focused on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the decision-making process of the model. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations, despite the implicit assumption that explainers should look for one specific feature-based explanation. These findings bring an additional dimension to consider in both developing and choosing explainers.
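The contrast between the two explainer classes can be illustrated on a trivial model: logical OR over two binary features, with both features set to 1 and a zero baseline. The sketch below (hedged: the masking-by-baseline scheme and the brute-force search are illustrative choices, not the paper's exact setup) computes exact Shapley values and enumerates minimal sufficient subsets. Shapley splits the credit equally between the two features, while the minimal-sufficient-subsets view reports that either feature alone suffices for the prediction.

```python
from itertools import combinations
from math import factorial

def f(x):
    """Trivial model: logical OR of two binary features."""
    return int(x[0] == 1 or x[1] == 1)

def value(subset, x, baseline=(0, 0)):
    """Model output when only the features in `subset` keep their input
    value; the remaining features are replaced by the baseline."""
    masked = tuple(x[i] if i in subset else baseline[i] for i in range(len(x)))
    return f(masked)

def shapley_values(x):
    """Exact Shapley values, summing weighted marginal contributions
    over all coalitions (feasible here since the model is tiny)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}, x) - value(set(S), x))
    return phi

def minimal_sufficient_subsets(x):
    """All subsets S whose features alone yield the full prediction,
    such that no proper subset of S does."""
    n = len(x)
    full = f(x)
    sufficient = [set(S) for r in range(n + 1)
                  for S in combinations(range(n), r)
                  if value(set(S), x) == full]
    return [S for S in sufficient if not any(T < S for T in sufficient)]

x = (1, 1)
print(shapley_values(x))              # [0.5, 0.5]: credit split equally
print(minimal_sufficient_subsets(x))  # [{0}, {1}]: either feature suffices
```

The two outputs are genuinely different "ground truths": the Shapley view says both features matter equally, while the sufficiency view says each feature is individually enough, and neither summary alone captures the model's full decision process.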


