Problems with Shapley-value-based explanations as feature importance measures

02/25/2020
by I. Elizabeth Kumar, et al.

Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these inputs using some form of the game's unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance, and that the solutions proposed to mitigate them necessarily introduce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values do not provide explanations that suit human-centric goals of explainability.
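To make the game-theoretic setup concrete, here is a minimal sketch of exact Shapley value computation for a toy model. It assumes one common (and, as the paper argues, consequential) modeling choice: a coalition's value is the model evaluated with absent features replaced by baseline values. The function names and the toy linear model are illustrative, not from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.

    Coalition value convention (an assumption, one of several in use):
    features in the coalition take their values from x, absent features
    are replaced by the baseline. Other conventions, e.g. conditional
    expectations, generally yield different attributions.
    """
    n = len(x)
    players = list(range(n))

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in players]
        return f(z)

    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            # Shapley weight for coalitions of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for s in combinations(others, size):
                # Marginal contribution of feature i to coalition s
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy linear model: with a zero baseline, Shapley values recover each
# feature's additive contribution w_i * x_i.
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

The enumeration over all 2^n coalitions makes this exponential in the number of features, which is why practical methods rely on sampling or model-specific approximations; the attributions also satisfy the efficiency property, summing to f(x) minus f(baseline).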


Related research

- Attention Flows are Shapley Value Explanations (05/31/2021)
- Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models (11/03/2020)
- The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory (09/17/2019)
- Epistemic values in feature importance methods: Lessons from feminist epistemology (01/29/2021)
- A Multilinear Sampling Algorithm to Estimate Shapley Values (10/22/2020)
- Feature Importance: A Closer Look at Shapley Values and LOCO (03/10/2023)
- From Shapley back to Pearson: Hypothesis Testing via the Shapley Value (07/14/2022)
