The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory

09/17/2019
by Luke Merrick, et al.

Recently, a number of techniques have been proposed to explain a machine learning (ML) model's prediction by attributing it to the corresponding input features. Popular among these are techniques that apply the Shapley value method from cooperative game theory. While existing papers focus on the axiomatic motivation of Shapley values, and efficient techniques for computing them, they do not justify the game formulations used. For instance, we find that the SHAP algorithm's formulation (Lundberg and Lee 2017) may give substantial attributions to features that play no role in a model. In this work, we study the game formulations underpinning several existing methods. Using a series of simple models, we illustrate how their subtle differences can yield large differences in attribution for the same prediction. We then present a general game formulation that unifies existing methods. After discussing the primitive of single-reference games, we decompose the Shapley values of the general game formulation into Shapley values of single-reference games. This is instructive in several ways. First, it enables confidence intervals on estimated attributions, which are not offered by previous works. Second, it enables different contrastive explanations of a prediction through comparison with different groups of reference inputs. We tie this idea to classic work on Norm Theory (Kahneman and Miller 1986) in cognitive psychology, and propose a general framework for generating explanations for ML models, called formulate, approximate, and explain (FAE). We apply this framework to explaining black-box models trained on two UCI datasets and a Lending Club dataset.
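
To make the single-reference idea concrete, here is a minimal illustrative sketch (not the authors' code and not the SHAP library): it estimates Shapley-value attributions for a single-reference game by Monte Carlo sampling of feature orderings, contrasting an input x with one reference input r, and it keeps the per-permutation marginal contributions so that a confidence interval can be reported alongside each attribution. The names estimate_shapley, model, x, and reference are hypothetical.

    # Illustrative sketch, assuming a single-reference game: feature i's
    # contribution is measured by switching x's features on top of the
    # reference input r, one coalition at a time.
    import numpy as np

    def estimate_shapley(model, x, reference, n_permutations=200, seed=0):
        """Return per-permutation marginal contributions (n_permutations x d).
        Column means give the attributions; the spread gives a confidence
        interval on each estimate."""
        rng = np.random.default_rng(seed)
        d = len(x)
        samples = np.zeros((n_permutations, d))
        for t in range(n_permutations):
            perm = rng.permutation(d)
            z = reference.copy()          # start from the reference input
            prev = model(z)
            for i in perm:
                z[i] = x[i]               # add feature i to the coalition
                curr = model(z)
                samples[t, i] = curr - prev
                prev = curr
        return samples

    # Toy usage with a linear model, where the exact attribution for
    # feature i is w[i] * (x[i] - r[i]).
    w = np.array([1.0, -2.0, 0.5])
    model = lambda z: float(w @ z)
    x = np.array([1.0, 1.0, 1.0])
    r = np.array([0.0, 0.0, 0.0])

    samples = estimate_shapley(model, x, r)
    attr = samples.mean(axis=0)
    ci = 1.96 * samples.std(axis=0, ddof=1) / np.sqrt(len(samples))
    print("attributions:", attr)          # approx [1.0, -2.0, 0.5]
    print("95% CI half-widths:", ci)

Because the toy model is linear, every permutation yields the same marginal contributions and the interval collapses to zero; for nonlinear models the per-permutation samples vary, and the same bookkeeping yields the kind of confidence intervals on attributions discussed above.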



Related research:

02/25/2020 - Problems with Shapley-value-based explanations as feature importance measures
  Game-theoretic formulations of feature importance have become popular as...

08/22/2019 - The many Shapley values for model explanation
  The Shapley value has become a popular method to attribute the predictio...

02/22/2021 - Mutual information-based group explainers with coalition structure for machine learning model explanations
  In this article, we propose and investigate ML group explainers in a gen...

07/12/2020 - Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies
  Shapley values have become increasingly popular in the machine learning ...

11/03/2022 - A k-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
  Besides accuracy, recent studies on machine learning models have been ad...

06/26/2022 - Explaining the root causes of unit-level changes
  Existing methods of explainable AI and interpretable ML cannot explain c...

05/31/2021 - Attention Flows are Shapley Value Explanations
  Shapley Values, a solution to the credit assignment problem in cooperati...
