True to the Model or True to the Data?

by Hugh Chen, et al.

A variety of recent papers discuss the application of Shapley values, a concept from coalitional game theory, to feature attribution in machine learning. However, the correct way to connect a machine learning model to a coalitional game has been a source of controversy. The two main approaches that have been proposed differ in the way that they condition on known features, using either (1) an interventional or (2) an observational conditional expectation. While previous work has argued that one of the two approaches is preferable in general, we argue that the choice is application dependent. Furthermore, we argue that the choice comes down to whether it is desirable to be true to the model or true to the data. We use linear models to investigate this choice. After deriving an efficient method for calculating observational conditional expectation Shapley values for linear models, we investigate how correlation in simulated data impacts the convergence of observational conditional expectation Shapley values. Finally, we present two real data examples that we consider representative of possible use cases for feature attribution: (1) credit risk modeling and (2) biological discovery. We show how a different choice of value function performs better in each scenario, and how possible attributions are impacted by modeling choices.
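For intuition about the interventional value function the abstract describes, the sketch below uses the well-known closed form for a linear model: under interventional conditioning, feature i's Shapley value is beta_i * (x_i - E[x_i]). The model coefficients and data here are illustrative, not from the paper; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical linear model f(x) = beta @ x + b (illustrative coefficients).
beta = np.array([2.0, -1.0, 0.5])
b = 3.0

def f(X):
    return X @ beta + b

# Simulated background data standing in for the training distribution.
rng = np.random.default_rng(0)
X_bg = rng.normal(size=(1000, 3))
mu = X_bg.mean(axis=0)

def interventional_shap(x):
    # Closed form for linear models under interventional conditioning:
    # phi_i = beta_i * (x_i - E[x_i]). Feature correlations in the
    # background data do not enter, which is exactly the "true to the
    # model" behavior discussed in the abstract.
    return beta * (x - mu)

x = np.array([1.0, 2.0, -0.5])
phi = interventional_shap(x)

# Efficiency axiom: attributions sum to f(x) minus the mean prediction
# (exact here because the model is linear).
assert np.isclose(phi.sum(), f(x) - f(X_bg).mean())
```

Observational conditional expectation Shapley values, by contrast, require the conditional expectations E[x_S | x_T] and therefore depend on the correlation structure of the data; the paper derives an efficient method for that case.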

