An exploration of the influence of path choice in game-theoretic attribution algorithms

07/08/2020
by Geoff Ward et al.

We compare machine learning explainability methods based on the theory of atomic (Shapley, 1953) and infinitesimal (Aumann and Shapley, 1974) games, in a theoretical and experimental investigation into how the model and the choice of integration path can influence the resulting feature attributions. To gain insight into differences in attributions resulting from interventional Shapley values (Sundararajan and Najmi, 2019; Janzing et al., 2019; Chen et al., 2019) and Generalized Integrated Gradients (GIG) (Merrill et al., 2019), we note that interventional Shapley is equivalent to a multi-path integration along n! paths, where n is the number of model input features. Applying Stokes' theorem, we show that the path symmetry of these two methods results in the same attributions when the model is composed of a sum of separable functions of individual features and a sum of two-feature products. We then perform a series of experiments with varying degrees of data missingness to demonstrate how interventional Shapley's multi-path approach can yield less consistent attributions than the single straight-line path of Aumann-Shapley. We argue this is because the multiple paths employed by interventional Shapley extend away from the training data manifold and are therefore more likely to pass through regions where the model has little support. In the absence of a more meaningful path choice, we therefore advocate the straight-line path, since it will almost always pass closer to the data manifold. Among straight-line path attribution algorithms, GIG is uniquely robust since it will still yield Shapley values for atomic games modeled by decision trees.
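The equivalence claimed above can be checked numerically. The following sketch is not taken from the paper; the model, baseline, and input are made up for illustration. It computes interventional Shapley values by averaging marginal contributions over all n! feature orderings (one path per ordering) and compares them with straight-line-path Aumann-Shapley attributions obtained by numerical integration of the gradient. Because the toy model is a sum of separable terms plus a single two-feature product, the two sets of attributions coincide up to quadrature error.

import itertools
import math

import numpy as np


def model(x):
    # Toy model in the class discussed above: a sum of separable functions
    # of individual features plus one two-feature product term.
    return np.sin(x[0]) + x[1] ** 2 + 3.0 * x[2] + x[0] * x[1]


def interventional_shapley(f, x, baseline):
    # Exact Shapley values: average each feature's marginal contribution
    # over all n! orderings, switching features from baseline to input values.
    n = len(x)
    phi = np.zeros(n)
    for order in itertools.permutations(range(n)):
        z = baseline.astype(float).copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]
            curr = f(z)
            phi[i] += curr - prev
            prev = curr
    return phi / math.factorial(n)


def straight_line_attributions(f, x, baseline, steps=2000, eps=1e-5):
    # Aumann-Shapley / Integrated Gradients along the straight line from
    # baseline to x: midpoint-rule quadrature of finite-difference gradients.
    n = len(x)
    grad_sum = np.zeros(n)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        z = baseline + alpha * (x - baseline)
        for i in range(n):
            bump = np.zeros(n)
            bump[i] = eps
            grad_sum[i] += (f(z + bump) - f(z - bump)) / (2.0 * eps)
    return (x - baseline) * grad_sum / steps


x = np.array([0.7, -1.2, 0.4])
baseline = np.zeros(3)
print("interventional Shapley:", interventional_shapley(model, x, baseline))
print("straight-line path:    ", straight_line_attributions(model, x, baseline))
# Both print approximately [0.224, 1.02, 1.20]: each separable term attributes
# g_i(x_i) - g_i(0) to feature i, and the x1*x2 product is split evenly.

For models with interactions among three or more features the two approaches can diverge, which is the regime where path choice matters and where the missingness experiments described above come into play.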


