
Unifying local and global model explanations by functional decomposition of low dimensional structures

08/12/2022
by Munir Hiabu, et al.
Københavns Uni

We consider a global explanation of a regression or classification function by decomposing it into the sum of main components and interaction components of arbitrary order. When adding an identification constraint that is motivated by a causal interpretation, we find q-interaction SHAP to be the unique solution to that constraint. Here, q denotes the highest order of interaction present in the decomposition. Our result provides a new perspective on SHAP values with various practical and theoretical implications: if SHAP values are decomposed into main and all interaction effects, they provide a global explanation with causal interpretation. In principle, the decomposition can be applied to any machine learning model. However, since the number of possible interactions grows exponentially with the number of features, exact calculation is only feasible for methods that fit low dimensional structures or ensembles of those. We provide an algorithm and an efficient implementation that calculate this decomposition for gradient boosted trees (xgboost) and random planted forests. Our experiments suggest that the method provides meaningful explanations and reveals interactions of higher orders. We also explore further potential of these insights by using the global explanation to motivate a new measure of feature importance, and to reduce direct and indirect bias by post-hoc component removal.
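
The decomposition referred to above has the familiar functional form f(x) = f_0 + Σ_k f_k(x_k) + Σ_{k<l} f_{k,l}(x_k, x_l) + …, truncated at interaction order q. The sketch below is not the authors' implementation; it only illustrates, assuming the standard xgboost Python API and purely illustrative toy data and hyperparameters, how the main-effect and pairwise pieces of a SHAP decomposition can already be extracted via the pred_contribs / pred_interactions options, which the paper's q-interaction SHAP generalizes to arbitrary order.

# Minimal sketch (assumptions: xgboost Python API, toy data); not the paper's algorithm.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((200, 4))                                   # 4 illustrative features
y = X[:, 0] + X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(200)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain, num_boost_round=50)

# Main-effect-style attributions: one SHAP contribution per feature plus a bias column.
contribs = booster.predict(dtrain, pred_contribs=True)            # shape (n, 5)

# Pairwise decomposition: a (features+1) x (features+1) matrix per observation;
# diagonal entries are main effects, off-diagonal entries pairwise interactions.
interactions = booster.predict(dtrain, pred_interactions=True)    # shape (n, 5, 5)

# Both decompositions sum back to the model prediction for each observation.
pred = booster.predict(dtrain)
assert np.allclose(contribs.sum(axis=1), pred, atol=1e-3)
assert np.allclose(interactions.sum(axis=(1, 2)), pred, atol=1e-3)

The pairwise matrix above corresponds to the q = 2 case; the paper's contribution is to extend such a decomposition to all interaction orders present in the fitted model, with an identification constraint that gives the components a causal interpretation.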

Related research:

09/08/2022

From Shapley Values to Generalized Additive Models and back

In explainable machine learning, local post-hoc explanation algorithms a...
07/26/2021

Feature Synergy, Redundancy, and Independence in Global Model Explanations using SHAP Vector Decomposition

We offer a new formalism for global explanations of pairwise feature dep...
06/23/2022

Explanatory causal effects for model agnostic explanations

This paper studies the problem of estimating the contributions of featur...
08/13/2021

Data-driven advice for interpreting local and global model predictions in bioinformatics problems

Tree-based algorithms such as random forests and gradient boosted trees ...
06/15/2021

Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT)

Global model-agnostic feature importance measures either quantify whethe...
06/23/2021

groupShapley: Efficient prediction explanation with Shapley values for feature groups

Shapley values have established themselves as one of the most appropriate and...