SHAP for additively modeled features in a boosted trees model

07/29/2022
by Michael Mayer, et al.

SHAP (SHapley Additive exPlanations) is an important technique for exploring black-box machine learning (ML) models: SHAP values decompose a prediction into fair contributions of the individual features. We show that for a boosted trees model in which some or all features are modeled additively, the SHAP dependence plot of such a feature coincides with its partial dependence plot up to a vertical shift. We illustrate the result with XGBoost.
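The claimed relationship can be sketched without a full boosting pipeline. The pure-NumPy example below (a sketch, not the paper's code) builds a hypothetical model `predict` in which feature 0 enters purely additively via `g`, while features 1 and 2 interact via `h`. For such an additively modeled feature, its exact SHAP value is `g(x0) - E[g(x0)]`, and its partial dependence differs from that only by a constant, so the two curves match up to a vertical shift:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))

# Hypothetical model components (assumptions for illustration):
def g(x0):
    # additive component for feature 0, shaped like a tree stump
    return np.where(x0 > 0, 2.0, -1.0)

def h(x1, x2):
    # joint (interacting) component for features 1 and 2
    return x1 * x2

def predict(X):
    return g(X[:, 0]) + h(X[:, 1], X[:, 2])

# Exact SHAP value of an additively modeled feature:
# its own effect, centered at the average effect.
shap_0 = g(X[:, 0]) - g(X[:, 0]).mean()

# Partial dependence of feature 0: average prediction with
# feature 0 forced to each observed value.
pd_0 = np.array([
    predict(np.column_stack([np.full(n, v), X[:, 1], X[:, 2]])).mean()
    for v in X[:, 0]
])

# The two curves differ only by a constant vertical shift.
shift = pd_0 - shap_0
print(np.allclose(shift, shift[0]))  # True
```

In an actual XGBoost fit, the additive structure of a feature can be enforced with interaction constraints that keep it in its own group, after which its TreeSHAP dependence plot shows the same shape as its partial dependence plot.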

Related research

07/02/2020: Explaining predictive models with mixed features using Shapley values and conditional inference trees
It is becoming increasingly important to explain complex, black-box mach...

01/28/2019: Fairwashing: the risk of rationalization
Black-box explanation is the problem of explaining how a machine learnin...

04/18/2021: SurvNAM: The machine learning survival model explanation
A new modification of the Neural Additive Model (NAM) called SurvNAM and...

08/10/2021: Attention-like feature explanation for tabular data
A new method for local and global explanation of the machine learning bl...

05/01/2019: Please Stop Permuting Features: An Explanation and Alternatives
This paper advocates against permute-and-predict (PaP) methods for inter...

01/31/2022: Fair Wrapping for Black-box Predictions
We introduce a new family of techniques to post-process ("wrap") a black...

06/13/2023: iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
Post-hoc explanation techniques such as the well-established partial dep...
