Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

03/25/2019
by Kjersti Aas, et al.

Explaining complex, or even seemingly simple, machine learning models is a practical and ethical question, as well as a legal issue. Can I trust the model? Is it biased? Can I explain it to others? We want to explain individual predictions from a complex machine learning model by learning simple, interpretable explanations. Among existing approaches to interpreting complex models, Shapley values are the only method with a solid theoretical foundation. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like most other existing methods, this approach assumes independent features, which can produce highly misleading explanations. This holds even when a simple linear model is used for prediction. We extend the Kernel SHAP method to handle dependent features. We provide several examples of linear and non-linear models with linear and non-linear feature dependence, where our method gives more accurate approximations to the true Shapley values. We also propose a method for aggregating individual Shapley values, so that a prediction can be explained by groups of dependent variables.

Related research

Explaining predictive models with mixed features using Shapley values and conditional inference trees (07/02/2020)
It is becoming increasingly important to explain complex, black-box mach...

groupShapley: Efficient prediction explanation with Shapley values for feature groups (06/23/2021)
Shapley values has established itself as one of the most appropriate and...

Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies (07/12/2020)
Shapley values have become increasingly popular in the machine learning ...

Accurate and Intuitive Contextual Explanations using Linear Model Trees (09/11/2020)
With the ever-increasing use of complex machine learning models in criti...

Explaining predictive models using Shapley values and non-parametric vine copulas (02/12/2021)
The original development of Shapley values for prediction explanation re...

Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features (11/26/2021)
Shapley values are today extensively used as a model-agnostic explanatio...

Accurate and robust Shapley Values for explaining predictions and focusing on local important variables (06/07/2021)
Although Shapley Values (SV) are widely used in explainable AI, they can...
