Explaining predictive models with mixed features using Shapley values and conditional inference trees

07/02/2020
by Annabelle Redelmeier, et al.

It is becoming increasingly important to explain complex, black-box machine learning models. Although there is an expanding literature on this topic, Shapley values stand out as a sound method to explain predictions from any type of machine learning model. The original development of Shapley values for prediction explanation relied on the assumption that the features being described were independent. This methodology was then extended to explain dependent features with an underlying continuous distribution. In this paper, we propose a method to explain mixed (i.e. continuous, discrete, ordinal, and categorical) dependent features by modeling the dependence structure of the features using conditional inference trees. We evaluate our proposed method against the current industry standards in various simulation studies and find that it often outperforms the other approaches. Finally, we apply our method to a real financial data set used in the 2018 FICO Explainable Machine Learning Challenge and show how our explanations compare to those of the FICO challenge Recognition Award-winning team.
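
For context, the framework the abstract builds on decomposes a prediction into per-feature contributions phi_j via a contribution function v(S). In the conditional approach referred to here, v(S) is the expected prediction given the observed values of the features in the subset S. These are standard definitions from this line of work, not quoted from the paper itself:

    phi_j = sum over S ⊆ M \ {j} of [ |S|! (|M| - |S| - 1)! / |M|! ] * ( v(S ∪ {j}) - v(S) ),

    v(S) = E[ f(x) | x_S = x*_S ],

where M is the set of all features and x*_S are the observed values of the features in S. Estimating v(S) requires samples from the conditional distribution of the remaining features given x*_S; modeling that conditional distribution for mixed, dependent features is the role the conditional inference trees play.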
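
To make the estimation step concrete, below is a minimal, hypothetical Python sketch of tree-based conditional sampling inside a brute-force Shapley computation. It is not the authors' implementation (their method uses conditional inference trees, as in R's partykit::ctree, and is associated with the R package shapr); here sklearn's DecisionTreeRegressor stands in for ctree, features are assumed to be numerically encoded, and the function names (sample_conditional, v, shapley_values) are illustrative only:

    # Hypothetical sketch: Shapley values with tree-based conditional sampling.
    # sklearn's DecisionTreeRegressor stands in for the conditional inference
    # trees (R partykit::ctree) used in the paper; features are assumed numeric.
    import numpy as np
    from itertools import combinations
    from math import factorial
    from sklearn.tree import DecisionTreeRegressor

    def sample_conditional(X_train, cond_idx, x_cond, free_idx, n_samples, rng):
        """Approximate draws from p(x_free | x_cond): fit a tree mapping the
        conditioned features to the free ones, route x_cond to a leaf, and
        resample training rows that fall in the same leaf."""
        tree = DecisionTreeRegressor(min_samples_leaf=20, random_state=0)
        tree.fit(X_train[:, cond_idx], X_train[:, free_idx])
        leaf = tree.apply(x_cond.reshape(1, -1))[0]
        pool = X_train[tree.apply(X_train[:, cond_idx]) == leaf][:, free_idx]
        return pool[rng.integers(0, len(pool), size=n_samples)]

    def v(model, X_train, x, S, n_samples=100, rng=None):
        """Contribution function v(S) = E[f(X) | X_S = x_S]."""
        rng = rng if rng is not None else np.random.default_rng(0)
        d = X_train.shape[1]
        if len(S) == 0:
            return model(X_train).mean()       # v(empty set) = E[f(X)]
        if len(S) == d:
            return model(x.reshape(1, -1))[0]  # v(all features) = f(x)
        free = [j for j in range(d) if j not in S]
        X_s = np.tile(x, (n_samples, 1))       # hold x_S fixed ...
        X_s[:, free] = sample_conditional(X_train, S, x[S], free, n_samples, rng)
        return model(X_s).mean()               # ... average over x_free | x_S

    def shapley_values(model, X_train, x, n_samples=100):
        """Exact Shapley weights over all subsets (feasible only for small d)."""
        d = X_train.shape[1]
        phi = np.zeros(d)
        for j in range(d):
            others = [k for k in range(d) if k != j]
            for size in range(d):
                for S in combinations(others, size):
                    S = list(S)
                    w = (factorial(len(S)) * factorial(d - len(S) - 1)
                         / factorial(d))
                    phi[j] += w * (v(model, X_train, x, S + [j], n_samples)
                                   - v(model, X_train, x, S, n_samples))
        return phi

    # Example usage with synthetic data and a fitted model's predict method:
    # rng = np.random.default_rng(1)
    # X = rng.normal(size=(500, 3)); y = X[:, 0] + X[:, 1] * X[:, 2]
    # from sklearn.ensemble import RandomForestRegressor
    # rf = RandomForestRegressor(random_state=0).fit(X, y)
    # print(shapley_values(rf.predict, X, X[0]))

Resampling training rows within a leaf preserves the empirical dependence among the remaining features, which is the same intuition behind the ctree approach, though conditional inference trees choose splits with permutation-test statistics rather than variance reduction.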


Related research

11/26/2021  Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
03/25/2019  Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
07/29/2022  SHAP for additively modeled features in a boosted trees model
02/12/2021  Explaining predictive models using Shapley values and non-parametric vine copulas
05/24/2019  Ex-Twit: Explainable Twitter Mining on Health Data
09/23/2020  Explaining Chemical Toxicity using Missing Features
09/09/2022  Shapley value-based approaches to explain the robustness of classifiers in machine learning
