Shapley variable importance clouds for interpretable machine learning

10/06/2021
by Yilin Ning, et al.

Interpretable machine learning has focused on explaining final models that optimize performance. The current state of the art is Shapley additive explanations (SHAP), which locally explain variable impact on individual predictions and have recently been extended to a global assessment across the dataset. Recently, Dong and Rudin proposed extending the investigation to models from the same class as the final model that are "good enough", and identified a previous overclaim of variable importance based on a single model. However, this method does not directly integrate with existing Shapley-based interpretations. We close this gap by proposing a Shapley variable importance cloud that pools information across good models to avoid biased assessments in SHAP analyses of final models, and we communicate the findings via novel visualizations. We demonstrate the additional insights gained, compared to conventional explanations and Dong and Rudin's method, using criminal justice and electronic medical records data.
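To make the idea concrete, below is a minimal Python sketch, not the authors' implementation, of pooling SHAP-based variable importance across a set of near-optimal models. The dataset, the logistic regression model class, the way candidate models are generated, and the 0.005 AUC tolerance for "good enough" are all illustrative assumptions; the sketch relies on the shap and scikit-learn packages.

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # illustrative dataset, not the paper's data
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Candidate models from one class (logistic regression), here varying only
# the regularization strength; any other way of generating candidates would do.
candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in np.logspace(-3, 2, 20)]
aucs = np.array([roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
                 for m in candidates])

# "Good" models: test AUC within an (arbitrary) 0.005 tolerance of the best.
good = [m for m, a in zip(candidates, aucs) if a >= aucs.max() - 0.005]

# Per model, global importance of each variable = mean |SHAP value| over the test set.
importance = []
for m in good:
    shap_values = shap.LinearExplainer(m, X_tr).shap_values(X_te)
    importance.append(np.abs(shap_values).mean(axis=0))
importance = np.vstack(importance)  # shape: (n_good_models, n_variables)

# The "cloud": the range of importance each variable attains across the good models.
for name, col in zip(data.feature_names, importance.T):
    print(f"{name:>25s}  min={col.min():.3f}  max={col.max():.3f}")

The per-variable ranges printed at the end are the "cloud": a single final model would report only one point from each range, which is the kind of potentially biased assessment that pooling across good models is meant to avoid.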
