Shapley variable importance cloud for machine learning models

12/16/2022
by Yilin Ning, et al.

Current practice in interpretable machine learning often focuses on explaining the final model trained from data, e.g., by using the Shapley additive explanations (SHAP) method. The recently developed Shapley variable importance cloud (ShapleyVIC) extends this practice to a group of "nearly optimal models" to provide comprehensive and robust variable importance assessments, with estimated uncertainty intervals for a more complete understanding of variable contributions to predictions. ShapleyVIC was initially developed for applications with traditional regression models, and the benefits of ShapleyVIC inference have been demonstrated in real-life prediction tasks using the logistic regression model. However, as a model-agnostic approach, ShapleyVIC is not limited to such scenarios. In this work, we extend the ShapleyVIC implementation to machine learning models to enable wider applications, and propose it as a useful complement to current SHAP analyses for more trustworthy applications of these black-box models.
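To illustrate the core idea described in the abstract, the sketch below is a minimal, hypothetical Python example, not the ShapleyVIC package's own API. It assumes scikit-learn and the shap library, approximates "nearly optimal models" by keeping candidate models whose validation loss falls within an arbitrary 5% tolerance of the best, and then summarizes per-variable mean |SHAP| across those models with a simple percentile interval to mimic importance-with-uncertainty reporting.

# Illustrative sketch only; all names, tolerances, and model choices are assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit candidate models over a grid of regularization strengths.
candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in (0.01, 0.1, 1.0, 10.0, 100.0)]
losses = np.array([log_loss(y_te, m.predict_proba(X_te)[:, 1]) for m in candidates])

# Keep "nearly optimal" models: loss within 5% of the best (tolerance is arbitrary here).
near_optimal = [m for m, l in zip(candidates, losses) if l <= 1.05 * losses.min()]

# For each retained model, compute mean |SHAP| per feature on the test set.
importance = []
for m in near_optimal:
    explainer = shap.Explainer(m.predict_proba, X_tr)  # model-agnostic explainer
    sv = explainer(X_te)                               # Explanation object
    # Binary classifiers have a class axis; take the positive class if present.
    vals = sv.values[..., 1] if sv.values.ndim == 3 else sv.values
    importance.append(np.abs(vals).mean(axis=0))
importance = np.array(importance)                      # shape: (models, features)

# Summarize across models: average importance plus a simple percentile interval.
mean_imp = importance.mean(axis=0)
lo, hi = np.percentile(importance, [2.5, 97.5], axis=0)
for j in range(X.shape[1]):
    print(f"feature {j}: {mean_imp[j]:.4f}  [{lo[j]:.4f}, {hi[j]:.4f}]")

The ensemble-of-models step is what distinguishes this flavor of analysis from a single-model SHAP summary: variables whose importance varies widely across nearly optimal models receive wide intervals rather than a single point estimate.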


Related research

10/06/2021 - Shapley variable importance clouds for interpretable machine learning
Interpretable machine learning has been focusing on explaining final mod...

10/27/2020 - A robust low data solution: dimension prediction of semiconductor nanorods
Precise control over dimension of nanocrystals is critical to tune the p...

04/29/2023 - EBLIME: Enhanced Bayesian Local Interpretable Model-agnostic Explanations
We propose EBLIME to explain black-box machine learning models and obtai...

04/15/2023 - The XAISuite framework and the implications of explanatory system dissonance
Explanatory systems make machine learning models more transparent. Howev...

12/14/2021 - Classifying Emails into Human vs Machine Category
It is an essential product requirement of Yahoo Mail to distinguish betw...

10/20/2022 - vivid: An R package for Variable Importance and Variable Interactions Displays for Machine Learning Models
We present vivid, an R package for visualizing variable importance and v...
