Visualizing Variable Importance and Variable Interaction Effects in Machine Learning Models

08/09/2021
by Alan Inglis, et al.

Variable importance, interaction measures, and partial dependence plots are important summaries in the interpretation of statistical and machine learning models. In this paper we describe new visualization techniques for exploring these model summaries. We construct heatmap and graph-based displays showing variable importance and interaction jointly, which are carefully designed to highlight important aspects of the fit. We describe a new matrix-type layout showing all single and bivariate partial dependence plots, and an alternative layout based on graph Eulerians focusing on key subsets. Our new visualizations are model-agnostic and are applicable to regression and classification supervised learning settings. They enhance interpretation even in situations where the number of variables is large. Our R package vivid (variable importance and variable interaction displays) provides an implementation.
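The displays described in the abstract are implemented in the vivid package on CRAN. As a minimal sketch (assuming the package's `vivi()`, `viviHeatmap()`, `viviNetwork()`, and `pdpPairs()` functions, and using a random forest fit purely for illustration), a typical workflow looks like:

```r
# Sketch of the vivid workflow; function names assume the CRAN vivid API.
library(vivid)
library(randomForest)

# Fit any supervised learning model (vivid is model-agnostic).
aq <- na.omit(airquality)
fit <- randomForest(Ozone ~ ., data = aq)

# Compute the importance/interaction matrix:
# diagonal = variable importance, off-diagonal = pairwise interactions.
viviMat <- vivi(data = aq, fit = fit, response = "Ozone")

viviHeatmap(viviMat)   # heatmap display of importance and interactions
viviNetwork(viviMat)   # graph-based (network) display of the same matrix

# Matrix-type layout of all univariate and bivariate partial dependence plots.
pdpPairs(data = aq, fit = fit, response = "Ozone")
```

The same `vivi()` matrix feeds both the heatmap and network displays, so the two visualizations can be compared without recomputing importance or interaction scores.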


