Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure

04/07/2021
by Katarzyna Pekala, et al.

One of the key elements of explanatory analysis of a predictive model is to assess the importance of individual variables. Rapid development of the area of predictive model exploration (also called explainable artificial intelligence or interpretable machine learning) has led to the popularization of local (instance-level) and global (dataset-level) methods, such as Permutation Variable Importance, Shapley Values (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Break Down, and so on. However, these methods do not use information about the correlation between features, which significantly reduces the explainability of the model behaviour. In this work, we propose new methods to support model analysis by exploiting the information about the correlation between variables. The dataset-level aspect importance measure is inspired by the block-permutation procedure, while the instance-level aspect importance measure is inspired by the LIME method. We show how to analyze groups of variables (aspects) both when they are proposed by the user and when they should be determined automatically based on the hierarchical structure of correlations between variables. Additionally, we present a new type of model visualisation, triplot, which exploits a hierarchical structure of variable grouping to produce a high-information-density model visualisation. This visualisation provides a consistent illustration for either local or global model and data exploration. We also show an example of real-world data with 5k instances and 37 features in which a significant correlation between variables affects the interpretation of the effect of variable importance. The proposed method is, to our knowledge, the first to allow direct use of the correlation between variables in exploratory model analysis.
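The two ideas in the abstract that lend themselves to a short illustration are (1) forming aspects automatically by hierarchically clustering variables on their correlations, and (2) measuring an aspect's importance by permuting the whole block of variables together. The sketch below is a minimal, illustrative reading of those ideas, not the authors' triplot implementation; the function names (`correlation_aspects`, `aspect_importance`) and the clustering threshold are assumptions for this example.

```python
# Illustrative sketch (NOT the authors' triplot code): aspects formed by
# hierarchical clustering of 1 - |correlation|, then block-permutation
# importance per aspect. Names and threshold are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform


def correlation_aspects(X, threshold=0.5):
    """Group columns of X by hierarchical clustering of 1 - |corr|."""
    corr = np.corrcoef(X, rowvar=False)
    dist = squareform(1 - np.abs(corr), checks=False)  # condensed distances
    labels = fcluster(linkage(dist, method="complete"),
                      t=threshold, criterion="distance")
    aspects = {}
    for j, lab in enumerate(labels):
        aspects.setdefault(f"aspect_{lab}", []).append(j)
    return aspects


def aspect_importance(predict, X, y, aspects, loss, n_repeats=10, seed=0):
    """Loss increase when all variables of an aspect are permuted jointly."""
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))
    scores = {}
    for name, cols in aspects.items():
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, cols] = X[perm][:, cols]  # permute the block as a unit
            losses.append(loss(y, predict(Xp)))
        scores[name] = float(np.mean(losses) - base)
    return scores
```

Permuting correlated variables jointly (rather than one at a time) is what lets the measure attribute importance to the group without breaking the dependence structure inside it, which is the failure mode of per-variable permutation importance that the abstract points at.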

