On the Trustworthiness of Tree Ensemble Explainability Methods

by Angeline Yasodhara, et al.

The recent increase in the deployment of machine learning models in critical domains such as healthcare, criminal justice, and finance has highlighted the need for trustworthy methods that can explain these models to stakeholders. Feature importance methods (e.g., gain and SHAP) are among the most popular explainability methods used to address this need. For any explainability technique to be trustworthy and meaningful, it must provide explanations that are both accurate and stable. Although the stability of local feature importance methods (explaining individual predictions) has been studied before, there is a knowledge gap regarding the stability of global feature importance methods (explanations for the whole model). Additionally, no study has evaluated and compared the accuracy of global feature importance methods with respect to feature ordering. In this paper, we evaluate the accuracy and stability of global feature importance methods through comprehensive experiments on simulations as well as four real-world datasets. We focus on tree-based ensemble methods, as they are widely used in industry, and measure the accuracy and stability of explanations under two scenarios: 1) when inputs are perturbed, and 2) when models are perturbed. Our findings provide a comparison of these methods under a variety of settings and shed light on the limitations of global feature importance methods by indicating their lack of accuracy with and without noisy inputs, as well as their lack of stability with respect to: 1) increases in input dimension or noise in the data; and 2) perturbations in models initialized with different random seeds or hyperparameter settings.




