An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models

04/24/2022
by   Han Yuan, et al.

Nowadays, the interpretation of why a machine learning (ML) model makes certain inferences is as crucial as the accuracy of those inferences. Some ML models, such as decision trees, possess inherent interpretability that humans can comprehend directly. Others, such as artificial neural networks (ANNs), rely on external methods to uncover their deduction mechanism. SHapley Additive exPlanations (SHAP) is one such external method, and it requires a background dataset when interpreting ANNs. Generally, a background dataset consists of instances randomly sampled from the training dataset; however, the sampling size and its effect on SHAP remain unexplored. In our empirical study on the MIMIC-III dataset, we show that the two core explanation outputs, SHAP values and variable rankings, fluctuate when different background datasets are drawn by random sampling, indicating that users cannot unquestioningly trust a one-shot interpretation from SHAP. Fortunately, this fluctuation decreases as the background dataset grows. We also observe a U-shape in the stability assessment of SHAP variable rankings, demonstrating that SHAP ranks the most and least important variables more reliably than moderately important ones. Overall, our results suggest that users should account for how background data affect SHAP results, and that SHAP stability improves as the background sample size increases.
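The background-size effect described above can be illustrated with a minimal sketch. The code below is not the paper's experiment (which uses MIMIC-III and deep models); it assumes a hypothetical linear model, for which the exact SHAP value of feature i at instance x with background B is w_i * (x_i - mean(B[:, i])), and measures how much that value fluctuates across repeated random background draws of different sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model f(x) = w . x and a stand-in "training set".
w = np.array([0.5, -1.2, 2.0])
train = rng.normal(size=(5000, 3))
x = np.array([1.0, 1.0, 1.0])  # instance to explain


def shap_linear(x, background, w):
    """Exact SHAP values of a linear model w.r.t. a background dataset."""
    return w * (x - background.mean(axis=0))


def shap_spread(size, repeats=200):
    """Std. dev. of feature 0's SHAP value over repeated random backgrounds."""
    vals = []
    for _ in range(repeats):
        idx = rng.choice(len(train), size=size, replace=False)
        vals.append(shap_linear(x, train[idx], w)[0])
    return float(np.std(vals))


spread_small = shap_spread(10)    # small background: unstable SHAP values
spread_large = shap_spread(1000)  # large background: stable SHAP values
print(spread_small, spread_large)
```

Under these assumptions the fluctuation of a SHAP value scales roughly as 1/sqrt(background size), consistent with the study's finding that stability improves with larger background samples.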

