Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter

by Pulkit Sharma, et al.

Understanding the predictions made by machine learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in the job runtimes of applications running on cloud-computing platforms, we compare these approaches on a variety of metrics, including computation time, significance of attribution values, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, consistency does not necessarily improve explanation performance in our case study.
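The core idea behind TreeInterpreter's attributions is that a tree's prediction can be decomposed exactly into a bias term (the training mean at the root) plus one contribution per feature, accumulated along the decision path: each split shifts the running node mean, and that shift is credited to the split feature. A minimal sketch of this decomposition for a single scikit-learn regression tree (the data, model, and the `ti_contributions` helper are illustrative assumptions, not the authors' code):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Fit a small tree on synthetic data (illustrative only)
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

def ti_contributions(tree_model, x):
    """Decompose one prediction into bias + per-feature contributions
    by walking the decision path, in the style of TreeInterpreter."""
    tree = tree_model.tree_
    node = 0
    bias = tree.value[0][0][0]          # training mean at the root
    contrib = np.zeros(x.shape[0])
    while tree.children_left[node] != -1:  # -1 marks a leaf in sklearn
        feat = tree.feature[node]
        child = (tree.children_left[node]
                 if x[feat] <= tree.threshold[node]
                 else tree.children_right[node])
        # credit the change in node mean caused by this split to its feature
        contrib[feat] += tree.value[child][0][0] - tree.value[node][0][0]
        node = child
    return bias, contrib

x = X[0]
bias, contrib = ti_contributions(model, x)
# identity: prediction == bias + sum of contributions
print(np.isclose(model.predict(x.reshape(1, -1))[0], bias + contrib.sum()))
```

SHAP-TE assigns contributions differently, by averaging a feature's marginal effect over all orderings (Shapley values), which is what yields its consistency guarantee at additional computational cost.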
