Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter

10/13/2020
by Pulkit Sharma, et al.

Understanding predictions made by machine learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: TreeInterpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in the job runtimes of applications that utilize cloud-computing platforms, we compare these approaches on a variety of metrics, including computation time, significance of attribution values, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, this consistency does not necessarily improve explanation performance in our case study.
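To illustrate the kind of attribution both methods produce, here is a minimal, self-contained sketch of Saabas-style path attribution, the decomposition TreeInterpreter computes: walking an instance's decision path, each split's feature is credited with the change in the node's mean prediction, so that bias + sum(contributions) equals the tree's output. The tiny hand-built tree is a hypothetical example for illustration only, not data or code from the paper.

```python
# Saabas-style path attribution (the scheme behind TreeInterpreter),
# sketched on a hand-built regression tree. Hypothetical example values.

class Node:
    def __init__(self, value, feature=None, threshold=None, left=None, right=None):
        self.value = value          # mean prediction of training samples at this node
        self.feature = feature      # index of split feature (None for a leaf)
        self.threshold = threshold  # split threshold
        self.left = left            # subtree for x[feature] <= threshold
        self.right = right          # subtree for x[feature] > threshold

def attribute(node, x):
    """Return (bias, contributions) with bias + sum(contributions) == prediction."""
    bias = node.value  # root mean: the "bias" term
    contributions = {}
    while node.feature is not None:
        child = node.left if x[node.feature] <= node.threshold else node.right
        # credit the split feature with the change in mean prediction
        contributions[node.feature] = (
            contributions.get(node.feature, 0.0) + child.value - node.value
        )
        node = child
    return bias, contributions

# A depth-2 regression tree: root mean 10.0, split on feature 0, then feature 1.
tree = Node(10.0, feature=0, threshold=0.5,
            left=Node(4.0, feature=1, threshold=1.0,
                      left=Node(2.0), right=Node(6.0)),
            right=Node(16.0))

bias, contribs = attribute(tree, [0.2, 2.0])
prediction = bias + sum(contribs.values())
# bias = 10.0, contribs = {0: -6.0, 1: 2.0}, prediction = 6.0
```

SHAP-TE assigns credit differently, by averaging over feature orderings (Shapley values), which is what gives it the consistency guarantee discussed above, at higher computational cost.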


09/16/2022 · Computing Abductive Explanations for Boosted Trees
Boosted trees is a dominant ML model, exhibiting high accuracy. However,...

09/20/2021 · Fast TreeSHAP: Accelerating SHAP Value Computation for Trees
SHAP (SHapley Additive exPlanation) values are one of the leading tools ...

09/26/2013 · Evaluating computational models of explanation using human judgments
We evaluate four computational models of explanation in Bayesian network...

04/01/2021 · Coalitional strategies for efficient individual prediction explanation
As Machine Learning (ML) is now widely applied in many domains, in both ...

06/19/2018 · Instance-Level Explanations for Fraud Detection: A Case Study
Fraud detection is a difficult problem that can benefit from predictive ...

12/31/2020 · Quantitative Evaluations on Saliency Methods: An Experimental Study
It has been long debated that eXplainable AI (XAI) is an important topic...

07/10/2019 · Explaining an increase in predicted risk for clinical alerts
Much work aims to explain a model's prediction on a static input. We con...