
A Human-Grounded Evaluation of SHAP for Alert Processing

07/07/2019
by Hilde J. P. Weerts, et al. (TU Eindhoven, Rabobank)

In recent years, many new explanation methods have been proposed to make machine learning predictions interpretable. However, the utility of these methods in practical applications has not been researched extensively. In this paper, we present the results of a human-grounded evaluation of SHAP, an explanation method that has been well received in the XAI and related communities. In particular, we study whether this local, model-agnostic explanation method can help human domain experts assess the correctness of positive predictions, i.e., alerts generated by a classifier. We conducted experiments with three groups of participants (159 in total), all of whom had basic knowledge of explainable machine learning, and qualitatively analyzed their recorded reflections while they processed alerts with and without SHAP information. The results suggest that SHAP explanations do influence the decision-making process, although the model's confidence score remains a leading source of evidence. We also statistically tested whether task utility metrics differ significantly between tasks for which an explanation was available and tasks for which it was not. Contrary to common intuition, we found no significant difference in alert processing performance when a SHAP explanation was available compared to when it was not.
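To make the setting concrete: the classifier raises an alert whenever it predicts the positive class, and SHAP assigns each feature a local contribution to that prediction. The sketch below illustrates this pipeline; the dataset, model choice, and alert threshold are assumptions for illustration, not the paper's actual setup.

```python
# Hedged sketch of SHAP-based alert explanation. The data, model, and
# alert threshold are illustrative assumptions, not the study's setup.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder tabular data standing in for real domain features.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Alerts" are the test instances the model flags as positive.
proba = model.predict_proba(X_test)[:, 1]
alerts = np.where(proba >= 0.5)[0]

# Local, per-alert explanations: each SHAP value is one feature's
# contribution to pushing this alert's score up or down.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test[alerts])

i = 0  # first alert
if isinstance(sv, list):        # older SHAP versions: one array per class
    vals = sv[1][i]
elif sv.ndim == 3:              # newer SHAP: (samples, features, classes)
    vals = sv[i, :, 1]
else:
    vals = sv[i]

print(f"confidence score: {proba[alerts[i]]:.2f}")
for j in np.argsort(-np.abs(vals))[:3]:  # top-3 contributing features
    print(f"feature {j}: SHAP value {vals[j]:+.3f}")
```

The between-condition comparison described in the abstract could be run along the following lines; the metric (per-participant accuracy) and the test (Mann-Whitney U) are assumptions, since the abstract does not name them.

```python
# Hedged sketch of the task utility comparison: a metric measured with
# and without a SHAP explanation, using placeholder values.
from scipy.stats import mannwhitneyu

acc_with_shap = [0.8, 0.7, 0.9, 0.6, 0.8]   # hypothetical per-participant accuracy
acc_without   = [0.7, 0.8, 0.8, 0.7, 0.9]   # hypothetical per-participant accuracy

stat, p = mannwhitneyu(acc_with_shap, acc_without, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p >= 0.05: no significant difference
```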

