A Human-Grounded Evaluation of SHAP for Alert Processing

07/07/2019
by Hilde J. P. Weerts, et al.

In recent years, many new explanation methods have been proposed to make machine learning predictions interpretable. However, the utility of these methods in practical applications has not been studied extensively. In this paper we present the results of a human-grounded evaluation of SHAP, an explanation method that has been well received in the XAI and related communities. In particular, we study whether this local, model-agnostic explanation method can help human domain experts assess the correctness of positive predictions, i.e., alerts generated by a classifier. We conducted experiments with three groups of participants (159 in total), all of whom had basic knowledge of explainable machine learning. We performed a qualitative analysis of recorded reflections of participants performing alert processing with and without SHAP information. The results suggest that SHAP explanations do influence the decision-making process, although the model's confidence score remains the leading source of evidence. We also statistically test whether task utility metrics differ between tasks for which an explanation was available and tasks for which it was not. Contrary to common intuition, we did not find a significant difference in alert processing performance with and without a SHAP explanation.
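To make the evaluated setup concrete, the following is a minimal sketch, not the authors' code, of how SHAP values might be computed for alerts, i.e. the positive predictions of a classifier. The dataset, model, and 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy binary classification problem standing in for the alerting task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Alerts" are instances whose predicted probability of the positive class
# exceeds the decision threshold (0.5 here, an assumption).
proba = model.predict_proba(X_test)[:, 1]
alerts = X_test[proba > 0.5]

# Local, per-feature contributions for each alert; together with the base
# value they sum to the model output for that alert.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(alerts)

# Depending on the shap version, tree classifiers return either a list of
# per-class arrays or a single 3-d array; keep the positive-class part.
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[..., 1]

# The three features that contributed most to the first alert.
for j in np.argsort(-np.abs(sv[0]))[:3]:
    print(f"feature {j}: contribution {sv[0][j]:+.3f}")
```

The abstract also mentions a statistical test of task utility metrics between the explanation and no-explanation conditions. The exact test and metric are not stated, so the sketch below assumes a per-task correctness metric and a two-sided Mann-Whitney U test; the outcome arrays are hypothetical and purely illustrative.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-task correctness (1 = correct decision on the alert);
# these numbers are made up to show the mechanics, not study results.
with_shap = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
without_shap = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]

stat, p = mannwhitneyu(with_shap, without_shap, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p >= 0.05: no significant difference at the 5% level
```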

research · 01/16/2018
A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning
In order for people to be able to trust and take advantage of the result...

research · 05/05/2021
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain
In the present paper we present the potential of Explainable Artificial ...

research · 01/21/2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
There have been several research works proposing new Explainable AI (XAI...

research · 07/22/2018
Knowledge-based Transfer Learning Explanation
Machine learning explanation can significantly boost machine learning's ...

research · 07/17/2020
Sequential Explanations with Mental Model-Based Policies
The act of explaining across two parties is a feedback loop, where one p...

research · 02/16/2023
Assisting Human Decisions in Document Matching
Many practical applications, ranging from paper-reviewer assignment in p...

research · 09/04/2023
Why Change My Design: Explaining Poorly Constructed Visualization Designs with Explorable Explanations
Although visualization tools are widely available and accessible, not ev...
