A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning

01/16/2018
by Sina Mohseni, et al.

In order for people to trust and take advantage of advanced machine learning and artificial intelligence in real decision making, they need to understand the machine's rationale for a given output. Research in explainable artificial intelligence (XAI) addresses this aim, but there is a need to evaluate the human relevance and understandability of the explanations produced. Our work contributes a novel methodology for evaluating the quality, or human interpretability, of explanations for machine learning models. We present an evaluation benchmark for instance explanations from text and image classifiers. The explanation meta-data in this benchmark is generated from user annotations of image and text samples. We describe the benchmark and demonstrate its utility through a quantitative evaluation of explanations generated by a recent machine learning algorithm. This research demonstrates how human-grounded evaluation can be used as a measure to qualify local machine learning explanations.
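To make the comparison concrete, the sketch below shows one way a benchmark like this could score a local explanation: the explainer's top-ranked features for an instance are matched against the human-annotated relevance mask, and agreement is reported as precision, recall, and intersection-over-union. This is a minimal illustration under stated assumptions, not the paper's exact metric; the function name explanation_agreement and the per-feature mask format are hypothetical.

import numpy as np

def explanation_agreement(explanation_scores, human_mask, k=None):
    """Score how well a local explanation matches human annotations.

    explanation_scores : per-feature importance values from the explainer
        (e.g., saliency over words or flattened image pixels).
    human_mask : binary array of the same length, 1 where human
        annotators marked the feature as relevant.
    k : number of top-ranked features to compare; defaults to the
        number of human-annotated features.
    """
    explanation_scores = np.asarray(explanation_scores, dtype=float)
    human_mask = np.asarray(human_mask, dtype=bool)
    if k is None:
        k = int(human_mask.sum())

    # Treat the explainer's k highest-scoring features as its "relevant" set.
    top_k = np.argsort(explanation_scores)[::-1][:k]
    predicted = np.zeros_like(human_mask)
    predicted[top_k] = True

    tp = np.logical_and(predicted, human_mask).sum()
    precision = tp / max(predicted.sum(), 1)
    recall = tp / max(human_mask.sum(), 1)
    iou = tp / max(np.logical_or(predicted, human_mask).sum(), 1)
    return {"precision": precision, "recall": recall, "iou": iou}

# Example: a 6-token text instance where annotators marked tokens 1 and 3.
scores = [0.05, 0.80, 0.10, 0.60, 0.02, 0.01]   # explainer output
mask   = [0,    1,    0,    1,    0,    0]      # human annotation meta-data
print(explanation_agreement(scores, mask))
# {'precision': 1.0, 'recall': 1.0, 'iou': 1.0}

Defaulting k to the number of human-annotated features keeps the comparison fair across instances whose annotations vary in size, since precision and recall are then computed over sets of equal cardinality.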


Related Research

"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI (06/27/2022)
There is broad agreement that Artificial Intelligence (AI) systems, part...

The Promise and Peril of Human Evaluation for Model Interpretability (11/20/2017)
Transparency, user trust, and human comprehension are popular ethical mo...

Textual Explanations and Critiques in Recommendation Systems (05/15/2022)
Artificial intelligence and machine learning algorithms have become ubiq...

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning (07/16/2019)
Interpretable Machine Learning (IML) has become increasingly important i...

A Human-Grounded Evaluation of SHAP for Alert Processing (07/07/2019)
In the past years, many new explanation methods have been proposed to ac...

Bounded logit attention: Learning to explain image classifiers (05/31/2021)
Explainable artificial intelligence is the attempt to elucidate the work...

Human-in-the-loop model explanation via verbatim boundary identification in generated neighborhoods (06/24/2021)
The black-box nature of machine learning models limits their use in case...
