Evaluating Explanation Without Ground Truth in Interpretable Machine Learning

07/16/2019
by Fan Yang, et al.

Interpretable Machine Learning (IML) has become increasingly important in applications such as autonomous cars and medical diagnosis, where explanations help people understand how machine learning systems work and strengthen their trust in those systems. In robotics in particular, explanations from IML are valuable for providing reasons for adverse or inscrutable actions that could compromise public safety and interests. However, due to the diverse scenarios and the subjective nature of explanations, we rarely have ground truth for benchmarking the quality of generated explanations in IML. A sense of explanation quality matters not only for quantifying system boundaries, but also for realizing the true benefits to human users in real-world applications. To support benchmark evaluation in IML, this paper rigorously defines the problem of evaluating explanations and systematically reviews existing efforts. Specifically, we summarize three general aspects of explanation (i.e., predictability, fidelity, and persuasibility) with formal definitions, and review representative methodologies for each of them under different tasks. Further, we design a unified evaluation framework according to the hierarchical needs of developers and end-users, which can be readily adapted to different scenarios in practice. Finally, we discuss open problems and raise several limitations of current evaluation techniques for future exploration.
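To make the fidelity aspect concrete, the sketch below shows one common proxy measurement: mask the features an explanation deems most important and check how much the model's prediction changes. This is only an illustrative assumption-laden example, not the paper's formal definition; the dataset, the classifier, the helper name fidelity_drop, and the use of global feature importances as a stand-in attribution are all hypothetical choices made here for demonstration.

```python
# Minimal sketch of a fidelity-style check (illustrative assumptions throughout):
# mask the k most-attributed features of an instance and measure the drop in the
# model's predicted probability. A larger drop suggests the explanation points at
# features the model actually relies on, under this particular proxy.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)


def fidelity_drop(model, x, attribution, baseline, k=5):
    """Probability drop after masking the k most-attributed features."""
    top_k = np.argsort(np.abs(attribution))[-k:]   # indices of the k largest attributions
    x_masked = x.copy()
    x_masked[top_k] = baseline[top_k]              # replace them with a neutral value
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
    return p_orig - p_masked


# Toy "explanation": global feature importances as a per-instance attribution;
# a real study would use an instance-level explainer such as LIME or SHAP.
attribution = model.feature_importances_
print(fidelity_drop(model, X[0], attribution, baseline=X.mean(axis=0), k=5))
```

Under this proxy, comparing the drop produced by a candidate explanation against the drop from randomly chosen features gives a rough, ground-truth-free signal of fidelity; predictability and persuasibility, by contrast, typically require human-subject studies.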

