A Meta Survey of Quality Evaluation Criteria in Explanation Methods

03/25/2022
by   Helena Löfström, et al.

Explanation methods and their evaluation have become a significant issue in explainable artificial intelligence (XAI) due to the recent surge of opaque AI models in decision support systems (DSS). Since the most accurate AI models are opaque, with low transparency and comprehensibility, explanations are essential for bias detection and for controlling uncertainty. There is a plethora of criteria to choose from when evaluating the quality of explanation methods. However, since existing criteria focus on evaluating single explanation methods, it is not obvious how to compare the quality of different methods. This lack of consensus creates a critical shortage of rigour in the field, yet little has been written about comparative evaluations of explanation methods. In this paper, we have conducted a semi-systematic meta-survey of fifteen literature surveys covering the evaluation of explainability to identify existing criteria usable for comparative evaluations of explanation methods. The main contribution of the paper is the suggestion to use appropriate trust as a criterion for measuring the outcome of the subjective evaluation criteria, thereby making comparative evaluations possible. We also present a model of explanation quality aspects. In the model, criteria with similar definitions are grouped and related to three identified aspects of quality: model, explanation, and user. We further identify four commonly accepted criteria (groups) in the literature, together covering all aspects of explanation quality: performance, appropriate trust, explanation satisfaction, and fidelity. We suggest that the model be used as a chart for comparative evaluations to create more generalisable research in explanation quality.


