Data Representing Ground-Truth Explanations to Evaluate XAI Methods

11/18/2020
by Shideh Shams Amiri, et al.

Explainable artificial intelligence (XAI) methods are currently evaluated with approaches that mostly originated in interpretable machine learning (IML) research and that focus on understanding models: comparison against existing attribution approaches, sensitivity analyses, gold sets of features, axioms, or demonstrations on images. These methods have several shortcomings. They do not indicate where current XAI approaches fail, and so cannot guide the field toward consistent progress. They do not measure accuracy in support of accountable decisions. In practice, they make it impossible to determine whether one XAI method is better than another or what the weaknesses of existing methods are, leaving researchers without guidance on which research questions would advance the field. Other fields typically rely on ground-truth data and benchmarks, but data representing ground-truth explanations is not commonly used in XAI or IML. One reason is that explanations are subjective: an explanation that satisfies one user may not satisfy another. To overcome these problems, we propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods. The contributions of this paper include a methodology to create synthetic data representing ground-truth explanations, three data sets, an evaluation of LIME using these data sets, and a preliminary analysis of the challenges and potential benefits of using these data to evaluate existing XAI approaches. Evaluation methods based on human-centric studies are outside the scope of this paper.
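The core idea can be illustrated with a minimal sketch (not the paper's code or data sets): a canonical equation defines the ground-truth feature importances by construction, a model that has learned that equation stands in for the trained predictor, and a crude perturbation-based attribution (a LIME-like stand-in, not LIME itself) is checked against the known coefficients. All names (`GROUND_TRUTH`, `local_attribution`, the equation `y = 3*x1 + 0*x2`) are illustrative assumptions.

```python
import random

# Assumed canonical equation serving as the ground-truth explanation:
# y = 3*x1 + 0*x2, so x1 matters and x2 is irrelevant by construction.
GROUND_TRUTH = {"x1": 3.0, "x2": 0.0}

def model(x1, x2):
    # Stand-in for a trained model that has learned the canonical equation.
    return 3.0 * x1 + 0.0 * x2

def local_attribution(x1, x2, eps=1e-4):
    # Crude perturbation-based local importance (a LIME-like stand-in):
    # finite-difference sensitivity of the model output to each feature.
    base = model(x1, x2)
    return {
        "x1": abs(model(x1 + eps, x2) - base) / eps,
        "x2": abs(model(x1, x2 + eps) - base) / eps,
    }

def agrees_with_ground_truth(attr, truth):
    # Evaluation criterion: the attribution ranks features in the same
    # order as the known coefficients of the canonical equation.
    rank_attr = sorted(attr, key=attr.get, reverse=True)
    rank_truth = sorted(truth, key=truth.get, reverse=True)
    return rank_attr == rank_truth

random.seed(0)
instance = (random.uniform(-1, 1), random.uniform(-1, 1))
attr = local_attribution(*instance)
print(agrees_with_ground_truth(attr, GROUND_TRUTH))  # True
```

Because the data-generating equation is known, the attribution can be scored objectively, which is exactly the property the usual demonstration-based evaluations lack.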


Related research

02/11/2023  A novel approach to generate datasets with XAI ground truth to evaluate image models
08/11/2023  FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
07/20/2020  Towards Ground Truth Explainability on Tabular Data
11/22/2020  Registration of serial sections: An evaluation method based on distortions of the ground truths
05/20/2021  Evaluating the Correctness of Explainable AI Algorithms for Classification
02/23/2023  The Generalizability of Explanations
07/16/2019  Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
