FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods

08/11/2023
by Robin Hesse, et al.

The field of explainable artificial intelligence (XAI) aims to uncover the inner workings of complex deep neural models. While crucial for safety-critical domains, XAI inherently lacks ground-truth explanations, making its automatic evaluation an unsolved problem. We address this challenge by proposing a novel synthetic vision dataset, named FunnyBirds, and accompanying automatic evaluation protocols. Our dataset allows for semantically meaningful image interventions, e.g., removing individual object parts, which has three important implications. First, it enables analyzing explanations on a part level, which is closer to human comprehension than existing methods that evaluate on a pixel level. Second, by comparing the model output for inputs with removed parts, we can estimate ground-truth part importances that should be reflected in the explanations. Third, by mapping individual explanations into a common space of part importances, we can analyze a variety of different explanation types in a single common framework. Using our tools, we report results for 24 different combinations of neural models and XAI methods, demonstrating the strengths and weaknesses of the assessed methods in a fully automatic and systematic manner.
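The part-removal idea behind the estimated ground-truth importances can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical approximation: the function name, the PyTorch classifier interface, and the per-part binary masks are all assumptions, and it removes a part by zeroing its pixels, whereas the actual dataset re-renders the scene without the part.

```python
import torch

def estimate_part_importances(model, image, part_masks, target_class):
    """Deletion-based proxy for per-part importance: remove each part and
    measure the drop in the target-class score.

    model        -- a PyTorch classifier taking a batched image tensor (assumed)
    image        -- a single image tensor of shape (C, H, W)
    part_masks   -- dict mapping part name to a binary mask tensor (assumed)
    target_class -- index of the class whose score is tracked
    """
    model.eval()
    importances = {}
    with torch.no_grad():
        base_score = model(image.unsqueeze(0))[0, target_class].item()
        for part_name, mask in part_masks.items():
            # Approximate part removal by masking its pixels to zero;
            # the synthetic dataset instead renders the object without the part.
            occluded = image * (1 - mask)
            score = model(occluded.unsqueeze(0))[0, target_class].item()
            importances[part_name] = base_score - score
    return importances
```

Because the benchmark can render interventions rather than mask pixels, the masking-based scores above should be read only as a rough stand-in for the dataset's part-level importance estimates.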


