What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods

12/06/2021
by Thomas Fel, et al.

A multitude of explainability methods and theoretical evaluation scores have been proposed. However, it is not yet known (1) how useful these methods are in real-world scenarios, or (2) how well theoretical measures predict their usefulness for practical use by a human. To fill this gap, we conducted large-scale human psychophysics experiments to evaluate the ability of human participants (n = 1,150) to leverage representative attribution methods to learn to predict the decisions of different image classifiers. Our results demonstrate that theoretical measures used to score explainability methods poorly reflect the practical usefulness of individual attribution methods in real-world scenarios. Furthermore, the degree to which individual attribution methods helped human participants predict classifiers' decisions varied widely across categorization tasks and datasets. Overall, our results highlight fundamental challenges for the field, suggesting a critical need to develop better explainability methods and to deploy human-centered evaluation approaches. We will make the code of our framework available to ease the systematic evaluation of novel explainability methods.
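The evaluation described above reduces to two quantities: a practical utility score per attribution method (how accurately participants predict the classifier's decisions after studying its explanations) and the rank correlation between those utility scores and theoretical faithfulness scores. Below is a minimal sketch of that comparison; the array names and all numeric values are hypothetical placeholders for illustration, not results or code from the paper.

```python
# Hedged sketch of the human-centered utility measure and its comparison
# with theoretical scores. All data below is illustrative, not from the paper.
import numpy as np
from scipy.stats import spearmanr

def utility_score(human_guesses: np.ndarray, classifier_decisions: np.ndarray) -> float:
    """Fraction of trials on which a participant correctly predicted the
    classifier's decision after studying its explanations."""
    return float(np.mean(human_guesses == classifier_decisions))

# One aggregated utility score per attribution method (hypothetical values),
# and one theoretical faithfulness score per method (e.g., a Deletion-style
# metric; also hypothetical values).
utilities = np.array([0.71, 0.64, 0.58, 0.66])
theoretical_scores = np.array([0.42, 0.55, 0.47, 0.39])

# Rank correlation between practical usefulness and theoretical scores.
# The paper's finding is that this relationship is weak in practice.
rho, pval = spearmanr(utilities, theoretical_scores)
print(f"Spearman rho = {rho:.2f} (p = {pval:.2f})")
```

Spearman's rank correlation is a natural choice here because the claim being tested is ordinal: whether methods that score well theoretically also rank as more useful to humans, not whether the two scales agree numerically.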



Related research

11/17/2022 · CRAFT: Concept Recursive Activation FacTorization for Explainability
Attribution methods are a popular class of explainability methods that u...

06/27/2021 · Crowdsourcing Evaluation of Saliency-based XAI Methods
Understanding the reasons behind the predictions made by deep neural net...

09/28/2021 · Who Explains the Explanation? Quantitatively Assessing Feature Attribution Methods
AI explainability seeks to increase the transparency of models, making t...

05/31/2021 · The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Explaining the decisions of an Artificial Intelligence (AI) model is inc...

06/11/2023 · A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
In recent years, concept-based approaches have emerged as some of the mo...

07/18/2023 · Gradient strikes back: How filtering out high frequencies improves explanations
Recent years have witnessed an explosion in the development of novel pre...

07/23/2022 · A general-purpose method for applying Explainable AI for Anomaly Detection
The need for explainable AI (XAI) is well established but relatively lit...
