Challenging common interpretability assumptions in feature attribution explanations

12/04/2020
by Jonathan Dinu et al.

As machine learning and algorithmic decision-making systems are increasingly deployed in high-stakes, human-in-the-loop settings, there is a pressing need to understand the rationale behind their predictions. Researchers have responded to this need with explainable AI (XAI), but they often proclaim interpretability axiomatically, without evaluation. When these systems are evaluated, it is often through offline simulations with proxy metrics of interpretability (such as model complexity). We empirically evaluate the veracity of three common interpretability assumptions through a large-scale human-subjects experiment with a simple "placebo explanation" control. We find that feature attribution explanations provide only marginal utility to a human decision maker on our task, and in certain cases result in worse decisions due to cognitive and contextual confounders. This result challenges the assumed universal benefit of applying these methods, and we hope this work underscores the importance of human evaluation in XAI research. Supplemental materials, including anonymized data from the experiment, code to replicate the study, an interactive demo of the experiment, and the models used in the analysis, can be found at: https://doi.pizza/challenging-xai.
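To make the terms concrete: a feature attribution explanation assigns each input feature a score reflecting its contribution to a prediction, while a placebo explanation looks like a real attribution but carries no information about the model. The sketch below is a minimal illustration, not the paper's construction: it assumes a scikit-learn logistic regression, uses coefficient-times-input as the attribution, and builds a hypothetical placebo by randomly permuting the attribution scores across features.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit a simple model whose predictions we will "explain".
X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

def feature_attribution(pipeline, x):
    # Coefficient-times-input attribution for a linear model: each
    # feature's score is its standardized value weighted by the
    # model's learned coefficient.
    scaler = pipeline.named_steps["standardscaler"]
    clf = pipeline.named_steps["logisticregression"]
    return clf.coef_[0] * scaler.transform(x.reshape(1, -1))[0]

def placebo_attribution(attribution, rng):
    # Placebo control (an assumption, not the paper's exact design):
    # the same scores randomly reassigned to features, so the
    # "explanation" looks plausible but is uninformative.
    return rng.permutation(attribution)

rng = np.random.default_rng(0)
real = feature_attribution(model, X[0])
fake = placebo_attribution(real, rng)
print("most influential feature (real):   ", int(np.argmax(np.abs(real))))
print("most influential feature (placebo):", int(np.argmax(np.abs(fake))))
```

In a human-subjects evaluation like the one described above, both kinds of explanation would be shown to participants and their downstream decision quality compared.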
