Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning

05/06/2022
by Ulrike Kuhl, et al.

To foster the usefulness and accountability of machine learning (ML), it is essential to explain a model's decisions in addition to evaluating its performance. Accordingly, the field of explainable artificial intelligence (XAI) has resurfaced as a topic of active research, offering approaches to address the "how" and "why" of automated decision-making. Within this domain, counterfactual explanations (CFEs) have gained considerable traction as a psychologically grounded approach to generating post-hoc explanations. To this end, CFEs highlight what changes to a model's input would have changed its prediction in a particular way. However, despite the introduction of numerous CFE approaches, their usability has yet to be thoroughly validated at the human level. Thus, to advance the field of XAI, we introduce the Alien Zoo, an engaging, web-based, game-inspired experimental framework. The Alien Zoo provides the means to evaluate the usability of CFEs for gaining new knowledge from an automated system, targeting novice users in a domain-general context. As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study. Our results suggest that users benefit from receiving CFEs compared to no explanation, both in terms of objective performance in the proposed iterative learning task and subjective usability. With this work, we aim to equip research groups and practitioners with the means to easily run controlled and well-powered user studies to complement their otherwise often more technology-oriented work. Thus, in the interest of reproducible research, we provide the entire code, together with the underlying models and user data.
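To make the core idea concrete: a counterfactual explanation answers "what minimal change to the input would have flipped the model's prediction?" The sketch below is a toy illustration for a linear classifier, where the smallest L2 change has a closed form; it is not the method used in the paper, and all names here are illustrative.

```python
import numpy as np

def linear_counterfactual(w, b, x, margin=1e-6):
    """Smallest L2 perturbation of x that flips the sign of w.x + b,
    i.e., the predicted class of a linear classifier.

    The closest point on the decision boundary lies along w, so we
    project x onto the boundary and step just past it by `margin`."""
    score = w @ x + b
    delta = -(score + np.sign(score) * margin) * w / (w @ w)
    return x + delta

# Toy model and instance (illustrative values only)
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 0.0])            # score = 2.5 -> predicted class 1

x_cf = linear_counterfactual(w, b, x)
print(w @ x_cf + b < 0)             # True: the prediction has flipped
```

For nonlinear models, CFE methods replace this closed-form projection with an optimization that trades off prediction change against the size (and often plausibility) of the perturbation.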


Related research

For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI (06/13/2023)
Counterfactual explanations (CFEs) are a popular approach in explainable...

Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis (09/21/2023)
Post-hoc explainability methods aim to clarify predictions of black-box ...

Integrating Prior Knowledge in Post-hoc Explanations (04/25/2022)
In the field of eXplainable Artificial Intelligence (XAI), post-hoc inte...

Counterfactual Explanations for Machine Learning: A Review (10/20/2020)
Machine learning plays a role in many deployed decision systems, often i...

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals (03/16/2023)
Counterfactual explanations are an increasingly popular form of post hoc...

Achieving Diversity in Counterfactual Explanations: a Review and Discussion (05/10/2023)
In the field of Explainable Artificial Intelligence (XAI), counterfactua...
