The Impact of Imperfect XAI on Human-AI Decision-Making

07/25/2023
by Katelyn Morrison, et al.

Explainability techniques are rapidly being developed to improve human-AI decision-making across various cooperative work settings. Consequently, previous research has evaluated how decision-makers collaborate with imperfect AI by investigating appropriate reliance and task performance, with the aim of designing more human-centered computer-supported collaborative tools. Several human-centered explainable AI (XAI) techniques have been proposed in the hope of improving decision-makers' collaboration with AI; however, these techniques are grounded in findings from previous studies that focus primarily on the impact of incorrect AI advice. Few studies acknowledge the possibility that explanations can be incorrect even when the AI advice is correct. Thus, it is crucial to understand how imperfect XAI affects human-AI decision-making. In this work, we contribute a robust, mixed-methods user study with 136 participants to evaluate how incorrect explanations influence humans' decision-making behavior in a bird-species identification task, taking into account their level of expertise and an explanation's level of assertiveness. Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and on human-AI team performance. We also discuss how explanations can deceive decision-makers during human-AI collaboration. In doing so, we shed light on the impacts of imperfect XAI in the field of computer-supported cooperative work and provide guidelines for designers of human-AI collaboration systems.


