Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

06/26/2020
by   Gagan Bansal, et al.
Increasingly, organizations are pairing humans with AI systems to improve decision-making and reduce costs. Proponents of human-centered AI argue that team performance can improve even further when the AI model explains its recommendations. However, a careful analysis of the existing literature reveals that prior studies observed improvements due to explanations only when the AI, alone, outperformed both the human and the best human-AI team. This raises an important question: can explanations lead to complementary performance, i.e., accuracy higher than both the human and the AI working alone? We address this question by devising comprehensive studies of human-AI teaming, in which participants solve a task with help from an AI system without explanations and from one with varying types of AI explanation support. We carefully controlled conditions to ensure comparable human and AI accuracy across experiments on three NLP datasets (two for sentiment analysis and one for question answering). While we found complementary improvements from AI augmentation, they were not increased by state-of-the-art explanations compared to simpler strategies, such as displaying the AI's confidence. We show that explanations increase the chance that humans will accept the AI's recommendation regardless of whether the AI is correct. While this clarifies the gains in team performance from explanations in prior work, it poses new challenges for human-centered AI: how can we best design systems to produce complementary performance? Can we develop explanatory approaches that help humans decide whether and when to trust AI input?

Related research

05/12/2023
In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
The current literature on AI-advised decision making – involving explain...

01/13/2021
Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
Although AI holds promise for improving human decision making in societa...

04/27/2020
Optimizing AI for Teamwork
In many high-stakes domains such as criminal justice, finance, and healt...

02/05/2020
`Why not give this work to them?' Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees
The problem of multi-agent task allocation arises in a variety of scenar...

12/13/2021
Role of Human-AI Interaction in Selective Prediction
Recent work has shown the potential benefit of selective prediction syst...

08/25/2023
AdvisingNets: Learning to Distinguish Correct and Wrong Classifications via Nearest-Neighbor Explanations
Besides providing insights into how an image classifier makes its predic...

10/23/2022
Learning to Advise Humans By Leveraging Algorithm Discretion
Expert decision-makers (DMs) in high-stakes AI-advised (AIDeT) settings ...
