Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

01/18/2023
by Valerie Chen et al.

AI explanations are often mentioned as a way to improve human-AI decision-making. Yet, empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition – which may be based on domain knowledge, prior task experience, or pattern recognition – with the information provided by the AI system to determine when to override AI predictions. We conduct a think-aloud, mixed-methods study with two explanation types (feature- and example-based) for two prediction tasks to explore how decision-makers' intuition affects their use of AI predictions and explanations, and ultimately their choice of when to rely on AI. Our results identify three types of intuition involved in reasoning about AI predictions and explanations: intuition about the task outcome, features, and AI limitations. Building on these, we summarize three observed pathways for decision-makers to apply their own intuition and override AI predictions. We use these pathways to explain why (1) the feature-based explanations we used did not improve participants' decision outcomes and increased their overreliance on AI, and (2) the example-based explanations we used improved decision-makers' performance over feature-based explanations and helped achieve complementary human-AI performance. Overall, our work identifies directions for further development of AI decision-support systems and explanation methods that help decision-makers effectively apply their intuition to achieve appropriate reliance on AI.


