Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations

02/04/2023
by Max Schemmer, et al.

AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether to actually follow it: they have to "appropriately" rely on the advice, following it when it is correct and turning it down when it is incorrect. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help explain the factors influencing this behavior. In this paper, we propose the Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.
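The abstract frames AoR as a quantifiable, two-dimensional concept built on whether decision-makers follow correct AI advice and override incorrect AI advice. As a rough illustration only, assuming a judge-advisor setup (initial human decision, then AI advice, then final decision), the Python sketch below computes two such ratios from labeled cases; the function, class, and field names are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One task instance in an initial-decision / AI-advice / final-decision setup."""
    initial_correct: bool  # was the human's initial decision correct?
    ai_correct: bool       # was the AI's advice correct?
    final_correct: bool    # was the human's final decision correct?

def reliance_ratios(cases):
    """Return (relative_ai_reliance, relative_self_reliance) -- illustrative names.

    relative_ai_reliance: among cases where the human started out wrong and the
    AI advice was correct, the share in which the human ended up correct
    (i.e., switched to the correct advice).

    relative_self_reliance: among cases where the human started out correct and
    the AI advice was incorrect, the share in which the human stayed correct
    (i.e., resisted the incorrect advice).
    """
    could_gain = [c for c in cases if not c.initial_correct and c.ai_correct]
    could_lose = [c for c in cases if c.initial_correct and not c.ai_correct]

    rair = sum(c.final_correct for c in could_gain) / len(could_gain) if could_gain else float("nan")
    rsr = sum(c.final_correct for c in could_lose) / len(could_lose) if could_lose else float("nan")
    return rair, rsr

# Tiny hypothetical example:
cases = [
    Case(initial_correct=False, ai_correct=True,  final_correct=True),   # followed correct advice
    Case(initial_correct=True,  ai_correct=False, final_correct=True),   # resisted incorrect advice
    Case(initial_correct=True,  ai_correct=False, final_correct=False),  # misled by incorrect advice
]
print(reliance_ratios(cases))  # (1.0, 0.5)
```

In this reading, appropriate reliance means both ratios are high: the decision-maker gains when the AI is right and does not lose when it is wrong.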


Related research

04/14/2022  Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making
Many important decisions in daily life are made with the help of advisor...

04/27/2022  On the Relationship Between Explanations, Fairness Perceptions, and Decisions
It is known that recommendations of AI-based systems can be incorrect or...

09/23/2022  On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Explanations have been framed as an essential feature for better and fai...

07/25/2023  The Impact of Imperfect XAI on Human-AI Decision-Making
Explainability techniques are rapidly being developed to improve human-A...

08/08/2022  An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making
In this work, we empirically examine human-AI decision-making in the pre...

01/23/2023  Selective Explanations: Leveraging Human Input to Align Explainable AI
While a vast collection of explainable AI (XAI) algorithms have been dev...

04/27/2022  Exploring How Anomalous Model Input and Output Alerts Affect Decision-Making in Healthcare
An important goal in the field of human-AI interaction is to help users ...
