Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making

08/08/2023
by   Min Hun Lee, et al.

Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g., health). However, researchers have raised the concern that humans can over-rely on wrong suggestions from an AI model instead of achieving complementary human-AI performance. In this work, we utilized salient feature explanations along with what-if, counterfactual explanations to encourage humans to review AI suggestions more analytically and reduce over-reliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations. Our results showed that the AI model with both salient feature and counterfactual explanations helped therapists and laypersons improve their performance and agreement level on the task when `right' AI outputs were presented. While both therapists and laypersons over-relied on `wrong' AI outputs, counterfactual explanations helped both groups reduce their over-reliance on `wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons suffered larger performance degradation (18.0 F1-score with salient feature explanations and 14.0 F1-score with counterfactual explanations) than therapists (8.6 and 2.8 F1-score, respectively). Our work discusses the potential of counterfactual explanations to help users better estimate the accuracy of an AI model and reduce over-reliance on `wrong' AI outputs, as well as implications for improving human-AI collaborative decision-making.
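As a minimal illustration of the what-if, counterfactual explanations the study examines, the sketch below greedily perturbs the input features of a toy binary classifier until its prediction flips. The model, the feature names (`range`, `smoothness`), and the step size are hypothetical stand-ins for illustration only, not the paper's actual motion-quality assessment model.

```python
def predict(x):
    # Toy motion-quality classifier: a weighted sum of two hypothetical
    # features, thresholded at 0.5. Stands in for the real AI model.
    score = 0.6 * x["range"] + 0.4 * x["smoothness"]
    return "normal" if score >= 0.5 else "abnormal"

def counterfactual(x, target, step=0.05, max_iter=100):
    """Greedily nudge features upward until the prediction flips to `target`.

    Returns the modified feature dict, or None if no flip is found.
    The returned dict answers the what-if question: "how would the input
    have to change for the AI to output a different label?"
    """
    cf = dict(x)
    for _ in range(max_iter):
        # Nudge the higher-weighted feature first, checking after each step.
        cf["range"] = min(1.0, cf["range"] + step)
        if predict(cf) == target:
            return cf
        cf["smoothness"] = min(1.0, cf["smoothness"] + step)
        if predict(cf) == target:
            return cf
    return None

patient = {"range": 0.3, "smoothness": 0.4}
print(predict(patient))                    # -> "abnormal"
cf = counterfactual(patient, "normal")
print(cf)  # feature values that would flip the label to "normal"
```

Presenting `cf` alongside the original prediction shows a user which (and how large) feature changes would alter the AI's output, which is the kind of analytic prompt the study uses to counter over-reliance.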



Related research

06/06/2022 · Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence
07/05/2023 · Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
04/18/2023 · On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted Decision-Making
04/27/2023 · Why not both? Complementing explanations with uncertainty, and the role of self-confidence in Human-AI collaboration
02/08/2022 · Machine Explanations and Human Understanding
12/23/2021 · Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization
02/19/2021 · To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making
