Exploring How Anomalous Model Input and Output Alerts Affect Decision-Making in Healthcare

04/27/2022
by Marissa Radensky, et al.

An important goal in the field of human-AI interaction is to help users trust AI systems' decisions more appropriately. One situation in which users may particularly benefit from more appropriate trust is when the AI receives anomalous input or produces anomalous output. To the best of our knowledge, this is the first work toward understanding how anomaly alerts may contribute to appropriate trust in AI. In a formative mixed-methods study with 4 radiologists and 4 other physicians, we explore how AI alerts for anomalous input, very high and low confidence, and anomalous saliency-map explanations affect users' experience with mockups of an AI clinical decision support system (CDSS) for evaluating chest x-rays for pneumonia. We find evidence suggesting that the four anomaly alerts are desired by non-radiologists, and that the high-confidence alerts are desired by both radiologists and non-radiologists. In a follow-up user study, we investigate how high- and low-confidence alerts affect the accuracy, and thus the appropriate trust, of 33 radiologists working with AI CDSS mockups. We observe that these alerts do not improve users' accuracy or experience, and we discuss potential reasons why.
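To make the four alert types concrete, the sketch below shows how a CDSS mockup might decide which alerts to display for a single chest x-ray. This is a minimal illustration, not the authors' implementation: the threshold values, anomaly scores, and function names are hypothetical.

```python
# Hypothetical sketch of the four anomaly-alert types described in the abstract:
# anomalous input, very high confidence, very low confidence, and an anomalous
# saliency-map explanation. Thresholds and scores are illustrative only.

from dataclasses import dataclass


@dataclass
class Prediction:
    confidence: float              # model's confidence that the x-ray shows pneumonia
    input_anomaly_score: float     # e.g., how far the input lies from the training distribution
    saliency_anomaly_score: float  # e.g., how atypical the saliency-map explanation looks


def anomaly_alerts(pred: Prediction,
                   high_conf: float = 0.95, low_conf: float = 0.55,
                   input_thresh: float = 0.8, saliency_thresh: float = 0.8) -> list[str]:
    """Return the alerts a CDSS mockup might show alongside one prediction."""
    alerts = []
    if pred.input_anomaly_score > input_thresh:
        alerts.append("anomalous input")
    if pred.confidence >= high_conf:
        alerts.append("very high confidence")
    elif pred.confidence <= low_conf:
        alerts.append("very low confidence")
    if pred.saliency_anomaly_score > saliency_thresh:
        alerts.append("anomalous saliency-map explanation")
    return alerts


# Example: a borderline-confidence prediction with an unusual saliency map
print(anomaly_alerts(Prediction(confidence=0.52,
                                input_anomaly_score=0.3,
                                saliency_anomaly_score=0.9)))
# -> ['very low confidence', 'anomalous saliency-map explanation']
```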

Related research

- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making (01/07/2020): Today, AI is being increasingly used to help human experts make decision...
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) (01/26/2022): To benefit from AI advances, users and operators of AI systems must have...
- Why not both? Complementing explanations with uncertainty, and the role of self-confidence in Human-AI collaboration (04/27/2023): AI and ML models have already found many applications in critical domain...
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems (01/25/2023): The dazzling promises of AI systems to augment humans in various tasks h...
- On the Definition of Appropriate Trust and the Tools that Come with it (09/21/2023): Evaluating the efficiency of human-AI interactions is challenging, inclu...
- The Response Shift Paradigm to Quantify Human Trust in AI Recommendations (02/16/2022): Explainability, interpretability and how much they affect human trust in...
- Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations (02/04/2023): AI advice is becoming increasingly popular, e.g., in investment and medi...
