Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching

06/08/2021
by Tomas Folke et al.

Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts' trust in the AI agents. Explainable AI may build such trust by helping medical experts to understand the AI decision processes behind diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI's diagnostic decisions and are more likely to certify the AI for cases when the AI is correct than when it is wrong, indicating appropriate trust. These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
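The core idea behind Bayesian Teaching is that a teacher selects the small set of examples most likely to lead a simulated learner to the target hypothesis. The sketch below is a toy illustration of that selection principle only, not the authors' implementation: the hypotheses, likelihood table, and candidate examples are all invented for the example, and the teacher simply searches for the example set D maximizing the learner's posterior P(h* | D) ∝ P(D | h*) P(h*).

```python
import itertools

def learner_posterior(target, examples, hypotheses, likelihood, prior):
    """Posterior probability a Bayesian learner assigns to `target`
    after observing `examples` (assumed conditionally independent)."""
    scores = {}
    for h in hypotheses:
        p = prior[h]
        for x in examples:
            p *= likelihood(x, h)
        scores[h] = p
    total = sum(scores.values())
    return scores[target] / total if total > 0 else 0.0

def select_teaching_set(target, candidates, hypotheses, likelihood, prior, k=2):
    """Exhaustively search size-k example sets for the one that makes the
    simulated learner most confident in the target hypothesis."""
    best, best_p = None, -1.0
    for subset in itertools.combinations(candidates, k):
        p = learner_posterior(target, subset, hypotheses, likelihood, prior)
        if p > best_p:
            best, best_p = subset, p
    return best, best_p

# Invented toy setup: two hypotheses and candidate "images" summarized by a
# single discrete feature, with a hand-made likelihood table.
hypotheses = ["pneumothorax", "healthy"]
prior = {"pneumothorax": 0.5, "healthy": 0.5}
tables = {
    "pneumothorax": {0: 0.1, 1: 0.4, 2: 0.5},
    "healthy":      {0: 0.6, 1: 0.3, 2: 0.1},
}
likelihood = lambda x, h: tables[h][x]

examples, posterior = select_teaching_set(
    "pneumothorax", [0, 1, 2, 2], hypotheses, likelihood, prior
)
# The selected examples are the ones most diagnostic of "pneumothorax".
```

In the paper's setting, the "examples" shown to radiologists are the training images that best explain the model's diagnosis of a given case; the toy discrete features here stand in for that far richer image space.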


