Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants

12/17/2022
by Isil Guzey, et al.

Although machine learning (ML) models achieve high performance in medicine, they are not free of errors. Empowering clinicians to identify incorrect model recommendations is crucial for engendering trust in medical AI. Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support end users. Several recent studies on biomedical imaging have achieved promising results, yet solutions for models trained on tabular data still fall short of clinicians' requirements. This paper proposes a methodology to support clinicians in identifying failures of ML models trained on tabular data. We built our methodology on three main pillars: decomposing the feature set by leveraging the clinical-context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations. We demonstrated our methodology on the ML-based recognition of preterm infant morbidities caused by infection, a domain in which model failures risk mortality, lifelong disability, and antibiotic resistance, and where identifying such failures has remained an open research question. With our approach, we identified misclassification cases of two models. By contextualizing local explanations, our solution provides clinicians with actionable insights that support their autonomy in making informed final decisions.
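To make the LSS pillar concrete, below is a minimal, illustrative sketch of a latent-space-similarity style local explanation for a tabular classifier. It is not the authors' implementation: the abstract only names the technique, so the latent encoder (a PCA projection), the cosine-similarity measure, the neighbour-agreement flagging rule, and the lss_explanation helper are all assumptions chosen for clarity.

```python
# Sketch of an LSS-style local explanation: NOT the paper's method,
# only an assumed instantiation of "latent space similarity".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular clinical dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The classifier whose recommendations clinicians must be able to contest.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Hypothetical latent space: here a plain PCA projection of the features;
# the paper instead decomposes the feature set using clinical context.
latent = PCA(n_components=5, random_state=0).fit(X_train)
Z_train = latent.transform(X_train)

def lss_explanation(x, k=5):
    """Return the k most similar training cases in latent space and a
    crude mistrust signal: disagreement between the model's prediction
    for x and the labels of its latent-space neighbours."""
    z = latent.transform(x.reshape(1, -1))
    sims = cosine_similarity(z, Z_train).ravel()
    neighbours = np.argsort(sims)[::-1][:k]
    pred = clf.predict(x.reshape(1, -1))[0]
    agreement = np.mean(y_train[neighbours] == pred)
    return neighbours, pred, agreement

neighbours, pred, agreement = lss_explanation(X_test[0])
print(f"prediction={pred}, neighbour agreement={agreement:.2f}")
# Low agreement suggests the prediction may warrant clinical scrutiny.
```

The intuition behind this sketch is that a prediction which disagrees with the labels of the most similar training cases in latent space is a natural candidate for clinical review, which matches the abstract's goal of surfacing potential misclassifications to support clinicians' final decisions.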

