Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?

06/21/2023
by   Aida Branković, et al.

The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms. Although various methods of explainable artificial intelligence (XAI) have been proposed, little literature examines their practicality or assesses them against criteria that could foster trust in clinical environments. To address this gap, this study evaluates two popular XAI methods used to explain predictive models in healthcare in terms of whether they (i) generate domain-appropriate representations, i.e. representations coherent with respect to the application task, (ii) impact clinical workflow, and (iii) are consistent. To that end, explanations generated at the cohort and patient levels were analysed. The paper reports the first benchmarking of XAI methods applied to risk prediction models, obtained by evaluating the concordance between generated explanations and the trigger of a future clinical deterioration episode recorded by the data collection system. The analysis used two Electronic Medical Record (EMR) datasets sourced from major Australian hospitals. The findings underscore both the limitations of state-of-the-art XAI methods in the clinical context and their potential benefits. We discuss these limitations and contribute to the theoretical development of trustworthy XAI solutions, where clinical decision support guides the choice of intervention by suggesting the patterns or drivers of future clinical deterioration.
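To illustrate the kind of concordance check the abstract describes, the sketch below builds a synthetic risk-prediction setup and measures how often a per-patient explanation ranks a known "trigger" feature first. Everything here is an assumption for illustration: the data is synthetic (not EMR), the model is a hand-rolled logistic regression, and the attribution is a simple coefficient-times-value score standing in for the (unnamed) XAI methods evaluated in the paper.

```python
import numpy as np

# Synthetic cohort: 500 "patients", 5 features; feature 0 is the
# (hypothetical) trigger of deterioration, with the largest true weight.
rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, 0.5, 0.0, 0.0, 0.0])
y = ((X @ w_true + rng.normal(scale=0.5, size=n)) > 0).astype(float)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Fit logistic regression by gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

w = fit_logreg(X, y)

def attribution(w, x):
    """Per-feature attribution: coefficient times feature value."""
    return w * x

# Cohort-level explanation: mean absolute attribution per feature.
cohort_importance = np.abs(attribution(w, X)).mean(axis=0)
top_cohort_feature = int(np.argmax(cohort_importance))

# Patient-level concordance: fraction of at-risk patients whose
# top-attributed feature is the known trigger (feature 0).
at_risk = X[y == 1]
patient_top = np.argmax(np.abs(attribution(w, at_risk)), axis=1)
concordance = float((patient_top == 0).mean())

print(top_cohort_feature, round(concordance, 2))
```

Concordance here is one score per cohort; the paper's evaluation additionally compares explanations across methods and levels (cohort vs. patient) for consistency, which this minimal sketch does not attempt.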

