Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations

02/21/2023
by Aditya Bhattacharya, et al.

Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable, and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize that risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the different explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that our participants preferred our representation of data-centric explanations, which provide local explanations with a global overview, over the other methods. This paper therefore highlights the importance of visually directive data-centric explanation methods for assisting healthcare experts in gaining actionable insights from patient health records. Furthermore, we share design implications for tailoring the visual representation of different explanation methods to healthcare experts.


