A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI

03/08/2021
by Jamie Andrew Duell, et al.

Explainable Artificial Intelligence (XAI) is a rising field in AI. It aims to establish a demonstrable basis for trust, which for human subjects is achieved through communicative means that Machine Learning (ML) algorithms cannot provide on their own, illustrating the need for an additional layer that supports the model output. In the medical field, challenges arise from the involvement of human subjects: the idea of trusting a machine with decisions that affect a human's wellbeing poses an ethical conundrum, leaving trust as the basis on which a human expert accepts the machine's decision. The aim of this paper is to apply XAI methods to demonstrate the usability of explainable architectures as a tertiary layer for the medical domain, supporting ML predictions and human-expert opinion. XAI methods produce visualizations of feature contributions to a given model's output at both the local and global level. The work in this paper uses XAI to determine feature importance for high-dimensional, data-driven questions and to inform domain experts of identifiable trends, with a comparison of model-agnostic methods applied to ML algorithms. Performance metrics for a glass-box method are also provided as a comparison against black-box capability on tabular data. Future work will aim to produce a user study with metrics to evaluate human-expert usability and opinion of the given models.
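As an illustration of the kind of pipeline described above, the sketch below applies SHAP, a model-agnostic XAI method, to a black-box gradient-boosting classifier on a public tabular dataset, producing a local explanation for one prediction and a global feature-contribution summary, then reports a performance metric for an Explainable Boosting Machine glass-box model on the same split. The libraries, dataset, and model choices here are illustrative assumptions, not necessarily those used in the paper.

```python
# Minimal sketch, assuming SHAP as the model-agnostic explainer, a
# gradient-boosting classifier as the black-box model, and an Explainable
# Boosting Machine as the glass-box; the dataset is a stand-in for EHR data
# and none of these choices are taken from the paper itself.
import shap
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Public tabular dataset standing in for high-dimensional health records.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model whose predictions the explanation layer supports.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic feature contributions (SHAP values) for the test set.
explainer = shap.Explainer(black_box, X_train)
shap_values = explainer(X_test)

# Local explanation: feature contributions for a single prediction.
shap.plots.waterfall(shap_values[0])

# Global explanation: contribution distribution across all test samples.
shap.plots.beeswarm(shap_values)

# Glass-box comparison: an interpretable model scored on the same split.
glass_box = ExplainableBoostingClassifier(random_state=0).fit(X_train, y_train)
print("black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
print("glass-box accuracy:", accuracy_score(y_test, glass_box.predict(X_test)))
```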

