QXAI: Explainable AI Framework for Quantitative Analysis in Patient Monitoring Systems

09/19/2023
by Thanveer Shaik, et al.

Artificial intelligence techniques can classify a patient's physical activities and predict vital signs for remote patient monitoring. However, regression based on non-linear models such as deep learning offers limited explainability due to their black-box nature, which can force decision-makers, especially in healthcare, to rely on model outputs they cannot interrogate. In non-invasive monitoring, patient data from tracking sensors together with predisposing clinical attributes serve as input features for predicting future vital signs, and explaining how each feature contributes to the monitoring application's output is critical for a clinician's decision-making. In this study, we propose an Explainable AI for Quantitative analysis (QXAI) framework that provides both post-hoc and intrinsic explainability for regression and classification tasks in a supervised learning setting. This is achieved by combining Shapley value attributions with attention mechanisms in deep learning models. We adopted an artificial neural network (ANN) and an attention-based bidirectional LSTM (BiLSTM) to predict heart rate and to classify physical activities from sensor data, and both models achieved state-of-the-art results on their respective tasks. Global and local explanations were computed on the input data to understand the feature contributions of various patient data. The proposed QXAI framework was evaluated on the PPG-DaLiA dataset for heart-rate prediction and the mobile health (MHEALTH) dataset for sensor-based activity classification. A Monte Carlo approximation was applied within the framework to overcome the time complexity and high computational cost of exact Shapley value calculations.
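The post-hoc side of the framework rests on Shapley value attributions, with a Monte Carlo approximation used to keep the computation tractable. The sketch below illustrates the generic permutation-sampling estimator of Shapley values for a single prediction; the toy heart-rate model, feature counts, and sample sizes are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of Monte Carlo Shapley value estimation for one prediction
# (permutation sampling). Model, feature names, and sample counts are
# placeholders, not the QXAI framework's actual code.
import numpy as np

def monte_carlo_shapley(model, x, background, n_samples=200, rng=None):
    """Approximate Shapley values for instance `x`.

    model      : callable mapping a (n_features,) array to a scalar prediction
    x          : instance to explain, shape (n_features,)
    background : reference rows used to marginalise "absent" features,
                 shape (n_background, n_features)
    n_samples  : number of sampled permutations per feature
    """
    rng = rng or np.random.default_rng(0)
    n_features = x.shape[0]
    phi = np.zeros(n_features)

    for j in range(n_features):
        contributions = np.zeros(n_samples)
        for s in range(n_samples):
            perm = rng.permutation(n_features)             # random feature order
            z = background[rng.integers(len(background))]  # random reference row
            # Features preceding j in the permutation take the instance's values;
            # the remaining features take the reference row's values.
            preceding = perm[:np.where(perm == j)[0][0]]
            x_with = z.copy(); x_with[preceding] = x[preceding]; x_with[j] = x[j]
            x_without = z.copy(); x_without[preceding] = x[preceding]
            contributions[s] = model(x_with) - model(x_without)
        phi[j] = contributions.mean()        # average marginal contribution
    return phi

# Toy usage: explain a linear "heart-rate" model of three sensor features.
weights = np.array([0.5, 1.5, -0.8])
model = lambda v: float(v @ weights + 60.0)
background = np.random.default_rng(1).normal(size=(50, 3))
x = np.array([1.0, 0.2, -0.5])
print(monte_carlo_shapley(model, x, background))  # per-feature contributions
```

The sampling budget `n_samples` trades estimation variance against runtime, which is the lever the abstract alludes to when it mentions overcoming the cost of exact Shapley computation.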
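For intrinsic explainability, the framework pairs a BiLSTM with an attention mechanism so that the learned attention weights indicate which time steps of a sensor window drove a prediction. A minimal PyTorch sketch of such a model follows; the layer sizes, additive scoring layer, and channel/class counts (chosen to resemble MHEALTH) are assumptions rather than the exact QXAI architecture.

```python
# Minimal sketch of an attention-based BiLSTM activity classifier, assuming
# PyTorch. Hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64, n_classes=12):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden_size,
                              batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden_size, 1)      # attention scorer
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features) windows of sensor readings
        h, _ = self.bilstm(x)                           # (batch, time, 2*hidden)
        attn = torch.softmax(self.score(h), dim=1)      # per-time-step weights
        context = (attn * h).sum(dim=1)                 # attention-weighted summary
        logits = self.classifier(context)
        # Returning the attention weights exposes which time steps influenced
        # the prediction, i.e. the model's intrinsic explanation.
        return logits, attn.squeeze(-1)

# Toy usage on random sensor windows (batch of 8, 100 time steps, 23 channels).
model = AttentionBiLSTM(n_features=23)
logits, attention = model(torch.randn(8, 100, 23))
print(logits.shape, attention.shape)  # torch.Size([8, 12]) torch.Size([8, 100])
```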


