Explainable Artificial Intelligence for Pharmacovigilance: What Features Are Important When Predicting Adverse Outcomes?

12/25/2021
by Isaac Ronald Ward, et al.

Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions using Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome. Using XAI, we quantified the contribution that specific drugs had on these ACS predictions, thus creating an XAI-based technique for pharmacovigilance monitoring, using ACS as an example of the adverse outcome to detect. Individuals aged over 65 who were supplied Musculo-skeletal system (anatomical therapeutic chemical (ATC) class M) or Cardiovascular system (ATC class C) drugs between 1993 and 2009 were identified, and their drug histories, comorbidities, and other key features were extracted from linked Western Australian datasets. Multiple ML models were trained to predict if these individuals would have an ACS related adverse outcome (i.e., death or hospitalisation with a discharge diagnosis of ACS), and a variety of ML and XAI techniques were used to calculate which features, specifically which drugs, led to these predictions. The drug dispensing features for rofecoxib and celecoxib were found to have a greater than zero contribution to ACS related adverse outcome predictions (on average), and it was found that ACS related adverse outcomes can be predicted with 72% accuracy. Furthermore, the XAI libraries LIME and SHAP were found to successfully identify both important and unimportant features, with SHAP slightly outperforming LIME. ML models trained on linked administrative health datasets in tandem with XAI algorithms can successfully quantify feature importance, and with further development, could potentially be used as pharmacovigilance monitoring techniques.
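The abstract describes training tabular classifiers on drug-dispensing and comorbidity features and then using SHAP (and LIME) to quantify each drug feature's average contribution to the adverse-outcome prediction. The sketch below illustrates that general pattern on synthetic data; the feature names (e.g. rofecoxib_dispensings), the gradient-boosting model, and the generated dataset are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Minimal sketch: train a binary classifier on tabular health-style features and
# use SHAP to estimate each feature's average contribution to the positive
# (adverse outcome) prediction. All data and feature names are synthetic stand-ins.

import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the linked-dataset features described in the abstract
# (drug dispensing counts, comorbidities, age); names are illustrative only.
feature_names = [
    "rofecoxib_dispensings", "celecoxib_dispensings", "statin_dispensings",
    "age", "diabetes_comorbidity", "hypertension_comorbidity",
]
X, y = make_classification(
    n_samples=2000, n_features=len(feature_names), n_informative=4,
    weights=[0.85, 0.15], random_state=0,
)
X = pd.DataFrame(X, columns=feature_names)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0,
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# SHAP values: per-instance, per-feature contributions to the model's output
# (log-odds of the adverse-outcome class for this model type).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Mean SHAP value per feature: a mean greater than zero indicates the feature
# pushes predictions toward the adverse-outcome class on average.
mean_contribution = pd.Series(shap_values.mean(axis=0), index=feature_names)
print(mean_contribution.sort_values(ascending=False))
```

A per-patient view of the same question could be obtained with LIME's LimeTabularExplainer, which fits a local surrogate model around a single instance rather than attributing the model output directly; that difference in approach is one reason SHAP and LIME can rank features slightly differently, as the abstract reports.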


research
03/28/2023

Predicting Adverse Neonatal Outcomes for Preterm Neonates with Multi-Task Learning

Diagnosis of adverse neonatal outcomes is crucial for preterm survival s...
research
01/17/2023

MAFUS: a Framework to predict mortality risk in MAFLD subjects

Metabolic (dysfunction) associated fatty liver disease (MAFLD) establish...
research
08/17/2023

Explainable AI for tool wear prediction in turning

This research aims to develop an Explainable Artificial Intelligence (XAI) ...
research
04/04/2023

Characterizing the contribution of dependent features in XAI methods

Explainable Artificial Intelligence (XAI) provides tools to help underst...
research
10/05/2018

Predicting and Explaining Behavioral Data with Structured Feature Space Decomposition

Modeling human behavioral data is challenging due to its scale, sparsene...
