
Rationale production to support clinical decision-making

by Niall Taylor, et al. (University of Oxford)

The development of neural networks for clinical artificial intelligence (AI) relies on interpretability, transparency, and performance. The need to open the black-box neural network and derive interpretable explanations of model output is paramount. One task of high clinical importance is predicting the likelihood that a patient will be readmitted to hospital in the near future, which enables efficient triage. With the increasing adoption of electronic health records (EHRs), there is great interest in applying natural language processing (NLP) to the clinical free text contained within EHRs. In this work, we apply InfoCal, the current state-of-the-art model for producing extractive rationales alongside its predictions, to the task of predicting hospital readmission from hospital discharge notes. We compare the extractive rationales produced by InfoCal against competitive transformer-based models pretrained on clinical text, for which the attention mechanism can be used for interpretation. We find that each model, paired with its chosen interpretability or feature-importance method, yields varying results, and that clinical-language domain expertise and pretraining are critical to both performance and subsequent interpretability.
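As a rough illustration of the attention-based interpretation mentioned above, the following is a minimal sketch of one common (though debated) recipe: score each token by the attention a [CLS]-style summary token pays it, averaged over heads, and take the top-scoring tokens as a rationale. The attention weights here are mocked; in practice they would be extracted from a clinical transformer's final layer, and the function name and example tokens are purely illustrative.

```python
def rationale_from_attention(tokens, attention, top_k=2):
    """Rank tokens by mean attention from the summary token (row 0).

    attention: nested lists shaped [heads][seq][seq], row-stochastic per row.
    Returns the top_k highest-scoring tokens, excluding the summary slot.
    """
    seq_len = len(tokens)
    n_heads = len(attention)
    # Average each head's summary-token row to get one score per token.
    scores = [
        sum(attention[h][0][i] for h in range(n_heads)) / n_heads
        for i in range(seq_len)
    ]
    ranked = sorted(range(1, seq_len), key=lambda i: scores[i], reverse=True)
    return [tokens[i] for i in ranked[:top_k]]

tokens = ["[CLS]", "patient", "discharged", "with", "heart", "failure"]
# Mock attention: two identical heads, each a 6x6 row-stochastic matrix
# whose summary row concentrates on "heart" and "failure".
head = [[0.05, 0.10, 0.05, 0.05, 0.35, 0.40]] + [[1 / 6] * 6] * 5
attention = [head, head]
print(rationale_from_attention(tokens, attention))  # ['failure', 'heart']
```

Note that this is a feature-importance heuristic, not an extractive rationale in the InfoCal sense: InfoCal selects a token subset that alone must support the prediction, whereas attention scores are read off a model that saw the full input.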



