RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism

08/19/2016
by Edward Choi, et al.

Accuracy and interpretability are two dominant features of successful predictive models. Typically, a choice must be made in favor of complex black-box models such as recurrent neural networks (RNN) for accuracy versus less accurate but more interpretable traditional models such as logistic regression. This tradeoff poses challenges in medicine, where both accuracy and interpretability are important. We addressed this challenge by developing the REverse Time AttentIoN model (RETAIN) for application to Electronic Health Records (EHR) data. RETAIN achieves high accuracy while remaining clinically interpretable and is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses). RETAIN mimics physician practice by attending to the EHR data in reverse time order, so that recent clinical visits are likely to receive higher attention. RETAIN was tested on a large health system EHR dataset with 14 million visits completed by 263K patients over an 8-year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNN, and ease of interpretability comparable to traditional models.
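The abstract describes RETAIN's core idea: two RNNs run over the visit sequence in reverse time order, one producing a scalar attention weight per visit and the other a per-variable attention vector within each visit, which are then combined into a context vector for prediction. The sketch below is an illustrative, simplified rendering of that structure in PyTorch, not the authors' implementation; the layer sizes, class and parameter names (RetainSketch, emb_dim, hidden_dim), and the single end-of-sequence prediction are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    """Minimal sketch of a RETAIN-style two-level reverse-time attention model.

    Illustrative only: dimensions and the single prediction per patient are
    simplifying assumptions, not the paper's exact configuration.
    """
    def __init__(self, num_codes, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim, bias=False)   # v_i = W_emb x_i
        self.rnn_alpha = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.rnn_beta = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.alpha_fc = nn.Linear(hidden_dim, 1)        # visit-level (scalar) attention
        self.beta_fc = nn.Linear(hidden_dim, emb_dim)   # variable-level (vector) attention
        self.out = nn.Linear(emb_dim, 1)

    def forward(self, x):
        # x: (batch, num_visits, num_codes) multi-hot vectors of clinical codes per visit
        v = self.embed(x)                                # (B, T, emb_dim)
        v_rev = torch.flip(v, dims=[1])                  # feed visits in reverse time order
        g, _ = self.rnn_alpha(v_rev)
        h, _ = self.rnn_beta(v_rev)
        g = torch.flip(g, dims=[1])                      # restore chronological order
        h = torch.flip(h, dims=[1])
        alpha = torch.softmax(self.alpha_fc(g), dim=1)   # which visits are influential
        beta = torch.tanh(self.beta_fc(h))               # which variables matter within a visit
        context = (alpha * beta * v).sum(dim=1)          # attention-weighted sum over visits
        return torch.sigmoid(self.out(context)).squeeze(-1)
```

Because the attention weights alpha and beta multiply the visit embeddings directly, each input code's contribution to the final score can be read off, which is the source of the interpretability claimed in the abstract.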

research
09/08/2020

Enhancing the Interpretability of Deep Models in Healthcare Through Attention: Application to Glucose Forecasting for Diabetic People

The adoption of deep learning in healthcare is hindered by their "black ...
research
09/15/2021

Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series

Time series models with recurrent neural networks (RNNs) can have high a...
research
12/03/2018

Predicting Blood Pressure Response to Fluid Bolus Therapy Using Attention-Based Neural Networks for Clinical Interpretability

Determining whether hypotensive patients in intensive care units (ICUs) ...
research
03/24/2020

TRACER: A Framework for Facilitating Accurate and Interpretable Analytics for High Stakes Applications

In high stakes applications such as healthcare and finance analytics, th...
research
05/28/2018

RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records

In the past decade, we have seen many successful applications of recurre...
research
09/08/2020

Interpreting Deep Glucose Predictive Models for Diabetic People Using RETAIN

Progress in the biomedical field through the use of deep learning is hin...
research
04/27/2020

Interpretable Multi-Task Deep Neural Networks for Dynamic Predictions of Postoperative Complications

Accurate prediction of postoperative complications can inform shared dec...
