TRACER: A Framework for Facilitating Accurate and Interpretable Analytics for High Stakes Applications

03/24/2020
by   Kaiping Zheng, et al.

In high stakes applications such as healthcare and finance analytics, the interpretability of predictive models is necessary for domain practitioners to trust the predictions. Traditional machine learning models, e.g., logistic regression (LR), are inherently easy to interpret. However, many of these models aggregate time-series data without considering the temporal correlations and variations, so their performance cannot match that of recurrent neural network (RNN) based models, which are nonetheless difficult to interpret. In this paper, we propose a general framework, TRACER, to facilitate accurate and interpretable predictions, with a novel model, TITV, devised for healthcare analytics and other high stakes applications such as financial investment and risk management. Different from LR and other existing RNN-based models, TITV is designed to capture both the time-invariant and the time-variant feature importance, using a feature-wise transformation subnetwork for the feature influence shared over the entire time series and a self-attention subnetwork for the time-related importance. Healthcare analytics is adopted as a driving use case, and we note that the proposed TRACER is also applicable to other domains, e.g., fintech. We evaluate the accuracy of TRACER extensively on two real-world hospital datasets, and our doctors/clinicians further validate the interpretability of TRACER at both the patient level and the feature level. TRACER is additionally validated in a high stakes financial application and a critical temperature forecasting application. The experimental results confirm that TRACER facilitates both accurate and interpretable analytics for high stakes applications.
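The abstract describes TITV as combining two sources of feature importance: time-invariant gates shared across the whole series (from a feature-wise transformation subnetwork) and time-variant weights over steps (from a self-attention subnetwork). The minimal sketch below illustrates that general idea in plain Python; all function and parameter names here are illustrative assumptions, not the paper's actual subnetworks, which are learned neural modules rather than fixed parameter vectors.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def titv_style_score(series, feat_gate_params, time_score_params, out_weights, bias=0.0):
    """Score one time series of shape [T][F] (T steps, F features).

    feat_gate_params : F values -> sigmoid gates = time-invariant feature importance
    time_score_params: F values -> per-step relevance scores, softmax-normalized
                       over time = time-variant (attention) importance
    out_weights      : F values for a final logistic output layer
    """
    T, F = len(series), len(series[0])
    # Time-invariant importance: one gate per feature, shared by every step.
    feat_gates = [sigmoid(p) for p in feat_gate_params]
    # Time-variant importance: attention weights over the T steps.
    step_scores = [sum(w * x for w, x in zip(time_score_params, step)) for step in series]
    attn = softmax(step_scores)
    # Aggregate: each value weighted by its feature gate and its step's attention.
    context = [sum(attn[t] * feat_gates[f] * series[t][f] for t in range(T))
               for f in range(F)]
    logit = bias + sum(w * c for w, c in zip(out_weights, context))
    return sigmoid(logit), feat_gates, attn
```

Because both weight vectors are explicit, a prediction can be traced back to which features mattered overall (`feat_gates`) and which time steps mattered for this particular input (`attn`), which is the interpretability style the abstract claims for the patient level and feature level.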


research
09/15/2021

Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series

Time series models with recurrent neural networks (RNNs) can have high a...
research
02/17/2021

Dynamic and interpretable hazard-based models of traffic incident durations

Understanding and predicting the duration or "return-to-normal" time of ...
research
08/19/2016

RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism

Accuracy and interpretability are two dominant features of successful pr...
research
10/26/2020

Benchmarking Deep Learning Interpretability in Time Series Predictions

Saliency methods are used extensively to highlight the importance of inp...
research
09/08/2020

Enhancing the Interpretability of Deep Models in Healthcare Through Attention: Application to Glucose Forecasting for Diabetic People

The adoption of deep learning in healthcare is hindered by their "black ...
research
05/31/2023

EAMDrift: An interpretable self retrain model for time series

The use of machine learning for time series prediction has become increa...
