Deep Physiological State Space Model for Clinical Forecasting

12/04/2019 ∙ by Yuan Xue, et al. ∙ Google

Clinical forecasting based on electronic medical records (EMR) can uncover the temporal correlations between patients' conditions and outcomes from sequences of longitudinal clinical measurements. In this work, we propose an intervention-augmented deep state space generative model to capture the interactions among clinical measurements and interventions by explicitly modeling the dynamics of patients' latent states. Based on this model, we are able to make a joint prediction of the trajectories of future observations and interventions. Empirical evaluations show that our proposed model compares favorably to several state-of-the-art methods on real EMR data.




1 Introduction

The wide adoption of electronic medical records (EMR) has resulted in the collection of an enormous amount of patient measurements over time in the form of time-series data. These retrospective data contain valuable information that captures the intricate relationships between patient conditions and outcomes, and present a promising avenue for improving patient healthcare.

Recently, machine learning methods have been increasingly applied to EMR data to predict patient outcomes such as mortality or diagnosis Rajkomar et al. (2018); Sha and Wang (2017); Che et al. (2018); Choi et al. (2015); Lipton et al. (2016); Song et al. (2018); Liu and Hauskrecht (2013, 2016); Wu et al. (2017). Yet, integrating these predictions into clinicians' workflows still faces significant challenges: the alerts generated by these machine learning algorithms provide few insights into why the predictions are made and how to act on them. In this paper, we present a deep state space generative model, augmented with intervention forecasting, which provides a principled way to capture the interactions among observations, interventions, hidden patient states, and their uncertainty. Based on this model, we are able to provide simultaneous forecasting of biomarker trajectories and guide clinicians with intervention suggestions. The ability to jointly forecast multiple clinical variables gives clinicians a full picture of a patient's medical condition and better supports their decision making.

2 Learning Task and Model

Consider a longitudinal EMR system with N patients. We discretize and calibrate patient n's longitudinal records to a time window [0, T_n], where 0 and T_n represent the times when the patient first and last interacts with the system. When the context is clear, we simplify the notation by dropping the index n. We consider two types of time series data in EMR: 1) observations x_t, a real-valued vector of D dimensions. Each dimension corresponds to one type of clinical measurement, including vital signs and lab results (e.g., mean blood pressure, serum lactate). We use x_{1:t} to denote the sequence of measurements at the discrete time points 1, ..., t; 2) interventions u_t, a real-valued vector of K dimensions. Each dimension corresponds to one type of clinical intervention, and its value indicates the presence and the level of the intervention, such as the dosage of a medication being administered or the settings of a mechanical ventilator. Similarly, u_{1:t} denotes the sequence of interventions at 1, ..., t. At prediction time t, given the sequence of observations x_{1:t} and interventions u_{1:t}, we estimate the distribution of the future observations x_{t+1:t+H} and interventions u_{t+1:t+H}, where H denotes the forecast horizon.

Figure 1: Graphical Model for the Patient Physiological State.

To provide a joint forecast, we need a powerful model that captures the temporal correlations among observations and interventions. To this end, we adopt a Gaussian state space model to explicitly model the latent patient physiological state, as shown in Fig. 1. Let z_t be the latent variable vector that represents the physiological state at time t and z_{1:T} be the sequence of such latent variables. The system dynamics are defined as:

z_t = f(z_{t-1}) + g(u_t) + ε_t,   ε_t ~ N(0, Q)    (1)
x_t = h(z_t) + δ_t,   δ_t ~ N(0, R)    (2)

where Eq. (1) defines the state transition: the function f defines the system transition without external influence, i.e., how the patient state evolves from z_{t-1} to z_t without intervention, and g captures the effect of the intervention u_t on the patient state. In Eq. (2), h captures the relationship between the internal state z_t and the observable measurements x_t. Q and R are the process and measurement noise covariance matrices, which we assume to be time-invariant. Eq. (1) and (2) subsume a large family of linear and non-linear state space models. For example, by setting f, g, h to be matrices, we obtain linear state space models. By parameterizing f, g, h via deep neural networks, we have deep state space models.
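As an illustration, the generative dynamics of Eqs. (1)-(2) in the linear special case can be sketched as follows; all dimensions and matrix values here are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent state z, intervention u, observation x.
DZ, DU, DX = 4, 2, 3

# Linear special case of Eqs. (1)-(2): f, g, h are matrices (values hypothetical).
F = 0.9 * np.eye(DZ)                 # f: state transition without external influence
G = 0.1 * rng.normal(size=(DZ, DU))  # g: effect of the intervention on the state
H = rng.normal(size=(DX, DZ))        # h: emission from latent state to observations
Q = 0.01 * np.eye(DZ)                # process noise covariance (time-invariant)
R = 0.01 * np.eye(DX)                # measurement noise covariance (time-invariant)

def generate_step(z_prev, u_t, rng):
    """One generative step: z_t = F z_{t-1} + G u_t + eps_t, x_t = H z_t + delta_t."""
    z_t = F @ z_prev + G @ u_t + rng.multivariate_normal(np.zeros(DZ), Q)
    x_t = H @ z_t + rng.multivariate_normal(np.zeros(DX), R)
    return z_t, x_t

# Roll the model forward from z_0 = 0 under a few random interventions.
z = np.zeros(DZ)
for t in range(5):
    z, x = generate_step(z, rng.normal(size=DU), rng)
```

Swapping the matrix multiplications for neural network calls yields the deep variant without changing the generative structure.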

Intervention Forecast. Contrary to classical state space models, where interventions are usually considered external factors, when inferring patient states from EMR data, interventions are an integral part of the system, as they are determined by clinicians based on their estimation of patient states and on medical knowledge and clinical guidelines. To model this relationship, we augment the state space model with an additional dependency from z_{t-1} to u_t, as shown in Fig. 1.

u_t = c(z_{t-1}) + ξ_t,   ξ_t ~ N(0, S)    (3)

Similarly, c in Eq. (3) can be either a matrix for a linear model or parameterized by a neural network for a nonlinear model. S is the intervention noise covariance.
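A minimal sketch of the linear case of the intervention model in Eq. (3); the dimensions and matrix values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
DZ, DU = 4, 2                    # hypothetical latent and intervention dimensions

C = rng.normal(size=(DU, DZ))    # linear case: c is a matrix
S = 0.05 * np.eye(DU)            # intervention noise covariance

def forecast_intervention(z_prev, rng):
    """Eq. (3): the next intervention u_t is predicted from the previous latent
    state z_{t-1}, mimicking a clinician acting on the estimated patient state."""
    return C @ z_prev + rng.multivariate_normal(np.zeros(DU), S)

u = forecast_intervention(np.ones(DZ), rng)
```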

3 Method

Our state space model is fully specified by the generative parameters θ = {f, g, h, c, Q, R, S}. In this section, we present two learning objectives and their associated variational lower bounds that support the clinical forecasting tasks described in Sec. 2. We also present the learning algorithm and the neural network models used for learning.

3.1 System Identification

One classical method of estimating these parameters is to maximize the data likelihood over the entire patient record. We consider maximizing the joint likelihood of observations and interventions. Note that this objective is slightly different from the learning of classical state space models, which maximize the conditional likelihood of observations given interventions Fraccaro et al. (2016, 2017); Krishnan et al. (2015). This task is referred to as system identification.

max_θ log p_θ(x_{1:T}, u_{1:T}) = max_θ log ∫ p_θ(x_{1:T}, u_{1:T}, z_{1:T}) dz_{1:T}    (4)

This log likelihood is intractable, as it requires inferring the posterior p_θ(z_{1:T} | x_{1:T}, u_{1:T}). We adopt the variational inference method by introducing a variational distribution q_φ(z_{1:T} | x_{1:T}, u_{1:T}) that approximates this posterior. To simplify the notation, we assume z_0 to be a fixed zero vector and use x for x_{1:T}, u for u_{1:T}, and z for z_{1:T}. We optimize the evidence lower bound (ELBO) given as follows:

log p_θ(x, u) ≥ E_{q_φ(z|x,u)}[ log p_θ(x, u | z) ] − KL( q_φ(z | x, u) || p_θ(z) )    (5)

Similar to Krishnan et al. (2015), this ELBO can be factorized along time as:

ELBO = Σ_{t=1}^{T} E_{q_φ}[ log p_θ(x_t | z_t) + log p_θ(u_t | z_{t-1}) ] − Σ_{t=1}^{T} E_{q_φ}[ KL( q_φ(z_t | z_{t-1}, x, u) || p_θ(z_t | z_{t-1}, u_t) ) ]    (6)

The lower bound in Eq. (6) has two components: 1) the reconstruction loss for both observations and interventions; and 2) the regularization loss, which measures the difference between the encoder q_φ and the prior distribution of the latent state p_θ(z_t | z_{t-1}, u_t) given by the transition model between z_{t-1} and z_t as defined in the state space model (Eq. (1)).
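Under diagonal-Gaussian assumptions for the encoder, prior, and likelihoods (an assumption of this sketch, not stated by the paper), each per-step term of the factorized ELBO can be computed in closed form:

```python
import numpy as np

def gaussian_loglik(x, mu, var):
    """Log density of a diagonal Gaussian, summed over dimensions."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(N(mu_q, diag var_q) || N(mu_p, diag var_p))."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo_step(x_t, x_mu, x_var, u_t, u_mu, u_var,
              q_mu, q_var, p_mu, p_var):
    """Per-step ELBO term: reconstruction of x_t and u_t, minus the KL between
    the encoder posterior over z_t and the transition prior p(z_t | z_{t-1}, u_t)."""
    recon = gaussian_loglik(x_t, x_mu, x_var) + gaussian_loglik(u_t, u_mu, u_var)
    return recon - gaussian_kl(q_mu, q_var, p_mu, p_var)
```

Summing `elbo_step` over t gives the (negative) training loss for system identification.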

3.2 Trajectory Forecast

While the system identification task tries to capture the inherent dynamics of a patient, it does not directly optimize for forecasting the values of the observations over the next period, unless the system dynamics are homogeneous. Here we present an explicit model for trajectory forecasting that maximizes the joint likelihood of observations and interventions in the forecast horizon (t, t+H], given their historical values within the time range [1, t]. The joint likelihood, the corresponding ELBO, and its time-factorized form are provided below. To simplify the notation, we use x̃, ũ to represent the forecast values x_{t+1:t+H}, u_{t+1:t+H}, x, u to represent the historical values x_{1:t}, u_{1:t}, and z to represent the latent states connecting the history to the forecast horizon.

log p_θ(x̃, ũ | x, u) ≥ E_{q_φ(z|x,u)}[ log p_θ(x̃, ũ | z) ] − KL( q_φ(z | x, u) || p_θ(z) )    (7)

with the right-hand side factorized along time as

− Σ_{τ=t+1}^{t+H} E_{q_φ}[ KL( q_φ(z_τ) || p_θ(z_τ | z_{τ-1}, ũ_τ) ) ]    (8)
+ Σ_{τ=t+1}^{t+H} E_{q_φ}[ log p_θ(x̃_τ | z_τ) + log p_θ(ũ_τ | z_{τ-1}) ]    (9)

The above forecast ELBO has two components: 1) the forecast loss for both observations and interventions over the forecast horizon (Eq. (9)); and 2) the regularization loss for the latent states from the history to the forecast horizon (Eq. (8)). Note that the encoder q_φ only depends on the historical values, and the generative model rolls out the states into the future with the forecast values.
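The roll-out into the forecast horizon can be sketched as follows: starting from the last latent state inferred from the history, the generative model alternates between forecasting the next intervention and advancing the state. This is a mean roll-out with the noise terms omitted; the transition functions are hypothetical callables standing in for f, g, h, c:

```python
import numpy as np

def rollout(z_t, f, g, h, c, horizon):
    """Mean roll-out of Eqs. (1)-(3) over the forecast horizon. The encoder is
    only used on the history; future states come from the generative model."""
    xs, us = [], []
    z = z_t
    for _ in range(horizon):
        u = c(z)            # Eq. (3): forecast the next intervention from z
        z = f(z) + g(u)     # Eq. (1): advance the latent state (noise omitted)
        xs.append(h(z))     # Eq. (2): emit the forecast observation
        us.append(u)
    return np.stack(xs), np.stack(us)

# Usage with a linear special case (matrices are illustrative).
DZ, DU, DX = 4, 2, 3
rng = np.random.default_rng(2)
F, G = 0.9 * np.eye(DZ), 0.1 * rng.normal(size=(DZ, DU))
H, C = rng.normal(size=(DX, DZ)), rng.normal(size=(DU, DZ))
xs, us = rollout(np.ones(DZ), lambda z: F @ z, lambda u: G @ u,
                 lambda z: H @ z, lambda z: C @ z, horizon=24)
```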

3.3 Learning Algorithm and Model Architecture

Given the ELBOs of the above tasks, our learning algorithm proceeds in the following steps: 1) inference of z from x and u by an encoder network q_φ; 2) sampling z based on the current estimate of the posterior to either reconstruct the observations and the next-step interventions (for the system identification task), or forecast the next observation and the interventions afterwards (for trajectory prediction), based on the generative model p_θ; in the latter case, the generative model is used to roll out multiple time steps into the forecast horizon; 3) estimating the gradients of the loss (negative ELBO) with respect to θ and φ and updating the parameters of the model. Gradients are averaged across stochastically sampled mini-batches of the training set. We follow the same model architecture as in Krishnan et al. (2015) and use an LSTM as the encoder network and MLPs for the state transition and observation emission. All models were implemented in TensorFlow Abadi et al. (2015), and the code will be open sourced.

4 Experiments

We use the Medical Information Mart for Intensive Care (MIMIC) data Johnson et al. (2016) in our empirical study. We select inpatients from MIMIC-III who are still alive hours after admission as our study cohort and forecast their vital signs and lab measurements jointly with interventions. There are in-patient encounters included in the study with observed in-hospital deaths. We select the most frequently used observational data features, and the types of vasopressors and antibiotics and the most recorded ventilation and dialysis machine settings as intervention features. All observation and intervention values are normalized using z-scores.

Observational data are recorded at irregular intervals in EMR, resulting in a large number of missing values when sampled at regular time steps. For observations, we adopt a simple method where the most recent value is used to impute the missing ones. For interventions, we need to differentiate the case where a missing value means the intervention was not performed or has completed from the case where a missing value means the same setting continues at this time step. Specifically, we pick the -percentile of the distribution of inter-medication-administration times and inter-intervention-setting times as the cut-off threshold. If two consecutive interventions are within the time range of their corresponding threshold, we consider the missing value an indication of a continued action and use the last setting to fill it. Otherwise, a missing value is considered as no action.
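The two imputation rules can be sketched as follows; the threshold argument is a placeholder for the percentile-based cut-off, whose value is not specified above:

```python
import numpy as np

def forward_fill(values):
    """Observations: impute each missing value (NaN) with the most recent one."""
    out = list(values)
    for t in range(1, len(out)):
        if np.isnan(out[t]):
            out[t] = out[t - 1]
    return np.array(out)

def impute_interventions(settings, gaps, threshold):
    """Interventions: settings[t] is NaN when nothing was recorded at step t,
    and gaps[t] is the elapsed time since the last recorded setting. Within the
    cut-off threshold the last setting is carried forward (continued action);
    beyond it, the missing value is treated as no action (0)."""
    out, last = [], 0.0
    for s, gap in zip(settings, gaps):
        if not np.isnan(s):
            last = s
            out.append(s)
        elif gap <= threshold:
            out.append(last)    # continued action: carry the last setting
        else:
            out.append(0.0)     # gap exceeds the threshold: no action
    return np.array(out)
```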

The hyperparameters, including the learning rate, the hidden state size of the LSTM, the number of units and layers of the MLPs, and the noise covariances, are tuned. The experiment uses a hidden state size of for the LSTM and hidden units with layers for the MLPs.

We use the mean absolute error (MAE) to evaluate the performance of trajectory prediction over different forecast horizons. We use -fold cross validation and estimate the standard error of the mean. For each fold, we split the dataset into train/eval/test according to a // split based on the hash value of the patient ID. We compare the following models in our study:

  • History rollout (HR) is a baseline model that follows the method in Rangapuram et al. (2018). It trains a deep state space model on the historical observations before the prediction time and rolls out the state predictions over the forecast horizon.

  • Kalman Filter (KF) Kalman (1960) provides a linear forecasting baseline. In this method, the generative parameters are all matrices, and the posterior state estimation of z is performed via closed-form formulas.

  • Trajectory forecast (TF) is another baseline which directly uses the trajectory forecast ELBO defined in Eq.(7) to train the model.

  • System identification + Trajectory forecast (SI+TF) is our proposed method. Here we pretrain the deep state space model with the system identification ELBO defined in Eq. (6) and then fine-tune the model with the trajectory forecast loss (Eq. (7)).
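The hash-based patient-ID split described above can be sketched as follows; the split ratios here are placeholders, since the paper's actual ratios are elided:

```python
import hashlib

def split_bucket(patient_id, train_frac=0.8, eval_frac=0.1):
    """Deterministically assign a patient to train/eval/test by hashing the
    patient ID, so every record of a patient lands in the same split."""
    digest = hashlib.md5(str(patient_id).encode("utf-8")).hexdigest()
    r = int(digest, 16) % 10_000 / 10_000.0   # pseudo-uniform value in [0, 1)
    if r < train_frac:
        return "train"
    if r < train_frac + eval_frac:
        return "eval"
    return "test"
```

Hashing (rather than random shuffling) keeps the assignment stable across runs and prevents leakage of one patient's records across splits.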

Model                                 MAE@24hr       MAE@48hr       MAE@72hr
History rollout (HR)                  0.473 (0.019)  0.492 (0.021)  0.571 (0.037)
Kalman Filter (KF)                    0.614 (0.036)  0.622 (0.045)  0.731 (0.053)
Trajectory forecast (TF)              0.512 (0.017)  0.528 (0.019)  0.546 (0.022)
System identification + TF (SI+TF)    0.453 (0.012)  0.453 (0.012)  0.514 (0.020)
Table 1: Trajectory Forecast Results. Parentheses denote the standard error of the mean.

The results in Table 1 show that SI+TF consistently outperforms all baselines over all forecast horizons. For all methods, the forecast error increases gracefully with the length of the forecast horizon. As a linear baseline, KF performs the worst, which demonstrates the predictive power of deep state space models. As the forecast horizon increases from 24hr to 72hr, HR shows the largest performance penalty among the deep models, while TF incurs a smaller penalty, as TF optimizes the future measurement likelihood directly, whereas HR relies on the consistency of the dynamics from the history to the future.

5 Conclusion

In this work, we present a joint prediction of clinical measurement and intervention trajectories along with the progression of the patient's condition. Our prediction model is built upon a deep state space model of the patient's physiological state, which provides a principled way to capture the interactions among observations, interventions, and the physiological state. An empirical study on the MIMIC dataset shows that our proposed method outperforms state-of-the-art baselines.


  • M. Abadi et al. (2015) TensorFlow: a system for large-scale machine learning. Cited by: §3.3.
  • Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu (2018) Recurrent neural networks for multivariate time series with missing values. Sci. Rep. 8 (1). Cited by: §1.
  • E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun (2015) Doctor AI: predicting clinical events via recurrent neural networks. In Proceedings of the 1st Machine Learning for Healthcare Conference, Cited by: §1.
  • M. Fraccaro, S. Kamronn, U. Paquet, and O. Winther (2017) A disentangled recognition and nonlinear dynamics model for unsupervised learning. In NIPS, Cited by: §3.1.
  • M. Fraccaro, S. K. Sønderby, U. Paquet, and O. Winther (2016) Sequential neural models with stochastic layers. In NIPS, Cited by: §3.1.
  • A. E.W. Johnson, T. J. Pollard, L. Shen, L. H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark (2016) MIMIC-III, a freely accessible critical care database. Scientific Data 3. Note: Article number: 160035 Cited by: §4.
  • R. E. Kalman (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering 82 (Series D), pp. 35–45. Cited by: 2nd item.
  • R. G. Krishnan, U. Shalit, and D. Sontag (2015) Deep kalman filters. CoRR abs/1511.05121. Cited by: §3.1, §3.1, §3.3.
  • Z. C. Lipton, D. C. Kale, C. Elkan, and R. Wetzel (2016) Learning to diagnose with LSTM recurrent neural networks. In International Conference on Learning Representations (ICLR), Cited by: §1.
  • Z. Liu and M. Hauskrecht (2013) Clinical time series prediction with a hierarchical dynamical system. Artificial Intelligence in Medicine, pp. 227–237. Cited by: §1.
  • Z. Liu and M. Hauskrecht (2016) Learning adaptive forecasting models from irregularly sampled multivariate clinical data. In AAAI, Cited by: §1.
  • M. Wu, M. Ghassemi, M. Feng, L. A. Celi, P. Szolovits, and F. Doshi-Velez (2017) Understanding vasopressor intervention and weaning: risk prediction in a public heterogeneous clinical time series database. J Am Med Inform Assoc, pp. 488–495. Cited by: §1.
  • A. Rajkomar et al. (2018) Scalable and accurate deep learning with electronic health records. Digital Medicine 1. Note: Article number: 18 Cited by: §1.
  • S. S. Rangapuram, M. W. Seeger, J. Gasthaus, L. Stella, Y. Wang, and T. Januschowski (2018) Deep state space models for time series forecasting. In NeurIPS, pp. 7785–7794. Cited by: 1st item.
  • Y. Sha and M. D. Wang (2017) Interpretable predictions of clinical outcomes with an attention-based recurrent neural network. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology,and Health Informatics, Cited by: §1.
  • H. Song, D. Rajan, J. J. Thiagarajan, and A. Spanias (2018) Attend and diagnose: clinical time series analysis using attention models. In AAAI, Cited by: §1.