Patient representation learning is a popular topic in the field of machine learning for healthcare. The generality of supervised representations is usually constrained by the amount of labeled data, while unsupervised representations can leverage information from all data, labeled or not. Hence, unsupervised learning can produce representations of general utility dosovitskiy2014unsupervisedForImages ; mikolov14doc2vec ; mikolov13word2vec ; miotto16_deep_patient_unsupervised , which is useful when downstream tasks are not known a priori.
These conditions are especially pronounced in the medical domain. Routine medical practice generates a wealth of patient-related time series, while data annotation often requires medical experts, whose time is very limited. Additionally, new tasks of interest keep emerging, and different hospitals or health systems often define tasks in different ways. Thus, generally useful representations, providing good performance over a broad range of downstream tasks, are highly desirable.
In this work, we investigate unsupervised representation learning on medical time series, which remains relatively unexplored. We propose adapted and novel models well suited for this objective and elucidate under which conditions they provide a performance benefit over end-to-end supervised learning with respect to predicting clinically relevant outcomes.
2 Related Work
The unsupervised learning approaches studied in this paper are rooted in the autoencoding principle bengio2013representation . The basic autoencoding architecture has been extended in several ways, such as denoising vincent2010stacked , variational kingma2013auto , convolutional masci2011stacked , or contractive rifai2011contractive autoencoders. Sequence-to-sequence (Seq2Seq) sutskever2014sequence architectures have been used successfully in translation weiss2017sequence , and on text and images chen2015mind ; gregor2015draw . Seq2Seq models have also been pre-trained in an unsupervised way ramachandran2016unsupervised and fine-tuned with labeled data.
Several models for unsupervised representation learning have been successfully employed in medical applications pivovarov2015learning ; miotto16_deep_patient_unsupervised ; suresh17_use_autoencoders_discovering ; jones16_canonical_correlation_analysis ; choi2016multi . While in many cases the resulting representations had both descriptive and predictive utility, the reconstruction principles and loss functions that lead to accurate clinical outcome prediction have not been widely studied.
Attention mechanisms can improve performance and interpretability and have enjoyed wide use across domains chorowski2015attention ; xu2015show ; kumar2016ask ; choi2016retain . Although attention has been used in the context of unsupervised representation learning of natural language jang2018RNNSVAE , attention architectures in the medical domain have so far focused exclusively on specific supervised prediction tasks.
3 Representation Learning Models
3.1 Baselines: Autoencoders
Autoencoding consists of two steps: encoding maps the input data space $\mathcal{X}$ to a representation space $\mathcal{Z}$, where typically $\dim(\mathcal{Z}) \ll \dim(\mathcal{X})$, while decoding maps in the reverse direction to reconstruct the data from the representations. The objective of autoencoding is to minimize the reconstruction error between the input data and the reconstructions.
Principal Component Analysis (PCA) and its inverse together can be considered a simple autoencoding process, where the encoding is a learned linear projection. An autoencoder (AE) is a neural network composed of an encoder and a decoder, each implemented as a multi-layer perceptron; it encodes the data in a non-linear way. Our goal is to encode temporal sequences of physiological signal vectors, but the inherent architecture of PCA and AE does not allow them to exploit the temporal structure of time series. To make the data compatible with the input format of PCA and AE, we flatten a $T \times d$-dimensional time series (i.e. $T$ time samples, each of dimension $d$) into a $Td$-dimensional vector.
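The flattening and linear-autoencoding step above can be sketched as follows; this is a minimal illustration using randomly generated stand-in data (not the eICU variables) and PCA computed via SVD rather than any particular library's implementation:

```python
import numpy as np

# Windows of T=12 hourly time points with d=94 variables, flattened to
# T*d = 1128-dimensional vectors, then encoded/decoded with PCA (a linear
# "autoencoder"). Data here is random stand-in, not real patient signals.
T, d, m = 12, 94, 94
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, T, d))
flat = windows.reshape(len(windows), T * d)     # (200, 1128)

mean = flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
components = vt[:m]                             # top-m principal directions

z = (flat - mean) @ components.T                # encode: (200, m)
recon = z @ components + mean                   # decode: (200, 1128)
```

The encoding is a single linear projection, which is why PCA cannot exploit the temporal ordering inside each window: permuting the time steps consistently would leave its objective unchanged.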
While Seq2Seq models are often used in supervised training settings in natural language processing sutskever2014sequence ; ramachandran2016unsupervised ; weiss2017sequence , we use them in an unsupervised way by minimizing the input reconstruction error as the objective; we refer to such a model as an S2S-AE. Figure 1 shows the structure of an S2S-AE model. A Long Short-Term Memory (LSTM) cell is used for both the encoder and decoder recurrent neural network (RNN) units, because it can retain information over more time steps than a simple RNN cell hochreiter1998vanishing ; hochreiter2001gradient .
At time $t$, the encoder receives a sequence of signal vectors $(\mathbf{x}_{t-L+1}, \dots, \mathbf{x}_t)$ from a time window of size $L$ as input and produces a representation $\mathbf{z}_t = \mathbf{h}^{\mathrm{enc}}_t$, where $\mathbf{h}^{\mathrm{enc}}_t$ is the last hidden state of the encoder. The decoder, given $\mathbf{z}_t$, outputs a sequence of reconstructions $(\hat{\mathbf{x}}_{t-L+1}, \dots, \hat{\mathbf{x}}_t)$ for the same window. Let $f_{\mathrm{enc}}$ and $f_{\mathrm{dec}}$ denote the encoder and decoder respectively, with parameters $\theta_{\mathrm{enc}}$ and $\theta_{\mathrm{dec}}$. Then the S2S-AE model can be formulated as
$$(\hat{\mathbf{x}}_{t-L+1}, \dots, \hat{\mathbf{x}}_t) = f_{\mathrm{dec}}\big(f_{\mathrm{enc}}(\mathbf{x}_{t-L+1}, \dots, \mathbf{x}_t; \theta_{\mathrm{enc}}); \theta_{\mathrm{dec}}\big), \quad (1)$$
$$\ell_t = \frac{1}{L} \sum_{i=t-L+1}^{t} \lVert \hat{\mathbf{x}}_i - \mathbf{x}_i \rVert_2^2, \quad (2)$$
where $\ell_t$ is the average reconstruction error for one window of a single patient's input signals from $t-L+1$ until the current time $t$. The loss for a patient is then the average error over their windows, indexed by $t$ and sliding with stride 1. To train the S2S-AE model we average the patient-wise loss over all patients. The representation $\mathbf{z}_t$ from an S2S-AE model summarizes a fixed length of the medical history of a patient up to time $t$, which reflects the current state of the patient.
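The stride-1 windowing and patient-wise loss averaging can be sketched as follows; a toy array stands in for real signals, and a shifted copy stands in for the decoder outputs (this is an illustration of the loss bookkeeping, not the LSTM model itself):

```python
import numpy as np

L = 12  # window length in hours

def sliding_windows(signals):
    """All stride-1 windows of length L over one patient's (T, d) signals."""
    T = len(signals)
    return np.stack([signals[t - L:t] for t in range(L, T + 1)])

# Toy 20-hour stay with 3 variables; real data would be (T, 94).
x = np.arange(20 * 3, dtype=float).reshape(20, 3)
w = sliding_windows(x)                 # (9, 12, 3): 20 - 12 + 1 windows

recon = w + 0.1                        # stand-in for decoder reconstructions
per_window_mse = ((recon - w) ** 2).mean(axis=(1, 2))
patient_loss = per_window_mse.mean()   # average over the patient's windows
```

Averaging these patient losses over all patients gives the training objective.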
3.2 Sequential forecasting model (S2S-F)
We hypothesize that the requirement to forecast future time points in the patient's signal would force the encoding LSTM to extract meaningful representations of the past time series. For this purpose, we design another Seq2Seq-based variant, S2S-F ("F" for forecasting), where the decoder predicts the future time series instead of reconstructing the past time series in the input. In this way, the representations still reflect the current patient state but are also optimized to predict the future patient state. We modify (1) and (2) to get the decoder function and the loss function for S2S-F:
$$(\hat{\mathbf{x}}_{t+1}, \dots, \hat{\mathbf{x}}_{t+L}) = f_{\mathrm{dec}}\big(f_{\mathrm{enc}}(\mathbf{x}_{t-L+1}, \dots, \mathbf{x}_t; \theta_{\mathrm{enc}}); \theta_{\mathrm{dec}}\big), \quad (3)$$
$$\ell_t = \frac{1}{L} \sum_{i=t+1}^{t+L} \lVert \hat{\mathbf{x}}_i - \mathbf{x}_i \rVert_2^2. \quad (4)$$
3.3 Forecasting with attention (S2S-F-A)
The idea behind applying attention mechanisms to time series forecasting is to enable the decoder to preferentially "attend" to specific parts of the input sequence during decoding. This allows particularly relevant events (e.g. drastic changes in heart rate) to contribute more to the generation of different points in the output sequence. Since autoencoding with attention is trivial (an effective attention mechanism would learn to only point to the corresponding input at each time point), we only augment S2S-F with the attention mechanism, calling the architecture S2S-F-A (shown in Figure 2).
Formally, at time $j$ during decoding, the objective is to produce a context vector $\mathbf{c}_j$ which is a weighted combination of the hidden states of the encoder: $\mathbf{c}_j = \sum_{i=1}^{L} \alpha_{ij} \mathbf{h}^{\mathrm{enc}}_i$. The weights $\alpha_{ij}$ are softmax-normalized versions of weights $e_{ij}$ computed by the attention mechanism $a$, which considers both the current state of the decoder and each state of the encoder in turn: $e_{ij} = a(\mathbf{h}^{\mathrm{dec}}_{j-1}, \mathbf{h}^{\mathrm{enc}}_i)$ and $\alpha_{ij} = \exp(e_{ij}) / \sum_{i'} \exp(e_{i'j})$. To implement $a$, we use a single-layer perceptron with a tanh activation function and scalar output, following luong2015effective :
$$a(\mathbf{h}^{\mathrm{dec}}_{j-1}, \mathbf{h}^{\mathrm{enc}}_i) = \mathbf{v}^{\top} \tanh\big(\mathbf{W}\,[\mathbf{h}^{\mathrm{dec}}_{j-1}; \mathbf{h}^{\mathrm{enc}}_i]\big).$$
Each $\alpha_{ij}$ reflects the importance of time point $i$ in the input sequence for decoding time point $j$ in the output. The context vector $\mathbf{c}_j$ is thus an explicit re-summarization of the input data in light of the current decoding task. The context vector is concatenated to the usual input fed to the decoder at step $j$, namely the previous decoder output $\hat{\mathbf{x}}_{j-1}$ (see Figure 2).
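One decoding step of this attention computation can be sketched numerically; the perceptron weights (named `W` and `v` here for illustration) are random stand-ins rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
L, h = 12, 32                           # input window length, hidden size
enc_states = rng.normal(size=(L, h))    # encoder hidden states, one per input step
dec_state = rng.normal(size=h)          # previous decoder hidden state
W = rng.normal(size=(h, 2 * h)) * 0.1   # stand-in perceptron weights
v = rng.normal(size=h)                  # stand-in output projection

# Score each encoder state against the decoder state: v^T tanh(W [dec; enc_i])
e = np.array([v @ np.tanh(W @ np.concatenate([dec_state, hi]))
              for hi in enc_states])

alpha = np.exp(e) / np.exp(e).sum()     # softmax over input positions
context = alpha @ enc_states            # weighted sum of encoder states
```

The resulting `context` vector is what gets concatenated with the previous decoder output at each decoding step.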
The attention mechanism breaks the "bottleneck" principle of usual Seq2Seq models, and it is not obvious how to choose a self-contained representation. Following our practice for S2S-AE and S2S-F, we take the final state of the encoder, $\mathbf{h}^{\mathrm{enc}}_t$, as the representation. Although we experimented with additionally including the context vectors as part of the representation, an interesting finding was that simply taking $\mathbf{h}^{\mathrm{enc}}_t$ was sufficient for predicting the downstream tasks. Table A2 summarizes the characteristics of the unsupervised representation models we analyze.
4 Experiments and results
The eICU Collaborative Research Database v1.2 goldberger00_physiobank_physitoolkit_physionet was used for all experiments described in this paper. 94 time series variables, including periodic and aperiodic vital signs and irregularly measured lab tests, were extracted. The data was resampled to an hourly grid, with implausible data rejection and imputation performed online; see Appendix A.1 for more details. Overall, the dataset consists of 20,878 patients with 72–240 hours of history, extending from ICU admission to discharge. We use a window size of $L = 12$ hours (i.e. 12 time points) and representation dimension $m = 94$.
Reconstructing past and predicting future
We aim to evaluate the ability of representations to reconstruct past and predict future data. Some representations are obtained from models optimized to reconstruct past data (PCA, AE and S2S-AE), others from models optimized to predict future data (S2S-F and S2S-F-A). To produce a fair comparison independent of a specific decoder, we use the representations themselves as input features to a 1-layer LSTM trained either to reconstruct the past 12 hours or to predict the next 12. The performance for each set of representations is shown in Table 1, evaluated using mean squared error (MSE). Not surprisingly, representations from forecaster models perform better in future prediction, and the attention mechanism further improves performance. However, the extent to which attention helps is surprising.
Predicting mortality and discharge status within the next 24 hours
Besides evaluating the ability of representations in past/future signal prediction, we are also interested in whether we can use them to predict future clinical events. Here we focus on predicting whether patients will be discharged from the ICU in a stable state ("24h Discharge"), or die within the next 24 hours ("24h Mortality"). We trained 1-layer LSTM classifiers (LSTM-1) using representations as input to predict these two events and report the area under the ROC curve (AUROC) and the area under the precision-recall curve (AUPRC) in Table 2. In addition, we include the performance of a 3-layer LSTM classifier (LSTM-3), a "deeper" model, trained on the original input signals as a baseline.
[Table 2: AUROC and AUPRC for the 24h Discharge and 24h Mortality tasks; rows include LSTM-1 classifiers on learned representations (e.g. LSTM-1 + PCA rep.) and the LSTM-3 + raw signals baseline. Numeric entries not recovered.]
Improved performance in limited data setting
Here we evaluate how unsupervised representations boost prediction performance when labeled data is limited. We simulate this setting by reducing the quantity of labeled data available for the classification problems described in the previous section, down to as few as N = 75 patients' worth of training examples. The results under this varying data scarcity are shown in Figure 3 for the different representation-learning approaches. We also include, as baselines, the prediction performance of classifiers (LSTM-1 and LSTM-3) trained end-to-end in a supervised fashion on the available labeled data.
We observe from Figure 3 that when labels are scarce, the model trained using time-series representations as input features outperforms the end-to-end supervised model, confirming the benefit of unsupervised representation learning in limited data settings. Even when we use all labeled samples at our disposal to train a more complex classifier, the best unsupervised representations still lead to a better performance than supervised representations. For all models, however, performance does not saturate when increasing the training set size, which indicates that the entire regime examined here is the data scarcity regime. Given more data, the purely supervised models might eventually surpass the ones using learned representations.
5 Conclusion
We have studied the performance of several methods for learning unsupervised representations of patient time series, and proposed a new architecture, S2S-F-A, which is optimized for forecasting using an attention mechanism. We empirically showed that in scenarios where labeled medical time series data is scarce, training classifiers on unsupervised representations provides performance gains over end-to-end supervised learning on raw input signals, thus making effective use of the information available in a separate, unlabeled training set. The proposed model, explored for the first time in the context of unsupervised patient representation learning, produces representations with the highest performance in future signal prediction and clinical outcome prediction, exceeding several baselines.
-  Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27, pages 766–774. Curran Associates, Inc., 2014.
-  Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. CoRR, abs/1405.4053, 2014.
-  Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
-  Riccardo Miotto, Li Li, Brian A Kidd, and Joel T Dudley. Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Scientific reports, 6, 2016.
-  Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
-  Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
-  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks, pages 52–59. Springer, 2011.
-  Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 833–840. Omnipress, 2011.
-  Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
-  Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. Sequence-to-sequence models can directly transcribe foreign speech. arXiv preprint arXiv:1703.08581, 2017.
-  Xinlei Chen and C Lawrence Zitnick. Mind’s eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2422–2431, 2015.
-  Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
-  Prajit Ramachandran, Peter J Liu, and Quoc V Le. Unsupervised pretraining for sequence to sequence learning. arXiv preprint arXiv:1611.02683, 2016.
-  Rimma Pivovarov, Adler J Perotte, Edouard Grave, John Angiolillo, Chris H Wiggins, and Noémie Elhadad. Learning probabilistic phenotypes from heterogeneous ehr data. Journal of biomedical informatics, 58:156–165, 2015.
-  Harini Suresh, Peter Szolovits, and Marzyeh Ghassemi. The use of autoencoders for discovering patient phenotypes. arXiv preprint arXiv:1703.07004, 2017.
-  Corinne L Jones, Sham M Kakade, Lucas W Thornblade, David R Flum, and Abraham D Flaxman. Canonical correlation analysis for analyzing sequences of medical billing codes. arXiv preprint arXiv:1612.00516, 2016.
-  Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine Coffey, Michael Thompson, James Bost, Javier Tejedor-Sojo, and Jimeng Sun. Multi-layer representation learning for medical concepts. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1495–1504. ACM, 2016.
-  Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in neural information processing systems, pages 577–585, 2015.
-  Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.
-  Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387, 2016.
-  Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems, pages 3504–3512, 2016.
-  Myeongjun Jang, Seungwan Seo, and Pilsung Kang. Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning. CoRR, abs/1802.03238, 2018.
-  Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116, 1998.
-  Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jürgen Schmidhuber, et al. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
-  Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
-  Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley. Physiobank, physiotoolkit, and physionet. Circulation, 101(23):e215–e220, 2000.
Appendix A Appendix
a.1 Dataset and preprocessing
The eICU Collaborative Research Database v1.2 goldberger00_physiobank_physitoolkit_physionet was used for all experiments described in this paper.
94 time series variables (shown in Table A1) including periodic and aperiodic vital signs and irregularly measured lab tests were extracted from the raw database. A variable was included in our analysis if at least 10% of patients in the cohort had at least one record for it. As preprocessing, the raw data was resampled to a regular time grid with an interval size of 60 minutes, extending from ICU admission to discharge from the unit. During computation of the time grid, rejection of implausible data and imputation were performed with an online algorithm. An observation was rejected if it was a statistical outlier with respect to pre-computed 5th/95th dataset percentiles. Values on the regular time grid were imputed using a combination of forward filling, personalized history-mean filling, and population-median filling. Forward filling was used if the last value was recorded no earlier than 1 hour (periodic vital signs), 5 hours (aperiodic vital signs), or 1 day (lab tests) prior to the grid point. Otherwise, if there were previous observations of that variable, the mean of all such observations was used to fill the grid point. If there were no observations in a patient's history, the grid value was filled with the population median for that variable.
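The imputation hierarchy above can be sketched as follows; this is a simplified illustration with a hypothetical `impute` helper and a `max_gap` horizon (1 h / 5 h / 24 h depending on the variable type), not the authors' online implementation:

```python
import numpy as np

def impute(values, times, grid_time, max_gap, population_median):
    """Fill one grid point: forward fill if the last observation is recent
    enough, else the mean of the patient's history, else the population median."""
    past = [(t, v) for t, v in zip(times, values) if t <= grid_time]
    if past:
        last_t, last_v = past[-1]
        if grid_time - last_t <= max_gap:
            return last_v                            # forward fill
        return float(np.mean([v for _, v in past]))  # personalized history mean
    return population_median                         # population median

# e.g. a lab value (max_gap = 24 h) last measured 30 h before the grid point
# falls back to the patient's history mean.
val = impute([7.0, 9.0], [0, 10], grid_time=40, max_gap=24, population_median=5.0)
```

In the paper's pipeline this decision runs online while the hourly grid is being built, so each grid point only sees observations up to its own time.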
Overall, the dataset consists of 20,878 patients with 72–240 hours of history.
|vitalPeriodic||cvp, heartrate, respiration, sao2, st1, st2, st3, systemicdiastolic, systemicmean, systemicsystolic, temperature|
|vitalAperiodic||noninvasivediastolic, noninvasivemean, noninvasivesystolic|
|Lab||-bands, -basos, -eos, -lymphs, -monos, -polys, ALT (SGPT), AST (SGOT), BNP, BUN, Base Deficit, Base Excess, CPK, CPK-MB, CPK-MB index, Carboxyhemoglobin, Fe, Ferritin, FiO2, HCO3, HDL, Hct, Hgb, LDL, LPM O2, MCH, MCHC, MCV, MPV, Methemoglobin, O2 Content, O2 Sat (%), PT, PT - INR, PTT, RBC, RDW, Respiratory Rate, TIBC, TSH, TV, Total CO2, Vancomycin - trough, Vent Rate, Vitamin B12, WBC x 1000, WBC’s in urine, albumin, alkaline phos., ammonia, anion gap, bedside glucose, bicarbonate, calcium, chloride, creatinine, direct bilirubin, fibrinogen, glucose, ionized calcium, lactate, lipase, magnesium, pH, paCO2, paO2, peep, phosphate, platelets x 1000, potassium, sodium, temporature, total bilirubin, total cholesterol, total protein, triglycerides, troponin - I, troponin - T, urinary sodium, urinary specific gravity|
a.1.1 Cohort selection
Among the >200,000 ICU stays available in the dataset, we included only patients with a single stay, so that data splits do not have to be stratified with respect to patient ID. In a second filtering step, ICU stays shorter than 3 days or longer than 10 days were excluded. This filtering yielded a set of 20,878 patients/ICU stays.
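The cohort filter can be sketched as a simple predicate; the function name and arguments here are illustrative, not the authors' code:

```python
def keep_stay(stays_for_patient: int, length_hours: float) -> bool:
    """Keep single-stay patients whose ICU stay lasts 3-10 days (72-240 h)."""
    return stays_for_patient == 1 and 72 <= length_hours <= 240
```

Applying this predicate to the raw stay table is what reduces the >200,000 stays to the 20,878 used in the experiments.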
a.1.2 Data splits
From the pre-filtered dataset we created 5 replicates of random partitions into train, validation and 2 test sets, with respect to patients, i.e. the entire data of a patient was contained in exactly one of the 4 sets. Size ratios of 40:40:10:10 for the train/validation/test1/test2 sets were used. The training set was used to train the representations; the validation set was used to tune free hyperparameters of the representation method (if any). The classifiers were trained on the patient representations obtained from the validation set, their hyperparameters were optimized on the representations from the first test set, and their predictive performance was evaluated on the unseen representations from the second test set. 5 independent experiments were performed, one on each replicate.
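A patient-level 40:40:10:10 split can be sketched as follows (a minimal illustration with a hypothetical helper; the authors' exact splitting code is not given):

```python
import numpy as np

def split_patients(patient_ids, seed):
    """Shuffle patient IDs and cut 40:40:10:10, so the entire data of a
    patient lands in exactly one of train/validation/test1/test2."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(patient_ids)
    n = len(ids)
    a, b, c = int(0.4 * n), int(0.8 * n), int(0.9 * n)
    return ids[:a], ids[a:b], ids[b:c], ids[c:]

train, val, test1, test2 = split_patients(np.arange(100), seed=0)
```

Splitting by patient ID (rather than by window) is what prevents windows from the same stay leaking across sets; varying `seed` yields the 5 replicates.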
a.2 Representation learning
For each representation learning method, representations were extracted from the training set. Feature columns were standard-scaled (subtracting the mean and dividing by the standard deviation) before training the models. For the deep learning models, the validation set was used to implement an early-stopping heuristic for the training process, and we used grid search to find the best set of hyperparameters. All trained representations were then saved to disk.
For the basic autoencoders, we train with mini-batches of 512 randomly sampled records; for the recurrent autoencoders, we train with mini-batches of 4 patients with full history. We use early stopping based on the validation-set loss to avoid overfitting, i.e. we stop training if the validation loss is non-decreasing for 10 consecutive epochs. We additionally use the validation set to optimize hyperparameters, namely the learning rate and the activation functions.
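The stopping rule can be sketched as follows (a hypothetical helper illustrating the "non-decreasing for 10 consecutive epochs" criterion, not the authors' training loop):

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the first epoch at which the validation loss has been
    non-decreasing for `patience` consecutive epochs, or None."""
    streak = 0
    for epoch in range(1, len(val_losses)):
        streak = streak + 1 if val_losses[epoch] >= val_losses[epoch - 1] else 0
        if streak >= patience:
            return epoch
    return None

# Loss improves for two epochs, then plateaus: stopping triggers after the
# plateau has lasted 10 consecutive epochs.
losses = [1.0, 0.9, 0.8] + [0.8] * 12
stop = early_stop_epoch(losses)
```

Note that a plateau counts toward the streak here (loss equal to the previous epoch is "non-decreasing"), matching the criterion as stated.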
a.3 Representation evaluation
For evaluating the future signal and task prediction performance, representations of the first 12 hours of a patient recording were excluded. In this way the results are not affected by the model-specific ways of handling incomplete histories, which occur at the beginning of the patient stay.
a.4 Model complexity
Table A2 shows the traits of the unsupervised learning models used in the paper. An advantage of Seq2Seq-based models is that the number of parameters they use does not depend on the length of the input time series to be compressed.
[Table A2: model characteristics, with columns name, nonlinear, temporal, decoder output, attention, and number of parameters; entries not recovered.]
a.5 Impact of representation dimension
In this section we investigate the relationship between the dimensionality of representations and their performance across tasks. In the previously described experiments, we used a representation dimension of $m = 94$, implying a compression factor of 12 (as the windows consist of 12 hourly measurements of 94 variables). Here we vary $m$ to explore how much compression is possible while retaining prediction performance.
Table A3 shows the AUROC values using S2S-F-A representations for prediction. Compared with the AUROC scores obtained using raw features in Table 2, even S2S-F-A representations of very low dimension still achieve reasonable performance.
[Table A3: AUROC using S2S-F-A representations with m = 2, 50, and 94; values not recovered.]