In hospitals, data are constantly recorded, often in the form of physiological signals such as blood oxygen, heart rate, and blood pressure. In this paper, we address the problem of hypoxemia (low arterial blood oxygen) in operating rooms, one of the common yet potentially serious concerns that anesthesiologists deal with while safeguarding patients. One recent study found that hypoxemic events occur on average once every 29 surgery hours across two hospitals' operating rooms (Ehrenfeld et al., 2010). These events can cause serious patient harm during general anesthesia and surgery and are correlated with cardiac arrest, cardiac arrhythmias, decreased cognitive function, and more (Dunham et al., 2014; Strachan and Noble, 2001).
To assist anesthesiologists, Lundberg et al. (2017) developed an interpretable method for hypoxemia prediction on operating room data. They compared Parzen windows, linear SVMs, linear lasso, and gradient boosting trees, and found that gradient boosting trees with pre-processed features were the most performant method for hypoxemia prediction. In their evaluation, their method made more accurate hypoxemia predictions than five practicing anesthesiologists. Lundberg et al. showed that gradient boosting trees are powerful predictors (particularly for encoding interactions), but the preprocessed time series features they used, exponential moving averages and variances (EMAs/EMVs), can be improved. Representation learning can take advantage of the recurrence/memory that neural networks encode and pass latent representations into a gradient boosting tree to further improve on physician performance. Representation learning has already achieved great success in speech recognition, signal processing, object recognition, and natural language processing (Bengio et al., 2012). It is well suited to biomedical time series data because inherent latent structure is a reasonable assumption for physiological signals. We explore this approach and present a framework for forecasting biomedical time series data.
Long Short-Term Memory Networks
Long short-term memory (LSTM) networks are a sophisticated way to explicitly retain memory (Hochreiter and Schmidhuber, 1997). Compared to the autoregressive methods used in the previous exploration of hypoxemia (Lundberg et al., 2017), recurrent neural networks such as LSTM networks can capture more complex dependencies. We train these networks in Python using Keras, an open source neural network library, with a TensorFlow backend (Chollet, 2015; Abadi et al., 2016). We use 72 CPUs (Intel Xeon E5-2699 v3 @ 2.30 GHz) to train both our networks and our tree models.
In terms of design, the networks in this paper use only two layers, because adding more layers made convergence difficult for our application. We found that important steps in training LSTM networks on operating room data are to impute missing values with the training mean, standardize the data, and randomize sample ordering (for time series in particular) prior to training. To prevent overfitting, we use dropout between layers as well as recurrent dropout within the LSTM nodes (Srivastava et al., 2014). A learning rate of 0.001, the RMSprop optimizer, and a sigmoid output layer gave us the best final results. The LSTM models were trained until their validation accuracy did not improve for twenty rounds.
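The architecture described above can be sketched as follows in Keras with a TensorFlow backend. This is a minimal illustration, not the authors' exact code: the dropout rate of 0.2 and the feature/lookback dimensions are illustrative assumptions, while the layer widths, optimizer, learning rate, output activation, and 20-round early-stopping patience follow the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(lookback=60, n_features=1, width=200):
    """Two-layer LSTM with dropout, as described in the text.
    The 0.2 dropout rate is an assumption; width=200 matches
    the univariate model in Table 1."""
    model = keras.Sequential([
        keras.Input(shape=(lookback, n_features)),
        # Recurrent dropout regularizes the recurrent connections;
        # plain dropout is applied to the layer inputs.
        layers.LSTM(width, return_sequences=True,
                    dropout=0.2, recurrent_dropout=0.2),
        layers.LSTM(width, dropout=0.2, recurrent_dropout=0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Early stopping mirrors the twenty-round patience used in the paper.
stopper = keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                        patience=20,
                                        restore_best_weights=True)
```

In practice the model would be fit with `model.fit(X, y, validation_data=..., callbacks=[stopper])` on the windowed operating-room signals.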
To the best of our knowledge, this is one of the first applications of RNNs in an operating room setting. RNNs have been applied in other health settings: Lipton et al. (2015) applied LSTM networks in a clinical setting, and Che et al. (2016) applied GRUs for imputation in clinical and synthetic data sets. Chauhan and Vig (2015) and Rajpurkar et al. (2017) focused on univariate applications of neural networks in health settings.
Gradient Boosting Trees
Gradient boosting trees work well in practice due to their ease of use and flexibility. Imputing, standardizing, and randomizing are all unnecessary because gradient boosting trees are based on splits of the training data. We postulate that gradient boosting trees are better than LSTM networks at incorporating important static features into predictions, and we saw very good performance with simple methods of processing time series features (exponential moving averages/variances). We found that a learning rate of 0.02, a maximum tree depth of 6, a subsampling rate of 0.5, and a logistic objective gave good performance. All XGB models were trained until their validation accuracy did not improve for five rounds. We train these models in Python using XGBoost, an open source library for gradient boosting trees (Chen and Guestrin, 2016).
Our hybrid framework proceeds as follows:
1. Run XGB using processed time series data (EMA/EMV) and raw static data.
2. Identify the most important features.
3. Run supervised learning with univariate LSTM networks on these features.
4. Use the second-to-last LSTM layer to create features for XGB.
5. Retrain the XGB model with the additional hidden features for the final model.
3 Experimental Results
Data Our data set consists of 57,000 surgeries containing real-time features sampled minute by minute, such as SaO2 (blood oxygen) and ETCO2 (exhaled carbon dioxide), as well as static summary information such as height, weight, and ASA codes, obtained under appropriate Institutional Review Board (IRB) approval. After splitting surgeries into multiple time points, only a small fraction of the resulting samples are positive examples. These labels define a time series binary classification problem in which SaO2 below 92% is considered hypoxemia.
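A sketch of the labeling scheme on a single surgery's SaO2 trace. The 92% threshold is from the text; the forward-looking horizon of 5 minutes is an illustrative assumption, since the exact prediction horizon is not restated here.

```python
import numpy as np

def hypoxemia_labels(sao2, threshold=92.0, horizon=5):
    """Label minute t positive if SaO2 dips below the threshold at any
    of minutes t+1 .. t+horizon. The horizon is an assumed parameter."""
    sao2 = np.asarray(sao2, dtype=float)
    labels = np.zeros(len(sao2), dtype=int)
    for t in range(len(sao2)):
        future = sao2[t + 1:t + 1 + horizon]
        labels[t] = int(future.size > 0 and future.min() < threshold)
    return labels

print(hypoxemia_labels([98, 97, 91, 96, 95], horizon=2))  # [1 1 0 0 0]
```

Minutes 0 and 1 are positive because the dip to 91% falls within their two-minute horizon; later minutes see no future dip.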
Evaluation Metric Area under the precision-recall (PR) curve is our evaluation metric. PR curves are widely used in binary classification tasks to summarize the predictive accuracy of a model, and they are better suited than ROC curves to classification problems with imbalanced labels. True positives (TP) are positive sample points classified as positive, whereas true negatives (TN) are negative sample points classified as negative. Likewise, false positives (FP) are negative sample points classified as positive, whereas false negatives (FN) are positive sample points classified as negative. Precision is defined as TP / (TP + FP) and recall as TP / (TP + FN). The PR curve plots precision (y-axis) against recall (x-axis), and it is conventional to summarize the curve by its area under the curve (AUC).
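The metric can be computed as follows with scikit-learn (an assumed choice of library; the paper does not name its evaluation code). `average_precision_score` is the standard step-wise summary of the PR curve's area.

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# Toy labels and predicted hypoxemia probabilities.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_score = np.array([0.1, 0.7, 0.8, 0.2, 0.6, 0.4, 0.1, 0.3])

# Full precision/recall trade-off across thresholds, then its area.
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
pr_auc = average_precision_score(y_true, y_score)
print(round(pr_auc, 3))
```

For imbalanced labels like ours, a random classifier's PR-AUC equals the positive rate rather than 0.5, which is why PR-AUC is more informative here than ROC-AUC.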
Performance We demonstrate that long short-term memory networks outperform autoregressive methods.
Figure 1 reveals two things. First, on a univariate signal, LSTM networks substantially outperform autoregressive methods, even performant ones like gradient boosting trees. Second, LSTM networks continue to improve even after the autoregressive methods saturate in accuracy. Using lookbacks of 60 minutes rather than 30 markedly improves LSTM network performance, suggesting that they successfully identify complex long-term patterns relevant to prediction. In what follows, the LSTM networks use lookbacks of 60 minutes to predict desaturation.
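Constructing the 60-minute lookback windows from a minute-by-minute signal can be done with a simple sliding window; a NumPy sketch (the helper name and shapes are our own, for illustration):

```python
import numpy as np

def make_lookbacks(signal, lookback=60):
    """Return an array of shape (n_windows, lookback, 1): one window
    ending at each time point that has a full history behind it."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - lookback + 1
    windows = np.stack([signal[i:i + lookback] for i in range(n)])
    return windows[..., np.newaxis]   # trailing axis = 1 feature

X = make_lookbacks(np.arange(100), lookback=60)
print(X.shape)  # (41, 60, 1)
```

Switching from 30- to 60-minute windows doubles the sequence length the LSTM sees, at the cost of discarding the earliest minutes of each surgery that lack a full history.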
Table 1: LSTM network performance.

| Model | Test PR-AUC | Train Time |
|---|---|---|
| 1. LSTM network (200x200) on raw (SaO2) | 0.23183 | hours |
| 2. LSTM network (400x400) on raw (40 features) | 0.23315 | hours |
Although multivariate LSTM networks to predict hypoxemia appear promising, they take a substantial amount of time to train. Despite having twice as many nodes, Table 1 Model 2 (a multivariate LSTM network with 40 features) only gives a slight improvement over Table 1 Model 1 (a univariate LSTM network with SaO2). Based on training the gradient boosting model, we suspect that static features contain more information relevant to prediction than the multivariate LSTM networks are capable of utilizing (Figure 2 justifies the importance of static features). Additionally, the cost of training the LSTM network is already increasing drastically with the model size. In order to successfully capture all relationships in a complete multivariate network, the network size would need to increase to an intractably large size. Preliminary results with a GeForce GTX 1050 GPU were approximately three times faster, but a multivariate network would likely still be intractable.
| Model | Test PR-AUC |
|---|---|
| 1. XGB on processed (SaO2 only) | 0.20580 |
| 2. XGB on processed (40 features) | 0.22999 |
| 3. XGB on LSTM hidden (200 features) | 0.23354 |
| 4. XGB on processed (40 features) + LSTM output (1 feature) | 0.24714 |
| 5. XGB on processed (40 features) + LSTM hidden (200 features) | 0.24836 |
| 6. XGB on processed - no SaO2 (39 features) + LSTM output (1 feature) | 0.24251 |
| 7. XGB on processed - no SaO2 (39 features) + LSTM hidden (200 features) | 0.24678 |

Table 2: XGB performance. Processed denotes EMA/EMV features for all time series features. LSTM output denotes using the probability of hypoxemia output by the final sigmoid layer as an input to XGB. LSTM hidden denotes using the outputs of the penultimate LSTM layer as inputs to XGB. The LSTM output/hidden features are generated using Model 1 from Table 1.
Table 2 shows that XGB using only processed SaO2 (Table 2 Model 1) does not immediately do well. Yet when XGB has access to all 40 processed features (Table 2 Model 2), PR-AUC increases substantially, from 0.20580 to 0.22999. This result suggests that gradient boosting trees are able to leverage the 39 remaining features to improve predictive accuracy in a way that LSTM networks alone could not. In Figure 2, SaO2 dominates the top 10 most important features for hypoxemia prediction, but static features like weight and age are also important. Training LSTM networks to effectively utilize these additional features did not appear tractable (Table 1), motivating our exploration of a hybrid approach.
In Model 3 of Table 2, XGB trained on LSTM latent representations performs slightly better than the LSTM network alone, suggesting that gradient boosting trees can capture the non-linearity in the final layer of the LSTM network. Next, Model 4 shows that augmenting XGB with a supervised latent representation of SaO2 from our LSTM network performs substantially better than either individual model. Since gradient boosting trees work well on hidden features alone (Model 3), we combine them with the processed features in Model 5, which yields the most performant model for hypoxemia prediction. Because Models 4 and 5 include the EMAs/EMVs, it remains an open question how much of the predictive performance is due to the EMAs/EMVs and how much to the hidden LSTM features. Models 6 and 7 reveal that the hidden features are much more informative than the single predictive probability, suggesting that they not only serve as fairly powerful surrogates for the EMAs/EMVs but also capture additional information that EMA/EMV features do not. This suggests that supervised representation learning can find sensible representations. If such representations can be learned for many tasks and data sets, machine learning researchers may be able to work additively on impactful questions in health. Through representation learning, motifs in time series data can be conveyed between research groups with access to different data sets in anonymous yet meaningful ways, protecting patient privacy.
At a high level, this hybrid approach is quite simple. Yet this simplicity offers two nice properties. The first is generalizability: our hybrid approach can easily be applied to any problem that would benefit from more sophisticated time series processing. The second is accessibility: using the available open source packages, these two methodologies can easily be combined in very performant ways for biomedical data. Accessibility ideally encourages a trend toward representation learning as a means to make additive progress (collaboration rather than competition) on biomedical prediction tasks.
-  Jesse M. Ehrenfeld, L. M. Funk, J. Van Schalkwyk, A. F. Merry, W. S. Sandberg, and A. Gawande. The incidence of hypoxemia during surgery: evidence from two institutions. Canadian Journal of Anesthesia, 57(10):888–897, 2010.
-  C. M. Dunham, B. M. Hileman, A. E. Hutchinson, E. A. Chance, and G. S. Huang. Perioperative hypoxemia is common with horizontal positioning during general anesthesia and is associated with major adverse outcomes: a retrospective study of consecutive patients. BMC Anesthesiol., 14:43, 2014.
-  L. Strachan and D. W. Noble. Hypoxia and surgical patients–prevention and treatment of an unnecessary cause of morbidity and mortality. J. R. Coll. Surg. Edinb., 46:297–302, 2001.
-  Scott M Lundberg, Bala Nair, Monica S Vavilala, Mayumi Horibe, Michael J Eisses, Trevor Adams, David E Liston, Daniel King-Wai Low, Shu-Fang Newman, Jerry Kim, and Su-In Lee. Explainable machine learning predictions to help anesthesiologists prevent hypoxemia during surgery. bioRxiv, 2017.
-  Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR, abs/1206.5538, 2012.
-  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997.
-  François Chollet. keras. https://github.com/fchollet/keras, 2015.
-  Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016.
-  Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014.
-  Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. CoRR, abs/1606.01865, 2016.
-  Zachary Chase Lipton, David C. Kale, Charles Elkan, and Randall C. Wetzel. Learning to diagnose with LSTM recurrent neural networks. CoRR, abs/1511.03677, 2015.
-  S. Chauhan and L. Vig. Anomaly detection in ECG time signals via deep long short-term memory networks. In Proc. IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–7, October 2015.
-  Pranav Rajpurkar, Awni Y. Hannun, Masoumeh Haghpanahi, Codie Bourn, and Andrew Y. Ng. Cardiologist-level arrhythmia detection with convolutional neural networks. CoRR, abs/1707.01836, 2017.
-  Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM.