Deep Representation for Patient Visits from Electronic Health Records

03/26/2018 ∙ by Jean-Baptiste Escudié, et al.

We show how to learn low-dimensional representations (embeddings) of patient visits from the corresponding electronic health record (EHR), where International Classification of Diseases (ICD) diagnosis codes are removed. We expect that these embeddings will be useful for the construction of predictive statistical models anticipated to drive personalized medicine and improve healthcare quality. These embeddings are learned using a deep neural network trained to predict ICD diagnosis categories. We show that our embeddings capture relevant clinical information and can be used directly as input to standard machine learning algorithms, such as multi-output classifiers for ICD code prediction. We also show that important medical information corresponds to particular directions in our embedding space.


1 Introduction

Over the past 10 years, hospital adoption of electronic health record (EHR) systems has risen to an unprecedented level. According to the latest report from the Office of the National Coordinator for Health Information Technology, nearly 84% of hospitals in the USA have adopted at least a basic EHR system[1]. EHR systems store data associated with each patient encounter and are primarily designed for improving healthcare efficiency from an operational standpoint. This paper explores secondary use of EHRs[2] in a deep learning framework and builds a representation of the raw data, which is the first required step for any further deep learning analysis.

The representation of raw data is a fundamental issue spanning all types of machine learning frameworks. Traditionally, input features to a machine learning algorithm are hand-crafted from raw data, relying on practitioner expertise and domain knowledge to determine explicit patterns of interest. The engineering process of creating and selecting appropriate features is laborious and time consuming. In contrast, deep learning techniques learn optimal features directly from the raw data itself without human guidance, allowing for the automatic discovery of latent data structures.

In this paper, we develop a supervised deep learning algorithm to learn low-dimensional representations (also called embeddings) of patient visits. EHR data is challenging to represent and model due to its high dimensionality, noise and sparseness[3]. A common first step for mining EHR data is for a domain expert to designate the patterns to look for (i.e. the learning task) and to specify the appropriate clinical variables (i.e. the input features). Although appropriate in some cases, this methodology scales poorly, does not generalize well and misses opportunities to discover new patterns. To address these shortcomings, we design an auxiliary machine learning task consisting in predicting the ICD codes of each visit. We build a deep neural network (NN) architecture encoding the raw data into an embedding and train this NN to predict the presence or absence of ICD codes. As a result, we obtain an encoder mapping the raw data to a dense d-dimensional vector, where the dimension d is a free parameter (its value is fixed in the rest of the paper, see below).

The International Classification of Diseases (ICD) is a health care classification system maintained by the World Health Organization, which provides a hierarchy of diagnostic codes for diseases, disorders, injuries, signs, symptoms, etc. Given a clinical context, reported in the EHR in the form of free text and structured data, appropriate ICD codes are assigned manually by the physician or other healthcare professionals following the coding guidelines. In our work, we use ICD coding as a form of supervision (i.e. a labeling of our dataset) that allows our algorithm to learn a valuable representation of the raw data. This approach is similar to the recent use of image inpainting in the computer vision community to learn context[5]: our algorithm is asked to produce low-dimensional embeddings of the raw data in order to predict information that was withheld intentionally.

In this paper, we build a generalist embedding of patient visits. Although we use the task of ICD code prediction to do so, this is not our main goal. Rather, we demonstrate that our representation is a first step towards developing quantitative models of patients that can be used to predict health status, as well as to help prevent diseases or disabilities. In order to test the validity of our representation, we show that it contains a compressed version of the medical information of the EHR.

Contributions

We present a deep learning approach to build a low-dimensional representation of patient visits based on the raw EHR data. Specifically, our NN takes as input both clinical free-text notes and the structured and semi-structured data present in the EHR. Using the MIMIC-III dataset[7], we demonstrate the medical pertinence of the representations found by our algorithm.

2 Related work

The use of deep learning on EHRs increased rapidly after the adoption of EHR systems[1] and the development of deep learning methods[6]. In a well-known work[8], the authors develop a framework called 'deep patient' to represent patients by a set of general features, which are inferred automatically from a large-scale EHR database through a stack of denoising auto-encoders. To prove the effectiveness of the proposed representation, deep patient is used to predict future diseases. As opposed to our approach, the representation built by deep patient[8] is learned in an unsupervised manner. In particular, ICD codes are given as inputs to the algorithm, so that deep patient cannot be used for automated ICD coding. Subsequent work extends this approach by modeling the temporal sequence of events that occurred in a patient's record with convolutional networks[9].

Very recently, based on data from two academic hospitals with a general patient population, the authors of [10] demonstrated the effectiveness of deep learning models in a wide variety of predictive problems and settings. Three deep NNs are used. In contrast to our approach, these three NNs need to be trained separately for each task (such as predicting in-hospital mortality or 30-day unplanned readmission), which yields very good performance at the expense of a high computational cost. Moreover, no generalist embedding like ours is learned with this methodology.

Although this is not our main aim here, there is existing work towards automated ICD coding. A recent paper[11] formulates the coding task as a general multi-label classification problem on diagnosis descriptions and uses a recurrent NN on the free-text data to assign codes.

Given rapid developments in this field, we point readers to a recent review[12].

3 Methods

In this section, we describe our strategy to prepare the dataset and build features as well as labels for each patient stay. We then describe in detail our deep learning approach to assign a vector embedding for each stay.

3.1 Dataset and preprocessing

We perform the study on the publicly available MIMIC-III dataset[7], which contains de-identified and comprehensive electronic medical records of patient visits at the Beth Israel Deaconess Medical Center from 2001 to 2012.

To each stay, we associate two types of features and a vector of labels:

  • Text features: the MIMIC-III dataset contains a large corpus of medical records made of observations and notes written by care providers during the patient's stay. We use a vector of integers to represent the medical records associated with each stay. Specifically, we define a vocabulary made of the words appearing a minimum number of times in the medical records of the whole MIMIC-III dataset. We represent each medical record as a sequence of integers giving the index of each word in this vocabulary, and concatenate these sequences to obtain one such vector of integers per stay. We truncate these vectors to a maximum length corresponding to a high percentile of medical-record lengths over all stays.

  • Structured features: we also make use of the large amount of numerical information available for each stay, such as the type and quantity of medications given to the patient, associated severity and mortality scores, time-dependent vital signs, fluid balance, laboratory results, etc. As a first coarse-grained approach, we associate to each stay a vector of structured features by concatenating all these features and summing the time-dependent ones over the duration of the stay. This yields a vector of roughly 38,000 real-valued features per stay (see Table 1). We note that we exclude the ICD codes from the features, as they serve as targets for the learning task (see below).

  • Labels: each discharge contains a set of ICD codes assigned by medical care providers and used for billing purposes. We use these codes as a summary of the medical condition of the patient that can be predicted from the two aforementioned types of features. ICD codes have a hierarchical structure that allows for a variable level of precision in the description of the medical condition of the patient. In the following, we restrict our study to the lowest resolution level, consisting of so-called 'chapters'. Our label vector for each stay is therefore a 19-dimensional binary vector indicating whether each chapter has been assigned to the stay or not (a minimal construction sketch is given after this list).
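To make the label construction concrete, here is a minimal sketch (not the authors' code) of how a stay's ICD-9 codes can be mapped to the 19-chapter binary label vector. The numeric chapter boundaries below follow the standard ICD-9-CM chapter ranges; V and E codes form the two supplementary classifications.

    import numpy as np

    # (start, end) of the 17 numeric ICD-9-CM chapters, in order.
    CHAPTER_RANGES = [
        (1, 139), (140, 239), (240, 279), (280, 289), (290, 319),
        (320, 389), (390, 459), (460, 519), (520, 579), (580, 629),
        (630, 679), (680, 709), (710, 739), (740, 759), (760, 779),
        (780, 799), (800, 999),
    ]
    N_CHAPTERS = len(CHAPTER_RANGES) + 2  # + V codes + E codes = 19

    def chapter_index(icd9_code):
        """Map one ICD-9 code (e.g. '410.71', 'V45.0') to its chapter index."""
        if icd9_code.startswith("V"):
            return len(CHAPTER_RANGES)      # supplementary classification (V)
        if icd9_code.startswith("E"):
            return len(CHAPTER_RANGES) + 1  # supplementary classification (E)
        prefix = int(icd9_code.split(".")[0])
        for i, (lo, hi) in enumerate(CHAPTER_RANGES):
            if lo <= prefix <= hi:
                return i
        raise ValueError("unknown ICD-9 code: %s" % icd9_code)

    def label_vector(stay_codes):
        """19-dimensional binary vector: 1 if any code falls in the chapter."""
        y = np.zeros(N_CHAPTERS, dtype=np.float32)
        for code in stay_codes:
            y[chapter_index(code)] = 1.0
        return y

    print(label_vector(["410.71", "V45.0", "428.0"]))  # circulatory + V codes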

As explained in the introduction, EHR data is very sparse. This is illustrated in Table 1, where we have grouped features by type. For instance, the demographics group contains categorical data such as gender, age and ethnicity information, for a total of 54 different features. Importantly, these features have a very variable prevalence in the dataset: the most represented value is present in 97.2% of the stays, while half of the values are present in only 0.1% of the stays, as quantified by the median frequency shown in the table. Table 1 shows that the other groups of features are similarly sparse.

Microbiology results and drug prescriptions account for most of the features, with 19,547 and 13,715 features respectively. Each such feature is the number of lab tests or prescriptions of a given category during the stay. In particular, we ignore when these tests or prescriptions were performed; in this paper, we only count them.

Feature group Number of features Median frequency Most frequent
demographics 54 0.1% 97.2%
administrative 90 3.6% 97.2%
microbiology 19547 0.0% 82.2%
input-output 1871 0.0% 45.2%
prescriptions 13715 0.0% 83.2%
icd9 procedures 2683 0.0% 57.0%
Table 1: Sparsity of EHR data

In order to reduce the dimensionality of the structured features, it proved useful to perform some feature selection. Of the roughly 38,000 initial structured features, we discard all but the best predictors of the ICD codes of the stay, according to a univariate (chi-square) test. A minimal sketch is given below.
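As an illustration, here is a minimal sketch of this selection step, assuming the structured features form a non-negative count matrix X (stays × features) and Y is the binary (stays × 19) chapter-label matrix; the number K of kept features and the toy data are assumptions.

    import numpy as np
    from sklearn.feature_selection import chi2

    rng = np.random.default_rng(0)
    X = rng.poisson(0.05, size=(1000, 5000)).astype(np.float64)  # toy counts
    Y = (rng.random((1000, 19)) < 0.3).astype(int)               # toy labels

    # chi2 expects a single target: score each feature against each chapter
    # label and keep, for each feature, its best score across chapters.
    scores = np.max([chi2(X, Y[:, j])[0] for j in range(Y.shape[1])], axis=0)

    K = 500                      # number of features kept (free parameter)
    keep = np.argsort(scores)[-K:]
    X_selected = X[:, keep]
    print(X_selected.shape)      # (1000, 500)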

3.2 Model design

We learn a representation of the patient visits in a supervised way. More precisely, we use a hybrid architecture made of a convolutional neural network on the text features and a multi-layer perceptron on the structured data, both trained jointly to predict the ICD labels associated with each stay. This yields a multi-label classification task in which we predict, for each code, whether it was assigned to the stay or not. Finally, we extract an embedding for each stay by concatenating the outputs of the last hidden layers of the two subnetworks.

The resulting architecture is pictured in Figure 1, and the following subsections describe the two subparts of the network in more detail. Importantly, we note that this multi-label classification task is not the main object of our work. In fact, we use the ICD codes as mere proxies to design an efficient supervised training strategy for learning our embeddings. In Section 4, we validate our approach both by comparing the accuracy of our classification algorithm against baseline models, and by independently illustrating the medical relevance of our embeddings.

Figure 1: General description of the Neural Network architecture

3.2.1 Multi-layer perceptron (MLP)

Our model for the structured features is a standard multi-layer perceptron with one hidden layer. We use rectified linear units (ReLU)[14] as activation functions on the input and hidden layers. Dropout[15] is applied after the hidden layer to reduce overfitting. The output of the structured-features model is a 19-dimensional vector where each entry corresponds to a score for the corresponding ICD chapter. The probability of this chapter being assigned to the sample is given by the sigmoid of this score.
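A minimal PyTorch sketch of this branch follows; the hidden size, dropout rate and input dimension are illustrative assumptions, not the paper's values.

    import torch
    import torch.nn as nn

    class StructuredMLP(nn.Module):
        def __init__(self, n_features, hidden=256, n_chapters=19, p_drop=0.5):
            super().__init__()
            self.hidden_layer = nn.Sequential(
                nn.Linear(n_features, hidden),
                nn.ReLU(),
                nn.Dropout(p_drop),       # dropout after the hidden layer
            )
            self.head = nn.Linear(hidden, n_chapters)  # one score per chapter

        def forward(self, x):
            h = self.hidden_layer(x)      # reused later as part of the embedding
            return self.head(h), h

    scores, hidden = StructuredMLP(500)(torch.randn(4, 500))
    probs = torch.sigmoid(scores)         # per-chapter probabilities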

3.2.2 Convolutional neural network (CNN)

Our convolutional architecture for the text features is inspired by recent developments in text classification[13]. Specifically, we start by embedding each word of the input text in a low-dimensional space using a dense lookup table. We then apply a one-dimensional convolutional layer with three filter widths (the sizes of the receptive fields), each filter spanning the full word-embedding dimension, with a fixed number of channels per width. We then use max-pooling over the time dimension to obtain a fixed-size vector representation of the medical record, and add a dropout layer. Finally, we obtain a 19-dimensional vector of probabilities for the ICD chapters by applying a fully connected layer followed by a sigmoid non-linearity.
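A sketch of the text branch in the same spirit; the vocabulary size, embedding dimension, filter widths (3, 4, 5) and channel count are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, widths=(3, 4, 5),
                     channels=100, n_chapters=19, p_drop=0.5):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.convs = nn.ModuleList(
                nn.Conv1d(emb_dim, channels, w) for w in widths)
            self.drop = nn.Dropout(p_drop)
            self.head = nn.Linear(channels * len(widths), n_chapters)

        def forward(self, tokens):                 # tokens: (batch, seq_len)
            x = self.emb(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
            # max-pool over the time dimension for each filter width
            pooled = [c(x).relu().max(dim=2).values for c in self.convs]
            h = self.drop(torch.cat(pooled, dim=1))
            return self.head(h), h                 # scores + text representation

    scores, rep = TextCNN(vocab_size=20000)(torch.randint(1, 20000, (4, 120)))
    probs = torch.sigmoid(scores)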

3.2.3 Representation learning

The output probability vector of the hybrid network is obtained by summing component-wise the 19-dimensional probability vectors of the multi-layer perceptron and the convolutional neural network. We consider a multi-label classification setting and define our loss as the sum of the binary cross-entropies over each of the 19 target ICD chapters. The hybrid model is trained using the Adam optimizer[16] with fixed learning rate and batch size.
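A sketch of the joint training step, reusing the StructuredMLP and TextCNN sketches above; the learning rate is an assumed value, and we rescale the summed probabilities by 1/2 so that the input of the binary cross-entropy stays in [0, 1] (one plausible reading of the component-wise combination).

    import torch
    import torch.nn as nn

    mlp, cnn = StructuredMLP(500), TextCNN(vocab_size=20000)
    opt = torch.optim.Adam(
        list(mlp.parameters()) + list(cnn.parameters()), lr=1e-3)
    bce = nn.BCELoss(reduction="sum")  # sum of per-chapter cross-entropies

    def train_step(x_struct, tokens, y):
        s_mlp, _ = mlp(x_struct)
        s_cnn, _ = cnn(tokens)
        # component-wise combination of the two branches' probabilities,
        # rescaled so it remains a valid probability for BCELoss
        p = 0.5 * (torch.sigmoid(s_mlp) + torch.sigmoid(s_cnn))
        loss = bce(p, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    y = (torch.rand(4, 19) < 0.3).float()
    print(train_step(torch.randn(4, 500), torch.randint(1, 20000, (4, 120)), y))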

Of the 46,520 patients in the MIMIC-III dataset, we kept 10,000 patients having at least one stay with 5 distinct ICD codes for validation and test; the remaining 36,520 patients were attributed to the training set. As a result, the 58,976 patient stays of the MIMIC-III dataset are split into a training set of 44,147 stays, a validation set of 7,472 stays and a test set of 7,357 stays. The training set contains 1,661,586 text documents. A summary is given in Table 2:

total train validation test
Patients 46520 36520 5000 5000
Stays 58976 44147 7472 7357
Documents 2083180 1661586 219751 201843
Table 2: Summary of train, validation and test sets.

We stop the training when the loss on the training set stops decreasing, which typically happens after a few hours of training on a GPU. Upon completion of the training, we extract an embedding for each stay by concatenating the hidden-layer activations of the MLP with the max-pooled representation obtained with the CNN (a sketch is given below). We therefore obtain a fixed-size representation of a stay, summarizing both the textual data contained in the medical records and the structured data associated with the stay.
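A sketch of the extraction step under the same assumptions: after training, each stay's embedding is the concatenation of the two hidden representations returned by the branches above.

    import torch

    @torch.no_grad()
    def embed_stays(x_struct, tokens):
        mlp.eval(); cnn.eval()
        _, h_mlp = mlp(x_struct)   # hidden layer of the structured branch
        _, h_cnn = cnn(tokens)     # max-pooled text representation
        return torch.cat([h_mlp, h_cnn], dim=1)   # (batch, d) embeddings

    emb = embed_stays(torch.randn(4, 500), torch.randint(1, 20000, (4, 120)))
    print(emb.shape)               # here d = 256 + 300 = 556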

4 Results

In the following, we present our results both on the multi-label ICD code classification task and on the relevance of the learned embedding.

4.1 ICD codes prediction

As a first analysis, we show the performance of our algorithm for the prediction of the presence or absence of ICD codes at the lowest resolution level (i.e. for the ICD chapters). Results are provided in Table 3. In order to get a sense of the performance of our algorithm, we compare it to a baseline random forest (multi-output) classifier trained on the raw structured features. More precisely, we train the random forest classifier on the training + validation set (51,619 visits). For all models, we compute precision, recall and F1 score on the test set. The last column of Table 3 gives the fraction of stays presenting each code in the test set (which is roughly the same as in the whole dataset). We note a high imbalance between the label classes, with ICD chapter presence ranging from 0.3% for Complications of Pregnancy to 71.8% for Diseases of the Circulatory System.

Precision Recall F1
rf deep emb+rf rf deep emb+rf rf deep emb+rf Presence
Diseases Of The Circulato… 0.841 0.994 0.963 0.998 0.999 0.997 0.891 0.997 0.980 0.718
Endocrine, Nutritional An… 0.727 0.746 0.730 0.995 0.930 0.952 0.813 0.828 0.826 0.595
Supplementary Classificat… 0.671 0.742 0.666 0.982 0.748 0.767 0.741 0.745 0.713 0.572
Diseases Of The Respirato… 0.834 0.996 0.974 0.609 0.997 0.991 0.681 0.996 0.982 0.418
Injury And Poisoning 0.956 0.702 0.647 0.406 0.611 0.419 0.515 0.653 0.508 0.387
Diseases Of The Genitouri… 0.816 0.995 0.975 0.493 0.994 0.981 0.582 0.995 0.978 0.366
Diseases Of The Digestive… 0.939 0.992 0.980 0.397 0.995 0.951 0.516 0.994 0.965 0.354
Symptoms, Signs, And Ill-… 0.778 0.614 0.568 0.329 0.497 0.326 0.442 0.549 0.414 0.341
Diseases Of The Blood And… 0.762 0.993 0.967 0.304 0.995 0.908 0.420 0.994 0.936 0.325
Mental Disorders 0.774 0.559 0.531 0.126 0.271 0.066 0.215 0.365 0.117 0.279
Supplementary Classificat… 0.965 0.739 0.701 0.144 0.430 0.150 0.237 0.544 0.247 0.278
Diseases Of The Nervous S… 0.969 0.996 0.995 0.141 0.990 0.820 0.231 0.993 0.899 0.263
Infectious And Parasitic … 0.996 0.711 0.636 0.291 0.501 0.241 0.417 0.588 0.350 0.245
Diseases Of The Musculosk… 1.000 0.987 0.988 0.002 0.973 0.338 0.005 0.980 0.504 0.168
Neoplasms 1.000 0.680 1.000 0.015 0.168 0.001 0.030 0.269 0.002 0.151
Diseases Of The Skin And … 1.000 0.455 0.000 0.001 0.040 0.000 0.003 0.074 0.000 0.101
Certain Conditions Origin… 1.000 0.999 1.000 0.504 1.000 0.958 0.624 0.999 0.979 0.093
Congenital Anomalies 0.895 0.992 1.000 0.082 0.960 0.398 0.148 0.976 0.570 0.051
Complications Of Pregnanc… 0.000 1.000 0.000 0.000 0.792 0.000 0.000 0.884 0.000 0.003
Total average 0.737 0.860 0.841 0.498 0.784 0.683 0.595 0.820 0.754 -
Table 3: Precision, Recall, F1 scores for a random forest classifier (rf) trained on the raw data, our deep neural network (deep) and a random forest classifier trained on the embeddings (emb+rf).

We see that our algorithm has a better F1 score for all chapters. The global precision and recall of our deep neural network are also better than those of the random forest classifier.

In order to test the quality of our embeddings, we also train the random forest classifier on the embeddings; the results are given in the column 'emb+rf' of Table 3. As expected, its performance is lower than that of the predictions made directly by our neural network. However, we see that on average the classifier performs better when trained on the embeddings than on the raw data. This shows that our embeddings, while of much lower dimension than the raw data, still contain the relevant medical information (at least enough to make good ICD code predictions). A minimal sketch of this baseline is given below.
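A minimal sketch of this 'emb+rf' baseline with toy arrays standing in for the learned embeddings; scikit-learn's random forest handles the 19-label output natively.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    d = 556                                        # toy embedding dimension
    E_train, E_test = rng.normal(size=(800, d)), rng.normal(size=(200, d))
    Y_train = (rng.random((800, 19)) < 0.3).astype(int)
    Y_test = (rng.random((200, 19)) < 0.3).astype(int)

    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    rf.fit(E_train, Y_train)                       # multi-output fit
    print(f1_score(Y_test, rf.predict(E_test), average="macro"))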

4.2 Medical semantic information encoding

We also assess the quality of our embeddings by looking at how medical concepts are encoded in the embedding space. In the following, we concentrate on antibiotic resistance and shock. Our network produces a low-dimensional embedding from which ICD codes are predicted; as a result, the network compresses the EHR data in an efficient way. Indeed, if a medical concept is very important for predicting ICD codes, we should be able to recover it from our embeddings. We show that this is indeed the case and that antibiotic resistance (resp. shock) corresponds to a particular direction in our embedding space.

Figure 2: Encoding of antibiotic resistance in our embeddings

In order to determine this particular direction, we proceed as follows. Let us first consider two bacteria: Enterococcus sp. and Staph Aureus Coag. To evaluate antibiotic resistance, we define four groups, as pictured in Figure 2:

  • (a) stays with sensitive Enterococcus sp. microbiology result

  • (b) stays with resistant Enterococcus sp. microbiology result

  • (c) stays with sensitive Staph Aureus Coag. microbiology result

  • (d) stays with resistant Staph Aureus Coag. microbiology result

The semantic relationship between groups (a) and (b) (resp. (c) and (d)) is antibiotic resistance. We then compute the vector u going from the centroid of group (a) to the centroid of group (b) in our embedding space, and likewise the vector v between the centroids of groups (c) and (d), as illustrated in Figure 2. Since each group has a different cardinality (group (b) being the smallest), working with centroids properly defines the vectors u and v. We then compute the cosine of the angle between u and v. As a comparison, the cosine between two random vectors in our embedding space has mean 0 and a variance that scales as the inverse of the embedding dimension; the value we find shows that u and v are significantly aligned. Indeed, we present in Figure 3 the corresponding cosines computed for each pair of bacteria with a sufficient number of stays: none of the cosines is negative, and the alignment is highly significant (a computation sketch follows below).
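A sketch of this measurement with toy arrays standing in for the four groups of stay embeddings; the group sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 556                                       # toy embedding dimension
    grp_a = rng.normal(size=(50, d))              # sensitive Enterococcus sp.
    grp_b = rng.normal(size=(12, d))              # resistant Enterococcus sp.
    grp_c = rng.normal(size=(80, d))              # sensitive Staph Aureus Coag.
    grp_d = rng.normal(size=(30, d))              # resistant Staph Aureus Coag.

    u = grp_b.mean(axis=0) - grp_a.mean(axis=0)   # sensitive -> resistant
    v = grp_d.mean(axis=0) - grp_c.mean(axis=0)
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # for random directions, cosines concentrate around 0 with std ~ 1/sqrt(d)
    print(cos, 1 / np.sqrt(d))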

Figure 3: Encoding of medical concepts in our embeddings: antibiotic resistance and shock

We carried out the same experiment for shocks, defining the following four groups:

  • (a) sepsis (as defined by the Angus criteria)

  • (b) septic shock (Angus criteria and vasopressor use)

  • (c) acute myocardial infarction (identified by its ICD-9 code)

  • (d) cardiogenic shock (the same ICD-9 code and vasopressor use)

The semantic relationship between (a) and (b) is similar to that between (c) and (d), and relates to the notion of shock. We compute the centroids of the stays of each group in the embedding space, then use the cosine between the vectors (a)→(b) and (c)→(d) as a similarity measure (the least populated of the four groups is (d)). We find a cosine value implying a significantly better alignment than between two random vectors.

5 Discussion

One strong advantage of our method is that the representation is learned without any human intervention, whereas traditional approaches require a clinical expert to determine the patterns to look for among the tens of thousands of features and the free text included in EHRs, as well as computer science expertise to implement them. Despite the notorious difficulty of interpreting embeddings, we were able to show experimentally that clinical concepts such as antibiotic resistance or shock are encoded in the resulting vector space. Joint learning makes it possible to leverage the information contained in both free text and structured data. This is an important aspect, as some clinical information may appear only in the free text: for example, many of the auto-immune comorbidities in [17] were not present in the structured data. Other published deep learning works focused either on structured data only[23][8] or on free-text data only[21][22].

Whereas previous works show that the underlying neural networks can be trained in a self-supervised way – see the stacked autoencoders in deep patient[8] – we chose here a supervised multi-label classification of ICD-9 diagnosis chapters, motivated by the goal of learning a general embedding encompassing the whole spectrum of clinical semantics. This is a hard classification task for many reasons, especially because a stay can belong to multiple classes, but also because of the high class imbalance even at the highest level of the ICD-9 classification. At this specific level – corresponding to 19 broad classes – our model nevertheless manages to organize finer concepts in its internal representation.

The information on antibiotic resistance, and the broader notion of shock, whether septic or cardiogenic, although present in the input features, was indeed reorganized as directions in the embedding space, as the collinearity measurement shows. This evaluation can be seen as an equivalent of the word analogies used for embeddings learned on raw text[18] (e.g. Paris − France ≈ Rome − Italy), which are regarded as indicative of embedding quality.

The size of the MIMIC-III database is a relative limitation for deep learning approaches, where larger datasets typically allow training larger neural networks and yield higher performance. This dataset has approximately 59,000 stays and 2 million documents, whereas a typical university hospital clinical data warehouse would store a few million stays[19]. Deep learning can still be successfully used on MIMIC-III[20]. In this study, some design choices are driven by these limitations: we limit the text input to a fixed maximum number of words, we aggregate the data to one timestamp per stay, losing the temporal dimension, we use shallower neural network architectures, and we reduce the number of structured features by feature selection. The feature selection is automated and complies with our goal that no expert intervenes in the feature learning step. It is done with a simple chi-square test and is made possible by the choice of a supervised classification task as the auxiliary learning task.

This work describes an accessible method to produce embeddings of the clinical data typically found in clinical data warehouses, and advances the medical interpretation of the internal representations produced by neural networks. Like every deep learning method it is quite computationally expensive, but the reuse of the embedding in other tasks justifies the initial cost. Indeed, we show that the embedding improves the performance of less computationally intensive algorithms, such as random forests, on the ICD coding prediction task. Our exploration of the embedding structure also motivates its application to similarity-based patient retrieval. Precomputing the embedding for a large clinical data warehouse could then be useful for other classification tasks or for cohort selection. Moreover, our embedding benefits from encapsulating variable dependencies learned on a large set of stays, which could prove useful to develop an algorithm predicting a specific condition, typically on a smaller dataset.

6 Conclusion

We presented a neural network architecture that successfully takes raw EHR data, both structured data and free text, as input and automatically builds a stay representation as a byproduct of learning to predict the ICD-9 diagnosis chapters assigned to each stay. We demonstrated that this embedding preserves a general medical semantic representation of the initial data. On the MIMIC-III dataset, the embedding also improves the performance of a random forest classifier on the prediction task compared to the raw data.

Our approach is general and flexible, and opens the way to many variations. More information could be incorporated into the learning task, and recurrent neural networks could be used to include time dependency. Moreover, different kinds of information can be withheld depending on the ultimate goal, in order to modify the supervised learning task and study the resulting embeddings.

Acknowledgements

Most of this work was done during the Datathon DAT-ICU organized by AP-HP in Paris on January 20-21 by the team DLDIY. In addition to the authors, Fajwel Fogel (Sancare), Mina He (BNP Paribas) and Gérard Weisbuch (ENS) contributed to the project and we thank them for their work. The team was one of the two winning projects (https://www.aphp.fr/contenu/datathon-dat-icu-intensive-care-unit-4-projets-innovants-selectionnes-lissue-de-48h-danalyse).

References