Recently, healthcare intelligence has become a hot research topic, mainly due to the following factors: 1) the wide adoption of digital healthcare systems, which produce huge amounts of valuable data such as electronic health records (EHRs); 2) the tremendous advancement of computational models, in particular deep learning methods; 3) an urgent need for intelligent healthcare systems to assist junior doctors and relieve the strain on medical resources (Figure 1 (a)), a need intensified by emergent public health incidents such as the COVID-19 pandemic [He2020TemporalDI]. One of the core EHR-based applications is recommending medications for patients, with the aim of assisting or even replacing doctors in making effective and safe medication prescriptions, as shown in Figure 1 (c).
However, recommending medications for patients is a challenging task due to the complexity of EHR data. As illustrated in Figure 1 (b), this complexity can be attributed to several factors. First, EHR data typically comprises multilevel medical records covering three key aspects: laboratory results, diagnosed diseases, and prescribed treatment medications. Within each visit, this multilevel structure closely mirrors the clinical decision-making pathway, which is inherently hierarchical. As shown in Figure 1 (b), the hierarchy generally begins with the laboratory results, which precisely record the detailed health progression of a patient; in the middle are the diseases diagnosed by doctors according to the corresponding laboratory results; and at the top are the medications prescribed by doctors after a comprehensive decision-making process. Thus, fully leveraging this inherent multilevel structural information is a critical factor in building intelligent medication recommendation systems. Most existing medication recommendation studies [Shang2019GAMENetGA, Zhang2017LEAPLT, He2020AttentionAM] focus on modeling the mapping relations between diagnosed diseases and recommended medications. Though these algorithms have achieved early success on the medication recommendation task, they often over-emphasize the visit-level temporal dependency and overlook the critical influence of the hierarchy shown in Figure 1.
Second, along with the temporal dependencies within each medical sequence, the complex sequential correlations embodied in the multilevel structure of EHR data (Figure 1 (b)) pose another challenge for medication recommendation. For example, laboratory results can provide strong hints for certain diseases: when the anion gap is abnormally high, creatinine is abnormally low, and aspartate is abnormally high, this suggests pulmonary-related diseases such as respiratory infections and pulmonary emphysema, and the patient needs corresponding treatments such as glucose, sodium bicarbonate, xylitol, and budesonide. Such phenomena clearly indicate the multilevel correlations of EHR data. However, most existing methods [Shang2019GAMENetGA, Zhang2017LEAPLT, Le2018DualMN] overlook these important relations between medical sequences and only consider the temporal dependencies. Though LSTM-DE [Jin2018KDD] and RAHM [AN2020103502] model the interactions of two sequences, they only consider the effects of related input sequences on the memory cell state while neglecting the influence on the current input state. Thus, in this paper, we infuse the interactions of two sequences into the temporal sequence learning network on both the memory cell state and the input cell state simultaneously.
Third, unlike the structure-related limitations discussed above, recognizing and filtering out the noisy information present in EHR data at each timestamp is another important challenge that inhibits recommendation performance. Few deep learning studies in health informatics incorporate feature selection into the learning process, except LSAN [Ye2020LSANML], which assigns flexible attention weights to different diagnosis codes according to their relevance to the corresponding diseases, thereby reducing the effect of irrelevant diagnosis codes in EHR data. In practice, however, doctors pay more attention to a critical few factors and neglect irrelevant medical indicators or historical medical codes. In other words, irrelevant features should be removed from the decision-making process, and unimportant historical medical codes should be given less attention. Consequently, a general (dense) attention mechanism might not be appropriate for the learning process.
To address the aforementioned challenges, in this paper we develop a Multilevel Selective and Interactive Network, called MeSIN. The key idea lies in three aspects. First, a multilevel learning framework is designed to encode the inherent multilevel structure of EHR data, imitating the decision-making process of doctors in hospitals. Second, to capture the intra-correlations of multiple visits within each medical sequence and the inter-correlations among the multiple sequences of EHR data, we propose a novel interactive temporal sequence learning network. Third, to handle the multiple heterogeneous inputs, including medical code embeddings and learned laboratory results embeddings, we introduce multiple attentional selective modules into the framework to make automatic and intelligent selections. Accordingly, our framework MeSIN consists of three key components: the attentional selective module (ASM), the interactive long-short term memory network (InLSTM), and the global selective fusion module (GSFM). In MeSIN, they work tightly together and significantly enhance each other for medication recommendation.
The main contributions of this study are as follows:
Multilevel Selective and Interactive Network (MeSIN). To the best of our knowledge, MeSIN is the first to formulate the medication recommendation task within a multilevel learning framework, which is a challenging process in clinical decision-making systems. It can fully leverage the inherent multilevel structure of EHR data to learn a comprehensive patient representation for reasonable medication recommendation.
Interactive Long-Short Term Memory Network (InLSTM). InLSTM can effectively reinforce the interactions of multiple temporal heterogeneous sequences with the help of a recurrent neural structure, a new calibrated memory-augmented cell and a novel enhanced input gate.
Attentional Selective Module (ASM). We incorporate multiple improved attentional selective modules into MeSIN, which can intelligently assign relevance scores to the learned medical codes embeddings according to their importance with recommended medications.
Global Selective Fusion Module (GSFM). We design a self-attention based global selective fusion module (GSFM) to effectively infuse the obtained heterogeneous embeddings into patient representation according to their respective importance and minimize the adverse effects induced by the irrelevant information.
2 Related works
Related studies in healthcare informatics are reviewed from the following three perspectives: medication recommendation, attention mechanisms in health informatics, and sequence modeling in health informatics.
2.1 Medication recommendation
Recently, artificial intelligence, particularly computational intelligence and machine learning methods and algorithms, has been widely applied in the development of recommender systems to improve prediction accuracy [Zhang2020ArtificialII]. Recommending rational and effective medications for patients in time, as a paramount recommendation task in the health domain, has attracted a great amount of research. Shang et al. [Shang2019GAMENetGA] categorized medication recommendation methods into instance-based and longitudinal sequential recommendation methods. Instance-based methods rely on the current disease progression of patients. For example, Zhang et al. [Zhang2017LEAP] formulated the medication recommendation task as a sequential decision-making problem and leveraged a recurrent decoder to model label dependency. Wang et al. [Wang2019OrderfreeMC] addressed the recommendation problem by casting the task as an order-free Markov decision process (MDP). However, these methods all ignore valuable historical information. Longitudinal sequential recommendation methods, by contrast, mainly consider the impact of historical medical records by modeling their temporal dependencies. For instance, Jin et al. [Jin2018ATE] developed three different LSTMs to model heterogeneous data interactions for predicting next-period prescriptions. Shang et al. [Shang2019GAMENetGA] incorporated historical disease and procedure codes, as well as medication records, into their model. Shang et al. [shang2019pre] considered hierarchical knowledge about diagnoses and medications to enhance code representations for medication recommendation. An et al. [AN2020103502] formulated the medication prediction task as a hierarchical multi-task learning framework to improve the interpretability of predicted results. However, few of them simultaneously consider all the heterogeneous sequences and the correlations between them when making medication recommendations.
2.2 Attention mechanism in health informatics
The attention mechanism automatically assigns importance scores according to information relevance: larger weights indicate that the corresponding vectors are more relevant to generating the output. Due to its powerful capability, the attention mechanism has been widely used in various neural network based applications such as language understanding tasks [devlin_etal_2019_bert], [Pruthi2020LearningTD] and computer vision problems [Shen2019SharpAN], [Hu2021AttentionalKE]. Likewise, the attention mechanism has become prevalent in predictive modeling in health informatics. For instance, GRAM [Choi2017GRAMGA], KAME [Ma2018KAMEKA], and G-BERT [shang2019pre] leveraged the attention mechanism to integrate domain knowledge into disease or medication code representations for better performance. RETAIN [Choi2016RETAINAI], Dipole [Ma2017DipoleDP], Timeline [Bai2018InterpretableRL], and LSAN [Ye2020LSANML] all introduced attention mechanisms to model disease progression by considering the dependencies among visits and to provide some interpretable insights. In addition, GCT [Choi2020LearningTG] was equipped with an advanced attention network, i.e., the Transformer [NIPS2017_7181], to build correlations between medical codes from every visit based on automatically learned attention weights. Similarly, AMANet [He2020AttentionAM] utilized multiple attention networks, including self-attention and inter-attention, to capture intra-view and inter-view interactions. However, the attention mechanisms used in the above models all generate dense attention weights without zero-valued entries, which means that they cannot filter out noisy information and focus on the critical aspects.
2.3 Sequence modeling in health informatics
Due to the complexity of clinical scenarios, as shown in Figure 1, EHR systems in hospitals accumulate complex temporal and heterogeneous sequences. Existing studies in health informatics have widely utilized the temporal sequential records in EHRs to solve healthcare problems such as predicting disease progression [Choi2018MiMEMM], [Qiao2019MNNMA], [Zhang2019ATTAINAT], [Ye2020LSANML], medication recommendation [Zhang2017LEAP], [Shang2019GAMENetGA], [Jin2018KDD], and clinical trial recruitment [Biswal2020Doctor2VecDD], [Zhang2020DeepEnrollPM]. However, most of these studies, such as T-LSTM [Baytas2017PatientSV], MNN [Qiao2019MNNMA], and LSAN [Ye2020LSANML], mainly focus on modeling the temporal dependencies of multiple visits within a homogeneous sequence, such as the historical disease sequence. Because the medication recommendation task involves multiple temporal and heterogeneous sequences, not only the temporal intra-dependencies but also the inter-correlations between the sequences should be considered when modeling the sequence learning process. Though GAMENet [Shang2019GAMENetGA] utilized two medical sequences to model temporal dependencies for medication recommendation, it did not consider the correlations between sequences. DMNC [Le2018DualMN] presented a two-view sequential learning model for complex interactions; however, its differentiable neural computer (DNC) blocks do not explicitly model sequential interactions. In contrast, Jin et al. [Jin2018KDD] developed three heterogeneous LSTM models to capture the correlations between different types of medical sequences by connecting hidden neurons, but neglected the impact on a patient's current status. MiME [Choi2018MiMEMM] modeled the inherent multilevel structure of medical codes by incorporating the relationships between diagnoses and their treatments into patient visit representations. AMANet [He2020AttentionAM] utilized multiple attention networks to capture intra-view and inter-view interactions of heterogeneous and temporal sequences, but overlooked the multilevel nature of EHR data.
3.1 Problem definitions
To facilitate the later introduction of our computational methods and generalize to other applicable datasets, we define the data from the electronic health record (EHR) system using mathematical symbols as follows. The longitudinal EHR data contains a large number of patient records, and each patient can be represented as a sequence of multivariate observations over time, where is the total number of patients and is the number of visits for the -th patient. Without loss of generality, we describe the model for a single patient, and the subscript (n) is dropped whenever it is unambiguous. Each visit consists of sequential laboratory indicator-wise results , where denotes the -th indicator result at the -th timestamp within the -th visit, and categorized data including (a union set of diagnosis codes) and (a union set of medication codes). For simplicity, we use to represent the unified definition of medical codes, where denotes the medical code set and denotes its size. is the medical code in .
Problem Definition 1 (Medication recommendation)
Given the historical visit records of a patient , the current laboratory results and the diagnosed diseases , our goal is to recommend reasonable medications by generating the multi-label output :
3.2 Multilevel selective and interactive network
We propose a novel architecture, MeSIN, for the medication recommendation task. As shown in Fig. 2, MeSIN is a multilevel learning framework that mainly consists of two steps. In step I, the hierarchical learning of historical information embeddings begins with the laboratory results embedding module, followed by the diagnosis code embedding module and then the medication code embedding module. In step II, the global selective fusion module infuses the learned heterogeneous embeddings into the patient representation according to the selective weights.
3.2.1 Laboratory sequence embedding module
As shown in Fig.2, the module mainly consists of three key parts: a multi-channel time-series embedding layer, an attentional selective layer, and a temporal sequence learning network.
Multi-channel time-series embedding layer.
Since the same clinical feature can carry different meanings for patients in different medical conditions, the progression of each laboratory indicator is correspondingly distinct. Thus, the embedding of each sequential feature, which represents the changing progress of an indicator, should be learned separately. Here, inspired by ConCare [ConCare2020], we employ a multi-channel time-series embedding layer to embed the sequence of each laboratory indicator separately with multi-channel GRUs:
where denotes the time series and represents the embedded vectors of feature q. Therefore, all the embedded vectors of time series of indicator features can be acquired in the same way. To reduce clutter, the superscript (t) representing the results generated at -th visit will be dropped whenever it is unambiguous.
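As a concrete sketch of the multi-channel layer (in PyTorch, which this work uses; all dimensions and class/parameter names are illustrative assumptions, not the paper's), each indicator gets its own GRU:

```python
import torch
import torch.nn as nn

class MultiChannelTSEmbedding(nn.Module):
    """One GRU per laboratory indicator: each channel embeds the scalar
    time series of that indicator into a dense vector (illustrative)."""
    def __init__(self, n_indicators, hidden_dim):
        super().__init__()
        self.grus = nn.ModuleList(
            [nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
             for _ in range(n_indicators)])

    def forward(self, x):
        # x: (batch, T, n_indicators) -- raw indicator values over time
        outs = []
        for q, gru in enumerate(self.grus):
            _, h_q = gru(x[:, :, q:q + 1])   # h_q: (1, batch, hidden_dim)
            outs.append(h_q.squeeze(0))
        # one embedding vector per indicator channel
        return torch.stack(outs, dim=1)      # (batch, n_indicators, hidden_dim)
```

Separate GRUs (rather than one shared recurrent network) keep the channels independent, which is what allows each indicator's progression to be embedded distinctly.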
ASM for laboratory results embeddings selection.
For each sequence of laboratory results, we obtain the corresponding embeddings . However, in clinical scenarios, doctors pay attention to only a few paramount indicators according to clinical experience, which effectively improves work efficiency.
In this case, an attention mechanism that computes attention weights with the softmax function [Softmax] might be inappropriate, because it produces dense attention alignments, which is wasteful and makes models less interpretable. Therefore, we introduce a sparse attention mechanism that uses entmax [Peters2019SparseSM] to compute the attention weights in the ASM of MeSIN, increasing focus on the relevant source medical code embeddings and making the model more interpretable. Here, we employ the proposed ASM to compute the enhanced laboratory results embedding :
where and are the parameters of ASM to be learned. Entmax [Peters2019SparseSM] generalizes softmax with a controllable parameter, by which we can tune the trade-off between dense and sparse alignments. As this value increases, entmax tends to produce sparser probability distributions, yielding a family of functions that interpolates between softmax and sparsemax. In this way, we can compute all the enhanced laboratory results embeddings at each timestamp.
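The sparsity effect can be illustrated with sparsemax, the alpha = 2 member of the entmax family (a NumPy sketch for illustration only; the ASM uses entmax with a tunable parameter): unlike softmax, it can assign exactly zero weight to low-scoring items.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the score vector onto the
    probability simplex. Low scores get exactly zero probability."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # scores in descending order
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv    # indices kept in the support
    k_z = k[support][-1]
    tau = (cssv[k_z - 1] - 1) / k_z      # threshold
    return np.maximum(z - tau, 0.0)
```

For scores [1.0, 0.9, -1.0], sparsemax zeroes out the third entry entirely, whereas softmax would still give it positive weight; this is exactly the filtering behavior the ASM relies on.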
Temporal sequence learning network.
To further capture the temporal dependency of multi-visit laboratory results, the enhanced laboratory results embeddings will be input into the temporal sequence learning network for combining with the historical laboratory results:
where represents the long short-term memory network (LSTM) used to capture the temporal dependency of the laboratory examination sequence, and denotes the obtained visit-level laboratory results embedding containing the historical information at the -th visit. Applying the same calculation at the remaining timestamps, we finally obtain all the historical laboratory results embeddings, which are input into the following embedding modules.
3.2.2 Diagnoses codes embedding module
After checking the laboratory results, doctors tend to retrieve the historically diagnosed diseases and combine them with the current disease condition for comprehensive decision-making. Accordingly, as shown in Fig. 2, we design a module that contains three critical parts: a diagnosis code embedding layer, an attentional selective module, and a novel temporal sequence interactive learning network.
Diagnosis code embedding layer.
Taking the timestamp t as an example, MeSIN first encodes each diagnosis code into a dense representation vector as:
where is the embedding matrix of medical codes that needs to be learned, is the size of embedding dimension, and is the size of medical code set. Thus, for the diagnosis code set , we can represent it by a collection of dense representation vectors . Then, for the -th visit, we can obtain dense embedding set , in which each embedding vector is extracted from if it exists in -th visit.
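A minimal sketch of this lookup (PyTorch; the vocabulary size and embedding dimension below are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

# Illustrative sizes: a vocabulary of 100 medical codes, 32-dim embeddings.
n_codes, emb_dim = 100, 32
embedding = nn.Embedding(n_codes, emb_dim)   # the learnable embedding matrix

# hypothetical diagnosis codes observed in one visit
visit_codes = torch.tensor([3, 17, 42])
visit_embs = embedding(visit_codes)          # dense vectors, one per code
```

Each row of the embedding matrix is a code's dense representation; indexing with the visit's code set yields the per-visit embedding collection described above.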
ASM for diagnoses codes embeddings selection.
However, as discussed before, not every historical disease has an impact on future disease risk, so we should assign a different relevance score to each code embedding according to its importance. Here we again leverage ASM for diagnosis code embedding selection. The enhanced diagnosis code embedding can be calculated as:
where and are the parameters of ASM to be learned. is the hyper-parameter of [Peters2019SparseSM] in this module. In this way, we can compute all the enhanced diagnoses codes embeddings .
InLSTM in diagnosis code sequence learning.
In addition to modeling the temporal dependency of a single sequence, we should also consider the interactions of multiple sequences in the sequence learning network. As noted before, laboratory results serve as critical references when doctors make diagnoses. Hence, the sequence of laboratory results should be used to control the diagnosed disease sequence learning process. Such a network therefore takes two input sequences: one is the primary input of the sequence learning network, i.e., the obtained diagnosis code embedding ; the other is the auxiliary input that assists in controlling the primary sequence learning process, i.e., the learned visit-level laboratory results embedding .
Therefore, the basic LSTM model [hochreiter1997long] is not appropriate under such circumstances. Inspired by LSTM-DE [Jin2018ATE], we propose a novel interactive long-short term memory network (InLSTM), as shown in Fig. 3, to reinforce the interaction of two associated sequences. It introduces two novel components: a calibrated memory-augmented cell and an enhanced input gate. It can be defined as:
where the detailed mathematical expression of is:
where indicates the obtained history memory state, and denotes the calibrated gate calculated from the auxiliary input . Afterwards, is used to obtain the calibrated memory-augmented cell state value by multiplying . In this way, the calibrated gate can selectively assign larger weights to the representative and predictive memory neurons while suppressing the unimportant ones. The input cell, besides the primary input itself, is also influenced by the auxiliary input . Under such a circumstance, the calibrated auxiliary input is introduced to calculate the auxiliary influence score by multiplying the normal input gate . Finally, the enhanced input gate is computed by adding the auxiliary influence score to the normal input gate , and then adjusted to a value between 0 and 1 via a function.
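Since the cell is described above only in prose, the following PyTorch sketch shows one plausible reading of it; all weight names and the exact gate compositions are illustrative assumptions, not the paper's definitive equations:

```python
import torch
import torch.nn as nn

class InLSTMCell(nn.Module):
    """Sketch of an interactive LSTM cell: an auxiliary input a_t
    (1) re-weights the history memory ("calibrated memory-augmented cell")
    and (2) boosts the input gate ("enhanced input gate")."""
    def __init__(self, x_dim, a_dim, h_dim):
        super().__init__()
        self.gates = nn.Linear(x_dim + h_dim, 4 * h_dim)  # i, f, o, g as in a plain LSTM
        self.W_cal = nn.Linear(a_dim, h_dim)              # calibrated memory gate
        self.W_aux = nn.Linear(a_dim, h_dim)              # calibrated auxiliary input

    def forward(self, x_t, a_t, h_prev, c_prev):
        i, f, o, g = self.gates(torch.cat([x_t, h_prev], dim=-1)).chunk(4, dim=-1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        # calibrated memory-augmented cell: the auxiliary input selectively
        # re-weights the history memory state before the cell update
        c_hist = torch.sigmoid(self.W_cal(a_t)) * c_prev
        # enhanced input gate: auxiliary influence score (calibrated auxiliary
        # input times the normal input gate) added to the normal input gate,
        # then squashed back into (0, 1)
        i_enh = torch.sigmoid(i + torch.sigmoid(self.W_aux(a_t)) * i)
        c_t = f * c_hist + i_enh * g
        h_t = o * torch.tanh(c_t)
        return h_t, c_t
```

Compared with a plain LSTM cell, the only additions are the two auxiliary-driven terms, so the cell degrades gracefully toward standard LSTM behavior when the auxiliary gates saturate.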
Therefore, for the diagnosis code embedding module, we can calculate the final visit-level diagnosis code embedding by fusing it with the historical diagnosed diseases as:
where denotes the proposed InLSTM (Eq.(7)) in this module, which is used to capture the correlations between the primary input of diagnosis code embedding sequence and the auxiliary input of laboratory results embedding sequence.
3.2.3 Medications codes embedding module
Similar to the diagnosis code embedding module in the previous hierarchy, the medication code embedding module comprises three main parts: a code embedding layer, an attentional selective module, and the temporal sequence interactive learning network.
Medication code embedding layer.
In this module, MeSIN still first encodes each medication code into a dense embedding vector as:
where is the embedding matrix of medical codes that needs to be learned. Then we can obtain the dense medication code embedding set , in which each embedding is extracted from if it exists in the -th visit.
ASM for medications codes embeddings selection.
As mentioned before, MeSIN needs to filter out the noise coming from irrelevant historical medication codes at each timestamp. Thus, we assign different relevance scores to the code embeddings using the attentional selective module to compute the enhanced medication set embedding :
where and are the parameters of ASM to be learned. is the hyper-parameter of [Peters2019SparseSM] in this module. Likewise, we can compute the enhanced medication code embedding sequence at the historical timestamps.
InLSTM in medications codes sequence learning.
Finally, to capture the temporal dependency of historical medications, the enhanced medication code embedding sequence is treated as the primary input of the sequence learning network. Moreover, since recommending medications is essentially a comprehensive decision-making process, the historically prescribed medications must have been affected by the laboratory results and diagnosed diseases. Therefore, the sequences of laboratory results and diagnosed diseases are taken as the auxiliary input that assists in controlling the sequence learning process. We then calculate the final visit-level medication code embedding by fusing the historical disease progression using InLSTM (Eq. (7)) as:
where denotes the proposed InLSTM in this module, which is used to capture the correlations between the primary input of the medication code embedding sequence and the auxiliary input . Further, the auxiliary input is calculated via a fusion module:
where denotes the tanh activation function, and and respectively represent the visit-level laboratory results embedding obtained via Eq. (4) and the diagnosed diseases embedding obtained via Eq. (11).
3.2.4 Global selective fusion module
In step I, by modeling the hierarchically interactive temporal sequence learning process, we obtain the visit-level laboratory results embedding , the diagnosis code embedding , and the medication code embedding , all of which incorporate the corresponding historical information. For recommending medications at the current timestamp, the current enhanced laboratory results embedding and diagnosis code embedding should be given more attention when making the final decision.
To effectively fuse the above five heterogeneous embeddings according to their importance scores and minimize the effect of irrelevant information as much as possible, in step II we design a global selective fusion module, realized by a self-attention mechanism. Since the five types of embeddings are heterogeneous, we first calculate their information importance scores as:
where and are the parameters to be learned. are the information importance scores, by which we can calculate the final information importance scores as:
Finally, we obtain the ultimate patient representation vector by summing up the heterogeneous information vectors according to importance scores from Eq.(17) as:
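A minimal sketch of such a score-and-fuse module (PyTorch; the projection and scoring layers are illustrative stand-ins for the learned parameters above, and softmax stands in for the normalization step):

```python
import torch
import torch.nn as nn

class GlobalSelectiveFusion(nn.Module):
    """Score each heterogeneous embedding with a small attention network,
    normalize the scores, and take the weighted sum as the patient vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)   # illustrative learned projection
        self.score = nn.Linear(dim, 1)    # illustrative scoring vector

    def forward(self, embs):
        # embs: (batch, K, dim) -- e.g. K = 5 heterogeneous embeddings
        s = self.score(torch.tanh(self.proj(embs)))  # raw importance scores
        alpha = torch.softmax(s, dim=1)              # normalized importance
        return (alpha * embs).sum(dim=1)             # patient representation
```

Because each embedding is scored independently before normalization, an irrelevant embedding can receive a near-zero weight and contribute little to the fused patient representation.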
where is the patient representation vector to be used to recommend reasonable medications in the next subsection.
3.2.5 Medication recommendation
Doctors make decisions about recommending reasonable medications for patients after comprehensive consideration. Likewise, the learned patient representation is employed in this study to recommend reasonable medications as:
where denotes the set of recommended multi-label medications, and and are parameters to be learned.
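For illustration, the final recommendation layer can be sketched as a linear projection followed by a sigmoid, with a hypothetical 0.5 threshold to form the multi-label medication set (sizes are illustrative; the threshold is an assumption, not stated above):

```python
import torch
import torch.nn as nn

# Illustrative sizes: a 64-dim patient vector, 143 candidate medications
# (the number of unique medications in the cohort).
dim, n_meds = 64, 143
head = nn.Linear(dim, n_meds)            # learnable output parameters

patient = torch.randn(1, dim)            # learned patient representation
probs = torch.sigmoid(head(patient))     # one probability per medication
recommended = (probs > 0.5).nonzero()    # indices forming the multi-label set
```

Each medication gets an independent probability, so any subset of the 143 candidates can be recommended, which is what makes this a multi-label rather than multi-class output.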
3.3 Model training
Since the medication recommendation task is a sequential multi-label prediction task, we utilize the binary cross-entropy loss and the multi-label margin loss as objective functions. The binary cross-entropy loss is formulated as:
The corresponding objective function multi-label margin loss is:
Thus, we obtain two binary cross-entropy loss functions, , and two multi-label margin loss functions, .
To facilitate the joint optimization process of two tasks, we combine the aforementioned loss functions to build a joint loss function :
where are the mixture weights, and . The training algorithm is detailed in Algorithm 1.
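For concreteness, one pair of objectives and their weighted combination can be sketched in PyTorch as follows; the mixture weights shown are illustrative defaults, not the settings used in this study, and the target encodings follow PyTorch's loss conventions:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()          # binary cross-entropy over logits
mlm = nn.MultiLabelMarginLoss()       # multi-label margin loss

def joint_loss(logits, target_multi_hot, target_indices, w1=0.9, w2=0.1):
    """Weighted combination of the two objectives (w1 + w2 = 1).
    target_multi_hot: float multi-hot labels for BCE.
    target_indices: true medication indices padded with -1, as
    required by MultiLabelMarginLoss."""
    return (w1 * bce(logits, target_multi_hot)
            + w2 * mlm(torch.sigmoid(logits), target_indices))
```

The margin term pushes the scores of prescribed medications above those of non-prescribed ones, while the BCE term calibrates each medication's probability individually.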
4 Experiments and discussion
4.1 Datasets description
As analyzed in Section 1, the aim of this study is to recommend medications for patients based on heterogeneous multilevel EHR data. Hence, we conduct experiments on a cohort in which patients have at least two visits and their EHRs are complete. We choose the real-world, publicly available MIMIC-III dataset [MIMIC] (https://mimic.physionet.org), which covers patients who stayed in the intensive care units (ICU) at Beth Israel Deaconess Medical Center and have relatively complete health records with multilevel heterogeneous data. Although MIMIC-III is ICU data, a certain number of its patients have multiple visits, so we adopt it as our experimental dataset. Following [Shang2019GAMENetGA], we take the medications prescribed for each patient within the first 24 hours as the medicine set, since this is usually a critical period for each patient to receive rapid and accurate treatment [Fonarow2005Effect]. Besides, the medicine codes are transformed from NDC to ATC Level 3 for integration with MIMIC-III. Meanwhile, we employ the second hierarchy of the ICD-9 codes (http://www.icd9data.com) as the disease category labels, since predicting category information not only guarantees sufficient granularity for all diagnoses but also improves the training speed and predictive performance [Ma2017DipoleDP, Choi2017GRAMGA]. To incorporate the laboratory results into the decision-making process, we follow the feature extraction method used in [Harutyunyan2019MultitaskLA], with a 24-hour time window for each laboratory indicator. More information about the patient cohort is listed in Table 1.
| # of patients | 4631 |
| # of unique diagnoses | 1879 |
| # of unique medications | 143 |
| # of unique laboratory indicators | 17 |
| avg # of visits | 2.55 |
| avg # of diagnoses | 10.16 |
| avg # of medications | 7.33 |
4.2 Evaluation metrics
To evaluate the performance, we adopt the Jaccard Similarity Score (Jaccard), Precision-Recall AUC (PR-AUC), Average Recall (Recall), and Average F1 (F1) as evaluation metrics. Jaccard is defined as the size of the intersection divided by the size of the union of the predicted set and the ground truth set . Precision measures the correctness of the predicted medicines, and Recall measures their completeness. F1 serves as a comprehensive evaluation metric of the prediction model.
where denotes the number of patients in the test set and is the number of visits for the -th patient. Given precision and recall, the evaluation metric F1 can be calculated as:
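These set-based metrics can be sketched directly (pure Python, for a single visit; the reported scores average them over all visits and patients):

```python
def jaccard(pred, truth):
    """|pred ∩ truth| / |pred ∪ truth| over medication code sets."""
    pred, truth = set(pred), set(truth)
    union = pred | truth
    return len(pred & truth) / len(union) if union else 1.0

def precision_recall_f1(pred, truth):
    """Precision (correctness), recall (completeness), and their
    harmonic mean F1 for one predicted medication set."""
    pred, truth = set(pred), set(truth)
    tp = len(pred & truth)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(truth) if truth else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, predicting {1, 2, 3} against a ground truth of {2, 3, 4} gives a Jaccard score of 2/4 = 0.5.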
4.3 Benchmark methods
To evaluate the effectiveness of the proposed model, it was compared to the following baseline methods:
Nearest. To predict treatment medicines for a patient , Nearest chooses the treatment medications prescribed for the patient whose historical laboratory indicators and medications are most similar to those of .
LR. A logistic regression with L1/L2 regularization. We sum the multi-hot vectors of all visits together and apply the binary relevance technique [Luaces2012BinaryRE] to handle the multi-label output.
NBN [Alexiou2017ABM]. This method mainly utilizes prior knowledge and employs statistical methods to recommend corresponding medications for patients.
Retain [Choi2016RETAINAI]. RETAIN is an interpretable model with a two-level reverse time attention mechanism to predict diagnoses, which can detect significant past visits and associated clinical variables. It can be used for similar sequential prediction tasks, such as predicting treatment medicines.
DELSTM [Jin2018KDD]. This model uses an additional input sequence as the input of a decomposed gate to control the memory cell state, which interacts indirectly with the primary input sequence.
PCLSTM [Jin2018KDD]. This structure takes all heterogeneous sequences as input. In other words, multiple sequences interact with each other via the neuron interactions in the way of concatenating both hidden states.
RAHM [AN2020103502]. It builds a relation augmented hierarchical multi-task learning framework for learning multi-level relation aware patient representation for medication prediction.
LEAP [Zhang2017LEAP]. LEAP formulates the medicine prediction problem as a multi-instance multi-label learning problem, mainly using a recurrent neural network (RNN) to recommend medicines.
DMNC [Le2018DualMN]. DMNC uses a memory augmented neural network to model the interaction of two asynchronous sequences for treatment prediction task [Le2018DualMN].
GAMENet [Shang2019GAMENetGA]. It employs a dynamic memory network to save encoded historical medication information, and further utilizes a query representation formed by encoding sequential diagnosis and procedure codes to retrieve medications from the memory bank.
AMANet [He2020AttentionAM]. AMANet leverages self-attention and inter-attention to capture the intra-view and inter-view interactions. Then it concatenates the information from history attention and dynamic external memory to predict the medications.
4.4 Experimental settings
We randomly split the patients in the MIMIC-III dataset into training, validation and test sets with a 2/3 : 1/6 : 1/6 ratio. The random splitting and training processes were performed five times. Table 2 lists the results, averaged across the five runs, for all compared models in terms of the four evaluation metrics described in Section 4.2. Specifically, the embedding size and the hidden layer dimension for the LSTM and GRU are set as and , respectively. The dropout rate is set as , the batch size is set as 10, and the mixture weights of the objective function are set as . The values of the attention sparsity controlling parameters in ASM are set as 1.5, 1.5 and 1.3, respectively. Training is done with Adam [Kingma2014AdamAM]
at a learning rate of 2e-4, and we report the model performance on the test set within 40 epochs. All methods are trained on an Ubuntu 16.04 machine with 64GB memory and an Nvidia TITAN Xp GPU using the PyTorch 1.0 framework.
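For reference, the set-based metrics reported above (Jaccard, Recall, F1) can be computed per patient as in the minimal sketch below; the averaging over patients and the thresholding of model scores are assumptions, and PR-AUC additionally requires the raw predicted probabilities:

```python
def jaccard(pred, true):
    """Jaccard similarity between predicted and ground-truth medication sets."""
    if not pred and not true:
        return 1.0
    return len(pred & true) / len(pred | true)

def precision_recall_f1(pred, true):
    """Set-based precision, recall and F1 for one patient's prediction."""
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Hypothetical predicted vs. ground-truth medication sets for one visit.
pred = {"m1", "m2", "m3"}
true = {"m2", "m3", "m4"}
print(jaccard(pred, true))  # 0.5  (2 shared codes / 4 codes in the union)
print(precision_recall_f1(pred, true))
```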
4.5 Performance comparison
As demonstrated in Table 2, the benchmark models used in health informatics are divided into three categories: shallow methods, including Nearest, LR and NBN; predictive models, including Retain, DELSTM, PCLSTM and RAHM; and recommendation models, including LEAP, DMNC, GAMENet and AMANet. From the table, we observe that the proposed MeSIN achieves superior performance over all listed benchmark models. Through detailed comparison with the benchmark models, several interesting observations can be made as follows.
First, as for the shallow methods, Nearest, LR and NBN score at least 5.74%, 1.35%, 8.53% and 7.69% lower than MeSIN on the medication recommendation task with respect to Jaccard, PR-AUC, Recall and F1 score, respectively. On the one hand, these methods consider neither the temporality nor the heterogeneity of EHR data, and they also overlook the relations between multiple sequences. On the other hand, Nearest achieves the worst performance, which indicates that most patients exhibit distinct disease conditions across consecutive hospital admissions.
Second, the predictive models in health informatics, Retain, DELSTM, PCLSTM and RAHM, also perform worse than MeSIN on the medication recommendation task. The main reason might be that they do not consider the current medical records, including laboratory indicators and diagnosed disease status, and only take the historical records into account. In addition, Retain is a two-level attention based model that can capture temporal correlations and identify influential past visits. However, it cannot handle each heterogeneous sequence separately and simply concatenates the heterogeneous embeddings into one embedding, which confuses the embeddings obtained from different hierarchies of EHR data. DELSTM and PCLSTM mainly focus on sequence interactions in the temporal sequence learning process while overlooking the inherent hierarchical structure of EHR data; in MeSIN, the multilevel learning framework is incorporated to extract useful information from this inherent structure. RAHM partially exploits the hierarchical nature of EHR data to acquire better performance via a multi-task learning framework. However, it still employs single-sequence learning methods to integrate the historical information, which might cause confusion among the different historical medical sequences.
Third, our proposed MeSIN outperforms all state-of-the-art medication recommendation methods, such as LEAP, DMNC, GAMENet and AMANet, by at most 8.64%, 14.72%, 13.93% and 10.43%, and by at least 3.34%, 0.92%, 3.87% and 2.89%, with respect to Jaccard, PR-AUC, Recall and F1 score. In practice, the medication recommendation problem within an admission might not be a purely sequential recommendation process, as it also involves diverse correlations. The weaker performance of these methods may be attributed to their limited ability to capture such complicated correlations. In particular, LEAP cannot capture the inherent multiple relations among heterogeneous sequences. DMNC realizes the interaction of two sequences through attention-based DNC blocks, but it neglects the medications from historical visits. Similarly, AMANet does not consider the medications historically prescribed for patients either, yet it achieves relatively better performance through multiple attention networks that capture the inter- and intra-correlations of heterogeneous sequences. However, AMANet neglects the evolution information, such as disease progression, captured through a temporal sequence learning network, which is still an important kind of information in the decision-making process.
Finally, we can observe that MeSIN also outperforms two special variants, and . For the former variant, we replace the developed interactive LSTM network (InLSTM) in MeSIN with the DELSTM network [Jin2018KDD]. In this variant, the interaction process only considers the impact of the auxiliary input on the memory cell state of the primary input sequence learning network, and overlooks the impact on the current input cell state, whereas the InLSTM network in MeSIN simultaneously considers the sequential interactions from both aspects. For the latter variant, we replace the attention weight computation method Entmax in the attentional selective module (ASM) of MeSIN with Softmax. Unlike Entmax, Softmax generates dense attention alignments, which is wasteful and cannot focus on the really important feature embeddings.
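The difference between the two interaction styles can be illustrated with a deliberately simplified, scalar toy cell; this is only a conceptual sketch of where the auxiliary gate acts, not the actual InLSTM parameterization defined in Section 3:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delstm_step(c_prev, x, aux, w=1.0):
    """DELSTM-style: the auxiliary input gates only the previous memory cell."""
    gate = sigmoid(w * aux)       # decomposed gate driven by the auxiliary sequence
    c_cal = gate * c_prev         # calibrated historical memory
    return math.tanh(x) + c_cal   # current input cell state is left untouched

def inlstm_step(c_prev, x, aux, w=1.0):
    """InLSTM-style: the auxiliary input gates BOTH the previous memory cell
    and the current input cell state, as described for MeSIN."""
    gate = sigmoid(w * aux)
    c_cal = gate * c_prev         # calibrate historical memory
    x_cal = gate * math.tanh(x)   # additionally calibrate the current input cell
    return x_cal + c_cal

# Same inputs, different updates: the auxiliary signal reaches more of the cell in InLSTM.
print(delstm_step(c_prev=0.5, x=0.2, aux=1.0))
print(inlstm_step(c_prev=0.5, x=0.2, aux=1.0))
```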
Therefore, the critical reasons why MeSIN achieves the best performance compared with all benchmark models can be summarized as follows: (1) the multilevel learning framework helps capture the inherent causal relations between adjacent hierarchies; (2) the multiple attentional selective modules incorporated in the framework realize effective embedding selection and make the learned patient representation more expressive; (3) the designed InLSTM further reinforces the sequence interactions from both the historical memory cell and the input cell, which optimizes the temporal sequence learning process by incorporating more useful calibrated information.
4.6 Ablation study
We now examine the effectiveness of the different components of MeSIN and evaluate the contribution of the different data sources. Hence, we conduct two kinds of ablation studies, on the model's components and on the multi-sourced EHR data, respectively.
4.6.1 Model components
This ablation study is conducted to verify the contribution of the different MeSIN components to its overall performance. To determine whether the incorporated components improve performance, we add them one by one from scratch and evaluate each configuration with all metrics, including Jaccard, PR-AUC, Recall and F1 score. Table 3 presents the recommendation results of the distinct MeSIN variants on the MIMIC-III dataset. In the basic baseline model, Vanilla, the medical code embeddings are simply added together as the enhanced embeddings in every module. Besides, standard LSTM networks are employed as the temporal sequence learning networks in the three distinct modules, and a concatenation-based fusion method replaces the proposed global selective fusion module. Nevertheless, Vanilla still achieves relatively better performance than the benchmark models, which can be attributed to the incorporation of multi-source data and the integration of current laboratory results and diagnosed diseases via the concatenation-based fusion method.
Attentional selective module (ASM). As explained in Section 3.2, ASM is introduced to automatically select useful information and filter out noisy information as much as possible by assigning attention weights to embeddings according to their respective importance. The following variants are tested to evaluate the contribution of the ASMs in different modules to the overall performance of MeSIN:
. In this variant, we incorporate an attentional selective module for laboratory results embedding selection. The overall performance is slightly improved, by 0.17% on Jaccard, compared with the Vanilla model. This testifies that the introduced ASM can help focus on the useful laboratory results embeddings by controlling the value of , yielding a relatively better enhanced embedding as the input of the temporal sequence learning network.
. Similar to , in this variant we introduce an attentional selective module to replace the addition operation for diagnoses code embedding selection. In this way, the diagnoses code embeddings that are irrelevant to the recommendation task are discarded by the sparse attention under the control of . As a result, the performance of is improved by 0.16% on Jaccard, which indicates the importance of the ASM in MeSIN for selecting useful information from numerous medical code embeddings.
. In this variant, we further incorporate a third ASM into the prescribed medications embedding module, selecting the historical medication code embeddings most relevant to building the patient representation. As a result, this variant yields a relatively larger improvement than and . We attribute this to the fact that the medication embedding module is directly relevant to the medication recommendation task. In total, the incorporation of the above three attentional selective modules brings improvements of about 0.55% on Jaccard, 0.13% on PR-AUC, 2.33% on Recall and 1.01% on F1 compared with Vanilla. More importantly, ASM makes MeSIN more interpretable by focusing on the really important features.
Interactive Long Short-Term Memory network (InLSTM). InLSTM is developed to reinforce the interaction process of heterogeneous sequences, which is beneficial for capturing the correlations between sequences. The following variants are tested to evaluate the contribution of InLSTM to the overall performance of MeSIN:
. In this variant, we incorporate the novel InLSTM to replace the standard LSTM in the diagnoses codes embedding module, enhancing the interaction between disease progression and changing laboratory results. It achieves an improvement of 0.29% on Jaccard compared with , which verifies the importance of incorporating the correlations between sequences, such as between laboratory results and diagnosed diseases, into the temporal sequence learning process.
. Here, the interactive LSTM is further introduced into the top hierarchy, the prescribed medications embedding module, to facilitate the medication code sequence learning process. In this sequence learning network, the diagnosed diseases are utilized to enhance the interaction with the prescribed medications, providing complementary useful information. Thus, this variant outperforms the fifth variant by 1.9% on Jaccard, which further indicates the superiority of the InLSTM in MeSIN over the standard LSTM.
Global selective fusion module (GSFM). After step I, the hierarchically interactive temporal sequence learning procedure, the obtained multi-source embeddings are integrated via the proposed global selective fusion module (GSFM) to obtain the patient representation. In this way, MeSIN can automatically learn the contribution scores of the distinct embeddings for the medication recommendation task. As a result, it improves by 0.4% on Jaccard compared with the sixth variant. This also indicates the advantage of GSFM over the concatenation-based method used in the above six variant models; note, however, that it is owing to the concatenation-based fusion method that Vanilla already gains relatively better performance than the benchmark methods.
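The fusion step can be sketched as an importance-weighted sum over the heterogeneous embeddings; treating the scoring function as a plain softmax over given scores is a simplifying assumption for illustration (the exact formulation of GSFM follows Eq. (16-18)):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of importance scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def global_selective_fusion(embeddings, scores):
    """Fuse heterogeneous embeddings into one patient representation,
    weighted by their (learned) importance scores."""
    weights = softmax(scores)
    dim = len(embeddings[0])
    fused = [0.0] * dim
    for w, emb in zip(weights, embeddings):
        for i in range(dim):
            fused[i] += w * emb[i]
    return fused

# Five hypothetical source embeddings (e.g. diagnoses, current labs,
# historical medications, historical labs, ...), dimension 3.
embs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
        [0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]
scores = [2.0, 1.0, 0.5, 0.2, 0.1]  # assumed learned importance scores
print([round(v, 3) for v in global_selective_fusion(embs, scores)])
```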
4.6.2 Heterogeneous Data
According to the proposed method, multilevel EHR data are input to MeSIN to obtain the final patient representation. Though each level plays a paramount role in the clinical decision-making scenario, we build the following MeSIN variants to evaluate the impact of the different heterogeneous data on the medication recommendation results (Table 4). In , the laboratory results embedding module in MeSIN is removed, and the InLSTM introduced in the diagnoses codes embedding module is replaced by a standard LSTM network; in this case, the patient's detailed health status is unknown. In , the diagnoses codes embedding module is removed from MeSIN, retaining only the remaining two modules; under such circumstances, the learned patient representation loses the key disease progression information. In , the medications codes embedding module is removed from MeSIN, so the learned patient representation loses the historical medication information.
Clearly, it can be noticed from Table 4 that the performance of all variants drops, for several apparent reasons. First, in practice, most medications are prescribed conditioned on the diagnosed diseases; therefore, the performance of drops dramatically, which validates the crucial role of diagnosed diseases and disease progression in the medication recommendation task. Second, though historical medications in the ICU are not very valuable for most patients, they remain an important source of information for understanding a patient's disease history, revealing details such as allergic conditions. Thus, the performance of also drops significantly on the medication recommendation task. Third, the performance of drops as well, but only slightly compared with . The main reason is that the current patient health status, including the diagnosed diseases and key laboratory indicator results, remains a paramount indicator of the patient's condition. Thus, though the importance of each hierarchy of data within EHRs differs, all of them play important roles in the medication recommendation task.
4.7 Attention analysis in selective module
As discussed above, our newly developed MeSIN outperforms all benchmark models on medication recommendation. Among the constituent components of MeSIN, the attentional selective module (ASM) plays a key role, as verified through the ablation studies in Section 4.6.1. The positive influence of ASM should be attributed to the selective ability of entmax, which increases the focus on important medical code embeddings and makes the process more interpretable. Hence, we perform an attention analysis to explore the attentive process (Figure 4), visualize the difference between softmax and entmax (Figure 5), and investigate the importance of the multi-source embeddings (Figure 6).
The attentive process. To clearly interpret the attentive process, as shown in Figure 4, we only consider the relations between the second and third hierarchies (the diagnoses codes embedding module and the prescribed medications embedding module) within our multilevel learning framework. The quantitative values in columns DA and MA denote the attention weights calculated by Eq. (6) and Eq. (13), respectively. As shown in Figure 4, the attentive process can be divided into four distinct but correlated processes, from which we make four interesting observations. First, in attentive process (1), the learned visit-level diagnoses codes embedding is input into the medication codes embedding module to interact with the medication code embeddings of historical visits. We can thus observe strong causal relations between the DC column (diagnoses codes) and the MC column (medication codes) within each visit. In this way, the diagnoses codes that correspond to the medication codes in the recommendation label column are assigned larger attention weights within each visit. Second, owing to the temporal dependencies of EHR data, the recommended medications in the Recommend column depend not only on the diagnosed diseases in the DC column of the third visit, but also on the historically prescribed medications and the disease progression. Accordingly, as shown in attentive process (2), the medication code embeddings that appear in historical visits and in the label column are assigned larger attention weights. Similarly, in attentive process (3), given the inherent causal relations between diseases and medications, the corresponding diagnoses code embeddings are also assigned larger attention weights.
Finally, as shown in attentive process (4), to capture the temporal dependency of EHR data, the medication codes sequence learning process is influenced by the diagnoses codes sequence learning process with the help of the proposed InLSTM in MeSIN.
The calculation methods of attention weights: Softmax and Entmax. In MeSIN, Entmax is incorporated into the attentional selective module (ASM) to make more intelligent selections: it helps filter out noisy information and focuses more on the important feature embeddings. Figure 5 provides a heat map that shows the difference between the attention weights computed by Entmax [Peters2019SparseSM] and by Softmax in the laboratory results embedding selection module. We observe that Entmax generates sparse attention weights within each visit; in other words, it drives the attention scores of some unimportant indicator embeddings, such as CRR, DBP, MBP, OS, RR and PH, to exactly zero. In this way, MeSIN can intelligently select which feature embeddings deserve attention and which do not when making decisions in the clinical decision-making process. Therefore, this attention weight computation method helps the ASMs of MeSIN increase focus on the really important feature embeddings and makes the model more interpretable.
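The sparsity effect can be illustrated in a few lines. MeSIN uses entmax with α = 1.5; the sketch below instead implements sparsemax (Martins and Astudillo, 2016), the simplest sparse member of the same family (the α = 2 case), which suffices to show how some weights become exactly zero while softmax keeps every weight positive:

```python
import math

def softmax(z):
    """Dense attention: every score receives a strictly positive weight."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def sparsemax(z):
    """Sparsemax, the alpha=2 member of the entmax family: project the scores
    onto the probability simplex, so low scores get exactly zero weight."""
    zs = sorted(z, reverse=True)
    cumsum = 0.0
    tau = 0.0
    for k, v in enumerate(zs, start=1):
        cumsum += v
        if 1 + k * v > cumsum:       # v is still inside the support
            tau = (cumsum - 1) / k
        # once this condition fails for sorted scores, it fails for all later k
    return [max(v - tau, 0.0) for v in z]

scores = [1.2, 1.0, 0.1, -1.0]       # hypothetical attention scores
print([round(v, 3) for v in softmax(scores)])    # all entries > 0
print([round(v, 3) for v in sparsemax(scores)])  # [0.6, 0.4, 0.0, 0.0]
```

The last two entries are pruned to exactly zero, mirroring how the ASM suppresses indicators such as CRR or DBP in Figure 5.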
The importance of multi-source embeddings. As mentioned in Section 3.2.4, to fuse the five obtained heterogeneous embeddings, we introduce a global selective fusion module, which integrates them into the patient representation according to their respective importance scores and minimizes the adverse effect of noisy information. In Figure 6, we can observe that (see details in Eq. (16-18)), which indicates the importance ranking of the multi-source embeddings. This phenomenon further testifies that diagnosed diseases, especially the disease progression with historical disease information, are the most important information for the medication recommendation task, as also shown in the ablation study in Table 4. The current laboratory results are the second most critical factor when deciding on the recommended medications. In addition, the historically prescribed medications are also taken into account. Finally, the historical laboratory results might not be so important in the intensive care unit (ICU). However, because different patients have different disease statuses, the learned attention weights change dynamically, which makes the computed relevance score distributions diverse as well. For example, the historical medications might be more important than the diagnosed diseases when the diagnosis is an adverse drug reaction. Through the above analysis, we can see that MeSIN can provide insightful and interpretable recommendation results.
In this paper, we propose a novel multilevel selective and interactive network for the medication recommendation task with clinical EHR data. In our model, the inherent causal relations and temporal dependencies of EHR data are effectively captured via the proposed multilevel learning framework and a novel interactive LSTM cell. Considering the inevitable noise within EHR data, multiple attentional selective modules are incorporated into the model to focus on the really important feature embeddings while providing insightful and interpretable recommendation results. Finally, we evaluate our model on a real-world, public clinical dataset. The experimental results show that our model achieves the best recommendation performance against eleven baselines in terms of Jaccard, PR-AUC, Recall and F1 score. In the future, we plan to adapt the proposed approach to more healthcare prediction tasks based on sequential data and to explore its usage in domains other than healthcare.
Funding: This research was partially supported by the National Key R&D Program of China (2018YFC0116800), National Natural Science Foundation of China (No. 61772110 and 71901011).
Declaration of Competing Interest
The authors declare that there is no conflict of interest.