The availability of large volumes of electronic health records (EHR) provides great opportunities to improve health care services by mining these data. One important application is clinical endpoint prediction, which aims to predict whether a disease, a symptom, or an abnormal lab test will occur in the future according to patients' history records. This paper develops deep learning techniques, which have proved effective in many practical applications, for clinical endpoint prediction. The problem is challenging since patients' history records contain multiple heterogeneous temporal events such as lab tests, diagnoses, and drug administrations. The visiting patterns of different types of events vary significantly, and there exist complex nonlinear relationships between different events. In this paper, we propose a novel model for learning the joint representation of heterogeneous temporal events. The model adds a new gate to control the visiting rates of different events, which effectively models the irregular patterns of different events and their nonlinear correlations. Experimental results with real-world clinical data on the tasks of predicting death and abnormal lab tests prove the effectiveness of our proposed approach over competitive baselines.
The volume of electronic health records (EHR) is expanding at a staggering rate, providing a great opportunity for machine learning and data mining researchers to analyze these data so as to provide better health care service. An important application of machine learning in health care is predicting the clinical endpoints such as a disease, symptom, or laboratory abnormality based on patients’ historical records.
This paper develops effective deep learning techniques for clinical endpoint prediction, since deep learning has proved effective for predictive analysis in a variety of applications such as image recognition [He et al. 2016], speech recognition [Hinton et al. 2012], and natural language understanding [Blunsom et al. 2017]. The goal of deep learning is to learn effective semantic representations of high-dimensional data such as images, speech, and natural language. Accordingly, our goal is to learn effective representations of patients' historical records.
However, the problem is challenging since patients' historical records contain a variety of heterogeneous temporal events such as lab tests, routine vital signs, diagnoses, and drug administrations (see Fig. 1 for an example). The visiting rates of different events vary significantly. For example, a patient may take a blood test every morning while having a temperature measurement every two hours. Besides, there is a high level of dependency among different kinds of events; for instance, some diagnoses are made according to the results of certain lab tests. As a result, these heterogeneous temporal events yield heterogeneous event sequences consisting of thousands of correlated event types whose visiting rates vary significantly.
In the literature, learning representations of sequences has been widely studied, especially in the domains of speech recognition and natural language understanding. The state-of-the-art approaches for sequence modeling are recurrent neural networks (RNNs) [Mikolov et al. 2010] with Long Short-Term Memory (LSTM) units [Hochreiter and Schmidhuber 1997]. RNNs are commonly used for modeling homogeneous sequences, but it is nontrivial to apply them to heterogeneous event sequences. Some recent works are based on multi-task Gaussian Processes (MTGP) [Ghassemi et al. 2015] for modeling the correlations between multiple sequences. However, the computational cost of MTGP is too expensive for EHR data since there are thousands of event types. Therefore, we seek an approach that is able to: (1) effectively model the irregular visiting patterns of different events; (2) model the complex nonlinear relationships between different events; (3) scale up to a large number of different event types.
In this paper, we propose such an approach, called Heterogeneous Event LSTM (HE-LSTM), for learning the joint representation of heterogeneous event sequences. Our approach is an extension of Phased LSTM [Neil, Pfeiffer, and Liu 2016], which was recently proposed to model irregular event-based sequential data. Compared to the vanilla LSTM model, Phased LSTM adds a new time gate, which is able to naturally integrate inputs from several sensors with arbitrary sampling rates. However, Phased LSTM is not suitable for modeling the heterogeneous event sequences with thousands of event types in longitudinal EHR data. Our proposed model extends it by modeling correlated heterogeneous events with multi-scale sampling rates. Each event type and its attributes are embedded and fed into the HE-LSTM, which is equipped with an event gate controlled by the event type embeddings and their timestamps. With the help of the event gates, the HE-LSTM can trace the temporal information of different event types in the long heterogeneous event sequence by asynchronously sampling important and related events. Therefore, the representation of heterogeneous temporal events can be updated based on the dependency between the current input event and the other events maintained in the HE-LSTM.
We conduct extensive experiments on real-world clinical data. Experimental results on the tasks of death prediction and abnormal lab test prediction show that our proposed approach outperforms competitive baselines. The approach can also be widely applied to modeling data collected from sensors with arbitrary sampling rates, such as mobile sensors.
Our main contributions are:
We formulate the clinical endpoint prediction task based on EHR data as a representation learning problem over heterogeneous temporal events consisting of asynchronous clinical records from multiple sources.
We propose a novel model called HE-LSTM for learning representations of heterogeneous event sequences. The model effectively captures the multi-scale sampling rates of different kinds of events and their temporal dependency.
We conducted experiments on real-world clinical data on the tasks of predicting death and abnormal lab tests. Promising results prove the effectiveness of our proposed approach over competitive baselines.
There are plenty of works trying to solve the clinical endpoint prediction problem. However, many of them only use a small subset of the whole EHR sequences in order to avoid dealing with the high-dimensional event types. Some works select a subset of the clinical events from the EHR data according to the expertise of physicians [Caballero Barajas and Akella 2015]. For instance, Alaa et al. only use a set of 21 temporal physiological streams, comprising 11 vital signs and 10 lab test scores, to predict ICU admission [Alaa, Hu, and van der Schaar 2017]. Other techniques select 50 time series from the whole set of EHR data, transform the fixed-size subset into a new latent space using the hyper-parameters of multi-task GP (MTGP) models, and then calculate the similarity of patients' records in the new hyper-parameter space [Ghassemi et al. 2015]. Notably, manually selecting only a fraction of the clinical sequences from the original EHR data as the input introduces expert bias, so these works seldom make full use of the important information in the original data.
Most works ignore the content or value of clinical events and only use the type information of clinical events to predict the endpoints [Liu et al. 2015]. Specifically, some approaches train semantic embeddings of different categories of clinical events for endpoint prediction [Henriksson et al. 2015]. RETAIN uses two reversed recurrent neural networks (RNNs) to generate attention variables over sequential ICD-9 code groups for the prediction tasks [Choi et al. 2016]. Other works use convolutional neural networks (CNNs) to model irregular medical codes for future risk prediction [Nguyen et al. 2016]. These works only exploit the type information of historical clinical events to make predictions, ignoring the fine-grained, varying attributes of the events. Our work addresses this issue by utilizing the rich type information of clinical events as well as their content and values.
Standard RNNs trained with stochastic gradient descent have difficulty learning long-term dependencies (i.e., spanning more than 10 time steps) encoded in the input sequences, owing to the vanishing gradient [Hochreiter et al. 2001]. The problem has been addressed, for example, by using a specialized neuron structure in Long Short-Term Memory (LSTM) networks [Hochreiter and Schmidhuber 1997] that maintains constant backward flow of the error signal.
In the Clockwork RNN (CW-RNN) [Koutnik et al. 2014], the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity and making computations only at its prescribed clock rate. In this way, the fixed clock periods help to capture long-term dependencies.
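A minimal sketch of the CW-RNN update schedule described above: each module fires only when the step index is divisible by its clock period. The powers-of-two periods are illustrative (the original paper allows any exponential series).

```python
# CW-RNN-style schedule: module g updates only when the step index
# is divisible by its clock period 2**g (periods are illustrative).
def active_modules(step, n_modules):
    return [g for g in range(n_modules) if step % (2 ** g) == 0]

assert active_modules(0, 4) == [0, 1, 2, 3]   # all modules fire at step 0
assert active_modules(6, 4) == [0, 1]         # only periods 1 and 2 divide 6
```

Slow modules thus retain their state over long spans, which is how the fixed clocks help with long-term dependencies.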
Phased LSTM [Neil, Pfeiffer, and Liu 2016] is a state-of-the-art RNN architecture for modeling event-based sequential data. It extends LSTM by adding a time gate. The gate has three phases: it rises from 0 to 1 in the first phase and drops from 1 to 0 in the second phase, which are the active states. During the third phase, the gate is in the inactive state. Updates to the cell state c_t and the hidden state h_t are permitted only in the active state. The Phased LSTM network achieves fast convergence in most experiments, owing to the fact that the auto-sampling of the long sequential data conducted by the time gate preserves the derivative error over the longer back propagation.
However, these models only focus on learning long-term dependencies in homogeneous sequences, lacking the ability to capture the various and complex temporal dependencies in heterogeneous temporal events, which usually exist in EHR data.
We first introduce some notation and the definition of the task.
A heterogeneous event is defined as a triple (e, a, t), where e is the category of the event, a is the attribute of the event, and t is the time at which e and a are logged. It is noteworthy that the attributes of different event types can be either numerical or categorical variables. For example, the attribute of a lab test, e.g., a lactate blood test, is numerical, while the attribute of a clinical status, e.g., ectopy type, is a categorical variable (i.e., fusion beats, nodal bigeminy).
Heterogeneous events are merged in ascending order of record time into a triple sequence. We denote the heterogeneous event sequence in a period of time as S = ((e_1, a_1, t_1), ..., (e_n, a_n, t_n)).
The clinical endpoint prediction task is formulated as follows: given a clinical heterogeneous event sequence S and a binary label y for the target endpoint occurring 24 hours later, the objective is to predict the target endpoint 24 hours in advance using S.
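The triple representation and the time-ordered merge above can be sketched as follows; the class and field names are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Union

# A minimal sketch of the heterogeneous event triple (e, a, t): event type,
# attribute (numerical or categorical), and record time.
@dataclass
class Event:
    etype: str                 # event category, e.g. a lab test or vital sign
    attr: Union[float, str]    # numerical value or categorical label
    t: float                   # record time, e.g. hours since admission

# The heterogeneous event sequence is the time-ordered merge of all records.
def make_sequence(records):
    return sorted(records, key=lambda ev: ev.t)

seq = make_sequence([
    Event("heart_rate", 88.0, 2.0),
    Event("lactate_blood_test", 3.1, 1.5),
    Event("ectopy_type", "fusion beats", 4.0),
])
assert [ev.etype for ev in seq] == ["lactate_blood_test", "heart_rate", "ectopy_type"]
```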
In this paper, we aim to dynamically predict two endpoint outcomes based on the heterogeneous event sequence of patient data in EHR. In the first, "death prediction" dataset, the endpoint outcome is either death in hospital or discharge to home. In the second, "lab test result prediction" dataset, the endpoint outcome is either an abnormal result of the potassium lab test or clinical stability.
In this section, we introduce the technical details about our proposed model. The overall view of our model is illustrated in Figure 2.
To help the HE-LSTM trace temporal information of various kinds of events, we use "event type embedding" and "attribute encoding" to map the types and attributes of the high-dimensional events into compact continuous vectors, which are trained end-to-end with the HE-LSTM.
Each event in the sequence is embedded into three parts that are fed to the HE-LSTM for endpoint prediction: the event type embedding vector, the event attribute encoding vector, and the scalar time t.
The event type vector carries the information of the event category and is constructed solely from the one-hot representation of the event type. Similar to word embedding [Mikolov et al. 2013], it provides a low-dimensional vector of the event type with semantic meaning in the clinical field. An embedding lookup matrix of size d x K, where d is the embedding dimension and K is the number of event types, is established for training. The event type vector is given by the column of the lookup matrix selected by the event's one-hot type indicator.
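The lookup can be sketched in a few lines; the dimensions and table values below are illustrative, and the one-hot product reduces to a simple row lookup.

```python
# A toy embedding lookup: d-dimensional vectors for K event types.
d, K = 4, 3
W = [[0.1 * (i + j) for j in range(d)] for i in range(K)]   # K x d lookup table

def embed_type(type_id):
    # multiplying the table by a one-hot vector selects one row
    onehot = [1.0 if i == type_id else 0.0 for i in range(K)]
    return [sum(onehot[i] * W[i][j] for i in range(K)) for j in range(d)]

assert embed_type(2) == W[2]
```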
The event attribute encoding vector represents the combined information of both the event type and the attribute of the event, and is the main input of the HE-LSTM. Each event has two kinds of attributes. One is categorical, with a one-hot representation over the number of distinct categorical attribute values across all event types. The other is numerical, with a one-hot representation over the number of numerical attribute types across all event types. Each value of a categorical attribute is assigned an embedding vector, and each numerical attribute type is associated with a value encoding vector of the same embedding dimension.
The representing vector of a record is mainly decided by its event type; however, the event attributes also carry much information for modeling patients. Different values of the same event type, such as the abnormal label in a lab test event, can lead to distinct estimates of the patient's future health status. Another important component is a disturbance from the numerical attribute values. For instance, a high numerical value of the lactate blood lab test indicates a potential health problem of the patient, while a low value does not offer much information. Finally, the attribute encoding vector combines these three parts of information through a learned nonlinear transformation, where the weight matrices and biases are parameters to learn.
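A minimal sketch of this combination, assuming a simple additive scheme: the type embedding, the categorical attribute embedding, and a value-scaled numerical encoding are summed and passed through a nonlinearity. All tables, names, and values here are illustrative, not the paper's learned parameters.

```python
import math

d = 3  # illustrative embedding dimension
TYPE_EMB = {"lactate_blood_test": [0.2, -0.1, 0.4]}   # event type embeddings
CAT_EMB  = {"abnormal": [0.5, 0.0, -0.3]}             # categorical value embeddings
NUM_ENC  = {"lactate_value": [0.1, 0.2, 0.0]}         # numerical value encoders

def encode(etype, cat=None, num_attr=None, num_value=0.0):
    x = list(TYPE_EMB[etype])                              # base: type embedding
    if cat is not None:
        x = [a + b for a, b in zip(x, CAT_EMB[cat])]       # add categorical part
    if num_attr is not None:
        x = [a + num_value * b for a, b in zip(x, NUM_ENC[num_attr])]  # scaled numeric part
    return [math.tanh(v) for v in x]                       # nonlinearity

v = encode("lactate_blood_test", cat="abnormal",
           num_attr="lactate_value", num_value=3.1)
assert len(v) == 3 and all(-1.0 < t < 1.0 for t in v)
```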
Long Short-Term Memory (LSTM) units [Hochreiter and Schmidhuber 1997] (Fig. 2(a)) are an important ingredient of modern deep RNN architectures. We first define their update equations in a commonly used version:

i_t = σ(W_xi x_t + W_hi h_{t−1} + w_ci ⊙ c_{t−1} + b_i)
f_t = σ(W_xf x_t + W_hf h_{t−1} + w_cf ⊙ c_{t−1} + b_f)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t−1} + b_c)
o_t = σ(W_xo x_t + W_ho h_{t−1} + w_co ⊙ c_t + b_o)
h_t = o_t ⊙ tanh(c_t)

The main difference from classical RNNs is the use of the gating functions i_t, f_t, and o_t, which represent the input, forget, and output gates at time t, respectively. c_t is the cell activation vector, whereas x_t and h_t represent the input feature vector and the hidden output vector, respectively. The gates use the typical sigmoid function σ and the nonlinear function tanh, with weight parameters W_xi, W_hi, W_xf, W_hf, W_xo, W_ho, W_xc, and W_hc, which connect the different inputs and gates with the memory cells and outputs, as well as biases b_i, b_f, b_o, and b_c. The cell state c_t itself is updated with a fraction of the previous cell state controlled by f_t, and a new input state created from the element-wise product, denoted by ⊙, of i_t and the output of the cell state nonlinearity tanh. Optional peephole [Gers and Schmidhuber 2000] connection weights w_ci, w_cf, and w_co further influence the operation of the input, forget, and output gates.
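For concreteness, here is one step of a single-unit LSTM cell following the equations above (without the optional peepholes); the scalar weights are illustrative placeholders, not trained values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One step of a single-unit LSTM cell (scalar weights for clarity).
def lstm_step(x, h_prev, c_prev, p):
    i = sigmoid(p["wxi"] * x + p["whi"] * h_prev + p["bi"])    # input gate
    f = sigmoid(p["wxf"] * x + p["whf"] * h_prev + p["bf"])    # forget gate
    o = sigmoid(p["wxo"] * x + p["who"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wxc"] * x + p["whc"] * h_prev + p["bc"])  # candidate state
    c = f * c_prev + i * g                                     # cell update
    h = o * math.tanh(c)                                       # hidden output
    return h, c

p = {k: 0.5 for k in ("wxi", "whi", "wxf", "whf", "wxo", "who", "wxc", "whc")}
p.update({k: 0.0 for k in ("bi", "bf", "bo", "bc")})
h, c = lstm_step(1.0, 0.0, 0.0, p)
assert -1.0 < h < 1.0 and -1.0 < c < 1.0
```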
HE-LSTM extends the LSTM model by adding a new event gate. The event gate has two factors: an event filter and a phase gate. The event filter only allows the information of a certain cluster of events to flow into the corresponding memory cell, so that each cell traces only a particular group of events. In collaboration with the phase gate, the event filter helps the network maintain the temporal information of different events at multi-scale sampling rates. The dependency among the heterogeneous events is then easier to capture thanks to the diverse and long memory of correlated events.
The opening and closing of this event gate are controlled by the event type embedding and an independent rhythmic oscillation specified by the phase gate [Neil, Pfeiffer, and Liu 2016] with three parameters. Updates to the cell state c_t and the hidden state h_t are permitted only when the gate is open.
One factor of the event gate, the event filter, is for each neuron a feed-forward network with one hidden layer and a sigmoid output, applied to the event type embedding; its weight matrices and bias vectors are parameters to learn.
Considering the multi-scale sampling rates of the events, we extend the event filter with a time factor proposed in Phased LSTM [Neil, Pfeiffer, and Liu 2016], parameterized by three learned parameters: τ, the real-time period of the gate; s, the phase shift; and r_on, the ratio of the open phase to the full period. The phase gate is then a periodic piecewise-linear function of time defined by these three parameters.
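A minimal sketch of this periodic time factor, following the Phased LSTM formulation: the gate ramps up over the first half of the open phase, ramps down over the second half, and leaks a small constant fraction alpha when closed. The parameter values in the asserts are illustrative.

```python
# Phased-LSTM-style time gate with period tau, shift s, and open ratio r_on.
def phase_gate(t, tau, s, r_on, alpha=1e-3):
    phi = ((t - s) % tau) / tau          # phase of time t within the cycle, in [0, 1)
    if phi < 0.5 * r_on:
        return 2.0 * phi / r_on          # rising half of the open phase
    if phi < r_on:
        return 2.0 - 2.0 * phi / r_on    # falling half of the open phase
    return alpha * phi                   # closed phase: small leak

assert phase_gate(2.5, tau=10.0, s=0.0, r_on=0.5) == 1.0   # middle of open phase
assert phase_gate(9.0, tau=10.0, s=0.0, r_on=0.5) < 0.01   # closed phase
```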
Different from traditional RNNs for single sequential data and even sparser RNN variants [Koutnik et al. 2014], updates in HE-LSTM can be performed at irregularly sampled time points for different event types. This allows the RNN to learn the multi-scale rhythm of related events and to work with asynchronously sampled heterogeneous temporal event data. We use the shorthand notation c_j for the cell state at time t_j (analogously for the other gates and units), and let c_{j−1} denote the state at the previous update time t_{j−1}. We can then rewrite the regular LSTM cell update equations for c_j and h_j (from Eq. 5 and Eq. 7), using proposed cell updates ĉ_j and ĥ_j mediated by the event gate k_j:

c_j = k_j ⊙ ĉ_j + (1 − k_j) ⊙ c_{j−1}
h_j = k_j ⊙ ĥ_j + (1 − k_j) ⊙ h_{j−1}
The HE-LSTM formulation ensures the flexible allocation and retention of information for each event cluster. Each neuron of the memory cell and hidden layer of the HE-LSTM can be updated only during the open periods of its event gate. In other words, only the records of a certain cluster of events can flow into a given neuron in its own phase. This is because the event filter, one factor of the event gate, can be seen as a binary classifier that chooses the cluster of event types each neuron is responsible for. Besides, a neuron maintains a perfect memory during its closed phase, i.e., c_j = c_{j−1} while the gate is closed. Thus, other neurons tracing other events can directly use the information of this cluster of events even if they are far away from each other in terms of sequence index. Because of this allocation mechanism, HE-LSTM has a more diverse and longer memory for modeling the dependency among multiple events.
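The interaction of the two factors can be sketched as follows: the event filter scores how strongly the current event type matches a neuron's cluster, the phase gate supplies the rhythm, and their product mediates the cell update. The weights and parameter values are illustrative, not the model's learned values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def phase_gate(t, tau, s, r_on, alpha=1e-3):
    phi = ((t - s) % tau) / tau
    if phi < 0.5 * r_on:
        return 2.0 * phi / r_on
    if phi < r_on:
        return 2.0 - 2.0 * phi / r_on
    return alpha * phi

# Per-neuron event gate: event filter (a sigmoid score of how well this
# neuron's event cluster matches the current event) times the phase gate.
def event_gate(filter_logit, t, tau, s, r_on):
    return sigmoid(filter_logit) * phase_gate(t, tau, s, r_on)

# Gated update: memory is nearly frozen while the gate is closed.
def gated_update(c_prev, c_proposed, k):
    return k * c_proposed + (1.0 - k) * c_prev

k_closed = event_gate(3.0, t=9.0, tau=10.0, s=0.0, r_on=0.5)
assert gated_update(0.0, 1.0, k_closed) < 0.01   # closed phase: memory kept
k_open = event_gate(3.0, t=2.5, tau=10.0, s=0.0, r_on=0.5)
assert gated_update(0.0, 1.0, k_open) > 0.9      # open phase, matching event
```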
We use a sigmoid layer to predict the label from the learned representation vector of the sequence at the given decision times, where the weight vector and bias are parameters to learn.
We use the cross-entropy between the prediction and the true label of each sample as the classification loss.
We can sum up the losses of all the samples in one mini-batch to get the total loss for back propagation.
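The output layer and per-sample loss can be sketched as follows (the weights are illustrative); the mini-batch loss is just the sum of the per-sample terms.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Sigmoid output layer on the final representation h.
def predict(h, w, b):
    return sigmoid(sum(wi * hi for wi, hi in zip(w, h)) + b)

# Binary cross-entropy for one sample; clipping avoids log(0).
def cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(y_pred) + (1.0 - y_true) * math.log(1.0 - y_pred))

y_hat = predict([0.2, -0.4], [1.0, 1.0], 0.2)    # logit 0.0 -> probability 0.5
loss = cross_entropy(1.0, y_hat)
assert abs(y_hat - 0.5) < 1e-9
assert abs(loss - math.log(2.0)) < 1e-9

# Mini-batch loss: sum of per-sample losses.
batch = [(1.0, 0.9), (0.0, 0.2)]
total = sum(cross_entropy(y, p) for y, p in batch)
assert total > 0.0
```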
The source code of MIMIC-III EHR data preprocessing and the proposed model is available and can be found at https://github.com/pkusjh/HELSTM.
We set up two data sets for evaluation of the models from one real clinical data source. MIMIC-III [Johnson et al.2016](Medical Information Mart for Intensive Care III) is a large, freely-available database comprising de-identified health-related data relating to over forty thousand patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012.
We extract all kinds of events from the MIMIC-III database to obtain the initial event type set (18,192 event types in total). The statistics of the most frequent event types are listed in Table 1. By merging the heterogeneous events into triple sequences, we obtain a set of clinical event sequences. We drop the sparse event types whose total frequency is less than 2,500.
We extract episodes of patients, covering the 24 hours before the occurrence time of each endpoint, from these event sequences as samples, with an upper bound of 1,000 records per sample. All the resulting samples are labeled according to the target endpoint outcome in each task.
| event sources    | e.g. event types                                  | # of types |
| lab test         | WHITE BLOOD CELLS                                 |            |
| vital signal     | Heart Rate                                        | 385        |
| drug input       | 0.9% Normal Saline, gastric retentive oral dosage | 60         |
| clinical symptom | Ectopy Type                                       | 2382       |
The statistics of the final clinical event sequences in the two datasets are summarized in Table 2.
| Dataset  | # of samples  | # of events | Avg timespan |
| death    | 24301 (8%)    | 20290879    | 3d 15h 58m   |
| lab test | 784583 (11%)  | 41006177    | 192d 22h 45m |
Each dataset is split into three parts with fixed proportions: training set (70%), validation set (10%), and evaluation set (20%). The validation set is used to select hyper-parameters of the proposed and comparison models and to conduct early stopping during training; its samples may differ across runs because of cross-validation. The evaluation set, whose details are hidden from us during training and parameter selection, is used only to compute and report the evaluation metrics for comparison.
| Method                                 | AUC (death)     | AP (death)      | AUC (lab test)  | AP (lab test)   |
| Independent LSTM                       | 0.8771 ± 0.0005 | 0.5573 ± 0.0006 | 0.7196 ± 0.0006 | 0.2969 ± 0.0008 |
| Independent LSTM (shared weight)       | 0.8064 ± 0.0005 | 0.5301 ± 0.0006 | 0.5308 ± 0.0005 | 0.1098 ± 0.0005 |
| Phased LSTM                            | 0.8474 ± 0.0005 | 0.4900 ± 0.0075 | 0.7722 ± 0.0007 | 0.3575 ± 0.0026 |
| Clock-work RNN                         | 0.8400 ± 0.0001 | 0.7181 ± 0.0003 | 0.6516 ± 0.0002 | 0.2208 ± 0.0003 |
| RETAIN                                 | 0.8967 ± 0.0011 | 0.5808 ± 0.0114 | 0.7325 ± 0.0022 | 0.3096 ± 0.0052 |
| LSTM + event embedding & attr encoding | 0.9466 ± 0.0002 | 0.7445 ± 0.0007 | 0.7231 ± 0.0028 | 0.3021 ± 0.0014 |
| HE-LSTM                                | 0.9516 ± 0.0003 | 0.7687 ± 0.0011 | 0.7987 ± 0.0008 | 0.3914 ± 0.0013 |
We compare HE-LSTM to the following methods.
Independent LSTM We use an LSTM to model each homogeneous event sequence independently and average the resulting representations into a logistic regression layer. Because the computational cost of thousands of independent LSTMs exceeds our budget, we select 25 important events as was done in prior work [Alaa, Hu, and van der Schaar 2017].
Independent LSTM (shared weight) This model is the same as the previous one, except that the weights of each single LSTM are shared and all events are used as input to the model.
RETAIN RETAIN [Choi et al. 2016] mimics physician practice by modeling the EHR data in reverse time order; its two-level RNN generates attention variables over the sequential data and provides interpretation of the predictions.
LSTM + event embedding & attr encoding We use the event embedding described in the first part of the proposed method section as the input of a traditional LSTM. Logistic regression is applied to the top hidden layer.
Clock-work RNN Clockwork RNN [Koutnik et al. 2014], as described in the related work section.
Phased LSTM Phased LSTM [Neil, Pfeiffer, and Liu 2016], as described in the related work section.
All the methods listed above produce prediction scores rather than binary labels, and the labels in the target prediction tasks are imbalanced, so metrics for binary labels such as accuracy are not suitable for measuring performance. Similar to prior work [Choi et al. 2016, Liu et al. 2015], we adopt the area under the ROC curve (Receiver Operating Characteristic) and the area under the Precision-Recall curve (PRC) for evaluation. Both reflect the overall quality of the predicted scores at each decision time according to their true labels.
Area under ROC Curve (AUC) compares the predicted scores with the true labels. AUC is robust to imbalanced positive/negative labels, making it appropriate for evaluating classification accuracy in the endpoint prediction tasks.
Average Precision (AP) Average precision [Turpin and Scholer 2006] emphasizes ranking positive samples higher. It is the average of the precisions computed at the rank of each positive sample in the sequence ranked by descending prediction score:

AP = ( Σ_{r=1}^{N} P(r) · rel(r) ) / (number of positive samples)

where r is the rank, N the total number of samples, rel(r) an indicator function that is 1 if the sample at rank r is positive, and P(r) the precision at cut-off rank r.
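A direct pure-Python implementation of this ranking definition (an illustrative sketch, not the authors' evaluation code):

```python
# Average precision: the mean of precision@r over the ranks r at which
# positive samples appear, with samples ranked by descending score.
def average_precision(y_true, scores):
    ranked = sorted(zip(scores, y_true), key=lambda p: -p[0])
    hits, total = 0, 0.0
    for r, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            total += hits / r        # precision at this cut-off rank
    return total / hits

ap = average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])
assert abs(ap - (1.0 + 2.0 / 3.0) / 2.0) < 1e-12   # positives at ranks 1 and 3
```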
This metric is also referred to geometrically as the area under the Precision-Recall curve.
Cross Entropy measures the model loss on the test set; it is calculated by Eq. (16).
| Methods | Phase gate | Event filter | Event gate |
Table 3 shows the area under the ROC curve and the AP of different methods on the death and lab test datasets. From the results in Table 3, we draw the following conclusions:
Firstly, models considering the dependency among correlated event types outperform all the independent sequential models, and the proposed HE-LSTM achieves the best performance. For example, on the lab test prediction task, RETAIN, LSTM, and HE-LSTM improve the AP by around 4.3%, 2.4%, and 32.1%, respectively, compared to the better of the two "independent LSTM" models (the one without weight sharing across the independent LSTMs). Similar results are observed in the other experiments and metrics. Furthermore, our model achieves the highest performance among these heterogeneous sequential models. For example, on the lab test prediction task, HE-LSTM improves the AP by 26.2% and 29.4% compared to RETAIN and LSTM, and the improvements in AUC are 9.0% and 10.4%, respectively. We conclude that the dependency information of correlated clinical temporal events is useful for endpoint prediction, and that learning joint representations models the temporal dependency of different events in EHR data more effectively than simple independent sequential models.
Secondly, compared to densely updating recurrent neural networks, RNNs adaptive to the sampling rate patterns of events yield greater improvements in prediction performance. For example, Clockwork RNN improves the AP of death prediction by 29.0% and 33.9% compared to the two kinds of independent LSTMs, while the improvements in AUC and AP are 7.0% and 20.7% for Phased LSTM compared to the better independent LSTM on the lab test prediction task. We conclude that the multi-scale sampling rate pattern of events is effective for endpoint prediction, as it makes the model concentrate on the important events in different phases rather than treating all clinical events equally in the long sequence.
Thirdly, HE-LSTM achieves the best performance on all datasets and evaluation metrics, outperforming all sparsely updating recurrent neural networks and heterogeneous sequential models on each metric of the two datasets. Models solely utilizing multi-scale sampling patterns in event sequences, or models that straightforwardly merge different types of events, are not the best choice for clinical endpoint prediction on EHR data. Taking the death prediction results as an example, HE-LSTM improves the AUC and AP by 12.4% and 7.0%, respectively, compared to the best of the sparsely updating methods without the event type embedding and attribute encoding modules. The average improvements of HE-LSTM over the heterogeneous sequential models without event gates are 3.4% and 30.3% in terms of AUC and AP, respectively. We conclude that the proposed HE-LSTM effectively improves performance through the joint effects of tracing the temporal dependency of heterogeneous events and adaptively fitting their multi-scale sampling patterns.
To evaluate the effect of the components in the event gate, we replace the event gate in Eq. 10 with its individual factors, namely the phase gate and the event filter, while keeping the other parts of the model identical. The results on the two datasets are listed in Table 4, including the AUC, AP, and cross entropy on the test data, as well as the values of the three metrics at the end of the first training epoch.
The event filter mainly helps to improve performance on the clinical endpoint prediction tasks by modeling the dependency of heterogeneous events. Both the event gate and the event filter achieve good performance in all metrics on both datasets when training is finished. For example, the event gate and the event filter improve the AUC of death prediction by 0.5% and 0.5% compared to the phase gate, while the improvements in AP are 2.8% and 2.9%, and the improvements in entropy are 4.9% and 5.2%.
The phase gate helps to achieve fast convergence in the early stage of training by fitting the multi-scale sampling rates of different events. HE-LSTM and the model with only the phase gate attain much higher performance in all metrics on both datasets in the first epoch of training. Taking the lab test task as an example, the phase gate and the event gate improve the first-epoch AUC by 4.6% and 7.9% compared to the event filter, while the improvements in AP are 14.5% and 23.3%, and the improvements in entropy are 2.2% and 4.3%.
From these comparisons, we conclude that the event filter and the phase gate collaborate in modeling the dependency among heterogeneous temporal events with multi-scale sampling rates, which leads to accurate and efficient performance on the clinical endpoint prediction task.
To evaluate the ability of our proposed architecture and the baselines to model the temporal dependency of heterogeneous temporal events, we feed the trained models input sequences of various lengths, ranging from 20 to 1000 events, from the test set. From Figure 3, we can draw the following conclusions:
Firstly, temporal information is effective for the endpoint prediction tasks. The performance of most models improves as the input sequence length increases, rising especially sharply when the length is below 200.
Secondly, HE-LSTM is better at handling the dependency of heterogeneous temporal events than the other models. When the input sequence is short, the performances of the different models are similar. The reason is that, for short inputs, combining independent representations of single events differs little from the joint representation of heterogeneous events in HE-LSTM. But as the input sequence grows longer, the performance of our model steadily increases from 0.7551 to 0.7687 in terms of AP and from 0.9482 to 0.9516 in terms of AUC, while the performance of the other models remains almost unchanged at around 0.9465 AUC and 0.7434 AP.
To explore the effect of the event filter in the event gate when modeling heterogeneous sequential EHR data, we compare the performance of the proposed HE-LSTM with a reduced HE-LSTM from which the event filter factor in the event gate is removed. We use different initial periods of τ during training on the death prediction task, drawing the period uniformly in the exponential domain and comparing four sampling intervals for each model. The results in Figure 4 show that the initialization of τ affects the performance of both models, but HE-LSTM is more robust to the initialization. For example, the improvements of HE-LSTM over the model without the event filter are 4.1%, 4.1%, 2.8%, and 6.6% on average. We conclude that, with the help of the event filter, the event gate is more adaptive to the multi-scale sampling rates of events in the heterogeneous temporal sequence.
In this paper, we propose a novel HE-LSTM model to learn joint representations of heterogeneous temporal events for clinical endpoint prediction. Our model can adaptively fit the multi-scaled sampling rates of events in the heterogeneous event sequence. By tracing the temporal information of different kinds of events in the long sequence, the temporal dependency of different types of events can be captured in our learned representations. Experimental results with real-world clinical data on the tasks of predicting death and abnormal lab tests prove the effectiveness of our proposed approach over competitive baselines.
This paper is partially supported by the National Natural Science Foundation of China (NSFC Grant Nos.91646202, 61772039 and 61472006).