MemGCN
With the arrival of the big data era, more and more data are becoming readily available in various real-world applications, and those data are usually highly heterogeneous. Taking computational medicine as an example, we have both Electronic Health Records (EHR) and medical images for each patient. For complicated diseases such as Parkinson's and Alzheimer's, both EHR and neuroimaging information are very important for disease understanding because they contain complementary aspects of the disease. However, EHR and neuroimages are completely different, and so far existing research has mainly focused on one of them. In this paper, we propose a framework, Memory-Based Graph Convolution Network (MemGCN), to perform integrative analysis with such multimodal data. Specifically, GCN is used to extract useful information from the patients' neuroimages. The information contained in the patient EHRs before the acquisition of each brain image, because of its sequential nature, is captured by a memory network. The information contained in each brain image is combined with the information read out from the memory network to infer the disease state at the image acquisition timestamp. To further enhance the analytical power of MemGCN, we also design a multi-hop strategy that allows multiple readings of and updates to the memory at each iteration. We conduct experiments using patient data from the Parkinson's Progression Markers Initiative (PPMI), with the task of classifying Parkinson's Disease (PD) cases versus controls. We demonstrate that superior classification performance can be achieved with our proposed framework compared with existing approaches involving a single type of data.
With the arrival of the big data era, more and more data are becoming readily available in various real-world applications. Those data are like gold mines, and data mining technologies are like tools that can dig the gold out from those mines. Taking medicine as an example, we nowadays have a large amount of medical data of different types, from molecular to cellular to clinical and even environmental. As has been envisioned in [1], one key aspect of precision medicine, which aims at recommending the right treatment to the right patient at the right time, is to integrate those multi-scale data from different sources to obtain a comprehensive understanding of a health condition.
Many data mining approaches have been proposed for analyzing medical data in recent years. For example, Ghassemi et al. [2] modeled mortality risk in the intensive care unit with latent variable models. Caruana et al. [3] utilized a generalized additive model to predict the risk of pneumonia and hospital readmission. Zhou et al. [4] developed a matrix factorization approach for predictive modeling of disease onset risk based on patients' Electronic Health Records (EHR) data. Tensor modeling techniques have also been leveraged in electronic phenotyping [5, 6] and clinical natural language processing [7]. More recently, deep learning has emerged as a powerful data mining approach that can disentangle the complex interactions among data features and achieve superior performance. Because of the complex nature of medical problems, researchers have also been exploring the applicability of deep learning models to medical problems using medical images [8, 9], EHRs [10, 4], physiological signals [11, 12], etc., and have obtained promising results.
Despite this initial success, most existing work on data mining for medicine has focused on a single type of data (e.g., images or EHRs). However, different data sources typically contain complementary information about patients from different aspects. For example, concerning neurological diseases, we can get general clinical information about patients, such as diagnoses, medications, and labs, from EHRs, while we can obtain specific biomarkers regarding white matter, gray matter, and the changes of different Regions-of-Interest (ROI) from brain images. Integrative analysis of both EHRs and neuroimages can help us understand the disease in a better and more comprehensive way. In reality, such integrative analysis is challenging for the following reasons.
Heterogeneity. The nature of patient EHR and neuroimages are completely different: the EHR for each patient can be regarded as a temporal event sequence, where at each timestamp multiple medical events (e.g., diagnosis, medications, lab tests, etc.) can appear; while each neuroimage is essentially a collection of pixels. Therefore the ways to process these two types of data could be very different.
Sequentiality. EHR data are sequential and a specific brain image is static. The brain status reflected in a certain brain image can be related to the EHR of the corresponding patient before the acquisition of the image. Effective integration of such heterogeneous information into a unified analytics pipeline is a challenging task.
With the above considerations, we propose a novel Memory-based Graph Convolutional Network (MemGCN) to perform integrative analysis with both patient EHRs and neuroimages. As its name suggests, there are two major components in MemGCN.
Graph Convolutional Network (GCN). GCN generalizes the convolution operation to graph-structured data, which allows feature learning directly on the brain connectivity networks constructed from neuroimages.
Memory Network [15]. Memory network is a type of model that connects a regular learning process with a memory module, which is usually represented as a matrix that memorizes the historical status of the system. At each iteration, some useful information is extracted from the memory to help the current inference, while at the same time the memory unit is updated.
In our framework, the GCN module extracts features from the human brain networks constructed from the brain images. The longitudinal patient EHRs are stored in the memory network to encode the historical clinical information about the patient before the acquisition of the image. The information extracted from the memory network is combined with the features from GCN to discriminate PD cases from controls. We conduct experiments on real-world data from patients in the Parkinson's Progression Markers Initiative (PPMI) [16] and obtain superior performance compared with conventional methodologies.
The rest of this paper is organized as follows. Section II presents the technical details of our framework. The experimental results are introduced in Section III, followed by the related work in Section IV and conclusions in Section V.
As illustrated in Fig. 1, the proposed MemGCN is a matching network designed for metric learning on not only brain images but also clinical records. The preprocessed brain connectivity graphs are transformed by graph convolutional networks into representations, while the memory mechanism is in charge of iteratively (over multiple hops) reading the clinical sequences and choosing what to retrieve from the memories in order to augment the representations learned by graph convolution. For the purpose of metric learning, inner product similarity and bilinear similarity are separately introduced in the matching layer. The output component is composed of a fully connected layer and a softmax for relationship classification of acquisition pairs. Altogether, MemGCN is a matching network that embeds multi-hop memory-augmented graph convolutions and can be trained in an end-to-end fashion with stochastic optimization.
The brain connectivity graph is characterized by defining its ROI nodes and the interactions among them. Since graph-structured data are non-Euclidean, it is not straightforward to use a standard convolution, which performs impressively on grids. Hence, we resort to geometric deep learning approaches [17, 18] to deal with the problem of learning features on brain connectivity networks.
In general, let $\mathcal{G} = (\mathcal{V}, \mathcal{E}, W)$ be an undirected weighted graph with $N$ vertices, where $W \in \mathbb{R}^{N \times N}$ is a symmetric adjacency matrix satisfying $W_{ij} > 0$ if $(i, j) \in \mathcal{E}$ and $W_{ij} = 0$ if $(i, j) \notin \mathcal{E}$. According to spectral graph theory [19], the normalized graph Laplacian matrix can be computed as $L = I_N - D^{-1/2} W D^{-1/2}$, where $D$ is the diagonal degree matrix with $D_{ii} = \sum_j W_{ij}$, and $I_N$ is the identity matrix. Note that $L$ is a positive-semidefinite matrix and its eigendecomposition can be written as $L = U \Lambda U^\top$, where the columns of $U$ are the orthonormal eigenvectors and $\Lambda$ is the diagonal matrix of non-negative eigenvalues.

In our scenario, the vertices of graph $\mathcal{G}$ correspond to ROIs. Define a brain connectivity acquisition as an input signal $x$, where $x_i$ is the feature associated with vertex $i$. The convolution operation is conducted in the Fourier domain instead of the vertex domain. Considering two signals $x$ and $y$, it can be shown that

$$x \ast_{\mathcal{G}} y = U \big( (U^\top x) \odot (U^\top y) \big), \qquad (1)$$

where $\odot$ is the element-wise Hadamard product and $U^\top x$ defines the Graph Fourier Transform of $x$. Filtering a signal $x$ can therefore be written as $g_\theta(L)\, x = U\, g_\theta(\Lambda)\, U^\top x$, where the function $g_\theta$ can be regarded as a learnable spectral filter. Previous studies [20, 21, 22, 13] on geometric deep learning have proposed a variety of filter functions to achieve desirable properties such as spatial localization and low computational complexity. The Chebyshev spectral convolution network (ChebNet) [22] is utilized in our model. Before introducing representation learning by ChebNet, we first give the details of how to construct a graph and build its edges over the ROI vertices for the collection of brain image acquisitions.

Spatial Graph Construction
The brain connectivity graph can be represented as a square matrix whose numerical values indicate the connectivity strength of ROI pairs. However, the region coordinates in anatomical space provide crucial spatial relations between ROIs which have not been taken into account in conventional works in the domain [23]. Motivated by [24], which applied graph convolution to a functional Magnetic Resonance Imaging (fMRI) task, a spatial graph based on 3-dimensional coordinates is constructed for our model. The coordinates are associated with a predefined number of ROIs and share a common coordinate system.

In detail, the xyz-coordinates of the region center, denoted $c_i$, represent the spatial location of the corresponding ROI $i$; the global ROI coordinates are computed by average aggregation over the acquisitions. The edges are then constructed by a Gaussian function based on $k$-Nearest Neighbor similarity,

$$A_{ij} = \exp\!\left( -\frac{\| c_i - c_j \|_2^2}{2\sigma^2} \right) \quad \text{for } j \in \mathcal{N}_k(i), \qquad (2)$$

where $A_{ij}$ denotes the edge weight between vertex $i$ and vertex $j$, and $\mathcal{N}_k(i)$ and $\mathcal{N}_k(j)$ denote the neighbor sets of $i$ and $j$, respectively. In practice, we set $\mathcal{G}$ as a $k$-Nearest Neighbor graph. The spatial information of the ROIs is thereby formulated into our model in terms of the graph structure.
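The construction above can be sketched in NumPy. This is a minimal illustration, not the paper's exact pipeline: the values of $k$ and the Gaussian bandwidth $\sigma$ below are illustrative assumptions (the paper does not state them in this passage), and a dense adjacency is used for clarity.

```python
import numpy as np

def knn_gaussian_graph(coords, k=10, sigma=None):
    """Build a symmetric k-NN adjacency with Gaussian edge weights (cf. Eq. 2).

    coords: (N, 3) array of mean xyz-coordinates, one row per ROI.
    Returns a dense (N, N) adjacency matrix.
    """
    n = coords.shape[0]
    # Pairwise squared Euclidean distances between ROI centers.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = np.sqrt(d2[d2 > 0].mean())  # heuristic bandwidth (assumption)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    # Keep only each vertex's k nearest neighbours ...
    keep = np.zeros_like(w, dtype=bool)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]   # column 0 is the vertex itself
    rows = np.repeat(np.arange(n), k)
    keep[rows, nn.ravel()] = True
    # ... and symmetrize: an edge survives if either endpoint keeps it.
    keep = keep | keep.T
    return np.where(keep, w, 0.0)
```

The symmetrization step keeps the graph undirected, which the spectral machinery of the previous subsection requires.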
ChebNet
With the constructed graph $\mathcal{G}$, its graph Laplacian matrix $L$ can be obtained. Now our goal is to learn a high-level representation for each image acquisition by feeding its input signal, as well as the shared $L$, into the neural network. In a general sense, this can capture the local traits of each individual brain image and the global traits of the population of subjects.

To address the issues of localization and computational efficiency for convolution filters on graphs, ChebNet exploits a series of polynomial filters represented in the Chebyshev basis,

$$g_\theta(\tilde{L}) = \sum_{k=0}^{K-1} \theta_k T_k(\tilde{L}), \qquad (3)$$

where $\tilde{L} = 2L/\lambda_{\max} - I_N$ is the rescaled Laplacian, whose eigenvalues lie in the interval $[-1, 1]$; $\theta \in \mathbb{R}^K$ is the $K$-dimensional vector of Chebyshev coefficients parameterizing the filter; and $T_k$ denotes the Chebyshev polynomial, defined in a recursive manner by $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$ with $T_0 = 1$ and $T_1 = x$.

To explicitly express filter learning in the graph convolution, without loss of generality, let $j$ denote the index of a feature map in layer $l+1$; the $j$th feature map of sample $s$ in that layer is given by

$$x_{s,j}^{(l+1)} = \sum_{i=1}^{F_l} g_{\theta_{i,j}}(\tilde{L})\, x_{s,i}^{(l)}, \qquad (4)$$

yielding $F_l \times F_{l+1}$ vectors of trainable Chebyshev coefficients $\theta_{i,j} \in \mathbb{R}^K$. Here $x_{s,i}^{(l)}$ denotes the $i$th feature map of the $l$th layer. For the input layer, $x_{s,i}^{(0)}$ can simply be set as the $i$th row vector of the brain connectivity matrix. Given a sample $s$, its output of graph convolution can be collected into a feature matrix $X_s$, where each row represents the learned high-level feature vector of an ROI.
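Eqs. (3)-(4) can be sketched as one dense NumPy layer. This is a didactic sketch under the assumption $\lambda_{\max} \approx 2$ (an upper bound for the normalized Laplacian); a practical implementation would use sparse matrices.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency A."""
    d = A.sum(1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def cheb_conv(X, L, theta, lmax=2.0):
    """One Chebyshev graph convolution layer (cf. Eqs. 3-4).

    X:     (N, F_in) input signals, one column per input feature map.
    theta: (K, F_in, F_out) trainable Chebyshev coefficients.
    Returns the (N, F_out) output feature maps.
    """
    K = theta.shape[0]
    L_tilde = (2.0 / lmax) * L - np.eye(len(L))  # rescale eigenvalues to [-1, 1]
    Tx = [X, L_tilde @ X]                        # T_0(L~)X and T_1(L~)X
    for k in range(2, K):
        # Chebyshev recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        Tx.append(2.0 * L_tilde @ Tx[-1] - Tx[-2])
    # Sum over polynomial orders while mixing input/output channels.
    return sum(Tx[k] @ theta[k] for k in range(K))
```

Because each $T_k(\tilde{L})$ only involves $k$-hop neighborhoods, the filter is localized on the graph, which is the property the text emphasizes.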
The key contribution of MemGCN is incorporating sequential records into the representation learning of brain connectivity in terms of memories. Our model builds on Memory Networks [15, 25], which have a variety of successful uses in natural language processing tasks [26, 27], including complex reasoning and question answering. A memory can be viewed as an array of slots that encodes both long-term and short-term context. By pushing the clinical sequences into the memories, continuous representations of this external information are processed together with the brain graphs so that a more comprehensive diagnosis can be made. Inspired by this observation, the memory-augmented graph convolution is designed.
We start by introducing MemGCN with the single-hop operation, and then show the architecture with hops stacked in multiple steps. Concretely, the memory augmentation can be divided into two procedures: reading and retrieving (see Fig. 1).
Clinical Sequences Reading
Suppose there is a discrete input clinical sequence $\{u_1, \dots, u_T\}$, where $t$ indexes the clinical record extracted at a certain timestamp. In a memory network, the sequence needs to be transformed into continuous vectors and stored in the memory. We use a fixed number of timestamps $T$ to define the memory size. The dimension of the continuous space is denoted as $d$, while the dimension of the original clinical features is denoted as $D$. To embed the sequential vectors $u_t$, an embedding matrix $A \in \mathbb{R}^{d \times D}$ is used; that is, $m_t = A u_t$. The matrix $M = [m_1, \dots, m_T]$ can be regarded as the new input memory representation.
Meanwhile, similar to the method in [25], an output memory that generates continuous vectors is involved. The corresponding embeddings $c_t = C u_t$ are obtained from the same sequence, where $C \in \mathbb{R}^{d \times D}$ is also an embedding matrix. Different from other computational forms of attentive weights [28], the two memories in our model are maintained by separate sequence-reading procedures, which are responsible for memory access and integration, respectively, in the retrieving procedure.
Memory Representation Retrieving
To retrieve memory vectors from the embedding space, we first need to decide which vectors to choose. Not all records in a sequence contribute equally to the representation learning for brain graphs. Hence, attentive weights are adopted here to make a soft combination of all memory vectors. Mathematically, the weights are computed by a softmax over the inner products of the input memory vectors $m_t$ and the learned ROI vectors $x_i$,

$$p_{ti} = \mathrm{softmax}(m_t^\top x_i) = \frac{\exp(m_t^\top x_i)}{\sum_{t'} \exp(m_{t'}^\top x_i)}. \qquad (5)$$

Once the informative memory vectors are indicated by the weights $p_{ti}$, the correspondence strength for attention is revealed. As Fig. 2 illustrates, our attention is 2-dimensional: it describes similarities between the representations generated from the two modality sources. To make this feasible, we assume that both the memory and ROI vectors lie in embedding spaces of the same dimension. Next, we represent the contextual information by aggregating the weights and the output memory vectors. Specifically,

$$o_i = \sum_{t=1}^{T} p_{ti}\, c_t, \qquad (6)$$

where $o_i$ is a row vector of the context matrix $O$ and serves as a new representation for ROI $i$.
To integrate the context vectors with the feature maps of GCN, an element-wise sum is employed, $\tilde{x}_i = x_i + o_i$. The intuition of using the sum operator derives from neural network architectures [29] in which the learned features in the next layer benefit from both components of the network.
The entire operations in a single hop are shown in Fig. 2, which is regarded as one layer (hop) of our model MemGCN. The output feature matrix of the single hop is fed into the next hop, and again as an input of the next GCN.
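A single hop can be sketched in a few lines of NumPy. This is a minimal sketch of our reading of Eqs. (5)-(6): the softmax is taken over timestamps for each ROI, an interpretation rather than a verbatim reproduction of the paper's code.

```python
import numpy as np

def softmax_over_time(Z):
    """Column-wise softmax: normalize over the timestamp axis (axis 0)."""
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def memory_hop(X, U, A_emb, C_emb):
    """One memory-augmented hop (cf. Eqs. 5-6 plus the element-wise sum).

    X:     (N, d) ROI feature maps from graph convolution.
    U:     (T, D) clinical record sequence, one row per timestamp.
    A_emb: (D, d) input-memory embedding; C_emb: (D, d) output-memory embedding.
    Returns the augmented (N, d) feature maps.
    """
    M = U @ A_emb                        # input memory m_t (reading)
    C = U @ C_emb                        # output memory c_t
    P = softmax_over_time(M @ X.T)       # (T, N) attention weights (Eq. 5)
    O = P.T @ C                          # (N, d) context vectors o_i (Eq. 6)
    return X + O                         # element-wise sum integration
```

Note that the two embedding matrices play different roles: `A_emb` decides *where* to attend, while `C_emb` decides *what* is read out.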
Basically, the memory mechanism allows the network to read the input sequences multiple times, updating the memory contents at each step before making a final output. Compared to single-step attention [28], contextual information from the memory is collected iteratively and cumulatively for feature map learning. In particular, suppose there are $K$ layers of memories for the hop operations; the output feature map at the $k$th hop can be rewritten as

$$x_i^{(k+1)} = H x_i^{(k)} + o_i^{(k)}, \qquad (7)$$

where $H$ is a linear mapping that benefits the iterative updating of $x_i^{(k)}$. Similarly, the computations of the weights and context vectors in Eq. (5) and Eq. (6) are rewritten as

$$p_{ti}^{(k)} = \mathrm{softmax}\big( (m_t^{(k)})^\top x_i^{(k)} \big), \qquad (8)$$

$$o_i^{(k)} = \sum_{t=1}^{T} p_{ti}^{(k)}\, c_t^{(k)}. \qquad (9)$$

In addition, a layer-wise updating strategy [25] for the input and output memory vectors across hops is used, which keeps the same embeddings across all hops.
Notice that the contextual states of the first hop are determined by the two given modalities and are then accumulated into the generation of the contextual states in the following hops. Consequently, the final output feature maps rely on the conditional contextual states as well as the previous feature maps, where the first feature map is generated directly from the brain connectivity matrix through one layer of graph convolution. The underlying rationale of the multi-hop design is that it is easier for the model to learn what has already been taken into account in previous hops and to capture a fine-grained attendance over memories in the current hop.
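The stacked hops of Eqs. (7)-(9) can be sketched as a short loop. This sketch assumes layer-wise weight tying (the same `A_emb`/`C_emb` shared across hops, as the text describes) and omits the interleaved graph convolutions for brevity.

```python
import numpy as np

def softmax_over_time(Z):
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def multi_hop(X0, U, A_emb, C_emb, H, hops=3):
    """Stacked memory hops (cf. Eqs. 7-9) with layer-wise weight tying.

    X0:    (N, d) feature maps from the first graph convolution.
    U:     (T, D) clinical sequence; A_emb, C_emb: (D, d) shared embeddings.
    H:     (d, d) linear mapping applied to the previous hop's features.
    """
    M, C = U @ A_emb, U @ C_emb          # memories tied across all hops
    X = X0
    for _ in range(hops):
        P = softmax_over_time(M @ X.T)   # hop-specific attention p^(k) (Eq. 8)
        O = P.T @ C                      # hop-specific context o^(k)   (Eq. 9)
        X = X @ H + O                    # Eq. 7: x^(k+1) = H x^(k) + o^(k)
    return X
```

Because the attention at hop $k$ is computed against the already-augmented features $x^{(k)}$, each hop can attend to different timestamps than the previous one, which is the "fine-grained attendance" the text refers to.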
Metric learning for brain connectivity graphs with multiple layers normally involves several nonlinearities so that the complex underlying data structure can be captured. To train such a neural network, a large amount of training data is necessary to prevent overfitting [30]. Although large-scale labeled datasets are often limited in clinical practice, metric learning between sample pairs allows us to increase the training data significantly because of the combinatorial number of possible sample pairs [31]. In our case, taking a brain image acquisition as a sample, the goal of metric learning is to learn discriminative properties that distinguish whether or not a sample pair belongs to the same diagnosis class.
The basic hypothesis is that if two samples share the same diagnosis result, the matching score between their high-level feature maps should be high. Here, two sorts of matching functions are explored to calculate the similarities between pairs of acquisitions.
Inner Product Matching
Let $X$ and $Y$ denote the feature maps learned by MemGCN from the final hop for any pair of initial brain connectivity matrices, with rows $x_i$ and $y_i$ associated with ROI vertex $i$. The Euclidean distance computed in the matching layer is a vector with each dimension corresponding to an ROI,

$$d_i = \| x_i - y_i \|_2. \qquad (10)$$

Thus, the vector of $d_i$ would be the output of the matching layer. Instead of computing the distance directly, the feature maps are normalized along the hidden-feature dimension and an inner product is then used to obtain a similarity vector,

$$s_i = \frac{x_i^\top y_i}{\|x_i\|_2\, \|y_i\|_2}, \qquad (11)$$

where $s_i$ is the inner-product similarity for the $i$th ROI; it is equivalent to the Euclidean distance if the vectors are normalized, since $\|x_i - y_i\|_2^2 = 2 - 2\, x_i^\top y_i$ for unit vectors.
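The per-ROI similarity of Eq. (11), and its equivalence to the Euclidean distance under normalization, can be verified with a small sketch (the `eps` guard is an implementation detail added here to avoid division by zero):

```python
import numpy as np

def inner_product_matching(X, Y, eps=1e-12):
    """Per-ROI normalized inner-product similarity (cf. Eq. 11).

    X, Y: (N, d) final feature maps of a pair of acquisitions.
    Returns an (N,) similarity vector, one score per ROI.
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + eps)
    return (Xn * Yn).sum(axis=1)
```

For unit-normalized rows, $\|x_i - y_i\|_2^2 = 2 - 2 s_i$, so ranking pairs by similarity or by distance is equivalent, which is why the model can optimize the similarity directly.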
Bilinear Matching
The matching function in Eq. (11) only considers the similarity of corresponding ROI vectors of a given pair of brain graphs; similarities across different ROIs are not modeled. To this end, a simple bilinear matching function [32] is used here. The matching score is defined as

$$s_{ij} = x_i^\top M\, y_j, \qquad (12)$$

where $s_{ij}$ is the similarity between ROI $i$ and ROI $j$ based on bilinear matching, and $M$ is a matrix parameterizing the matching between the paired feature maps. With the matching procedure in Eq. (12), the output of the matching layer is a matrix, with each element suggesting the strength of an ROI connection. It is worth noting that if the parameter matrix $M$ is an identity matrix, bilinear matching reduces to (unnormalized) inner product matching.
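Eq. (12) is a single matrix product; a minimal sketch, including the reduction to the inner product when $M = I$:

```python
import numpy as np

def bilinear_matching(X, Y, M):
    """Cross-ROI bilinear similarity matrix (cf. Eq. 12).

    X, Y: (N, d) feature maps of a pair; M: (d, d) trainable matrix.
    Returns an (N, N) matrix whose (i, j) entry scores ROI i against ROI j.
    """
    return X @ M @ Y.T
```

Usage note: `M` is learned jointly with the rest of the network, so the model can weight which hidden dimensions matter when comparing different ROIs.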
The output layer of MemGCN models the probability of each sample pair being matching or non-matching. The similarity representation from the matching layer is passed to a fully connected layer and a softmax layer for the eventual classification. For each pair, let the output of the fully connected layer be a feature vector $z$. We calculate the probability distribution over the binary classes by

$$\hat{p} = \mathrm{softmax}(W z), \qquad (13)$$

where $W$ is a trainable parameter.
We train our model using a regularized cross-entropy loss function. Let $\mathcal{D}$ be the training set of acquisition pairs, whose size is the number of total pairwise combinations of brain graphs; the number of acquisitions is much smaller than $|\mathcal{D}|$. The loss function we minimize is

$$\mathcal{L} = -\frac{1}{|\mathcal{D}|} \sum_{(p,q) \in \mathcal{D}} \big( \ell_{pq} \log \hat{p}_{pq} + (1 - \ell_{pq}) \log (1 - \hat{p}_{pq}) \big) + \beta \|\Theta\|_2^2, \qquad (14)$$

where $\ell_{pq}$ denotes the label for sample pair $(p, q)$ and $\Theta$ is the collection of trainable parameters. MemGCN is trained on machines with NVIDIA TESLA V100 GPUs using the Adam optimizer [33] with mini-batches.
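Eqs. (13)-(14) can be sketched together. This is a minimal NumPy illustration, with L2 regularization applied only to `W` for brevity (the paper regularizes the full parameter collection $\Theta$), and `beta` chosen arbitrarily:

```python
import numpy as np

def pair_loss(Z, labels, W, beta=1e-4):
    """Softmax output (cf. Eq. 13) and regularized cross-entropy (cf. Eq. 14).

    Z:      (B, f) feature vectors from the fully connected layer, one per pair.
    labels: (B,) binary labels (1 = matching, 0 = non-matching).
    W:      (f, 2) trainable softmax parameters.
    """
    logits = Z @ W
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Mean negative log-likelihood of the correct class for each pair.
    nll = -np.log(P[np.arange(len(labels)), labels] + 1e-12).mean()
    return nll + beta * (W ** 2).sum()                    # L2 penalty term
```

In training, this scalar would be minimized with Adam over mini-batches of pairs, as stated above.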
| Extra Modalities | Method | tensor-FACT Accuracy | tensor-FACT AUC | ODF-RK2 Accuracy | ODF-RK2 AUC | Hough Accuracy | Hough AUC |
|---|---|---|---|---|---|---|---|
| None | Raw Edges | 65.94±3.78 | 58.47±4.05 | 67.56±4.12 | 60.93±5.60 | 67.90±4.09 | 64.49±3.56 |
| None | PCA | 69.19±3.13 | 64.10±2.10 | 68.38±2.50 | 60.93±2.63 | 66.28±4.60 | 63.46±3.52 |
| None | FCN | 71.65±3.58 | 66.17±2.00 | 70.66±3.79 | 68.80±2.80 | 70.01±3.28 | 61.91±3.42 |
| None | FCN-2layer | 84.22±2.76 | 82.36±2.87 | 82.31±2.68 | 82.53±4.74 | 84.27±2.63 | 81.77±3.74 |
| None | GCN-inner | 93.69±2.15 | 92.67±4.94 | 93.23±2.63 | 93.04±5.26 | 92.80±2.51 | 93.90±5.48 |
| None | GCN-bilinear | 93.89±1.76 | 94.77±6.08 | 94.00±2.65 | 94.32±5.72 | 93.34±2.26 | 93.35±5.14 |
| Fusion | AttGCN | 93.62±2.99 | 94.25±5.88 | 94.76±3.31 | 94.33±5.23 | 94.01±1.94 | 94.74±5.35 |
| Fusion | AttLstmGCN | 94.70±2.35 | 94.38±5.41 | 94.89±2.71 | 94.87±4.49 | 94.64±2.02 | 94.80±5.51 |
| Fusion | MemGCN-inner | 95.43±2.22 | 96.42±6.36 | 95.54±2.98 | 96.59±6.44 | 95.48±2.34 | 96.49±6.41 |
| Fusion | MemGCN-bilinear | 95.47±2.25 | 96.48±6.40 | 95.87±2.56 | 96.84±6.36 | 95.64±2.00 | 96.74±6.51 |

Table I: Results for classifying matching vs. non-matching brain graphs on the test sets of tensor-FACT, ODF-RK2, and Hough in terms of Accuracy and AUC. Performances without and with extra modalities are shown. "Fusion" modality means clinical records of both motor and non-motor features (see Table III for the effect of hop number on the MemGCNs).

The data we used to evaluate MemGCN are obtained from the Parkinson's Progression Markers Initiative (PPMI) [16] study. PPMI is an ongoing PD study that has meticulously collected various potential PD progression markers for more than six years. Neuroimages and EHRs are considered as the two modalities in this work.
To obtain brain connectivity graphs, a series of preprocessing procedures is conducted. For the correction of head motion and eddy-current distortions, the FSL eddy-correct tool is used to align the raw data to the b0 image, and the gradient table is corrected accordingly. To remove non-brain tissue from the diffusion MRI, the Brain Extraction Tool (BET) from FSL [34] is used. To correct for echo-planar-induced (EPI) susceptibility artifacts, which can cause distortions at tissue-fluid interfaces, skull-stripped b0 images are linearly aligned and then elastically registered to their respective preprocessed structural MRIs using the Advanced Normalization Tools (ANTs, http://stnava.github.io/ANTs/) with the SyN nonlinear registration algorithm [35]. The resulting 3D deformation fields are then applied to the remaining diffusion-weighted volumes to generate the fully preprocessed diffusion MRI dataset for brain network reconstruction. In the meantime, ROIs are parcellated from T1-weighted structural MRI using FreeSurfer (https://surfer.nmr.mgh.harvard.edu).
Connectivity graphs computed by three whole-brain tractography methods [36] are used, covering a tensor-based deterministic approach (Fiber Assignment by Continuous Tracking [37]), an Orientation Distribution Function (ODF)-based deterministic approach (the 2nd-order Runge-Kutta, RK2 [38]), and a probabilistic approach (Hough voting [39]). The ROIs are then obtained, and we define the coordinates of each ROI as the mean coordinate over all voxels in the corresponding region (see Spatial Graph Construction in Section II-B for details). After preprocessing, we collect a dataset of DTI acquisitions consisting of brain graphs from Parkinson's Disease (PD) patients and Healthy Control (HC) subjects. The spatial graph we constructed has one vertex per ROI.

Additionally, sequential EHR records are aligned with the corresponding brain connectivity graphs. For each acquisition, a sequence of its associated input features is used for the external memories. Note that the sequences are chunked at the time points of neuroimaging acquisition, and we only use the subsequences before that time point to keep the experimental design sound. Because subjects provide their medical records at different frequencies, the number of timestamps varies across sequences; we set the sequence length according to the statistics of the PPMI study, and padding is utilized for sequences with fewer timestamps. The specific clinical assessments we study here are motor (MDS-UPDRS Parts II-III [40]) and non-motor (MDS-UPDRS Part I [40] and MoCA [41]) symptoms, which are crucial for evaluating the disease course of PD. The discrete clinical features are binarized to form the original clinical feature dimensions. Finally, an imputation strategy, Last Occurrence Carried Forward (LOCF), is adopted, since several entries are missing at some timestamps.
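The LOCF imputation plus fixed-length padding described above can be sketched as follows. This is an illustrative sketch, not the study's actual preprocessing code: the padding position (front) and the zero default for never-observed features are assumptions made here for concreteness.

```python
import numpy as np

def locf_and_pad(records, T, D):
    """Last Occurrence Carried Forward plus front-padding to fixed length T.

    records: list of (1, D) arrays (one per visit, time-ordered, may contain
             NaN); only visits *before* the image acquisition should be passed.
    Returns a (T, D) matrix ready to be stored in the memory.
    """
    seq = np.vstack(records).astype(float)[-T:]   # keep the T most recent visits
    last = np.zeros(D)                            # default 0 before any observation
    for t in range(len(seq)):
        obs = ~np.isnan(seq[t])
        last[obs] = seq[t, obs]                   # remember newest observed values
        seq[t] = np.where(obs, seq[t], last)      # carry them forward over NaNs
    out = np.zeros((T, D))                        # zero-pad short sequences
    out[T - len(seq):] = seq
    return out
```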
Implementation Details
To learn similarities between brain connectivity matrices, acquisitions in the same group (PD or HC) are labeled as matching pairs, while those from different groups are labeled as non-matching pairs; all pairwise combinations of acquisitions are enumerated to form the matching and non-matching sets.
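The pair construction is straightforward to sketch; the identifiers below are illustrative, not from the dataset:

```python
from itertools import combinations

def make_pairs(sample_ids, groups):
    """Enumerate all acquisition pairs and label them for metric learning.

    groups: dict mapping sample id -> 'PD' or 'HC'.
    Returns ((a, b), label) tuples; label 1 for same-group (matching) pairs.
    """
    return [((a, b), int(groups[a] == groups[b]))
            for a, b in combinations(sample_ids, 2)]
```

This quadratic blow-up in pairs is exactly the data-augmentation effect motivating the pairwise metric-learning setup of Section II.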
We selected hyperparameter values through random search [42]: the batch size, initial learning rate, L2-regularization weight, the order of Chebyshev polynomials and feature map dimension of each graph convolution, and the memory size and embedding dimension of the memory network were all chosen this way, and early stopping is used once the model stops improving. The code is available at https://github.com/sherylai/MemGCN.

| # of hops | Matching | tensor-FACT Accuracy | tensor-FACT AUC | ODF-RK2 Accuracy | ODF-RK2 AUC | Hough Accuracy | Hough AUC |
|---|---|---|---|---|---|---|---|
| 1 | inner product | 94.20±2.42 | 94.07±5.19 | 94.03±2.32 | 94.39±5.15 | 94.61±2.05 | 95.72±5.50 |
| 2 | inner product | 95.36±2.60 | 96.35±6.36 | 95.40±2.27 | 96.39±6.35 | 95.21±2.92 | 96.10±6.30 |
| 3 | inner product | 95.43±2.22 | 96.42±6.36 | 95.54±2.98 | 96.59±6.44 | 95.48±2.34 | 96.49±6.41 |
| 1 | bilinear | 94.68±2.04 | 95.32±5.98 | 93.88±2.22 | 94.17±5.49 | 94.37±1.89 | 95.17±4.95 |
| 2 | bilinear | 95.19±2.14 | 96.06±6.18 | 94.61±2.91 | 95.27±5.89 | 95.23±2.50 | 96.17±5.26 |
| 3 | bilinear | 95.47±2.25 | 96.48±6.40 | 95.87±2.56 | 96.84±6.36 | 95.64±2.00 | 96.74±6.51 |
Baselines
To test the performance of MemGCN, we report empirical results compared against a set of baselines. The following methods classify brain graphs without any other modalities.
Raw Edges. A simple approach that directly uses the numerical values of the connectivity matrix to represent the brain network; the feature space is the flattened edge-weight vector.
PCA. Principal Component Analysis (PCA) is used to reduce the data dimensionality. After forming a sample-by-feature input matrix, PCA is performed keeping the first 100 principal components, which is an optimal setting in practice.
FCN. A Fully Connected Network (FCN) is employed as a feature extractor for the raw connectivity matrix, as used on brain networks in [43].
FCN-2layer. A two-layer FCN with the same number of parameters in its first layer as FCN, which further reduces the dimension in its second layer.
GCN-inner. Metric learning for brain networks using GCN was first introduced in [24], where a global loss function supervises pairwise similarities. The cross-entropy loss is adopted in our experiments to be consistent with the other models.
GCN-bilinear. The bilinear matching layer proposed in Section II-E added on top of GCN to conduct metric learning; it is a version of MemGCN-bilinear without memory.
Furthermore, neural networks without memory mechanism that can embed the clinical data are built as baselines.
AttGCN. Instead of using input and output memories on sequences, only one embedding matrix for the computation of attentive weights is incorporated with GCN via the sum operation.
AttLstmGCN. A standard bidirectional LSTM with attention [28] is established for sequential EHR data and then its context states are combined with GCN feature maps.
Finally, two variants of our model are given.
MemGCN-inner. The proposed MemGCN with the inner product matching layer.
MemGCN-bilinear. The proposed MemGCN with the bilinear matching layer.
For a fair comparison, all reported models are built under the same pairwise matching architecture for metric learning; inner product matching is employed in a baseline unless it is stated to be a bilinear version. Cross-validation is conducted in all of our experiments.
Identical ROIs:

| Motor ROI | Score | Non-motor ROI | Score | Fusion ROI | Score |
|---|---|---|---|---|---|
| Right Thalamus Proper | 0.9258 | Rh Paracentral | 0.8563 | Rh Pars Opercularis | 0.9344 |
| Lh Insula | 0.9253 | Rh Lingual | 0.8180 | Rh Lateral Occipital | 0.8372 |
| Right Pallidum | 0.9226 | Right Pallidum | 0.8091 | Left Accumbens Area | 0.7887 |
| Lh Rostral Middle Frontal | 0.9210 | Lh Pars Orbitalis | 0.6554 | Rh Parahippocampal | 0.7827 |
| Parahippocampal | 0.9206 | Left Thalamus Proper | 0.6387 | Rh Frontal Pole | 0.7742 |

Discriminative ROIs:

| Motor ROI | Score | Non-motor ROI | Score | Fusion ROI | Score |
|---|---|---|---|---|---|
| Right Putamen | 0.9134 | Left Putamen | 0.7423 | Right Thalamus Proper | 0.8960 |
| Right Accumbens Area | 0.9075 | Lh Frontal Pole | 0.5754 | Left Caudate | 0.8439 |
| Left Hippocampus | 0.9059 | Lh Supramarginal | 0.5731 | Lh Paracentral | 0.8227 |
| Right VentralDC | 0.9058 | Lh Inferior Parietal | 0.5693 | Lh Middle Temporal | 0.7865 |
| Left Caudate | 0.9014 | Lh Paracentral | 0.4851 | Lh Cuneus | 0.7528 |

Lh and Rh are abbreviations for Left Hemisphere and Right Hemisphere, respectively.
Matching vs. Nonmatching Classification
Table I reports the performance on the binary classification task. Three sorts of memory augmentation are configured: motor sequences, non-motor sequences, and a fusion of the two. The metrics for evaluation are Accuracy and Area Under the Curve (AUC).
From the results we can observe that Raw Edges and simple feature extraction approaches such as PCA and a single-layer FCN cannot predict a reliable distance for sample pairs, and correspondingly do not achieve promising results on the matching classification task. Additional layers with extra nonlinearities help the fully connected networks capture the complicated patterns in the acquisitions. All the GCN-based methods largely improve both Accuracy and AUC on the three DTI sets generated by the tensor-FACT, ODF-RK2, and Hough tractography algorithms, which demonstrates the effectiveness of graph convolution on brain connectivity graphs. Overall, the bilinear matching strategy slightly outperforms the inner product matching strategy on both GCN and MemGCN. The best AUC performances are 96.42, 96.84, and 96.74 on the three sets, accomplished by MemGCN-bilinear with the fused clinical sequences as the external modality. With an attention mechanism, AttGCN and AttLstmGCN also perform well in the given circumstances; however, they cannot boost the results significantly compared to the vanilla GCN. The reason MemGCN behaves better than them is probably that separate memories for reading and retrieving are employed in a multi-hop network.

Table III shows the concrete effect of increasing the number of hops on inner product and bilinear matching. The number of hops is tuned from 1 to 3. The results on the Accuracy and AUC metrics illustrate that our multi-hop framework consistently improves performance.
Identical ROIs vs. Discriminative ROIs
The interpretability of MemGCN is investigated next. Since the representation learned in the inner product matching layer can be explained as pairwise similarities along the ROI dimension, it describes the significance of each ROI in metric learning. Therefore, we compute the average similarities over all PD-PD pairs and over all PD-HC pairs. ROIs with the highest scores in the PD group can be considered the identical ROIs for PD, while those with the lowest scores in the PD versus HC comparison are regarded as the discriminative ROIs.
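The per-ROI score computation described above can be sketched in a few lines; this is our reading of the procedure, with the PD-PD/PD-HC split encoded as a binary mask:

```python
import numpy as np

def roi_scores(sim_vectors, pd_pd_mask):
    """Average per-ROI similarity over two groups of pairs.

    sim_vectors: (P, N) inner-product matching outputs, one row per pair.
    pd_pd_mask:  (P,) booleans, True for PD-PD pairs, False for PD-HC pairs.
    Returns (identical, discriminative): mean per-ROI scores over PD-PD pairs
    (high = identical ROI) and over PD-HC pairs (low = discriminative ROI).
    """
    sims = np.asarray(sim_vectors, dtype=float)
    mask = np.asarray(pd_pd_mask, dtype=bool)
    return sims[mask].mean(axis=0), sims[~mask].mean(axis=0)
```

Ranking the first output descending and the second ascending then yields the two ROI lists reported in the table.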
The interpretable results, depending on memory augmentation with motor, non-motor, and fused data, are presented in Table III. While the full functions of human brain regions remain unclear, it is intriguing that MemGCN can locate some of the modality-related ROIs that might be critical for PD study. For instance, the most identical ROI for PD with motor features as augmentation is the Thalamus, one of whose major roles is motor control. Likewise, the lingual gyrus discovered with non-motor features is linked to visual processing, especially related to letters. On the other hand, MemGCN can help us find which ROIs are sufficiently discriminative to distinguish PD patients from healthy controls. Several ROIs important to current research by clinicians and domain experts are detected, e.g., the Caudate and Putamen areas.
To show the representation generated from the bilinear matching layer, we draw the edges between ROIs with high similarities in Fig. 3. Similar to Table III, the most identical edges for the PD group and the most discriminative edges between the PD and HC groups are depicted. The interesting patterns we found may deserve further exploration in clinical scenarios.
Longitudinal Alignment: Case Study
From the two panels of Fig. 4 we observe that although the structures of the three hops of the memory layer are the same, the attention weights they learn are quite different in typical cases. The matrices drawn as colormaps in Fig. 4 indicate the attention weights for one PD case and one healthy control. Here we discard the leading padding dimensions of the shown cases and display the memory positions (rows of the matrices) and the attention over all ROI vertices (columns of the matrices). A darker color indicates where MemGCN is attending during the multi-hop updating of representations. Given a specific case, which time point has more influence on his/her PD progression and which ROI is more important according to the clinical evidence can thus be analyzed through this longitudinal alignment between DTIs and EHRs.
In general, the first-hop attention appears primarily concerned with identifying the salient interactions between time-aware sequences and the ROIs' feature maps. In this hop, the majority of values are close to zero and only a few are close to one, so a sketch of the key ROIs and timestamps is signified. The second and third hops are then responsible for the fine-grained interactions relevant to optimizing the representation for the distance learning task.
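The multi-hop reading that produces these attention maps can be sketched as below. This is a simplified mock-up: the paper uses separate input and output memory embeddings per hop, whereas here a single key/value pair is shared across hops for brevity, and all dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
T, d = 10, 32          # memory positions (EHR timestamps) and embedding size
n_rois = 84

keys = rng.standard_normal((T, d))     # input memory, used for addressing
values = rng.standard_normal((T, d))   # output memory, used for reading
q = rng.standard_normal((n_rois, d))   # query: per-ROI feature maps from the GCN

attn_per_hop = []
for hop in range(3):                   # three reading hops, as in the paper
    p = softmax(q @ keys.T, axis=1)    # (n_rois, T): attention over memory
    attn_per_hop.append(p)
    q = q + p @ values                 # read the memory and update the query

print([a.shape for a in attn_per_hop])
```

Each `p` corresponds to one colormap panel: rows index ROI vertices and columns index memory positions, so darker entries mark the timestamps a given ROI attends to at that hop.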
Another important observation is that the PD case has different interaction patterns compared to the healthy control. At each hop, the PD case has relatively narrow attention and fewer responses across memory positions. For the PD case shown in Fig. 4, longitudinal alignments occur at a few timestamps after the 3-hop updating, and meanwhile a series of ROIs appears to act on the disease progression. By the Desikan-Killiany atlas, the darker ROI dimensions correspond to Rh Insula, Right Thalamus Proper, Right Caudate, and Right Putamen, which matches our expectation for the PD case.
Related Work
We briefly review the existing research that is closely related to the framework proposed in this paper.
EHR Mining. In recent years many algorithms have been proposed to mine insights from patient EHRs. Initially these methods were static, in the sense that they first construct patient vectors by aggregating EHRs within a certain observational time window and then build learning approaches (e.g., predictive models and clustering methods) on top of those vectors [3, 2]. Most of these methods are shallow, except the DeepPatient work, which applied an autoencoder to further compress the patient vectors and obtain better representations [44]. Recently, researchers have also been exploring CNN- and RNN-type approaches to incorporate the temporal information in patient EHRs into the modeling process [45, 46, 47]. However, these methods compress the patient EHRs to a vector before feeding it to the final model, which is not as flexible as the memory network we adopt.
GCN for Neuroimage Analysis. Many data mining approaches have been developed to perform neuroimage analysis in recent years [48], among which deep learning models are very popular because of their huge success in various computer vision problems
[49]. Recently, Ktena et al. [24] proposed to learn a metric from patients' neuroimages on top of features constructed using GCN (where the graph is the patients' brain network constructed on the ROIs), which can discriminate autism cases from controls. Zhang et al. [50] extended this approach to handle multiple modalities of the brain networks (e.g., constructed from different tractography algorithms on DTI images). However, none of them incorporated any clinical records from the patients. Our work is a first step towards filling this gap.
We propose a novel framework, Memory-Based Graph Convolution Network (MemGCN), to perform integrative analysis of patient clinical records and neuroimages. On the one hand, our experiments on classifying Parkinson's Disease case patients against healthy controls demonstrate the superiority of MemGCN over conventional approaches. On the other hand, the interpretable high-level representations extracted from the inner-product or bilinear matching layers are capable of indicating group patterns of brain connectivity via ROI nodes or their edges for PD subjects and healthy controls.
Here we explored graph convolution via ChebNet and embedding via a memory mechanism as feature extractors for neuroimages and patient health records, respectively. The pairwise distance under the metric-learning setting in our framework makes progress in modeling small-cohort data such as PPMI. An important future direction is to design deep architectures that require less training data while still learning meaningful representations. We are especially interested in continuing to develop more general end-to-end trainable models for boosting system performance on small data.
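One common instantiation of the pairwise metric-learning setting mentioned above is a contrastive loss on pairs of embeddings; the sketch below illustrates that idea with NumPy. This is an assumption for illustration only: the exact pairwise loss used in the paper may differ, and the function name and margin value are hypothetical.

```python
import numpy as np

def pairwise_distance_loss(z1, z2, label, margin=1.0):
    """Contrastive loss on a pair of graph embeddings: pull matched
    pairs (label=1) together, push mismatched pairs (label=0) apart
    by at least `margin`."""
    d = np.linalg.norm(z1 - z2)
    if label == 1:
        return d ** 2
    return max(0.0, margin - d) ** 2

z1 = np.array([0.1, 0.2, 0.3])
z2 = np.array([0.1, 0.2, 0.3])
assert pairwise_distance_loss(z1, z2, 1) == 0.0   # identical matched pair
print(pairwise_distance_loss(z1, np.array([2.0, 2.0, 2.0]), 0))
```

Because the supervision is defined over pairs rather than individual subjects, a cohort of n subjects yields on the order of n^2 training pairs, which is one reason this setting helps with small cohorts such as PPMI.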
The authors would like to thank Dr. Liang Zhan for his help on processing the neuroimages. The research is supported by NSF IIS-1716432, NSF IIS-1750326, and Michael J. Fox Foundation grant number 14858. Data used in the preparation of this article were obtained from the Parkinson's Progression Markers Initiative (PPMI) database (http://www.ppmi-info.org/data). For up-to-date information on the study, visit http://www.ppmi-info.org. PPMI, a public-private partnership, is funded by the Michael J. Fox Foundation for Parkinson's Research and funding partners, including AbbVie, Avid, Biogen, Bristol-Myers Squibb, Covance, GE, Genentech, GlaxoSmithKline, Lilly, Lundbeck, Merck, Meso Scale Discovery, Pfizer, Piramal, Roche, Sanofi, Servier, TEVA, UCB, and Golub Capital.
R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, "Deep learning with convolutional neural networks for EEG decoding and visualization," Human Brain Mapping, vol. 38, no. 11, pp. 5391–5420, 2017.
D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in International Conference on Learning Representations, 2015.
International Conference on Machine Learning, 2017, pp. 1243–1252.
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016, pp. 2972–2978.