I Introduction
Alzheimer’s disease (AD) is an irreversible, chronic neurodegenerative disease. It is characterized by slowly progressive memory loss and cognitive deficits, constitutes the most common form of dementia in the elderly, and has become a worldwide health issue [1, 2, 3, 4]. According to the Alzheimer’s Association [5], at least 50 million people worldwide are estimated to suffer from AD, and the number will exceed 152 million by 2050 if this situation continues. The widespread incidence of AD makes it an unavoidable global issue and creates a severe financial burden on both patients and governments. However, the causes and mechanisms of AD remain unclear, and there are no efficient treatments for AD. Therefore, it is becoming increasingly important for the scientific community to develop novel methods to study the pathological features of AD.
Traditional neuroimaging methods [6, 7] using CT or MRI to study the morphological features of brain regions of interest (ROIs) have already achieved high accuracy in the early diagnosis of AD. However, a non-negligible defect of traditional imaging methods is that they cannot characterize the interaction relations between ROIs, which is the main obstacle to understanding the pathological features of AD. To overcome such barriers, new tools are needed. One modern approach is brain network analysis. A brain network provides a powerful representation of the interaction patterns among ROIs and has gained considerable attention in investigating the mechanisms of AD.
A brain network can be characterized as a set of nodes and edges, where the nodes represent ROIs and the edges measure regional interactions extracted from neuroimaging. There are two common categories of brain networks: functional connectivity (FC) and structural connectivity (SC). FC is defined as the interdependence between two ROIs’ blood-oxygen-level-dependent (BOLD) signals extracted from resting-state functional magnetic resonance imaging (rs-fMRI). SC is defined as the physical connection strength between two ROIs’ neural fibers extracted from Diffusion Tensor Imaging (DTI). However, most existing brain network analysis approaches focus on either FC or SC alone, and may thus ignore the complementary information present in the different modalities. Previous studies
[8, 9, 10, 11] have shown that using multimodal neuroimaging data to study brain networks leads to a better understanding of brain mechanisms. Recent neurology studies [12, 13] indicated that brain cognitive mechanisms involve interactions among multiple co-activated brain regions (i.e., neural circuits) rather than single pairwise interactions between two ROIs. Such neural circuit information could be critical to understanding the pathological underpinnings of AD. However, detecting the crucial neural circuits of AD from multimodal brain networks is still challenging: because of the brain’s many-to-one function-structure mapping, traditional regression methods cannot be directly used to explore the neural circuit relationships among nodes. Moreover, the individual variability and the nonlinearity between the node features and the weighted adjacency matrix of the edges need to be considered simultaneously.
To overcome the above-mentioned difficulties, and motivated by the recent development of deep neural network-based methods [14, 15], a novel decoupling generative adversarial network (DecGAN) is proposed to detect the crucial neural circuits of AD from multimodal brain networks. Generative adversarial networks (GANs) [16, 17, 18, 19, 20] have attracted wide attention since they can efficiently learn complex distributions without explicitly modeling the probability density function. It is worth mentioning that combining variational methods [21, 22, 23] with GANs can learn high-order latent boundary distributions; this advantage of GANs can be utilized to capture the complicated high-order relationships buried in ROIs. In this work, a novel decoupling module in DecGAN is designed to decompose the multimodal brain network into several sparse subgraphs and a supplementary graph. The nodes of each sparse subgraph represent a crucial neural circuit of AD, while the nodes of the supplementary graph represent the ROIs whose influence on AD can be ignored. Specifically, the decoupling module is designed using graph-based algorithms [24, 25]. Compared to traditional CNN-based feature extraction algorithms, which can only operate on regular, Euclidean data, graph-based algorithms can analyze interrelated and hidden structures beyond grid neighbors, such as brain networks. By encoding the detected neural circuits as hypergraph data, an analytic module associated with the hyperedge neurons algorithm is designed to identify the disease status of the subjects and to guide the decoupling module to capture the neural circuits that largely determine the development of AD. In addition, the generator of DecGAN is utilized to reconstruct the brain network by using the outputs of the decoupling module and a latent variable as inputs. The reconstructed brain network is then fed back into the decoupling module to obtain a second collection of sparse subgraphs. Based on hypergraph embedding and hypergraph similarity, a novel sparse capacity loss is designed to compare the spatial-spectral differences between the two collections of sparse subgraphs. By minimizing this loss, the accuracy and robustness of the proposed model can be efficiently improved, and therefore the crucial neural circuits of AD can be detected. To the best of our knowledge, the proposed DecGAN is the first work to detect crucial neural circuits of AD by using generative adversarial networks. The contributions of this paper are summarized as follows:
A novel decoupling module associated with an adversarial strategy is proposed to detect the abnormal neural circuits of Alzheimer’s disease. By utilizing the iterative decoupling mechanism, the proposed model can efficiently extract brain network features highly related to Alzheimer’s disease.

A novel sparse capacity loss is designed to characterize the spatial-spectral differences between two collections of neural circuits, which minimizes the difference in intrinsic topological distribution between the detected neural circuits. It can significantly improve the accuracy and robustness of the proposed model.

An analytic module based on the hyperedge neurons algorithm is developed to identify the relationship between the detected neural circuits and Alzheimer’s disease. The analytic module can guide the decoupling module to capture the neural circuits that largely determine the development of Alzheimer’s disease.
The rest of this paper is organized as follows. The related work is reviewed in Section II. The proposed DecGAN is described in detail in Section III. In Section IV, DecGAN is tested and compared with existing brain network analysis methods to demonstrate its advantage. Finally, concluding remarks and future work are discussed in Section V.
II Related Work
With the development of machine learning (ML) techniques
[26, 27, 28, 29, 30, 31] in the area of brain networks, many ML models have been proposed to detect AD-related brain connectivity and predict AD progression. For example, Li et al. [32] proposed a deep spatial-temporal feature fusion method to predict AD at its early stage. Wang et al. [33] proposed a novel convolutional recurrent neural network for automated prediction of AD progression. Zhao et al.
[34] presented a nonlinear dynamic approach to reconstruct brain networks, which shows significant differences in FC between AD and NC. Recently, there has been growing interest in studying brain connectivity using graph-structured learning, which captures the whole topology of brain networks and has significant advantages in describing the high-order relationships between ROIs. In particular, graph convolutional networks (GCNs) are graph-structured algorithms that generalize convolutional neural networks (CNNs)
[35] from Euclidean data to graph-structured data [36, 37, 38]. Recently, GCN methods have been successfully used to analyze abnormal brain connections [39]. Moreover, some studies [40, 41] have used spatial-based GCNs to capture triplet-order relationships in brain networks. Experiments have shown that modeling triplet-order relationships of brain networks helps to boost classification accuracy and learning performance. Even though these studies [37, 41] only consider triplet-order relationships, they are a good start toward studying general high-order relationships of brain networks.

III Methods
Notations: Throughout this paper, let \(\mathbb{R}\) be the set of real numbers. We denote scalars, vectors, and matrices by normal letters, lowercase boldface, and uppercase boldface, respectively. For a matrix A, we denote its \((i,j)\)-th entry, inverse, and transpose as \(A_{ij}\), \(A^{-1}\), and \(A^{T}\), respectively.

III-A GCN and Decoupling Module
In this study, we construct a graph to combine information from the multimodal imaging data (including rs-fMRI and DTI). To encode the prior brain networks from rs-fMRI and DTI, let \(G = (V, E)\) be an undirected weighted graph, where \(V\) is the set of nodes representing the 90 ROIs defined by the anatomical automatic labeling (AAL) template, and \(E\) is the set of edges between nodes. Let the BOLD signals be the features of the corresponding nodes, collected in the feature matrix X. Let the SC matrix be the weighted adjacency matrix of the edges, denoted by A. The multilayer GCN is defined with the following layer-wise propagation rule:
\[ H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}H^{(l)}W^{(l)}\big) \tag{1} \]
where \(\tilde{A} = A + I_N\) is the weighted adjacency matrix of \(G\) with added self-connections, \(I_N\) is the identity matrix, \(\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}\) is the degree matrix of \(\tilde{A}\), \(W^{(l)}\) is a layer-specific trainable weight matrix, and \(\sigma(\cdot)\) denotes a nonlinear activation function. \(H^{(l)}\) is the feature matrix of the nodes in the \(l\)-th layer, with \(H^{(0)} = X\). The decoupling layer detects a neural circuit by choosing the top-\(k\) nodes that satisfy the following inequality:
\[ d^{\top}\sigma\big(Wh_i + b\big) > \varepsilon \tag{2} \]
where d is a trainable vector, W is a trainable weight matrix, and b is a trainable bias; \(k\) and \(\varepsilon\) are hyperparameters. Suppose that the decoupling layer outputs a neural circuit \(C\) with \(|C| \le k\) (the number of nodes satisfying condition (2) may be less than \(k\)). We then get the sparse graph \(G_C\), whose weighted adjacency matrix \(\hat{A}\) is given as follows:

\[ \hat{A}_{ij} = \begin{cases} A_{ij}, & v_i, v_j \in C \\ 0, & \text{otherwise} \end{cases} \tag{3} \]
After the decoupling layer, we update \(G\) to \(G'\) by setting the adjacency matrix \(A' = A - \hat{A}\). A GCN layer then extracts the feature matrix of \(G'\), a decoupling layer produces a new neural circuit and sparse graph, and \(G'\) is updated in the same way. Iterating this procedure \(N\) times yields the neural circuits \(C_1, \dots, C_N\). The supplementary set of \(\bigcup_{t} C_t\) is denoted by \(C_0 = V \setminus \bigcup_{t} C_t\). Finally, we define the supplementary graph \(G_0\), whose weighted adjacency matrix \(A_0\) is given as follows:

\[ (A_0)_{ij} = \begin{cases} A_{ij}, & v_i, v_j \in C_0 \\ 0, & \text{otherwise} \end{cases} \tag{4} \]
The architecture of the decoupling module is shown in Fig. 2.
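To make the decoupling idea concrete, the following is a minimal NumPy sketch of one GCN propagation step and one top-k decoupling step. It is illustrative only: the scoring function `H @ d_vec` is a simplified stand-in for the trainable d, W, and b of the decoupling layer, and the function names are our own.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_tilde = A + np.eye(A.shape[0])       # add self-connections
    d = A_tilde.sum(axis=1)                # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)
    H_next = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)         # ReLU

def decouple(A, H, d_vec, k):
    """Pick the k highest-scoring nodes as one neural circuit and return
    the circuit plus the sparse subgraph restricted to those nodes."""
    scores = H @ d_vec                     # simplified node scoring
    circuit = np.argsort(scores)[::-1][:k] # top-k node indices
    mask = np.zeros_like(A)
    mask[np.ix_(circuit, circuit)] = 1.0
    return circuit, A * mask               # sparse subgraph adjacency
```

Iterating `decouple` after removing the detected circuit's edges from A reproduces the loop described above.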
III-B Hypergraph and Analytic Module
To guarantee that the neural circuits detected by the decoupling module have a significant influence on AD, an analytic module is designed to analyze AD progression by using these neural circuits. The analytic module is based on a hypergraph-related algorithm, and the relevant concepts are introduced as follows. A hypergraph is defined as \(G_H = (V_H, E_H)\), which includes a vertex set \(V_H\) and a hyperedge set \(E_H\). The main difference between a hypergraph and a graph is that a hyperedge in a hypergraph can connect more than two vertices; a graph is a special case of a hypergraph in which each hyperedge has size 2. The structure of a hypergraph can be described by an incidence matrix H, with entries defined as \(H(v, e) = 1\) if \(v \in e\) and \(H(v, e) = 0\) otherwise.
Recall that each sparse graph \(G_{C_t}\) corresponds to a neural circuit \(C_t\). The collection of neural circuits is embedded into a hypergraph by setting each hyperedge \(e_t = C_t\). Let the feature matrix of the hypergraph be the average of the feature matrices of the sparse graphs. The hyperedge neurons algorithm proposed in [42] is used to update the vertex features with the following layer-wise propagation rule:
\[ X_E^{(l)} = \sigma\big(H^{\top} X_V^{(l)} W_E^{(l)} + b_E^{(l)}\big) \tag{5} \]

\[ X_V^{(l+1)} = \sigma\big(H X_E^{(l)} W_V^{(l)} + b_V^{(l)}\big) \tag{6} \]
where \(W_E^{(l)}\) and \(W_V^{(l)}\) are layer-specific trainable weight matrices, and \(b_E^{(l)}\) and \(b_V^{(l)}\) are layer-specific trainable biases. The architecture of the analytic module is shown in Fig. 4.
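The hypergraph encoding and the two-step vertex-to-hyperedge-to-vertex propagation can be sketched as follows. This is a simplified, bias-free illustration (our own function names, ReLU activation, and no normalization), not the exact formulation of the hyperedge neurons algorithm [42].

```python
import numpy as np

def incidence_matrix(n_nodes, hyperedges):
    """|V| x |E| incidence matrix: H[v, e] = 1 iff node v is in hyperedge e."""
    H = np.zeros((n_nodes, len(hyperedges)))
    for e, nodes in enumerate(hyperedges):
        H[list(nodes), e] = 1.0
    return H

def hyperedge_neurons_layer(H, Xv, We, Wv):
    """Two-step propagation: vertices -> hyperedges -> vertices."""
    relu = lambda z: np.maximum(z, 0.0)
    Xe = relu(H.T @ Xv @ We)   # hyperedge features from member vertices
    return relu(H @ Xe @ Wv)   # scatter hyperedge features back to vertices

# Two neural circuits over 5 ROIs, each encoded as one hyperedge.
H = incidence_matrix(5, [{0, 1, 2}, {2, 3, 4}])
Xv = np.ones((5, 3))
out = hyperedge_neurons_layer(H, Xv, np.eye(3), np.eye(3))
```

Note that node 2, which belongs to both circuits, aggregates information from both hyperedges in the second step.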
III-C Generator and Discriminator
The generator is designed to reconstruct the brain network from the sparse graphs and the supplementary graph \(G_0\). There are two steps in this procedure. First, we reconstruct the weighted adjacency matrix of the brain network. Given a latent space \(Z\), the generator learns a mapping from a random vector \(z \in Z\) to an output connectivity matrix \(M\). We reconstruct the weighted adjacency matrix \(\hat{A}_{rec}\) as follows:
\[ (\hat{A}_{rec})_{ij} = \begin{cases} \hat{A}^{(t)}_{ij}, & v_i, v_j \in C_t,\ t = 1, \dots, N \\ (A_0)_{ij}, & v_i, v_j \in C_0 \\ M_{ij}, & \text{otherwise} \end{cases} \tag{7} \]
where \(C_1, \dots, C_N\) are the neural circuits and \(C_0\) is their supplementary set; \(A_0\) is the weighted adjacency matrix of the supplementary graph \(G_0\), and \(\hat{A}^{(1)}, \dots, \hat{A}^{(N)}\) are the weighted adjacency matrices of the sparse graphs, respectively. Second, we use a multilayer GCN with weighted adjacency matrix \(\hat{A}_{rec}\) and the initial feature matrix X to reconstruct the feature matrix of the brain network. The reconstructed brain network is denoted by \(\hat{G}\). The discriminator is designed to identify whether a brain network is real or fake. It consists of a multilayer GCN and a fully connected layer. The initial brain network and the reconstructed brain network are treated as real and fake samples, respectively, for discriminator training. The structure of the generator and discriminator is illustrated in Fig. 5.
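One plausible reading of the adjacency-reconstruction step, keeping the decoupled subgraph edges where they are defined and taking the generated edges elsewhere, can be sketched as follows (a simplified illustration under our own naming; the mask marks entries covered by some subgraph):

```python
import numpy as np

def reconstruct_adjacency(M, sparse_adjs, mask):
    """Combine the generator output M with the decoupled subgraphs:
    keep subgraph edges where mask == 1, take generated edges elsewhere."""
    A_sub = sum(sparse_adjs)               # union of subgraph adjacencies
    return mask * A_sub + (1.0 - mask) * M
```

The reconstructed matrix then feeds the multilayer GCN that rebuilds the node features.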
III-D Loss Functions
Analytic Loss. Given a multimodal brain network, the goal of DecGAN is to detect the crucial neural circuits of AD, which are represented by the sparse graphs. To achieve this, the analytic module introduced in Section III-B is used. The analytic loss is imposed when optimizing the analytic module and the decoupling module. In detail, the analytic loss is defined as the cross-entropy between the predicted and true disease labels:

\[ \mathcal{L}_{ana} = -\mathbb{E}\big[\, y \log \hat{y} + (1 - y)\log(1 - \hat{y}) \,\big] \tag{8} \]

where \(y\) is the true label and \(\hat{y}\) is the prediction of the analytic module.
The analytic module and the decoupling module are optimized by minimizing this analytic loss. The analytic module is trained to classify the disease status by using the sparse graphs as inputs, while the decoupling module is trained to detect sparse graphs such that the analytic module achieves the highest analytic accuracy when using them as inputs.
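Assuming a standard binary cross-entropy objective for the analytic module (our assumption; the paper's exact analytic loss may differ), the loss can be sketched as:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-8):
    """Mean binary cross-entropy between labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))
```

Confident correct predictions drive this loss toward zero, which is the signal used to steer the decoupling module.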
Adversarial Loss. To make the reconstructed brain network indistinguishable from the initial brain network, the adversarial loss for the generator is defined as

\[ \mathcal{L}_{adv}^{G} = \mathbb{E}_{z \sim Z}\big[\log\big(1 - D(\hat{G})\big)\big] \tag{9} \]

The adversarial loss for the discriminator is defined as

\[ \mathcal{L}_{adv}^{D} = -\mathbb{E}\big[\log D(G)\big] - \mathbb{E}_{z \sim Z}\big[\log\big(1 - D(\hat{G})\big)\big] \tag{10} \]
Sparse Capacity Loss. The decoupling module is optimized according to the feedback of the analytic module. However, this procedure does not guarantee that the decoupling module detects the neural circuits robustly when the topological structure of the brain network is slightly changed. To break this dilemma, a novel sparse capacity loss is designed to characterize the structural difference between two collections of neural circuits. Let \(S\) and \(\hat{S}\) be the collections of neural circuits output by the decoupling module using \(G\) and \(\hat{G}\) as inputs, respectively. The collections \(S\) and \(\hat{S}\) are embedded into the hypergraphs \(G_H\) and \(\hat{G}_H\) by setting each hyperedge to a neural circuit, as before. The sparse capacity loss consists of two terms, the spatial similarity and the spectral similarity of the hypergraphs \(G_H\) and \(\hat{G}_H\), i.e.,
\[ \mathcal{L}_{sc} = \mathcal{L}_{spa} + \mathcal{L}_{spe} \tag{11} \]
In detail, the spatial similarity is defined by

\[ \mathcal{L}_{spa} = \big\lVert H - \hat{H} \big\rVert_F^2 \tag{12} \]
The spectral similarity is based on the hypergraph Laplacian

\[ \Delta = I - D_v^{-1/2} H D_e^{-1} H^{\top} D_v^{-1/2} \tag{13} \]

where H is the incidence matrix of \(G_H\), \(D_v\) denotes the diagonal matrix of node degrees, and \(D_e\) denotes the diagonal matrix of hyperedge degrees. Let \(\{\lambda_i\}\) and \(\{\hat{\lambda}_i\}\) be the eigenvalues of the hypergraph Laplacians \(\Delta\) and \(\hat{\Delta}\), respectively. The spectral similarity of \(G_H\) and \(\hat{G}_H\) is defined by

\[ \mathcal{L}_{spe} = \sum_i \big(\lambda_i - \hat{\lambda}_i\big)^2 \tag{14} \]
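The spectral term, built on the normalized hypergraph Laplacian and its eigenvalue spectrum, can be sketched as follows (a minimal NumPy illustration with unit edge weights; function names are our own):

```python
import numpy as np

def hypergraph_laplacian(H):
    """Normalized hypergraph Laplacian:
    L = I - Dv^-1/2 H De^-1 H^T Dv^-1/2 (unit edge weights)."""
    Dv = H.sum(axis=1)                      # node degrees
    De = np.diag(H.sum(axis=0))             # hyperedge degrees
    Dv_is = np.diag(Dv ** -0.5)
    return np.eye(H.shape[0]) - Dv_is @ H @ np.linalg.inv(De) @ H.T @ Dv_is

def spectral_similarity(H1, H2):
    """Squared distance between the sorted Laplacian eigenvalue spectra
    of two hypergraphs with the same number of nodes."""
    ev1 = np.sort(np.linalg.eigvalsh(hypergraph_laplacian(H1)))
    ev2 = np.sort(np.linalg.eigvalsh(hypergraph_laplacian(H2)))
    return float(np.sum((ev1 - ev2) ** 2))
```

Two identical collections of circuits yield identical spectra and hence a zero spectral term, which is the behavior the loss rewards.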
Total Loss. The total loss functions used to optimize the decoupling module, the analytic module, the generator, and the discriminator are summarized respectively as

\[ \mathcal{L}_{Dec} = \mathcal{L}_{ana} + \lambda \mathcal{L}_{sc} \tag{15} \]

\[ \mathcal{L}_{Ana} = \mathcal{L}_{ana} \tag{16} \]

\[ \mathcal{L}_{G} = \mathcal{L}_{adv}^{G} + \lambda \mathcal{L}_{sc} \tag{17} \]

\[ \mathcal{L}_{D} = \mathcal{L}_{adv}^{D} \tag{18} \]

where \(\lambda\) is a hyperparameter that controls the relative importance of the analytic loss and the sparse capacity loss.
IV Experiments
IV-A Dataset and Preprocessing
DTI and rs-fMRI images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) public dataset are used to validate our proposed framework. Data from 236 subjects are used in this study. The detailed subject information is summarized in Table I.
Group  AD (53)  LMCI (31)  EMCI (74)  NC (78)

Male/Female  32M/21F  15M/16F  45M/29F  35M/43F
Age (mean ± SD)  75.3 ± 5.5  74.9 ± 5.3  75.8 ± 6.1  74.0 ± 5.9
In this study, the DPARSF [43] and GRETNA [44] toolboxes are used to preprocess the rs-fMRI data. First, the initial DICOM-format rs-fMRI data are converted to NIFTI format. Then, standard rs-fMRI preprocessing steps are applied using DPARSF, including discarding the first 20 volumes, head motion correction, spatial normalization, and Gaussian smoothing. Next, the AAL atlas is used to divide the brain space into 90 ROIs. Finally, the time series of all voxels are extracted using GRETNA. The BOLD signal of each ROI is obtained by taking the mean of the time series of all voxels within that ROI.
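The last step, averaging voxel time series within each ROI into one BOLD signal per region, can be sketched as (illustrative function of our own; `roi_labels` is assumed to hold AAL labels 1..90 per voxel):

```python
import numpy as np

def roi_bold_signals(voxel_ts, roi_labels, n_rois=90):
    """Average the time series of all voxels inside each ROI.
    voxel_ts: (n_voxels, T) array; roi_labels: (n_voxels,) AAL labels."""
    T = voxel_ts.shape[1]
    signals = np.zeros((n_rois, T))
    for r in range(1, n_rois + 1):
        mask = roi_labels == r
        if mask.any():
            signals[r - 1] = voxel_ts[mask].mean(axis=0)
    return signals
```

The resulting 90 x T matrix is the node feature matrix X of Section III-A.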
The PANDA toolbox [45] is used to preprocess the DTI data. Similar to the rs-fMRI preprocessing, the initial DICOM-format DTI data are first converted to NIFTI format. Then, skull stripping, fiber-bundle resampling, and head motion correction are applied. Next, the AAL atlas is used to divide the brain space into 90 ROIs, each defined as a node of the brain network. Finally, the structural connectivity of the brain network is determined by fiber tracking between different ROIs. Fiber tracking stops when either of the following conditions holds: (1) the crossing angle between two consecutive moving directions exceeds 45 degrees; (2) the fractional anisotropy value falls outside the range [0.2, 1.0].
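The two stopping conditions can be expressed as a single predicate (a sketch with our own function name, not PANDA's internal implementation):

```python
import numpy as np

def should_stop(dir_prev, dir_curr, fa, max_angle_deg=45.0, fa_range=(0.2, 1.0)):
    """Stop tracking when the turning angle between consecutive step
    directions exceeds max_angle_deg, or the FA value leaves fa_range."""
    cos_a = np.dot(dir_prev, dir_curr) / (
        np.linalg.norm(dir_prev) * np.linalg.norm(dir_curr))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return bool(angle > max_angle_deg or not (fa_range[0] <= fa <= fa_range[1]))
```

A straight step inside the FA range continues; a sharp turn or an out-of-range FA value terminates the streamline.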
IV-B Experiment Settings
A five-fold cross-validation strategy is used to evaluate the performance of our proposed DecGAN model. In detail, all subjects are randomly divided into five subsets of equal size. One subset is treated as the test set, and the union of the other four subsets is treated as the training set. This process is repeated five times to remove the bias introduced by the random division. The classification performance is evaluated by the mean values of accuracy (ACC), sensitivity (SEN), specificity (SPE), and F1-score (F1). The proposed DecGAN is implemented in PyTorch. All experiments in this study are conducted on four NVIDIA GeForce RTX 2080 Ti GPUs. The optimizer is Adam, the batch size is 16, and the decoupling coefficient is set to 0.1. The learning rates of the decoupling module, analytic module, generator, and discriminator are set individually for each module.

IV-C Neural Circuits Analysis
In this section, experiments are conducted to study the influence of the decoupling module’s hyperparameters \(N\) and \(k\) on classification performance. Recall that \(N\) is the number of neural circuits output by the decoupling module, and \(k\) is the maximal number of ROIs contained in each neural circuit. The experiments evaluate the ACC and AUC values of the binary classification task (AD vs. NC) with varied \((N, k)\) while keeping the other parameters fixed. The results are shown in Fig. 6, which indicates the two settings of \((N, k)\) achieving the highest ACC. In addition, the root mean square error (RMSE) is used to measure the structural differences between the prior brain network and the reconstructed brain network; to generate reliable reconstructed brain networks, the RMSE should keep decreasing until convergence. The quantitative analysis between the reconstructed brain networks and the prior brain networks is shown in Fig. 7.
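The RMSE used above, computed between the prior and reconstructed weighted adjacency matrices, is simply (illustrative helper of our own):

```python
import numpy as np

def rmse(A_prior, A_recon):
    """Root mean square error between two weighted adjacency matrices."""
    return float(np.sqrt(np.mean((A_prior - A_recon) ** 2)))
```

A monotonically decreasing RMSE over training iterations indicates that the reconstructed networks are converging toward the prior networks.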
Counting all the outputs over all subjects, the abnormal neural circuits for AD are [Hippocampus_L, Hippocampus_R, Precuneus_L, Precuneus_R, Putamen_L] and [Frontal_Mid_Orb_L, Amygdala_R, Pallidum_L, Heschl_L, Heschl_R] when \(k = 5\), and [Frontal_Mid_Orb_L, Hippocampus_L, Hippocampus_R, Amygdala_R, Occipital_Inf_L, Occipital_Inf_R, Pallidum_L, Heschl_L] when \(k = 8\). The structural connectivity within these neural circuits is shown in Figs. 8-12.
IV-D Ablation Analysis
We propose the sparse capacity loss to compare the structural differences between the neural circuits decoupled from the prior brain network and those decoupled from the reconstructed brain network. To explore its effectiveness, two ablation experiments are performed: one replaces the sparse capacity loss with an MSE loss to measure the structural difference between the two collections of neural circuits, and the other removes the sparse capacity loss entirely.
The effects of the sparse capacity loss on classification performance are shown in Fig. 13. As shown there, the results with the sparse capacity loss are better than those with the MSE loss or with no such loss. The improvements in ACC, SEN, SPE, and F1 demonstrate that the sparse capacity loss can effectively enhance the performance of the decoupling module.
V Discussion and Conclusion
V-A Comparison with Clinical Results
By counting the number of occurrences of ROIs in the neural circuits output by the model, we obtain the top ten ROIs; specifically, these are Frontal_Mid_Orb_L, Hippocampus_L, Hippocampus_R, Amygdala_L, Occipital_Inf_L, Occipital_Inf_R, Precuneus_L, Precuneus_R, Heschl_L, and Heschl_R. These brain regions are mainly concentrated in the memory and reasoning areas, which are highly related to AD according to clinical studies [46, 47].
AAL region  Location  Evidence 

Frontal_Mid_Orb_L  Frontal lobe  Salat et al. [48] 
Hippocampus_L  Limbic lobe  Du et al. [49] 
Hippocampus_R  Limbic lobe  Du et al. [49] 
Amygdala_L  Limbic lobe  Tsuchiya et al. [50] 
Occipital_Inf_L  Occipital lobe  Sun et al. [51] 
Occipital_Inf_R  Occipital lobe  Gupta et al. [52] 
Precuneus_L  Parietal lobe  Yang et al. [53] 
Precuneus_R  Parietal lobe  Karas et al. [54] 
Heschl_L  Temporal lobe  Zhou et al. [55] 
Heschl_R  Temporal lobe  Pusil et al. [56] 
Classifier  Input  NC vs. EMCI  NC vs. LMCI  NC vs. AD  
ACC  SEN  SPE  F1  ACC  SEN  SPE  F1  ACC  SEN  SPE  F1  
SVM  Priori brain networks  72.41%  73.33%  71.42%  73.33%  76.19%  86.66%  50.00%  83.87%  77.77%  87.50%  63.63%  82.35% 
Reconstruction brain networks  75.86%  80.00%  71.42%  77.41%  80.95%  86.66%  66.66%  86.66%  85.18%  75.00%  100.00%  85.71%  
DNN  Priori brain networks  79.31%  86.66%  71.42%  81.25%  80.95%  80.00%  83.33%  85.71%  81.48%  87.50%  72.72%  84.85% 
Reconstruction brain networks  79.31%  73.33%  85.71%  78.57%  85.71%  86.66%  83.33%  89.65%  85.18%  93.75%  72.72%  88.23%  
GCN  Priori brain networks  82.75%  86.66%  78.57%  83.87%  80.95%  73.33%  100.00%  84.61%  81.48%  81.25%  81.80%  83.87% 
Reconstruction brain networks  86.20%  86.66%  85.71%  86.66%  85.71%  80.00%  100.00%  88.88%  85.18%  81.25%  90.91%  86.66% 
In detail, literature verification is carried out to determine whether these ROIs are related to AD; the results are shown in Table II. Note that these ROIs are mainly located in the limbic lobe, occipital lobe, parietal lobe, and temporal lobe. The limbic lobe is regarded as an important brain area that is highly related to AD pathology [57]. The occipital lobe is significantly related to visual memory, and AD patients show atrophy in the occipital cortex [58]. The parietal lobe is involved in spatial function and is particularly important for real-time spatial navigation; AD patients show changes in parietal lobe white matter hyperintensities [59]. The parietal lobe also plays an important role in integrating sensory information from various parts of the body and in numerical knowledge. AD patients show rapid atrophy in the medial temporal lobe [60]. Therefore, these clinical results confirm the high correlation between the neural circuits detected by our model and AD.
V-B The Reconstructed Brain Networks by the Generator
The generator is used to reconstruct the brain network from the output of the decoupling module and the latent space. The main goal of the proposed model is to detect the abnormal neural circuits that have a significant influence on AD. The stability of the decoupling module can be improved by enhancing the AD-related feature expression ability of the reconstructed brain networks. Therefore, the quality of the reconstructed brain network is the key factor determining whether the proposed model can detect neural circuits accurately. To verify the AD-related feature expression ability of the reconstructed brain networks, a classification experiment is designed to compare the prediction performance of the prior brain networks and the reconstructed brain networks at different AD stages. The classification results are shown in Table III. Finally, the visualization of the structural connectivity (i.e., the weighted adjacency matrix) of the brain networks is shown in Fig. 14.
V-C Conclusion
In this paper, we propose a novel decoupling generative adversarial network (DecGAN) to detect the crucial neural circuits of AD. Benefiting from the GCN layer and the decoupling layer, the proposed model can efficiently extract complementary topology information from rs-fMRI and DTI. Moreover, we propose the sparse capacity loss to characterize the intrinsic topological difference between different neural circuits, which significantly improves the robustness and accuracy of the proposed model. This paper focuses only on AD, but it is worth mentioning that the proposed model can easily be extended to other neurodegenerative diseases. Although the proposed model is promising in providing a new multimodal analysis framework for neural circuit detection, there are still two major limitations in our work. One limitation is that the proposed model cannot explain how the internal mechanisms of the detected neural circuits affect the development of the disease. A possible solution is to introduce recurrent neural networks to analyze the BOLD signals, which would make it possible to quantitatively characterize the influence of neural circuits on the disease process. The other limitation is that our current dataset is relatively small. In the future, we plan to test the effectiveness of the proposed model on a larger brain imaging dataset such as UK Biobank.
References
 [1] M. W. Bondi, E. C. Edmonds, and D. P. Salmon, “Alzheimer’s disease: Past, present, and future,” Journal of the International Neuropsychological Society: JINS, vol. 23, no. 9-10, pp. 818–831, Oct. 2017.
 [2] W. Yu, et al., “Morphological feature visualization of Alzheimer’s disease via Multidirectional Perception GAN,” IEEE Transactions on Neural Networks and Learning Systems, 2021.
 [3] C. Patterson, World Alzheimer Report 2018: The State of the Art of Dementia Research: New Frontiers. London, U.K.: Alzheimer’s Disease International, 2018.
 [4] S. Wang, H. Wang, Y. Shen, and X. Wang, “Automatic recognition of mild cognitive impairment and Alzheimer’s disease using ensemble based 3D densely connected convolutional networks,” 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 517–523, 2018.
 [5] Alzheimer’s Association, “2019 Alzheimer’s disease facts and figures,” Alzheimer’s Dementia, vol. 15, no. 3, pp. 321–387, 2019.
 [6] S. Hu, W. Yu, Z. Chen, and S. Wang, “Medical Image Reconstruction Using Generative Adversarial Network for Alzheimer Disease Assessment with Class-Imbalance Problem,” 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 1323–1327, 2020.
 [7] S. Wang, H. Wang, A. Cheung, Y. Shen, and M. Gan, “Ensemble of 3D Densely Connected Convolutional Network for Diagnosis of Mild Cognitive Impairment and Alzheimer’s Disease,” Deep Learning Applications, pp. 53–73, 2020.
 [8] C. Hinrichs, V. Singh, G. Xu, S. C. Johnson, and The Alzheimers Disease Neuroimaging Initiative, “Predictive markers for AD in a multimodality framework: An analysis of MCI progression in the ADNI population,” NeuroImage, vol. 55, no. 2, pp. 574–589, 2011.
 [9] S. Hu, J. Yuan, and S. Wang, “Cross-modality Synthesis from MRI to PET Using Adversarial U-Net with Different Normalization,” 2019 International Conference on Medical Imaging Physics and Engineering (ICMIPE), pp. 1–5, 2019.
 [10] E. M. Reiman and W. J. Jagust,“Brain imaging in the study of alzheimer’s disease,” in Neuroimage, vol. 61, no. 2, pp. 505–516, 2012.
 [11] D. L. Weimer and M. A. Sager,“Early identification and treatment of alzheimer’s disease: social and fiscal outcomes,” Alzheimer’s & Dementia, vol. 5, no. 3, pp. 215–226, 2009.
 [12] S. Harris, F. Wolf, B. Strooper, and M. Busche, “Tipping the Scales: PeptideDependent Dysregulation of Neural Circuit Dynamics in Alzheimer’s Disease,” Neuron, vol. 107, pp. 417–435, Aug. 2020.
 [13] R. Canter, J. Penney, and LH. Tsai, “The road to restoring neural circuits for the treatment of Alzheimer’s disease,” Nature, 539, pp. 187–196, Nov. 2016.
 [14] B. Lei, et al., “Deep and joint learning of longitudinal data for Alzheimer’s disease prediction,” Pattern Recognition, vol. 102, p. 107247, 2020.
 [15] S. Wang, Y. Shen, W. Chen, T. Xiao, and J. Hu, “Automatic recognition of mild cognitive impairment from mri images using expedited convolutional neural networks,” International Conference on Artificial Neural Networks, pp. 373–380, 2017.
 [16] S. Hu, et al., “Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis,” IEEE Transactions on Medical Imaging, DOI: 10.1109/TMI.2021.3107013, 2021.
 [17] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
 [18] S. Hu, Y. Shen, S. Wang, and B. Lei, “Brain MR to PET Synthesis via Bidirectional Generative Adversarial Network,” International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 698–707, 2020.
 [19] B. Lei, et al., “Skin lesion segmentation via generative adversarial networks with dual discriminators,” Medical Image Analysis, vol. 64, p. 101716, 2020.
 [20] W. Yu, B. Lei, M. Ng, A. Cheung, Y. Shen, and S. Wang, “Tensorizing GAN with highorder pooling for Alzheimer’s disease assessment,” IEEE Transactions on Neural Networks and Learning Systems, 2021.
 [21] L. Mo, and S. Wang, “A variational approach to nonlinear two-point boundary value problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. 834–838, 2009.
 [22] S. Wang, “A variational approach to nonlinear two-point boundary value problems,” Computers & Mathematics with Applications, vol. 58, no. 11–12, pp. 2452–2455, 2009.
 [23] S. Wang, and J. He, “Variational iteration method for solving integro-differential equations,” Physics Letters A, vol. 367, no. 3, pp. 188–191, 2007.
 [24] S. I. Ktena et al., “Metric learning with spectral graph convolutions on brain connectivity networks,” NeuroImage, vol. 169, pp. 431–442, 2018.
 [25] D. Yao, M. Liu, M. Wang, C. Lian, J. Wei, L. Sun, J. Sui, and D. Shen, “Triplet graph convolutional network for multiscale analysis of functional connectivity using functional MRI,” in International Workshop on Graph Learning in Medical Imaging, pp. 70–78, Springer, 2019.
 [26] D. Zeng, S. Wang, Y. Shen, and C. Shi, “A GA-based feature selection and parameter optimization for support tucker machine,” Procedia Computer Science, vol. 111, pp. 17–23, 2017.
 [27] S. Wang, and J. He, “Variational iteration method for a nonlinear reaction-diffusion process,” International Journal of Chemical Reactor Engineering, vol. 6, no. 1, 2008.
 [28] S. Wang, et al., “Skeletal maturity recognition using a fully automated system with convolutional neural networks,” IEEE Access, vol. 6, pp. 29979–29993, 2018.
 [29] S. Wang, et al., “An Ensemble-Based Densely-Connected Deep Learning System for Assessment of Skeletal Maturity,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.
 [30] S. Wang, Y. Hu, Y. Shen, and H. Li, “Classification of diffusion tensor metrics for the diagnosis of a myelopathic cord using machine learning,” International Journal of Neural Systems, vol. 28, no. 2, p. 1750036, 2018.
 [31] S. Wang, et al., “Prediction of myelopathic level in cervical spondylotic myelopathy using diffusion tensor imaging,” Journal of Magnetic Resonance Imaging, vol. 41, no. 6, pp. 1682–1688, 2015.
 [32] Y. Li, J. Liu, Z. Tang, and B. Lei, “Deep SpatialTemporal Feature Fusion From Adaptive Dynamic Functional Connectivity for MCI Identification,” IEEE Transactions on Medical Imaging, vol. 39, no. 9, pp. 2818–2830, Sep. 2020
 [33] M. Wang, C. Lian, D. Yao, D. Zhang, M. Liu, and D. Shen,“Spatialtemporal dependency modeling and network hub detection for functional MRI analysis via convolutionalrecurrent network,” IEEE Transactions on Biomedical Engineering, 2020.
 [34] Y. Zhao et al., “Imaging of Nonlinear and Dynamic Functional Brain Connectivity Based on EEG Recordings With the Application on the Diagnosis of Alzheimer’s Disease,” IEEE Transactions on Medical Imaging, vol. 39, no. 5, pp. 1571–1581, May 2020,
 [35] K. Wu, Y. Shen, and S. Wang, “3D convolutional neural network for regional precipitation nowcasting,” Journal of Image and Signal Processing, vol.7, no. 4, pp. 200–212, 2018
 [36] T. Kipf, and M. Welling, “SemiSupervised Classification with Graph Convolutional Networks,” arXiv preprints arXiv:1609.02907, 2016.
 [37] D. Yao, M. Liu, M. Wang, C. Lian, J. Wei, L. Sun, J. Sui, and D. Shen, “Triplet graph convolutional network for multiscale analysis of functional connectivity using functional MRI,” in International Workshop on Graph Learning in Medical Imaging, pp. 70–78, Springer, 2019.
[38] S. Parisot et al., “Disease prediction using graph convolutional networks: Application to autism spectrum disorder and Alzheimer’s disease,” Medical Image Analysis, vol. 48, pp. 117–130, 2018.
[39] S. I. Ktena et al., “Distance metric learning using graph convolutional networks: Application to functional brain networks,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., Cham, Switzerland: Springer, 2017, pp. 469–477.
[40] S. Yu et al., “Multiscale Enhanced Graph Convolutional Network for Early Mild Cognitive Impairment Detection,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 228–237, 2020.
 [41] G. Ma et al., “Deep graph similarity learning for brain data analysis,” in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 2743–2751, 2019.
[42] Y. Dong, W. Sawin, and Y. Bengio, “HNHN: Hypergraph Networks with Hyperedge Neurons,” arXiv preprint arXiv:2006.12278, 2020.
[43] C. Yan and Y. Zang, “DPARSF: a MATLAB toolbox for ‘pipeline’ data analysis of resting-state fMRI,” Frontiers in Systems Neuroscience, vol. 4, p. 13, 2010.
[44] J. Wang, X. Wang, M. Xia, X. Liao, A. Evans, and Y. He, “GRETNA: a graph theoretical network analysis toolbox for imaging connectomics,” Frontiers in Human Neuroscience, vol. 9, p. 386, 2015.
[45] Z. Cui, S. Zhong, P. Xu, Y. He, and G. Gong, “PANDA: a pipeline toolbox for analyzing brain diffusion images,” Frontiers in Human Neuroscience, vol. 7, p. 42, 2013.
[46] D. Veitch et al., “Understanding disease progression and improving Alzheimer’s disease clinical trials: Recent highlights from the Alzheimer’s Disease Neuroimaging Initiative,” Alzheimer’s & Dementia, vol. 15, pp. 106–152, 2019.
[47] B. Lei et al., “Predicting Clinical Scores for Alzheimer’s Disease Based on Joint and Deep Learning,” Expert Systems with Applications, DOI: https://doi.org/10.1016/j.eswa.2021.115966, 2021.
[48] D. Salat et al., “Selective preservation and degeneration within the prefrontal cortex in aging and Alzheimer disease,” Archives of Neurology, vol. 58, pp. 1403–1408, 2001.
[49] A. Du et al., “Magnetic resonance imaging of the entorhinal cortex and hippocampus in mild cognitive impairment and Alzheimer’s disease,” Journal of Neurology, Neurosurgery, and Psychiatry, vol. 71, pp. 441–447, 2001.
[50] K. Tsuchiya and K. Kosaka, “Neuropathological study of the amygdala in presenile Alzheimer’s disease,” Journal of the Neurological Sciences, vol. 100, pp. 165–173, 1990.
[51] L. Sun et al., “Mining brain region connectivity for Alzheimer’s disease study via sparse inverse covariance estimation,” in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1335–1344, 2009.
[52] Y. Gupta et al., “Early diagnosis of Alzheimer’s disease using combined features from voxel-based morphometry and cortical, subcortical, and hippocampus regions of MRI T1 brain images,” PLoS ONE, vol. 14, 2019.
[53] L. Yang et al., “Gradual disturbances of the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF in Alzheimer spectrum,” Frontiers in Neuroscience, vol. 12, p. 975, 2018.
[54] G. Karas et al., “Precuneus atrophy in early-onset Alzheimer’s disease: a morphometric structural MRI study,” Neuroradiology, vol. 49, pp. 967–976, 2007.
[55] K. Zhou et al., “Feature selection and transfer learning for Alzheimer’s disease clinical diagnosis,” Applied Sciences, vol. 8, p. 1372, 2018.
[56] S. Pusil et al., “Aberrant MEG multi-frequency phase temporal synchronization predicts conversion from mild cognitive impairment to Alzheimer’s disease,” NeuroImage: Clinical, vol. 24, p. 101972, 2019.
[57] M. Takahashi, Y. Oda, T. Okubo, and Y. Shirayama, “Relationships between cognitive impairment on ADAS-cog and regional cerebral blood flow using SPECT in late-onset Alzheimer’s disease,” J. Neural Transm., vol. 124, no. 9, pp. 1109–1121, Sep. 2017.
[58] R. Migliaccio et al., “Mapping the Progression of Atrophy in Early and Late-Onset Alzheimer’s Disease,” J. Alzheimers Dis., vol. 46, no. 2, pp. 351–364, Mar. 2015.
[59] A. Brickman et al., “Reconsidering harbingers of dementia: progression of parietal lobe white matter hyperintensities predicts Alzheimer’s disease incidence,” Neurobiology of Aging, vol. 36, no. 1, pp. 27–32, 2015.
[60] K. Jobst et al., “Rapidly progressing atrophy of medial temporal lobe in Alzheimer’s disease,” The Lancet, vol. 343, no. 8901, pp. 829–830, 1994.