1 Introduction
Alzheimer’s disease (AD) is the most common type of dementia. Genome-wide association studies (GWAS) [2] have achieved great success in finding single nucleotide polymorphisms (SNPs) associated with AD. Large-scale collaborative networks such as the ENIGMA Consortium [8] bring together 185 research institutions from 35 countries, analyzing genomic data from over 33,000 subjects. However, processing and integrating genetic data across different institutions is challenging. The first issue is data privacy, since each participating institution wishes to collaborate with others without revealing its own data set. The second issue is how to conduct the learning process across different institutions. The Local Query Model (LQM) [3, 13] was proposed to perform distributed Lasso regression for large-scale collaborative imaging genetics studies across different institutions while preserving the data privacy of each of them. However, in some imaging genetics studies [2], we are more interested in finding important explanatory factors for predicting responses, where each explanatory factor is represented by a group of features, since many AD risk genes act jointly with related features rather than individually. In such cases, the selection of important features corresponds to the selection of groups of features. As an extension of Lasso, group Lasso [12] was proposed for feature selection at the group level, and several efficient algorithms [5, 1] have been proposed for its optimization. However, integrating group Lasso into imaging genetics studies across multiple institutions has not been well studied.
In this study, we propose a novel Distributed Feature Selection Framework (DFSF) to conduct large-scale imaging genetics analysis across multiple research institutions. Our framework has three components. In the first stage, we propose a family of distributed group Lasso screening rules (DSR and DDPP_GL) to identify inactive features and remove them from the optimization. The second stage performs the group Lasso feature selection process in a distributed manner, selecting the top relevant group features for all the institutions. Finally, each institution obtains the learnt model and performs stability selection to rank the top risk genes for AD. Experiments are conducted on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) GWAS data set, comprising 809 subjects with approximately 5.9 million loci. Empirical studies demonstrate that the proposed method achieves a 35-fold speedup over state-of-the-art distributed solvers such as ADMM. Stability selection results show that the proposed DFSF detects APOE, GRM8, GPC6 and LOC100506272 as top risk SNPs associated with AD, a superior result compared to Lasso regression methods [3]. The proposed method offers a powerful feature selection tool to study AD and its early symptoms.
2 Problem Statement
2.1 Problem Formulation
Group Lasso [12] is a highly efficient feature selection and regression technique used in model construction. Group Lasso takes the form:

$$\min_{\beta} \; \frac{1}{2}\|\mathbf{y} - \mathbf{X}\beta\|_2^2 + \lambda \sum_{g=1}^{G} w_g \|\beta_g\|_2 \qquad (1)$$

where $\mathbf{X} \in \mathbb{R}^{N \times p}$ represents the feature matrix, $\mathbf{y} \in \mathbb{R}^{N}$ denotes the $N$-dimensional response vector, and $\lambda$ is a positive regularization parameter. Different from Lasso regression [9], group Lasso partitions the original feature matrix into $G$ non-overlapping groups $\mathbf{X} = [\mathbf{X}_1, \dots, \mathbf{X}_G]$, and $w_g$ denotes the weight for the $g$-th group. After solving the group Lasso problem, we obtain the corresponding solution vector $\beta$, whose dimension is the same as the feature space of $\mathbf{X}$.

2.2 ADNI GWAS data
The ADNI GWAS data set contains genotype information for 809 ADNI participants. SNPs were called using Illumina’s CASAVA SNP Caller, and the ADNI WGS SNP data are stored in variant call format (VCF), a standard format for gene sequence variations. SNPs at approximately 5.9 million specific loci are recorded for each participant. We encode SNPs using the coding scheme in [7] and apply Minor Allele Frequency (MAF) and Genotype Quality (GQ) as two quality control criteria to filter high-quality SNP features. We follow the same SNP genotype coding and quality control scheme as [3].
We have $m$ institutions conducting the collaborative learning. The $i$-th institution maintains its own data set $(\mathbf{X}^i, \mathbf{y}^i)$, where $\mathbf{X}^i \in \mathbb{R}^{n_i \times p}$, $n_i$ is the sample number, $p$ is the feature number, $\mathbf{y}^i \in \mathbb{R}^{n_i}$ is the response, and $N = \sum_{i=1}^{m} n_i$. We assume $p$ is the same across institutions. We aim at conducting the group Lasso feature selection process on the distributed data sets $(\mathbf{X}^i, \mathbf{y}^i)$, $i = 1, \dots, m$.
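For concreteness, the group Lasso objective in this row-wise distributed setting can be evaluated as follows. This is a minimal NumPy sketch with toy data; the function name, group structure and weights $w_g = \sqrt{p_g}$ are illustrative assumptions, and only per-institution scalar loss terms would need to be aggregated, not the raw data.

```python
import numpy as np

def group_lasso_objective(X_parts, y_parts, beta, groups, lam, weights):
    """Evaluate the group Lasso objective when (X, y) is split row-wise
    across institutions; `groups` maps each group to its column indices."""
    # each institution contributes its local squared-residual term
    loss = sum(0.5 * np.sum((yi - Xi @ beta) ** 2)
               for Xi, yi in zip(X_parts, y_parts))
    # the group-wise penalty depends only on beta, known to everyone
    penalty = lam * sum(w * np.linalg.norm(beta[idx])
                        for w, idx in zip(weights, groups))
    return loss + penalty

# toy example: 2 institutions, 6 features in 3 groups of 2
rng = np.random.default_rng(0)
X_parts = [rng.standard_normal((5, 6)), rng.standard_normal((4, 6))]
y_parts = [rng.standard_normal(5), rng.standard_normal(4)]
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
weights = [np.sqrt(len(g)) for g in groups]
obj = group_lasso_objective(X_parts, y_parts, np.zeros(6), groups, 1.0, weights)
```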
3 Proposed Framework
In this section, we present the pipeline of the proposed DFSF framework. The DFSF framework is composed of three main procedures:

1. Identify the inactive features by the distributed group Lasso screening rules and remove them from the optimization.

2. Solve the group Lasso problem on the reduced feature matrix along a sequence of parameter values and select the most relevant features for each participating institution.

3. Perform stability selection to rank SNPs that may collectively affect AD.
3.1 Screening Rules for Group Lasso
The strong rule [10] is an efficient screening method for fitting lasso-like problems: it pre-identifies features that have zero coefficients in the solution and removes them from the optimization, significantly cutting down the computation required.
For the group Lasso problem [12], the $g$-th group $\mathbf{X}_g$ will be discarded by the strong rule if the following holds:

$$\|\mathbf{X}_g^T \mathbf{y}\|_2 < w_g (2\lambda - \lambda_{\max}) \qquad (2)$$

where $\lambda_{\max} = \max_g \|\mathbf{X}_g^T \mathbf{y}\|_2 / w_g$. $\mathbf{X}_g$ can be discarded from the optimization without sacrificing accuracy since all the elements of $\beta_g$ are zero in the optimal solution vector.
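A minimal NumPy sketch of this screening rule follows; the toy data, group structure and weights are hypothetical. Note that the strong rule is a heuristic, so a production solver would still verify the kept groups via KKT conditions.

```python
import numpy as np

def strong_rule_discard(X, y, groups, weights, lam):
    """Flag group g as inactive if ||X_g^T y|| < w_g * (2*lam - lam_max),
    the basic (non-sequential) strong rule for group Lasso."""
    scores = np.array([np.linalg.norm(X[:, idx].T @ y) / w
                       for idx, w in zip(groups, weights)])
    lam_max = scores.max()  # smallest lam at which all groups are zero
    return scores < 2 * lam - lam_max  # True -> group discarded

# toy data: 4 samples, 4 features in 2 groups; group 1 is weakly correlated
X = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., .1, 0.],
              [0., 0., 0., .1]])
y = np.ones(4)
groups = [np.array([0, 1]), np.array([2, 3])]
weights = [np.sqrt(2), np.sqrt(2)]
mask = strong_rule_discard(X, y, groups, weights, 1.0)
```

At `lam` equal to `lam_max`, only the most correlated group survives screening.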
Let $\mathcal{G}$ denote the index set of groups in the feature space, with $|\mathcal{G}| = G$. Suppose that $\tilde{G}$ groups remain after employing the screening rules; we use $\tilde{\mathcal{G}}$ to represent the index set of remaining groups, with $|\tilde{\mathcal{G}}| = \tilde{G}$. As a result, the group Lasso problem (1) can be reformulated as:

$$\min_{\tilde{\beta}} \; \frac{1}{2}\|\mathbf{y} - \tilde{\mathbf{X}}\tilde{\beta}\|_2^2 + \lambda \sum_{g \in \tilde{\mathcal{G}}} w_g \|\tilde{\beta}_g\|_2 \qquad (3)$$

where $\tilde{p}$ is the dimension of the reduced feature space $\tilde{\mathbf{X}} \in \mathbb{R}^{N \times \tilde{p}}$ and $\tilde{p} \le p$.
3.2 Distributed Screening Rules for Group Lasso
As the data sets are distributed among multiple research institutions, it is necessary to conduct the learning process without compromising the data privacy of each institution. LQM [3, 13] was proposed to optimize Lasso regression while preserving the data privacy of each participating institution. In this study, we aim at selecting group features to detect the top risk genetic factors from the entire GWAS data set. Since each institution maintains its own data pair $(\mathbf{X}^i, \mathbf{y}^i)$, we develop a family of distributed group Lasso screening rules to identify and discard inactive features in a distributed environment. We summarize the Distributed Strong Rules (DSR) as follows:

1. The $i$-th institution computes $[\mathbf{X}_g^i]^T \mathbf{y}^i$ locally, where $\mathbf{X}_g^i$ denotes the elements of the $g$-th group in $\mathbf{X}^i$, analogous to the definition of $\mathbf{X}_g$.

2. Update $\mathbf{X}_g^T \mathbf{y} = \sum_{i=1}^{m} [\mathbf{X}_g^i]^T \mathbf{y}^i$ by LQM, then send the result back to all the institutions.

3. Each institution calculates $\lambda_{\max} = \max_g \|\mathbf{X}_g^T \mathbf{y}\|_2 / w_g$.

4. For the $g$-th group in problem (1), discard $\mathbf{X}_g$ and remove it from the optimization when rule (2) holds: $\|\mathbf{X}_g^T \mathbf{y}\|_2 < w_g (2\lambda - \lambda_{\max})$.
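The DSR steps can be simulated in a few lines of NumPy. This is a sketch with toy shards: the LQM exchange is modeled here as a plain sum of the institutions' local products, so only $p$-dimensional aggregates cross institution boundaries, never the raw $(\mathbf{X}^i, \mathbf{y}^i)$.

```python
import numpy as np

def dsr_discard(X_parts, y_parts, groups, weights, lam):
    # step 1: each institution computes its local correlations
    local = [Xi.T @ yi for Xi, yi in zip(X_parts, y_parts)]
    # step 2: aggregate X^T y across institutions (LQM, modeled as a sum)
    xty = np.sum(local, axis=0)
    # step 3: every institution can now evaluate lam_max itself
    scores = np.array([np.linalg.norm(xty[idx]) / w
                       for idx, w in zip(groups, weights)])
    lam_max = scores.max()
    # step 4: apply the strong rule (2); True -> group discarded
    return scores < 2 * lam - lam_max

# two institutions holding row-wise shards of the same design matrix
X1 = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
X2 = np.array([[0., 0., .1, 0.], [0., 0., 0., .1]])
y1 = np.ones(2); y2 = np.ones(2)
groups = [np.array([0, 1]), np.array([2, 3])]
weights = [np.sqrt(2), np.sqrt(2)]
mask = dsr_discard([X1, X2], [y1, y2], groups, weights, 1.0)
```

The distributed result matches what a single site holding all rows would compute.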
In many real-world applications, the optimal value of the regularization parameter $\lambda$ is unknown. To tune $\lambda$, commonly used methods such as cross validation need to solve the problem along a sequence of parameter values $\lambda_1 > \lambda_2 > \dots > \lambda_{\kappa}$, which can be very time-consuming. A sequential screening rule, EDPP [11], was proposed to exploit the optimal solution at the previous parameter value, achieving about 200x speedups in real-world applications. The implementation details of EDPP are available on GitHub: http://dpcscreening.github.io/glasso.html. We omit the introduction of EDPP for brevity. We propose a distributed safe screening rule for group Lasso, the Distributed Dual Polytope Projection for Group Lasso (DDPP_GL), to quickly identify and discard inactive features along a sequence of parameter values in a distributed manner. We summarize DDPP_GL in Algorithm 1.
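To illustrate the sequential-screening pattern, here is a sketch of the sequential *strong* rule of [10], which reuses the solution at the previous parameter value. This is shown only to convey the idea; it is not DDPP_GL, which builds on the safe EDPP rule [11] and uses a different test.

```python
import numpy as np

def sequential_strong_rule(X, y, beta_prev, groups, weights, lam_prev, lam):
    """Discard group g at lam if ||X_g^T r|| < w_g * (2*lam - lam_prev),
    where r is the residual at the solution for the previous lam_prev."""
    r = y - X @ beta_prev                       # residual at previous solution
    scores = np.array([np.linalg.norm(X[:, idx].T @ r) / w
                       for idx, w in zip(groups, weights)])
    return scores < 2 * lam - lam_prev          # True -> discard group

# with beta_prev = 0 and lam_prev = lam_max this reduces to the basic rule
X = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.],
              [0., 0., .1, 0.], [0., 0., 0., .1]])
y = np.ones(4)
groups = [np.array([0, 1]), np.array([2, 3])]
weights = [np.sqrt(2), np.sqrt(2)]
mask = sequential_strong_rule(X, y, np.zeros(4), groups, weights, 1.0, 1.0)
```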
3.3 Distributed Block Coordinate Descent for Group Lasso
After we apply DDPP_GL to discard the inactive features, the feature space shrinks from $p$ to $\tilde{p}$ with $\tilde{G}$ remaining groups, and the group Lasso problem (1) reduces to (3), which we need to optimize in a distributed manner. Block coordinate descent (BCD) [5] is one of the most efficient solvers in big data optimization. BCD optimizes the problem by updating one or a few blocks of variables at a time, rather than updating all blocks together. The order of updates can be deterministic or stochastic. For the group Lasso problem, we can randomly pick a group of variables to optimize while keeping the other groups fixed. Following this idea, we propose Distributed Block Coordinate Descent (DBCD) to solve the group Lasso problem, summarized in Algorithm 2.
In Algorithm 2, we use a variable $\mathbf{u}$ to store the result of $\mathbf{X}\beta$; $\mathbf{u}$ is initialized to zero since $\beta$ is initialized to zero at the beginning. In DBCD, the update of the $g$-th group can be divided into three steps:

1. Compute the local gradients $[\mathbf{X}_g^i]^T(\mathbf{u}^i - \mathbf{y}^i)$ and obtain the global gradient $\nabla_g = \mathbf{X}_g^T(\mathbf{u} - \mathbf{y})$ by LQM.

2. Get the candidate update $\mathbf{v}_g = \beta_g - \nabla_g / L_g$ from the gradient information.

3. Update $\beta_g$.

The update of $\beta_g$ follows the group soft-thresholding equations in Algorithm 2: we update $\beta_g$ if $L_g\|\mathbf{v}_g\|_2$ is larger than $\lambda w_g$; otherwise all the elements of $\beta_g$ are set to zero. $L_g$ denotes the Lipschitz constant of the $g$-th group; for the group Lasso problem, $L_g$ is set to $\|\mathbf{X}_g\|_2^2$. DBCD updates $\mathbf{u}$ at the end of each iteration to make sure $\mathbf{u}$ stores the correct value of $\mathbf{X}\beta$ in each iteration.
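The three-step update above can be sketched as follows, assuming the standard group soft-thresholding form with step size $1/L_g$; the function name and toy data are illustrative, and the LQM aggregation is collapsed into a single matrix product for clarity.

```python
import numpy as np

def bcd_group_update(X, y, beta, u, idx, w, lam, L):
    """One BCD-style block update for group `idx`: gradient step on the
    smooth part, then group soft-thresholding. `u` caches X @ beta."""
    grad = X[:, idx].T @ (u - y)                  # group gradient of the loss
    v = beta[idx] - grad / L                      # gradient step
    norm_v = np.linalg.norm(v)
    if L * norm_v > lam * w:                      # group stays active
        new_block = (1.0 - lam * w / (L * norm_v)) * v
    else:                                         # whole group set to zero
        new_block = np.zeros_like(v)
    u = u + X[:, idx] @ (new_block - beta[idx])   # keep u == X @ beta
    beta = beta.copy()
    beta[idx] = new_block
    return beta, u

# a few sweeps over two groups on toy data
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
y = rng.standard_normal(10)
groups = [np.array([0, 1]), np.array([2, 3])]
beta, u = np.zeros(4), np.zeros(10)
for _ in range(20):
    for idx in groups:
        L = np.linalg.norm(X[:, idx], 2) ** 2     # per-group Lipschitz constant
        beta, u = bcd_group_update(X, y, beta, u, idx, np.sqrt(2), 0.1, L)
```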
3.4 Feature selection by Group Lasso
Given a sequence of parameter values $\lambda_1 > \lambda_2 > \dots > \lambda_{\kappa}$, we can obtain a sequence of learnt models $\{\beta(\lambda_1), \dots, \beta(\lambda_{\kappa})\}$ by employing DDPP_GL+DBCD. For each group in the feature space, we count the frequency of nonzero entries across the learnt models and rank the frequencies in descending order to get the top relevant features. We summarize the top feature selection process as follows:

1. For each group $g$ in the feature space, compute the frequency $c_g$: the number of learnt models $\beta(\lambda_k)$, $k = 1, \dots, \kappa$, in which $\beta_g(\lambda_k)$ is not equal to zero.

2. Rank the frequencies $c_g$ in descending order and select the top relevant features to construct the reduced feature matrix.
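The frequency-ranking step can be sketched as below; the model vectors and group layout are toy assumptions.

```python
import numpy as np

def rank_groups_by_frequency(models, groups, top_k):
    """Rank groups by how often their block is nonzero across the models
    learnt along the lambda sequence, in descending order."""
    freq = np.array([sum(bool(np.any(beta[idx] != 0)) for beta in models)
                     for idx in groups])
    order = np.argsort(-freq, kind="stable")     # descending frequency
    return order[:top_k], freq

# three models along the path; group 1 is nonzero in all, group 0 in one
models = [np.array([0., 0., 1., 1.]),
          np.array([0., 0., 2., 0.]),
          np.array([3., 0., 1., 1.])]
groups = [np.array([0, 1]), np.array([2, 3])]
top, freq = rank_groups_by_frequency(models, groups, top_k=2)
```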
4 Experimental Results
In this section, we conduct several experiments to evaluate the efficiency and effectiveness of our methods. The proposed framework is implemented across three institutions with thirty computation nodes on Apache Spark (http://spark.apache.org), a state-of-the-art distributed computing platform. We perform DDPP_GL+DBCD on a sequence of parameter values and employ stability selection with our methods to determine the top risk SNPs related to AD.
4.1 Performance Comparison
In this experiment, we choose the volume of the lateral ventricle as the response variable; 717 subjects remain after removing subjects without labels. The volumes of brain regions were extracted from each subject’s T1 MRI scan using FreeSurfer (http://freesurfer.net). The distributed platform is built across three research institutions that maintain 326, 215, and 176 subjects, respectively, and each institution has ten computation nodes. We perform DDPP_GL+DBCD along a sequence of 100 parameter values equally spaced on the linear scale of $\lambda/\lambda_{\max}$ from 1.00 to 0.1. As a comparison, we run the state-of-the-art distributed solver ADMM [1] with the same experimental setup. The group size is set to 20, and we vary the number of features by randomly selecting 0.5 million to 5.9 million features from the GWAS data set; the results are reported in Fig. 2. The proposed method achieves a 38-fold speedup compared to ADMM.
4.2 Stability selection for top risk genetic factors
We employ stability selection [3, 4] with DDPP_GL+DBCD to select the top risk SNPs from the entire GWAS data set with 5,906,152 features. We conduct two groups of trials, choosing the volume of the hippocampus and of the entorhinal cortex at baseline as the response variable for each group, respectively. In each trial, DDPP_GL+DBCD is carried out along a linear-scale sequence of 100 parameter values of $\lambda/\lambda_{\max}$ from 1 to 0.05. We then select the top 10,000 features and perform stability selection [4] to rank the top risk SNPs for AD. As a comparison, we perform D_EDPP+F_LQM [3] with the same environment setup and report the results in Table 1. In both trials, APOE is ranked 1st, while DDPP_GL+DBCD detects additional risk genes such as GRM8, GPC6, PIK3C2G and LOC100506272 that are associated with AD in GWAS [6].
Table 1. Top risk SNPs for the hippocampus and entorhinal cortex trials.

Hippocampus by D_EDPP+F_LQM  Hippocampus by DDPP_GL+DBCD
No.  Chr  RS_ID  Gene  No.  Chr  RS_ID  Gene 
1  19  rs429358  APOE  1  19  rs429358  APOE 
2  8  rs34173062  SHARPIN  2  7  rs1592376  GRM8 
3  6  rs71573413  unknown  3  5  rs6892867  LOC105377696 
4  11  rs10831576  GALNT18  4  6  rs71573413  unknown 
5  9  rs3010760  unknown  5  13  rs7317246  GPC6 
Entorhinal by D_EDPP+F_LQM  Entorhinal by DDPP_GL+DBCD  
No.  Chr  RS_ID  Gene  No.  Chr  RS_ID  Gene 
1  19  rs429358  APOE  1  19  rs429358  APOE 
2  15  rs8025377  ABHD2  2  4  rs1876071  LOC100506272 
3  Y  rs79584829  unknown  3  18  rs4486982  unknown 
4  14  rs41354245  MDGA2  4  14  rs41354245  MDGA2 
5  3  rs55904134  unknown  5  12  rs12581078  PIK3C2G 
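The stability-selection step used in these trials can be sketched generically as below. The subsampling scheme and the selector `toy_fit` are illustrative assumptions; `toy_fit` is a hypothetical stand-in for one DDPP_GL+DBCD run, not the paper's exact procedure.

```python
import numpy as np

def stability_selection(fit, X, y, n_trials=100, frac=0.5, seed=0):
    """Refit a selector on random subsamples and rank features by
    selection frequency; `fit(X, y)` returns a boolean support mask."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_trials):
        rows = rng.choice(n, size=max(1, int(frac * n)), replace=False)
        counts += fit(X[rows], y[rows]).astype(float)
    return counts / n_trials            # per-feature selection probability

# trivial selector: pick the single feature most correlated with y
def toy_fit(Xs, ys):
    mask = np.zeros(Xs.shape[1], dtype=bool)
    mask[np.argmax(np.abs(Xs.T @ ys))] = True
    return mask

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 5))
y = 3.0 * X[:, 2] + 0.01 * rng.standard_normal(40)   # feature 2 drives y
probs = stability_selection(toy_fit, X, y, n_trials=50)
```

Features that are selected consistently across subsamples receive high probabilities and rank at the top, as with APOE above.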
References
 [1] Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3(1), 1–122 (2011)
 [2] Harold, D., et al.: Genome-wide association study identifies variants at CLU and PICALM associated with Alzheimer’s disease. Nature Genetics 41(10), 1088–1093 (2009)
 [3] Li, Q., Yang, T., Zhan, L., Hibar, D.P., Jahanshad, N., Wang, Y., Ye, J., Thompson, P.M., Wang, J.: Large-scale collaborative imaging genetics studies of risk genetic factors for Alzheimer’s disease across multiple institutions. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 335–343. Springer (2016)
 [4] Meinshausen, N., Bühlmann, P.: Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72(4), 417–473 (2010)
 [5] Qin, Z., Scheinberg, K., Goldfarb, D.: Efficient block-coordinate descent algorithms for the group lasso. Mathematical Programming Computation 5(2), 143–169 (2013)
 [6] Rouillard, A.D., et al.: The harmonizome: a collection of processed datasets gathered to serve and mine knowledge about genes and proteins. Database 2016
 [7] Sasieni, P.D.: From genotypes to genes: doubling the sample size. Biometrics pp. 1253–1261 (1997)
 [8] Thompson, P.M., et al.: The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior 8(2), 153–182 (2014)
 [9] Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pp. 267–288 (1996)
 [10] Tibshirani, R., et al.: Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B 74(2), 245–266 (2012)
 [11] Wang, J., Zhou, J., Wonka, P., Ye, J.: Lasso screening rules via dual polytope projection. In: Advances in Neural Information Processing Systems (2013)

 [12] Yuan, M., Lin, Y.: Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B 68(1), 49–67 (2006)
 [13] Zhu, D., et al.: Large-scale classification of major depressive disorder via distributed lasso. In: 12th International Symposium on Medical Information Processing and Analysis. p. 10160. International Society for Optics and Photonics (2017)