Large-scale Feature Selection of Risk Genetic Factors for Alzheimer's Disease via Distributed Group Lasso Regression

04/27/2017 ∙ by Qingyang Li, et al.

Genome-wide association studies (GWAS) have achieved great success in the genetic study of Alzheimer's disease (AD). Collaborative imaging genetics studies across different research institutions have shown their effectiveness in detecting genetic risk factors. However, the high dimensionality of GWAS data poses significant challenges in detecting risk SNPs for AD, and selecting relevant features is crucial for predicting the response variable. In this study, we propose a novel Distributed Feature Selection Framework (DFSF) to conduct large-scale imaging genetics studies across multiple institutions. To speed up the learning process, we propose a family of distributed group Lasso screening rules to identify irrelevant features and remove them from the optimization. We then select the relevant group features by performing the group Lasso feature selection process along a sequence of parameter values. Finally, we employ stability selection to rank the top risk SNPs that might help detect the early stage of AD. To the best of our knowledge, this is the first distributed feature selection model that integrates group Lasso feature selection with the detection of risk genetic factors across a multi-institution system. Empirical studies are conducted on 809 subjects with 5.9 million SNPs distributed across several institutions, demonstrating the efficiency and effectiveness of the proposed method.







1 Introduction

Alzheimer’s disease (AD) is the most common type of dementia. Genome-Wide Association Studies (GWAS) [2] have achieved great success in finding single nucleotide polymorphisms (SNPs) associated with AD. Large-scale collaborative networks such as the ENIGMA Consortium [8] consist of 185 research institutions around the world, analyzing genomic data from over 33,000 subjects from 35 countries. However, processing and integrating genetic data across different institutions is challenging. The first issue is data privacy, since each participating institution wishes to collaborate with others without revealing its own data set. The second issue is how to conduct the learning process across different institutions. The Local Query Model (LQM) [3, 13] was proposed to perform distributed Lasso regression for large-scale collaborative imaging genetics studies across institutions while preserving each institution's data privacy. However, in some imaging genetics studies [2], we are more interested in finding important explanatory factors for predicting responses, where each explanatory factor is represented by a group of features, since many AD-related genetic effects arise from groups of correlated features rather than from individual features. In such cases, the selection of important features corresponds to the selection of groups of features. As an extension of Lasso, group Lasso [12] has been proposed for feature selection at the group level, and quite a few efficient algorithms [5, 1] have been proposed for its optimization. However, integrating group Lasso with imaging genetics studies across multiple institutions has not been well studied.

In this study, we propose a novel Distributed Feature Selection Framework (DFSF) to conduct large-scale imaging genetics analysis across multiple research institutions. Our framework has three components. In the first stage, we propose a family of distributed group Lasso screening rules (DSR and DDPP_GL) to identify inactive features and remove them from the optimization. The second stage performs the group Lasso feature selection process in a distributed manner, selecting the top relevant group features for all the institutions. Finally, each institution obtains the learnt model and performs stability selection to rank the top risk genes for AD. The experiment is conducted on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) GWAS data set, including 809 subjects with 5.9 million loci. Empirical studies demonstrate that the proposed method achieves a 35-fold speedup compared to state-of-the-art distributed solvers like ADMM. Stability selection results show that the proposed DFSF detects APOE, GRM8, GPC6 and LOC100506272 as top risk SNPs associated with AD, a superior result compared to Lasso regression methods [3]. The proposed method offers a powerful feature selection tool to study AD and its early symptoms.

2 Problem Statement

2.1 Problem Formulation

Group Lasso [12] is a highly efficient feature selection and regression technique used in model construction. Group Lasso takes the form:

$$\min_{\beta} \; \frac{1}{2}\|y - X\beta\|_2^2 + \lambda \sum_{g=1}^{G} w_g \|\beta_g\|_2, \qquad (1)$$

where $X \in \mathbb{R}^{n \times p}$ represents the feature matrix, $y \in \mathbb{R}^{n}$ denotes the $n$-dimensional response vector, and $\lambda > 0$ is a positive regularization parameter. Different from Lasso regression [9], group Lasso partitions the original feature matrix into $G$ non-overlapping groups, and $w_g$ denotes the weight for the $g$-th group (commonly $w_g = \sqrt{p_g}$, the square root of the group size). After solving the group Lasso problem, we obtain the corresponding solution vector $\beta$, whose dimension is the same as the feature space of $X$.
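As a minimal single-machine sketch (not the paper's distributed implementation), the squared-loss group Lasso objective and the block soft-thresholding operator that underlies its solvers can be written as follows; `groups` is a list of index arrays and `weights` the per-group weights $w_g$:

```python
import numpy as np

def group_lasso_objective(X, y, beta, groups, lam, weights):
    """0.5*||y - X beta||^2 + lam * sum_g w_g * ||beta_g||_2."""
    resid = y - X @ beta
    penalty = sum(w * np.linalg.norm(beta[g]) for g, w in zip(groups, weights))
    return 0.5 * resid @ resid + lam * penalty

def block_soft_threshold(v, t):
    """Proximal operator of t*||.||_2: shrink the whole block toward zero,
    setting it exactly to zero when its norm is below the threshold t."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v
```

The all-or-nothing behavior of `block_soft_threshold` is what makes group Lasso select or discard whole groups rather than individual features.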

2.2 ADNI GWAS data

The ADNI GWAS dataset contains genotype information for 809 ADNI participants. SNPs were called using Illumina’s CASAVA SNP Caller, and the ADNI whole-genome sequencing SNP data is stored in variant call format (VCF) for gene sequence variations. SNPs at approximately 5.9 million loci are recorded for each participant. We encode SNPs using the coding scheme in [7] and apply Minor Allele Frequency (MAF) and Genotype Quality (GQ) as two quality control criteria to filter high-quality SNP features, following the same SNP genotype coding and quality control scheme as [3].

We have $N$ institutions conducting the collaborative learning. The $i$-th institution maintains its own data set $(X_i, y_i)$, where $X_i \in \mathbb{R}^{n_i \times p}$, $n_i$ is the sample number, $p$ is the feature number, and $y_i \in \mathbb{R}^{n_i}$ is the response. We assume the feature dimension $p$ is the same across institutions. We aim at conducting the feature selection process of group lasso on the distributed datasets $(X_i, y_i)$, $i = 1, \ldots, N$.

Figure 1: Illustration of our DFSF framework. Each participating institution maintains its own dataset, a subset of the subjects in the GWAS dataset. First, we perform the distributed group Lasso screening rules to pre-identify the inactive features and remove them from the optimization. Then we conduct the learning process of group Lasso with the proposed distributed solver DBCD to select the top relevant features. Finally, each institution obtains the same selected features and performs stability selection to rank the top SNPs that may collectively affect AD.

3 Proposed Framework

In this section, we present the pipeline of the proposed DFSF framework. The DFSF framework is composed of three main procedures:

  1. Identify the inactive features by the distributed group Lasso screening rules and remove inactive features from optimization.

  2. Solve the group Lasso problem on the reduced feature matrix along a sequence of parameter values and select the most relevant features for each participating institution.

  3. Perform the stability selection to rank SNPs that may collectively affect AD.

3.1 Screening Rules for Group Lasso

The strong rule [10] is an efficient screening method for fitting lasso-like problems: it pre-identifies features that have zero coefficients in the solution and removes them from the optimization, significantly cutting down the computation required.

For the group lasso problem [12], the $g$-th group $\beta_g$ will be discarded by the strong rule if the following holds:

$$\|X_g^T y\|_2 < w_g (2\lambda - \lambda_{\max}), \qquad (2)$$

where $\lambda_{\max} = \max_g \|X_g^T y\|_2 / w_g$ is the smallest parameter value at which all coefficients are zero. $\beta_g$ can be discarded from the optimization without sacrificing accuracy, since all the elements of $\beta_g$ are zero in the optimal solution vector.
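A single-machine sketch of this group-level screening, assuming the standard strong-rule form $\|X_g^T y\|_2 < w_g(2\lambda - \lambda_{\max})$ from [10] with $\lambda_{\max} = \max_g \|X_g^T y\|_2 / w_g$:

```python
import numpy as np

def strong_rule_discard(X, y, groups, weights, lam):
    """Return indices of groups the strong rule discards at parameter lam.

    A group g is discarded when ||X_g^T y||_2 < w_g * (2*lam - lam_max),
    where lam_max is the smallest lam for which all coefficients are zero.
    """
    scores = np.array([np.linalg.norm(X[:, g].T @ y) for g in groups])
    lam_max = np.max(scores / weights)
    return [j for j, (s, w) in enumerate(zip(scores, weights))
            if s < w * (2.0 * lam - lam_max)]
```

Note that for $\lambda$ close to $\lambda_{\max}$ most groups are discarded, which is what makes screening effective at the start of a regularization path.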

Let $\mathcal{G}$ denote the index set of groups in the feature space, with $|\mathcal{G}| = G$. Suppose that $\tilde{G}$ groups remain after employing the screening rules; we use $\tilde{\mathcal{G}}$ to represent the index set of remaining groups, with $|\tilde{\mathcal{G}}| = \tilde{G}$. As a result, the optimization of the group lasso problem (1) can be reformulated as:

$$\min_{\tilde{\beta}} \; \frac{1}{2}\|y - \tilde{X}\tilde{\beta}\|_2^2 + \lambda \sum_{g \in \tilde{\mathcal{G}}} w_g \|\tilde{\beta}_g\|_2, \qquad (3)$$

where $\tilde{X} \in \mathbb{R}^{n \times \tilde{p}}$ is the reduced feature matrix, $\tilde{p}$ is the dimension of the reduced feature space, and $\tilde{p} \leq p$.

3.2 Distributed Screening Rules for Group Lasso

As the data sets are distributed among multiple research institutions, it is necessary to conduct a distributed learning process without compromising the data privacy of each institution. LQM [3, 13] was proposed to optimize Lasso regression while preserving the data privacy of each participating institution. In this study, we aim at selecting the group features to detect the top risk genetic factors for the entire GWAS data set. Since each institution maintains its own data pair $(X_i, y_i)$, we develop a family of distributed group Lasso screening rules to identify and discard the inactive features in a distributed environment. We summarize the Distributed Strong Rules (DSR) as follows:

  1. The $i$-th institution computes its local product $X_i^T y_i$.

  2. Update $X^T y = \sum_{i=1}^{N} X_i^T y_i$ by LQM, then send $X^T y$ back to all the institutions.

  3. In each institution, calculate $\lambda_{\max}$ by $\lambda_{\max} = \max_g \|[X^T y]_g\|_2 / w_g$, where $[X^T y]_g$ denotes the elements of the $g$-th group in $X^T y$, analogous to the definition of $X_g$.

  4. For the $g$-th group in problem (1), we discard $\beta_g$ and remove $X_g$ from the optimization when the following rule holds: $\|[X^T y]_g\|_2 < w_g (2\lambda - \lambda_{\max})$.
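The local-compute-then-aggregate pattern in steps 1–2 can be sketched as a plain sum; this omits the privacy-preserving communication machinery of the actual LQM protocol [3] and only mimics the arithmetic:

```python
import numpy as np

def local_correlation(X_i, y_i):
    """Step 1: each institution computes its local product X_i^T y_i."""
    return X_i.T @ y_i

def aggregate(local_results):
    """Step 2: summing the local products over row-partitioned data
    yields the global X^T y, which is then sent back to all institutions."""
    return np.sum(local_results, axis=0)
```

Because the subjects (rows) are partitioned across institutions, the sum of local products exactly equals the centralized $X^T y$, so the screening decisions are identical to the single-site computation.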

In many real-world applications, the optimal value of the regularization parameter $\lambda$ is unknown. To tune $\lambda$, commonly used methods such as cross validation need to solve the problem along a sequence of parameter values $\lambda_1 > \lambda_2 > \cdots > \lambda_K$, which can be very time-consuming. A sequential safe screening rule, EDPP, was proposed in [11]; by utilizing the optimal solution at the previous parameter value, it achieves about 200x speedups in real-world applications. The implementation details of EDPP are available on GitHub; we omit the introduction of EDPP for brevity. We propose a distributed safe screening rule for group Lasso, the Distributed Dual Polytope Projection for Group Lasso (DDPP_GL), to quickly identify and discard inactive features along a sequence of parameters in a distributed manner. We summarize DDPP_GL in Algorithm 1.

1: Input: a set of data pairs $\{(X_i, y_i)\}_{i=1}^{N}$, where the $i$-th institution holds the data pair $(X_i, y_i)$; a sequence of parameters $\lambda_{\max} = \lambda_0 > \lambda_1 > \cdots > \lambda_K$.
2: Output: the learnt models $\beta^*(\lambda_1), \ldots, \beta^*(\lambda_K)$.
3: Compute $X^T y = \sum_{i=1}^{N} X_i^T y_i$ by LQM.
4: For $g = 1, \ldots, G$, $[X^T y]_g$ represents all the elements in the $g$-th group.
5: Compute $\lambda_{\max} = \max_g \|[X^T y]_g\|_2 / w_g$ by LQM.
6: Let $\lambda_0 = \lambda_{\max}$ and $\beta^*(\lambda_0) = 0$.
7: Given the sequence of parameters, for any integer $0 \leq k < K$, we pre-screen each group of $\beta^*(\lambda_{k+1})$ once $\beta^*(\lambda_k)$ is known.
8: for $k = 0$ to $K - 1$ do
9:   Compute the dual feasible point $\theta(\lambda_k)$ and the projection quantities $v_1(\lambda_k)$, $v_2(\lambda_{k+1}, \lambda_k)$ by LQM.
10:   if the DDPP_GL screening rule (the group-level extension of EDPP [11]) holds for the $g$-th group then
11:     Identify all the elements of $\beta_g^*(\lambda_{k+1})$ to be zero.
12: end for
Algorithm 1 Distributed Dual Polytope Projection for Group Lasso

3.3 Distributed Block Coordinate Descent for Group Lasso

After we apply DDPP_GL to discard the inactive features, the feature space shrinks from $p$ to $\tilde{p}$ with $\tilde{G}$ remaining groups, and the group Lasso problem (1) reduces to (3), which we need to optimize in a distributed manner. Block coordinate descent (BCD) [5] is one of the most efficient solvers in big data optimization. BCD optimizes the problem by updating one or a few blocks of variables at a time, rather than updating all blocks together; the order of updates can be deterministic or stochastic. For the group lasso problem, we can randomly pick a group of variables to optimize while keeping the other groups fixed. Following this idea, we propose a Distributed Block Coordinate Descent (DBCD) method to solve the group Lasso problem, summarized in Algorithm 2.

1: Input: a set of data pairs $\{(X_i, y_i)\}_{i=1}^{N}$, where the $i$-th institution holds the data pair $(X_i, y_i)$, and the regularization parameter $\lambda$.
2: Output: the learnt model $\beta^*$.
3: Initialize: $\beta = 0$ and $z = X\beta = 0$.
4: while not converged do
5:  Randomly pick a group index $g$ from the index set $\tilde{\mathcal{G}}$.
6:  Compute the $g$-th group’s local gradient in each institution: $X_{i,g}^T(z_i - y_i)$.
7:  Update by LQM: $\nabla_g f = \sum_{i=1}^{N} X_{i,g}^T(z_i - y_i)$.
8:  Let $u_g = \beta_g - \nabla_g f / L_g$.
9:  Update $\beta_g$ by the group soft-thresholding operator: $\beta_g^{\mathrm{new}} = \max\!\left(0,\, 1 - \lambda w_g / (L_g \|u_g\|_2)\right) u_g$.
10:  Update $z \leftarrow z + X_g(\beta_g^{\mathrm{new}} - \beta_g)$ and set $\beta_g = \beta_g^{\mathrm{new}}$.
11: end while
Algorithm 2 Distributed Block Coordinate Descent

In Algorithm 2, we use a variable $z$ to store the result of $X\beta$; $z$ is initialized to zero since $\beta$ is initialized to zero at the beginning. In DBCD, the update of the gradient is divided into three steps:

  1. Compute the local gradients $X_{i,g}^T(z_i - y_i)$ and obtain $\nabla_g f$ by LQM.

  2. Get $u_g$ from the gradient information: $u_g = \beta_g - \nabla_g f / L_g$.

  3. Update $\beta_g$ by the group soft-thresholding operator.

The update of $\beta_g$ follows the thresholding step of Algorithm 2: we update $\beta_g$ if $\|u_g\|_2$ is larger than $\lambda w_g / L_g$; otherwise all the elements of $\beta_g$ are set to zero. $L_g$ denotes the Lipschitz constant of the $g$-th group; for the group Lasso problem, $L_g$ is set to the largest eigenvalue of $X_g^T X_g$. DBCD updates $z$ at the end of each iteration to make sure $z$ stores the correct value of $X\beta$ in each iteration.
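A minimal single-machine sketch of this randomized block coordinate descent (assumptions: squared loss, $L_g$ taken as the largest eigenvalue of $X_g^T X_g$, and the running variable `z` kept equal to $X\beta$; the distributed version replaces the gradient computation with LQM aggregation):

```python
import numpy as np

def bcd_group_lasso(X, y, groups, weights, lam, n_iter=200, seed=0):
    """Randomized block coordinate descent for squared-loss group Lasso."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    z = X @ beta                              # z stores X @ beta, starts at zero
    L = [np.linalg.eigvalsh(X[:, g].T @ X[:, g]).max() for g in groups]
    for _ in range(n_iter):
        j = rng.integers(len(groups))
        g = groups[j]
        grad = X[:, g].T @ (z - y)            # gradient of the g-th group
        u = beta[g] - grad / L[j]             # gradient step
        t = lam * weights[j] / L[j]
        norm = np.linalg.norm(u)
        new = np.zeros_like(u) if norm <= t else (1.0 - t / norm) * u
        z += X[:, g] @ (new - beta[g])        # keep z = X @ beta in sync
        beta[g] = new
    return beta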

3.4 Feature selection by Group Lasso

Given a sequence of parameter values $\lambda_1 > \cdots > \lambda_K$, we can obtain a sequence of learnt models $\beta^*(\lambda_1), \ldots, \beta^*(\lambda_K)$ by employing DDPP_GL+DBCD. For each group in the feature space, we count the frequency of nonzero entries across the learnt models and rank the groups by descending frequency to get the top relevant features. We summarize the top feature selection process as follows:

  1. For each group $g$ in the feature space and each $k = 1, \ldots, K$, increase the counter $c_g$ by one if $\beta_g^*(\lambda_k)$ is not equal to zero.

  2. Rank the counters $c_g$ in descending order and select the top relevant features from $X$ to construct the reduced feature matrix $\tilde{X}$.
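The two counting-and-ranking steps can be sketched as follows (a hypothetical helper, assuming the learnt models are given as a list of coefficient vectors):

```python
import numpy as np

def rank_groups_by_frequency(models, groups, top_k):
    """Count, for each group, in how many learnt models it is nonzero,
    then return the indices of the top_k most frequently selected groups."""
    counts = np.array([sum(np.any(beta[g] != 0) for beta in models)
                       for g in groups])
    order = np.argsort(-counts)   # descending selection frequency
    return order[:top_k], counts
```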

After obtaining the relevant features, we perform stability selection [3, 4] to rank the top genetic factors associated with AD.
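Stability selection itself can be sketched generically as repeated refitting on random half-subsamples of the subjects [4]; here `fit_support` stands in for any sparse fitter returning a boolean support mask and is an assumption of this sketch:

```python
import numpy as np

def stability_selection(X, y, fit_support, n_subsamples=50, seed=0):
    """Estimate per-feature selection probabilities by refitting a sparse
    model on random half-subsamples of the subjects."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        support = fit_support(X[idx], y[idx])  # boolean mask of selected features
        freq[np.asarray(support)] += 1
    return freq / n_subsamples
```

Features (here, SNPs) are then ranked by their estimated selection probability, which is more robust to sampling noise than a single fit at one parameter value.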

4 Experimental Results

In this section, we conduct several experiments to evaluate the efficiency and effectiveness of our methods. The proposed framework is implemented across three institutions with thirty computation nodes on Apache Spark, a state-of-the-art distributed computing platform. We perform DDPP_GL+DBCD on a sequence of parameter values and employ stability selection with our methods to determine the top risk SNPs related to AD.

Figure 2: Running time comparison of DDPP_GL+DBCD with ADMM.

4.1 Performance Comparison

In this experiment, we choose the volume of the lateral ventricle as the response variable; after removing subjects without labels, 717 subjects remain. The volumes of brain regions were extracted from each subject’s T1 MRI scan using FreeSurfer. The distributed platform is built across three research institutions that maintain 326, 215, and 176 subjects, respectively, and each institution has ten computation nodes. We perform DDPP_GL+DBCD along a sequence of 100 parameter values equally spaced on the linear scale of $\lambda/\lambda_{\max}$ from 1.00 to 0.1. As a comparison, we run the state-of-the-art distributed solver ADMM [1] with the same experimental setup. The group size is set to 20, and we vary the number of features by randomly selecting 0.5 million to 5.9 million features from the GWAS dataset; the results are reported in Fig. 2. The proposed method achieves a 38-fold speedup over ADMM.

4.2 Stability selection for top risk genetic factors

We employ stability selection [3, 4] with DDPP_GL+DBCD to select the top risk SNPs from the entire GWAS data set with 5,906,152 features. We conduct two groups of trials, choosing the volume of the hippocampus and of the entorhinal cortex at baseline as the response variable for each group, respectively. In each trial, DDPP_GL+DBCD is carried out along a linear-scale sequence of 100 parameter values from 1 to 0.05. We then select the top 10,000 features and perform stability selection [4] to rank the top risk SNPs for AD. As a comparison, we perform D_EDPP+F_LQM [3] with the same environment setup and report the results in Table 1. In both trials, APOE is ranked 1st, while DDPP_GL+DBCD detects more risk genes, such as GRM8, GPC6, PIK3C2G and LOC100506272, that are associated with AD in GWAS [6].

Hippocampus by D_EDPP+F_LQM Hippocampus by DDPP_GL+DBCD
No. Chr RS_ID Gene No. Chr RS_ID Gene
1 19 rs429358 APOE 1 19 rs429358 APOE
2 8 rs34173062 SHARPIN 2 7 rs1592376 GRM8
3 6 rs71573413 unknown 3 5 rs6892867 LOC105377696
4 11 rs10831576 GALNT18 4 6 rs71573413 unknown
5 9 rs3010760 unknown 5 13 rs7317246 GPC6
Entorhinal by D_EDPP+F_LQM Entorhinal by DDPP_GL+DBCD
No. Chr RS_ID Gene No. Chr RS_ID Gene
1 19 rs429358 APOE 1 19 rs429358 APOE
2 15 rs8025377 ABHD2 2 4 rs1876071 LOC100506272
3 Y rs79584829 unknown 3 18 rs4486982 unknown
4 14 rs41354245 MDGA2 4 14 rs41354245 MDGA2
5 3 rs55904134 unknown 5 12 rs12581078 PIK3C2G
Table 1: Top 5 selected SNPs with the volumes of the hippocampus and entorhinal cortex as response variables.


  • [1] Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3(1), 1–122 (2011)
  • [2] Harold, D., et al.: Genome-wide association study identifies variants at clu and picalm associated with alzheimer’s disease. Nature genetics 41(10), 1088–1093 (2009)
  • [3] Li, Q., Yang, T., Zhan, L., Hibar, D.P., Jahanshad, N., Wang, Y., Ye, J., Thompson, P.M., Wang, J.: Large-scale collaborative imaging genetics studies of risk genetic factors for alzheimer’s disease across multiple institutions. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 335–343. Springer (2016)
  • [4] Meinshausen, N., Buhlmann, P.: Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72(4), 417–473 (2010)
  • [5] Qin, Z., Scheinberg, K., Goldfarb, D.: Efficient block-coordinate descent algorithms for the group lasso. Mathematical Programming Computation 5(2), 143–169 (2013)
  • [6] Rouillard, A.D., et al.: The harmonizome: a collection of processed datasets gathered to serve and mine knowledge about genes and proteins. Database 2016
  • [7] Sasieni, P.D.: From genotypes to genes: doubling the sample size. Biometrics pp. 1253–1261 (1997)
  • [8] Thompson, P.M., et al.: The enigma consortium: large-scale collaborative analyses of neuroimaging and genetic data. Brain imaging and behavior 8(2), 153–182 (2014)
  • [9] Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pp. 267–288 (1996)
  • [10] Tibshirani, R., et al.: Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B 74(2), 245–266 (2012)
  • [11] Wang, J., Zhou, J., Wonka, P., Ye, J.: Lasso screening rules via dual polytope projection. In: Advances in Neural Information Processing Systems (2013)
  • [12] Yuan, M., Lin, Y.: Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B 68(1), 49–67 (2006)

  • [13] Zhu, D., et al.: Large-scale classification of major depressive disorder via distributed lasso. In: 12th International Symposium on Medical Information Processing and Analysis. p. 10160. International Society for Optics and Photonics (2017)