Identifying Genetic Risk Factors via Sparse Group Lasso with Group Graph Structure

09/12/2017 ∙ by Tao Yang, et al. ∙ University of Michigan University of Southern California University of Illinois at Urbana-Champaign Baidu, Inc. 0

Genome-wide association studies (GWA studies or GWAS) investigate the relationships between genetic variants such as single-nucleotide polymorphisms (SNPs) and individual traits. Recently, incorporating biological priors into machine learning methods for GWA studies has attracted increasing attention. However, nucleotide-level bio-priors have not been well studied to date. In contrast, studies at the gene level, for example, of protein–protein interactions and pathways, are more rigorous and better established, so it is potentially beneficial to utilize such gene-level priors in GWAS. In this paper, we propose a novel two-level structured sparse model, called Sparse Group Lasso with Group-level Graph structure (SGLGG), for GWAS. It can be considered as a sparse group Lasso combined with a group-level graph Lasso. Essentially, SGLGG penalizes nucleotide-level sparsity and takes advantage of gene-level priors (both gene groups and networks) to identify phenotype-associated risk SNPs. We employ the alternating direction method of multipliers (ADMM) algorithm to optimize the proposed model. Our experiments on the Alzheimer's Disease Neuroimaging Initiative whole genome sequence data and neuroimaging data demonstrate the effectiveness of SGLGG. As a regression model, it is competitive with state-of-the-art sparse models; as a variable selection method, SGLGG is promising for identifying Alzheimer's disease-related risk SNPs.




1 Introduction

Genetic variation is what makes us all unique. It refers to the diversity of DNA sequences in human genomes, and it may affect how an individual develops a disease or responds to drugs, vaccines, and pathogens [5, 2]. The most common type of genetic variation is the single-nucleotide polymorphism (SNP), i.e., a difference in a single nucleotide of the deoxyribonucleic acid (DNA) [13]. In the past decade, genome-wide association studies (GWA studies or GWAS), which aim to reveal the relationships between genetic variants such as SNPs and individual traits, have attracted much attention and achieved considerable success [14, 25, 28].

Traditional GWA studies are based on statistical tests. Genetic risk factors are determined by their statistical significance, where the general procedure is to perform a statistical test between each individual SNP and the phenotype under investigation [29, 8, 7]. For example, via meta-analyses, 11 new susceptibility SNPs for Alzheimer's disease (AD) have been identified [15], and 10 loci that may influence allergic sensitization have been detected [3]. However, this kind of approach has several limitations. First, it ignores the aggregate effects of multiple SNPs, for example, the epistatic interactions between loci [34, 17]. Second, independent SNP–phenotype testing disregards the SNPs' structural correlations arising from population genetics (i.e., linkage disequilibrium, LD) and from biological relations (e.g., functional relationships between genes) [20].

Later, increasing attention turned to the Lasso (least absolute shrinkage and selection operator [24]) as an alternative tool for identifying risk SNPs in GWAS [31, 26]. The Lasso is a multivariate method that models multiple SNPs simultaneously, and high-risk SNPs (those related to the phenotype under investigation) can be identified through the non-zero components of the model. For example, a previous whole genome association study [31] showed that the Lasso together with stability selection [19] is promising for detecting risk SNPs associated with Alzheimer's disease (AD). However, the Lasso has two major drawbacks: 1) it tends to arbitrarily select only one feature from a set of highly correlated features [10]; 2) it treats all features equally, without any further structural assumptions. To address these issues, structured sparse models combined with different biological priors have attracted growing interest in GWAS, as incorporating such assumptions is favorable for model construction and interpretation [32]. Examples include the group Lasso [18], the tree Lasso [26], and the absolute fused Lasso [30].

It is worth mentioning that all the aforementioned approaches are based on nucleotide-level biological assumptions (e.g., LD or the consistency of successive SNPs). However, at the nucleotide level, neither structural associations, nor functional relationships, nor interaction mechanisms have been well studied to date. On the other hand, studies of biological mechanisms at the gene level are more rigorous and better established. For example, GeneMANIA [27] is a powerful tool for revealing gene-level biological networks. It integrates a large set of functional association data, including protein and genetic interactions, pathways, co-expression, co-localization, and protein domain similarity. Consequently, it is potentially beneficial to utilize such gene-level priors in nucleotide-level GWA studies.

In this paper, we propose a novel two-level structured sparse model, called Sparse Group Lasso with Group-level Graph structure (SGLGG), which is a promising method for identifying significant SNP–phenotype associations. As its name indicates, SGLGG can be considered as a fusion of the sparse group Lasso [33, 9] and a group-level graph Lasso (a.k.a. graph-guided fused Lasso [6]). Essentially, our proposed model involves two levels of predictors: nucleotide-level predictors and gene-level predictors. Consequently, in a GWA study, SGLGG penalizes the following three aspects:

  1. the gene-level sparsity;

  2. the graph structure among gene-level predictors;

  3. the nucleotide-level sparsity.

As a result, SGLGG tends to select only a small set of causal SNPs within a gene group, and a limited number of gene groups across the entire sequence. Meanwhile, it is capable of taking advantage of biological priors (i.e., gene networks) during gene-level selection. With the graph constraint, highly relevant genes are likely to be chosen simultaneously, and thus SNPs from different genes can potentially be connected. SGLGG is hard to solve due to its complex sparsity-inducing regularizers. To this end, we first transform the edge constraints of the graph into matrix form, and then employ the ADMM (alternating direction method of multipliers [4]) algorithm for optimization. Experiments have been conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) whole genome sequence (WGS) data and neuroimaging data, for both regression and variable selection tasks. Preliminary results show that SGLGG is competitive with state-of-the-art sparse models in predicting AD-related imaging phenotypes. In addition, stability selection results demonstrate that SGLGG is promising for identifying risk SNPs associated with Alzheimer's disease.

2 Our Model: SGLGG

Essentially, we consider a linear prediction model. Given a centered data matrix $X \in \mathbb{R}^{n \times p}$ with $n$ observations and $p$ features, and a corresponding response $y \in \mathbb{R}^{n}$, suppose that the predictors can be divided into $G$ non-overlapping groups, with $p_g$ the number of low-level predictors in group $g$. Accordingly, we denote by $\beta \in \mathbb{R}^{p}$ the weights of the low-level predictors and by $\alpha \in \mathbb{R}^{G}$ the weights of the group-level predictors, respectively. Then the $i$-th effective low-level coefficient can be represented as $w_i = \beta_i (M\alpha)_i$. We further denote $w = \beta \circ (M\alpha)$, where $\circ$ is the Hadamard product operator and $M \in \{0,1\}^{p \times G}$ is a designed mapping matrix ($M$ is a binary matrix with $M_{ig} = 1$ if feature $i$ is in group $g$), so that $(M\alpha)_i = \alpha_g$ for feature $i$ in group $g$. The group-level graph information (in this study, we only consider undirected graphs among group-level features) is described by $\mathcal{G} = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges. In addition, let $r_{gh}$ denote the weight of the edge between nodes $g$ and $h$. Hence, in this paper, we consider the following optimization problem:

$$\min_{\alpha, \beta} \; L\big(y, X(\beta \circ M\alpha)\big) + \lambda_1 \|\alpha\|_1 + \lambda_2 \sum_{(g,h) \in E} \tau(r_{gh})\,\big|\alpha_g - \mathrm{sign}(r_{gh})\,\alpha_h\big| + \lambda_3 \|\beta\|_1, \tag{1}$$

where $L(\cdot)$ is a convex empirical loss function (e.g., the least squares loss), with the error calculated based on $w$, the combination of the predictors $\alpha$ and $\beta$ via $w = \beta \circ (M\alpha)$; $\lambda_1, \lambda_2, \lambda_3 \geq 0$ are regularization parameters; and $\tau(\cdot)$ represents a general monotonically increasing weight function that enforces a fusion effect between the coefficients $\alpha_g$ and $\alpha_h$.

In Eq. (1), the first regularization term can be considered as a group-level sparsity constraint, the second introduces the group-level graph structure via the fused Lasso, and the third penalizes low-level sparsity. Hereby, we call Problem (1) the Sparse Group Lasso with Group-level Graph structure (SGLGG) problem. More specifically, in a GWA study, the low-level predictors represent the nucleotide-level predictors and, accordingly, the group-level predictors can be considered as the gene-level predictors. Therefore, an ideal solution to Eq. (1) leads to the following scenarios: 1) only a limited number of gene groups are selected from the entire sequence; 2) the group selection is guided by the gene-level biological priors, i.e., relevant genes are more likely to be chosen simultaneously; and 3) only a subset of SNPs is selected within each selected gene. In other words, the gene-level and nucleotide-level constraints ensure that the most relevant gene groups, and the most relevant SNPs within each gene, are chosen by the model. Meanwhile, the group selection is affected by the gene-level priors, i.e., some inter-gene SNP–SNP connections can be revealed by the graph constraint.
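As a concrete illustration, the objective in Eq. (1) can be evaluated as follows. This is a minimal sketch assuming a least-squares loss and the edge-weight choice τ(r) = |r|; the names `X`, `y`, `M`, `alpha`, `beta`, `edges`, and `lam1`–`lam3` are our own, not fixed by the paper:

```python
import numpy as np

def sglgg_objective(X, y, alpha, beta, M, edges, lam1, lam2, lam3):
    """Evaluate an illustrative SGLGG objective.

    X: (n, p) centered data; y: (n,) response
    alpha: (G,) group-level weights; beta: (p,) nucleotide-level weights
    M: (p, G) binary mapping matrix, M[i, g] = 1 iff feature i is in group g
    edges: list of (g, h, r) tuples, r the weight of the edge between g and h
    """
    w = beta * (M @ alpha)                      # effective coefficients
    loss = 0.5 * np.sum((y - X @ w) ** 2)       # least-squares loss
    group_sparsity = lam1 * np.sum(np.abs(alpha))
    graph_fusion = lam2 * sum(abs(r) * abs(alpha[g] - np.sign(r) * alpha[h])
                              for g, h, r in edges)
    feature_sparsity = lam3 * np.sum(np.abs(beta))
    return loss + group_sparsity + graph_fusion + feature_sparsity
```

Note how zeroing a single entry of `alpha` removes an entire group's contribution to `w`, while `beta` controls individual features within the surviving groups.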

Furthermore, the graph constraint in Eq. (1) can be reformulated into matrix form. Denote by $H \in \mathbb{R}^{|E| \times G}$ the sparse matrix constructed from the edge set $E$, with one row per edge: if there is an edge between nodes $g$ and $h$, the corresponding row has $H_{e,g} = 1$ and $H_{e,h} = -\mathrm{sign}(r_{gh})$, and zeros elsewhere. Furthermore, for convenience of discussion, we ignore the edge-weight function in Eq. (1); the SGLGG problem can then be simplified to the following matrix form:

$$\min_{\alpha, \beta} \; \frac{1}{2}\big\|y - X(\beta \circ M\alpha)\big\|_2^2 + \lambda_1 \|\alpha\|_1 + \lambda_2 \|H\alpha\|_1 + \lambda_3 \|\beta\|_1. \tag{2}$$
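The edge matrix just described can be assembled from an edge list as follows (a sketch; following the simplification in the text, edge weights are reduced to their signs, and the names are our own):

```python
import numpy as np

def build_edge_matrix(edges, G):
    """Build the |E| x G matrix H from an edge list so that
    ||H @ alpha||_1 reproduces the graph-guided fusion term.

    edges: list of (g, h, r) tuples; each edge contributes one row with
    a +1 in column g and -sign(r) in column h.
    """
    H = np.zeros((len(edges), G))
    for e, (g, h, r) in enumerate(edges):
        H[e, g] = 1.0
        H[e, h] = -np.sign(r)
    return H
```

With this construction, a positively weighted edge encourages the two group coefficients to be equal, while a negatively weighted edge encourages them to have opposite signs.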


3 ADMM for Solving SGLGG

3.1 ADMM basics

Due to the complex sparsity-inducing regularizers, unconstrained optimization problems like (1) are sometimes hard to solve directly. Instead, it is possible to reformulate the original unconstrained problem as an equivalent constrained problem. In the sequel, such a problem can be addressed using constrained optimization methods such as the augmented Lagrangian method.

Hereby, we employ the alternating direction method of multipliers (ADMM) [4, 21] algorithm to solve Problem (1). ADMM is a variant of the augmented Lagrangian method. It utilizes dual decomposition and partial updates of the dual variables. Without loss of generality, we consider the following constrained optimization problem:

$$\min_{x, z} \; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c, \tag{3}$$

where $f$ and $g$ are convex, $x \in \mathbb{R}^{n}$, $z \in \mathbb{R}^{m}$, $A \in \mathbb{R}^{d \times n}$, $B \in \mathbb{R}^{d \times m}$, and $c \in \mathbb{R}^{d}$. With ADMM, we first reformulate the above problem (3) through its augmented Lagrangian:

$$L_{\rho}(x, z, u) = f(x) + g(z) + u^{\top}(Ax + Bz - c) + \frac{\rho}{2}\|Ax + Bz - c\|_2^2, \tag{4}$$

with $u$ being the augmented Lagrangian multiplier and $\rho > 0$ being the non-negative dual update step length. ADMM solves this problem by iteratively minimizing over $x$, $z$, and updating $u$, one at a time, until convergence. Consequently, the update rules of ADMM are given by

$$x^{k+1} = \arg\min_{x} L_{\rho}(x, z^{k}, u^{k}), \quad z^{k+1} = \arg\min_{z} L_{\rho}(x^{k+1}, z, u^{k}), \quad u^{k+1} = u^{k} + \rho\,(Ax^{k+1} + Bz^{k+1} - c).$$
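For concreteness, here is a minimal ADMM loop specialised to the plain lasso, min ½‖y − Xw‖² + λ‖z‖₁ s.t. w − z = 0. It is an illustrative instance of the x-, z-, and dual-update pattern above, not the SGLGG solver itself:

```python
import numpy as np

def admm_lasso(X, y, lam, rho=1.0, iters=200):
    """Solve the lasso via ADMM with the splitting w - z = 0."""
    n, p = X.shape
    w = np.zeros(p); z = np.zeros(p); u = np.zeros(p)   # u: scaled dual
    Q = np.linalg.inv(X.T @ X + rho * np.eye(p))        # cache the factor
    for _ in range(iters):
        w = Q @ (X.T @ y + rho * (z - u))               # x-update: quadratic
        v = w + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-update: soft-threshold
        u = u + w - z                                   # dual update
    return z
```

The x-update is an unconstrained quadratic, the z-update is a proximal (soft-thresholding) step, and the dual variable accumulates the constraint residual, mirroring the three generic updates above.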

3.2 ADMM for solving SGLGG problem

Suppose $L(\cdot)$ is the least squares loss; then the SGLGG problem presented in (2) can be rewritten in the following constrained form:

$$\min_{\alpha, \beta, p, q, r} \; \frac{1}{2}\big\|y - X(\beta \circ M\alpha)\big\|_2^2 + \lambda_1 \|p\|_1 + \lambda_2 \|q\|_1 + \lambda_3 \|r\|_1 \quad \text{s.t.} \quad p = \alpha, \; q = H\alpha, \; r = \beta, \tag{5}$$

where $p$, $q$, and $r$ are slack variables. We employ ADMM to solve Problem (5). The augmented Lagrangian is

$$\begin{aligned} L_{\rho} = \; & \frac{1}{2}\big\|y - X(\beta \circ M\alpha)\big\|_2^2 + \lambda_1 \|p\|_1 + \lambda_2 \|q\|_1 + \lambda_3 \|r\|_1 \\ & + u_1^{\top}(\alpha - p) + u_2^{\top}(H\alpha - q) + u_3^{\top}(\beta - r) + \frac{\rho}{2}\Big(\|\alpha - p\|_2^2 + \|H\alpha - q\|_2^2 + \|\beta - r\|_2^2\Big), \end{aligned} \tag{6}$$

where $u_1$, $u_2$, and $u_3$ are augmented Lagrangian multipliers. Accordingly, in the $k$-th iteration, the update rules are as follows:

  • Update $\alpha$: $\alpha$ can be updated by minimizing $L_{\rho}$ with $\beta$, $p$, $q$, $r$, and the multipliers fixed:

    $$\alpha^{k+1} = \arg\min_{\alpha} \; \frac{1}{2}\|y - \tilde{X}\alpha\|_2^2 + (u_1^{k})^{\top}\alpha + (u_2^{k})^{\top}H\alpha + \frac{\rho}{2}\Big(\|\alpha - p^{k}\|_2^2 + \|H\alpha - q^{k}\|_2^2\Big),$$

    where $\tilde{X} = X\,\mathrm{diag}(\beta^{k})\,M$, and $\mathrm{diag}(\cdot)$ is an operation that transforms a vector into a square diagonal matrix. The above optimization problem is quadratic, and thus the optimal solution can be obtained by solving the following linear system:

    $$\big(\tilde{X}^{\top}\tilde{X} + \rho I + \rho H^{\top}H\big)\,\alpha^{k+1} = \tilde{X}^{\top}y + \rho\,p^{k} - u_1^{k} + H^{\top}\big(\rho\,q^{k} - u_2^{k}\big). \tag{7}$$

    It is trivial to show that $\tilde{X}^{\top}\tilde{X} + \rho I + \rho H^{\top}H$ is symmetric positive definite (SPD), and thus Eq. (7) can be solved efficiently via the conjugate gradient method [11].

  • Update $\beta$: $\beta$ can be updated by minimizing $L_{\rho}$ with $\alpha$, $p$, $q$, $r$, and the multipliers fixed:

    $$\beta^{k+1} = \arg\min_{\beta} \; \frac{1}{2}\|y - \hat{X}\beta\|_2^2 + (u_3^{k})^{\top}\beta + \frac{\rho}{2}\|\beta - r^{k}\|_2^2,$$

    where $\hat{X} = X\,\mathrm{diag}(M\alpha^{k+1})$. Similar to the update rule of $\alpha$, the above optimization problem is quadratic, and thus the optimal solution can be obtained by solving the following linear system:

    $$\big(\hat{X}^{\top}\hat{X} + \rho I\big)\,\beta^{k+1} = \hat{X}^{\top}y + \rho\,r^{k} - u_3^{k}. \tag{8}$$

    Similarly, since $\hat{X}^{\top}\hat{X} + \rho I$ is SPD, Eq. (8) can be solved efficiently via the conjugate gradient method.

  • Update $p$: Similarly, $p$ can be obtained by solving the following problem:

    $$p^{k+1} = \arg\min_{p} \; \lambda_1\|p\|_1 + \frac{\rho}{2}\Big\|p - \big(\alpha^{k+1} + u_1^{k}/\rho\big)\Big\|_2^2.$$

    The above optimization problem has a closed-form solution, known as soft-thresholding:

    $$p^{k+1} = S_{\lambda_1/\rho}\big(\alpha^{k+1} + u_1^{k}/\rho\big), \tag{9}$$

    where the soft-thresholding operator is defined elementwise as $S_{\kappa}(a) = \mathrm{sign}(a)\,\max(|a| - \kappa,\, 0)$.

  • Update $q$: Similarly, $q$ can be obtained by solving the following problem:

    $$q^{k+1} = \arg\min_{q} \; \lambda_2\|q\|_1 + \frac{\rho}{2}\Big\|q - \big(H\alpha^{k+1} + u_2^{k}/\rho\big)\Big\|_2^2.$$

    The closed-form solution of the above problem is

    $$q^{k+1} = S_{\lambda_2/\rho}\big(H\alpha^{k+1} + u_2^{k}/\rho\big). \tag{10}$$

  • Update $r$: Similarly, $r$ can be obtained by solving the following problem:

    $$r^{k+1} = \arg\min_{r} \; \lambda_3\|r\|_1 + \frac{\rho}{2}\Big\|r - \big(\beta^{k+1} + u_3^{k}/\rho\big)\Big\|_2^2.$$

    The closed-form solution of the above problem is

    $$r^{k+1} = S_{\lambda_3/\rho}\big(\beta^{k+1} + u_3^{k}/\rho\big). \tag{11}$$

  • Update $u_1, u_2, u_3$: In the $k$-th iteration, the multipliers are updated by:

    $$u_1^{k+1} = u_1^{k} + \rho\big(\alpha^{k+1} - p^{k+1}\big), \quad u_2^{k+1} = u_2^{k} + \rho\big(H\alpha^{k+1} - q^{k+1}\big), \quad u_3^{k+1} = u_3^{k} + \rho\big(\beta^{k+1} - r^{k+1}\big). \tag{12-14}$$
We summarize the ADMM algorithm for solving the SGLGG Problem (2) in Algorithm 1. Generally, ADMM breaks the original complex optimization problem into a series of smaller subproblems, each of which is easier to handle. In addition, it is worth mentioning that, in practice, it is important to normalize each group's penalty according to its group size.

Algorithm 1 ADMM for the sgLasso_gGraph problem
Input: data, mapping matrix, edge matrix, regularization parameters, and step length
Output: the learned weight vectors
1: Initialization: initialize the weight vectors, the slack variables, and the multipliers.
2: while not converged do
3:   Update α according to Eq. (7).
4:   Update β according to Eq. (8).
5:   Update p according to Eq. (9).
6:   Update q according to Eq. (10).
7:   Update r according to Eq. (11).
8:   Update u1, u2, u3 according to Eqs. (12), (13) & (14), respectively.
9: end while
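A compact, purely illustrative NumPy sketch of Algorithm 1 follows, using direct solves in place of conjugate gradient and the notation assumed in this section (`p_`, `q_`, `r_` are the slack copies of α, Hα, and β). Since the Hadamard coupling makes the objective bilinear in α and β, these alternating updates are a heuristic, and the paper's exact formulas may differ:

```python
import numpy as np

def soft(a, kappa):
    """Elementwise soft-thresholding S_kappa(a)."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def sglgg_admm(X, y, M, H, lam1, lam2, lam3, rho=1.0, iters=100):
    """Illustrative ADMM loop for the SGLGG problem (2)."""
    n, d = X.shape
    G = M.shape[1]
    alpha = np.ones(G); beta = np.ones(d)
    p_ = alpha.copy(); q_ = H @ alpha; r_ = beta.copy()
    u1 = np.zeros(G); u2 = np.zeros(H.shape[0]); u3 = np.zeros(d)
    for _ in range(iters):
        # alpha-update: SPD linear system (cf. Eq. (7))
        Xt = (X * beta[None, :]) @ M          # X diag(beta) M
        A = Xt.T @ Xt + rho * (np.eye(G) + H.T @ H)
        b = Xt.T @ y + rho * p_ - u1 + H.T @ (rho * q_ - u2)
        alpha = np.linalg.solve(A, b)
        # beta-update: SPD linear system (cf. Eq. (8))
        Xh = X * (M @ alpha)[None, :]         # X diag(M alpha)
        beta = np.linalg.solve(Xh.T @ Xh + rho * np.eye(d),
                               Xh.T @ y + rho * r_ - u3)
        # slack updates: soft-thresholding (cf. Eqs. (9)-(11))
        p_ = soft(alpha + u1 / rho, lam1 / rho)
        q_ = soft(H @ alpha + u2 / rho, lam2 / rho)
        r_ = soft(beta + u3 / rho, lam3 / rho)
        # dual updates (cf. Eqs. (12)-(14))
        u1 += rho * (alpha - p_)
        u2 += rho * (H @ alpha - q_)
        u3 += rho * (beta - r_)
    return alpha, beta
```

For large SNP panels, the two dense solves would be replaced by conjugate gradient iterations, as the text suggests, since the system matrices are SPD by construction.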

4 Experiments

To evaluate the performance of the proposed SGLGG approach in GWAS, we conducted a series of experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) whole genome sequence (WGS) data and neuroimage data. Particularly, we focus on two learning tasks: 1) predicting AD-related imaging phenotypes (based on SNPs data); and 2) identifying risk SNPs w.r.t. AD imaging phenotypes.

4.1 Data processing

4.1.1 ADNI WGS data and neuroimaging data

In this study, we adopt the ADNI WGS data set and MRI data for GWAS. More specifically, the following procedures were employed for processing the SNP data. First, we used PLINK [22] together with a series of standard quality control constraints for SNP data preprocessing. In particular, a SNP is removed according to thresholds on its minor allele frequency (MAF), its missingness rate, and its deviation from Hardy–Weinberg equilibrium. Subsequently, we adopt MaCH [16] for genotype imputation. MaCH is a Markov chain-based haplotyper that is capable of resolving long haplotypes and inferring missing genotypes. Eventually, we apply several filters to the imputed data set, including thresholds on RSQ (the estimated imputation quality, specific to each SNP) and FREQ1 (the frequency of reference Allele 1). As a consequence, the entire genome data set contains 1,319 subjects with 6,566,154 SNPs, of which 155,357 SNPs are from Chromosome 19. In terms of subject composition, there are 327 healthy controls (HC), 249 AD patients, 41 participants with mild cognitive impairment (MCI), 220 early MCI (EMCI) patients, 419 late MCI (LMCI) patients, and 63 patients with significant memory concerns (SMC).

Volumes of major affected brain regions related to Alzheimer's disease, including the hippocampus (HIPP) and the entorhinal cortex (EC), were chosen as the neuroimaging phenotypes in this study. These volumes were extracted from each subject's T1 MRI data using FreeSurfer [23].

4.1.2 Candidate AD genes

Hereby, we focus on Alzheimer’s disease genetic risk factors (at both gene-level and nucleotide-level) on the 19th chromosome of the human genome. Particularly, at gene-level, ten candidate genes are pre-selected as high AD-risk according to AlzGene [1], including LDLR, GAPDHS, BCAM, PVRL2, TOMM40, APOE, APOC1, APOC4, EXOC3L2, and CD33. Positions of those pre-selected genes are shown in Figure 1.

The above ten genes are considered the genes most strongly associated with AD on Chromosome 19 (Chr.19). In AlzGene, the top associated genes are ranked based on genetic variants with the best overall HuGENet/Venice grades [12]. Specifically, for genes with identical grades, the ranking is based on their p-values; for genes with identical grades and p-values, the ranking is based on their effect sizes. Basic information on these AD-risk genes is available in Table 1 (top part).

Figure 1: AD-risk genes (marked by yellow) on Chr.19 according to AlzGene.

4.1.3 Gene networks

To retrieve gene-level biological priors, i.e., gene networks, we utilized GeneMANIA [27] in our study. Essentially, GeneMANIA is a powerful tool for extracting gene networks based on a set of input genes. The network is retrieved from a large set of functional association data, including gene co-expression & co-localization, protein–protein interactions, genetic interactions, shared protein domains, and pathways. GeneMANIA stands for the Multiple Association Network Integration Algorithm. It consists of a linear regression-based algorithm for calculating the functional association network and a label propagation algorithm for subsequently predicting gene functions. In our study, we employ the following two methods to extract gene networks.

  1. Gene network within 10 pre-selected AD-risk genes in Chr.19.
    The ten aforementioned AD-risk genes on Chromosome 19 are used as the input genes for GeneMANIA. For network exploration, we only consider connections within these ten pre-selected genes. In addition, we adopt the biological process-based method for gene ontology weighting. A visualization of this gene network is shown in Figure 2 (left).

  2. Extended gene network based on 10 selected Chr19 AD-related genes.
    Similar to 1), but ten additional genes are allowed to be introduced during network exploration, resulting in a total of 20 genes in the graph. A visualization of such a network is shown in Figure 2 (right). Note that the additional genes are selected based on their relations with the input genes, and thus are not necessarily located on Chromosome 19. Additional information on these genes is available in Table 1 (bottom part).

Symbol Assembly Chr Location # of loci (the number of available loci in our experimental dataset)

AD Candidate Genes

LDLR GRCh37.p13 19 11200037..11244506 135
GAPDHS GRCh37.p13 19 36024314..36036221 22
BCAM GRCh37.p13 19 45312316..45324678 15
PVRL2 GRCh37.p13 19 45349393..45392485 164
TOMM40 GRCh37.p13 19 45394477..45406946 38
APOE GRCh37.p13 19 45409039..45412650 5
APOC1 GRCh37.p13 19 45417577..45422606 14
APOC4 GRCh37.p13 19 45445495..45448753 7
EXOC3L2 GRCh37.p13 19 45715879..45737469 88
CD33 GRCh37.p13 19 51728335..51743274 16

Associated Genes

LDLRAP1 GRCh37.p13 1 25870071..25895377 28
PVRL3 GRCh37.p13 3 110790606..110913017 73
APOA5 GRCh37.p13 11 116660086..116663136 7
APOA1 GRCh37.p13 11 116706467..116708338 5
CRTAM GRCh37.p13 11 122709255..122743347 75
GAPDH GRCh37.p13 12 6643585..6647537 10
LIPC GRCh37.p13 15 58702953..58861073 481
CD226 GRCh37.p13 18 67530192..67624412 149
APOC2 GRCh37.p13 19 45449239..45452822 17
SOD1 GRCh37.p13 21 33031935..33041244 15
Table 1: Basic information of selected genes
Figure 2: Visualizations of two gene networks. Left: network within 10 pre-selected AD-risk genes on Chr.19; Right: extended gene network based on 10 pre-selected Chr.19 AD-risk genes.

Then, the experimental data sets were generated through the two aforementioned methods. More specifically, we first construct a smaller SNP data set that consists of SNPs from the 10 pre-selected AD-risk genes on Chromosome 19. As a result, this data set contains 1,381 subjects and 504 SNPs. Next, we generate a larger SNP data set based on the extended gene network obtained through GeneMANIA, i.e., SNPs from the 10 additional genes (as shown in Table 1) are also included, according to gene-level associations. Accordingly, the larger SNP data set contains 1,364 SNPs in total, from 20 candidate genes.

4.2 Learning task I — Predicting AD-related phenotypes

In the first series of experiments, we evaluate our proposed SGLGG model on a set of regression tasks, i.e., predicting Alzheimer's disease-related imaging phenotypes. More specifically, SGLGG is compared with a suite of well-known, commonly used (structured) sparse methods, including the Lasso, the fused Lasso (FL), and the sparse group Lasso (SGL). For SGL and SGLGG, SNPs in the same gene naturally fall into a group. In addition, we compare SGLGG with the absolute fused Lasso (AFL) [30], a recent learning model that exploits the similarity of successive SNPs. Four imaging phenotypes, namely the volumes of the left entorhinal cortex (LEH), left hippocampus (LHP), right entorhinal cortex (REH), and right hippocampus (RHP), are used as the responses in this study.

Experiments have been conducted on the two SNP data sets described in Section 4.1.3. We adopt five-fold cross-validation for each learning task and each sparse model. Predictive performance, in terms of the mean squared error (MSE) over 10 replications, is compared in Figure 3 through box plots. In Figure 3, each color represents a modeling method. Labels on the x-axis are constructed as follows: the first few letters represent the modeling method, the middle three letters indicate the learning task, and the last number (10 or 20) indicates the data set involved.
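The evaluation protocol (five-fold cross-validation scored by MSE) can be sketched as follows on synthetic SNP-like data (0/1/2 minor-allele counts); the scikit-learn Lasso stands in for the compared models, and all names and values here are illustrative rather than taken from the experiments:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

# Synthetic genotype matrix: 200 subjects x 50 SNPs, 5 causal SNPs.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50)).astype(float)
w = np.zeros(50); w[:5] = 1.0
y = X @ w + rng.normal(scale=0.5, size=200)

mses = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Lasso(alpha=0.1).fit(X[train], y[train])
    mses.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
print(f"mean CV MSE: {np.mean(mses):.3f}")
```

Each model in the comparison is scored the same way, and the per-fold MSEs over repeated runs yield the box plots in Figure 3.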

From Figure 3, we observe that our proposed SGLGG model is very competitive with the other (structured) sparse models. Despite its complex sparsity-inducing regularizers and rich bio-priors, SGLGG still provides favorable predictive performance in most cases. Meanwhile, the model has better interpretability than traditional ones, as it incorporates extensive prior knowledge during learning. Therefore, it is potentially beneficial to address real-world GWA studies with the SGLGG model.

Figure 3: Comparison of regression error, in terms of MSE, of different structured sparse models on candidate AD-risk genes on Chr.19. For the x-axis labels: the first few letters represent the modeling method, the middle three letters indicate the learning task, and the last number (10 or 20) indicates the data set involved.

4.3 Learning task II — Identifying AD-risk SNPs

One of the benefits of adopting a sparse model for GWAS is that the most relevant genetic factors can be identified through the non-zero components of the model. Hereby, in the following series of experiments, we compare the variable selection (i.e., SNP selection) results of different structured sparse methods through stability selection [19]. More specifically, experiments were conducted on the smaller SNP data set described in Section 4.1.3. We perform 100 simulations for each learning target. Within each simulation, we first randomly subsample half of the subjects and then run each modeling method 100 times with different regularization parameters (or pairs of parameters). The model selection results are visualized in Figure LABEL:fig:adni_fs_comp, and detailed SNP selection results are available in Appendix 1. In Figure LABEL:fig:adni_fs_comp, the top 50 selected SNPs are marked for each method; each color refers to a modeling method; the x-axis is a compact illustration of gene/SNP locations on Chromosome 19; and the green bars along the y-axis indicate the negative logarithm of the P-value of each SNP's association with the learning task.
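The stability selection protocol just described (random half-subsampling combined with a grid of regularization parameters) can be sketched as follows, with a scikit-learn Lasso standing in for the structured models; the function name and parameter choices are illustrative, and this is a simplified version of the Meinshausen & Bühlmann procedure:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, lambdas, n_sims=100, seed=0):
    """Per-feature selection frequency over random half-subsamples
    and a grid of regularization parameters."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sims):
        idx = rng.choice(n, size=n // 2, replace=False)   # subsample half
        for lam in lambdas:
            coef = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx]).coef_
            counts += (coef != 0)                          # tally selections
    return counts / (n_sims * len(lambdas))               # frequencies in [0, 1]
```

Features (SNPs) whose selection frequency stays high across subsamples and parameter settings are the stable ones reported in Figure LABEL:fig:adni_fs_comp.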

From Figure LABEL:fig:adni_fs_comp, we have the following observations:

  1. SNPs selected by Lasso and SGL are spread over a large region in the feature sets (i.e., across different genes). However, most SNPs selected by FL, AFL, and our proposed SGLGG model are clustered in a few small regions.

  2. The SNP groups identified by SGLGG differ from those of FL or AFL: the proposed method tends to select more SNPs within a gene but fewer genes in total.

  3. A SNP selected by SGLGG does not necessarily have high statistical significance in terms of P-value (a smaller P-value implies higher statistical significance; since we use the negative logarithm of P-values in Figure LABEL:fig:adni_fs_comp, statistically significant SNPs have taller green bars). See the bottom two sub-figures in Figure LABEL:fig:adni_fs_comp.

The above observations imply that our proposed SGLGG model performs sparse selection at both the nucleotide and gene levels. Within a gene, only the most relevant SNPs are chosen. The group selection benefits from gene-level biological prior knowledge, i.e., the gene network. Thus, potential inter-gene SNP–SNP connections can be established by SGLGG. In other words, SGLGG is a promising method with good prospects for revealing the causal SNPs associated with a phenotype under investigation.

5 Conclusion

In this paper, we proposed a novel two-level structured sparse model, SGLGG, for genome-wide association studies. Essentially, it can be considered as a sparse group Lasso combined with a group-level graph-guided fused Lasso. Specifically, SGLGG induces sparsity at both the nucleotide and gene levels. That is, only the most causal SNPs are selected within a gene group, and only a subset of relevant genes is chosen across the genome. Another benefit of SGLGG is that it takes advantage of gene-level biological priors during model construction. Consequently, gene-level bio-priors such as protein–protein interactions and pathways can be utilized to explore inter-gene SNP–SNP connections. To optimize the SGLGG model, we propose an ADMM-based algorithm. Our experiments on the Alzheimer's disease genome sequence data and neuroimaging data show that SGLGG is very competitive in predicting AD-related phenotypes, compared with other state-of-the-art sparse learning models. Furthermore, stability selection results demonstrate that SGLGG is a promising model for identifying AD-risk SNPs. With the help of gene-level biological priors, SGLGG has good prospects for revealing SNP–SNP interactions among different genes.


Acknowledgments

This work was supported in part by NIH BD2K (Big Data to Knowledge) grants to the KnowENG Center, based at UIUC, and the ENIGMA Center for Worldwide Medicine, Imaging & Genomics, based at USC.


References

  • [1] L. Bertram, M. B. McQueen, K. Mullin, D. Blacker, and R. E. Tanzi. Systematic meta-analyses of Alzheimer disease genetic association studies: the AlzGene database. Nature Genetics, 39(1):17–23, 2007.
  • [2] D. G. Blazer, L. M. Hernandez, et al. Genes, behavior, and the social environment: Moving beyond the nature/nurture debate. National Academies Press, 2006.
  • [3] K. Bønnelykke, M. C. Matheson, T. H. Pers, R. Granell, D. P. Strachan, A. C. Alves, A. Linneberg, J. A. Curtin, N. M. Warrington, M. Standl, et al. Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature genetics, 45(8):902–906, 2013.
  • [4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
  • [5] B. Carlson. Snps-a shortcut to personalized medicine. Genetic Engineering & Biotechnology News, 28(12):12–12, 2008.
  • [6] X. Chen, S. Kim, Q. Lin, J. G. Carbonell, and E. P. Xing. Graph-structured multi-task regression and an efficient optimization method for general fused lasso. arXiv preprint arXiv:1005.3579, 2010.
  • [7] G. M. Clarke, C. A. Anderson, F. H. Pettersson, L. R. Cardon, A. P. Morris, and K. T. Zondervan. Basic statistical analysis in genetic case-control studies. Nature protocols, 6(2):121–133, 2011.
  • [8] S. L. Edwards, J. Beesley, J. D. French, and A. M. Dunning. Beyond gwass: illuminating the dark road from association to function. The American Journal of Human Genetics, 93(5):779–797, 2013.
  • [9] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736, 2010.
  • [10] M. Hebiri and J. Lederer. How correlations influence lasso prediction. IEEE Transactions on Information Theory, 59(3):1846–1854, 2013.
  • [11] M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems, volume 49. NBS, 1952.
  • [12] J. P. Ioannidis, P. Boffetta, J. Little, T. R. O’Brien, A. G. Uitterlinden, P. Vineis, D. J. Balding, A. Chokkalingam, S. M. Dolan, W. D. Flanders, et al. Assessment of cumulative evidence on genetic associations: interim guidelines. International journal of epidemiology, 37(1):120–132, 2008.
  • [13] J. M. Kidd, G. M. Cooper, W. F. Donahue, H. S. Hayden, N. Sampas, T. Graves, N. Hansen, B. Teague, C. Alkan, F. Antonacci, et al. Mapping and sequencing of structural variation from eight human genomes. Nature, 453(7191):56, 2008.
  • [14] A. Korte and A. Farlow. The advantages and limitations of trait analysis with gwas: a review. Plant methods, 9(1):29, 2013.
  • [15] J.-C. Lambert, C. A. Ibrahim-Verbaas, D. Harold, A. C. Naj, R. Sims, C. Bellenguez, G. Jun, A. L. DeStefano, J. C. Bis, G. W. Beecham, et al. Meta-analysis of 74,046 individuals identifies 11 new susceptibility loci for Alzheimer’s disease. Nature genetics, 45(12):1452–1458, 2013.
  • [16] Y. Li, C. J. Willer, J. Ding, P. Scheet, and G. R. Abecasis. MaCH: using sequence and genotype data to estimate haplotypes and unobserved genotypes. Genetic Epidemiology, 34(8):816–834, 2010.
  • [17] C. Lippert, J. Listgarten, R. I. Davidson, J. Baxter, H. Poon, C. M. Kadie, and D. Heckerman. An exhaustive epistatic snp association analysis on expanded wellcome trust data. Scientific reports, 3:1099, 2013.
  • [18] J. Liu, J. Huang, S. Ma, and K. Wang. Incorporating group correlations in genome-wide association studies using smoothed group lasso. Biostatistics, 14(2):205–219, 2013.
  • [19] N. Meinshausen and P. Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010.
  • [20] B. Mieth, M. Kloft, J. A. Rodríguez, S. Sonnenburg, R. Vobruba, C. Morcillo-Suárez, X. Farré, U. M. Marigorta, E. Fehr, T. Dickhaus, et al. Combining multiple hypothesis testing with machine learning increases the statistical power of genome-wide association studies. Scientific reports, 6:36671, 2016.
  • [21] N. Parikh, S. Boyd, et al. Proximal algorithms. Foundations and Trends® in Optimization, 1(3):127–239, 2014.
  • [22] S. Purcell, B. Neale, K. Todd-Brown, L. Thomas, M. A. Ferreira, D. Bender, J. Maller, P. Sklar, P. I. De Bakker, M. J. Daly, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 81(3):559–575, 2007.
  • [23] M. Reuter, N. J. Schmansky, H. D. Rosas, and B. Fischl. Within-subject template estimation for unbiased longitudinal image analysis. NeuroImage, 61(4):1402–1418, 2012.
  • [24] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  • [25] P. M. Visscher, M. A. Brown, M. I. McCarthy, and J. Yang. Five years of gwas discovery. The American Journal of Human Genetics, 90(1):7–24, 2012.
  • [26] J. Wang, T. Yang, P. Thompson, and J. Ye. Sparse models for imaging genetics. In G. Wu, D. Shen, and M. R. Sabuncu, editors, Machine Learning and Medical Imaging, pages 129 – 151. Academic Press, 2016.
  • [27] D. Warde-Farley, S. L. Donaldson, O. Comes, K. Zuberi, R. Badrawi, P. Chao, M. Franz, C. Grouios, F. Kazi, C. T. Lopes, et al. The genemania prediction server: biological network integration for gene prioritization and predicting gene function. Nucleic acids research, 38(suppl 2):W214–W220, 2010.
  • [28] D. Welter, J. MacArthur, J. Morales, T. Burdett, P. Hall, H. Junkins, A. Klemm, P. Flicek, T. Manolio, L. Hindorff, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Research, 42(D1):D1001–D1006, 2013.
  • [29] N. R. Wray, J. Yang, B. J. Hayes, A. L. Price, M. E. Goddard, and P. M. Visscher. Pitfalls of predicting complex traits from snps. Nature reviews. Genetics, 14(7):507, 2013.
  • [30] T. Yang, J. Liu, P. Gong, R. Zhang, X. Shen, and J. Ye. Absolute fused lasso & its application to genome-wide association studies. In Proceedings of the 22th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016. accepted.
  • [31] T. Yang, J. Wang, Q. Sun, D. P. Hibar, N. Jahanshad, L. Liu, Y. Wang, L. Zhan, P. M. Thompson, and J. Ye. Detecting genetic risk factors for Alzheimer’s disease in whole genome sequence data via lasso screening. In Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, pages 985–989. IEEE, 2015.
  • [32] J. Ye and J. Liu. Sparse methods for biomedical data. ACM SIGKDD Explorations Newsletter, 14(1):4–15, 2012.
  • [33] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68:49–67, 2006.
  • [34] O. Zuk, E. Hechter, S. R. Sunyaev, and E. S. Lander. The mystery of missing heritability: Genetic interactions create phantom heritability. Proceedings of the National Academy of Sciences, 109(4):1193–1198, 2012.