1 Introduction
In this paper, we consider the problem of learning a functional mapping from a high-dimensional input space with structured sparsity to a multivariate output space where responses are coupled (therefore making the estimator doubly structured), with an application to detecting genomic loci affecting gene expression levels, a problem known as expression quantitative trait loci (eQTL) mapping. In particular, we are interested in exploiting the structural information on the input and output spaces jointly to improve the accuracy of identifying a small number of input variables relevant to the outputs among a large number of candidates. When the input or output variables are highly correlated among themselves, multiple related inputs may synergistically influence the same outputs, and multiple related outputs may be synergistically influenced by the same inputs.
The primary motivation for our work comes from the problem of genome-wide association (GWA) mapping of eQTLs in computational genomics kendziorski2006statistical , whose goal is to detect the genetic variations, often single nucleotide polymorphisms (SNPs), across the whole genome that perturb the expression levels of genes, given the data of SNP genotypes and microarray gene-expression traits of a study cohort. One of the main challenges of this problem is that the sample size is typically very small (e.g. ), whereas there are a very large number of SNPs (e.g., 500,000) and expression traits (e.g., 10,000). Furthermore, there is ample evidence that multiple genetic variations may interact with each other (a.k.a. epistasis) segre2005modular ; cho1998identification ; badano2005dissection , and that the same genetic variation(s) can influence multiple genes (a.k.a. pleiotropy) stearns2010one ; ducrest2008pleiotropy . However, prior knowledge implying such complex association structures is difficult to exploit in standard statistical analysis of GWA mapping wang2010analysing ; moore2010bioinformatics . To enhance the statistical power of eQTL mapping, it is desirable to incorporate biological knowledge of genome and transcriptome structures into the model to guide the search for true eQTLs. In this article, we focus on developing a model that can make use of structural information on both the input (SNPs) and output (gene expressions) sides. In particular, we consider biological knowledge about group associations as structural information.
If there exist group behaviors among the covariates in the high-dimensional input , for example, multiple genetically coupled SNPs (e.g., in linkage disequilibrium) that can jointly affect a single trait wang2010analysing , such group information is called an input structure; if multiple variables in the high-dimensional output are jointly under the influence of a similar set of input covariates, for example, when a single SNP can affect multiple functionally coupled traits (e.g., genes in the same pathway or operon) kimxingPLoSG09 , such group information is called an output structure. The problem of GWA mapping of eQTLs can be formulated as a model selection problem under a multitask regression model with structured sparsity, where the resultant nonzero elements in the regression coefficient matrix expose the identities of the eQTLs and their associated traits.
Variants of this problem have been widely studied in the recent high-dimensional inference and variable selection literature, and various penalty-based or Bayesian approaches for learning a shared sparsity pattern among either multiple inputs or multiple outputs in a regression model have been proposed tibshirani2005sparsity ; negahban2011simultaneous ; han2010multi ; li2010bayesian . Depending on the type of structural constraints, different penalty functions have been considered, including the mixed-norm (group-lasso) penalty for a simple grouping structure yuan2006model ; zhao2009composite , the tree-guided group-lasso penalty for a tree structure kim2009tree , and the graph-guided fused lasso for a graph structure kimxingPLoSG09 . Most previous approaches, however, considered either only the input structural constraints or only the output structural constraints, but not both. There have been a few approaches that attempted to use both types of structural information, including MGOMP lozanoblock and “group sparsity for multitask learning” tseng2009coordinate . MGOMP selects groups of regression coefficients from a predefined set of grouped variables in a greedy fashion, and tseng2009coordinate finds the groups of inputs that influence a group of outputs. However, both methods may limit the number or the shapes of sparsity patterns that can be induced in . For example, given a large number of input groups and output groups (e.g. for genome data), the scalability of MGOMP can be substantially affected since it needs to select the groups of coefficients from all possible combinations of input and output groups. For tseng2009coordinate , only disjoint block sparsity patterns are considered, hence it may not capture sparsity patterns where the grouped variables overlap.
In this paper, we address the problem of exploiting both the input and output structures in a high-dimensional linear regression setting practically encountered in eQTL mapping. Furthermore, to detect epistatic (i.e., interaction) effects between SNP pairs, we additionally expand the input space to include pairwise terms guided by biological information, which requires care to avoid an excessive input dimension that could make the problem computationally prohibitive. Our main contributions can be summarized as follows:
We propose a highly general regression model with structured input-output regularizers called “jointly structured input-output lasso” (SIOL) that discovers structured associations between SNPs and expression traits (Section 3).

We develop a simple and highly efficient optimization method called “hierarchical group thresholding” (HiGT) for solving the proposed regression problem under complex sparsity-inducing penalties in a very high-dimensional space (Section 4).

Extending SIOL, we propose “structured polynomial multitask regression” to efficiently model non-additive SNP-SNP interactions guided by genetic interaction networks (Section 5).
Specifically, given knowledge of the groupings of the inputs (i.e., SNPs) and outputs (i.e., traits) in a high-dimensional multitask regression setting, we employ a mixed norm over such structures to impose a group-level sparsity-inducing penalty simultaneously over both the columns and the rows of the regression coefficient matrix (in our setting, a row corresponds to the coefficients regressing all SNP (or SNP-pair) inputs to a particular trait output, thus reflecting possible epistatic effects; and a column corresponds to the coefficients regressing a particular SNP (or SNP-pair) input to all trait outputs in question, thus reflecting possible pleiotropic effects). Given reliable input and output structures, rich structured sparsity can increase statistical power significantly, since it makes it possible to borrow information not only within different output or input variables, but also across output and input variables.
The sparsity-inducing penalties on both the inputs and outputs in SIOL introduce a non-differentiable and non-separable objective in an extremely high-dimensional optimization space, which prevents standard optimization methods such as interior-point nesterov1987interior , coordinate-descent friedman2007pathwise , or even the recently introduced union-of-supports jacob2009group algorithms from being directly applied. We propose a simple and efficient algorithm called “hierarchical group-thresholding” to optimize our regression model with complex structured regularizers. Our method is an iterative optimization algorithm designed to handle complex structured regularizers for very large scale problems. It starts with a nonzero (e.g., initialized by ridge regression hoerl1970ridge ), and progressively discards irrelevant groups of covariates using thresholding operations. In each iteration, we also update the coefficients of the remaining covariates. To speed up our method, we employ a directed acyclic graph (DAG) in which nodes represent the zero patterns encoded by our input-output structured regularizers at different granularities, and edges indicate the inclusion relations among them. Guided by the DAG, we can efficiently discard irrelevant covariates.

As our third contribution, we consider non-additive pairwise interaction effects between the input variables, in a way that avoids a quadratic blowup of the input dimension. In eQTL mapping studies, it is not uncommon that the effect of one SNP on the expression level of a gene depends on the genotype of another SNP, a phenomenon known as epistasis. To capture pairwise epistatic effects of SNPs on the trait variation, we additionally consider non-additive interactions between the input covariates. However, in a typical eQTL mapping, as the input lies in a very high-dimensional space, it is computationally and statistically infeasible to consider all possible input pairs. For example, for inputs (e.g. for a typical genome data set), we have candidate input pairs, and learning with all of them would require a prohibitively large sample size. Many previous approaches for learning epistatic interactions relied on pruning candidate pairs based on the observed data Rat:96 or on constructing candidate pairs from individual SNPs selected based on marginal effects in a previous learning phase without modeling interactions Taskar:10 ; devlin2003analysis . A main disadvantage of the latter approach is that it misses pairwise interactions that have little or no individual effect on the outputs.
Instead of choosing candidate SNP pairs based only on marginal effects, we propose to use a genetic interaction network costanzo2010genetic constructed from large-scale biological experiments to consider biologically plausible candidate pairs.
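As a concrete illustration of this filtering step, the sketch below restricts candidate SNP pairs to those whose host genes are linked in a genetic interaction network. The SNP-to-gene map, the edge list, and the function name are hypothetical toy assumptions, not the construction used in our experiments.

```python
# Hypothetical sketch: keep only SNP-SNP interaction candidates whose host
# genes are connected in an (assumed) genetic interaction network edge list.
from itertools import combinations

def candidate_pairs(snp_to_gene, network_edges):
    """Return SNP index pairs whose host genes interact in the network."""
    edges = {frozenset(e) for e in network_edges}
    pairs = []
    for i, j in combinations(sorted(snp_to_gene), 2):
        if frozenset((snp_to_gene[i], snp_to_gene[j])) in edges:
            pairs.append((i, j))
    return pairs

# Toy example: 4 SNPs mapped to 3 genes; only genes A and B interact.
snp_to_gene = {0: "A", 1: "B", 2: "B", 3: "C"}
network = [("A", "B")]
pairs = candidate_pairs(snp_to_gene, network)   # [(0, 1), (0, 2)]
```

Only two of the six possible pairs survive, so the expanded input dimension stays close to linear in the number of network edges rather than quadratic in the number of SNPs.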
The rest of this paper is organized as follows. In Section 2, we discuss previous work on learning a sparse regression model with prior knowledge of either the output or the input structure. In Section 3, we introduce our proposed model, “jointly structured input-output lasso” (SIOL). To solve our regression problem, we present an efficient optimization method called “hierarchical group-thresholding” (HiGT) in Section 4. We further extend our model to consider pairwise interactions among input variables and propose “structured polynomial multitask regression” in Section 5. We demonstrate the accuracy of the recovered structured sparsity and the speed of our optimization method via a simulation study in Section 6, and present eQTLs having marginal and interaction effects that we identified in yeast in Section 7. A discussion follows in Section 8.
2 Background: Linear Regression with Structured Sparsity
In this section, we lay out the notation and then review existing sparse regression methods that recover a structured sparsity pattern in the estimated regression coefficients given prior knowledge on input or output structure.
2.1 Notation for matrix operations
Given a matrix , we denote the th row by , the th column by , and the element by . denotes the matrix Frobenius norm, denotes an norm (entrywise matrix norm for a matrix argument), and represents an norm. Given the set of column groups defined as a subset of the power set of ,
represents the row vector with elements
, which is a subvector of due to group . Similarly, for the set of row groups over rows of matrix , we denote by the column subvector with elements . We also define the submatrix of as a matrix with elements .

2.2 Sparse estimation of linear regression
Let X ∈ R^{N×J} be the input data for J inputs and N individuals, and Y ∈ R^{N×K} be the output data for K outputs. We model the functional mapping from the common J-dimensional input space to the K-dimensional output space using a linear model parametrized by an unknown regression coefficient matrix B ∈ R^{K×J} (rows indexing outputs, columns indexing inputs) as follows:

Y = X B^T + E,

where E is an N × K matrix of noise terms whose elements are assumed to be independently and identically distributed as Gaussian with zero mean and identity covariance matrix. Throughout the paper, we assume that the inputs and outputs are standardized such that all columns of X and Y
have zero mean and a constant variance, and consider a model without an intercept. In eQTL analysis, inputs are genotypes for
loci encoded as 0, 1, or 2 according to the number of minor alleles at a given locus, and the output data are given as expression levels of genes measured in a microarray experiment. The regression coefficients then represent the strengths of association between genetic variations and gene expression levels.

Our proposed method for estimating the coefficients is based on a group-structured multitask regression approach that extends existing regularized regression approaches, including the lasso tibshirani1996regression , the group lasso yuan2006model , and the multitask lasso obozinski2006multi , which we briefly review below in the context of our eQTL mapping problem. When the number of inputs far exceeds the sample size and only a small number of inputs are expected to influence the outputs, the lasso has been widely used and shown effective in selecting the input variables relevant to the outputs and setting the coefficients of irrelevant inputs to zero zhang2008sparsity . The lasso obtains a sparse estimate of the regression coefficients by optimizing the least-squares criterion with an ℓ1 penalty as follows:
min_B ½ ‖Y − X B^T‖_F^2 + λ ‖B‖_1        (1)

where λ is the tuning parameter that determines the amount of penalization and ‖B‖_1 denotes the entrywise ℓ1 norm of B. The optimal value of λ can be determined by cross validation or via an information-theoretic test based on BIC. As it is often believed in eQTL analysis that the expression level of each gene is affected by a relatively small number of genetic variations in the whole genome, the lasso provides an effective tool for identifying eQTLs from a large number of genetic variations. The lasso has previously been applied to eQTL analysis brown2011application and to more general genetic association mapping problems wu2009genome .
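To make the shrinkage behavior of Eq. (1) concrete, the following sketch fits a lasso by cyclic coordinate descent with soft-thresholding on a toy problem; the data, penalty level, and function names are illustrative assumptions rather than this paper's actual implementation.

```python
# Toy sketch of lasso fitting by cyclic coordinate descent with
# soft-thresholding. Data, penalty level, and names are illustrative
# assumptions, not the implementation used in this paper.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j's current contribution
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true = np.zeros(10)
true[:2] = [2.0, -1.5]                      # only two relevant inputs
y = X @ true + 0.01 * rng.standard_normal(100)
beta = lasso_cd(X, y, lam=5.0)              # irrelevant coefficients shrink toward zero
```

The estimated coefficients for the eight irrelevant inputs are driven to (near) zero, while the two relevant inputs survive with slightly shrunken magnitudes, which is exactly the variable-selection behavior exploited in eQTL mapping.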
While the lasso considers the input variables independently when selecting relevant inputs with nonzero regression coefficients, we may have prior knowledge on how related input variables are grouped together and want to perform variable selection at the group level rather than at the level of individual inputs. A grouped variable selection approach can combine the statistical strengths across multiple related input variables to achieve higher power for detecting relevant inputs in the case of a low signal-to-noise ratio. Assuming the grouping structure over inputs is available as G, a subset of the power set of {1, …, J}, the group lasso uses ℓ1/ℓ2 penalization to enforce that all of the members in each group of input variables are jointly relevant or irrelevant to each output. The group lasso obtains an estimate of B by solving the following optimization problem:

min_B ½ ‖Y − X B^T‖_F^2 + λ Σ_{k=1}^{K} Σ_{g∈G} ‖β_{k,g}‖_2        (2)

where λ is the tuning parameter and β_{k,g} denotes the subvector of the k-th row of B indexed by group g. The second term in the above equation represents an ℓ1/ℓ2 penalty over each row of B for the k-th output given G. The ℓ2 part of the penalty plays the role of enforcing a joint selection of inputs within each group, whereas the ℓ1 part of the penalty is applied across different groups to encourage group-level sparsity. The group lasso can be applied to an eQTL mapping problem given biologically meaningful groups of genetic variations that are functionally related. For example, rather than individual genetic variations acting independently to affect (or not affect) gene expression, the variations are often related through pathways that consist of multiple genes participating in a common function. Thus, genetic variations can be grouped according to the pathways that contain the genes carrying those variations. Given this grouping, the group lasso can be used to select groups of genetic variations in the same pathways as factors influencing gene expression levels silverpathway .
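The group-level selection enforced by the ℓ1/ℓ2 penalty in Eq. (2) can be illustrated through its proximal operator, a block soft-threshold that either shrinks or entirely removes each group; the groups and values below are toy assumptions.

```python
# Sketch of the group-level shrinkage behind the group lasso: a proximal
# step that zeroes a whole group of coefficients when its joint l2 norm is
# small, and shrinks it otherwise. Groups and values are toy assumptions.
import numpy as np

def group_soft_threshold(beta, groups, t):
    """Apply block soft-thresholding to each (non-overlapping) group."""
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= t else beta[g] * (1 - t / norm)
    return out

beta = np.array([3.0, 4.0, 0.1, -0.2])
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(beta, groups, t=1.0)
# group [0, 1] (norm 5) survives and shrinks; group [2, 3] (norm ~0.22) is zeroed
```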
Instead of having groups over inputs with outputs being independent as in the group lasso, the idea of using an ℓ1/ℓ2 penalty for grouped variable selection has also been applied to take advantage of the relatedness among outputs in multiple-output regression. In multitask regression for union support recovery obozinski2006multi , one assumes that all the outputs share a common support of relevant input variables and tries to recover the shared sparsity patterns across multiple outputs by solving the following optimization problem:

min_B ½ ‖Y − X B^T‖_F^2 + λ Σ_{j=1}^{J} ‖β^j‖_2        (3)

where β^j denotes the j-th column of B (the coefficients of input j across all outputs) and λ can be determined by cross-validation. In eQTL mapping, as gene expression levels are often correlated for genes that participate in a common function, it is reasonable to assume that those co-expressed genes may be influenced by common genetic variations. If gene-module information is available, one can use the above model to detect genetic variations influencing the expression of a subset of genes within each gene module. This strategy corresponds to a variation of the standard group lasso, where the group is defined over outputs rather than inputs.
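The all-or-nothing selection effect of the multitask penalty in Eq. (3) can be sketched as a block threshold applied to each input's coefficient vector across outputs; the toy matrix below (rows indexing inputs, columns indexing outputs) and the threshold are illustrative assumptions.

```python
# Illustration of the joint selection effect of the multitask (l1/l2)
# penalty: each input's coefficients across all outputs are kept or dropped
# together. Rows of the toy matrix index inputs, columns index outputs.
import numpy as np

def input_threshold(B, t):
    """Zero out entire coefficient vectors (rows here) with joint norm <= t."""
    norms = np.linalg.norm(B, axis=1)
    scale = np.where(norms > t, 1 - t / np.maximum(norms, 1e-12), 0.0)
    return B * scale[:, None]

B = np.array([[2.0, -2.0, 1.0],    # input relevant to all 3 outputs: kept
              [0.1,  0.0, 0.1]])   # weak input: dropped for every output
kept = input_threshold(B, t=0.5)
```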
Extending the ideas of the lasso and the group lasso, we may obtain group- and individual-level sparsity simultaneously using a combined ℓ1 and ℓ1/ℓ2 penalty. In the group lasso, if a group of coefficients is not jointly set to zero, all the members in the group have nonzero values. However, it is sometimes desirable to set some members of the group to zero if they are irrelevant to the outputs. The sparse group lasso friedman2010note was proposed to address the case where a group of coefficients includes both relevant and irrelevant ones. Using a convex combination of the ℓ1 and ℓ1/ℓ2 norms, it solves the following convex optimization problem:

min_B ½ ‖Y − X B^T‖_F^2 + λ ‖B‖_1 + γ Σ_{k=1}^{K} Σ_{g∈G} ‖β_{k,g}‖_2        (4)

where λ and γ determine the individual- and group-level sparsity, respectively. The ℓ1/ℓ2 penalty shrinks groups of coefficients to zero while, at the same time, the ℓ1 penalty sets irrelevant coefficients to zero individually within each group.
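A minimal sketch of the two-stage shrinkage behind the sparse group lasso in Eq. (4): an elementwise soft-threshold for individual sparsity followed by a group-level block threshold; thresholds and values are illustrative assumptions.

```python
# Sketch of the sparse-group-lasso shrinkage: elementwise soft-thresholding
# (individual sparsity) followed by block thresholding (group sparsity).
# Thresholds and values below are toy assumptions.
import numpy as np

def sgl_prox(beta, groups, t1, t2):
    # individual l1 step
    z = np.sign(beta) * np.maximum(np.abs(beta) - t1, 0.0)
    # group l2 step
    out = z.copy()
    for g in groups:
        n = np.linalg.norm(z[g])
        out[g] = 0.0 if n <= t2 else z[g] * (1 - t2 / n)
    return out

beta = np.array([3.0, 0.2, 0.3, 0.1])
res = sgl_prox(beta, groups=[[0, 1], [2, 3]], t1=0.25, t2=0.5)
# the first group survives but its weak member is zeroed individually;
# the second group is removed entirely
```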
Our proposed model is motivated by the group lasso, the multitask lasso, and the sparse group lasso, each of which can exploit a predefined grouping of input or output variables to achieve better statistical power. In the next section, we extend these models so that we can use the groups in both the input and output spaces simultaneously. Adopting the idea of the sparse group lasso, we also support variable selection at the individual level.
3 Jointly Structured InputOutput Lasso
In this section, we propose SIOL, which incorporates structural constraints on both the inputs and the outputs. The model combines mixed-norm regularizers for the groups of inputs and the groups of outputs, which leads to the following optimization problem:

min_B ½ ‖Y − X B^T‖_F^2 + λ ‖B‖_1        (5a)
  + γ1 Σ_{k=1}^{K} Σ_{g∈G} ‖β_{k,g}‖_2        (5b)
  + γ2 Σ_{j=1}^{J} Σ_{h∈H} ‖β_{h,j}‖_2        (5c)

where Eq. (5b) incorporates the set of input groups G (β_{k,g} is the subvector of the k-th row of B indexed by input group g), Eq. (5c) incorporates the set of output groups H (β_{h,j} is the subvector of the j-th column of B indexed by output group h), and the ℓ1 term in Eq. (5a) allows us to select individual coefficients. Note that the groups may overlap: input groups in G may share inputs, output groups in H may share outputs, and the coefficient blocks induced by the input and output groups intersect one another. These overlaps make it challenging to optimize Eq. (5), and this issue will be addressed by our optimization method in Section 4.
Let us characterize the structural constraints imposed by the penalties in our model. In our analysis, we investigate a block of coefficients involved in one output group and one input group , i.e., . We start with the Karush-Kuhn-Tucker (KKT) condition for Eq. (5):
(6) 
where , , and are the subgradients of Eq. (5a), Eq. (5b), and Eq. (5c) with respect to , respectively. For notational simplicity, we also define .
First, we consider the case where all coefficients in become zero simultaneously, i.e., . Using the KKT condition in Eq. (6), we can see that if and only if
(7) 
This condition follows from the Cauchy-Schwarz inequality, , and . Here, if , and are large, is likely to be zero jointly. This structural sparsity is useful for filtering out a large number of irrelevant covariates, since it considers both the group of correlated inputs and the group of correlated outputs simultaneously.
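The joint block-zero test described above can be sketched as follows: a whole input-group-by-output-group block of coefficients is discarded when the norm of its loss-gradient block at zero falls below a combined penalty level. The threshold form used here is a simplified stand-in for the exact condition in Eq. (7), and all names and data are toy assumptions.

```python
# Hypothetical illustration of the joint block-zero test. The threshold is a
# simplified stand-in for the paper's exact condition; data are toy values.
import numpy as np

def block_is_zero(X, R, g, h, threshold):
    """Can the coefficient block (inputs g, outputs h) be zeroed jointly?"""
    grad_block = X[:, g].T @ R[:, h]     # loss gradient at a zero block
    return np.linalg.norm(grad_block) <= threshold

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 6))
R = 0.01 * rng.standard_normal((50, 4))  # near-zero residual: block irrelevant
irrelevant = block_is_zero(X, R, g=[0, 1], h=[0, 1], threshold=5.0)
```

Because the test looks at the whole block at once, a group of inputs and a group of outputs can be eliminated in a single check rather than coefficient by coefficient.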
Our model also inherits grouping effects for input-only (or output-only) groups. For the analysis of such grouping effects, we fix the groups of zero coefficients that overlap with, say, an input group . Formally speaking, let us define , and fix the s for all . Using the KKT condition in Eq. (6), if
(8) 
Here, we know that for ( and ) and , and hence . This technique was previously introduced in yuanefficient to handle the overlapping group lasso. One can see that if the size of is large, tends to be zero jointly, since this reduces the left-hand side of Eq. (8). This behavior explains the correlation effects between the input and output group structures. When a group of coefficients (, ) corresponding to an input group or an output group becomes zero, it affects the other groups of coefficients that overlap with it, and the overlapped coefficients are more likely to be zero. These correlation effects between overlapping groups are desirable for inducing appropriate structured sparsity, as they allow us to share information across different inputs and different outputs simultaneously. We omit the analysis of the grouping effects for output groups, as the argument is the same with the roles of the input and output groups reversed.
Finally, we also have individual sparsity due to the ℓ1 penalty in Eq. (5a). In this case, let us assume that and , since if the group of coefficients is zero, we automatically have . Using the KKT condition, if and only if
(9) 
This is equivalent to the condition under which the lasso sets a regression coefficient to zero. Note that if , we have sparsity only at the individual level, and our model reduces to the lasso. When a group of coefficients contains both relevant and irrelevant ones, we can set the irrelevant coefficients to zero using Eq. (9).
We briefly discuss the three tuning parameters (λ, γ1, γ2), which can be determined by cross validation. It is often computationally expensive to search for the optimal parameters on a 3-dimensional grid. In practice, we instead use the following tuning parameters: and . Here, configures the mixing proportion of the input and output group structures, and is the scaling factor that determines the degree of penalization for the input and output groups. In this setting we still have three regularization parameters; however, the reparameterization helps us reduce the search space of the tuning parameters, as we know the range of ().
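Under the reparameterization just described, the tuning-parameter search reduces to a sweep over a mixing proportion and a common scale; a minimal sketch follows, where the grid values are illustrative assumptions.

```python
# (lambda, gamma_input, gamma_output) triples built from a mixing proportion
# alpha in [0, 1] and a shared scale gamma; grid values are illustrative only.
grid = [(lam, alpha * gamma, (1 - alpha) * gamma)
        for lam in (0.1, 1.0)
        for gamma in (0.5, 2.0)
        for alpha in (0.25, 0.5, 0.75)]
# score each triple by cross-validated prediction error and keep the best
```

Compared with an unconstrained 3-D grid over the same resolutions, constraining the two group penalties to share a scale keeps the search linear in the number of alpha values.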
Let us discuss the statistical and biological benefits of our model. First, our model can capture rich structured sparsity in . The structured sparsity patterns include zero (sub)rows, zero (sub)columns, and zero blocks of . Such rich sparsity patterns are impossible to obtain if we use structural information on only the input side or only the output side. For example, the group lasso yuan2006model and the multitask lasso obozinski2006multi consider structured sparsity patterns in either the rows or the columns of . Second, our model is robust to groups that contain both relevant and irrelevant coefficients. If the predefined groups of inputs and outputs are unreliable, our model may still work, since irrelevant coefficients can be set to zero individually via the ℓ1 penalty even when their groups are not jointly set to zero. Third, the grouping effects induced by our model in Eqs. (7) and (8) show that we can use the correlation effects between input and output groups. When we have reliable input and output groups, the advantage from the structural information is further enhanced by the correlation effects, in addition to the sum of the benefits of the input and output groups individually.
When applied to GWA mapping of eQTLs, our model offers a number of desirable properties. It is likely that our model can detect associated SNPs with a low signal-to-noise ratio by taking advantage of rich structural information. In GWA studies, one of the main challenges is to detect SNPs having weak signals with a limited sample size. In complex diseases such as cancer and diabetes, biologists believe that multiple SNPs are jointly responsible for the disease but not necessarily with strong marginal effects mccarthy2008genome . However, such causal SNPs are hard to detect, mainly due to an insufficient number of samples. Our model can address this challenge by taking advantage of both input and output group structures. First, by grouping inputs (or SNPs), we can increase the signal-to-noise ratio. Suppose each SNP has a small marginal signal: if a group of coefficients is relevant, their joint strength will be increased, and it is unlikely that they are jointly set to zero; on the other hand, if a group of coefficients is irrelevant, their joint strength will still be small, and it is likely that they are set to zero. Second, by taking advantage of the output groups, we can share information across the correlated outputs, which decreases the sample size required for successful support recovery negahban2011simultaneous . Overall, to detect causal SNPs having small effects, our model increases the signal-to-noise ratio by grouping the SNPs and simultaneously decreases the required number of samples by grouping the phenotypic traits.
Unfortunately, the optimization problem resulting from Eq. (5) is nontrivial. Note that each appears in all three penalties of Eqs. (5a – 5c). Thus, our structured regularizer is non-separable, which makes simple coordinate-descent optimization inapplicable. The overlaps between and within the input and output groups add another difficulty. Furthermore, we must induce appropriate sparsity patterns (i.e., exact zeros) in addition to minimizing Eq. (5); therefore, approximate methods based on merely relaxing the shrinkage functions are not appropriate. In the following section, we propose the “hierarchical group thresholding” method (HiGT), which efficiently solves our optimization problem with hierarchically organized thresholding operations.
4 Optimization method
In this section, we propose our method to optimize Eq. (5). We start with a nonzero initialized by another method (e.g., ridge regression), and monotonically shrink the set of nonzero s using thresholding operations as the procedure proceeds. Our framework is an iterative procedure consisting of two steps. First, we set groups (or individual entries) of regression coefficients to zero by checking optimality conditions (called thresholding) as we walk through a predefined directed acyclic graph (DAG). When we walk through the nodes in the DAG, some s might not achieve zero. Second, we update only these nonzero s using any available optimization technique.
Let us first characterize the zero patterns induced by Eqs. (5a – 5c). We separately consider a block of which consists of one input group and one output group . Our analysis shows that there are grouping effects (being zero simultaneously) for each and : and . We also have when or . Each covariate can also be zero individually, i.e., , due to the ℓ1 penalty in Eq. (5a). Figure 1(a) shows all the possible zero patterns of induced by Eqs. (5a – 5c). Given these sparsity patterns, to induce structured sparsity, one might check whether or not each zero pattern satisfies the optimality conditions and discard irrelevant covariates accordingly. However, this approach may be inefficient, as it needs to examine a large number of zero patterns. Instead, to check the zero patterns efficiently, we construct a DAG and exploit the inclusion relationships between the zero patterns. The main idea is that we want to be able to check all zero patterns by traversing the DAG while avoiding unnecessary optimality checks.
In Figure 1(b), we show an example of the DAG for when and . We denote the set of all possible zero patterns of by . For example, can be a zero pattern for (the root node in Figure 1(b)). Let us denote by the coefficients of corresponding to ’s zero pattern. We then define the DAG as follows: a node is represented by , and there exists a directed edge from to if and only if and . For example, in Figure 1(b), the zero patterns of the nodes at the second level include the zero patterns of their children. In general, when we have multiple input and output groups, we can generate a DAG for each separately and then connect all the DAGs to the root node for . This graph originates from the Hasse diagram cameron1994combinatorics , and it was previously utilized for finding a minimal set of groups for inducing structured sparsity jenatton2009structured .
We can readily observe that our procedure has the following properties:

Walking through the DAG, we can check all possible zero patterns explicitly without resorting to heuristics or approximations.

If , we know that all the descendants of are also zero due to the inclusion relations of the DAG. Hence, we can skip checking the optimality conditions under which the descendants of are zero.
Considering these properties, our optimization framework is attractive for the following reasons. First, we can achieve accurate zero patterns in , since we check all possible sparsity patterns when walking through the DAG. Second, if is sparse, our framework is very efficient, since we can skip the optimality checks for many zero patterns in ; mostly, we will check only nodes located at the high levels of the DAG. Third, our framework is simple to implement: all we need is to check whether each node in the DAG attains zero and to update the nonzero s only when necessary.
Specifically, our hierarchical groupthresholding employs the following procedure:

Initialize a nonzero using any available methods (e.g. ridge regression).

Use depth-first search (DFS) to traverse the DAG, and check the optimality conditions to see if the zero pattern at each node achieves zero. If or satisfies the optimality condition to be zero, set , skip the descendants of , and visit the next node in DFS order.

For those s that did not achieve zero in the previous step, update the coefficients of the nonzero s using any available optimization algorithm.
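The three steps above can be sketched as a depth-first traversal with skipping; the DAG encoding, the zero test, and the node names below are toy stand-ins for the actual optimality checks.

```python
# Minimal sketch of the hierarchical group-thresholding traversal: walk the
# DAG of zero patterns depth-first; when a node's pattern passes its zero
# test, zero those coefficients and skip all of its descendants. The zero
# test here is a toy stand-in for the paper's optimality conditions.
def higt_pass(dag, root, is_zero, set_zero):
    """dag: node -> children; is_zero/set_zero: callbacks per zero pattern."""
    visited = set()
    def dfs(node):
        if node in visited:
            return
        visited.add(node)
        if is_zero(node):
            set_zero(node)          # descendants are implied zero: skip them
            return
        for child in dag.get(node, []):
            dfs(child)
    dfs(root)
    return visited

# toy DAG: one block with a row pattern, a column pattern, and a shared leaf
dag = {"block": ["row0", "col0"], "row0": ["b00"], "col0": ["b00"]}
zeroed = []
visited = higt_pass(dag, "block",
                    is_zero=lambda n: n == "row0",
                    set_zero=zeroed.append)
# row0 is zeroed and skipped past; its descendant b00 is reached only via col0
```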
Below, we briefly present the derivations of the three ingredients of our optimization framework: 1) the construction of the DAG, 2) the optimality condition for each node in the DAG, and 3) the rule for updating the nonzero regression coefficients. Our optimization method is summarized in Algorithm 1.
Construction of the DAG
To generate the DAG, first we define the set of nodes . For each block of , we are interested in the four types of zero patterns as follows:

is zero: .

One row in is zero: , .

One column in is zero: , .

One regression coefficient in is zero: , and .
These zero patterns of are shown in Figure 1(b) when . For example, Cases 2 and 3 correspond to the nodes at the second level of the DAG. For all and , we can define nodes using the above zero patterns. We then determine the edges of the DAG by investigating the relations among the nodes. One can easily see that there exists an inclusion relationship among the zero patterns: . Given the zero patterns and their relations, we create a directed edge if and only if and . In Figure 1(b) we show an example of the DAG. Finally, we make a dummy root node and generate an edge from the dummy node to the root of each DAG for .
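The construction just described can be sketched for a single coefficient block as follows, encoding each zero pattern as a set of (row, column) indices and adding a Hasse-style edge from a pattern to each maximal pattern strictly contained in it; the set-based encoding is an illustrative assumption, not the paper's data structure.

```python
# Illustrative construction of the zero-pattern DAG for one 2x2 coefficient
# block: the full-block pattern, row patterns, column patterns, and single
# entries, with edges given by maximal strict inclusion (a Hasse diagram).
from itertools import product

def block_patterns(rows, cols):
    full = frozenset(product(rows, cols))
    rows_p = [frozenset((r, c) for c in cols) for r in rows]
    cols_p = [frozenset((r, c) for r in rows) for c in cols]
    singles = [frozenset([rc]) for rc in full]
    return [full] + rows_p + cols_p + singles

def hasse_edges(patterns):
    edges = []
    for p, q in product(patterns, repeat=2):
        # edge p -> q iff q is strictly inside p with no pattern in between
        if q < p and not any(q < m < p for m in patterns):
            edges.append((p, q))
    return edges

pats = block_patterns(rows=[0, 1], cols=[0, 1])   # 1 block + 2 rows + 2 cols + 4 entries
edges = hasse_edges(pats)
```

For a 2x2 block this yields 9 patterns and 12 edges: the full block points to each row and column pattern, and each of those points to its two single entries, matching the inclusion structure traversed by HiGT.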
Optimality conditions for structured sparsity patterns
Given a block of , we now show the optimality conditions for the four sparsity patterns: (1) , (2) , (3) , and (4) , (). In Figure 1(b), the root node corresponds to the first case, the nodes at the second level correspond to the second and third cases, and the leaf nodes correspond to the fourth case. Our derivation of the following optimality conditions uses the fact that all zero coefficients are fixed, as this makes it simple to deal with overlapping groups. We denote the column and row indices of zero entries by and .
First, the optimality condition for the first case is as follows: if
(10) 
where
It is derived using the KKT condition in Eq. (6) and the Cauchy-Schwarz inequality.
The second case of structured sparsity, i.e., , is achieved if
(11) 
and the optimality condition for the third case, i.e., , is
(12) 
These conditions can be established using the KKT condition in Eq. (6) while fixing all the zero coefficients. Finally, assuming that and , the fourth case has the optimality condition
(13) 
Update rule for nonzero coefficients
If none of the above optimality conditions holds, we know that . In this case, the gradient of Eq. (5) with respect to exists, and we can update using any coordinate descent procedure. With a little algebra, we derive the following update rule: where
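The residual-based bookkeeping that makes such updates cheap can be illustrated with plain lasso coordinate descent. This is a simplified stand-in for the paper's actual update in Eq. (14), which additionally accounts for the group penalty terms:

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * max(abs(z) - t, 0.0)

def cd_lasso(X, y, lam, n_iter=200):
    """Plain lasso coordinate descent with incremental residual updates."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                      # residual, kept up to date
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]        # remove coordinate j's contribution
            z = X[:, j] @ r               # correlation with partial residual
            beta[j] = soft_threshold(z, lam) / col_sq[j]
            r -= X[:, j] * beta[j]        # restore residual with new beta_j
    return beta
```

Maintaining the residual incrementally, rather than recomputing it per coordinate, is exactly what makes the optimality checks above cheap to evaluate.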
(14)  
We close this section by summarizing the desirable properties of our optimization method. First, when is sparse, our optimization procedure is very fast. We take advantage of not only the hierarchical structure of the DAG but also the simple forms of the optimality conditions in terms of residuals. If we keep track of the residuals, we can efficiently check the optimality condition for each sparsity pattern. Second, our thresholding operations check all possible sparsity patterns, resulting in appropriate structured sparsity in . This is important for eQTL mapping since the coefficients of irrelevant SNPs can be set to exactly zero. Third, our optimization method can handle overlaps between and within the coefficients for input groups (’s) and output groups (’s). Since input or output groups may overlap and must be considered simultaneously, this property is essential. Finally, unlike some previous methods yuan2006model ; tibshirani1996regression , we do not assume that the design matrix is orthonormal (). Dropping this assumption is particularly desirable for eQTL mapping, where covariates (SNPs) are highly correlated due to linkage disequilibrium. If one orthonormalizes as a preprocessing step, there is no guarantee that the solution to the original problem is attained friedman2010note .
5 Dealing with structures inducing higherorder effects
So far, we have dealt with input and output structures in the context of multivariate, multi-task linear regression, where the influences of the covariates on the responses are additive. When higher-order interactions take place among covariates, which is known as epistasis and is prevalent in genetic associations carlson2004mapping , a common approach to model such effects is polynomial regression montgomery2001introduction , where higher-order terms of the covariates are included as additional regressors. However, in high-dimensional problems such as the one studied in this paper, this strategy is infeasible even for 2nd-order polynomial regression: given even a standard genome dataset with SNPs, one is left with regressors, which is both computationally and statistically unmanageable. In this section, we briefly show how to circumvent this difficulty using structured regularization based on prior information about covariate interactions. This strategy is essentially a straightforward generalization of the ideas in Section 3 to a polynomial regression setting, using a special type of structure encoded by a graph. Therefore all the algorithmic solutions developed in Section 4 for the general optimization problem in Section 3 still apply here.
Following common practice in the GWA literature, here we consider only 2nd-order interactions between SNP pairs. Instead of including all SNP pairs as regressors, we employ a synthetic genetic interaction network costanzo2010genetic to define a relatively small candidate set of interacting SNP pairs. A synthetic genetic interaction network is derived from biological evidence of pairwise functional interactions between genes, such as double-knockout experiments tong2004global ; koh2009drygin ; costanzo2010genetic ; boone2007exploring . It contains information about pairs of genes whose mutations affect the phenotype only when the mutations are present in both genes, and this represents a set of ground-truth interaction effects. Given such a network, we consider only those pairs of SNPs that are physically located in the genome within a certain distance of genes that interact in the network. A 2nd-order regressor set generated by this scheme is not only much smaller than an exhaustive pair set, but also biologically more plausible. Note that it is possible to include SNP pairs from other resources in our candidate set. For example, in our experiments, we also added SNP pairs that passed a two-locus epistasis test with p-value into the set .
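A sketch of how such a candidate set might be assembled. Positions are single-chromosome for brevity (real data would also track chromosomes), and the function and argument names are illustrative rather than from the paper's pipeline:

```python
def candidate_snp_pairs(snp_pos, gene_pos, interactions, max_dist=500):
    """For each interacting gene pair (g1, g2) in the network, pair every
    SNP within max_dist bp of g1 with every SNP within max_dist bp of g2.
    snp_pos: {snp: position}; gene_pos: {gene: (start, end)};
    interactions: list of (gene, gene) network edges."""
    def near(snp, gene):
        start, end = gene_pos[gene]
        return start - max_dist <= snp_pos[snp] <= end + max_dist
    pairs = set()
    for g1, g2 in interactions:
        for s1 in snp_pos:
            if near(s1, g1):
                for s2 in snp_pos:
                    if s2 != s1 and near(s2, g2):
                        pairs.add(tuple(sorted((s1, s2))))
    return pairs
```

Because only SNPs near network-interacting genes are paired, the resulting candidate set stays far smaller than the quadratic exhaustive set.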
After finding the candidate SNP pairs, we generate the groups of SNPs and interacting SNP pairs in two steps. In the first step, we find highly interconnected subgraphs (or clusters) in the genetic interaction network using any graph clustering algorithm; in our experiments, we used the MCODE algorithm bader2003automated . In the second step, we group all the SNPs or SNP pairs that are linked to the genes in a cluster, linking genes and SNPs based on their physical locations in the genome: if a SNP is located within a certain distance of a gene (e.g., 500 bp), they are linked together. Finally, we define individual SNPs in the th group as and SNP pairs in the th group as . We then look for associations between inputs/input pairs and outputs via Eq. (15):
(15a)  
(15b)  
(15c)  
(15d)  
(15e) 
where is the set of input groups for marginal terms and is the set of input groups for pairwise interaction terms. Here, we use two tuning parameters for the penalty, depending on whether a covariate models an individual effect () or an interaction effect (), because the two may require different levels of sparsity. Note that this problem is identical to Eq. (5) if we treat the interaction terms as additional covariates, and hence our optimization method from Section 4 is applicable to Eq. (15). However, Eq. (15) is computationally more expensive than Eq. (5), since it has a larger number of covariates in , including both marginal and interaction terms, as well as the additional tuning parameter .
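Because Eq. (15) reduces to Eq. (5) once interaction terms are appended as extra covariates, the reduction amounts to augmenting the design matrix. A minimal sketch, with candidate pairs given as column-index tuples (names ours):

```python
import numpy as np

def augment_with_pairs(X, candidate_pairs):
    """Append the elementwise product of each candidate SNP pair as an
    extra column, turning Eq. (15) into an instance of Eq. (5)."""
    inter = np.column_stack([X[:, i] * X[:, j] for i, j in candidate_pairs])
    return np.hstack([X, inter])
```

The group penalties then simply treat the appended columns as additional input variables belonging to the interaction groups.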
6 Simulation Study
In this section we validate our proposed method on simulated genome/phenome datasets, and examine the effect of the simultaneous use of input and output structures on the detection of true nonzero regression coefficients. We also evaluate the speed and support-recovery performance of our optimization method against two alternatives: the smoothing proximal gradient method chen2010efficient and the union of supports jacob2009group . We selected these two because both can in principle use input/output structures and handle overlapping groups.
The simulated datasets with , and are generated as follows. For generating , we first selected 60 input covariates from a uniform distribution over , which indicates major or minor genotype. We then simulated 60 pairwise interaction terms by randomly selecting input pairs from the 60 covariates above. Pooling the 60 marginal terms and 60 pairwise interaction terms resulted in an input space of 120 dimensions. We also defined input and output groups as follows (for ease of illustration, here our input and output groups correspond to variables to be jointly selected rather than jointly shrunk; the shrinkage penalty in our regression loss can be defined on the complements of these groups): where the numbers within a bracket represent the indices of inputs or outputs for an input group or an output group . Inputs and outputs that did not belong to any group were each placed in a singleton group. We then simulated , i.e., the ground truth that we want to recover. We selected nonzero coefficients so that includes various cases, e.g., overlap between input and output groups, overlap within input groups, and overlap within output groups. Figure 2(a) shows the simulated , where nonzero coefficients are represented by black blocks. Given and , we generated outputs by . We generated 20 datasets and optimized Eq. (5) using the three methods. We report the average performance using precision-recall curves.
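A minimal sketch of this data-generating process. The sizes, the interacting pairs (adjacent columns here), and the nonzero block are illustrative; the paper draws pairs at random and uses its own group definitions and ground-truth pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_marginal, k = 100, 60, 10        # illustrative sizes, not the paper's exact dimensions
X = np.zeros((n, 2 * p_marginal))
X[:, :p_marginal] = rng.integers(0, 2, size=(n, p_marginal))  # 0/1 major/minor genotype
for i in range(p_marginal):           # 60 pairwise interaction terms
    X[:, p_marginal + i] = X[:, i] * X[:, (i + 1) % p_marginal]
B = np.zeros((2 * p_marginal, k))
B[:5, :3] = 1.0                       # an illustrative block of true nonzero coefficients
Y = X @ B + rng.normal(scale=0.5, size=(n, k))
```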
6.1 Evaluation of the Effects of Using Input and Output Structures
We first investigate the effects of using both input and output structures on the performance of our model. Here we applied our optimization method (HiGT) to the following three models with different use of structural information:
We then observed how the use of input/output structures affects the recovery of the true nonzero coefficients and the prediction error. In Figure 2, we visualize examples of estimated by the three different models. Figure 2(b) shows that the model with both input and output structure successfully recovered the true regression coefficients in Figure 2(a). However, as shown in Figure 2(c-d), the models with only input or only output structure were less effective at suppressing noisy signals, which resulted in many false positives.
Figure 3 shows the precision-recall curves for the recovery of true nonzero coefficients, obtained by varying the threshold for choosing relevant covariates (), under signal strengths of 0.4, 1, and 2. For all signal strengths, the model with both input and output structures significantly outperformed the models with only one type of structure. Most interestingly, even when the signal strength was as small as 0.4, our model still achieved good performance by taking advantage of both types of structural information.
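The precision-recall curves here are computed by sweeping a threshold on the estimated coefficient magnitudes; a sketch of that computation (function name ours):

```python
import numpy as np

def support_pr(B_hat, B_true, thresholds):
    """Precision and recall of nonzero-coefficient recovery as the
    selection threshold on |B_hat| varies."""
    truth = B_true != 0
    curve = []
    for t in thresholds:
        sel = np.abs(B_hat) > t
        tp = np.logical_and(sel, truth).sum()
        precision = float(tp / max(sel.sum(), 1))
        recall = float(tp / truth.sum())
        curve.append((precision, recall))
    return curve
```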
We also compare the prediction errors on our validation data with 280 () samples (each dataset had 14 samples for validation). To compute the prediction error, we first selected the nonzero coefficients, and then re-estimated the coefficients of the selected covariates by linear regression without a shrinkage penalty. Using these unbiased coefficients of the chosen covariates, we measured the prediction error on the validation data. Figure 4 shows the prediction error under signal strengths ranging from 0.4 to 2. For all signal strengths, using both input and output structures gave significantly better prediction error. When the signal strength was large (1 or 2), the use of both structures was especially beneficial for reducing the prediction error, since it helped the model find most of the true covariates relevant to the outputs.
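The debiasing step described above, refitting the selected support by ordinary least squares before measuring validation error, can be sketched as follows (function and argument names ours):

```python
import numpy as np

def refit_and_error(X_tr, y_tr, X_val, y_val, support):
    """Refit the selected covariates by ordinary least squares (no
    shrinkage) and return validation mean squared error."""
    coef, *_ = np.linalg.lstsq(X_tr[:, support], y_tr, rcond=None)
    pred = X_val[:, support] @ coef
    return float(np.mean((y_val - pred) ** 2))
```

Refitting removes the shrinkage bias of the penalized estimates, so the reported error reflects the quality of the selected support rather than the penalty strength.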
The effects of the size of input and output groups
6.2 Comparison of HiGT to Alternative Optimization Methods
In this section, we compare the accuracy and speed of our optimization method (HiGT) with those of two alternatives that can handle overlapping groups: the smoothing proximal gradient method (SPG) chen2010efficient and union of supports jacob2009group . The smoothing proximal gradient method efficiently handles the overlapping group lasso penalty and the graph-guided fusion penalty via an approximation. However, it may be inappropriate for our model, since the maximum gap between the approximated penalty and the exact penalty is proportional to the total number of groups , where . Thus, when dealing with high-dimensional data (e.g., ) such as genome data, the gap will be large, and the approximation can be severely affected. On the other hand, "union of supports" finds the support of from the union of the supports of the overlapping groups. To obtain the union of supports, input variables are duplicated to convert the penalty with overlap into one with disjoint groups, and a standard optimization technique for the group lasso yuan2006model can be applied. One disadvantage of union of supports is that the number of duplicated input variables grows dramatically when there are many overlapping groups. In our experiment, we considered all possible combinations of overlapping input and output groups, and used a coordinate descent algorithm for the sparse group lasso friedman2010note .
Figure 6 shows the precision-recall curves for the recovery of true nonzero coefficients under the SIOL model using the three optimization methods. The size of the problem is controlled by increasing the number of input variables (from 30 to 600). The simulated dataset used here was identical to the data in Section 6.1 except that we used 20 outputs () and varied the number of input variables. One can see that our method outperforms the alternatives in all configurations. Our method and the smoothing proximal gradient method showed similar performance when the number of input variables was small (), but as increases, our method performed significantly better than SPG. This is consistent with our claim about the maximum gap between the approximated and exact penalties, which is related to the number of groups. Union of supports did not work well even when the number of input variables was small (), since the actual number of input variables considered was very large due to the duplicated covariates, which severely degraded the performance.
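The duplication trick behind union of supports can be written in a few lines, which also makes its blow-up in dimensionality easy to see (a sketch, not the authors' code):

```python
import numpy as np

def duplicate_for_overlap(X, groups):
    """Duplicate each column once per group it belongs to, so overlapping
    groups become disjoint groups over an expanded design matrix."""
    cols, new_groups, start = [], [], 0
    for g in groups:
        cols.extend(g)
        new_groups.append(list(range(start, start + len(g))))
        start += len(g)
    return X[:, cols], new_groups
```

The expanded matrix has one column per (variable, group) membership, so with many overlapping groups the effective dimensionality multiplies, which is exactly the scalability problem noted above.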
We also compared the speed of our method against the two alternatives: union of supports with all possible combinations of input and output groups, and SPG considering both input and output groups. Figure 7(a,b) shows that our method converged faster than the competitors and was significantly more scalable. Union of supports was very slow compared to our method and SPG because of the large number of duplicated input variables. Our experimental results confirm that our optimization technique is not only accurate but also fast, which can be attributed to the use of the DAG and the simple forms of the optimality checks.
7 Analysis of Yeast eQTL Dataset
We apply our method to the budding yeast (Saccharomyces cerevisiae) data brem2005landscape with 1,260 unique SNPs (out of 2,956 SNPs) and the observed gene-expression levels of 5,637 genes. As network prior knowledge, we used the genetic interaction network reported in costanzo2010genetic with a stringent cutoff to construct the set of candidate SNP pairs . We followed the procedure in Section 5 to build , with an additional set of significant SNP pairs with p-value computed from a two-locus epistasis test. When determining the set , we assumed that a SNP is linked to a gene if the distance between them is less than 500 bp. We consider this a reasonable choice for cis-effects, as the size of intergenic regions in S. cerevisiae is 515 bp on average sunnerhagen2006comparative . As a result, we included 982 interaction terms from the interaction network in , along with 1,260 individual SNPs. The number of SNP pairs from the two-locus epistasis test varied depending on the trait. For generating input structures, we processed the network data as follows. We started with genetic interaction data comprising 74,984 interactions between gene pairs. We then extracted genetic interactions with low p-values (0.001). Given the 44,056 significant interactions, we found 55 gene clusters using the MCODE clustering algorithm. Using these gene clusters, we generated the groups of individual SNPs and pairs of SNPs according to the scheme in Section 5. For generating output structures, we applied hierarchical clustering to the yeast gene expression data with cutoff 0.8, resulting in 2,233 trait clusters.
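As an illustration of how such trait clusters could be produced, here is a single-linkage clustering on correlation distance (1 - r) with a merge cutoff; the paper's exact hierarchical clustering procedure and linkage may differ:

```python
import numpy as np

def cluster_traits(expr, cutoff=0.8):
    """Single-linkage clustering of traits (rows of expr) on correlation
    distance 1 - r, merging any pair of traits closer than `cutoff`.
    Uses union-find with path compression for the merges."""
    m = expr.shape[0]
    parent = list(range(m))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d = 1.0 - np.corrcoef(expr)
    for i in range(m):
        for j in range(i + 1, m):
            if d[i, j] < cutoff:
                parent[find(i)] = find(j)
    return [find(i) for i in range(m)]
```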
Marginal Effects in Yeast eQTL dataset
We briefly demonstrate the effects of input/output structures on the detection of eQTLs with marginal effects. In general, the association results for marginal effects from our method, the lasso, and single-SNP analysis (the latter two are standard methods in contemporary GWA mapping that use no structural information, included for comparison) showed similar patterns for strong associations. However, we observed differences for SNPs with small or medium-sized signals. For example, our results had fewer nonzero regression coefficients than the lasso. One possible explanation is that the grouping effects induced by our model with input/output structures removed false predictions with small or medium-sized effects. To illustrate eQTLs with marginal effects, we show some examples of associated SNPs using GenAMap curtisgenamap . Figure 8 shows a Manhattan plot on chromosome 7 for two genes, YER160C and YJR029W. Both genes have the same GO category, "transposition". As the two genes share the same GO category, it is likely that they are affected by the same SNPs if any associated SNPs exist for both genes. In our results, the same SNPs on chromosome 7 are associated with both genes, as shown in Figure 8. However, single-SNP analysis did not find any significant associated SNPs in the region. The lasso detected associated SNPs in the region, but they were associated with only YER160C rather than both genes (the lasso plot is omitted to avoid a cluttered figure). This observation is interesting since it suggests that our method can effectively detect SNPs jointly associated with gene traits by taking advantage of structural information.
Epistatic Effects in Yeast eQTL dataset
Table 1: Hotspots with epistatic effects identified by our method.

Hotspot label | SNP1 location | SNP2 location | Number of affected traits | GO category of affected traits | Corrected p-value of GO category
1  | chr1:154328  | chr5:350744  | 455 | ribosome biogenesis                  |
2  | chr10:380085 | chr15:170945 | 195 | ribosome biogenesis                  |
3  | chr10:380085 | chr15:175594 | 185 | ribosome biogenesis                  |
4  | chr5:222998  | chr15:108577 | 170 | response to temperature stimulus     |
5  | chr11:388373 | chr13:64970  | 155 | regulation of translation            |
6  | chr2:499012  | chr15:519764 | 145 | vacuolar protein catabolic process   |
7  | chr1:41483   | chr3:64311   | 130 |                                      |
8  | chr7:141949  | chr9:277908  | 125 |                                      |
9  | chr3:64311   | chr7:312740  | 115 | glycoprotein metabolic process       |
10 | chr12:957108 | chr15:170945 | 110 | vacuolar protein catabolic process   |
11 | chr4:864542  | chr13:64970  | 105 | ribonucleoprotein complex biogenesis |
We now show the benefits of using the input/output structures for detecting interaction effects among SNPs by comparing the results of our method with those of the two-locus epistasis test performed by PLINK purcell2007plink , which uses no structural information. Specifically, we compare the hotspots with interaction effects (i.e., SNP pairs that affect a large number of gene traits) identified by each method. Recall that the two-locus epistasis test is the most widely used statistical technique for detecting interaction effects in genome-wide association studies; it computes the significance of an interaction by comparing the null model with only two marginal effects against the alternative model with two marginal effects and their interaction effect. In the following analysis, we discarded all SNP pairs in which the correlation coefficient between the two SNPs was high, to avoid trivial interaction effects.
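The correlation filter can be sketched as follows; the cutoff value is illustrative, as the paper's exact threshold is omitted above:

```python
import numpy as np

def filter_trivial_pairs(X, pairs, max_abs_r=0.9):
    """Drop SNP pairs whose two members are highly correlated, since their
    product term would be nearly collinear with the marginal terms."""
    kept = []
    for i, j in pairs:
        r = np.corrcoef(X[:, i], X[:, j])[0, 1]
        if abs(r) <= max_abs_r:
            kept.append((i, j))
    return kept
```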
We first identified the most significant hotspots, those affecting more than 100 gene traits. To ensure that we include only significant interactions, we considered interaction terms only if the absolute values of their regression coefficients were . For the results of the two-locus epistasis test, we considered all SNP pairs with p-value . Figure 9(a,b) shows the hotspots found by our method and by the two-locus epistasis test. The rings in the figure represent the yeast genome from chromosome 1 (at the top of each circle) to 16, clockwise, and the lines show interactions between the two genomic locations at their ends. One can see that our method detected 11 hotspots, whereas the two-locus epistasis test found only two significant hotspots with interaction effects. This shows that our method can find more significant hotspots, with statistical power improved by the use of the input/output structures. In Table 1, we summarize the hotspots found by our method. Our findings are also biologically interesting (e.g., 9 out of 11 hotspots showed GO enrichment). Notably, hotspot 1 (the epistatic interaction between chr1:154328 and chr5:350744) affects 455 genes, which are enriched for the GO category ribosome biogenesis with a significant corrected p-value (multiple-testing correction is performed by false discovery rate maere2005bingo ). This SNP pair was included in our candidate set from the genetic interaction network. There is a significant genetic interaction between NUP60 and RAD51 with p-value costanzo2010genetic , and the two genes are located at chr1:152257-153877 and chr5:349975-351178, respectively. As both SNPs are located close to NUP60 and RAD51 (within 500 bp), it is reasonable to hypothesize that the two SNPs at chr1:154328 and chr5:350744 affected the two genes, and that their genetic interaction in turn acted on a large number of genes related to ribosome biogenesis.
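Hotspot calling as described, thresholding interaction coefficients and counting affected traits, can be sketched as follows (names and thresholds in the example are illustrative, since the paper's coefficient cutoff is omitted above):

```python
import numpy as np

def find_hotspots(B_inter, pair_ids, coef_cut, min_traits=100):
    """Call a SNP pair a hotspot when its interaction coefficient exceeds
    coef_cut in absolute value for more than min_traits traits.
    B_inter: (n_pairs, n_traits) matrix of interaction coefficients."""
    hits = (np.abs(B_inter) > coef_cut).sum(axis=1)
    order = np.argsort(-hits)             # most widely shared pairs first
    return [(pair_ids[i], int(hits[i])) for i in order if hits[i] > min_traits]
```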
To provide additional biological insight, we further investigated the mechanism of this significant SNP-SNP interaction. From a literature survey, RAD51 (RADiation sensitive) is a strand-exchange protein involved in the DNA repair system sung1994catalysis , and NUP60 (NUclear Pore) is a subunit of the nuclear pore complex involved in the nuclear export system denning2001nucleoporin . It has also been reported that yeast cells are excessively sensitive to DNA-damaging agents if there are mutations in NUP60 nagai2008functional . In our results, we found that the SNP close to NUP60 did not have significant marginal effects, while the SNP in RAD51 did. Based on these facts, one may hypothesize the following. When there is no mutation in RAD51, the point mutation in NUP60 cannot affect other traits, since the single mutation is not strong enough, and if DNA-damaging agents are present in the environment, the DNA repair system can handle them. However, when point mutations exist in RAD51, DNA-damaging agents would severely harm yeast cells carrying the point mutation in NUP60, since the DNA repair system might not work properly due to the mutation in RAD51 (recall that the SNP in RAD51 had marginal effects). As a result, mutations in both NUP60 and RAD51 could make a large impact on many gene traits.
Furthermore, we examined the hotspots that affect gene traits. Figure 10(a,b) shows the epistatic interactions identified by our method and by the two-locus epistasis test, respectively. In this figure, we show significant interactions with regression-coefficient cutoff for our method and p-value cutoff for the two-locus epistasis test; these cutoffs were chosen to make the numbers of hotspots found by the two methods similar. Surprisingly, the two methods reported very different hotspots with epistatic interactions. Figure 10(a) was very similar to Figure 9(a), but in Figure 10(b) several hotspots emerged that were absent in Figure 9(b). We analyze these hotspots in two ways: first, the hotspots with epistatic effects that appear in both Figure 10(a) and (b); second, the differences between the two results. First, both methods found significant epistatic effects between chromosomes 1 and 5. Recall that this interaction was discussed in our previous analysis of the hotspots (see hotspot 1 in Table 1). Among all significant SNP pairs found by the two-locus epistasis test, none was identical to hotspot 1, but 30 SNP pairs were close to it (within kb). These 30 SNP pairs also turned out to have very strong correlation with hotspot 1. In Figure 11, we show a scatter plot illustrating the strong correlations between hotspot 1 and these 30 SNP pairs. More interestingly, the total number of traits affected by these 30 SNP pairs was 416, very close to the 455 genes affected by hotspot 1. Given these facts and our earlier analysis of the mechanism of hotspot 1, it appears that hotspot 1 is truly significant, and that the two-locus epistasis test found significant SNP pairs close to the true location but failed to pinpoint the exact location of hotspot 1. This supports that our algorithm can find such a significant hotspot, affecting 400 genes, by detecting the exact SNP pair.
However, the two-locus epistasis test was unable to locate many hotspots affecting a large number of traits, due to insufficient statistical power. Second, we investigated the differences between the two results in Figure 10(a,b). As we cannot report all the results in the paper, we focus on a SNP pair (chr10:87113-chr15:141621) in Figure 10(a) and another SNP pair (chr8:63314-chr9:362631) in Figure 10(b). Figure 12(a,b) shows the average gene expression levels for each SNP pair. In this figure, the x-axis represents the genotype, defined as the product of the two SNPs (SNP1 SNP2, where SNP1, SNP2 ), and the y-axis represents the average gene expression levels of individuals with the given genotype. Each line in Figure 12(a,b) shows how the average gene expression level varies as the genotype changes from 0 to 1 for each trait affected by the SNP pair, with error bars of one standard deviation. Interestingly, in Figure 12(a), we see a consistent pattern: for most gene traits, the expression levels decreased as the genotype changed from 0 to 1. However, as shown in Figure 12(b), for the SNP pair found by the two-locus epistasis test, no such coherent pattern was found. This illustrates the difference between our method and the two-locus epistasis test. As our model borrows information across input and output group structures, we could find consistent gene expression patterns for the SNP pair. In contrast, the two-locus pairwise test analyzes each SNP pair separately, and each trait affected by the SNP pair showed a different gene expression pattern with a different standard deviation. Thus, our method can provide interesting biological insights in terms of gene expression patterns, in addition to statistical significance.
8 Discussions
In this paper, we presented the jointly structured input-output lasso to take advantage of both input and output structures simultaneously. We also presented an efficient optimization technique for solving our multi-task regression model with structured sparsity. Our experiments confirmed that our model significantly improves the accuracy of detecting true nonzero coefficients by using both input and output structures. Furthermore, we demonstrated that our optimization method is faster and more accurate than the competing methods. In our analysis of the yeast eQTL dataset, we identified important pairs of eQTL hotspots that potentially interact with each other.
Prior knowledge about input and output structures
In practice, it is important to generate reliable input and output groups to maximize the benefits of the structural information. In our experiments, we showed that yeast genetic interaction networks can serve as prior knowledge for defining input and output structures. However, such reliable prior knowledge cannot easily be obtained for human eQTL datasets. Instead, a variety of resources exist for human genomes, including protein-protein interaction networks and pathway databases. Generating reliable input and output structures by exploiting multiple resources will be essential for the successful discovery of human eQTLs.
Comparison between HiGT and other optimization algorithms
Recently, an active set algorithm jenatton2009structured has been proposed for variable selection with structured sparsity, which can potentially be used for estimating the SIOL model. We observe two key differences between our method and the active set algorithm jenatton2009structured . First, the active set algorithm incrementally grows active sets by searching available nonzero patterns, so it can be viewed as a "bottom-up" approach. In contrast, our method adopts a "top-down" approach, in which irrelevant covariates are discarded as we walk through the DAG. Second, our algorithm is guaranteed to examine all zero patterns, while the active set algorithm needs a heuristic to select candidate nonzero patterns. When is sparse, our algorithm remains very fast by taking advantage of the structure of the DAG. However, when is not sparse, our algorithm needs to search a large number of zero patterns and update many nonzero coefficients, whereas the active set algorithm still avoids updating many nonzero coefficients. Hence, in such a non-sparse case, the active set algorithm may have an advantage over our optimization method. Other alternative optimization methods for SIOL include the MM (majorize/minimize) algorithm lange2004springer and the generalized stagewise lasso zhao2007stagewise . However, these methods did not work well for SIOL, as the approximated penalty of the MM algorithm and the greedy procedure of the generalized stagewise lasso were incapable of efficiently inducing complex sparsity patterns.
Future work
One promising research direction is to systematically estimate the significance of the covariates we found; for example, computing p-values for our results would help control the false discovery rate. To handle both sparse and non-sparse , it would also be interesting to develop an optimization method for our model that combines the "bottom-up" and "top-down" strategies, for example by alternately selecting variables bottom-up and discarding irrelevant variables top-down within a single framework. Finally, we are interested in applying our method to human disease datasets. This will require extending our work to handle case-control studies and finding reliable structural information.
References
 [1] J.L. Badano, C.C. Leitch, S.J. Ansley, H. May-Simera, S. Lawson, R.A. Lewis, P.L. Beales, H.C. Dietz, S. Fisher, and N. Katsanis. Dissection of epistasis in oligogenic Bardet-Biedl syndrome. Nature, 439(7074):326–330, 2005.
 [2] G. Bader and C. Hogue. An automated method for finding molecular complexes in large protein interaction networks. BMC bioinformatics, 4(1):2, 2003.
 [3] C. Boone, H. Bussey, and B.J. Andrews. Exploring genetic interactions and networks with yeast. Nature Reviews Genetics, 8(6):437–449, 2007.
 [4] R.B. Brem and L. Kruglyak. The landscape of genetic complexity across 5,700 gene expression traits in yeast. PNAS, 102(5):1572, 2005.
 [5] A.A. Brown, S. Richardson, and J. Whittaker. Application of the lasso to expression quantitative trait loci mapping. Statistical Applications in Genetics and Molecular Biology, 10(1):15, 2011.
 [6] P.J. Cameron. Combinatorics: topics, techniques, algorithms. Cambridge Univ Pr, 1994.
 [7] C.S. Carlson, M.A. Eberle, L. Kruglyak, and D.A. Nickerson. Mapping complex disease loci in wholegenome association studies. Nature, 429(6990):446–452, 2004.
 [8] X. Chen, Q. Lin, S. Kim, and E.P. Xing. An efficient proximalgradient method for single and multitask regression with structured sparsity. Annals of Applied Statistics, 2010.
 [9] J.H. Cho, D.L. Nicolae, L.H. Gold, C.T. Fields, M.C. LaBuda, P.M. Rohal, M.R. Pickles, L. Qin, Y. Fu, J.S. Mann, et al. Identification of novel susceptibility loci for inflammatory bowel disease on chromosomes 1p, 3q, and 4q: evidence for epistasis between 1p and IBD1. Proceedings of the National Academy of Sciences, 95(13):7502, 1998.
 [10] M. Costanzo et al. The genetic landscape of a cell. Science, 327(5964):425, 2010.
 [11] R.E. Curtis, P. Kinnaird, and E.P. Xing. Genamap: Visualization strategies for structured association mapping.
 [12] D. Denning, B. Mykytka, N.P.C. Allen, L. Huang, A.L. Burlingame, M. Rexach, et al. The nucleoporin Nup60p functions as a Gsp1p-GTP-sensitive tether for Nup2p at the nuclear pore complex. The Journal of Cell Biology, 154(5):937–950, 2001.
 [13] B. Devlin, K. Roeder, and L. Wasserman. Analysis of multilocus models of association. Genetic Epidemiology, 25(1):36–47, 2003.
 [14] A.L. Ducrest, L. Keller, and A. Roulin. Pleiotropy in the melanocortin system, coloration and behavioural syndromes. Trends in ecology & evolution, 23(9):502–510, 2008.
 [15] J. Friedman, T. Hastie, H. Hofling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.
 [16] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group Lasso and a sparse group Lasso. arXiv:1001.0736v1 [math.ST], 2010.

 [17] Y. Han, F. Wu, J. Jia, Y. Zhuang, and B. Yu. Multi-task sparse discriminant analysis (MtSDA) with overlapping categories. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 469–474, 2010.
 [18] A.E. Hoerl and R.W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, pages 55–67, 1970.
 [19] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
 [20] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. arXiv preprint arXiv:0904.3523, 2009.
 [21] C.M. Kendziorski, M. Chen, M. Yuan, H. Lan, and A.D. Attie. Statistical methods for expression quantitative trait loci (eQTL) mapping. Biometrics, 62(1):19–27, 2006.
 [22] S. Kim and E.P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In Proceedings of the 27th Annual International Conference on Machine Learning, 2010.
 [23] S. Kim and E.P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
 [24] J.L.Y. Koh, H. Ding, M. Costanzo, A. Baryshnikova, K. Toufighi, G.D. Bader, C.L. Myers, B.J. Andrews, and C. Boone. DRYGIN: a database of quantitative genetic interaction networks in yeast. Nucleic Acids Research, 2009.
 [25] M. Krzywinski et al. Circos: an information aesthetic for comparative genomics. Genome Research, 19(9):1639–1645, 2009.
 [26] K. Lange. Optimization. Springer-Verlag, New York, 2004.
 [27] F. Li and N.R. Zhang. Bayesian variable selection in structured high-dimensional covariate spaces with applications in genomics. Journal of the American Statistical Association, 105(491):1202–1214, 2010.
 [28] A.C. Lozano and V. Sindhwani. Block variable selection in multivariate regression and high-dimensional causal inference. Advances in Neural Information Processing Systems (NIPS), 2010.
 [29] S. Maere, K. Heymans, and M. Kuiper. BiNGO: a Cytoscape plugin to assess overrepresentation of Gene Ontology categories in biological networks. Bioinformatics, 21(16):3448–3449, 2005.
 [30] M.I. McCarthy, G.R. Abecasis, L.R. Cardon, D.B. Goldstein, J. Little, J.P.A. Ioannidis, and J.N. Hirschhorn. Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nature Reviews Genetics, 9(5):356–369, 2008.
 [31] D.C. Montgomery, E.A. Peck, and G.G. Vining. Introduction to Linear Regression Analysis, volume 3. Wiley, New York, 2001.
 [32] J.H. Moore, F.W. Asselbergs, and S.M. Williams. Bioinformatics challenges for genome-wide association studies. Bioinformatics, 26(4):445–455, 2010.
 [33] S. Nagai, K. Dubrana, M. Tsai-Pflugfelder, M.B. Davidson, T.M. Roberts, G.W. Brown, E. Varela, F. Hediger, S.M. Gasser, and N.J. Krogan. Functional targeting of DNA damage to a nuclear pore-associated SUMO-dependent ubiquitin ligase. Science, 322(5901):597, 2008.
 [34] S.N. Negahban and M.J. Wainwright. Simultaneous support recovery in high dimensions: Benefits and perils of block regularization. IEEE Transactions on Information Theory, 57(6):3841–3863, 2011.
 [35] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming, volume 13. Society for Industrial and Applied Mathematics, 1994.
 [36] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, 2006.
 [37] S. Purcell, B. Neale, K. Todd-Brown, L. Thomas, M.A.R. Ferreira, D. Bender, J. Maller, P. Sklar, P.I.W. de Bakker, M.J. Daly, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 81(3):559–575, 2007.
 [38] A. Ratnaparkhi. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP, 1996.
 [39] D. Segre, A. DeLuna, G.M. Church, and R. Kishony. Modular epistasis in yeast metabolism. Nature Genetics, 37(1):77–83, 2005.
 [40] M. Silver and G. Montana. Pathway selection for GWAS using the group lasso with overlaps. 2011 International Conference on Bioscience, Biochemistry and Bioinformatics, 2011.
 [41] F.W. Stearns. One hundred years of pleiotropy: a retrospective. Genetics, 186(3):767, 2010.
 [42] P. Sung. Catalysis of ATP-dependent homologous DNA pairing and strand exchange by yeast Rad51 protein. Science, 265(5176):1241, 1994.
 [43] P. Sunnerhagen and J. Piskur. Comparative Genomics: Using Fungi as Models, volume 15. Springer-Verlag, 2006.
 [44] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, 1996.
 [45] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
 [46] A.H.Y. Tong et al. Global mapping of the yeast genetic interaction network. Science, 303(5659):808, 2004.
 [47] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1):387–423, 2009.
 [48] K. Wang, M. Li, and H. Hakonarson. Analysing biological pathways in genome-wide association studies. Nature Reviews Genetics, 11(12):843–854, 2010.
 [49] D. Weiss and B. Taskar. Structured Prediction Cascades. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
 [50] T.T. Wu, Y.F. Chen, T. Hastie, E. Sobel, and K. Lange. Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics, 25(6):714–721, 2009.
 [51] L. Yuan, J. Liu, and J. Ye. Efficient methods for overlapping group lasso. Advances in Neural Information Processing Systems, 2011.
 [52] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1):49–67, 2006.
 [53] C.H. Zhang and J. Huang. The sparsity and bias of the lasso selection in highdimensional linear regression. The Annals of Statistics, 36(4):1567–1594, 2008.
 [54] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics, 37(6A):3468–3497, 2009.
 [55] P. Zhao and B. Yu. Stagewise lasso. The Journal of Machine Learning Research, 8:2701–2726, 2007.