1 Introduction
The concept of parsimony is central in many scientific domains. In the context of statistics, signal processing or machine learning, it takes the form of variable or feature selection problems, and is commonly used in two situations: first, to make the model or the prediction more interpretable or cheaper to use, i.e., even if the underlying problem does not admit sparse solutions, one looks for the best sparse approximation; second, sparsity can be used when prior knowledge suggests that the underlying model is indeed sparse.
Sparse linear models seek to predict an output by linearly combining a small subset of the features describing the data. To simultaneously address variable selection and model estimation, $\ell_1$-norm regularization has become a popular tool, which benefits both from efficient algorithms (see, e.g., Efron et al., 2004; Beck and Teboulle, 2009; Yuan, 2010; Bach et al., 2012, and multiple references therein) and a well-developed theory for generalization properties and variable selection consistency (Zhao and Yu, 2006; Wainwright, 2009; Bickel, Ritov and Tsybakov, 2009; Zhang, 2009).
When regularizing with the $\ell_1$ norm, each variable is selected individually, regardless of its position in the input feature vector, so that existing relationships and structures between the variables (e.g., spatial, hierarchical or related to the physics of the problem at hand) are merely disregarded. However, in many practical situations the estimation can benefit from some type of prior knowledge, potentially both for interpretability and to improve predictive performance.
This a priori can take various forms. In neuroimaging based on functional magnetic resonance imaging (fMRI) or magnetoencephalography (MEG), the sets of voxels that discriminate between different brain states are expected to form small, localized and connected areas (Gramfort and Kowalski, 2009; Xiang et al., 2009, and references therein). Similarly, in face recognition, as shown in Section 4.4, robustness to occlusions can be increased by considering as features sets of pixels that form small convex regions of the faces. Again, a plain $\ell_1$-norm regularization fails to encode such specific spatial constraints (Jenatton, Obozinski and Bach, 2010). The same rationale supports the use of structured sparsity for background subtraction (Cevher et al., 2008; Huang, Zhang and Metaxas, 2011; Mairal et al., 2011). Another example of the need for higher-order prior knowledge comes from bioinformatics. Indeed, for the diagnosis of tumors, the profiles of array-based comparative genomic hybridization (array-CGH) can be used as inputs to feed a classifier (Rapaport, Barillot and Vert, 2008). These profiles are characterized by many variables, but only a few observations of such profiles are available, prompting the need for variable selection. Because of the specific spatial organization of bacterial artificial chromosomes along the genome, the set of discriminative features is expected to consist of specific contiguous patterns. Using this prior knowledge in addition to standard sparsity leads to improvements in classification accuracy (Rapaport, Barillot and Vert, 2008). In the context of multi-task regression, a problem of interest in genetics is to find a mapping between a small subset of loci presenting single nucleotide polymorphisms (SNPs) and a given family of genes on which they have a phenotypic impact (Kim and Xing, 2010). This target family of genes has its own structure: some genes share common genetic characteristics, so that the genes can be embedded into some underlying hierarchy. Exploiting this hierarchical information directly in the regularization term outperforms the unstructured approach with a standard $\ell_1$ norm (Kim and Xing, 2010). These real-world examples motivate the design of sparsity-inducing regularization schemes capable of encoding more sophisticated prior knowledge about the expected sparsity patterns. As mentioned above, the $\ell_1$ norm corresponds only to a constraint on cardinality and is oblivious of any other information available about the patterns of nonzero coefficients ("nonzero patterns" or "supports") induced in the solution, since they are all theoretically possible. In this paper, we consider a family of sparsity-inducing norms that can address a large variety of structured sparse problems: a simple change of norm induces new ways of selecting variables; moreover, as shown in Sections 3.5 and 3.6, algorithms to obtain estimators (e.g., convex optimization methods) and theoretical analyses extend easily in many situations.
As shown in Section 3, the norms we introduce generalize traditional "group norms", which have been popular for selecting variables organized in non-overlapping groups (Turlach, Venables and Wright, 2005; Yuan and Lin, 2006; Roth and Fischer, 2008; Huang and Zhang, 2010). Other families for different types of structures are presented in Section 3.4.
The paper is organized as follows: we first review in Section 2 classical $\ell_1$-norm regularization in supervised contexts. We then introduce several families of norms in Section 3, and present applications to unsupervised learning in Section 4, namely for sparse principal component analysis in Section 4.4 and hierarchical dictionary learning in Section 4.5. We briefly show in Section 5 how these norms can also be used for high-dimensional non-linear variable selection.
Notations.
Throughout the paper, we shall denote vectors with bold lower case letters, and matrices with bold upper case ones. For any integer $j$ in the set $\{1,\dots,p\}$, we denote the $j$-th coefficient of a $p$-dimensional vector $\mathbf{w} \in \mathbb{R}^p$ by $w_j$. Similarly, for any matrix $\mathbf{M} \in \mathbb{R}^{n \times p}$, we refer to the entry on the $i$-th row and $j$-th column as $M_{ij}$, for any $(i,j) \in \{1,\dots,n\} \times \{1,\dots,p\}$. We will need to refer to sub-vectors of $\mathbf{w}$, and so, for any $g \subseteq \{1,\dots,p\}$, we denote by $\mathbf{w}_g \in \mathbb{R}^{|g|}$ the vector consisting of the entries of $\mathbf{w}$ indexed by $g$. Likewise, for any $I \subseteq \{1,\dots,n\}$, $J \subseteq \{1,\dots,p\}$, we denote by $\mathbf{M}_{IJ}$ the sub-matrix of $\mathbf{M}$ formed by the rows indexed by $I$ and the columns indexed by $J$. We extensively manipulate norms in this paper. For $q \geq 1$, we define the $\ell_q$ norm of a vector $\mathbf{w}$ by $\|\mathbf{w}\|_q := \bigl(\sum_{j=1}^p |w_j|^q\bigr)^{1/q}$; for $q \in (0,1)$, we extend the definition above to $\ell_q$ pseudo-norms. Finally, for any matrix $\mathbf{M}$, we define the Frobenius norm of $\mathbf{M}$ by $\|\mathbf{M}\|_F := \bigl(\sum_{i=1}^n \sum_{j=1}^p M_{ij}^2\bigr)^{1/2}$.
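As a minimal illustration of these definitions (our own sketch, not part of the original paper), the $\ell_q$ and Frobenius norms can be computed directly:

```python
import numpy as np

def lq_norm(w, q):
    """l_q norm of a vector (a pseudo-norm when 0 < q < 1)."""
    return np.sum(np.abs(w) ** q) ** (1.0 / q)

def frobenius_norm(M):
    """Frobenius norm: square root of the sum of squared entries."""
    return np.sqrt(np.sum(M ** 2))

w = np.array([3.0, -4.0])
assert np.isclose(lq_norm(w, 2), 5.0)      # Euclidean norm of (3, -4)
assert np.isclose(lq_norm(w, 1), 7.0)      # l1 norm
M = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(frobenius_norm(M), 5.0)  # sqrt(1 + 4 + 4 + 16)
```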
2 Unstructured sparsity via the $\ell_1$ norm
Regularizing by the $\ell_1$ norm has been a topic of intensive research over the last decade. This line of work has witnessed the development of nice theoretical frameworks (Tibshirani, 1996; Chen, Donoho and Saunders, 1998; Mallat, 1999; Tropp, 2004, 2006; Zhao and Yu, 2006; Zou, 2006; Wainwright, 2009; Bickel, Ritov and Tsybakov, 2009; Zhang, 2009; Negahban et al., 2009) and the emergence of many efficient algorithms (Efron et al., 2004; Nesterov, 2007; Friedman et al., 2007; Wu and Lange, 2008; Beck and Teboulle, 2009; Wright, Nowak and Figueiredo, 2009; Needell and Tropp, 2009; Yuan et al., 2010). Moreover, this methodology has found quite a few applications, notably in compressed sensing (Candès and Tao, 2005), for the estimation of the structure of graphical models (Meinshausen and Bühlmann, 2006) or for several reconstruction tasks involving natural images (e.g., see Mairal, 2010, for a review). In this section, we focus on supervised learning and present the traditional estimation problems associated with sparsity-inducing norms such as the $\ell_1$ norm (see Section 4 for unsupervised learning).
In supervised learning, we predict (typically one-dimensional) outputs $y$ in $\mathcal{Y}$ from observations $\mathbf{x}$ in $\mathcal{X}$; these observations are usually represented by $p$-dimensional vectors with $\mathcal{X} = \mathbb{R}^p$. M-estimation and in particular regularized empirical risk minimization are well suited to this setting. Indeed, given $n$ pairs of data points $(\mathbf{x}_i, y_i) \in \mathbb{R}^p \times \mathcal{Y}$, we consider the estimators solving the following form of convex optimization problem
$$\min_{\mathbf{w} \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^n \ell\bigl(y_i, \mathbf{w}^\top \mathbf{x}_i\bigr) + \lambda \, \Omega(\mathbf{w}), \qquad (2.1)$$
where $\ell : \mathcal{Y} \times \mathbb{R} \to \mathbb{R}_+$ is a loss function and $\Omega : \mathbb{R}^p \to \mathbb{R}_+$ is a sparsity-inducing—typically non-smooth and non-Euclidean—norm. Typical examples of differentiable loss functions are the square loss $\ell(y, \hat{y}) = \frac{1}{2}(y - \hat{y})^2$ for least-squares regression, with $y$ in $\mathbb{R}$, and the logistic loss $\ell(y, \hat{y}) = \log(1 + e^{-y\hat{y}})$ for logistic regression, with $y$ in $\{-1, 1\}$. We refer the readers to Shawe-Taylor and Cristianini (2004) and to Hastie, Tibshirani and Friedman (2001) for more complete descriptions of loss functions. Within the context of least-squares regression, $\ell_1$-norm regularization is known as the Lasso (Tibshirani, 1996) in statistics and as basis pursuit in signal processing (Chen, Donoho and Saunders, 1998). For the Lasso, formulation (2.1) takes the form
$$\min_{\mathbf{w} \in \mathbb{R}^p} \; \frac{1}{2n} \|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2 + \lambda \|\mathbf{w}\|_1, \qquad (2.2)$$
and, equivalently, basis pursuit can be written^{1}

^{1} Note that the formulations typically encountered in signal processing are either $\min_{\boldsymbol\alpha} \|\boldsymbol\alpha\|_1$ such that $\mathbf{x} = \mathbf{D}\boldsymbol\alpha$, which corresponds to the limiting case of Eq. (2.3) where $\lambda \to 0$ and $\mathbf{x}$ is in the span of the dictionary $\mathbf{D}$, or $\min_{\boldsymbol\alpha} \|\mathbf{x} - \mathbf{D}\boldsymbol\alpha\|_2^2$ such that $\|\boldsymbol\alpha\|_1 \leq \mu$, which is a constrained counterpart of Eq. (2.3) leading to the same set of solutions (see the explanation following Eq. (2.4)).
$$\min_{\boldsymbol\alpha \in \mathbb{R}^p} \; \frac{1}{2} \|\mathbf{x} - \mathbf{D}\boldsymbol\alpha\|_2^2 + \lambda \|\boldsymbol\alpha\|_1. \qquad (2.3)$$
These two equations are obviously identical, but we write them both to show the correspondence between notations used in statistics and in signal processing. In statistical notations, we will use $\mathbf{X} \in \mathbb{R}^{n \times p}$ to denote a set of $n$ observations described by $p$ variables (covariates), while $\mathbf{y} \in \mathbb{R}^n$ represents the corresponding set of $n$ targets (responses) that we try to predict. For instance, $\mathbf{y}$ may have discrete entries in the context of classification. With notations of signal processing, we will consider an $n$-dimensional signal $\mathbf{x} \in \mathbb{R}^n$ that we express as a linear combination of $p$ dictionary elements composing the dictionary $\mathbf{D} = [\mathbf{d}_1, \dots, \mathbf{d}_p] \in \mathbb{R}^{n \times p}$. While the design matrix $\mathbf{X}$ is usually assumed fixed and given beforehand, we shall see in Section 4 that the dictionary $\mathbf{D}$ may correspond either to some predefined basis (e.g., see Mallat, 1999, for wavelet bases) or to a representation that is actually learned as well (Olshausen and Field, 1996).
Geometric intuitions for the $\ell_1$-norm ball.
While we consider in (2.1) a regularized formulation, we could have considered an equivalent constrained problem of the form
$$\min_{\mathbf{w} \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^n \ell\bigl(y_i, \mathbf{w}^\top \mathbf{x}_i\bigr) \quad \text{such that} \quad \Omega(\mathbf{w}) \leq \mu, \qquad (2.4)$$
for some $\mu \in \mathbb{R}_+$: it is indeed the case that the set of solutions to problem (2.4) obtained when varying $\mu$ is the same as the set of solutions to problem (2.1), for some value of $\lambda$ depending on $\mu$ (e.g., see Section 3.2 in Borwein and Lewis, 2006).
At optimality, the opposite of the gradient of $f : \mathbf{w} \mapsto \frac{1}{n}\sum_{i=1}^n \ell(y_i, \mathbf{w}^\top \mathbf{x}_i)$ evaluated at any solution $\hat{\mathbf{w}}$ of (2.4) must belong to the normal cone to $\mathcal{B} = \{\mathbf{w} \in \mathbb{R}^p : \Omega(\mathbf{w}) \leq \mu\}$ at $\hat{\mathbf{w}}$ (Borwein and Lewis, 2006). In other words, for sufficiently small values of $\mu$ (i.e., ensuring that the constraint is active), the level set of $f$ for the value $f(\hat{\mathbf{w}})$ is tangent to $\mathcal{B}$. As a consequence, important properties of the solutions follow from the geometry of the ball $\mathcal{B}$. If $\Omega$ is taken to be the $\ell_2$ norm, then the resulting ball $\mathcal{B}$ is the standard, isotropic, "round" ball that does not favor any specific direction of the space. On the other hand, when $\Omega$ is the $\ell_1$ norm, $\mathcal{B}$ corresponds to a diamond-shaped pattern in two dimensions, and to a double pyramid in three dimensions. In particular, $\mathcal{B}$ is anisotropic and exhibits some singular points due to the non-smoothness of $\Omega$. Since these singular points are located along axis-aligned linear subspaces of $\mathbb{R}^p$, if the level set of $f$ with the smallest feasible value is tangent to $\mathcal{B}$ at one of those points, sparse solutions are obtained. We display on Figure 1 the balls $\mathcal{B}$ for both the $\ell_1$ and $\ell_2$ norms. See Section 3 and Figure 2 for extensions to structured norms.
3 Structured Sparsity-Inducing Norms
In this section, we consider structured sparsity-inducing norms that induce estimated vectors that are not only sparse, as for the $\ell_1$ norm, but whose support also displays some structure known a priori that reflects potential relationships between the variables.
3.1 Sparsity-Inducing Norms with Disjoint Groups of Variables
The most natural form of structured sparsity is arguably group sparsity, matching the a priori knowledge that prespecified disjoint blocks of variables should be selected or ignored simultaneously. In that case, if $\mathcal{G}$ is a collection of groups of variables forming a partition of $\{1,\dots,p\}$, and $d_g$ is a positive scalar weight indexed by group $g \in \mathcal{G}$, we define $\Omega$ as
$$\Omega(\mathbf{w}) = \sum_{g \in \mathcal{G}} d_g \, \|\mathbf{w}_g\|. \qquad (3.1)$$
This norm is usually referred to as a mixed norm, and, in practice, popular choices for the norm $\|\cdot\|$ applied to each group are the $\ell_2$ and $\ell_\infty$ norms. As desired, regularizing with $\Omega$ leads variables in the same group to be selected or set to zero simultaneously (see Figure 2 for a geometric interpretation). In the context of least-squares regression, this regularization is known as the group Lasso (Turlach, Venables and Wright, 2005; Yuan and Lin, 2006). It has been shown to improve the prediction performance and/or interpretability of the learned models when the block structure is relevant (Roth and Fischer, 2008; Stojnic, Parvaresh and Hassibi, 2009; Lounici et al., 2009; Huang and Zhang, 2010). Moreover, applications of this regularization scheme arise also in the context of multi-task learning (Obozinski, Taskar and Jordan, 2010; Quattoni et al., 2009; Liu, Palatucci and Zhang, 2009) to account for features shared across tasks, and multiple kernel learning (Bach, 2008) for the selection of different kernels (see also Section 5).
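As a concrete sketch (our own illustration; the group structure and weights below are toy choices, not from the paper), the mixed $\ell_1/\ell_2$ norm of Eq. (3.1) for disjoint groups can be computed as follows:

```python
import numpy as np

def group_norm(w, groups, weights, q=2):
    """Mixed norm of Eq. (3.1): sum over groups g of d_g * ||w_g||_q."""
    return sum(d * np.linalg.norm(w[g], ord=q)
               for g, d in zip(groups, weights))

# Partition of 6 variables into three disjoint groups (toy example).
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
weights = [1.0, 1.0, 1.0]
w = np.array([3.0, 4.0, 0.0, 0.0, 1.0, 0.0])

# ||(3,4)||_2 + ||(0,0)||_2 + ||(1,0)||_2 = 5 + 0 + 1 = 6
assert np.isclose(group_norm(w, groups, weights), 6.0)
```

Because each group enters through its (non-squared) norm, the penalty behaves like an $\ell_1$ norm over groups: whole blocks are driven exactly to zero.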
Choice of the weights.
When the groups vary significantly in size, results can be improved, in particular under high-dimensional scaling, by an appropriate choice of the weights $(d_g)_{g \in \mathcal{G}}$, which compensate for the discrepancies in size between groups. It is difficult to provide a single universal choice for the weights: in general, they depend on $\mathcal{G}$ and on the type of consistency desired. We refer the reader to Yuan and Lin (2006); Bach (2008); Obozinski, Jacob and Vert (2011); Lounici et al. (2011) for general discussions.
It might seem that the case of groups that overlap would be unnecessarily complex. It turns out, in reality, that appropriate collections of overlapping groups make it possible to encode quite interesting forms of structured sparsity. In fact, the idea of constructing sparsity-inducing norms from overlapping groups will be key. We present two constructions based on overlapping groups of variables, essentially complementary to each other, in Sections 3.2 and 3.3.
3.2 Sparsity-Inducing Norms with Overlapping Groups of Variables
In this section, we consider a direct extension of the norm introduced in the previous section to the case of overlapping groups; we give an informal overview of the structures that it can encode and examples of relevant applied settings. For more details see Jenatton, Audibert and Bach (2011).
Starting from the definition of $\Omega$ in Eq. (3.1), it is natural to study what happens when the set of groups $\mathcal{G}$ is allowed to contain elements that overlap. In fact, as shown by Jenatton, Audibert and Bach (2011), the sparsity-inducing behavior of $\Omega$ remains the same: when regularizing by $\Omega$, some entire groups of variables in $\mathcal{G}$ are set to zero. This is reflected in the set of non-smooth extreme points of the unit ball of the norm represented on Figure 2. While the resulting patterns of nonzero variables—also referred to as supports, or nonzero patterns—were obvious in the non-overlapping case, it is interesting to understand here the relationship that ties together the set of groups $\mathcal{G}$ and its associated set of possible nonzero patterns, which we denote by $\mathcal{P}$. For any norm of the form (3.1), it is still the case that variables belonging to a given group are encouraged to be set simultaneously to zero; as a result, the possible zero patterns for solutions of (2.1) are obtained by forming unions of the basic groups, which means that the possible supports are obtained by taking the intersection of a certain number of complements of the basic groups.
Moreover, under mild conditions (Jenatton, Audibert and Bach, 2011), given any intersection-closed^{2} family $\mathcal{P}$ of patterns of variables (see examples below), it is possible to build an ad hoc set of groups $\mathcal{G}$—and hence, a regularization norm $\Omega$—that enforces the support of the solutions of (2.1) to belong to $\mathcal{P}$.

^{2} A family $\mathcal{P}$ of subsets of $\{1,\dots,p\}$ is said to be intersection-closed if, for any $k \in \mathbb{N}$ and any $(P_1,\dots,P_k) \in \mathcal{P}^k$, we have $P_1 \cap \dots \cap P_k \in \mathcal{P}$.
These properties make it possible to design norms that are adapted to the structure of the problem at hand, which we now illustrate with a few examples.
One-dimensional interval pattern.
Given $p$ variables organized in a sequence, using the set of groups of Figure 3, it is only possible to select contiguous nonzero patterns. In this case, the number of groups required grows only linearly with $p$. Imposing the contiguity of the nonzero patterns can be relevant for variables forming time series, or for the diagnosis of tumors based on the profiles of CGH arrays (Rapaport, Barillot and Vert, 2008), since a bacterial artificial chromosome will be inserted as a single continuous block into the genome.
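The following sketch (our own; the "left" and "right" half-line groups mirror the construction described for Figure 3, with 0-based indices) verifies that the complements of unions of such groups are exactly the contiguous patterns:

```python
p = 5
# Groups: all "left" and "right" half-lines of {0, ..., p-1}.
# Unions of these groups are the possible zero patterns.
groups = [frozenset(range(0, k)) for k in range(1, p)] \
       + [frozenset(range(k, p)) for k in range(1, p)]

def unions_of(groups):
    """All possible zero patterns: unions of any sub-collection of groups."""
    patterns = {frozenset()}
    for g in groups:
        patterns |= {z | g for z in patterns}
    return patterns

# Supports are complements of zero patterns.
supports = {frozenset(range(p)) - z for z in unions_of(groups)}

def is_contiguous(s):
    return len(s) == 0 or max(s) - min(s) + 1 == len(s)

assert all(is_contiguous(s) for s in supports)
# Every interval (and the empty support) is attainable:
assert len(supports) == p * (p + 1) // 2 + 1
```

Note that only $2(p-1)$ groups suffice to carve out all $p(p+1)/2$ intervals, in line with the linear growth mentioned above.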
Two-dimensional convex support.
Similarly, assume now that the variables are organized on a two-dimensional grid. To constrain the allowed supports to be the set of all rectangles on this grid, a possible set of groups to consider is represented in the top of Figure 4. This set remains relatively small, since the number of groups grows only as $O(\sqrt{p})$ for a $\sqrt{p} \times \sqrt{p}$ grid. Groups corresponding to half-planes with additional orientations (see Figure 4, bottom) may be added to "carve out" more general convex patterns. See an illustration in Section 4.4.
Two-dimensional block structures on a grid.
Using sparsity-inducing regularizations built upon groups which are composed of variables together with their spatial neighbors leads to good performance in background subtraction (Cevher et al., 2008; Baraniuk et al., 2010; Huang, Zhang and Metaxas, 2011; Mairal et al., 2011), topographic dictionary learning (Kavukcuoglu et al., 2009; Mairal et al., 2011), and wavelet-based denoising (Rao et al., 2011).
Hierarchical structure.
A fourth interesting example assumes that the variables are organized in a hierarchy. Precisely, we assume that the variables can be assigned to the nodes of a tree $\mathcal{T}$ (or a forest of trees), and that a given variable may be selected only if all its ancestors in $\mathcal{T}$ have already been selected. This hierarchical rule is exactly respected when using the family of groups displayed on Figure 5. The corresponding penalty was first used by Zhao, Rocha and Yu (2009); one of its simplest instances in the context of regression is the sparse group Lasso (Sprechmann et al., 2010; Friedman, Hastie and Tibshirani, 2010). It has found numerous applications, for instance, in wavelet-based denoising (Zhao, Rocha and Yu, 2009; Baraniuk et al., 2010; Huang, Zhang and Metaxas, 2011; Jenatton et al., 2011b), hierarchical dictionary learning for both topic modelling and image restoration (Jenatton et al., 2011b), log-linear models for the selection of potential orders (Schmidt and Murphy, 2010), bioinformatics, to exploit the tree structure of gene networks for multi-task regression (Kim and Xing, 2010), and multi-scale mining of fMRI data for the prediction of simple cognitive tasks (Jenatton et al., 2011a).
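A common way to build such hierarchical groups (sketched below on a small hypothetical tree of our own; the paper's Figure 5 illustrates the same idea) is to take, for each node, the group containing that node and all of its descendants; zeroing out entire groups then forces a variable to be zero whenever any of its ancestors is:

```python
# Toy tree: node -> parent (None for the root). Indices are hypothetical.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}

def descendant_groups(parent):
    """One group per node: the node together with all of its descendants."""
    children = {v: [] for v in parent}
    for v, u in parent.items():
        if u is not None:
            children[u].append(v)

    def subtree(v):
        out = {v}
        for c in children[v]:
            out |= subtree(c)
        return out

    return {v: subtree(v) for v in parent}

groups = descendant_groups(parent)
assert groups[0] == {0, 1, 2, 3, 4}  # the root's group is the whole tree
assert groups[1] == {1, 3, 4}        # an internal node and its subtree
assert groups[3] == {3}              # a leaf forms a singleton group
```

Since the possible zero patterns are unions of these subtree groups, the possible supports are exactly the "rooted" sets in which every selected variable has all its ancestors selected.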
Extensions.
Possible choices for the sets of groups $\mathcal{G}$ are not limited to the aforementioned examples: more complicated topologies can be considered, for example three-dimensional spaces discretized in cubes or spherical volumes discretized in slices (see an application to neuroimaging by Varoquaux et al. (2010)), and more complicated hierarchical structures based on directed acyclic graphs can be encoded, as further developed in Section 5.
Choice of the weights.
The choice of the weights $(d_g)_{g \in \mathcal{G}}$ is significantly more important in the overlapping case, both theoretically and in practice. In addition to compensating for discrepancies in group sizes, the weights have to make up for the potential over-penalization of parameters contained in a large number of groups. For the case of one-dimensional interval patterns, Jenatton, Audibert and Bach (2011) showed that it was more efficient in practice to weight each individual coefficient inside a group, as opposed to weighting the group globally.
3.3 Norms for Overlapping Groups: a Latent Variable Formulation
The family of norms defined in Eq. (3.1) is adapted to intersection-closed sets of nonzero patterns. However, some applications exhibit structures that can be more naturally modelled by union-closed families of supports. This idea was introduced by Jacob, Obozinski and Vert (2009) and Obozinski, Jacob and Vert (2011) who, given a set of groups $\mathcal{G}$, proposed the following norm:
$$\Omega_{\cup}(\mathbf{w}) \;=\; \min_{\substack{(\mathbf{v}^g \in \mathbb{R}^p)_{g \in \mathcal{G}} \\ \sum_{g \in \mathcal{G}} \mathbf{v}^g = \mathbf{w},\ \operatorname{supp}(\mathbf{v}^g) \subseteq g}} \; \sum_{g \in \mathcal{G}} d_g \, \|\mathbf{v}^g\|_2, \qquad (3.2)$$
where, again, $d_g$ is a positive scalar weight associated with group $g$.
The norm $\Omega_{\cup}$ we just defined provides a generalization of the $\ell_1$ norm to the case of overlapping groups different from the norm $\Omega$ presented in Section 3.2. In fact, it is easy to see that solving Eq. (2.1) with the norm $\Omega_{\cup}$ is equivalent to solving
$$\min_{\substack{(\mathbf{v}^g \in \mathbb{R}^p)_{g \in \mathcal{G}} \\ \operatorname{supp}(\mathbf{v}^g) \subseteq g}} \; \frac{1}{n} \sum_{i=1}^n \ell\Bigl(y_i, \Bigl(\sum_{g \in \mathcal{G}} \mathbf{v}^g\Bigr)^{\!\top} \mathbf{x}_i\Bigr) \;+\; \lambda \sum_{g \in \mathcal{G}} d_g \, \|\mathbf{v}^g\|_2 \qquad (3.3)$$
and setting $\mathbf{w} = \sum_{g \in \mathcal{G}} \mathbf{v}^g$. This last equation shows that using the norm $\Omega_{\cup}$ can be interpreted as implicitly duplicating the variables belonging to several groups and regularizing with a weighted $\ell_1/\ell_2$ norm for disjoint groups in the expanded space. Again, in this case, a careful choice of the weights is important (Obozinski, Jacob and Vert, 2011).
This latent variable formulation pushes some of the vectors $\mathbf{v}^g$ to zero while keeping others with no zero components, hence leading to a vector $\mathbf{w}$ whose support is, in general, the union of the selected groups. Interestingly, it can be seen as a convex relaxation of a non-convex penalty encouraging similar sparsity patterns, which was introduced by Huang, Zhang and Metaxas (2011) and which we present in Section 3.4.
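The variable-duplication view can be sketched as follows (our own toy illustration; group indices and data are invented for the example): each overlapping group gets its own copy of the corresponding columns of the design matrix, and the original coefficient vector is recovered by summing the latent copies of each variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 4
X = rng.standard_normal((n, p))
# Overlapping groups: variables 1 and 2 each belong to two groups.
groups = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]

# Expanded design: one copy of the columns per group, so that a disjoint
# group Lasso in this space corresponds to the latent formulation (3.3).
X_expanded = np.hstack([X[:, g] for g in groups])
assert X_expanded.shape == (n, 6)

# A latent solution assigns one sub-vector v^g per group; w is their sum.
v = [np.array([1.0, 2.0]), np.array([0.0, 0.0]), np.array([3.0, -1.0])]
w = np.zeros(p)
for g, v_g in zip(groups, v):
    w[g] += v_g
assert np.allclose(w, [1.0, 2.0, 3.0, -1.0])
```

Here the second latent vector is entirely zero, so the support of $\mathbf{w}$ is the union of the first and third groups.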
Graph Lasso. One type of a priori knowledge commonly encountered takes the form of a graph defined on the set of input variables, which is such that connected variables are more likely to be simultaneously relevant or irrelevant; this type of prior is common in genomics where regulation, coexpression or interaction networks between genes (or their expression level) used as predictors are often available. To favor the selection of neighbors of a selected variable, it is possible to consider the edges of the graph as groups in the previous formulation (see Jacob, Obozinski and Vert, 2009; Rao et al., 2011).
Patterns consisting of a small number of intervals. A quite similar situation occurs when one knows a priori—typically for variables forming sequences (time series, strings, polymers)—that the support should consist of a small number of connected subsequences. In that case, one can consider the sets of variables forming connected subsequences (or connected subsequences of length at most $k$, for some small $k$) as the overlapping groups (Obozinski, Jacob and Vert, 2011).
3.4 Related Approaches to Structured Sparsity
Norm design through submodular functions.
Another approach to structured sparsity relies on submodular analysis (Bach, 2010). Starting from a non-decreasing, submodular^{3} set-function $F$ of the support of the parameter vector $\mathbf{w}$—i.e., $\mathbf{w} \mapsto F(\operatorname{supp}(\mathbf{w}))$—a structured sparsity-inducing norm can be built by considering its convex envelope (tightest convex lower bound) on the unit $\ell_\infty$-norm ball. By selecting the appropriate set-function $F$, structures similar to those described above can be obtained. This idea can be further extended to symmetric, submodular set-functions of the level sets of $\mathbf{w}$, thus leading to different types of structures (Bach, 2011), allowing to shape the level sets of $\mathbf{w}$ rather than its support. This approach can also be generalized to any set-function and to other priors on the nonzero variables than the $\ell_\infty$ norm (Obozinski and Bach, 2012).

^{3} Let $V$ be a finite set. A set-function $F : 2^V \to \mathbb{R}$ is said to be submodular if, for any subsets $A, B \subseteq V$, we have the inequality $F(A) + F(B) \geq F(A \cup B) + F(A \cap B)$; see Bach (2011) and references therein.
Non-convex approaches.
We mainly focus in this review on convex penalties, but many non-convex approaches have been proposed as well. In the same spirit as the norm of Eq. (3.2), Huang, Zhang and Metaxas (2011) considered the penalty

$$\psi(\mathbf{w}) \;=\; \min_{\mathcal{J} \subseteq \mathcal{G}} \Bigl\{ \sum_{g \in \mathcal{J}} \eta_g \;:\; \operatorname{supp}(\mathbf{w}) \subseteq \bigcup_{g \in \mathcal{J}} g \Bigr\},$$

where $\mathcal{G}$ is a given set of groups and $(\eta_g)_{g \in \mathcal{G}}$ is a set of positive weights which defines a coding length. In other words, the penalty $\psi$ measures, from an information-theoretic viewpoint, "how much it costs" to represent $\mathbf{w}$. Finally, in the context of compressed sensing, the work of Baraniuk et al. (2010) also focuses on union-closed families of supports, although without information-theoretic considerations. All of these non-convex approaches can in fact also be relaxed to convex optimization problems (Obozinski and Bach, 2012).
Other forms of sparsity.
We end this review by discussing sparse regularization functions encoding other types of structures than the structured sparsity penalties presented so far. We start with the total-variation penalty, originally introduced in the image processing community (Rudin, Osher and Fatemi, 1992), which encourages piecewise-constant signals. It can be found in the statistics literature under the name of "fused lasso" (Tibshirani et al., 2005). For one-dimensional signals, it can be seen as the $\ell_1$ norm of finite differences of a vector $\mathbf{w}$ in $\mathbb{R}^p$: $\|\mathbf{w}\|_{\mathrm{TV}} = \sum_{j=1}^{p-1} |w_{j+1} - w_j|$. Extensions have been proposed for multi-dimensional signals and for recovering piecewise-constant functions on graphs (Kim, Sohn and Xing, 2009).
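A minimal sketch of the one-dimensional total-variation penalty (our own example signals) shows why it favors piecewise-constant vectors:

```python
import numpy as np

def tv_1d(w):
    """One-dimensional total-variation penalty: sum_j |w_{j+1} - w_j|."""
    return np.sum(np.abs(np.diff(w)))

# A signal with a single jump is cheap; an oscillating one is expensive.
piecewise = np.array([2.0, 2.0, 2.0, 5.0, 5.0])
oscillating = np.array([2.0, 5.0, 2.0, 5.0, 2.0])
assert np.isclose(tv_1d(piecewise), 3.0)    # one jump of height 3
assert np.isclose(tv_1d(oscillating), 12.0)  # four jumps of height 3
```

Since the penalty is the $\ell_1$ norm of the differenced signal, minimizing it sparsifies the differences, i.e., produces flat segments separated by a few jumps.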
We presented group-sparsity penalties in Section 3.1, where the goal was to select a few groups of variables. A different approach, called the "exclusive Lasso," consists instead in selecting a few variables inside each group, with some applications in multi-task learning (Zhou, Jin and Hoi, 2010).
Finally, we would like to mention a few works on automatic feature grouping (Bondell and Reich, 2008; Shen and Huang, 2010; Zhong and Kwok, 2011), which could be used when no a priori group structure is available. These penalties are typically made of pairwise terms between all variables and encourage some coefficients to be similar, thereby forming "groups".
3.5 Convex Optimization with Proximal Methods
In this section, we briefly review proximal methods, a class of convex optimization methods particularly suited to the norms we have defined. They essentially make it possible to solve the problem regularized with a new norm at low implementation and computational costs. For a more complete presentation of optimization techniques adapted to sparsity-inducing norms, see Bach et al. (2012).
Proximal methods constitute a class of first-order techniques typically designed to solve problem (2.1) (Nesterov, 2007; Beck and Teboulle, 2009; Combettes and Pesquet, 2010). They take advantage of the structure of (2.1) as the sum of two convex terms. For simplicity, we will present here the proximal method known as forward-backward splitting, which assumes that at least one of these two terms is smooth. Thus, we will typically assume that the loss term $f : \mathbf{w} \mapsto \frac{1}{n}\sum_{i=1}^n \ell(y_i, \mathbf{w}^\top \mathbf{x}_i)$ is convex and differentiable, with Lipschitz-continuous gradient (as for the logistic or square loss), while $\Omega$ will only be assumed convex.
Proximal methods have become increasingly popular over the past few years, both in the signal processing (e.g., Becker, Bobin and Candes, 2009; Wright, Nowak and Figueiredo, 2009; Combettes and Pesquet, 2010, and numerous references therein) and in the machine learning communities (e.g., Jenatton et al., 2011b; Chen et al., 2011; Bach et al., 2012, and references therein). In a broad sense, these methods can be described as providing a natural extension of gradient-based techniques when the objective function to minimize has a non-smooth part. Proximal methods are iterative procedures. Their basic principle is to linearize, at each iteration, the function $f$ around the current estimate $\mathbf{w}^{(t)}$, and to update this estimate as the (unique, by strong convexity) solution of the so-called proximal problem. Under the assumption that $f$ is a smooth function, it takes the form:
$$\mathbf{w}^{(t+1)} = \operatorname*{arg\,min}_{\mathbf{w} \in \mathbb{R}^p} \; f(\mathbf{w}^{(t)}) + \nabla f(\mathbf{w}^{(t)})^\top (\mathbf{w} - \mathbf{w}^{(t)}) + \lambda \, \Omega(\mathbf{w}) + \frac{L}{2} \, \|\mathbf{w} - \mathbf{w}^{(t)}\|_2^2. \qquad (3.4)$$
The role of the added quadratic term is to keep the update in a neighborhood of $\mathbf{w}^{(t)}$, where $f$ stays close to its current linear approximation; $L > 0$ is a parameter which is an upper bound on the Lipschitz constant of $\nabla f$.
Provided that we can solve the proximal problem (3.4) efficiently, this first iterative scheme constitutes a simple way of solving problem (2.1). It appears under various names in the literature: proximal-gradient techniques (Nesterov, 2007), forward-backward splitting methods (Combettes and Pesquet, 2010), and iterative shrinkage-thresholding algorithms (Beck and Teboulle, 2009). Furthermore, it is possible to guarantee convergence rates for the function values (Nesterov, 2007; Beck and Teboulle, 2009): after $t$ iterations, the precision can be shown to be of order $O(1/t)$, which should be contrasted with the rates of subgradient methods, which are rather $O(1/\sqrt{t})$.
This first iterative scheme can actually be extended to "accelerated" versions (Nesterov, 2007; Beck and Teboulle, 2009). In that case, the update is not taken to be exactly the result of (3.4); instead, it is obtained as the solution of the proximal problem applied to a well-chosen linear combination of the previous estimates. In that case, the function values converge to the optimum with a rate of $O(1/t^2)$, where $t$ is the iteration number. From Nesterov (2004), we know that this rate is optimal within the class of first-order techniques; in other words, accelerated proximal-gradient methods can be as fast as gradient descent applied to a problem without any non-smooth component.
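The basic (non-accelerated) scheme can be sketched for the Lasso of Eq. (2.2) as follows (our own toy implementation and data; the step size uses the standard bound on the Lipschitz constant of the gradient):

```python
import numpy as np

def soft_threshold(u, tau):
    """Proximal operator of tau * ||.||_1: element-wise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def ista(X, y, lam, n_iter=500):
    """Forward-backward splitting for (1/2n)||y - Xw||^2 + lam ||w||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n  # Lipschitz constant of grad f
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n       # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam / L)  # proximal step (3.4)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -3.0, 1.5]
y = X @ w_true                             # noiseless toy observations

w_hat = ista(X, y, lam=0.05)
assert np.all(np.abs(w_hat[3:]) < 0.2)     # irrelevant variables shrunk
assert np.abs(w_hat[0] - 2.0) < 0.5        # relevant ones well estimated
```

Each iteration performs one gradient step on the smooth loss followed by one proximal (here, soft-thresholding) step on the non-smooth norm, exactly as in (3.4).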
We have so far given an overview of proximal methods without specifying how to handle their core part, namely the computation of the proximal problem, as defined in (3.4).
Proximal Problem.
We first rewrite problem (3.4) as

$$\mathbf{w}^{(t+1)} = \operatorname*{arg\,min}_{\mathbf{w} \in \mathbb{R}^p} \; \frac{1}{2} \Bigl\| \mathbf{w} - \Bigl( \mathbf{w}^{(t)} - \frac{1}{L} \nabla f(\mathbf{w}^{(t)}) \Bigr) \Bigr\|_2^2 + \frac{\lambda}{L} \, \Omega(\mathbf{w}).$$

Under this form, we can readily observe that, when $\lambda = 0$, the solution of the proximal problem is identical to the standard gradient update rule. The problem above can be more generally viewed as an instance of the proximal operator (Moreau, 1962) associated with $\mu \, \Omega$:

$$\operatorname{Prox}_{\mu\Omega} : \; \mathbf{u} \in \mathbb{R}^p \; \mapsto \; \operatorname*{arg\,min}_{\mathbf{v} \in \mathbb{R}^p} \; \frac{1}{2} \, \|\mathbf{u} - \mathbf{v}\|_2^2 + \mu \, \Omega(\mathbf{v}).$$
For many choices of regularizers $\Omega$, the proximal problem has a closed-form solution, which makes proximal methods particularly efficient. It turns out that for the norms defined in this paper, we can compute the proximal operator exactly and efficiently in a large number of cases (see Bach et al., 2012). If $\Omega$ is chosen to be the $\ell_1$ norm, the proximal operator is simply the soft-thresholding operator applied elementwise (Donoho and Johnstone, 1995). More formally, we have $[\operatorname{Prox}_{\mu\|\cdot\|_1}(\mathbf{u})]_j = \operatorname{sign}(u_j) \max(|u_j| - \mu, 0)$ for all $j$ in $\{1,\dots,p\}$. For the group Lasso penalty of Eq. (3.1) with $\ell_2$ norms, the proximal operator is a group-thresholding operator and can also be computed in closed form: $[\operatorname{Prox}_{\mu\Omega}(\mathbf{u})]_g = \max\bigl(1 - \mu d_g / \|\mathbf{u}_g\|_2, \, 0\bigr) \, \mathbf{u}_g$ for all $g$ in $\mathcal{G}$. For norms with hierarchical groups of variables (in the sense defined in Section 3.2), the proximal operator can be computed by a composition of group-thresholding operators in a time linear in the number $p$ of variables (Jenatton et al., 2011b). In other settings, e.g., for general overlapping groups, the exact proximal operator implies a more expensive polynomial dependency on $p$ using network-flow techniques (Mairal et al., 2011), but approximate computation is possible without harming the convergence speed (Schmidt, Le Roux and Bach, 2011). Most of these norms and the associated proximal problems are implemented in the open-source software SPAMS^{4}.

^{4} http://www.di.ens.fr/willow/SPAMS/.
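The closed-form group-thresholding operator for disjoint groups can be sketched as follows (our own illustration with toy groups and unit weights $d_g$):

```python
import numpy as np

def prox_group(u, groups, tau):
    """Proximal operator of tau * sum_g ||u_g||_2 for disjoint groups:
    each group is either zeroed out or scaled toward zero."""
    out = u.copy()
    for g in groups:
        norm_g = np.linalg.norm(u[g])
        out[g] = 0.0 if norm_g <= tau else (1.0 - tau / norm_g) * u[g]
    return out

groups = [np.array([0, 1]), np.array([2, 3])]
u = np.array([3.0, 4.0, 0.3, 0.4])
v = prox_group(u, groups, tau=1.0)
assert np.allclose(v[:2], [2.4, 3.2])  # ||(3,4)|| = 5, scaled by 1 - 1/5
assert np.allclose(v[2:], [0.0, 0.0])  # ||(0.3,0.4)|| = 0.5 <= 1: zeroed
```

This is the block-wise analogue of soft-thresholding: small groups are set exactly to zero, which is the mechanism by which the group Lasso selects entire blocks.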
In summary, with proximal methods, generalizing algorithms from the $\ell_1$ norm to a structured norm only requires the ability to compute the corresponding proximal operator, which can be done efficiently in many cases.
3.6 Theoretical Analysis
Sparse methods are traditionally analyzed according to three different criteria; it is often assumed that the data were generated by a sparse loading vector $\mathbf{w}^*$. Denoting by $\hat{\mathbf{w}}$ a solution of the estimation problem in Eq. (2.1), traditional statistical consistency results aim at showing that $\|\hat{\mathbf{w}} - \mathbf{w}^*\|$ is small for a certain norm $\|\cdot\|$; model consistency considers the estimation of the support of $\mathbf{w}^*$ as a criterion; prediction efficiency, finally, only cares about the prediction of the model, i.e., with the square loss, the quantity $\frac{1}{n}\|\mathbf{X}\hat{\mathbf{w}} - \mathbf{X}\mathbf{w}^*\|_2^2$ has to be as small as possible.
A striking consequence of assuming that $\mathbf{w}^*$ has many zero components is that, for the three criteria, consistency is achievable even when $p$ is much larger than $n$ (Zhao and Yu, 2006; Wainwright, 2009; Bickel, Ritov and Tsybakov, 2009; Zhang, 2009).
However, to relax the often unrealistic assumption that the data are generated by a sparse loading vector—and also because a good predictor, especially in the high-dimensional setting, can be much sparser than any potential true model generating the data—prediction efficiency is often formulated in the form of oracle inequalities. There, the performance of the estimator is upper bounded by the performance of any function in a fixed complexity class (reflecting approximation error), plus a complexity term characterizing the class and reflecting the hardness of estimation in that class. We refer the reader to van de Geer (2010) for a review and references on oracle results for the Lasso and the group Lasso.
It should be noted that model selection consistency and prediction efficiency are obtained in quite different regimes of regularization, so that it is not possible to obtain both types of consistency with the same Lasso estimator (Shalev-Shwartz, Srebro and Zhang, 2010). For prediction consistency, the regularization parameter is easily chosen by cross-validation on the prediction error. For model selection consistency, the regularization coefficient should typically be much larger than for prediction consistency; but rather than trying to select an optimal regularization parameter in that case, it is more natural to consider the collection of models obtained along the regularization path and to apply usual model selection methods to choose the best model in the collection. One method that works reasonably well in practice, sometimes called "OLS hybrid" for the least squares loss (Efron et al., 2004), consists in refitting the different models without regularization and choosing the model with the best fit by cross-validation.
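The "OLS hybrid" idea can be sketched as follows (a minimal sketch with our own helper names; the candidate supports stand in for the supports collected along a regularization path, and a single held-out set replaces cross-validation):

```python
import numpy as np

def ols_hybrid(X_tr, y_tr, X_val, y_val, supports):
    """Refit ordinary least squares on each candidate support and
    return the support with the smallest validation error."""
    best, best_err = None, np.inf
    for S in supports:
        coef, *_ = np.linalg.lstsq(X_tr[:, S], y_tr, rcond=None)
        err = np.mean((X_val[:, S] @ coef - y_val) ** 2)
        if err < best_err:
            best, best_err = S, err
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(100)
# Hypothetical supports, as would be collected along a regularization path.
supports = [[1], [0], [0, 2], [0, 1, 2]]
chosen = ols_hybrid(X[:50], y[:50], X[50:], y[50:], supports)
```

Supports missing a truly active variable yield a large refitted validation error, so the selected model contains the true support.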
In structured sparse situations, such high-dimensional phenomena can also be characterized. Essentially, if one can make the assumption that w* is compatible with the additional prior knowledge on the sparsity pattern encoded in the norm Ω, then some of the assumptions required for consistency can sometimes be relaxed (see Huang and Zhang, 2010; Jenatton, Audibert and Bach, 2011; Huang, Zhang and Metaxas, 2011; Bach, 2010), and faster rates can sometimes be obtained (Huang and Zhang, 2010; Huang, Zhang and Metaxas, 2011; Obozinski, Wainwright and Jordan, 2011; Negahban and Wainwright, 2011; Bach, 2009; Percival, 2012). However, one major difficulty that arises is that some of the conditions for recovery, or for obtaining fast rates of convergence, depend on an intricate interaction between the sparsity pattern, the design matrix and the noise covariance, which leads in each case to sufficient conditions that are typically not directly comparable between different structured or unstructured cases (Jenatton, Audibert and Bach, 2011). Moreover, even if the sufficient conditions are satisfied simultaneously for the norms to be compared, sharper bounds on rates and sample complexities would still often be needed to characterize more accurately the improvement resulting from having a stronger structural a priori.
4 Sparse principal component analysis and dictionary learning
Unsupervised learning aims at extracting latent representations of the data that are useful for analysis, visualization, denoising, or for extracting relevant information to solve a subsequent supervised learning problem. Sparsity and structured sparsity are essential tools to specify constraints on the representations that improve their identifiability and interpretability.
4.1 Analysis and Synthesis Views of PCA
Depending on how the latent representation is extracted or constructed from the data, it is useful to distinguish two points of view. This distinction is well illustrated in the case of PCA.
In the analysis view, PCA aims at finding sequentially
a set of directions in space that explain the largest fraction of the variance of the data. This can be formulated as an iterative procedure in which a onedimensional projection of the data with maximal variance is found first, then the data are projected on the orthogonal subspace (corresponding to a
deflation of the covariance matrix), and the process is iterated. In the synthesis view, PCA aims at finding a set of vectors, or dictionary elements (in a terminology closer to signal processing), such that all observed signals admit a linear decomposition on that set with low reconstruction error. In the case of PCA, these two formulations lead to the same solution (an eigenvalue problem). However, in extensions of PCA, in which either the dictionary elements or the decompositions of signals are constrained to be sparse or structured, they lead to different algorithms with different solutions.
The analysis interpretation leads to sequential formulations (d’Aspremont, Bach and El Ghaoui, 2008; Moghaddam, Weiss and Avidan, 2006; Jolliffe, Trendafilov and Uddin, 2003) that consider components one at a time and perform a deflation of the covariance matrix at each step (see Mackey, 2009). The synthesis interpretation leads to nonconvex global formulations (see, e.g., Zou, Hastie and Tibshirani, 2006; Moghaddam, Weiss and Avidan, 2006; Aharon, Elad and Bruckstein, 2006; Mairal et al., 2010) which estimate simultaneously all principal components, typically do not require the orthogonality of the components, and are referred to as matrix factorization problems (Singh and Gordon, 2008; Bach, Mairal and Ponce, 2008) in machine learning, and dictionary learning in signal processing (Olshausen and Field, 1996).
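The equivalence of the two views for plain PCA can be checked numerically: extracting components one at a time with covariance deflation recovers the same directions as a global eigendecomposition (a minimal sketch; the function and variable names are our own):

```python
import numpy as np

def sequential_pca(X, k):
    """Analysis view: repeatedly take the leading eigenvector of the
    covariance matrix, then deflate the covariance and iterate."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)
    components = []
    for _ in range(k):
        vals, vecs = np.linalg.eigh(C)
        lam, v = vals[-1], vecs[:, -1]     # leading eigenpair
        components.append(v)
        C = C - lam * np.outer(v, v)       # deflation step
    return np.array(components)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4)) @ np.diag([3.0, 2.0, 1.0, 0.5])
V_seq = sequential_pca(X, 2)
# Global (synthesis-compatible) view: top eigenvectors of the full covariance.
Xc = X - X.mean(axis=0)
_, vecs = np.linalg.eigh(Xc.T @ Xc / len(X))
```

Up to sign, the sequentially extracted directions coincide with the top eigenvectors, which is exactly the equivalence broken by sparse or structured constraints.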
While we could also impose structured sparse priors in the analysis view, we will from now on consider the synthesis view, which we introduce with the terminology of dictionary learning.
4.2 Dictionary Learning
Given a matrix X in ℝ^{p×n} whose n columns correspond to observations in ℝ^p, the dictionary learning problem is to find a matrix D in ℝ^{p×k}, called the dictionary, such that each observation can be well approximated by a linear combination of the k columns of D, called the dictionary elements. If A in ℝ^{k×n} is the matrix of the linear combination coefficients, or decomposition coefficients (or codes), with the ith column α_i of A being the coefficients for the ith signal x_i, the matrix product DA is called a decomposition of X.
Learning simultaneously the dictionary and the coefficients corresponds to a matrix factorization problem (see Witten, Tibshirani and Hastie, 2009, and references therein).
As formulated by Bach, Mairal and Ponce (2008) or Witten, Tibshirani and Hastie (2009), it is natural, when learning a decomposition, to penalize or constrain some norms or pseudo-norms of A and D, say Ω_a and Ω_d respectively, to encode prior information, typically sparsity, about the decomposition of X. While in general the penalties could be defined globally on the matrices A and D, we assume that each column of A and D is penalized separately. This can be written as
(4.1)  min_{D ∈ ℝ^{p×k}, A ∈ ℝ^{k×n}}  (1/2) ‖X − DA‖_F² + λ Σ_{j=1}^k Ω_d(d_j),   such that  Ω_a(α_i) ≤ 1 for all i = 1, …, n,
where the regularization parameter λ controls to which extent the dictionary is regularized. If we assume that both regularizations Ω_a and Ω_d are convex, problem (4.1) is convex with respect to A for fixed D and vice versa. It is however not jointly convex in the pair (A, D), but alternating optimization schemes generally lead to good performance in practice.
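Alternating optimization for a problem of this form can be sketched as follows (a minimal sketch of one common instantiation, with an ℓ1 penalty on the codes, unit-norm dictionary columns, and ISTA steps for the code update; all names and parameter values are our own choices, not the formulation of any particular paper):

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def dictionary_learning(X, k, lam=0.1, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    p, n = X.shape
    D = rng.standard_normal((p, k))
    D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary elements
    A = np.zeros((k, n))
    for _ in range(n_iter):
        # Code update: a few ISTA steps on the l1-penalized least squares.
        step = 1.0 / np.linalg.norm(D.T @ D, 2)
        for _ in range(10):
            A = soft_threshold(A - step * D.T @ (D @ A - X), step * lam)
        # Dictionary update: least squares on the current codes, then renormalize.
        D = X @ A.T @ np.linalg.pinv(A @ A.T)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, A

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 50))
D, A = dictionary_learning(X, k=5)
```

Each half-step solves a convex subproblem, which is why such schemes decrease the objective and work well despite the lack of joint convexity.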
4.3 Imposing Sparsity
The choice of the two norms Ω_a and Ω_d is crucial and heavily influences the behavior of dictionary learning. Without regularization, any solution is such that DA is the best fixed-rank approximation of X, and the problem can be solved exactly with a classical PCA. When Ω_a is the ℓ1 norm and Ω_d the ℓ2 norm, we aim at finding a dictionary such that each signal admits a sparse decomposition on the dictionary. In this context, we are essentially looking for a basis in which the data have sparse decompositions, a framework we refer to as sparse dictionary learning. On the contrary, when Ω_a is the ℓ2 norm and Ω_d the ℓ1 norm, the formulation induces sparse principal components, i.e., atoms with many zeros, a framework we refer to as sparse PCA. In Sections 4.4 and 4.5, we replace the ℓ1 norm by the structured norms introduced in Section 3, leading to structured versions of the above estimation problems.
4.4 Adding Structures to Principal Components
One of PCA’s main shortcomings is that, even if it finds a small number of important factors, the factors themselves typically involve all original variables. In the last decade, several alternatives to PCA which find sparse and potentially interpretable factors have been proposed, notably nonnegative matrix factorization (NMF) (Lee and Seung, 1999) and sparse PCA (SPCA) (Jolliffe, Trendafilov and Uddin, 2003; Zou, Hastie and Tibshirani, 2006; Zass and Shashua, 2007; Witten, Tibshirani and Hastie, 2009).
However, in many applications, only constraining the size of the supports of the factors does not seem appropriate, because the considered factors are not only expected to be sparse but also to have a certain structure. In fact, the popularity of NMF for face image analysis owes essentially to the fact that the method happens to retrieve sets of variables that are partly localized on the face and capture some features or parts of the face which seem intuitively meaningful given our a priori. We might therefore improve the quality of the factors by enforcing this a priori directly in the matrix factorization constraints. More generally, it would be desirable to encode higher-order information about the supports that reflects the structure
of the data. For example, in computer vision, features associated with the pixels of an image are naturally organized on a grid, and the supports of factors explaining the variability of images could be expected to be localized, connected, or to have some other regularity with respect to that grid. Similarly, in genomics, factors explaining the gene expression patterns observed on a microarray could be expected to involve groups of genes corresponding to biological pathways, or sets of genes that are neighbors in a protein-protein interaction network.
Based on these remarks and with the norms presented earlier, sparse PCA is readily extended to structured sparse PCA (SSPCA), which explains the variance of the data by factors that are not only sparse but also respect some a priori structural constraints deemed relevant to model the data at hand: slight variants of the regularization terms defined in Section 3 (with the groups defined in Figure 4) can be used successfully for Ω_d.
Experiments on face recognition.
By definition, dictionary learning belongs to unsupervised learning; in that sense, our method may at first appear as a tool for exploratory data analysis, which leads us naturally to qualitatively analyze the results of our decompositions (e.g., by visualizing the learned dictionaries). This is obviously a difficult and subjective exercise, beyond the assessment of the consistency of the method in artificial examples where the "true" dictionary is known. For quantitative results, see Jenatton, Obozinski and Bach (2010); a Matlab toolbox implementing the method can be downloaded from http://www.di.ens.fr/~jenatton/.
We apply SSPCA on the cropped AR Face Database (Martinez and Kak, 2001), which consists of 2600 face images corresponding to 100 individuals (50 women and 50 men). For each subject, there are 14 non-occluded poses and 12 occluded ones (the occlusions are due to sunglasses and scarves). We reduce the resolution of the images for computational reasons.
Figure 7 shows examples of learned dictionaries for NMF, unstructured sparse PCA (SPCA), and SSPCA. While NMF finds sparse but spatially unconstrained patterns, SSPCA selects sparse convex areas that correspond to more natural segments of faces. For instance, meaningful parts such as the mouth and the eyes are recovered by the dictionary.
4.5 Hierarchical Dictionary Learning
In this section, we consider sparse dictionary learning, where the structured sparse prior knowledge is put on the decomposition coefficients, i.e., the matrix A in Eq. (4.1), and present an application to text documents.
Text documents.
The goal of probabilistic topic models is to find a low-dimensional representation of a collection of documents, where the representation should provide a semantic description of the collection. Approaching the problem in a parametric Bayesian framework, latent Dirichlet allocation (LDA; Blei, Ng and Jordan, 2003) models documents, represented as vectors of word counts, as a mixture of a predefined number of latent topics, defined as multinomial distributions over a fixed vocabulary. The number of topics is usually small compared to the size of the vocabulary, so that the topic proportions of each document provide a compact representation of the corpus.
In fact, the problem addressed by LDA is fundamentally a matrix factorization problem. For instance, Buntine (2002) argued that LDA can be interpreted as a Dirichlet-multinomial counterpart of factor analysis. We can actually cast the problem in the dictionary learning formulation that we presented before (doing so, we simply trade the multinomial likelihood for a least-squares formulation). Indeed, suppose that the n signals in X are the so-called bag-of-words representations of n documents over a vocabulary of p words, i.e., x_i is a vector whose jth component is the empirical frequency in document i of the jth word of a fixed lexicon. If we constrain the entries of D and A to be nonnegative, and the dictionary elements to have unit norm, the decomposition (D, A) can be interpreted as the parameters of a topic-mixture model. Sparsity here ensures that a document is described by a small number of topics.

Switching to structured sparsity makes it possible, in this case, to organize the dictionary of topics automatically in the process of learning it. Assume that Ω_a in Eq. (4.1) is a tree-structured regularization, such as illustrated in Figure 5; in this case, in the light of Section 3.2, if the decomposition of a document involves a certain topic, then all ancestral topics in the tree are also present in the topic decomposition. Since the hierarchy is shared by all documents, the topics close to the root participate in every decomposition, and given that the dictionary is learned, this mechanism forces those topics to be quite generic, essentially gathering the lexicon which is common to all documents. Conversely, the deeper the topics in the tree, the more specific they should be. It should be noted that such hierarchical dictionaries can also be obtained with generative probabilistic models, typically based on nonparametric Bayesian priors over trees or paths in trees, which extend the LDA model to topic hierarchies (Blei, Griffiths and Jordan, 2010; Adams, Ghahramani and Jordan, 2010).
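For tree-structured groups, the composition result from Section 3.5 gives a direct implementation: apply the group-thresholding of each node's group (the node and all its descendants), processing children before parents. The sketch below is minimal and assumes that ordering of the groups; the function names are our own:

```python
import numpy as np

def group_threshold(v, idx, lam):
    """Proximal operator of lam * ||v_idx||_2 acting in place on the subvector idx."""
    norm = np.linalg.norm(v[idx])
    v[idx] = max(1.0 - lam / norm, 0.0) * v[idx] if norm > 0 else 0.0
    return v

def prox_tree(u, groups, lam):
    """groups: one index list per tree node (the node plus its descendants),
    ordered so that every child group appears before its parent group."""
    v = u.copy()
    for g in groups:
        v = group_threshold(v, g, lam)
    return v

# Chain-structured tree over 3 variables: leaf {2}, then {1, 2}, then root {0, 1, 2}.
w = prox_tree(np.array([2.0, 2.0, 2.0]), [[2], [1, 2], [0, 1, 2]], lam=0.5)
```

With a single group the composition reduces to plain group-thresholding, and the cost of the full operator is linear in the total size of the groups.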
Visualization of NIPS proceedings.
We qualitatively illustrate our approach on the NIPS proceedings from 1988 through 1999 (Griffiths and Steyvers, 2004). After removing words appearing fewer than 10 times, the dataset is composed of 1714 articles, with a vocabulary of 8274 words. As explained above, we enforce both the dictionary and the sparse coefficients to be nonnegative, and constrain the dictionary elements to have unit norm. Figure 8 displays an example of a learned dictionary with 13 topics, obtained by using a tree-structured penalty (see Section 3.2) on the coefficients and by selecting the regularization parameter manually. (The regularization parameter striking a good compromise between sparsity and reconstruction of the data is chosen by hand because (a) cross-validation would yield a significantly less sparse dictionary and (b) model selection criteria would not apply without serious caveats, since the dictionary is learned at the same time.) As expected, and similarly to Blei, Griffiths and Jordan (2010), we capture the stop words at the root of the tree, and topics reflecting the different subdomains of the conference, such as neuroscience, optimization or learning theory.
5 Highdimensional nonlinear variable selection
In this section, we show how structured sparsity-inducing norms may be used to provide an efficient solution to the problem of high-dimensional nonlinear variable selection. Namely, given p variables x_1, …, x_p, our aim is to find a nonlinear function f(x_1, …, x_p) which depends only on a few variables. First approaches to the problem have considered restricted functional forms such as f(x) = f_1(x_1) + ⋯ + f_p(x_p), where the f_j are univariate nonlinear functions (Ravikumar et al., 2009; Bach, 2008). However, many nonlinear functions cannot be expressed as sums of functions of this form. Additional interactions have been added, leading to functions of the form f(x) = Σ_{j<k} f_{jk}(x_j, x_k) (Lin and Zhang, 2006). While second-order interactions make the class of functions larger, our aim in this section is to consider functions which can be expressed as a sparse linear combination of the form f(x) = Σ_{G ⊆ {1,…,p}} f_G(x_G), i.e., a combination of functions defined on potentially larger subsets of variables.
The main difficulties associated with this problem are that (1) each function f_G has to be estimated, leading to a nonparametric problem, and (2) there are exponentially many such functions. We propose, however, an approach that overcomes both difficulties, based on the ideas that estimating functions rather than vectors can be tackled with estimators in reproducing kernel Hilbert spaces (see Section 5.1), and that the complexity issues can be addressed by imposing some structure among all the subsets (see Section 5.2).
5.1 Multiple Kernel Learning: From Linear to NonLinear Predictions
Reproducing kernel Hilbert spaces (RKHSes) are arguably the simplest spaces for the nonparametric estimation of nonlinear functions, since most learning algorithms for linear models port directly to any RKHS via simple kernelization. We therefore start by reviewing learning from a single reproducing kernel, and then from multiple ones, since our approach will be based on combining functions from multiple RKHSes (in fact, a hierarchy of them). For more details, see Bach (2008).
Single kernel learning.
Let us assume that the input data points x_1, …, x_n belong to a set 𝒳 (not necessarily ℝ^p), and consider predictors of the form x ↦ ⟨w, Φ(x)⟩, where Φ is a map from the input space 𝒳 to a reproducing kernel Hilbert space ℱ (associated to the kernel function k), which we refer to as the feature space. These predictors are linearly parameterized by w, but may depend nonlinearly on x. We consider the following estimation problem:

min_{w ∈ ℱ}  Σ_{i=1}^n ℓ(y_i, ⟨w, Φ(x_i)⟩) + (λ/2) ‖w‖²,
where ‖·‖ is the Hilbertian norm associated to ℱ. The representer theorem (Kimeldorf and Wahba, 1971) states that, for all loss functions ℓ (potentially nonconvex), the solution admits an expansion of the form w = Σ_{i=1}^n α_i Φ(x_i), so that, replacing w by this expression, we can now minimize

min_{α ∈ ℝ^n}  Σ_{i=1}^n ℓ(y_i, (Kα)_i) + (λ/2) α⊤Kα,
where K is the kernel matrix, an n×n matrix whose (i, j) element is equal to k(x_i, x_j). This optimization problem involves the observations only through the kernel matrix K, and can thus be solved as long as k can be evaluated efficiently. See Shawe-Taylor and Cristianini (2004) for more details.
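For the square loss, the minimization over α has a closed form, which makes the role of the kernel matrix concrete (a minimal sketch; the Gaussian kernel, parameter values, and names are our own choices):

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, gamma=1.0):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge(X, y, lam=1e-3, gamma=1.0):
    """With the square loss, minimizing (1/2) sum_i (y_i - (K alpha)_i)^2
    + (lam/2) alpha^T K alpha gives alpha = (K + lam I)^{-1} y."""
    K = gaussian_kernel_matrix(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xnew: gaussian_kernel_matrix(Xnew, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0])
predict = kernel_ridge(X, y, lam=1e-6, gamma=10.0)
```

Note that only kernel evaluations appear: the feature map Φ is never instantiated.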
Multiple kernel learning (MKL).
We can now assume that we are given q Hilbert spaces ℱ_1, …, ℱ_q, and look for predictors of the form f = f_1 + ⋯ + f_q, where each f_j belongs to ℱ_j (notice that the functions f_j are not restricted to depend only on a subpart of x as before). In order to have many f_j equal to zero, we can penalize f using a sum of norms similar to the group Lasso penalties introduced earlier, namely Σ_{j=1}^q ‖f_j‖_{ℱ_j}. This leads to the selection of functions. Moreover, it turns out that the optimization problem may also be expressed in terms of the kernel matrices K_1, …, K_q, and it is equivalent to learning a sparse linear combination Σ_j η_j K_j of kernel matrices (with many η_j equal to zero), with f then the solution of the single kernel learning problem for K = Σ_j η_j K_j. For more details, see Bach (2008).
From MKL to sparse generalized additive models.
As shown above, the MKL framework is defined for any set of RKHSes defined on the same base set 𝒳. When the base set is itself a cartesian product of base sets, i.e., 𝒳 = 𝒳_1 × ⋯ × 𝒳_p, it is common to consider RKHSes each defined on a single 𝒳_j, leading to the desired functional form f(x) = f_1(x_1) + ⋯ + f_p(x_p). To overcome the limitation of this functional form, we need to consider a more complex expansion.
5.2 Hierarchical Kernel Learning
In this section, we consider functional expansions with up to 2^p terms corresponding to different RKHSes, each defined on the cartesian product of a subset of the separate input spaces. Specifically, we consider functions of the form f(x) = Σ_{G ⊆ {1,…,p}} f_G(x_G), with each f_G chosen to live in an RKHS ℱ_G defined on the variables indexed by G. Penalizing by the norm Σ_G ‖f_G‖_{ℱ_G} would in theory lead to an appropriate selection of functions from the various RKHSes (and to learning a sparse linear combination of the corresponding kernel matrices). However, in practice, there are 2^p such predictors, which is not algorithmically feasible.
This is where structured sparsity comes into play. In order to obtain polynomial-time algorithms and theoretically controlled predictive performance, we may add an extra constraint to the problem. Namely, we endow the power set of {1, …, p} with the partial order of set inclusion, and in this directed acyclic graph (DAG), we require that predictors select a subset only after all of its ancestors have been selected. This can be achieved in a convex formulation using a structured-sparsity-inducing norm of the type presented in Section 3.2, but defined by a hierarchy of groups as follows:

Ω(f) = Σ_{G ⊆ {1,…,p}} ( Σ_{H ⊇ G} ‖f_H‖² )^{1/2}.
As illustrated in Figure 9, this norm corresponds to overlapping groups of variables defined on the directed acyclic graph of all subsets of {1, …, p}. We will explain briefly how introducing this norm may lead to polynomial-time algorithms, and what theoretical guarantees are associated with it. Illustrations of the application of hierarchical kernel learning to real data can be found in Bach (2009).
Polynomialtime estimation algorithm.
While we are, a priori, still facing an estimation problem with 2^p functions, it can be solved using an active set method, which considers adding a component f_G (resp. a kernel K_G) to the active set of predictors (resp. kernels). The two crucial aspects are (1) to add the right kernel, i.e., to choose which of the 2^p candidates to add, and (2) when to stop. As shown in Bach (2009), these steps may be carried out efficiently for certain collections of RKHSes ℱ_G, in particular those for which we are able to compute efficiently (i.e., in polynomial time in p) the sum of the kernel functions over subsets, Σ_G k_G(x, x′). This is the case, for example, for Gaussian kernels.
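When each subset kernel factorizes over base kernels, k_G(x, x′) = Π_{j∈G} k_j(x_j, x_j′), the sum over all subsets factorizes as Σ_{G ⊆ {1,…,p}} k_G(x, x′) = Π_{j=1}^p (1 + k_j(x_j, x_j′)), computable in O(p) instead of O(2^p) (with the empty set contributing the constant kernel 1). A minimal numerical check, with our own names and brute-force enumeration:

```python
import itertools
import numpy as np

def sum_over_subsets_bruteforce(base):
    """Sum over all subsets G of the products of base kernel values k_j, j in G
    (the empty set contributes 1)."""
    p = len(base)
    total = 0.0
    for r in range(p + 1):
        for G in itertools.combinations(range(p), r):
            total += np.prod([base[j] for j in G]) if G else 1.0
    return total

def sum_over_subsets_fast(base):
    """Same sum in O(p) via the factorization prod_j (1 + k_j)."""
    return float(np.prod(1.0 + np.asarray(base)))

# Base kernel evaluations k_j(x_j, x_j') for p = 6 variables, e.g., Gaussian.
base = np.exp(-np.array([0.3, 1.2, 0.05, 2.0, 0.7, 0.4]))
```

The 2^p-term enumeration and the p-term product agree exactly, which is the algebraic fact that keeps the active set method tractable.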
Theoretical analysis.
Bach (2009) showed that under appropriate assumptions, estimation under high-dimensional scaling, i.e., when the number 2^p of kernels is much larger than the number n of observations, is possible in this situation, in spite of the fact that the number of terms in the expansion is now potentially doubly exponential in p.
6 Conclusion
In this paper, we reviewed several approaches for structured sparsity, based on convex optimization and the design of appropriate sparsity-inducing norms. Analyses and algorithms for the traditional ℓ1 norm can readily be extended to these new norms, making them efficient and flexible tools for introducing prior knowledge into high-dimensional statistical problems. We also presented several applications to supervised and unsupervised learning problems, where the proper use of additional knowledge leads to improved interpretability of the sparse estimates and/or increased predictive performance.
Acknowledgements
Francis Bach, Rodolphe Jenatton and Guillaume Obozinski are supported in part by ANR under grant MGA ANR-07-BLAN-0311 and the European Research Council (SIERRA Project). Julien Mairal is supported by the NSF grant SES-0835531 and NSF award CCF-0939370. The authors would like to thank the anonymous reviewers, whose comments have greatly contributed to improving the quality of this paper.
References
 Adams, Ghahramani and Jordan (2010) [author] Adams, RyanR., Ghahramani, ZoubinZ. Jordan, MichaelM. (2010). TreeStructured Stick Breaking for Hierarchical Data. In Advances in Neural Information Processing Systems 23 (J.J. Lafferty, C. K. I.C. K. I. Williams, J.J. ShaweTaylor, R. S.R. S. Zemel A.A. Culotta, eds.) 19–27.
 Aharon, Elad and Bruckstein (2006) [author] Aharon, M.M., Elad, M.M. Bruckstein, A.A. (2006). KSVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Processing 54 4311–4322.
 Bach (2008) [author] Bach, F.F. (2008). Consistency of the group Lasso and multiple kernel learning. Journal of Machine Learning Research 9 1179–1225.
 Bach (2009) [author] Bach, F.F. (2009). Exploring large feature spaces with hierarchical multiple kernel learning. In Neural Information Processing Systems 21.
 Bach (2010) [author] Bach, F.F. (2010). Structured Sparsityinducing Norms Through Submodular Functions. In Advances in Neural Information Processing Systems 23.
 Bach (2011) [author] Bach, FF. (2011). Learning with Submodular Functions: A Convex Optimization Perspective Technical Report No. 00645271, HAL.
 Bach (2011) [author] Bach, F.F. (2011). Shaping level sets with submodular functions. In Advances in Neural Information Processing Systems 24.
 Bach, Mairal and Ponce (2008) [author] Bach, F.F., Mairal, J.J. Ponce, J.J. (2008). Convex Sparse Matrix Factorizations Technical Report, Preprint arXiv:0812.1869.
 Bach et al. (2012) [author] Bach, F.F., Jenatton, R.R., Mairal, J.J. Obozinski, G.G. (2012). Optimization with sparsityinducing penalties. Foundations and Trends in Machine Learning 4 1–106.
 Baraniuk et al. (2010) [author] Baraniuk, R. G.R. G., Cevher, V.V., Duarte, M. F.M. F. Hegde, C.C. (2010). Modelbased compressive sensing. IEEE Transactions on Information Theory 56 1982–2001.
 Beck and Teboulle (2009) [author] Beck, A.A. Teboulle, M.M. (2009). A fast iterative shrinkagethresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2 183–202.
 Becker, Bobin and Candes (2009) [author] Becker, S.S., Bobin, J.J. Candes, E.E. (2009). NESTA: A Fast and Accurate Firstorder Method for Sparse Recovery. SIAM Journal on Imaging Sciences 4 1–39.
 Bickel, Ritov and Tsybakov (2009) [author] Bickel, P.P., Ritov, Y.Y. Tsybakov, A.A. (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37 1705–1732.
 Blei, Griffiths and Jordan (2010) [author] Blei, D.D., Griffiths, T. L.T. L. Jordan, M. I.M. I. (2010). The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM 57 1–30.
 Blei, Ng and Jordan (2003) [author] Blei, D.D., Ng, A.A. Jordan, M.M. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3 993–1022.
 Bondell and Reich (2008) [author] Bondell, H. D.H. D. Reich, B. J.B. J. (2008). Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics 64 115–123.
 Borwein and Lewis (2006) [author] Borwein, J. M.J. M. Lewis, A. S.A. S. (2006). Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer.
 Buntine (2002) [author] Buntine, W. L.W. L. (2002). Variational Extensions to EM and Multinomial PCA. In Proceedings of the European Conference on Machine Learning (ECML).

Candès and Tao (2005)
[author] Candès, E. J.E. J. Tao, T.T. (2005). Decoding by linear programming. IEEE Transactions on Information Theory 51 4203–4215.
 Cevher et al. (2008) [author] Cevher, V.V., Duarte, M. F.M. F., Hegde, C.C. Baraniuk, R. G.R. G. (2008). Sparse signal recovery using Markov random fields. In Advances in Neural Information Processing Systems 20.
 Chen, Donoho and Saunders (1998) [author] Chen, S. S.S. S., Donoho, D. L.D. L. Saunders, M. A.M. A. (1998). Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing 20 33–61.

Chen et al. (2011)
[author] Chen, X.X., Lin, Q.Q., Kim, S.S., Carbonell, J. G.J. G. Xing, E. P.E. P. (2011). Smoothing Proximal Gradient Method for General Structured Sparse Learning. In Proceedings of the TwentyFifth Conference on Uncertainty in Artificial Intelligence (UAI).
 Combettes and Pesquet (2010) [author] Combettes, P. L.P. L. Pesquet, J. C.J. C. (2010). Proximal splitting methods in signal processing. In FixedPoint Algorithms for Inverse Problems in Science and Engineering Springer.
 d’Aspremont, Bach and El Ghaoui (2008) [author] d’Aspremont, A.A., Bach, F.F. El Ghaoui, L.L. (2008). Optimal Solutions for Sparse Principal Component Analysis. Journal of Machine Learning Research 9 1269–1294.
 Donoho and Johnstone (1995) [author] Donoho, D. L.D. L. Johnstone, I. M.I. M. (1995). Adapting to Unknown Smoothness Via Wavelet Shrinkage. Journal of the American Statistical Association 90 1200–1224.
 Efron et al. (2004) [author] Efron, B.B., Hastie, T.T., Johnstone, I.I. Tibshirani, R.R. (2004). Least angle regression. Annals of Statistics 32 407–451.
 Friedman, Hastie and Tibshirani (2010) [author] Friedman, J.J., Hastie, T.T. Tibshirani, R.R. (2010). A note on the group Lasso and a sparse group Lasso. preprint.
 Friedman et al. (2007) [author] Friedman, J.J., Hastie, T.T., Höfling, H.H. Tibshirani, R.R. (2007). Pathwise coordinate optimization. Annals of Applied Statistics 1 302–332.
 Gramfort and Kowalski (2009) [author] Gramfort, A.A. Kowalski, M.M. (2009). Improving M/EEG source localization with an intercondition sparse prior. In IEEE International Symposium on Biomedical Imaging.
 Griffiths and Steyvers (2004) [author] Griffiths, T. L.T. L. Steyvers, M.M. (2004). Finding scientific topics. Proceedings of the National Academy of Sciences 101 5228–5235.
 Hastie, Tibshirani and Friedman (2001) [author] Hastie, T.T., Tibshirani, R.R. Friedman, J.J. (2001). The Elements of Statistical Learning. SpringerVerlag.
 Huang and Zhang (2010) [author] Huang, J.J. Zhang, T.T. (2010). The benefit of group sparsity. Annals of Statistics 38 1978–2004.
 Huang, Zhang and Metaxas (2011) [author] Huang, J.J., Zhang, T.T. Metaxas, D.D. (2011). Learning with structured sparsity. Journal of Machine Learning Research 12 3371–3412.
 Jacob, Obozinski and Vert (2009) [author] Jacob, L.L., Obozinski, G.G. Vert, J. P.J. P. (2009). Group Lasso with overlaps and graph Lasso. In Proceedings of the International Conference on Machine Learning (ICML).
 Jenatton, Audibert and Bach (2011) [author] Jenatton, R.R., Audibert, J. Y.J. Y. Bach, F.F. (2011). Structured Variable Selection with SparsityInducing Norms. Journal of Machine Learning Research 12 2777–2824.
 Jenatton, Obozinski and Bach (2010) [author] Jenatton, R.R., Obozinski, G.G. Bach, F.F. (2010). Structured sparse principal component analysis. In International Conference on Artificial Intelligence and Statistics (AISTATS).
 Jenatton et al. (2011a) [author] Jenatton, R.R., Gramfort, A.A., Michel, V.V., Obozinski, G.G., Eger, E.E., Bach, F.F. Thirion, B.B. (2011a). Multiscale mining of fMRI data with hierarchical structured sparsity Technical Report, Preprint arXiv:1105.0363. To appear in SIAM Journal on Imaging Sciences.
Jenatton et al. (2011b) Jenatton, R., Mairal, J., Obozinski, G. and Bach, F. (2011b). Proximal Methods for Hierarchical Sparse Coding. Journal of Machine Learning Research 12 2297–2334.
Jolliffe, Trendafilov and Uddin (2003) Jolliffe, I. T., Trendafilov, N. T. and Uddin, M. (2003). A modified principal component technique based on the Lasso. Journal of Computational and Graphical Statistics 12 531–547.
Kavukcuoglu et al. (2009) Kavukcuoglu, K., Ranzato, M. A., Fergus, R. and LeCun, Y. (2009). Learning invariant features through topographic filter maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kim, Sohn and Xing (2009) Kim, S., Sohn, K. A. and Xing, E. P. (2009). A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics 25 204–212.
Kim and Xing (2010) Kim, S. and Xing, E. P. (2010). Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity. In Proceedings of the International Conference on Machine Learning (ICML).
Kimeldorf and Wahba (1971) Kimeldorf, G. S. and Wahba, G. (1971). Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications 33 82–95.
Lee and Seung (1999) Lee, D. D. and Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401 788–791.
Lin and Zhang (2006) Lin, Y. and Zhang, H. H. (2006). Component selection and smoothing in multivariate nonparametric regression. Annals of Statistics 34 2272–2297.
Liu, Palatucci and Zhang (2009) Liu, H., Palatucci, M. and Zhang, J. (2009). Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In Proceedings of the International Conference on Machine Learning (ICML).
Lounici et al. (2009) Lounici, K., Pontil, M., Tsybakov, A. B. and van de Geer, S. (2009). Taking Advantage of Sparsity in Multi-Task Learning. In Proceedings of the Conference on Learning Theory.
Lounici et al. (2011) Lounici, K., Pontil, M., van de Geer, S. and Tsybakov, A. B. (2011). Oracle inequalities and optimal inference under group sparsity. Annals of Statistics 39 2164–2204.
Mackey (2009) Mackey, L. (2009). Deflation Methods for Sparse PCA. In Advances in Neural Information Processing Systems 21.
Mairal (2010) Mairal, J. (2010). Sparse coding for machine learning, image processing and computer vision. PhD thesis, École normale supérieure de Cachan – ENS Cachan. Available at http://tel.archives-ouvertes.fr/tel-00595312/fr/.
Mairal et al. (2010) Mairal, J., Bach, F., Ponce, J. and Sapiro, G. (2010). Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research 11 19–60.
Mairal et al. (2011) Mairal, J., Jenatton, R., Obozinski, G. and Bach, F. (2011). Convex and Network Flow Optimization for Structured Sparsity. Journal of Machine Learning Research 12 2681–2720.
Mallat (1999) Mallat, S. G. (1999). A wavelet tour of signal processing. Academic Press.
Martinez and Kak (2001) Martinez, A. M. and Kak, A. C. (2001). PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence 23 228–233.
Meinshausen and Bühlmann (2006) Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. Annals of Statistics 34 1436–1462.
Moghaddam, Weiss and Avidan (2006) Moghaddam, B., Weiss, Y. and Avidan, S. (2006). Spectral bounds for sparse PCA: Exact and greedy algorithms. In Advances in Neural Information Processing Systems 18.
Moreau (1962) Moreau, J. J. (1962). Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 255 2897–2899.
Needell and Tropp (2009) Needell, D. and Tropp, J. A. (2009). CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis 26 301–321.
Negahban and Wainwright (2011) Negahban, S. N. and Wainwright, M. J. (2011). Simultaneous Support Recovery in High Dimensions: Benefits and Perils of Block ℓ1/ℓ∞-Regularization. IEEE Transactions on Information Theory 57 3841–3863.
Negahban et al. (2009) Negahban, S., Ravikumar, P., Wainwright, M. J. and Yu, B. (2009). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems 22.
Nesterov (2004) Nesterov, Y. (2004). Introductory lectures on convex optimization: a basic course. Kluwer Academic Publishers.
Nesterov (2007) Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. Technical Report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain.
Obozinski and Bach (2012) Obozinski, G. and Bach, F. (2012). Convex relaxation for combinatorial penalties. Technical Report, HAL.
Obozinski, Jacob and Vert (2011) Obozinski, G., Jacob, L. and Vert, J. P. (2011). Group Lasso with overlaps: the Latent group Lasso approach. Technical Report No. inria-00628498, HAL.
Obozinski, Taskar and Jordan (2010) Obozinski, G., Taskar, B. and Jordan, M. I. (2010). Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing 20 231–252.
Obozinski, Wainwright and Jordan (2011) Obozinski, G., Wainwright, M. J. and Jordan, M. I. (2011). Support Union Recovery in High-dimensional Multivariate Regression. Annals of Statistics 39 1–47.
Olshausen and Field (1996) Olshausen, B. A. and Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381 607–609.
Percival (2012) Percival, D. (2012). Theoretical Properties of the Overlapping Group Lasso. Electronic Journal of Statistics 6 269–288.
Quattoni et al. (2009) Quattoni, A., Carreras, X., Collins, M. and Darrell, T. (2009). An efficient projection for ℓ1,∞ regularization. In Proceedings of the International Conference on Machine Learning (ICML).
Rao et al. (2011) Rao, N. S., Nowak, R. D., Wright, S. J. and Kingsbury, N. G. (2011). Convex approaches to model wavelet sparsity patterns. In International Conference on Image Processing (ICIP).
Rapaport, Barillot and Vert (2008) Rapaport, F., Barillot, E. and Vert, J. P. (2008). Classification of arrayCGH data using fused SVM. Bioinformatics 24 i375–i382.
Ravikumar et al. (2009) Ravikumar, P., Lafferty, J., Liu, H. and Wasserman, L. (2009). Sparse additive models. Journal of the Royal Statistical Society. Series B 71 1009–1030.
Roth and Fischer (2008) Roth, V. and Fischer, B. (2008). The group-Lasso for generalized linear models: uniqueness of solutions and efficient algorithms. In Proceedings of the International Conference on Machine Learning (ICML).
Rudin, Osher and Fatemi (1992) Rudin, L. I., Osher, S. and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60 259–268.
Schmidt, Le Roux and Bach (2011) Schmidt, M., Le Roux, N. and Bach, F. (2011). Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization. In Advances in Neural Information Processing Systems 24.
Schmidt and Murphy (2010) Schmidt, M. and Murphy, K. (2010). Convex Structure Learning in Log-Linear Models: Beyond Pairwise Potentials. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
Shalev-Shwartz, Srebro and Zhang (2010) Shalev-Shwartz, S., Srebro, N. and Zhang, T. (2010). Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization 20.
Shawe-Taylor and Cristianini (2004) Shawe-Taylor, J. and Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Cambridge University Press.
Shen and Huang (2010) Shen, X. and Huang, H. C. (2010). Grouping pursuit through a regularization solution surface. Journal of the American Statistical Association 105 727–739.
Singh and Gordon (2008) Singh, A. P. and Gordon, G. J. (2008). A Unified View of Matrix Factorization Models. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases.
Sprechmann et al. (2010) Sprechmann, P., Ramirez, I., Sapiro, G. and Eldar, Y. (2010). Collaborative hierarchical sparse modeling. In 44th Annual Conference on Information Sciences and Systems (CISS) 1–6. IEEE.
Stojnic, Parvaresh and Hassibi (2009) Stojnic, M., Parvaresh, F. and Hassibi, B. (2009). On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Transactions on Signal Processing 57 3075–3085.
Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society. Series B 267–288.
Tibshirani et al. (2005) Tibshirani, R., Saunders, M., Rosset, S., Zhu, J. and Knight, K. (2005). Sparsity and smoothness via the fused Lasso. Journal of the Royal Statistical Society. Series B 67 91–108.
Tropp (2004) Tropp, J. A. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory 50 2231–2242.
Tropp (2006) Tropp, J. A. (2006). Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory 52.
Turlach, Venables and Wright (2005) Turlach, B. A., Venables, W. N. and Wright, S. J. (2005). Simultaneous variable selection. Technometrics 47 349–363.
van de Geer (2010) van de Geer, S. (2010). Regularization in High-Dimensional Statistical Models. In Proceedings of the International Congress of Mathematicians 4 2351–2369.
Varoquaux et al. (2010) Varoquaux, G., Jenatton, R., Gramfort, A., Obozinski, G., Thirion, B. and Bach, F. (2010). Sparse Structured Dictionary Learning for Brain Resting-State Activity Modeling. In NIPS Workshop on Practical Applications of Sparse Modeling: Open Issues and New Directions.
Wainwright (2009) Wainwright, M. J. (2009). Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory 55 2183–2202.
Witten, Tibshirani and Hastie (2009) Witten, D. M., Tibshirani, R. and Hastie, T. (2009). A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10 515.
Wright, Nowak and Figueiredo (2009) Wright, S. J., Nowak, R. D. and Figueiredo, M. A. T. (2009). Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing 57 2479–2493.
Wu and Lange (2008) Wu, T. T. and Lange, K. (2008). Coordinate descent algorithms for Lasso penalized regression. Annals of Applied Statistics 2 224–244.
Xiang et al. (2009) Xiang, Z. J., Xi, Y. T., Hasson, U. and Ramadge, P. J. (2009). Boosting with spatial regularization. In Advances in Neural Information Processing Systems 22.
Yuan (2010) Yuan, M. (2010). High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. Journal of Machine Learning Research 11 2261–2286.
Yuan and Lin (2006) Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society. Series B 68 49–67.
Yuan et al. (2010) Yuan, G. X., Chang, K. W., Hsieh, C. J. and Lin, C. J. (2010). Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification. Journal of Machine Learning Research 11 3183–3234.
Zass and Shashua (2007) Zass, R. and Shashua, A. (2007). Nonnegative sparse PCA. In Advances in Neural Information Processing Systems 19.
Zhang (2009) Zhang, T. (2009). Some sharp performance bounds for least squares regression with ℓ1 regularization. Annals of Statistics 37 2109–2144.
Zhao, Rocha and Yu (2009) Zhao, P., Rocha, G. and Yu, B. (2009). The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics 37 3468–3497.
Zhao and Yu (2006) Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. Journal of Machine Learning Research 7 2541–2563.
Zhong and Kwok (2011) Zhong, L. W. and Kwok, J. T. (2011). Efficient Sparse Modeling with Automatic Feature Grouping. In Proceedings of the International Conference on Machine Learning (ICML).
Zhou, Jin and Hoi (2010) Zhou, Y., Jin, R. and Hoi, S. C. H. (2010). Exclusive Lasso for multi-task feature selection. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
Zou (2006) Zou, H. (2006). The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 101 1418–1429.
Zou, Hastie and Tibshirani (2006) Zou, H., Hastie, T. and Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics 15 265–286.