1 Introduction
Many Machine Learning and Data Mining tasks can be formulated, at some stage, as an optimization problem. As constantly growing amounts of high-dimensional data become available in the Big Data era, a fundamental thread of research is the development of high-performance implementations of algorithms tailored to solving these problems in a very large-scale setting. One of the most popular and powerful techniques for high-dimensional data analysis is the
Lasso [43]. In the last decade there has been intense interest in this method, and several papers describe generalizations and variants of the Lasso [44]. In the context of supervised learning, it was recently proved that the Lasso problem can be reduced to an equivalent SVM formulation, which potentially allows one to leverage a wide range of efficient algorithms devised for the latter problem
[23]. For unsupervised learning, the idea of Lasso regression has been used in
[30] for bi-clustering in biological research. From an optimization point of view, the Lasso can be formulated as an $\ell_1$-regularized least squares problem, and large-scale instances must usually be tackled by means of an efficient first-order algorithm. Several such methods have already been discussed in the literature. Variants of Nesterov's Accelerated Gradient Descent, for example, guarantee an optimal convergence rate among first-order methods [36]
. Stochastic algorithms such as Stochastic Gradient Descent and Stochastic Mirror Descent have also been proposed for the Lasso problem
[29, 41]. More recently, Coordinate Descent (CD) algorithms [11, 12], along with their stochastic variants [41, 38], have been gaining popularity due to their efficiency on structured large-scale problems. In particular, the CD implementation of Friedman et al. is specifically tailored to Lasso problems, and is currently recognized as one of the best solvers for this class of problems. The contribution of the present paper in this context can be summarized as follows:

We propose a high-performance stochastic implementation of the classical Frank-Wolfe (FW) algorithm to solve the Lasso problem. We show experimentally how the proposed method is able to efficiently scale up to problems with a very large number of features, improving on the performance of other state-of-the-art methods such as the Coordinate Descent algorithm in [12].

We include an analysis of the complexity of our algorithm, and prove a novel convergence result that yields a convergence rate analogous, in terms of expected value, to the $O(1/k)$ rate of the standard FW method.

We highlight how the properties of the FW method make it possible to obtain solutions that are significantly sparser in terms of the number of features than those of various competing methods, while retaining the same optimization accuracy.
On a broader level, and in continuity with other works from the recent literature [35, 17, 42], the goal of this line of research is to show how FW algorithms provide a general and flexible optimization framework, encompassing several fundamental problems in Machine Learning.
Structure of the Paper
In Section 2, we provide an overview of the Lasso problem and its formulations, and review some of the related literature. Then, in Section 3, we discuss FW optimization and specialize the algorithm for the Lasso problem. The randomized algorithm used in our implementation is discussed in Section 4, and its convergence properties are analyzed. In Section 5 we show several experiments on benchmark datasets and discuss the obtained results. Finally, Section 6 closes the paper with some concluding remarks.
2 The Lasso Problem
Suppose we are given data points $(x_i, y_i)$, $i = 1, \dots, n$, where $x_i \in \mathbb{R}^p$ are some predictor variables and $y_i \in \mathbb{R}$ the respective responses. A common approach in statistical modeling and Data Mining is the linear regression model, which predicts $y_i$ as a linear combination of the input attributes, $\hat{y}_i = \beta_1 x_{i1} + \dots + \beta_p x_{ip}$. In a high-dimensional, low sample size setting ($n \ll p$), estimating the coefficient vector $\beta = (\beta_1, \dots, \beta_p)^T$ using ordinary least squares leads to an ill-posed problem, i.e. the solution becomes non-unique and unstable. In this case, a widely used approach for model estimation is regularization. Among regularization methods, the Lasso is of particular interest due to its ability to perform variable selection and thus obtain more interpretable models
[43].
2.1 Formulation
Let $X \in \mathbb{R}^{n \times p}$ be the design matrix where the data is arranged row-wise, i.e. $X = [x_1, \dots, x_n]^T$. Similarly, let $y \in \mathbb{R}^n$ be the response vector. Without loss of generality, we can assume that the intercept term is zero (e.g. by centering the training data such that each attribute has zero mean) [43]. The Lasso estimates the coefficients as the solution to the following problem:

$$\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - X\beta\|_2^2 \quad \text{s.t.} \quad \|\beta\|_1 \le \delta, \qquad (1)$$
where the $\ell_1$ norm constraint $\|\beta\|_1 \le \delta$ has the role of promoting sparsity in the regression coefficients. It is well-known that the constrained problem (1) is equivalent to the unconstrained problem

$$\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1, \qquad (2)$$
in the sense that, given a solution of (2) for a certain value of the parameter $\lambda$, it is possible to find a $\delta$ such that the same point is also a solution of (1), and vice versa. Specifically, if one has an exact solution $\beta^*$ of problem (2), corresponding to a given $\lambda$, it is easy to see that $\beta^*$ is also a solution of (1) for $\delta = \|\beta^*\|_1$. The value $\lambda = 0$ corresponds to the unconstrained solution, i.e. to plain least-squares regression, which can be obtained by choosing $\delta$ large enough in (1). At the opposite end, $\delta = 0$ in (1) corresponds to the null solution, which is obtained for large enough values of $\lambda$ (specifically, it can be shown that the solution of (2) is $\beta^* = 0$ whenever $\lambda \ge \|X^T y\|_\infty$ [47]). Since the optimal trade-off between the sparsity of the model and its predictive power is not known a priori, practical applications of the Lasso require finding a solution, and the profiles of the estimated coefficients, for a range of values of the regularization parameter $\delta$ (or $\lambda$ in the penalized formulation). This is known in the literature as the Lasso regularization path [12].
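The correspondence between the penalized and constrained formulations can be made concrete with a small numerical sketch. The snippet below is purely illustrative and is not the method proposed in this paper: it solves the penalized problem (2) with a plain proximal gradient (ISTA) loop, then reads off the radius $\delta = \|\beta^*\|_1$ for which the constrained problem (1) has the same solution. All data and parameter values are arbitrary.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding, the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    # Solve the penalized formulation (2), min 0.5*||y - X b||^2 + lam*||b||_1,
    # with plain proximal gradient (ISTA) -- an illustrative solver only.
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b - step * (X.T @ (X @ b - y)), step * lam)
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = X @ np.r_[rng.standard_normal(3), np.zeros(17)] + 0.01 * rng.standard_normal(50)

b_pen = lasso_ista(X, y, lam=5.0)
# The matching radius for the constrained formulation (1): with
# delta = ||b_pen||_1, problem (1) has the same solution as (2).
delta = float(np.abs(b_pen).sum())
```

At the solution, the stationarity condition of (2) caps the correlations at $\lambda$, i.e. $\|X^T(y - X\beta^*)\|_\infty \le \lambda$, which provides a simple numerical check of convergence.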
2.2 Relevance and Applications
The Lasso is part of a powerful family of regularized linear regression methods for highdimensional data analysis, which also includes ridge regression (RR)
[19, 18], the Elastic-Net [53], and several recent extensions thereof [52, 54, 45]. From a statistical point of view, they can be viewed as methods for trading off the bias and variance of the coefficient estimates in order to find a model with better predictive performance. From a Machine Learning perspective, they allow one to adaptively control the capacity of the model space in order to prevent overfitting. In contrast to RR, which is obtained by substituting the $\ell_1$ norm in (2) with the squared $\ell_2$ norm $\|\beta\|_2^2$, it is well-known that the Lasso does not only reduce the variance of the coefficient estimates but is also able to perform variable selection by shrinking many of these coefficients to zero. Elastic-Net regularization trades off the $\ell_1$ and $\ell_2$ norms using a "mixed" penalty, which requires tuning an additional parameter [53]. Penalties based on $\ell_q$ norms with $q < 1$ can enforce a more aggressive variable selection, but lead to computationally challenging non-convex optimization problems. For instance, $q = 0$, which corresponds to "direct" variable selection, leads to an NP-hard problem [49]. Thanks to its ability to perform variable selection and model estimation simultaneously, the Lasso is used in many fields involving high-dimensional data. These scenarios have become of increasing importance in the last decades since, as technologies for collecting and processing data evolve, classification and regression problems with a large number of candidate predictors have become ubiquitous. Advances in molecular technologies, for example, enable scientists to measure the status of thousands to millions of biomolecules simultaneously [16]. In text analysis, vector space models for representing documents easily lead to several thousands or even millions of document-term counts that correspond to potentially informative variables [25, 21]. Similarly, in the analysis of functional magnetic resonance imaging (fMRI) data, one can easily obtain datasets with millions of voxels representing the activity of particular portions of the brain [31]. In all these cases, the number of dimensions or attributes $p$ can far exceed the number of data instances $n$. It is also worth mentioning that popular data analysis tools such as
$\ell_1$-regularized logistic regression and penalized Cox regression models can be implemented using iterative algorithms which solve Lasso problems at each iteration [12].
2.3 Related Work
Problem (1) is a quadratic programming problem with a convex constraint, which in principle may be solved using standard techniques such as interior-point methods, guaranteeing convergence in a few iterations. However, the computational work required per iteration, as well as the memory demanded, make these approaches practical only for small and medium-sized problems. A faster specialized interior-point method for the Lasso was proposed in [24], which however compares unfavorably with the baseline used in this paper [12].
One of the first efficient algorithms proposed in the literature for finding a solution of (2) is the Least Angle Regression (LARS) by Efron et al. [4]
. As its main advantage, LARS allows one to generate the entire Lasso regularization path at the same computational cost as standard least-squares via QR decomposition, i.e. $O(np^2)$ operations, assuming $p < n$ [18]. At each step, it identifies the variable most correlated with the residuals of the model obtained so far and includes it in the set of "active" variables, moving the current iterate in a direction equiangular with the rest of the active predictors. It turns out that the algorithm we propose makes the same choice, but updates the solution differently using cheaper computations. A similar homotopy algorithm for calculating the regularization path has been proposed in [47], which differs slightly from LARS in the choice of the search direction. More recently, it has been shown by Friedman et al. that a careful implementation of the Coordinate Descent method (CD) provides an extremely efficient solver [11], [12], [10], which also applies to more general models such as the Elastic-Net proposed by Zou and Hastie [53]. In contrast to LARS, this method cyclically chooses one variable at a time and performs a simple analytical update. The full regularization path is built by defining a sensible range of values for the regularization parameter and taking the solution for a given value as a warm start for the next. This algorithm has been implemented in the Glmnet package and can be considered the current standard for solving this class of problems. Recent works have also advocated the use of Stochastic Coordinate Descent (SCD) [41], where the order of variable updates is chosen randomly instead of cyclically. This strategy can prevent the adverse effects caused by possibly unfavorable orderings of the coordinates, and allows one to prove stronger theoretical guarantees compared to the plain CD [38].
Other methods for $\ell_1$-regularized regression may be considered. For instance, Zhou et al. recently proposed a geometric approach in which the Lasso is reformulated as a nearest-point problem and solved using an algorithm inspired by the classical Wolfe method [51]. However, the popularity and proven efficiency of Glmnet on high-dimensional problems make it the baseline chosen in this work.
3 Frank-Wolfe Optimization
One of the earliest methods for constrained optimization [9, 50], the Frank-Wolfe algorithm has recently seen a sudden resurgence in interest from the Optimization and Machine Learning communities, due to its powerful theoretical properties and proven efficiency in the context of large-scale problems [3, 22, 17]. On the theoretical side, FW methods come with iteration complexity bounds that are independent of the number of variables in the problem, and with sparsity guarantees that hold during the whole execution of the algorithm [3, 22]. In addition, several variants of the basic procedure have been analyzed, which can improve the convergence rate and practical performance of the basic FW iteration [15, 35, 26, 6]. From a practical point of view, FW methods have emerged as efficient alternatives to traditional approaches in several contexts, such as large-scale SVM classification [7, 8, 35, 6] and nuclear norm-regularized matrix recovery [22, 42]. In view of these developments, FW algorithms have come to be regarded as a suitable approach to large-scale optimization in various areas of Machine Learning, statistics, bioinformatics and related fields [1, 27].
Overall, though, the number of works showing experimental results for FW on practical applications is limited compared to that of the theoretical studies appearing in the literature. In the context of problems with $\ell_1$ regularization or sparsity constraints, the use of FW has been discussed in [40], but no experiments are provided. A closely related algorithm has been proposed in [51]; however, its implementation has a high computational cost in terms of time and memory requirements, and is not suitable for solving large-scale problems on a standard desktop or laptop machine. As such, the current literature does not provide many examples of efficient FW-based software for large-scale Lasso or $\ell_1$-regularized optimization. In this work, we aim to fill this gap by showing how a properly implemented stochastic FW method can outperform the current state-of-the-art solvers on Lasso problems with a very large number of features.
3.1 The Standard Frank-Wolfe Algorithm
The FW algorithm is a general method to solve problems of the form

$$\min_{x \in \Sigma} \; f(x), \qquad (3)$$

where $f$ is a convex differentiable function and $\Sigma \subset \mathbb{R}^p$ is a compact convex set. Given an initial guess $x^{(0)} \in \Sigma$, the standard FW method consists of the steps outlined in Algorithm 1:

$$u^{(k)} \in \operatorname{argmin}_{u \in \Sigma} \; \nabla f(x^{(k)})^T u, \qquad (4)$$

$$x^{(k+1)} = x^{(k)} + \gamma^{(k)} \big(u^{(k)} - x^{(k)}\big), \quad \gamma^{(k)} \in [0, 1]. \qquad (5)$$
From an implementation point of view, a fundamental advantage of FW is that the most demanding part of the iteration, i.e. the solution of the linear subproblem (4), has a computationally convenient solution for several problems of practical interest, mainly due to the particular form of the feasible set. The key observation is that, when $\Sigma$ is a polytope (e.g. the unit simplex for SVMs [35], the $\ell_1$ ball of radius $\delta$ for the Lasso problem (1), or a spectrahedron in the nuclear norm for matrix recovery [14]), the search in (4) can be reduced to a search among the vertices of $\Sigma$. This allows one to devise cheap analytical formulas to find the FW vertex $u^{(k)}$, keeping the cost of each iteration low. The fact that FW methods work with projection-free iterations is also a huge advantage in many practical problems, since a projection step to maintain the feasibility of the iterates (as needed by classical approaches such as proximal methods for matrix recovery problems) generally has super-linear complexity, making the solution of large-scale problems difficult in practice [1].
Another distinctive feature of the algorithm is the fact that the solution $x^{(k)}$ at a given iteration can be expressed as a convex combination of the starting point and the vertices $u^{(0)}, \dots, u^{(k-1)}$ found so far. Due to the incremental nature of the FW iteration, at most one new extreme point of $\Sigma$ is discovered at each iteration, implying that at most $k+1$ of such points are active at iteration $k$. Furthermore, this sparsity bound holds for the entire run of the algorithm, effectively allowing one to control the sparsity of the solution as it is being computed. This fact carries particular relevance in the context of sparse approximation, and generally in all applications where it is crucial to find models with a small number of features. It also represents, as we will show in our experiments in Section 5, one of the major differences between incremental, forward approximation schemes and more general solvers for $\ell_1$-regularized optimization, which in general cannot guarantee sparse solutions along the regularization path.
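A minimal sketch of Algorithm 1 for an $\ell_1$-ball feasible set may help fix ideas. The function name, the step-size rule and the toy problem below are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def frank_wolfe_l1(grad, p, delta, n_iter=500):
    # Vanilla FW (Algorithm 1) on the l1 ball of radius delta.  The linear
    # subproblem (4) over this polytope reduces to a search among the 2p
    # vertices {+/- delta * e_i}: pick the coordinate of largest |gradient|.
    x = np.zeros(p)                          # feasible starting point
    for k in range(n_iter):
        g = grad(x)
        i = int(np.argmax(np.abs(g)))
        u = np.zeros(p)
        u[i] = -delta * np.sign(g[i])        # best vertex of the l1 ball
        gamma = 2.0 / (k + 2.0)              # classical step size; line search also works
        x = (1.0 - gamma) * x + gamma * u    # convex combination keeps x feasible
    return x

# Toy usage: minimize f(x) = 0.5*||x - c||^2 over the l1 ball of radius 1,
# whose solution is the projection of c onto the ball, here (1, 0, 0, 0).
c = np.array([3.0, -1.0, 0.2, 0.0])
x_star = frank_wolfe_l1(lambda x: x - c, p=4, delta=1.0)
```

Note that after $k$ iterations the iterate is a convex combination of at most $k+1$ vertices, which is exactly the sparsity bound discussed above.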
3.2 Theoretical Properties
We summarize here some well-known theoretical results for the FW algorithm which are instrumental in understanding the behaviour of the method. We refer the reader to [22] for the proof of the following proposition. To prove the result, it suffices to assume that $f$ has bounded curvature which, as explained in [22], is roughly equivalent to the Lipschitz continuity of $\nabla f$.
Proposition 1 (Sublinear convergence, [22]). Let $x^*$ be an optimal solution of problem (3). Then, for any $k \ge 1$, the iterates of Algorithm 1 satisfy

$$f(x^{(k)}) - f(x^*) \le \frac{2\,C_f}{k + 2},$$

where $C_f$ denotes the curvature constant of the objective function.
An immediate consequence of Proposition 1 is an upper bound on the iteration complexity: given a tolerance $\varepsilon > 0$, the FW algorithm finds an $\varepsilon$-approximate solution, i.e. an iterate $x^{(k)}$ such that $f(x^{(k)}) - f(x^*) \le \varepsilon$, after $O(1/\varepsilon)$ iterations. Besides giving an estimate of the total number of iterations, which has been shown experimentally to be quite tight in practice [5, 6], this fact tells us that the trade-off between sparsity and accuracy can be partly controlled by appropriately setting the tolerance parameter. Recently, Garber and Hazan showed that under certain conditions the FW algorithm can obtain a convergence rate of $O(1/k^2)$, comparable to that of first-order algorithms such as Nesterov's method [13]. However, their results require strong convexity of the objective function and of the feasible set, a set of hypotheses which is not satisfied by several important ML problems such as the Lasso or the matrix recovery problem with trace norm regularization.
Another possibility is to employ a Fully-Corrective variant of the algorithm, where at each step the solution is updated by solving the problem restricted to the currently active vertices. The algorithm described in [51], where the authors solve the Lasso problem via a nearest-point solver based on Wolfe's method, operates with a very similar philosophy. A similar case can be made for the LARS algorithm of [4], which however updates the solution in a different way. The Fully-Corrective FW also bears a resemblance to the Orthogonal Matching Pursuit algorithms used in the Signal Processing literature [46], a similarity which has already been discussed in [3] and [22]. However, as mentioned in [3], the increase in computational cost is not paid off by a corresponding improvement in terms of complexity bounds. In fact, the work in [28] shows that the result in Proposition 1 cannot be improved for any first-order method based on solving linear subproblems without strengthening the assumptions. Greedy approximation techniques based on both the vanilla and the Fully-Corrective FW have also been proposed in the context of approximate risk minimization with a sparsity constraint by Shalev-Shwartz et al., who proved several strong runtime bounds for the sparse approximations of arbitrary target solutions [40].
Finally, it is worth mentioning that the result of Proposition 1 can indeed be improved by using variants of FW that employ additional search directions, which allow one, under suitable hypotheses, to obtain a linear convergence rate [35, 26]. It should be mentioned, however, that such rates only hold in the vicinity of the solution and that, as shown in [6], a large number of iterations might be required to gain substantial advantages. For this reason, we choose not to pursue this strategy in the present paper.
4 Randomized Frank-Wolfe for Lasso Problems
A specialized FW algorithm for problem (1) can be obtained straightforwardly by setting $\Sigma$ equal to the $\ell_1$ ball of radius $\delta$, hereafter denoted as $B_\delta$. In this case, the vertices of the feasible set (i.e., the candidate points among which $u^{(k)}$ is selected in the FW iterations) are $\{\pm\delta e_i : i = 1, \dots, p\}$, where $e_i$ is the $i$-th element of the canonical basis. It is easy to see that the linear subproblem (4) in Algorithm 1 has a closed-form solution, given by:

$$u^{(k)} = -\delta \, \mathrm{sign}\big(\nabla f(\beta^{(k)})_{i_k}\big)\, e_{i_k}, \qquad i_k = \operatorname{argmax}_{i} \big|\nabla f(\beta^{(k)})_i\big|. \qquad (6)$$
In order to efficiently execute the iteration, we can exploit the form of the objective function to obtain convenient expressions for updating the function value and the gradient after each FW iteration. The gradient of the objective in (1) is $\nabla f(\beta) = -X^T(y - X\beta)$.
There are two possible ways to compute the required coordinates of $\nabla f(\beta)$ efficiently. One is to keep track of the vector of residuals $r = y - X\beta$ and compute the $i$-th coordinate as

$$\nabla f(\beta)_i = -X_i^T r, \qquad (7)$$
where is the th column of the design matrix , i.e., the vector formed by collecting the th attribute of all the training points. We refer to this approach as the “method of residuals”. The other way is to expand the second line in (4)
and keep track of the inner products between and the predictors corresponding to nonzero coefficients of the current iterate. We call this the “method of active covariates”. The discussion in the next subsections reveals that the first approach is more convenient if, at each iteration, we only need to access a small subset of the coordinates of . It is clear from (6) that after computing for the solution to the linear subproblem in Algorithm 1 follows easily. The other quantities required by the algorithm are the objective value (in order to monitor convergence) and the line search stepsize in (5), which can be obtained as
$$\gamma^{(k)} = \min\left\{1, \; \frac{r^T X d^{(k)}}{\|X d^{(k)}\|_2^2}\right\}, \qquad d^{(k)} = u^{(k)} - \beta^{(k)}, \qquad (8)$$

where $r$ denotes the current vector of residuals, and the scalar quantities involved can be updated recursively along the iterations with suitable starting values. If we store the products $X_i^T y$ before the execution of the algorithm, the only non-trivial computation required here is $X_{i_k}^T r$, which was already computed to solve the subproblem in (6).
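Putting the pieces together, the deterministic specialization of FW to the Lasso can be sketched as follows. This is a hedged NumPy illustration in our own notation (not the paper's optimized code): it maintains the residual vector as in the "method of residuals" and computes the exact line-search step in closed form.

```python
import numpy as np

def fw_lasso(X, y, delta, n_iter=300):
    # Frank-Wolfe for the constrained Lasso (1): min 0.5*||y - X b||^2
    # subject to ||b||_1 <= delta.  The residual vector r = y - X b is kept
    # up to date, so each gradient coordinate costs one dot product, cf. (7).
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                                  # residuals of the zero solution
    for k in range(n_iter):
        corr = X.T @ r                            # minus the gradient: X_i^T r
        j = int(np.argmax(np.abs(corr)))          # feature most correlated with r
        u = np.zeros(p)
        u[j] = delta * np.sign(corr[j])           # FW vertex, cf. (6)
        Xd = X @ (u - b)                          # image of the search direction
        denom = float(Xd @ Xd)
        if denom == 0.0:
            break                                 # u == b: nothing left to do
        gamma = float(np.clip((r @ Xd) / denom, 0.0, 1.0))  # exact line search
        b = (1.0 - gamma) * b + gamma * u
        r = r - gamma * Xd                        # recursive residual update
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))
y = rng.standard_normal(30)
b_fw = fw_lasso(X, y, delta=2.0)
```

Each iteration touches a single new vertex, so the solution stays a convex combination of few signed canonical basis vectors, which is what yields sparse iterates.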
4.1 Randomized Frank-Wolfe Iterations
Although the FW method is generally very efficient for structured problems with a sparse solution, it also has a number of practical drawbacks. For example, it is well known that the total number of iterations required by a FW algorithm can be large, thus making the optimization prohibitive on very large problems. Even when (4) has an analytical solution due to the problem structure, the resulting complexity depends on the problem size [35], and can thus be impractical in cases where large-scale datasets must be handled. For example, in the specialization of the algorithm to problem (1), the main bottleneck is the computation of the FW vertex in (6), which corresponds to examining all the $p$ candidate predictors and choosing the one most correlated with the current residuals (assuming the design matrix has been standardized so that the predictors have unit norm). Coincidentally, this is the same strategy underlying well-known methods for variable selection such as LARS and Forward Stepwise Regression (see Section 2.3).
A simple and effective way to avoid this dependence on $p$ is to compute the FW vertex approximately, by limiting the search to a fixed number of extreme points on the boundary of the feasible set [39, 5]. Specialized to the Lasso problem (1), this technique can be formalized as extracting a random sample $S \subseteq \{1, \dots, p\}$ and solving

$$\tilde{u}^{(k)} = -\delta \, \mathrm{sign}\big(\nabla f(\beta^{(k)})_{i_k}\big)\, e_{i_k}, \qquad i_k = \operatorname{argmax}_{i \in S} \big|\nabla f(\beta^{(k)})_i\big|. \qquad (9)$$
Formally, one can think of a randomized FW iteration as the optimization of an approximate linear model, built by considering the partial gradient, i.e. the projection of the gradient onto the subspace identified by the sampled coordinates [48]. The number of coordinates of the gradient that need to be estimated with this scheme is $|S|$ instead of $p$. If $|S| \ll p$, this leads to a significant reduction in computational overhead. Our stochastic specialization of the FW iteration for the Lasso problem thus takes the form of Algorithm 2. After selecting the variable best correlated with the current vector of residuals, the algorithm updates the current solution along the segment connecting it with the vertex found in (9). Note how this approach differs from a method like LARS, where the direction along which the last iterate is moved is equiangular to all the active variables.¹
It also differs from CD, which can activate more than one variable in each "epoch" or cycle through the predictors. The algorithm computes the step-size by looking explicitly at the value of the objective, which can be obtained analytically without increasing the cost of the iteration. Finally, the method updates the vector of residuals and proceeds to the next iteration.

¹ As shown in [18], the LARS direction is $(X_A^T X_A)^{-1} X_A^T r$, where $X_A$ is the restriction of the design matrix to the active variables. The FW direction is simply $\tilde{u}^{(k)} - \beta^{(k)}$.
(10) 
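The randomized iteration can be sketched by a one-line change to the deterministic loop: the vertex search is restricted to a random sample of the coordinates. As before, the snippet below is an illustration in our own notation, not the paper's implementation, and all parameter values are arbitrary.

```python
import numpy as np

def randomized_fw_lasso(X, y, delta, sample_size, n_iter=300, seed=0):
    # Stochastic FW sketch for the Lasso: the vertex search (9) is limited
    # to a random subset S of the p coordinates, so only |S| gradient
    # coordinates are computed per iteration.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()
    for k in range(n_iter):
        S = rng.choice(p, size=sample_size, replace=False)
        corr = X[:, S].T @ r                  # only |S| dot products needed
        best = int(np.argmax(np.abs(corr)))
        j = int(S[best])                      # approximate FW vertex index
        u = np.zeros(p)
        u[j] = delta * np.sign(corr[best])
        Xd = X @ (u - b)
        denom = float(Xd @ Xd)
        if denom == 0.0:
            continue
        gamma = float(np.clip((r @ Xd) / denom, 0.0, 1.0))  # line search
        b = (1.0 - gamma) * b + gamma * u
        r = r - gamma * Xd                    # residual update
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))
y = rng.standard_normal(30)
b_rfw = randomized_fw_lasso(X, y, delta=2.0, sample_size=10)
```

The line search guarantees monotone descent regardless of how the sample is drawn; randomization only affects how quickly good vertices are discovered.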
Note that, although in this work we only discuss the basic Lasso problem, extending the proposed implementation to the more general Elastic-Net model of [53] is straightforward. The derivation of the necessary analytical formulae is analogous to the one shown above. Furthermore, an extension of the algorithm to solve $\ell_1$-regularized logistic regression problems, another relevant tool in high-dimensional data analysis, can easily be obtained by following the guidelines in [12].
4.2 Complexity and Implementation Details
In Algorithm 2, we compute the coordinates of the gradient using the method of residuals given by equation (7). Due to the randomization, this method becomes very advantageous with respect to the alternative method based on the active covariates, even for very large $p$. Indeed, if we denote by $\bar{d}$ the cost of a dot product between a predictor and another vector in $\mathbb{R}^n$, the overall cost of picking out the FW vertex in our algorithm is $O(|S|\,\bar{d})$. Using the method of the active covariates would instead give a cost that grows with the number of active variables, which is always worse. Note, however, that the latter method may be better than the method of residuals in a deterministic implementation, using the caching tricks proposed in [11], [12]. For instance, caching the dot products between all the predictors and the active ones, and keeping all the coordinates of the gradient updated, is cheap except when new active variables appear in the solution; this would allow one to find the FW vertex in $O(p)$ operations. In this scenario, the fixed cost of the method of residuals may be worse if the Lasso solution is very sparse. It is worth noting that the dot product cost $\bar{d}$ is proportional to the number of nonzero components in the predictors, which in typical high-dimensional problems is significantly lower than $n$.
4.3 Relation to SVM Algorithms and Sparsity Certificates
The previous implementation suggests that the FW algorithm will be particularly suited to the case where a regression problem has a very large number of features but not so many training points. It is interesting to compare this scenario with the situation in SVM problems: in the SVM case, the FW vertices correspond to training points, and the standard FW algorithm is able to quickly discover the relevant "atoms" (the Support Vectors), but has no particular advantage when handling lots of features. In contrast, in Lasso applications, where the vertices correspond to features, the situation is somewhat inverted: the algorithm should quickly discover the critical features, and the sparsity bound of Section 3.1 guarantees that at most $k+1$ attributes will be used to perform predictions after $k$ iterations. This is, indeed, the scenario in which the Lasso is used in several applications of practical interest, as problems with a very large number of features arise frequently in specific domains like bioinformatics, web and text analysis, and sensor networks.
In the context of SVMs, the randomized FW algorithm has already been discussed in [5]. However, the results in that paper were experimental in nature, and did not contain a proof of convergence, which is instead provided in this work. Note that, although we have presented the randomized search for the specific case of problem (1), the technique applies more generally to the case where $\Sigma$ is a polytope (or has a separable structure with every block being a polytope, as in [27]). We do not consider this hypothesis restrictive, as essentially every practical application of the FW algorithm proposed in the literature falls into this setting.
4.4 Convergence Analysis
We show that the stochastic FW converges (in the sense of expected value) with the same rate as the standard algorithm. First, we need the following technical result.
Lemma 1.
Let $S$ be picked uniformly at random from the set of all subsets of $\{1, \dots, p\}$ of a fixed size $m$, and let $u$ be any vector in $\mathbb{R}^p$. Then the expectation of the vector obtained by zeroing out the coordinates of $u$ outside $S$ is $(m/p)\,u$.
Proof.
Consider a fixed index $i$ and let $Z_i$ be the indicator of the event $i \in S$. Then $Z_i$ is a Bernoulli random variable with expectation $m/p$: indeed, $Z_i = 1$ iff $i \in S$, and as there are $\binom{p-1}{m-1}$ subsets of size $m$ containing $i$ out of $\binom{p}{m}$ in total, $\mathbb{E}[Z_i] = \binom{p-1}{m-1}/\binom{p}{m} = m/p$. The claim follows by linearity of expectation.
∎
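The expectation in Lemma 1 is easy to verify numerically: the indicator that a fixed coordinate belongs to a uniformly drawn subset of size m is Bernoulli with mean m/p. A quick Monte Carlo sanity check, with arbitrary illustrative values of p, m and the coordinate index:

```python
import numpy as np

# Estimate P(j in S) for a uniformly random subset S of size m out of p
# coordinates; by Lemma 1 this probability should equal m/p.
rng = np.random.default_rng(1)
p, m, trials, j = 40, 8, 20000, 5
hits = sum(1 for _ in range(trials) if j in rng.choice(p, size=m, replace=False))
freq = hits / trials              # empirical estimate of P(j in S), about 0.2 here
```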
Note that selecting a random subset of size $m$ to solve (9) is equivalent to (i) building a random diagonal matrix $P$ that zeroes out the coordinates outside $S$, as in Lemma 1, (ii) computing the restricted gradient $P\,\nabla f(\beta^{(k)})$, and then (iii) solving the linear subproblem (6) with the restricted gradient in place of $\nabla f(\beta^{(k)})$. In other words, the proposed randomization can be viewed as approximating the gradient by its restriction to the sampled coordinates. Lemma 1 implies that, up to the known factor $m/p$, this restriction matches the full gradient in expectation, which is the key to proving our main result.
Proposition 2. Let $\beta^*$ be an optimal solution of problem (1). Then the iterates of Algorithm 2 satisfy $\mathbb{E}\big[f(\beta^{(k)})\big] - f(\beta^*) = O(1/k)$.
This result has a similar flavor to that in [27], and the analysis is similar to the one presented in [48]. However, in contrast to the above works, we do not assume any structure in the optimization problem or in the sampling. A detailed proof can be found in the Appendix. As in the deterministic case, Proposition 2 implies a complexity bound of $O(1/\varepsilon)$ iterations to reach a solution whose expected objective value is within $\varepsilon$ of the optimum.
4.5 Choosing the Sampling Size
When using a randomized FW iteration, it is important to choose the sampling size in a sensible way. Indeed, some recent works have shown how this choice entails a trade-off between accuracy (in the sense of premature stopping) and complexity, and hence CPU time [5]. This kind of approximation is motivated by the following result, which suggests that it is reasonable to pick $|S| \ll p$.
Theorem 1 ([39], Theorem 6.33). Let $D \subset \mathbb{R}$ be a set of $p$ values and let $D' \subseteq D$ be a random subset of size $m$. Then, the probability that the largest element in $D'$ is greater than or equal to $\tilde{p}$ elements of $D$ is at least $1 - (\tilde{p}/p)^m$.
The value of this result lies in the ability to obtain probabilistic bounds for the quality of the sampling independently of the problem size $p$. For example, in the case of the Lasso problem, it is easy to see that a small, fixed sampling size suffices to guarantee that, with high probability, the sampled vertex lies among the largest few percent of the gradient components (in absolute value), independently of $p$. This kind of sampling has been discussed, for example, in [6].
The issue with this strategy is that, for problems with very sparse solutions (which is the case under strong levels of regularization), even a high confidence level does not guarantee that the algorithm samples a good vertex in most of the iterations. Intuitively, the sampling strategy should allow the algorithm to detect the set of vertices active at the optimum, which correspond, at various stages of the optimization process, to descent directions for the objective function. In sparse approximation problems, extracting a sampling set without relevant features may imply adding "spurious" components to the solution, reducing the sparseness of the model we want to find.
A more effective strategy in this context would be to ask for a certain probability that the sampling will include at least one of the "optimal" features. Letting $O$ be the index set of the vertices active at the optimum, and denoting $s = |O|$ and $m = |S|$, we have

$$\Pr(S \cap O \neq \emptyset) = 1 - \prod_{i=0}^{m-1} \frac{p - s - i}{p - i} \; \ge \; 1 - \left(1 - \frac{s}{p}\right)^m, \qquad (11)$$

with the latter inequality being a reasonable approximation if $m \ll p$. From (11), we can guarantee that $S$ contains at least one optimal feature with probability at least $q$ by imposing:

$$m \ge \frac{\log(1 - q)}{\log(1 - s/p)}. \qquad (12)$$

On the practical side, this sampling strategy often implies taking a larger $|S|$. Assuming that the fraction of relevant features $s/p$ is constant, we essentially get the bound for $|S|$ provided by Theorem 1, which is independent of $p$. However, for the worst case $s = 1$, we get

$$m \ge \frac{\log(1 - q)}{\log(1 - 1/p)} \approx p \log\!\left(\frac{1}{1 - q}\right), \qquad (13)$$
which suggests using a sampling size proportional to $p$. A more involved strategy, which exploits the incremental structure of the FW algorithm, is to use a large $|S|$ in the early iterations and smaller values of $|S|$ as the solution becomes denser. If the optimal solution is very sparse, the algorithm requires few, albeit expensive, iterations to converge. In contrast, if the solution is dense, the algorithm requires more, but faster, iterations. For the problems considered in this paper, setting $|S|$ to a small fraction of $p$ works well in practice, as shown by the experiments in the next Section.
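The sampling sizes implied by (12) and (13) can be computed as follows. The function and the example values of p, s and the confidence level are our own illustrations:

```python
import math

def sample_size_for_active_set(p, s, confidence):
    # Smallest |S| guaranteeing, with the given confidence, that the sample
    # contains at least one of the s features active at the optimum, using
    # the lower bound P >= 1 - (1 - s/p)**|S| from (11)-(12).
    m = math.ceil(math.log(1.0 - confidence) / math.log(1.0 - s / p))
    return min(p, m)              # sampling all p features always succeeds
```

With a constant fraction of relevant features (e.g. s/p = 0.05), the resulting size is small and independent of p, matching Theorem 1; in the worst case s = 1, the bound degrades to a size proportional to p, as in (13).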
5 Numerical Experiments
In this section, we assess the practical effectiveness of the randomized FW algorithm through experiments on both synthetic datasets and real-world benchmark problems with hundreds of thousands or even millions of features. The characteristics of the datasets are summarized in Table 1, which reports the number of training examples, the number of test examples, and the number of features. The synthetic datasets were generated with the Scikit-learn function make_regression [37]. Each of them comes in two versions corresponding to a different number of relevant features in the true linear model used to generate the data (32 and 100 features for the smaller problem, and 158 and 500 features for the larger one). The real large-scale regression problems E2006-tfidf and E2006-log1p, and the datasets Pyrim and Triazines, are available from [2].
[Table 1: characteristics of the datasets Synthetic-10000, Synthetic-50000, Pyrim, Triazines, E2006-tfidf and E2006-log1p.]
In assessing the performance of our method, we used as a baseline the following algorithms, which in our view, and according to the properties summarized in Table 2, can be considered among the most competitive solvers for Lasso problems:

The well-known CD algorithm by Friedman et al., as implemented in the Glmnet package [12]. This method is highly optimized for the Lasso and is considered a state-of-the-art solver in this context.

The SCD algorithm as described in [41], which is significant both for being a stochastic method and for having better theoretical guarantees than the standard cyclic CD.

The Accelerated Gradient Descent with projections, for both the regularized and the constrained Lasso, as this algorithm guarantees an optimal complexity bound. We chose the implementation in the SLEP package by Liu et al. [32] as a reference.
Among other possible first-order methods, the classical SGD suffers from a worse convergence rate, and its variant SMIDAS has a complexity bound which depends on ; we therefore did not include them in our experiments. Indeed, the authors of [40] conclude that SCD is both faster and produces significantly sparser models than SGD. Finally, although the recent GeoLasso algorithm of [51] is interesting because of its close relationship with FW, its running time and memory requirements are clearly higher than those of the above solvers.
Approach  Form  Sparse Its.
Accelerated Gradient + Proj. [33]  (1)  No
Accelerated Gradient + Reg. Proj. [34]  (2)  No
Cyclic Coordinate Descent (CD) [11, 12]  (2)  Yes
Stochastic Gradient Descent (SGD) [29]  (2)  No
Stochastic Mirror Descent [41]  (2)  No
GeoLasso [51]  (1)  Yes
Frank-Wolfe (FW) [22]  (1)  Yes
Stochastic Coordinate Descent (SCD) [38]  (2)  Yes
Stochastic Frank-Wolfe  (1)  Yes
if a nontrivial bound for the number of nonzero entries of each iterate holds at any moment. is required for the projections. An iteration of cyclic coordinate descent corresponds to a complete cycle through the features. An iteration of SCD corresponds to the optimization on a single feature.

Since an appropriate level of regularization needs to be selected automatically in practice, the algorithms are compared by computing the entire regularization path on a range of values of the regularization parameters and (depending on whether the method solves the penalized or the constrained formulation). Specifically, we first estimate two intervals and , and then solve problems (2) and (1) on a point parameter grid in logarithmic scale. For the penalized Lasso, we use , where is determined as in the Glmnet code. Then, to make the comparison fair (i.e. to ensure that all the methods solve the same problems according to the equivalence in Section 2), we choose for the constrained Lasso and , where is the solution obtained by Glmnet with the smallest regularization parameter and a high precision (). The idea is to give the solvers the same “sparsity budget” to find a solution of the regression problem.
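A sketch of how such a logarithmic parameter grid can be built (assuming the common Glmnet convention that the largest penalty is the smallest value for which the solution is identically zero, here taken as the maximum absolute correlation divided by the number of samples; the exact scaling used in the Glmnet code may differ):

```python
import numpy as np

def lambda_grid(X, y, n_points=100, ratio=1e-3):
    """Logarithmically spaced grid of regularization parameters,
    from lambda_max (the smallest value giving the all-zero penalized
    Lasso solution, Glmnet-style convention) down to
    lambda_min = ratio * lambda_max."""
    m = X.shape[0]
    lam_max = np.abs(X.T @ y).max() / m
    return np.logspace(np.log10(lam_max), np.log10(ratio * lam_max), n_points)
```

The grid is traversed from the largest (sparsest) value downward, which is also the direction of the warm-start strategy used in the experiments.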
Warm-start strategy
As usually done in these cases, and for all the methods, we compute the path using a warm-start strategy where each solution is used as the initial guess for the optimization with the next value of the parameter. Note that we always start from the sparsest solution. This means that in the cases of CD, SCD and regularized SLEP we start from towards , while for FW and constrained SLEP we go from towards . Furthermore, since , where is the unconstrained solution, we know that the solution will lie on the boundary; we therefore adopt a heuristic strategy whereby the previous solution is scaled so that its norm is . Both algorithms are initialized with the null solution as the initial guess. Regarding the stopping criterion for each problem in the path, we stop the current run when for all the algorithms. Other choices are possible (for example, FW methods are usually stopped using a duality gap-based criterion [22]), but this is the condition used by default to stop CD in Glmnet. A value of is used in the following experiments. All the considered algorithms have been coded in C++. The code and datasets used in this paper are available from public repositories on GitHub (https://github.com/efrandi/FWLasso) and Dataverse (https://goo.gl/PTQ05R), respectively. The SLEP package has a Matlab interface, but the key functions are all coded in C. Overall, we believe our comparison can be considered very fair. We executed the experiments on a 3.40GHz Intel i7 machine with 16GB of main memory running CentOS. For the randomized experiments, results were averaged over 10 runs.

5.1 “Sanity Check” on the Synthetic Datasets
The aim of these experiments is not to measure the performance of the algorithms (which will be assessed below on four larger, real-life datasets), but rather to compare their ability to capture the evolution of the most relevant features of the models, and to discuss how this relates to their behaviour from an optimization point of view. To do this, we monitor the values of the 10 most relevant features along the path, as computed by both the CD and FW algorithms, and plot the results in Figures 1 and 2. To determine the relevant variables, we use as a reference the regularization path obtained by Glmnet with (which is assumed to be a reasonable approximation of the exact solution), and compute the 10 variables having, on average, the highest absolute value along the path. As this experiment is intended mainly as a sanity check to verify that our solver reconstructs the solution correctly, we do not include other algorithms in the comparison. In order to implement the random sampling strategy in the FW algorithm, we first calculated the average number of active features along the path, rounded up to the nearest integer, as an empirical estimate of the sparsity level. Then, we chose based on the probability of capturing at least one of the relevant features at each sampling, according to (13). A confidence level of was used for this experiment, leading to sampling sizes of 372 and 324 points for the two problems of size 10000, and of 1616 and 1572 points for those of size 50000.
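To make the sampled FW iteration concrete, the numpy sketch below shows one plausible form of the randomized Frank-Wolfe step for the constrained problem min ½‖Ax − b‖² subject to ‖x‖₁ ≤ δ; the function name, the uniform sampling without replacement and the fixed step size 2/(t+2) are illustrative assumptions, not necessarily the exact choices of the implementation benchmarked here:

```python
import numpy as np

def randomized_fw_lasso(A, b, delta, s, iters=500, seed=0):
    """Randomized Frank-Wolfe sketch for min 0.5*||Ax - b||^2 subject
    to ||x||_1 <= delta.  At each step only a random subset of s
    gradient coordinates is inspected; the iterate moves toward the
    vertex +/- delta * e_i that minimizes the linear model on that
    subset."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)                         # null initial guess
    for t in range(iters):
        grad = A.T @ (A @ x - b)
        S = rng.choice(n, size=min(s, n), replace=False)
        i = S[np.argmax(np.abs(grad[S]))]   # best vertex in the sample
        v = np.zeros(n)
        v[i] = -delta * np.sign(grad[i])    # vertex of the l1 ball
        gamma = 2.0 / (t + 2.0)             # standard FW step size
        x = (1.0 - gamma) * x + gamma * v
    return x
```

Since each iterate is a convex combination of the starting point and at most one new vertex per step, the number of nonzero entries after t iterations is at most t; this incremental structure is what keeps FW solutions sparse.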
Figure 3 depicts, for two of the considered problems, the prediction error on the test set along the path found by both algorithms. It can be seen that both methods are able to find the same value of the best prediction error (i.e. to correctly identify the best model). The FW algorithm also seems to obtain slightly better results towards the end of the path, consistent with the fact that the coefficients of the most relevant variables tend to be more stable, compared with CD, when the regularization is weaker.
5.2 Results on Largescale Datasets
In this section, we report the results on the problems Pyrim, Triazines, E2006-tfidf and E2006-log1p. These datasets represent actual large-scale problems of practical interest. The Pyrim and Triazines datasets stem from two quantitative structure–activity relationship (QSAR) problems, where the biological responses of a set of molecules are measured and statistically related to the molecular structure on their surface. We expanded the original feature set by means of product features, i.e. modeling the response variable as a linear combination of polynomial basis functions, as suggested in [20]. For this experiment, we used product features of order and respectively, which leads to large-scale regression problems with and . Problems E2006-tfidf and E2006-log1p stem instead from the real-life NLP task of predicting the risk of investment (i.e. the stock return volatility) in a company based on available financial reports [25].

To implement the random sampling for the FW algorithm, we cannot use the technique described above for the experiments on the synthetic datasets, as in real problems we do not have an a priori estimate of the sparsity level. We therefore adopt a simpler strategy, and set to a fixed, small fraction of the total number of features. Our choices are summarized in Table 3.
% of features  Pyrim  Triazines  E2006-tfidf  E2006-log1p
1%  2,014  6,354  1,504  42,723
2%  4,028  12,708  3,008  85,445
3%  6,042  19,062  4,511  128,167
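The product-feature expansion used for Pyrim and Triazines can be sketched as follows (order 2 shown for concreteness; the orders actually used in the experiments are not reproduced here):

```python
from itertools import combinations_with_replacement

def product_features(x, order=2):
    """Expand a feature vector with all degree-`order` monomials
    x_i * x_j * ... built from its entries.  For n original features
    this adds C(n + order - 1, order) products, which is how small
    QSAR feature sets blow up into large-scale regression problems."""
    feats = list(x)
    for combo in combinations_with_replacement(range(len(x)), order):
        p = 1.0
        for i in combo:
            p *= x[i]
        feats.append(p)
    return feats
```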
As a measure of the performance of the considered algorithms, we report the CPU time in seconds, the total number of iterations and the number of requested dot products, which account for most of the required running time for all the algorithms (note that the SLEP code uses highly optimized libraries for matrix multiplication, so matrix–vector computations can be faster than naive C++ implementations), along the entire regularization path. Note that, in assessing the number of iterations, we consider one complete cycle of CD to be roughly equivalent to a full deterministic iteration of FW (since in both cases the complexity is determined by a full pass through all the coordinates) and to random coordinate explorations in SCD. Finally, in order to evaluate the sparsity of the solutions, we report the average number of active features along the path. Results are displayed in Tables 4 and 5. In the second table, the speedups with respect to the CD algorithm are also reported. It can be seen how, for all the choices of the sampling size, the FW algorithm allows for a substantial improvement in computational performance, as confirmed by both the CPU times and the machine-independent number of requested dot products (which are roughly proportional to the running times). The plain SCD algorithm performs somewhat worse than CD, something we attribute mainly to the fact that the Glmnet implementation of CD is highly optimized, using a number of ad hoc tricks tailored to the Lasso problem that we decided to preserve in our comparison. If we used a plain implementation of CD, we would expect to obtain results very close to those exhibited by SCD.
Furthermore, FW is always able to find the sparsest solution among the considered methods. The extremely large gap in average sparsity between FW and CD on one side, and the SLEP solvers on the other, is due to the fact that the latter in general compute dense iterates. Although the Accelerated Gradient Descent solver is fast and competitive from an optimization point of view, always requiring the lowest number of iterations as predicted by the theory, it is not able to keep the solutions sparse along the path. This behavior clearly highlights the advantage of using incremental approximations in the context of sparse recovery and feature selection. Importantly, note that the small number of features found by FW is not a result of the randomization technique: it is robust with respect to the sampling size, and additional experiments performed using a deterministic FW solver revealed that the average number of nonzero entries in the solution does not change even if the randomization is completely removed.
[Table 4: CPU time (s), iterations, dot products and average number of active features for CD, SCD, SLEP Reg. and SLEP Const. on Pyrim, Triazines, E2006-tfidf and E2006-log1p.]
[Table 5: CPU time (s), speedup with respect to CD, iterations, dot products and average number of active features for the randomized FW algorithm with the three sampling sizes of Table 3 on the same problems.]
To better assess the effect of using an incremental algorithm in obtaining a sparse model, we plot in Figure 4 the evolution of the number of active features along the path on problems E2006tfidf and E2006log1p. It can be clearly seen how CD and FW (with the latter performing the best overall) are able to recover sparser solutions, and can do so without losing accuracy in the model, as we show below.
In order to evaluate the accuracy of the obtained models, we plot in Figures 5 and 6 the mean squared error (MSE) against the norm of the solution along the regularization path, computed both on the original training set (curves 5(a-c) and 6(a-c)) and on the test set (curves 5(b-d) and 6(b-d)). Note that the value of the objective function in Problem (1) coincides with the MSE on the training set.
First of all, we can see how the decrease in the objective value is basically identical in all cases, which indicates that with our sampling choices the use of a randomized algorithm does not affect the optimization accuracy. Second, the test error curves show that the predictive capability of all the FW models is competitive with that of the models found by the CD algorithm (particularly in the case of the larger problem E2006-log1p). It is also important to note that in all cases the best model, corresponding to the minimum of the test error curves, is found for a relatively low value of the constraint parameter, indicating that sparse solutions are preferable and that solutions involving more variables tend to cause overfitting, which is yet another incentive to use algorithms that can naturally induce sparsity. Again, it can be seen how the minima of all the curves coincide, indicating that all the algorithms are able to correctly identify the best compromise between sparsity and training error. The fact that we are able to attain the same models obtained by a state-of-the-art algorithm such as Glmnet using a sampling size as small as of the total number of features is particularly noteworthy. Combined with the consistent advantages in CPU time over other competing solvers and its attractive sparsity properties, it shows how the randomized FW represents a solid, high-performance option for solving high-dimensional Lasso problems.
6 Conclusions and Perspectives
In this paper, we have studied the practical advantages of using a randomized Frank-Wolfe algorithm to solve the constrained formulation of the Lasso regression problem on high-dimensional datasets involving a number of variables ranging from hundreds of thousands to a few million. We have presented a theoretical proof of convergence based on the expected value of the objective function. Our experiments show that we are able to obtain results that outperform those of other state-of-the-art solvers such as the Glmnet algorithm, a standard among practitioners, without sacrificing the accuracy of the model in a significant way. Importantly, our solutions are consistently sparser than those found using several popular first-order methods, demonstrating the advantage of using an incremental, greedy optimization scheme in this context.
In future work, we intend to address the issue of whether it is possible to find suitable sampling conditions which can lead to a stronger stochastic convergence result, i.e. to certifiable probability bounds for approximate solutions. Finally, we remark that the proposed approach can be readily extended to similar problems such as the Elastic Net, to more general regularized problems such as logistic regression, or to related applications such as the sparsification of SVM models. Another possibility for tackling various Lasso formulations is to exploit an equivalent formulation in terms of SVMs, an area where FW methods have already shown promising results. Together, all these elements strengthen the conclusions of our previous research work, showing that FW algorithms can provide a complete and flexible framework to efficiently solve a wide variety of large-scale Machine Learning and Data Mining problems.
Acknowledgments
The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/20072013) / ERC AdG ADATADRIVEB (290923). This paper reflects only the authors’ views and the Union is not liable for any use that may be made of the contained information. Research Council KUL: GOA/10/09 MaNet, CoE PFV/10/002 (OPTEC), BIL12/11T; Flemish Government: FWO: projects: G.0377.12 (Structured systems), G.088114N (Tensor based data similarity); PhD/Postdoc grants; iMinds Medical Information Technologies SBO 2014; IWT: POM II SBO 100031; Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization, 20122017). The second author received funding from CONICYT Chile through FONDECYT Project 11130122. The first author thanks the colleagues from the Department of Computer Science and Engineering, University of Bologna, for their hospitality during the period in which this research was conceived.
References

[1] Andreas Argyriou, Marco Signoretto, and Johan A. K. Suykens. Hybrid algorithms with applications to sparse and low rank regularization. In Johan A. K. Suykens, Marco Signoretto, and Andreas Argyriou, editors, Regularization, Optimization, Kernels, and Support Vector Machines, chapter 3, pages 53–82. Chapman & Hall/CRC, 2014.
[2] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2011.
[3] Kenneth Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms, 6(4):63:1–63:30, 2010.
[4] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Ann. Stat., 32(2):407–499, 2004.
[5] E. Frandi, R. Ñanculef, and J. A. K. Suykens. Complexity issues and randomization strategies in Frank-Wolfe algorithms for machine learning. In 7th NIPS Workshop on Optimization for Machine Learning, 2014.
[6] E. Frandi, R. Ñanculef, and J. A. K. Suykens. A PARTAN-accelerated Frank-Wolfe algorithm for large-scale SVM classification. In Proceedings of the IJCNN 2015, 2015.

[7] Emanuele Frandi, Maria Grazia Gasparo, Stefano Lodi, Ricardo Ñanculef, and Claudio Sartori. A new algorithm for training SVMs using approximate minimal enclosing balls. In Proceedings of the 15th Iberoamerican Congress on Pattern Recognition, pages 87–95. Springer, 2010.
[8] Emanuele Frandi, Maria Grazia Gasparo, Stefano Lodi, Ricardo Ñanculef, and Claudio Sartori. Training support vector machines using Frank-Wolfe methods. IJPRAI, 27(3), 2011.
 [9] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 1:95–110, 1956.
 [10] Jerome Friedman. Fast sparse regression and classification. Int. J. Forecast., 28:722–738, 2012.
 [11] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise coordinate optimization. Ann. Appl. Stat., 1(2):302–332, 2007.
 [12] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw., 33(1):1–22, 2010.
[13] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the 32nd ICML, 2015.
[14] Joachim Giesen, Martin Jaggi, and Sören Laue. Optimizing over the growing spectrahedron. In 20th Annual European Symposium on Algorithms, LNCS, pages 503–514. Springer, 2012.
 [15] Jacques Guélat and Patrice Marcotte. Some comments on Wolfe’s “away step”. Math. Prog., 35:110–119, 1986.

[16] Jiang Gui and Hongzhe Li. Penalized Cox regression analysis in the high-dimensional and low-sample size settings, with applications to microarray gene expression data. Bioinformatics, 21(13):3001–3008, 2005.
[17] Zaid Harchaoui, Anatoli Juditsky, and Arkadi Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Math. Prog. Ser. A, 13(1):1–38, 2014.
 [18] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer New York Inc., 2001.
 [19] Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
 [20] L. Huang, J. Jia, B. Yu, B. G. Chun, P. Maniatis, and M. Naik. Predicting execution time of computer programs using sparse polynomial regression. In Advances in Neural Information Processing Systems, pages 883–891, 2010.

[21] Georgiana Ifrim, Gökhan Bakir, and Gerhard Weikum. Fast logistic regression for text categorization with variable-length n-grams. In Proceedings of the 14th ACM SIGKDD, pages 354–362, 2008.
[22] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, 2013.
 [23] Martin Jaggi. An equivalence between the lasso and support vector machines. In Johan A. K. Suykens, Marco Signoretto, and Andreas Argyriou, editors, Regularization, Optimization, Kernels, and Support Vector Machines, chapter 1, pages 1–26. Chapman & Hall/CRC, 2014.
 [24] Seung-Jean Kim, Kwangmoo Koh, Michael Lustig, Stephen Boyd, and Dimitry Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE J. Sel. Top. Sign. Proces., 1(4):606–617, 2007.
 [25] Shimon Kogan, Dimitry Levin, Bryan R. Routledge, Jacob S. Sagi, and Noah A. Smith. Predicting risk from financial reports with regression. In Proceedings of the NAACL ’09, pages 272–280, 2009.
 [26] Simon Lacoste-Julien and Martin Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe algorithms. arXiv:1312.7864v2, 2014.
 [27] Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In Proceedings of the 30th International Conference on Machine Learning, 2013.
 [28] Guanghui Lan. The complexity of large-scale convex programming under a linear optimization oracle. arXiv:1309.5550v2, 2014.
 [29] John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems, pages 905–912, 2009.

[30] Mihee Lee, Haipeng Shen, Jianhua Z. Huang, and J. S. Marron. Biclustering via sparse singular value decomposition. Biometrics, 66(4):1087–1095, 2010.
[31] Yuanqing Li, Praneeth Namburi, Zhuliang Yu, Cuntai Guan, Jianfeng Feng, and Zhenghui Gu. Voxel selection in fMRI data analysis based on sparse representation. IEEE Trans. Biomed. Eng., 56(10):2439–2451, 2009.
 [32] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections, http://www.yelab.net/software/SLEP/. Arizona State University, 2009.
 [33] Jun Liu and Jieping Ye. Efficient Euclidean projections in linear time. In Proceedings of the 26th ICML, pages 657–664. ACM, 2009.
 [34] Jun Liu and Jieping Ye. Efficient / norm regularization. arXiv:1009.4766, 2010.
 [35] Ricardo Ñanculef, Emanuele Frandi, Claudio Sartori, and Héctor Allende. A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training. Inf. Sci., 285:66–99, 2014.
 [36] Yu. Nesterov. Gradient methods for minimizing composite functions. Math. Prog. Ser. B, 140(1):125–161, 2013.
 [37] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikitlearn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011.
 [38] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Prog. Ser. A, 144(1):1–38, 2014.
 [39] Bernhard Schölkopf and Alexander Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
 [40] Shai ShalevShwartz, Nathan Srebro, and Tong Zhang. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM J. on Optimization, 20(6):2807–2832, 2010.
 [41] Shai ShalevShwartz and Ambuj Tewari. Stochastic methods for regularized loss minimization. J. Mach. Learn. Res., 12:1865–1892, 2011.
 [42] M. Signoretto, E. Frandi, Z. Karevan, and J. A. K. Suykens. High level high performance computing for multitask learning of timevarying models. In IEEE CIBD 2014, 2014.
 [43] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Series B, 58(1):267–288, 1996.
 [44] Robert Tibshirani. Regression shrinkage and selection via the lasso: a retrospective. J. R. Stat. Soc., Series B, 73(3):273–282, 2011.
 [45] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc., Series B, 67(1):91–108, 2005.
 [46] Joel A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory, 50(10):2231–2242, 2004.
 [47] B. A. Turlach. On algorithms for solving least squares problems under an penalty or an constraint. In Proc. Am. Stat. Assoc., Stat. Comp. Sec., pages 2572–2577, 2005.
 [48] Yijie Wang and Xiaoning Qian. Stochastic coordinate descent Frank-Wolfe algorithm for large-scale biological network alignment. In GlobalSIP14 - Workshop on Genomic Signal Processing and Statistics, 2014.
 [49] Jason Weston, André Elisseeff, Bernhard Schölkopf, and Mike Tipping. Use of the zero norm with linear models and kernel methods. J. Mach. Learn. Res., 3:1439–1461, 2003.
 [50] Philip Wolfe. Convergence theory in nonlinear programming. In Integer and Nonlinear Programming, pages 1–36. NorthHolland, 1970.
 [51] Quan Zhou, Shiji Song, Gao Huang, and Cheng Wu. Efficient Lasso training from a geometrical perspective. Neurocomputing (in press), 2015.
 [52] Hui Zou. The adaptive lasso and its oracle properties. JASA, 101(476):1418–1429, 2006.
 [53] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. J. R. Stat. Soc., Series B, 67:301–320, 2005.
 [54] Hui Zou and Hao Helen Zhang. On the adaptive elasticnet with a diverging number of parameters. Ann. Stat., 37(4):1733, 2009.
Appendix: proof of Proposition 2
Let be a convex differentiable real function on . Given , we define the restricted gradient of with respect to as its scaled projection, i.e.
(14) 
where . We define the curvature constant of over a compact set as
(15) 
where .
For any we define its primal gap and duality gap as
(16)  
(17) 
respectively. Convexity of the function implies that is nowhere greater than . Therefore,
(18) 
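For concreteness, under the usual Frank-Wolfe conventions (writing $f$ for the objective, $\Sigma$ for the feasible set and $x^*$ for a minimizer; the paper's own notation may differ), the primal gap (16), the duality gap (17) and the bound (18) take the standard form:

```latex
h(x) \;=\; f(x) - f(x^*), \qquad
g(x) \;=\; \max_{v \in \Sigma} \nabla f(x)^{T}(x - v),
% Convexity gives f(x^*) \ge f(x) + \nabla f(x)^T (x^* - x), hence
h(x) \;\le\; \nabla f(x)^{T}(x - x^*) \;\le\; g(x).
```

so the duality gap is a computable upper bound on the (unknown) primal gap, which is the inequality invoked in (18).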
For any iterate generated by our algorithm, we define its expected primal gap and duality gap as
(19)  
(20) 
respectively. Here we denote by the random subset of