Efficiently Using Second Order Information in Large l1 Regularization Problems

03/27/2013
by   Xiaocheng Tang, et al.
Lehigh University

We propose a novel general algorithm, LHAC, that efficiently uses second-order information to solve a class of large-scale l1-regularized problems. Our method executes cheap iterations while achieving a fast local convergence rate by exploiting the special structure of a low-rank matrix, constructed via a quasi-Newton approximation of the Hessian of the smooth loss function. A greedy active-set strategy, based on the largest violations in the dual constraints, is employed to maintain a working set that iteratively estimates the complement of the optimal active set. This allows for smaller subproblem sizes and eventually identifies the optimal active set. Empirical comparisons confirm that LHAC is highly competitive with several recently proposed state-of-the-art specialized solvers for sparse logistic regression and sparse inverse covariance matrix selection.


1 Introduction

We consider the convex sparse unconstrained minimization problem of the following general form:

$\min_{x \in \mathbb{R}^n} \ F(x) := f(x) + \lambda \|x\|_1,$   (1)

where $f$ is convex and twice differentiable and $\lambda > 0$ is the regularization parameter that controls the sparsity of $x$. More generally, the regularization term can be extended to $\sum_{i=1}^n \lambda_i |x_i|$ to allow for different regularization weights on different entries, e.g., when a certain sparsity pattern is desired in the solution $x$. We will focus on the simpler form (1) in this work for the sake of simplicity of presentation, as the extension to the general form is straightforward.
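As an illustration, a minimal NumPy sketch of evaluating the composite objective in (1), optionally with per-coordinate regularization weights as discussed above. The function names are ours and purely illustrative, and the quadratic smooth part is a placeholder for a general convex, twice differentiable $f$ (our actual implementation is in MATLAB).

import numpy as np

def composite_objective(f, x, lam):
    # F(x) = f(x) + sum_i lam_i * |x_i|; `lam` may be a scalar or a weight vector
    return f(x) + np.sum(np.asarray(lam) * np.abs(x))

f = lambda x: 0.5 * x @ x                           # placeholder smooth loss
x = np.array([0.0, -1.5, 2.0])
print(composite_objective(f, x, 0.1))               # uniform weight lambda
print(composite_objective(f, x, [0.1, 0.0, 0.3]))   # per-coordinate weights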

Problems of the form (1) have been the focus of much research lately in the fields of signal processing and machine learning. This form encompasses a variety of machine learning models in which feature selection is desirable, such as sparse logistic regression Yuan et al. (2010, 2011); Shalev-Shwartz and Tewari (2009), sparse inverse covariance selection Hsieh et al. (2011); Olsen et al. (2012); Scheinberg and Rish (2009), Lasso Tibshirani (1996), etc. These settings often present common difficulties to optimization algorithms due to their large scale. During the past decade most optimization effort aimed at these problems focused on the development of efficient first-order methods, such as accelerated proximal gradient methods Nesterov (2007); Beck and Teboulle (2009); Wright et al. (2009), block coordinate descent methods Yuan et al. (2011); Friedman et al. (2010, 2008); Scheinberg and Rish (2009) and alternating direction methods Scheinberg et al. (2010). These methods enjoy low per-iteration complexity, but typically exhibit slow local convergence rates. Their performance is often hampered by small step sizes. This, of course, has been known about first-order methods for a long time; however, due to the very large size of these problems, second-order methods are often not a practical alternative. In particular, constructing and storing a Hessian matrix, let alone inverting it, is prohibitively expensive once the number of variables $n$ grows large, which often makes the use of the Hessian in large-scale problems impractical, regardless of the benefits of a fast local convergence rate.

Nevertheless, several new methods were recently proposed for sparse optimization which make careful use of second-order information Hsieh et al. (2011); Yuan et al. (2011); Olsen et al. (2012); Byrd et al. (2012). These methods exploit the following special properties of sparse problems: at optimality many of the elements of $x$ are expected to equal $0$, hence methods which employ active-set-like approaches can benefit from small subproblem sizes. Whenever the subproblems are not small, these new methods exploit the idea that the subproblems do not need to be solved accurately. In particular, we take note of the following methods.

Yuan et al. (2011) propose a specialized GLMNET Friedman et al. (2010) implementation for sparse logistic regression, where a coordinate descent method is applied to the lasso subproblem constructed using the Hessian of $f$, the smooth component of the objective $F$. Two major improvements are discussed to enhance GLMNET for larger problems: exploitation of the special structure of the Hessian, to reduce the complexity of each coordinate step so that it is linear in the number of training instances, and a two-level shrinking scheme, proposed to focus the minimization on smaller subproblems. Hsieh et al. (2011) later use similar ideas in their specialized algorithm QUIC for sparse inverse covariance selection. Benefiting both from its active-set strategy, which eventually converges to the optimal nonzero subspace, and from its efficient use of the Hessian, QUIC behaves like a Newton-type algorithm and is able to claim quadratic local convergence. Another related line of work begins with the paper by Wright (2012), which proposes and analyzes an algorithm characterized by a two-phase step computation: a first phase in which a gradient descent step is taken towards minimizing the subproblem, and an enhanced phase in which a Newton-like step is carried out in the nonzero subspace resulting from the first phase. Similar ideas are also explored in Olsen et al. (2012), which proposes a class of orthant-based Newton methods in which a Newton-like step is applied within an orthant face that lies in a reduced subspace identified by first taking a steepest descent step. A backtracking line search, however, has to be put in place afterwards to project the step back onto the orthant whenever the step overshoots.

The above-mentioned methods share similar attributes. In particular, all of them incorporate the actual Hessian, either by confining the subproblem minimization to a smooth subspace Wright (2012); Olsen et al. (2012), or by using it along with coordinate descent methods Yuan et al. (2011); Hsieh et al. (2011). An active-set strategy is another key element shared by these approaches; it facilitates the use of the Hessian, often requiring only the reduced Hessian rather than the full one, and, more importantly, helps identify the optimal nonzero subspace and eventually achieve (with the Hessian) fast local convergence. Unfortunately, however, most of those active-set methods are only able to shrink the size of the subspace significantly when the current iterate is close enough to optimality. Some algorithms are aware of this: e.g., Wright (2012) gives up the Hessian and returns to first-order steps if the size of the subspace exceeds 500, QUIC uses a small number (set to a constant) to control the subspace size, etc. Hence, the efficient use of second-order information in large problems is still a challenge. Of the aforementioned methods, the ones by Yuan et al. (2011) and Hsieh et al. (2011) produce the most satisfying results. But we note that both are specialized algorithms that heavily depend on the special structure present in the Hessians of the corresponding models, so their applicability is limited.

In this work we make use of ideas similar to those in Yuan et al. (2011) and Hsieh et al. (2011), but further improve upon them to obtain an efficient general scheme. In particular, we use a different working set selection strategy than the one used in Hsieh et al. (2011): we select a working set by observing the largest violations in the dual constraints. Similar techniques have been used successfully in optimization for many decades, for instance in linear programming Bixby et al. (1992) and in SVMs Scheinberg (2006). As in QUIC and GLMNET, we optimize the lasso subproblems using coordinate descent, but we estimate the Hessian using the limited-memory BFGS method Nocedal and Wright (2006), because the low-rank form of the Hessian estimates reduces the per-iteration complexity of each coordinate descent step to a constant.

Our goal here, as mentioned above, is to achieve a second-order type convergence rate while maintaining a per-iteration complexity comparable with that of most first-order methods. Following the approach of Yuan et al. (2011) and Hsieh et al. (2011), we apply coordinate descent iteratively to the lasso subproblems constructed at the current point. The acceleration of the subproblem minimization, therefore, depends on controlling either the number of coordinate descent steps or the complexity of each individual step. The contributions of our work are thus twofold. First of all, we adaptively maintain a working set of coordinates with the largest optimality violations, such that the steps we take along those coordinates always provide the best objective function value improvement. The greedy nature of this approach helps reduce the violation of the optimality conditions rather aggressively in practice while effectively avoiding zero updates, and, more importantly, extends or shrinks (depending on the initial point) the working set incrementally until it converges to the complement of the optimal active set. Secondly, we explore the use of the Hessian and Hessian approximations in the coordinate descent framework. We show that each coordinate descent step can be greatly accelerated by the use of a special form of the limited-memory BFGS method Nocedal and Wright (2006). For example, in the case of sparse logistic regression, where the Hessian of the logistic loss is $X^T D X$ with $D$ a diagonal matrix and $X$ the data matrix, the best implementation so far can only reduce the complexity of each coordinate descent step to $O(N)$ flops, where $N$ is the number of training instances Yuan et al. (2011), while with the help of LBFGS, which approximates the Hessian by the low-rank form $B = \gamma I - Q\hat{Q}$ with $Q \in \mathbb{R}^{n \times 2m}$ and $\hat{Q} \in \mathbb{R}^{2m \times n}$, we are able to bring that complexity down to a constant $O(m)$, depending on the limited-memory parameter $m$, which is often chosen between 5 and 20. The key observation here is that, unlike the true Hessian, the Hessian approximation obtained by LBFGS differs from a multiple of the identity only by a low-rank matrix, and that can be exploited to expedite the computation of $(Bd)_i$, the main expense of every coordinate descent step, by letting $(Bd)_i = \gamma d_i - q_i^T \hat{d}$, where $q_i^T$ is the $i$-th row vector of the matrix $Q$ and $\hat{d} = \hat{Q}d$ is cached and updated using one column of $\hat{Q}$ for each step.

The paper is organized as follows. In Section 2 we explain how the subproblems are generated using LBFGS updates and working set selection. In Section 3 we explain how coordinate descent is applied to solve the subproblems with low-rank Hessians. In Section 4 we present computational results on two instances of sparse logistic regression and five instances of sparse inverse covariance selection. The results demonstrate a significant advantage of our approach compared to the other methods which inspired this work.

2 Outer iterations

Based on a generalization of the sequential quadratic programming method for nonlinear optimization Nocedal and Wright (2006); Tseng and Yun (2009), our approach iteratively constructs a piecewise quadratic model to be used in the step computation. At iteration $k$ the model is obtained by expanding the smooth component $f$ around the current iterate $x_k$ and keeping the $\ell_1$ regularization term, as follows:

$q_k(d) = f(x_k) + \nabla f(x_k)^T d + \tfrac{1}{2} d^T B_k d + \lambda \|x_k + d\|_1,$   (2)

where $B_k$ can be any positive definite matrix Tseng and Yun (2009). In particular, we note that both Hsieh et al. (2011) and Yuan et al. (2011) choose $B_k$ to be the Hessian of $f$, $\nabla^2 f(x_k)$, in which case the objective function of (2) is simply composed of the second-order Taylor expansion of $f$ and the term $\lambda\|x_k + d\|_1$.
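For concreteness, a small NumPy sketch of evaluating the piecewise quadratic model (2) at a trial step $d$; the names are illustrative, and $B$ is any positive definite matrix supplied by the caller.

import numpy as np

def model_value(f_xk, grad_xk, B, xk, d, lam):
    # q_k(d) = f(x_k) + grad_f(x_k)^T d + 0.5 * d^T B d + lam * ||x_k + d||_1
    return f_xk + grad_xk @ d + 0.5 * d @ (B @ d) + lam * np.sum(np.abs(xk + d))

# with B = np.eye(len(d)) the model reduces to the standard proximal-gradient subproblem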

An active-set method maintains a set of indices that iteratively estimates the optimal active set $\mathcal{A}^*$, which contains the indices of zero entries in the optimal solution $x^*$ of (1),

$\mathcal{A}^* = \{\, i \in \{1,\dots,n\} : x^*_i = 0 \,\},$   (3)

where $x^* = \arg\min_x F(x)$. We use $\mathcal{A}_k$ to denote this set at the $k$-th iteration. For those coordinates $i \in \mathcal{A}_k$, we fix the corresponding entry of the current iterate so that it does not change from the current iteration to the next, $(x_{k+1})_i = (x_k)_i$. Equivalently, the descent step along that coordinate stays zero, $(d_k)_i = 0$. Adding the active-set strategy to our descent step computation (2), we thus obtain

$d_k = \arg\min_d \{\, q_k(d) : d_i = 0, \ \forall i \in \mathcal{A}_k \,\}.$   (4)

Next we discuss the particular choices we make in selecting the positive definite matrix $B_k$ and the set $\mathcal{A}_k$, or rather its complement $\mathcal{I}_k = \{1,\dots,n\} \setminus \mathcal{A}_k$, which we refer to in this paper as the working set.

2.1 Low-Rank Hessian Approximation

We make use of Theorem 1 from Nocedal and Wright (2006), which gives a specific form of the low-rank Hessian estimates, which we denote by $B_k$. $B_k$ is essentially a low-rank approximation of the Hessian of $f$ obtained through the well-known limited-memory BFGS method, which allows capturing curvature information to help achieve faster local convergence.

Theorem 1.

Let $B^0$ be symmetric and positive definite, and assume that the vector pairs $\{s_i, y_i\}_{i=0}^{k-1}$ satisfy $s_i = x_{i+1} - x_i$, $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$ and $s_i^T y_i > 0$. Let $B_k$ be obtained by applying $k$ BFGS updates with these vector pairs to $B^0$, using the formula:

$B_{i+1} = B_i - \dfrac{B_i s_i s_i^T B_i}{s_i^T B_i s_i} + \dfrac{y_i y_i^T}{y_i^T s_i}.$   (5)

We then have that

$B_k = B^0 - \begin{bmatrix} B^0 S_k & Y_k \end{bmatrix} \begin{bmatrix} S_k^T B^0 S_k & L_k \\ L_k^T & -D_k \end{bmatrix}^{-1} \begin{bmatrix} S_k^T B^0 \\ Y_k^T \end{bmatrix},$   (6)

where $S_k$ and $Y_k$ are the $n \times k$ matrices defined by

$S_k = [\, s_0, \dots, s_{k-1} \,], \qquad Y_k = [\, y_0, \dots, y_{k-1} \,],$   (7)

while $L_k$ and $D_k$ are the $k \times k$ matrices

$(L_k)_{i,j} = \begin{cases} s_{i-1}^T y_{j-1} & \text{if } i > j, \\ 0 & \text{otherwise}, \end{cases}$   (8)
$D_k = \operatorname{diag}\big[\, s_0^T y_0, \dots, s_{k-1}^T y_{k-1} \,\big].$   (9)

For large-scale problems, the BFGS method is often used in the limited-memory setting, known as the L-BFGS method Nocedal and Wright (2006). Note that the matrices $S_k$ and $Y_k$ are augmented by one column every iteration according to (7), and the update (6) soon becomes inefficient if all the previous pairs are used.

In the limited-memory setting, we maintain the set of the $m$ most recent correction pairs $\{s_i, y_i\}$ by removing the oldest pair and adding a newly generated pair. Hence when the number of iterations exceeds $m$, the representation of the matrices needs to be slightly modified to reflect the changing nature of this set, and $S_k$ and $Y_k$ are maintained as $n \times m$ matrices. Also, $L_k$ and $D_k$ are now $m \times m$ matrices.
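In code, maintaining the $m$ most recent correction pairs amounts to a simple sliding window over the columns of $S$ and $Y$; a minimal NumPy sketch with illustrative names:

import numpy as np

def update_pairs(S, Y, s_new, y_new, m):
    # append the newest pair as a column and keep only the m most recent columns
    S = np.hstack([S, s_new[:, None]])[:, -m:]
    Y = np.hstack([Y, y_new[:, None]])[:, -m:]
    return S, Y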

In the L-BFGS algorithm, the basic matrix $B_k^0$ may vary from iteration to iteration. A popular choice in practice is to set the basic matrix at the $k$-th iteration to $B_k^0 = \gamma_k I$, where

$\gamma_k = \dfrac{y_{k-1}^T y_{k-1}}{s_{k-1}^T y_{k-1}},$   (10)

which has proved effective in ensuring that the search direction is well-scaled so that less time is spent on line search. With this particular choice of $B_k^0$, we define $Q_k$ and $R_k$ to be

$Q_k = \begin{bmatrix} \gamma_k S_k & Y_k \end{bmatrix},$   (11)
$R_k = \begin{bmatrix} \gamma_k S_k^T S_k & L_k \\ L_k^T & -D_k \end{bmatrix},$   (12)

and the formula to update $B_k$ becomes

$B_k = \gamma_k I - Q_k R_k^{-1} Q_k^T$   (13)
$\phantom{B_k} = \gamma_k I - Q_k \hat{Q}_k, \qquad \hat{Q}_k := R_k^{-1} Q_k^T,$   (14)

where $\gamma_k$ is given by (10). Hence, the work to compute $B_k$ only requires the matrix $Q_k$, which is a collection of previous iterates and gradient differences, and the matrix $R_k$, which is a $2m$ by $2m$ nonsingular (as long as $s_i^T y_i > 0$) matrix whose inverse is thus easy to compute given that $m$ is a small constant.
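The following NumPy sketch assembles $\gamma_k$, $Q_k$ and $R_k^{-1}$ from the stored pairs according to (10)-(12), and forms the dense $B_k$ only to check positive definiteness; in LHAC itself $B_k$ is never formed explicitly. Function and variable names are illustrative.

import numpy as np

def lbfgs_compact(S, Y):
    # Return (gamma, Q, R_inv) so that B = gamma*I - Q @ R_inv @ Q.T, cf. (10)-(13).
    # Columns of S, Y are the stored correction pairs, most recent last.
    s, y = S[:, -1], Y[:, -1]
    gamma = (y @ y) / (s @ y)                 # scaling of B_k^0 = gamma*I, eq. (10)
    StY = S.T @ Y
    L = np.tril(StY, k=-1)                    # strictly lower triangular part, eq. (8)
    D = np.diag(np.diag(StY))                 # diag(s_i^T y_i), eq. (9)
    Q = np.hstack([gamma * S, Y])             # eq. (11)
    R = np.block([[gamma * (S.T @ S), L],
                  [L.T, -D]])                 # eq. (12)
    return gamma, Q, np.linalg.inv(R)

# Illustrative check: B should be positive definite when s_i^T y_i > 0 holds.
rng = np.random.default_rng(0)
n, m = 8, 3
S = rng.standard_normal((n, m))
Y = S + 0.1 * rng.standard_normal((n, m))     # pairs chosen so s_i^T y_i > 0 (almost surely)
gamma, Q, R_inv = lbfgs_compact(S, Y)
B = gamma * np.eye(n) - Q @ R_inv @ Q.T
print(np.all(np.linalg.eigvalsh(B) > 0))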

2.2 Greedy Active-set Selection

An obvious choice of $\mathcal{I}_k$ would be $\{1, \dots, n\}$, taking into account all the coordinates in the subproblem minimization. But as we said earlier, this can be inefficient because there will potentially be many non-progressive steps where $z^*$ in (24) ends up being zero. For example, if $(x_k)_i = 0$ and $d_i = 0$, then $c$ turns into zero and (24) becomes

$z^* = \operatorname{sign}(-b/a)\,\max(|b/a| - \lambda/a,\, 0),$   (15)

which is equal to zero if the $i$-th entry of the subgradient $\partial F(x_k)$, defined below, is zero:

$\partial_i F(x) = \begin{cases} \nabla_i f(x) + \lambda & \text{if } x_i > 0, \\ \nabla_i f(x) - \lambda & \text{if } x_i < 0, \\ \operatorname{sign}(\nabla_i f(x))\,\max(|\nabla_i f(x)| - \lambda,\, 0) & \text{if } x_i = 0. \end{cases}$   (16)

Hence, possibly the worst choice of $\mathcal{I}_k$ would be letting $\mathcal{I}_k = \{ i : \partial_i F(x_k) = 0 \}$, which will result in nothing but all zeros in the coordinate descent update.
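The entrywise minimum-norm subgradient (16) is what the optimality test and the working-set selection below rely on; a minimal NumPy sketch with illustrative names:

import numpy as np

def min_norm_subgradient(grad_f, x, lam):
    # entrywise minimum-norm subgradient of F = f + lam*||.||_1, cf. (16)
    shrunk = np.sign(grad_f) * np.maximum(np.abs(grad_f) - lam, 0.0)
    return np.where(x > 0, grad_f + lam,
           np.where(x < 0, grad_f - lam, shrunk))

# an optimality test (as used in Section 4) can compare the norm of this vector
# against epsilon times its value at the starting point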

Let us now consider two potentially good choices of $\mathcal{I}_k$:

$\mathcal{I}^1_k = \{\, i : \partial_i F(x_k) \neq 0 \,\},$   (17)

used by Yuan et al. (2011) and Hsieh et al. (2011), and

$\mathcal{I}^2_k = \{\, i : (x_k)_i \neq 0 \,\},$   (18)

considered by Wright (2012) and Olsen et al. (2012). Note that $\mathcal{I}^2_k \subseteq \mathcal{I}^1_k$, because in practice $\partial_i F(x_k)$ can only be exactly zero if $(x_k)_i = 0$ according to (16), so we have that if $i \in \mathcal{I}^2_k$ then $i \in \mathcal{I}^1_k$. Also note that the two sets will both converge to the complement of the optimal active set when the correct non-zeros have been identified in $x_k$ and $x_k$ is close enough to the optimal $x^*$, because in that case the violations in the dual constraints at the zero entries of $x_k$ will be zero, so that $\partial_i F(x_k) = 0$ whenever $(x_k)_i = 0$. The above two observations can also be understood by representing $\mathcal{I}^1_k$ as the union of the two sets

$\mathcal{I}^{nz}_k = \{\, i : (x_k)_i \neq 0, \ \partial_i F(x_k) \neq 0 \,\}$   (19)
$\mathcal{I}^{v}_k = \{\, i : (x_k)_i = 0, \ |\nabla_i f(x_k)| > \lambda \,\},$   (20)

and we have that $\mathcal{I}^{nz}_k \subseteq \mathcal{I}^2_k$ in general, and that $\mathcal{I}^1_k = \mathcal{I}^2_k$ only when $\mathcal{I}^{v}_k = \emptyset$ and $\mathcal{I}^{nz}_k = \mathcal{I}^2_k$.

Now let us introduce our rule to select the working set $\mathcal{I}_k$. In particular, let us use $j_l$ to denote the coordinate that has the $l$-th largest violation in the dual constraints, i.e., the $l$-th largest value of $\max(|\nabla_j f(x_k)| - \lambda,\, 0)$. We then choose the working set for the $k$-th iteration by

$\mathcal{I}_k = \mathcal{I}^2_k \cup \{\, j_1, j_2, \dots, j_\rho \,\},$   (21)

where $\rho$ is a small integer chosen as a fraction of $n$. Note that our $\mathcal{I}_k$ is largely based on $\mathcal{I}^2_k$, whose size, as discussed above, is smaller than that of $\mathcal{I}^1_k$ as used by Yuan et al. (2011) and Hsieh et al. (2011). This, of course, enables us to solve a smaller subproblem (2). However, we also note that choosing $\mathcal{I}_k = \mathcal{I}^2_k$ alone in our case is a bad idea, because it does not allow zero elements of $x_k$ to become nonzero, so the method may fail to converge. To ensure convergence, we have to let every coordinate be able to enter or leave our working set $\mathcal{I}_k$, which is one purpose of taking the union with the set $\{j_1, \dots, j_\rho\}$.
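One way to realize the greedy rule (21) in code: take the coordinates that are currently nonzero and add the $\rho$ coordinates with the largest dual-constraint violations. The NumPy sketch below is illustrative (names and tie-breaking are ours).

import numpy as np

def select_working_set(x, grad_f, lam, rho):
    # dual-constraint violation of coordinate j: max(|grad_j f(x)| - lam, 0)
    violation = np.maximum(np.abs(grad_f) - lam, 0.0)
    top = np.argsort(-violation)[:rho]
    top = top[violation[top] > 0]          # keep only genuinely violated coordinates
    return np.union1d(np.flatnonzero(x), top)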

2.3 Line Search and Convergence Analysis

After the step $d_k$ is computed, a line search procedure needs to be employed in order for the convergence analysis to follow from the framework of Tseng and Yun (2009). In this work, we adopt the Armijo rule, choosing the step size $\alpha_k$ to be the largest element from $\{\beta^0, \beta^1, \beta^2, \dots\}$ satisfying

$F(x_k + \alpha d_k) \le F(x_k) + \alpha \sigma \Delta_k,$   (22)

where $0 < \beta < 1$, $0 < \sigma < 1$, and $\Delta_k = \nabla f(x_k)^T d_k + \lambda\|x_k + d_k\|_1 - \lambda\|x_k\|_1$. The convergence analysis from Tseng and Yun (2009) also requires the positive definiteness of the matrix $B_k$. Since we obtain $B_k$ using the LBFGS update, it is guaranteed to be positive definite as long as the product $s_k^T y_k > 0$. We note that when $f$ in (1) is strongly convex, then $s^T y > 0$ holds for any two points; otherwise, if $f$ might be non-convex, damped BFGS updating needs to be employed Nocedal and Wright (2006).
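A minimal backtracking implementation of the Armijo rule (22); the argument delta is the quantity $\Delta_k$ defined above, and the default values of beta and sigma are illustrative, not the ones used in our experiments.

def armijo_line_search(F, x, d, delta, beta=0.5, sigma=1e-3, max_backtracks=50):
    # largest alpha in {1, beta, beta^2, ...} with
    # F(x + alpha*d) <= F(x) + alpha*sigma*delta, cf. (22)
    Fx = F(x)
    alpha = 1.0
    for _ in range(max_backtracks):
        if F(x + alpha * d) <= Fx + alpha * sigma * delta:
            break
        alpha *= beta
    return alpha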

1:  Choose an initial iterate $x_0$, the LBFGS parameter $m$, the working set parameter $\rho$
2:  Set $S$, $Y$ to empty matrices
3:  Set $\gamma_0 = 1$
4:  Set the iteration counter $k = 0$
5:  Compute the gradient $\nabla f(x_0)$ and the subgradient $\partial F(x_0)$
6:  while  optimality test returns false  do
7:     Set $d = 0$ and $\hat{d} = 0$
8:     Compute the working set $\mathcal{I}_k$ as in (21)
9:     for  each coordinate $i \in \mathcal{I}_k$  do
10:        Compute $(B_k)_{ii}$
11:     end for
12:     for  each coordinate $i \in \mathcal{I}_k$  do
13:        Compute $(B_k d)_i$ as in (26)
14:        Compute $z^*$ according to (24)
15:        Update $d_i \leftarrow d_i + z^*$, $\hat{d} \leftarrow \hat{d} + z^* \hat{q}_i$
16:     end for
17:     Compute the step size $\alpha_k$ according to (22)
18:     Set $x_{k+1} = x_k + \alpha_k d$
19:     Compute the gradient $\nabla f(x_{k+1})$ and the subgradient $\partial F(x_{k+1})$
20:     Set $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$
21:     Update $S_k$, $Y_k$ according to Theorem 1
22:     Compute $Q_k$, $\hat{Q}_k$ according to Theorem 1
23:  end while
Algorithm 2.1 LHAC: LOW RANK HESSIAN APPROXIMATION IN ACTIVE-SET COORDINATE DESCENT

3 The inner problem

At the $k$-th iteration, given the current iterate $x_k$, we apply a coordinate descent method to the piecewise quadratic subproblem (2) to obtain the direction $d_k$. Suppose the $j$-th coordinate of $d$ is updated, hence $d \leftarrow d + z e_j$ ($e_j$ is the $j$-th column of the identity). Then $z$ is obtained by solving the following one-dimensional problem

$\min_z \ \tfrac{1}{2} a z^2 + b z + \lambda |c + z|,$   (23)

which is well known to have the following closed-form solution that can be obtained by one soft-thresholding step Donoho (1995); Hsieh et al. (2011):

$z^* = -c + \operatorname{sign}(c - b/a)\,\max(|c - b/a| - \lambda/a,\, 0),$   (24)

with $a$, $b$ and $c$ chosen to be $a = (B_k)_{jj}$, $b = \nabla_j f(x_k) + (B_k d)_j$ and $c = (x_k)_j + d_j$.
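In code, the one-dimensional update (23)-(24) is a single soft-thresholding step. The NumPy sketch below is illustrative and includes a brute-force sanity check on a small example.

import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * max(abs(u) - t, 0.0)

def coordinate_step(a, b, c, lam):
    # closed-form minimizer of 0.5*a*z^2 + b*z + lam*|c + z|, cf. (24)
    return -c + soft_threshold(c - b / a, lam / a)

# sanity check against a dense grid (illustrative)
a, b, c, lam = 2.0, 0.7, -0.3, 0.5
zs = np.linspace(-5, 5, 100001)
obj = 0.5 * a * zs**2 + b * zs + lam * np.abs(c + zs)
print(abs(zs[np.argmin(obj)] - coordinate_step(a, b, c, lam)) < 1e-3)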

As mentioned above, the special low-rank form of $B_k$ provides us an opportunity to accelerate the coordinate descent process. To see how, let us recall that $B_k = \gamma_k I - Q_k \hat{Q}_k$, where $Q_k \in \mathbb{R}^{n \times 2m}$, $\hat{Q}_k = R_k^{-1} Q_k^T \in \mathbb{R}^{2m \times n}$, and $m$ is chosen as a small constant. Clearly we do not need to explicitly store or compute $B_k$. Instead, since $B_k$ is only used through $(B_k)_{jj}$ and $(B_k d)_j$ when applying soft-thresholding steps to update each coordinate $j$, we can store only the diagonal elements of $B_k$ and compute $(B_k d)_j$ on the fly whenever it is needed. Specifically,

$(B_k d)_j = \gamma_k d_j - q_j^T \hat{Q}_k d,$   (25)

where $q_j^T$ is the $j$-th row of the matrix $Q_k$ and $\hat{q}_j$ is the $j$-th column vector of the matrix $\hat{Q}_k$. To compute $\hat{Q}_k d$, we maintain a $2m$-dimensional vector $\hat{d} := \hat{Q}_k d$; then

$(B_k d)_j = \gamma_k d_j - q_j^T \hat{d},$   (26)

which takes $O(m)$ flops, instead of $O(n)$ if we multiplied $d$ by the $j$-th row of $B_k$. Notice, though, that after taking the soft-thresholding step $d_j \leftarrow d_j + z^*$, $\hat{d}$ has to be updated each time by $\hat{d} \leftarrow \hat{d} + z^* \hat{q}_j$. This update requires little effort given that $\hat{q}_j$ is a vector with $2m$ dimensions, where $m$ is often chosen to be between 5 and 20. However, we need extra memory for caching $Q_k$ and $\hat{Q}_k$, which takes $O(mn)$ space. With the other space for storing the diagonal of $B_k$, $d$ and $\hat{d}$, altogether we need $O(mn + n + m)$ space, which can be written as $O(mn)$ when $n \gg m$.
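A sketch of the $O(m)$ evaluation (25)-(26) with the cached vector $\hat{d} = \hat{Q}_k d$; here Q is $n \times 2m$ and Qhat is $2m \times n$ (e.g., as produced from the compact representation of Section 2.1), and all names are illustrative.

import numpy as np

def bd_entry(j, gamma, Q, d, d_hat):
    # (B_k d)_j = gamma * d_j - q_j^T d_hat with d_hat = Qhat @ d, cf. (26); O(m) flops
    return gamma * d[j] - Q[j, :] @ d_hat

def take_coordinate_step(j, z, d, d_hat, Qhat):
    # after the soft-thresholding step d_j <- d_j + z, refresh the cached vector
    # so that d_hat = Qhat @ d continues to hold (only one column of Qhat is touched)
    d[j] += z
    d_hat += z * Qhat[:, j]

# consistency check against the dense product (illustrative)
rng = np.random.default_rng(1)
n, m = 6, 2
Q = rng.standard_normal((n, 2 * m)); Qhat = rng.standard_normal((2 * m, n))
gamma, d = 1.3, rng.standard_normal(n)
d_hat = Qhat @ d
B = gamma * np.eye(n) - Q @ Qhat
print(np.allclose(B @ d, [bd_entry(j, gamma, Q, d, d_hat) for j in range(n)]))

The diagonal entries needed in (24), $(B_k)_{jj} = \gamma_k - q_j^T \hat{q}_j$, can likewise be computed in $O(m)$ time per coordinate at the start of each outer iteration.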

4 Experiments

4.1 Sparse Logistic Regression

The objective function of sparse logistic regression is given by

$F(x) = \frac{1}{N} \sum_{i=1}^{N} \log\!\left(1 + e^{-b_i\, x^T a_i}\right) + \lambda \|x\|_1,$

where the first term is the average logistic loss function and $\{(a_i, b_i)\}_{i=1}^{N}$, $b_i \in \{-1, +1\}$, is the training set. The number of instances in the training set and the number of features are denoted by $N$ and $n$, respectively. Note that the evaluation of $f$ requires $O(Nn)$ flops and computing the Hessian requires $O(Nn^2)$ flops. Hence, we choose training sets for our experiment such that $N$ and $n$ are large enough to test the scalability of the algorithms and yet small enough to be able to run on a workstation.
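For reference, a direct NumPy sketch of this objective and of the gradient of its smooth part (labels $b_i \in \{-1,+1\}$; the data matrix has the instances $a_i$ as rows; names are illustrative):

import numpy as np

def logreg_objective(A, b, x, lam):
    # F(x) = (1/N) * sum_i log(1 + exp(-b_i * a_i^T x)) + lam * ||x||_1; O(N*n) flops
    margins = b * (A @ x)
    return np.mean(np.logaddexp(0.0, -margins)) + lam * np.sum(np.abs(x))

def logreg_gradient(A, b, x):
    # gradient of the smooth part: -(1/N) * A^T (b * sigma(-margins))
    margins = b * (A @ x)
    s = np.exp(-np.logaddexp(0.0, margins))   # sigma(-margins), computed stably
    return -(A.T @ (b * s)) / A.shape[0]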

We test the algorithms on both sparse and dense data sets. The statistics of the data sets used in the experiments are summarized in Table 1. In particular, one data set, RCV1 Lewis et al. (2004), is a text categorization test collection made available from Reuters Corpus Volume I (RCV1), an archive of over 800,000 manually categorized newswire stories. Each RCV1 document used in our experiment has been tokenized, stopworded, stemmed and represented, in its final form, by 47,236 weighted features. The other data set is GISETTE, originally a handwritten digit recognition problem from the NIPS 2003 feature selection challenge. Its feature set of size 5000 was constructed in order to discriminate between two confusable handwritten digits: the four and the nine.

We compare LHAC with the first-order method FISTA Beck and Teboulle (2009), and with GLMNET Friedman et al. (2010); Yuan et al. (2011), which uses Hessian information. GLMNET in particular, originally proposed by Friedman et al. (2010) and later improved specifically for sparse logistic regression by Yuan et al. (2011), has been shown by Yuan et al. (2010) and Yuan et al. (2011) to be the state of the art for sparse logistic regression (their experiments include algorithms such as OWL-QN Andrew and Gao (2007), TRON Lin and Moré (1999), SCD Shalev-Shwartz and Tewari (2009), BBR Genkin et al. (2007), etc.). For a fair comparison, we implemented all three algorithms in MATLAB, and all the experiments were executed on AMD Opteron 2.0 GHz machines with 32G RAM and Linux OS.

We have generated eight training sets, with different numbers of training instances, from the two data sets RCV1 and GISETTE, four for each one. For RCV1, the training size increases from 500 to 2500; for GISETTE, it ranges from 500 to 5000. In all the experiments, we terminate the algorithm when the current objective subgradient is detected to be less than the precision $\epsilon$ times the initial subgradient. For each training set, we solve the problem twice with each algorithm, once with a loose tolerance $\epsilon$, to test an algorithm's ability to quickly obtain a useful solution, and once with a tight tolerance $\epsilon$, to see whether an algorithm can achieve an accurate solution. We report the runtime of each algorithm for all 16 experiments in Table 1. We also illustrate the results in Figure 1.

Figure 1: Convergence plots in 1(a) and 1(b) (y-axes on log scale) and scalability plots in 1(c) and 1(d). (a) GISETTE, dense, #instances = 2000. (b) RCV1, sparse, #instances = 2500. (c) GISETTE, dense, #instances from 500 to 5000. (d) RCV1, sparse, #instances from 500 to 2500.

As can be seen from the results, the advantage of LHAC lies in the fact that it combines the benefits of first-order and second-order methods: it iterates fast, with a low per-iteration cost, and it converges fast like other quasi-Newton methods, by taking advantage of the objective's curvature information. For example, let us look at how FISTA compares with GLMNET in Table 1, which provides a good case study of the well-known trade-offs between first-order and second-order methods. In particular, it can be seen that in most cases when $\epsilon$ is set to the loose tolerance, FISTA takes much less time to terminate than GLMNET. However, FISTA falls short when high precision is demanded, i.e., when $\epsilon$ is set to the tight tolerance, in which case GLMNET almost always terminates faster than FISTA. Notably, LHAC performs well in both cases. In fact, it is overwhelmingly faster than the other two in all but one experiment (the runtime difference between LHAC and GLMNET is close when the training size is sufficiently small), which brings us to another interesting aspect of LHAC: the larger the size of the training set, the larger the margin between LHAC and GLMNET. We illustrate this observation in Figures 1(c) and 1(d), where the runtime of each algorithm is plotted against the training size. Note that the runtime of LHAC scales almost linearly with the problem size (with a much flatter rate than FISTA), while that of GLMNET increases nonlinearly. Figures 1(a) and 1(b) plot the change of the objective subgradient against elapsed CPU time in one experiment. Again, it can be observed that LHAC iterates as efficiently as FISTA in the beginning, and while FISTA gradually slows down near optimality, LHAC continues to progress like GLMNET until reaching the optimality tolerance $\epsilon$.



cputime (in seconds)
fista      glmnet     lhac
4.54       6.39       2.68
358.72     12.04      8.09
14.28      6.18       3.53
572.41     18.13      14.13
15.28      97.87      6.13
980.72     188.48     33.05
17.71      170.13     9.95
1212.92    351.60     52.00
52.91      10.93      3.23
1009.66    35.95      54.90
85.41      269.26     35.02
1686.69    1195.48    229.67
101.52     507.65     40.17
2241.95    2021.40    332.83
102.78     1758.98    112.11
2661.95    5935.72    752.87
Table 1: CPU time comparisons on the real-world classification data sets RCV1 and GISETTE. Each row corresponds to one training set and one optimization tolerance on the objective subgradient. The results show that LHAC is significantly faster than the other methods.

4.2 Sparse Inverse Covariance Matrix Estimation

In this section we compare our algorithm LHAC with QUIC by Hsieh et al. (2011), a specialized solver that has been shown by Olsen et al. (2012) and Hsieh et al. (2011) to be the state of the art for the sparse inverse covariance selection problem defined below:

$\min_{X \succ 0} \ F(X) = -\log\det X + \operatorname{tr}(SX) + \lambda \sum_{i,j} |X_{ij}|,$   (27)

where $S$ is the sample covariance matrix and the optimization variable $X$ is a matrix that is required to be positive definite.
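A direct NumPy sketch of evaluating the objective (27); it is illustrative only and returns $+\infty$ outside the positive definite domain, which a line search can use to reject a step.

import numpy as np

def sparse_invcov_objective(X, S, lam):
    # F(X) = -log det(X) + trace(S X) + lam * sum_ij |X_ij|, for X positive definite
    sign, logdet = np.linalg.slogdet(X)
    if sign <= 0:
        return np.inf
    return -logdet + np.trace(S @ X) + lam * np.sum(np.abs(X))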


Figure 2: Comparison of the two strategies for choosing the working set, the one used by QUIC and the one used by LHAC, on the ER data set. Note that the working set maintained by LHAC approaches the optimal size from below, while that of QUIC approaches it from above.

For this experiment, we are interested in comparing the total complexity of the two algorithms. CPU time, as often used, is not a reasonable complexity measure in this case, because QUIC is implemented in C++ and LHAC in MATLAB. Instead, we count the number of flops required by each algorithm to solve problem (27). Here we note that the work required by both algorithms consists mainly of two parts: solving the subproblems, and computing Cholesky factorizations of the iterates during the line search. The time either algorithm spends on the first part, solving the subproblems, generally accounts for most of the total elapsed time, as observed in the experiments. Hence we focus our comparisons on the first part of the work, the one required by applying coordinate descent to solving the subproblems, and we note that although LHAC generally requires more work in the line search than QUIC (due to more outer iterations), this adds little to the total work for the reasons stated above.



Data set       p      Complexity
                      quic       lhac
Lymph          587    47         13
                      105        45
ER             692    218        78
                      353        217
Arabidopsis    834    559        329
                      1001       804
Leukemia       1255   2101       574
                      3028       1326
Hereditary     1869   16558      8333
                      18519      16770
Table 2: Total complexity comparisons. p stands for the dimension of the data set; for each data set, the two rows correspond to the two optimization tolerances on the objective subgradient.

Now let us describe how we compute the total complexity. Let us use $N_o$ to denote the number of outer iterations and $N_i$ to denote the number of inner coordinate descent steps per outer iteration. Let $f_{cd}$ stand for the number of flops one coordinate descent step takes. Then we define the total complexity by

$W = N_o \cdot N_i \cdot f_{cd}.$   (28)

In theory, the two algorithms have similar $N_i$ because they both apply coordinate descent to a lasso subproblem, but different $N_o$ and $f_{cd}$. In particular, $N_o$ of QUIC is smaller than that of LHAC, whereas $f_{cd}$ of LHAC is smaller than that of QUIC. The reason is that QUIC uses the actual Hessian matrix of the smooth part rather than the low-rank Hessian approximation as in LHAC, which results in a different convergence rate (hence a different $N_o$) and also a different complexity $f_{cd}$ of computing the coordinate descent step (24). As we discussed earlier, the special structure of the low-rank matrix enables LHAC to accelerate that step to $O(m)$ flops, with $m$ a constant (chosen as 10 for this experiment), whereas in the case of QUIC one coordinate step has a problem-dependent complexity that grows with the dimension of $X$; on the other hand, QUIC achieves quadratic convergence close to optimality. The above observations make it interesting to see in practice how the total complexity $W$, defined in (28), compares for the two algorithms, since it is not obvious which one is better through theoretical analysis alone.

The results are presented in Table 2 and in Figure 2, where we compare our active-set strategy with the one used by QUIC. We compute $f_{cd}$ by counting the number of flops in the coordinate descent step of each algorithm. For QUIC, we use the authors' C++ implementation Hsieh et al. (2011) and add a counter directly in their code; for LHAC, we use our own MATLAB implementation. We report the results on real-world data from gene expression networks preprocessed by Li and Toh (2010). We set the regularization parameter $\lambda$ for all the experiments as suggested in Li and Toh (2010). Similarly to the sparse logistic regression experiments, we solve each problem twice, with a loose and a tight precision $\epsilon$. Note that in all experiments QUIC consistently requires more flops to solve the problem than LHAC does. In one case where the low precision is used, it takes QUIC nearly three times more flops; even in the regime QUIC is best at, where high precision is demanded, it can take QUIC 1.5 times more flops to achieve the same precision as LHAC.

5 Conclusions

We have presented a general algorithm, LHAC, for efficiently using second-order information in training large-scale l1-regularized convex models. We tested the algorithm on two instances of sparse logistic regression and five instances of sparse inverse covariance selection, and found that the algorithm is faster (sometimes overwhelmingly so) than other specialized solvers on both models. The efficiency gains are due to two factors: the exploitation of the special structure present in the low-rank Hessian approximation matrix, and the greedy active-set strategy, which correctly identifies the non-zero features of the optimal solution.

References

  • Andrew and Gao [2007] G. Andrew and J. Gao. Scalable training of l1-regularized log-linear models. ICML, pages 33–40, 2007.
  • Beck and Teboulle [2009] A. Beck and M. Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
  • Bixby et al. [1992] R. E. Bixby, J. W. Gregory, I. J. Lustig, R. E. Marsten, and D. F. Shanno. Very large-scale linear programming: A case study in combining interior point and simplex methods. Operations Research, 40(5):885–897, 1992.
  • Byrd et al. [2012] R. Byrd, G. Chin, J. Nocedal, and F. Oztoprak. A family of second-order methods for convex l1-regularized optimization. tech. rep., 2012.
  • Donoho [1995] D. Donoho. De-noising by soft-thresholding. IEEE Transactions on Information Theory, 41(3):613–627, May 1995.
  • Friedman et al. [2008] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics Oxford England, 9(3):432–41, 2008.
  • Friedman et al. [2010] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33:1–22, 2010.
  • Genkin et al. [2007] A. Genkin, D. D. Lewis, and D. Madigan. Large-scale bayesian logistic regression for text categorization. Technometrics, 49(3):291–304, 2007.
  • Hsieh et al. [2011] C.-J. Hsieh, M. Sustik, I. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. NIPS, 2011.
  • Lewis et al. [2004] D. D. Lewis, Y. Yang, T. Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. JMLR, 5:361–397, 2004.
  • Li and Toh [2010] L. Li and K.-C. Toh. An inexact interior point method for L1-regularized sparse covariance selection. Mathematical Programming, 2:291–315, 2010.
  • Lin and Moré [1999] C.-J. Lin and J. Moré. Newton's method for large bound-constrained optimization problems. SIAM Journal on Optimization, 9(4):1100, 1999.
  • Nesterov [2007] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE report, 2007.
  • Nocedal and Wright [2006] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer, New York, NY, USA, 2nd edition, 2006.
  • Olsen et al. [2012] P. A. Olsen, F. Oztoprak, J. Nocedal, and S. J. Rennie. Newton-Like Methods for Sparse Inverse Covariance Estimation, 2012.
  • Scheinberg [2006] K. Scheinberg. An efficient implementation of an active set method for SVMs. JMLR, 7:2237–2257, 2006.
  • Scheinberg and Rish [2009] K. Scheinberg and I. Rish. SINCO - a greedy coordinate ascent method for sparse inverse covariance selection problem. tech. rep., 2009.
  • Scheinberg et al. [2010] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. NIPS, 2010.
  • Shalev-Shwartz and Tewari [2009] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1 regularized loss minimization. ICML, pages 929–936, 2009.
  • Tibshirani [1996] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B Methodological, 58(1):267–288, 1996.
  • Tseng and Yun [2009] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1-2):387–423, Aug. 2009.
  • Wright [2012] S. Wright. Accelerated block-coordinate relaxation for regularized optimization. SIAM Journal on Optimization, 22(1):159–186, 2012.
  • Wright et al. [2009] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo. Sparse reconstruction by separable approximation. Trans. Sig. Proc., 57(7):2479–2493, July 2009.
  • Yuan et al. [2010] G.-X. Yuan, K.-W. Chang, C.-J. Hsieh, and C.-J. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. JMLR, 11:3183–3234, 2010.
  • Yuan et al. [2011] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for l1-regularized logistic regression and support vector machines. National Taiwan University, 2011.