# Dual Extrapolation for Sparse Generalized Linear Models

Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity inducing regularizations have proven to be useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems. For both techniques, significant variables are identified thanks to convex duality arguments. In this paper, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter certificates of optimality, enhancing the performance of screening rules and helping to design competitive working set algorithms.

## 1 Introduction

Since the introduction of the Lasso (Tibshirani, 1996), similar to Basis Pursuit denoising (Chen and Donoho, 1995) in signal processing, sparsity-inducing penalties have had a tremendous impact on machine learning (Bach et al., 2012). They have been applied to a variety of statistical estimators, both for regression and classification tasks: sparse logistic regression (Koh et al., 2007), Group Lasso (Yuan and Lin, 2006), Sparse Group Lasso (Simon et al., 2013; Ndiaye et al., 2016), multitask Lasso (Obozinski et al., 2010), Square-Root Lasso (Belloni et al., 2011). All of these estimators fall under the framework of Generalized Linear Models (McCullagh and Nelder, 1989), where the output is assumed to follow an exponential family distribution whose mean is a linear combination of the input variables. The key property of ℓ1 regularization is that it allows to jointly perform feature selection and prediction, which is particularly useful in high dimensional settings. Indeed, it can drastically reduce the number of variables needed for prediction, thus improving model interpretability and computation time for prediction. Amongst the algorithms proposed to solve these problems, coordinate descent¹ (Tseng, 2001; Friedman et al., 2007) is the most popular in machine learning scenarios (Fan et al., 2008; Friedman et al., 2010; Richtárik and Takáč, 2014; Fercoq and Richtárik, 2015; Perekrestenko et al., 2017; Karimireddy et al., 2018). It consists in updating the vector of parameters one coefficient at a time, looping over all the predictors until convergence.

¹Throughout the paper, this means cyclic and proximal coordinate descent unless specified otherwise.
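To make the update rule concrete, here is a minimal numpy sketch of one epoch of cyclic coordinate descent for the Lasso (our illustration, not the authors' implementation; the names `soft_threshold` and `cd_epoch` are ours):

```python
import numpy as np

def soft_threshold(x, tau):
    """ST(x, tau) = sign(x) * max(|x| - tau, 0), applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cd_epoch(X, y, beta, lam):
    """One cyclic pass of coordinate descent on the Lasso objective
    0.5 * ||y - X beta||^2 + lam * ||beta||_1."""
    p = X.shape[1]
    for j in range(p):
        xj = X[:, j]
        nrm2 = xj @ xj
        if nrm2 == 0.0:
            continue
        # current coefficient plus a partial-residual correction
        h = beta[j] + xj @ (y - X @ beta) / nrm2
        beta[j] = soft_threshold(h, lam / nrm2)
    return beta
```

Each coordinate update is a scalar soft-thresholding, which is what makes the per-iteration cost low when combined with screening or working sets.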

Since only a fraction of the coefficients are non-zero in the optimal parameter vector, a recurring idea to speed up solvers is to limit the size of the optimization problem by ignoring features which are not included in the solution. To do so, two approaches can be distinguished:

• screening rules, introduced by El Ghaoui et al. (2012) and later developed by Ogawa et al. (2013); Wang et al. (2012); Xiang et al. (2016); Bonnefoy et al. (2014); Fercoq et al. (2015); Ndiaye et al. (2017), progressively remove features from the problem in a backward approach,

• working sets techniques (Fan and Lv, 2008; Roth and Fischer, 2008; Kowalski et al., 2011; Tibshirani et al., 2012; Johnson and Guestrin, 2015) solve a sequence of smaller problems restricted to a growing number of features.

One common idea between the current state-of-the-art methods for screening (Gap Safe rules, Fercoq et al. 2015; Ndiaye et al. 2017) and working sets (Johnson and Guestrin 2015, 2018) is to rely heavily on a dual point to identify useful features. The quality of this dual point is critical here, as it has a direct impact on performance. However, although a lot of attention has been devoted to creating sequences of primal iterates that converge fast to the optimum (Fercoq and Richtárik, 2015), the construction of dual iterates has not been scrutinized, and the standard approach to obtain dual iterates from primal ones (Mairal, 2010), although converging, is crude.

In this paper, we propose a principled way to construct a sequence of dual points that converges faster than the standard approach proposed by Mairal (2010). Based on an extrapolation procedure inspired by Scieur et al. (2016), it comes with no significant extra computational cost, while retaining the convergence guarantees of the standard approach. This construction was first introduced for non-smooth optimization by Massias et al. (2018), for the Lasso case only; here we generalize it to any Generalized Linear Model (GLM). More precisely, we properly define, quantify and prove the asymptotic Vector AutoRegressive (VAR) behavior of dual iterates for sparse GLMs solved with proximal gradient descent or cyclic coordinate descent. The resulting new construction:

• provides a tighter control of optimality through duality gap evaluation,

• improves the performance of Gap safe rules,

• improves the working set policy proposed in Massias et al. (2017), thanks to better feature identification,

• is easy to implement and combine with other solvers.

The article proceeds as follows. We introduce the framework of ℓ1-regularized GLMs and duality in Section 2. As a seminal example, we present our results on dual iterates regularity and dual extrapolation for the Lasso in Section 3. We generalize them to a variety of problems in Sections 4 and 5. Results in Section 6 demonstrate a systematic improvement in computing time when dual extrapolation is used together with Gap Safe rules or working set policies.

##### Notation

For any integer d, [d] denotes the set {1, …, d}. The design matrix X ∈ ℝ^{n×p} is composed of n observations x_i ∈ ℝ^p stored row-wise, and its j-th column is x_j ∈ ℝ^n; the vector y ∈ ℝ^n (resp. y ∈ {−1, 1}^n) is the response vector for regression (resp. binary classification). The support of β ∈ ℝ^p is S(β) = {j ∈ [p] : β_j ≠ 0}, of cardinality s. For W ⊂ [p], β_W and X_W are β and X restricted to the features in W. As much as possible, exponents between parentheses (e.g., β^{(t)}) denote iterates and subscripts (e.g., β_j) denote vector entries or matrix columns. The sign function is sign(x) = x/|x|, with the convention sign(0) = 0. The sigmoid function is σ(z) = 1/(1 + e^{−z}). The soft-thresholding of x at level ν ≥ 0 is ST(x, ν) = sign(x) max(|x| − ν, 0). Applied to vectors, sign, ST(·, ν) and σ act element-wise. The element-wise product between vectors of the same length is denoted ⊙. The vector of size n whose entries are all equal to 0 (resp. 1) is denoted 0_n (resp. 1_n). On square matrices, ‖·‖ is the spectral norm (which coincides with the standard Euclidean norm on vectors); ‖·‖_∞ is the ℓ∞-norm. For a symmetric positive definite matrix H, ⟨·, ·⟩_H is the H-weighted inner product, whose associated norm is denoted ‖·‖_H. We extend the small-o notation to vector-valued functions in the following way: for g : ℝ^d → ℝ^n and h : ℝ^d → ℝ^q, g = o(h) if and only if ‖g‖/‖h‖ tends to 0 when h tends to 0. For a convex and proper function f : ℝ^n → ℝ ∪ {+∞}, its Fenchel–Legendre conjugate f* is defined by f*(u) = sup_x ⟨u, x⟩ − f(x), and its subdifferential at x is ∂f(x) = {u ∈ ℝ^n : ∀y, f(y) ≥ f(x) + ⟨u, y − x⟩}.

## 2 GLMs, Vector AutoRegressive sequences and sign identification

We consider the following optimization problem: [Sparse Generalized Linear Model]

$$\hat\beta \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \underbrace{\sum_{i=1}^n f_i\left(\beta^\top x_i\right) + \lambda \lVert\beta\rVert_1}_{\mathcal{P}(\beta)}\,,$$

where all the f_i are convex², differentiable functions with 1/γ-Lipschitz gradients. The parameter λ is a non-negative scalar, controlling the trade-off between data fidelity and regularization.

²by that we mean closed, convex and proper, following the framework in Bauschke and Combettes (2011).

Two popular instances of Section 2 are the Lasso (f_i(t) = ½(y_i − t)², γ = 1) and Sparse Logistic regression (f_i(t) = log(1 + exp(−y_i t)), γ = 4).

Note that we could use a more complex regularizer in Section 2, to handle group penalties for example. For the sake of clarity we rather remain specific, and generalize to other penalties when needed in Section 4.2.

[Duality for sparse GLMs] A dual formulation of Section 2 reads:

$$\hat\theta = \operatorname*{arg\,max}_{\theta \in \Delta_X} \underbrace{\left(-\sum_{i=1}^n f_i^*\left(-\lambda\theta_i\right)\right)}_{\mathcal{D}(\theta)}, \tag{1}$$

where Δ_X = {θ ∈ ℝ^n : ‖X^⊤θ‖_∞ ≤ 1}. The dual solution θ̂ is unique because the f_i*'s are γ-strongly convex (Hiriart-Urruty and Lemaréchal, 1993, Thm 4.2.1 p. 82), and the KKT conditions read:

$$\forall i \in [n],\quad \hat\theta_i = -f_i'\left(\hat\beta^\top x_i\right)/\lambda \quad \text{(link equation)} \tag{2}$$

$$\forall j \in [p],\quad x_j^\top \hat\theta \in \partial\,\lvert\cdot\rvert\left(\hat\beta_j\right) \quad \text{(subdifferential inclusion)} \tag{3}$$

For any (β, θ) ∈ ℝ^p × Δ_X, one has 𝒟(θ) ≤ 𝒟(θ̂) = 𝒫(β̂) ≤ 𝒫(β); the difference 𝒫(β) − 𝒟(θ) is called the duality gap. The duality gap can thus be used as an upper bound for the sub-optimality of a primal vector β: for any ϵ > 0, any β ∈ ℝ^p, and any feasible θ ∈ Δ_X:

$$\mathcal{P}(\beta) - \mathcal{D}(\theta) \le \epsilon \implies \mathcal{P}(\beta) - \mathcal{P}(\hat\beta) \le \epsilon. \tag{4}$$

These results hold because Slater's condition is met: Section 2 is unconstrained and the objective function has domain ℝ^p (Boyd and Vandenberghe, 2004, §5.2.3), therefore strong duality holds. The latter shows that even though 𝒫(β̂) is unknown in practice and the sub-optimality gap cannot be evaluated, creating a dual feasible point θ ∈ Δ_X allows to construct an upper bound which can be used as a tractable stopping criterion.
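As an illustration of this stopping criterion, the sketch below (ours, for the Lasso case f_i(t) = ½(y_i − t)²) computes the duality gap using the rescaled residuals as the dual feasible point; `lasso_gap` is a name we introduce:

```python
import numpy as np

def lasso_gap(X, y, beta, lam):
    """Duality gap P(beta) - D(theta) for the Lasso, with theta built from
    the residuals, rescaled so that it is feasible for the dual (1)."""
    r = y - X @ beta
    theta = r / max(lam, np.linalg.norm(X.T @ r, ord=np.inf))  # feasible point
    p_obj = 0.5 * r @ r + lam * np.sum(np.abs(beta))
    # Lasso dual objective: D(theta) = 0.5 ||y||^2 - 0.5 ||y - lam * theta||^2
    d_obj = 0.5 * y @ y - 0.5 * np.sum((y - lam * theta) ** 2)
    return p_obj - d_obj
```

By weak duality the returned value is always nonnegative, and a solver can safely stop as soon as it drops below a target tolerance ϵ.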

In high dimension, solvers such as proximal gradient descent (PG) and coordinate descent (CD) are slowed down due to the large number of features. However, by design of the ℓ1 penalty, β̂ is expected to be sparse, especially for large values of λ. Thus, a key idea to speed up these solvers is to identify the support of β̂ so that features outside of it can be safely ignored. Doing this leads to a smaller problem that is faster to solve. Removing features when it is guaranteed that they are not in the support of the solution is at the heart of the so-called Gap Safe screening rules (Fercoq et al., 2015; Ndiaye et al., 2017):

[Gap Safe Screening rule, (Ndiaye et al., 2017, Thm. 6)] The Gap Safe screening rule for Section 2 reads:

$$\forall j \in [p],\ \forall \beta \in \mathbb{R}^p,\ \forall \theta \in \Delta_X,\quad \left|x_j^\top\theta\right| < 1 - \lVert x_j\rVert\sqrt{\frac{2}{\gamma\lambda^2}\left(\mathcal{P}(\beta) - \mathcal{D}(\theta)\right)} \implies \hat\beta_j = 0. \tag{5}$$

Therefore, while running an iterative solver and computing the duality gap at iteration t, the criterion (5) can be tested for all features j ∈ [p], and the features guaranteed to be inactive at the optimum can be ignored.
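For the Lasso (γ = 1), rule (5) takes a few lines; the sketch below (our illustration, with a helper name of our choosing) returns the indices that can be safely discarded given the current primal vector:

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """Indices j with |x_j^T theta| < 1 - ||x_j|| * sqrt(2 * gap) / lam,
    guaranteed inactive at the optimum (Lasso instance of rule (5), gamma = 1)."""
    r = y - X @ beta
    theta = r / max(lam, np.linalg.norm(X.T @ r, ord=np.inf))  # dual feasible
    gap = (0.5 * r @ r + lam * np.sum(np.abs(beta))
           - (0.5 * y @ y - 0.5 * np.sum((y - lam * theta) ** 2)))
    radius = np.sqrt(2 * max(gap, 0.0)) / lam
    norms = np.linalg.norm(X, axis=0)  # ||x_j|| for each column
    return np.flatnonzero(np.abs(X.T @ theta) < 1.0 - norms * radius)
```

The tighter the dual point θ (the smaller the gap), the smaller the radius and the more features the rule eliminates, which is precisely why better dual points matter.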

Equations 5 and 4 do not require a specific choice of θ, provided it is in Δ_X. The choice is up to the user, and so far it has not attracted much attention in the literature. Thanks to the link equation (2), a natural way to construct a dual feasible point θ^{(t)} at iteration t, when only a primal vector β^{(t)} is available, is:

$$\theta^{(t)}_{\text{res}} \triangleq -\nabla F\left(X\beta^{(t)}\right) \Big/ \max\left(\lambda,\ \left\lVert X^\top \nabla F\left(X\beta^{(t)}\right)\right\rVert_\infty\right), \tag{6}$$

where F(u) = Σ_{i=1}^n f_i(u_i), so that 𝒫(β) = F(Xβ) + λ‖β‖_1. This was coined residuals rescaling (Mairal, 2010), following the terminology used for the Lasso case, where −∇F(Xβ^{(t)}) is equal to the residuals, y − Xβ^{(t)}.

To improve the control of sub-optimality, and to better identify useful features, the aim of our proposed dual extrapolation is to obtain a better dual point (closer to the optimum θ̂). The idea is to do it at a low computational cost by exploiting the structure of the sequence of dual iterates (θ^{(t)}); we explain what is meant by "structure", and how to exploit it, in the following: [Vector AutoRegressive sequence] We say that (r^{(t)})_{t∈ℕ} is a Vector AutoRegressive (VAR) sequence (of order 1) if there exist A ∈ ℝ^{n×n} and b ∈ ℝ^n such that for t ∈ ℕ:

$$r^{(t+1)} = A r^{(t)} + b. \tag{7}$$

We also say that the sequence (r^{(t)}), converging to r̂, is an asymptotic VAR sequence if there exist A ∈ ℝ^{n×n} and b ∈ ℝ^n such that for t ∈ ℕ:

$$r^{(t+1)} - A r^{(t)} - b = o\left(r^{(t)} - \hat r\right). \tag{8}$$

[Extrapolation for VAR sequences (Scieur, 2018, Thm 3.2.2)] Let (r^{(t)}) be a VAR sequence in ℝ^n, satisfying r^{(t+1)} = A r^{(t)} + b, with A ∈ ℝ^{n×n} a symmetric positive definite matrix such that ‖A‖ < 1, and b ∈ ℝ^n. Assume that for K < n, the family {r^{(t−K)} − r^{(t−K+1)}, …, r^{(t−1)} − r^{(t)}} is linearly independent, and define

$$U^{(t)} \triangleq \left[r^{(t-K)} - r^{(t-K+1)}, \dots, r^{(t-1)} - r^{(t)}\right] \in \mathbb{R}^{n \times K}, \tag{9}$$

$$(c_1, \dots, c_K)^\top \triangleq \frac{\left(U^{(t)\top} U^{(t)}\right)^{-1} \mathbf{1}_K}{\mathbf{1}_K^\top \left(U^{(t)\top} U^{(t)}\right)^{-1} \mathbf{1}_K} \in \mathbb{R}^K, \tag{10}$$

$$r_{\text{extr}} \triangleq \sum_{k=1}^K c_k\, r^{(t-K-1+k)} \in \mathbb{R}^n. \tag{11}$$

Then, r_extr satisfies

$$\left\lVert A\, r_{\text{extr}} + b - r_{\text{extr}} \right\rVert \le \mathcal{O}\left(\rho^K\right), \tag{12}$$

where ρ ∈ (0, 1) is a constant depending on the spectrum of A. The justification for this extrapolation procedure is the following: since ‖A‖ < 1, (r^{(t)}) converges, say to r̂. For t ∈ ℕ, we have r^{(t+1)} − r̂ = A(r^{(t)} − r̂). Let a_0, …, a_n be the coefficients of A's characteristic polynomial. By the Cayley–Hamilton theorem, Σ_{k=0}^n a_k A^k = 0. Given that ‖A‖ < 1, 1 is not an eigenvalue of A, so Σ_{k=0}^n a_k ≠ 0 and we can normalize these coefficients to have Σ_{k=0}^n a_k = 1. For t ≥ n, we have:

$$\sum_{k=0}^n a_k \left(r^{(t-n+k)} - \hat r\right) = \left(\sum_{k=0}^n a_k A^k\right)\left(r^{(t-n)} - \hat r\right) = 0, \tag{13}$$

$$\text{and so}\qquad \sum_{k=0}^n a_k\, r^{(t-n+k)} = \sum_{k=0}^n a_k\, \hat r = \hat r. \tag{14}$$

Hence, r̂ is an affine combination of the n + 1 consecutive iterates r^{(t−n)}, …, r^{(t)}.

Therefore, it is natural to seek to approximate r̂ as an affine combination of the last n + 1 iterates. Using n + 1 iterates might be costly for large n, so one might rather consider only a smaller number K + 1 of them, and find c = (c_1, …, c_K)^⊤ ∈ ℝ^K such that Σ_{k=1}^K c_k r^{(t−K−1+k)} approximates r̂. Since r̂ is a fixed point of r ↦ Ar + b, Σ_{k=1}^K c_k r^{(t−K−1+k)} should be one too. Under the normalizing condition Σ_{k=1}^K c_k = 1, this means that

$$\sum_{k=1}^K c_k r^{(t-K-1+k)} - A\sum_{k=1}^K c_k r^{(t-K-1+k)} - b = \sum_{k=1}^K c_k r^{(t-K-1+k)} - \sum_{k=1}^K c_k \left(r^{(t-K+k)} - b\right) - b = \sum_{k=1}^K c_k \left(r^{(t-K-1+k)} - r^{(t-K+k)}\right) \tag{15}$$

should be as close to 0_n as possible; this leads to solving:

$$\hat c = \operatorname*{arg\,min}_{\substack{c \in \mathbb{R}^K\\ c^\top \mathbf{1}_K = 1}} \left\lVert \sum_{k=1}^K c_k \left(r^{(t-K+k)} - r^{(t-K-1+k)}\right) \right\rVert, \tag{16}$$

whose closed-form solution is

$$\hat c = \frac{\left(U^{(t)\top} U^{(t)}\right)^{-1} \mathbf{1}_K}{\mathbf{1}_K^\top \left(U^{(t)\top} U^{(t)}\right)^{-1} \mathbf{1}_K}, \tag{17}$$

where U^{(t)} is defined in (9). In practice, the next proposition shows that when U^{(t)} does not have full column rank, it is theoretically sound to use a lower value for the number of extrapolation coefficients K: if U^{(t)⊤}U^{(t)} is not invertible, then r̂ ∈ Span(r^{(t−K)}, …, r^{(t−1)}).

###### Proof.

Let x = (x_1, …, x_K)^⊤ ∈ ℝ^K ∖ {0} be such that U^{(t)}x = 0, with x_K ≠ 0 (the proof is similar if x_K = 0 and x_{K−1} ≠ 0, etc.). Then Σ_{k=1}^K x_k (r^{(t−K+k−1)} − r^{(t−K+k)}) = 0 and, setting x_0 = 0, an Abel summation gives x_K r^{(t)} = Σ_{k=1}^K (x_k − x_{k−1}) r^{(t−K+k−1)}. Since x_K ≠ 0, r^{(t)} is an affine combination of r^{(t−K)}, …, r^{(t−1)} (the coefficients (x_k − x_{k−1})/x_K sum to 1). It follows that

$$r^{(t+1)} = A r^{(t)} + b = \frac{1}{x_K}\sum_{k=1}^K \left(x_k - x_{k-1}\right)\left(A\, r^{(t-K+k-1)} + b\right) = \frac{1}{x_K}\sum_{k=1}^K \left(x_k - x_{k-1}\right) r^{(t-K+k)} \in \operatorname{Span}\left(r^{(t-K+1)}, \dots, r^{(t)}\right) \subseteq \operatorname{Span}\left(r^{(t-K)}, \dots, r^{(t-1)}\right), \tag{18}$$

and subsequently r^{(t')} ∈ Span(r^{(t−K)}, …, r^{(t−1)}) for all t' ≥ t. By going to the limit, r̂ ∈ Span(r^{(t−K)}, …, r^{(t−1)}). ∎
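The extrapolation formulas (9)–(11) translate directly into code. Below is a minimal numpy sketch (ours; `extrapolate` is a name we introduce) that forms U^{(t)}, solves for the normalized coefficients, and returns the extrapolated point:

```python
import numpy as np

def extrapolate(r_list):
    """VAR extrapolation, Eqs. (9)-(11): r_list holds the K + 1 most recent
    iterates r^{(t-K)}, ..., r^{(t)}, oldest first. Returns the affine
    combination sum_k c_k r^{(t-K-1+k)} of the K oldest points, with
    c = (U^T U)^{-1} 1_K / (1_K^T (U^T U)^{-1} 1_K)."""
    K = len(r_list) - 1
    U = np.column_stack([r_list[k] - r_list[k + 1] for k in range(K)])
    z = np.linalg.solve(U.T @ U, np.ones(K))  # raises LinAlgError if singular
    c = z / z.sum()                           # enforce c^T 1_K = 1
    return sum(c[k] * r_list[k] for k in range(K))
```

On a geometric VAR sequence the extrapolated point lands much closer to the fixed point than the latest iterate, which is the whole benefit of the construction.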

Finally, we will now state the results on sign identification, which implies support identification. For these results, which connect sparse GLMs to VAR sequences and extrapolation, we need to make the following assumption:

The solution of Section 2 is unique. This assumption may seem stringent: whenever p > n, the objective is not strictly convex, and several global minima may exist. However, following earlier results by Rosset et al. (2004), Tibshirani (2013) showed that when the entries of X are sampled from a continuous distribution, the assumption is satisfied almost surely. It is also worth noting that other works on support identification (Nutini et al., 2017; Sun et al., 2019; Poon et al., 2018) involve a non-degeneracy condition which boils down to this assumption in our case. Motivated by practical applications on real-life datasets, we will therefore work under this uniqueness assumption.

In the following, we extend results by Hale et al. (2008) about sign identification from proximal gradient descent to coordinate descent. [Sign identification for proximal gradient and coordinate descent] Let the uniqueness assumption hold. Let (β^{(t)}) be the sequence of iterates converging to β̂ and produced by proximal gradient descent or coordinate descent when solving Section 2 (reminded in Algorithm 1).

There exists an epoch T ∈ ℕ such that:

$$\forall t \ge T,\quad \operatorname{sign}\left(\beta^{(t)}\right) = \operatorname{sign}\left(\hat\beta\right).$$

The smallest epoch T for which this holds is when sign identification is achieved.

###### Proof.

For lighter notation in this proof, we write F(Xβ) = Σ_{i=1}^n f_i(x_i^⊤β) and, for j ∈ [p], h_j(β) = β_j − x_j^⊤∇F(Xβ)/l_j, where l_j is the Lipschitz constant associated with coordinate j (so that the coordinate descent update is β_j ← ST(h_j(β), λ/l_j)). For j ∈ [p], the subdifferential inclusion (3) reads:

$$-\frac{x_j^\top \nabla F\left(X\hat\beta\right)}{\lambda} \in \begin{cases} \{1\}, & \text{if } \hat\beta_j > 0,\\ \{-1\}, & \text{if } \hat\beta_j < 0,\\ [-1, 1], & \text{if } \hat\beta_j = 0. \end{cases} \tag{19}$$

Motivated by these conditions, the equicorrelation set introduced by Tibshirani (2013) is:

$$E \triangleq \left\{ j \in [p] : \left|x_j^\top \nabla F\left(X\hat\beta\right)\right| = \lambda \right\} = \left\{ j \in [p] : \left|x_j^\top \hat\theta\right| = 1 \right\}. \tag{20}$$

We introduce the saturation gap associated to Section 2:

$$\hat\delta \triangleq \min_{j \notin E} \frac{\lambda}{l_j}\left(1 - \frac{\left|x_j^\top \nabla F\left(X\hat\beta\right)\right|}{\lambda}\right) = \min_{j \notin E} \frac{\lambda}{l_j}\left(1 - \left|x_j^\top \hat\theta\right|\right) > 0. \tag{21}$$

As β̂ is unique, δ̂ is well-defined, and it is strictly positive by definition of E. By Equation 19, the support of any solution is included in the equicorrelation set, and we have equality for almost every λ since we assumed that the solution is unique (Tibshirani, 2013, Lemma 13) – the non-equality holding only at the kinks of the Lasso path, which we exclude from our analysis.

Because of this, we only need to show that the coefficients outside the equicorrelation set eventually vanish. The proof requires studying the primal iterates after each update (instead of after each epoch), hence we write β̃^{(s)} for the primal iterate after the s-th update of coordinate descent. This update only modifies the j-th coordinate, with j ≡ s (mod p):

$$\tilde\beta_j^{(s+1)} = \operatorname{ST}\left(h_j\left(\tilde\beta^{(s)}\right),\ \lambda/l_j\right). \tag{22}$$

Note that at optimality, for every j ∈ [p], one has:

$$\hat\beta_j = \operatorname{ST}\left(h_j\left(\hat\beta\right),\ \lambda/l_j\right). \tag{23}$$

Let us consider an update of coordinate descent such that the updated coordinate j verifies β̂_j = 0 and β̃_j^{(s+1)} ≠ 0. Then:

$$\left|\tilde\beta_j^{(s+1)} - \hat\beta_j\right| = \left|\operatorname{ST}\left(h_j\left(\tilde\beta^{(s)}\right), \lambda/l_j\right) - \operatorname{ST}\left(h_j\left(\hat\beta\right), \lambda/l_j\right)\right| \le \left|h_j\left(\tilde\beta^{(s)}\right) - h_j\left(\hat\beta\right)\right| - \left(\lambda/l_j - \left|h_j\left(\hat\beta\right)\right|\right), \tag{24}$$

where we used the following inequality (Hale et al., 2008, Lemma 3.2):

$$\operatorname{ST}(x, \nu) \ne 0,\ \operatorname{ST}(y, \nu) = 0 \implies \left|\operatorname{ST}(x, \nu) - \operatorname{ST}(y, \nu)\right| \le |x - y| - \left(\nu - |y|\right). \tag{25}$$

Now notice that, by definition of the saturation gap (21), and since j ∉ E:

$$\frac{\lambda}{l_j}\left(1 - \frac{\left|x_j^\top\nabla F\left(X\hat\beta\right)\right|}{\lambda}\right) \ge \hat\delta\,, \quad \text{that is,} \quad \frac{\lambda}{l_j} - \left|h_j\left(\hat\beta\right)\right| \ge \hat\delta \quad \left(\text{using } \hat\beta_j = 0\right). \tag{26}$$

Combining Equations 24 and 26 yields

$$\left|\tilde\beta_j^{(s+1)} - \hat\beta_j\right| \le \left|h_j\left(\tilde\beta^{(s)}\right) - h_j\left(\hat\beta\right)\right| - \hat\delta. \tag{27}$$

This can only be true for a finite number of updates: otherwise, since h_j(β̃^{(s)}) converges to h_j(β̂), taking the limit would give 0 ≤ −δ̂, contradicting δ̂ > 0 (Eq. (21)). Therefore, after a finite number of updates, β̃_j^{(s)} = 0 = β̂_j for every j ∉ E. For j in the support of β̂, β̂_j ≠ 0, so β̃_j^{(s)} eventually has the same sign since it converges to β̂_j.

The proof for proximal gradient descent is a result of Hale et al. (2008, Theorem 4.5), who also provide a bound on the number of iterations needed for sign identification. ∎

## 3 A seminal example: the Lasso case

Dual extrapolation was originally proposed for the Lasso in the Celer algorithm (Massias et al., 2018). As the VAR model holds exactly in this case, we first devote special attention to it. We will make use of asymptotic VAR models to generalize to all sparse GLMs in Section 4.

Using the identification property of coordinate descent and proximal gradient descent, we can formalize the VAR behavior of dual iterates:

When β^{(t)} is obtained by cyclic coordinate descent or proximal gradient descent applied to the Lasso problem, (Xβ^{(t)})_{t∈ℕ} is a VAR sequence after sign identification.

###### Proof.

First, let us recall that in the Lasso case, ∇F(Xβ) = Xβ − y, so the residuals are r^{(t)} = y − Xβ^{(t)}. Let t denote an epoch after sign identification has been achieved. The respective updates of proximal gradient descent and coordinate descent are reminded in Algorithm 1.

##### Coordinate descent:

Let j_1 < … < j_S denote the indices of the support of β̂, in increasing order. As the sign is identified, coefficients outside the support are 0 and remain 0. We decompose the t-th epoch of coordinate descent into S individual coordinate updates: let β̃^{(0)} = β^{(t)} denote the initialization (the beginning of the epoch), β̃^{(1)} the iterate after coordinate j_1 has been updated, etc., up to β̃^{(S)} = β^{(t+1)} after coordinate j_S has been updated, at the end of the epoch.

Let s ∈ [S]; then β̃^{(s)} and β̃^{(s−1)} are equal everywhere, except at coordinate j_s:

$$\tilde\beta_{j_s}^{(s)} = \operatorname{ST}\left(\tilde\beta_{j_s}^{(s-1)} + \frac{x_{j_s}^\top\left(y - X\tilde\beta^{(s-1)}\right)}{\lVert x_{j_s}\rVert^2},\ \frac{\lambda}{\lVert x_{j_s}\rVert^2}\right) = \tilde\beta_{j_s}^{(s-1)} + \frac{x_{j_s}^\top\left(y - X\tilde\beta^{(s-1)}\right) - \lambda\operatorname{sign}\left(\hat\beta_{j_s}\right)}{\lVert x_{j_s}\rVert^2}, \tag{28}$$

where we have used sign identification: sign(β̃_{j_s}^{(s)}) = sign(β̂_{j_s}). Therefore

$$X\tilde\beta^{(s)} - X\tilde\beta^{(s-1)} = x_{j_s}\left(\tilde\beta_{j_s}^{(s)} - \tilde\beta_{j_s}^{(s-1)}\right) = \frac{1}{\lVert x_{j_s}\rVert^2}\, x_{j_s} x_{j_s}^\top \left(y - X\tilde\beta^{(s-1)}\right) - \frac{\lambda\operatorname{sign}\left(\hat\beta_{j_s}\right)}{\lVert x_{j_s}\rVert^2}\, x_{j_s}. \tag{29}$$

This leads to the following linear recurrence:

$$X\tilde\beta^{(s)} = \underbrace{\left(\operatorname{Id}_n - \frac{x_{j_s} x_{j_s}^\top}{\lVert x_{j_s}\rVert^2}\right)}_{A_s \,\in\, \mathbb{R}^{n\times n}} X\tilde\beta^{(s-1)} + \underbrace{\frac{x_{j_s}^\top y - \lambda\operatorname{sign}\left(\hat\beta_{j_s}\right)}{\lVert x_{j_s}\rVert^2}\, x_{j_s}}_{b_s \,\in\, \mathbb{R}^n}. \tag{30}$$

Hence, one gets recursively

$$X\tilde\beta^{(S)} = A_S X\tilde\beta^{(S-1)} + b_S = A_S A_{S-1} X\tilde\beta^{(S-2)} + A_S b_{S-1} + b_S = \underbrace{A_S \cdots A_1}_{A} X\tilde\beta^{(0)} + \underbrace{A_S \cdots A_2\, b_1 + \dots + A_S\, b_{S-1} + b_S}_{b}. \tag{31}$$

We can thus write the following VAR equations for Xβ^{(t)} at the end of each coordinate descent epoch:

$$X\beta^{(t+1)} = A\, X\beta^{(t)} + b, \tag{32}$$

$$X\beta^{(t+1)} - X\hat\beta = A\left(X\beta^{(t)} - X\hat\beta\right). \tag{33}$$
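The single-update recursion (30) can be checked numerically: once the sign of a coefficient is identified, soft-thresholding acts as an affine map, so one CD update acts on Xβ as an affine map. A small sketch (ours; the helper name is hypothetical):

```python
import numpy as np

def cd_update_var_form(X, y, beta, j, lam):
    """One Lasso CD update of coordinate j, assuming the sign of beta_j is
    identified (nonzero, matching the optimal sign). Returns the updated
    beta and the pair (A_s, b_s) of Eq. (30), so that
    X @ beta_new == A_s @ (X @ beta_old) + b_s."""
    xj = X[:, j]
    nrm2 = xj @ xj
    s = np.sign(beta[j])  # stands in for sign(beta_hat_j) after identification
    new = beta.copy()
    new[j] = beta[j] + (xj @ (y - X @ beta) - lam * s) / nrm2
    A_s = np.eye(len(y)) - np.outer(xj, xj) / nrm2
    b_s = ((xj @ y - lam * s) / nrm2) * xj
    return new, A_s, b_s
```

Chaining such pairs over the support reproduces the epoch-level matrices A and b of Equation 31.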

##### Proximal gradient descent:

Let β_S^{(t)}, X_S and sign(β̂_S) denote respectively β^{(t)}, X and sign(β̂) restricted to the features in the support S. Notice that, since we are in the sign-identified regime, Xβ^{(t)} = X_S β_S^{(t)}. With L = ‖X‖² the gradient Lipschitz constant, a proximal gradient descent update reads:

$$\beta_{S}^{(t+1)} = \operatorname{ST}\left(\beta_{S}^{(t)} - \frac{1}{L} X_{S}^\top\left(X_{S}\beta_{S}^{(t)} - y\right),\ \frac{\lambda}{L}\right) = \beta_{S}^{(t)} - \frac{1}{L} X_{S}^\top\left(X_{S}\beta_{S}^{(t)} - y\right) - \frac{\lambda}{L}\operatorname{sign}\left(\hat\beta_{S}\right) = \left(\operatorname{Id}_S - \frac{1}{L} X_{S}^\top X_{S}\right)\beta_{S}^{(t)} + \frac{1}{L} X_{S}^\top y - \frac{\lambda}{L}\operatorname{sign}\left(\hat\beta_{S}\right). \tag{34}$$

Hence the equivalent of Equation 32 for proximal gradient descent is:

$$X\beta^{(t+1)} = \left(\operatorname{Id}_n - \frac{1}{L} X_{S} X_{S}^\top\right) X\beta^{(t)} + \frac{1}{L} X_{S} X_{S}^\top y - \frac{\lambda}{L} X_{S}\operatorname{sign}\left(\hat\beta_{S}\right). \tag{35}$$

Figure 1 represents the Lasso dual for a toy problem and illustrates the VAR nature of the dual iterates. As highlighted in Tibshirani (2017), the iterates θ^{(t)} correspond to the iterates of Dykstra's algorithm to project y/λ onto Δ_X. During the first updates, the dual iterates do not have a regular trajectory. However, after a certain number of updates (corresponding to sign identification), they alternate in a geometric fashion between two hyperplanes. In this regime, it becomes beneficial to use extrapolation to obtain a point closer to θ̂. Equation 31 shows why we combine extrapolation with cyclic coordinate descent: if the coefficients are not always updated in the same order (see Massias et al. 2018, Figure 1(c-d)), the matrix A depends on the epoch, and the VAR structure may no longer hold.

Having highlighted the VAR behavior of Xβ^{(t)}, we can introduce our proposed dual extrapolation: [Extrapolated dual point for the Lasso] For a fixed number K of proximal gradient descent or coordinate descent epochs, let r^{(t)} = y − Xβ^{(t)} denote the residuals at epoch t of the algorithm. We define the extrapolated residuals

$$r_{\text{accel}}^{(t)} = \begin{cases} r^{(t)}, & \text{if } t \le K,\\[4pt] \displaystyle\sum_{k=1}^K c_k\, r^{(t+1-k)}, & \text{if } t > K, \end{cases} \tag{36}$$

where c = (c_1, …, c_K)^⊤ is defined as in (17), with U^{(t)} = [r^{(t−K)} − r^{(t−K+1)}, …, r^{(t−1)} − r^{(t)}]. Then, we define the extrapolated dual point as:

$$\theta_{\text{accel}}^{(t)} \triangleq r_{\text{accel}}^{(t)} \Big/ \max\left(\lambda,\ \left\lVert X^\top r_{\text{accel}}^{(t)}\right\rVert_\infty\right). \tag{37}$$

In practice, we use K = 5 and do not compute θ_accel^{(t)} if U^{(t)⊤}U^{(t)} cannot be inverted. Additionally, to impose monotonicity of the dual objective, and to guarantee a behavior at least as good as that of θ_res^{(t)}, we use as dual point at iteration t:

$$\theta^{(t)} = \operatorname*{arg\,max}_{\theta \,\in\, \left\{\theta^{(t-1)},\ \theta_{\text{accel}}^{(t)},\ \theta_{\text{res}}^{(t)}\right\}} \mathcal{D}(\theta). \tag{38}$$
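Putting (36)–(37) together for the Lasso, here is a minimal sketch of the extrapolated dual point (ours, not the authors' released implementation; the windowing convention is one of several equivalent choices, and the function falls back to rescaled residuals when U^{(t)⊤}U^{(t)} is singular, as prescribed above):

```python
import numpy as np

def extrapolated_dual_point(X, lam, res_buffer):
    """Extrapolated dual point for the Lasso, in the spirit of (36)-(37):
    res_buffer holds the K + 1 most recent residual vectors, oldest first.
    Falls back to plain residual rescaling when U^T U is singular."""
    r = res_buffer[-1]
    K = len(res_buffer) - 1
    try:
        U = np.column_stack([res_buffer[k] - res_buffer[k + 1] for k in range(K)])
        z = np.linalg.solve(U.T @ U, np.ones(K))
        c = z / z.sum()
        # affine combination of K residuals with normalized coefficients
        r = sum(c[k] * res_buffer[k] for k in range(K))
    except np.linalg.LinAlgError:
        pass  # keep the latest residuals unchanged
    return r / max(lam, np.linalg.norm(X.T @ r, ord=np.inf))
```

Whatever branch is taken, the rescaling at the end keeps the returned point in Δ_X, so it can be plugged into the duality gap and the screening rule directly.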

There are two reasons why the bound of Equation 12 cannot be straightforwardly applied to Equation 37:

1. The analysis by Scieur et al. (2016) requires A to be symmetric, which is the case for proximal gradient descent but not for cyclic coordinate descent (A_s and A_{s'} only commute if x_{j_s} and x_{j_{s'}} are collinear). To circumvent this issue, we could make A symmetric: instead of considering cyclic updates, we could consider that iterates are produced by a cyclic pass over the coordinates, followed by a cyclic pass over the coordinates in reverse order. The matrix of the VAR in this case is no longer A = A_S ⋯ A_1, but A_1 ⋯ A_S A_S ⋯ A_1 = A^⊤A (the A_s's are symmetric), which is symmetric. A more recent result by Bollapragada et al. (2018) indicates that the bound still holds for a non-symmetric A (coherent with the practical results of Section 6), at the price of a much more complex analysis.

2. For both proximal gradient and coordinate descent, we have ‖A‖ ≤ 1 instead of ‖A‖ < 1 as soon as the support of β̂ is of size S smaller than n: if S < n, 1 is an eigenvalue of A. Indeed, for coordinate descent, if S < n, there exists a vector v ∈ ℝ^n orthogonal to the vectors x_{j_1}, …, x_{j_S}. Each matrix A_s being the orthogonal projection onto Span(x_{j_s})^⊥, we therefore have A_s v = v for every s ∈ [S], hence Av = v. For proximal gradient descent, X_S X_S^⊤ ∈ ℝ^{n×n} is not invertible when S < n, hence 1 is an eigenvalue of Id_n − X_S X_S^⊤ / L.