1 Introduction
Sparse classification algorithms have gained much popularity in the context of supervised learning, thanks to their ability to discard irrelevant features during the training stage. Such algorithms aim at learning a weighted linear combination of basis functions that fits the training data, while encouraging as many weights as possible to be equal to zero. This amounts to solving an optimization problem that involves a loss function plus a sparse regularization term. Different types of classifiers arise by varying the loss function, the most popular being the hinge and the logistic losses
Rosasco2004; Bartlett2006.

Sparse regularization traces back to the work of Bradley and Mangasarian Bradley1998, who showed that the $\ell_1$-norm can efficiently perform feature selection by shrinking small coefficients to zero. Other forms of regularization have also been studied, such as the
$\ell_0$-norm Weston2002, the $\ell_p$-norm with $0<p<1$ Liu2007, the $\ell_\infty$-norm Zou2008, and other nonconvex terms Laporte2014. Mixed norms have been investigated as well, due to their ability to impose a more structured form of sparsity Krishnapuram2005; Meier2008; Duchi2009; Obozinski2010; Yuan2010; Rosasco2013.

Many efficient learning algorithms exist in the case of quadratic regularization, as they benefit from the advantages brought by Lagrangian duality Blondel2014.
This is unfortunately not true for sparse regularization, because the dual formulation is as difficult to solve as the primal one. Consequently, sparse linear classifiers are usually trained through the direct resolution of the primal optimization problem. Among the possible approaches, one can resort to linear programming Wang2007, gradient-like methods Krishnapuram2005; Mairal2013, proximal algorithms Laporte2014; Chierchia2015; Barlaud2015, and other optimization techniques Tan2010.

Nowadays, it is well known that random updates can significantly reduce the computational time when a quadratic regularization is used Hsieh2008; Lacoste2013. Therefore, a great deal of attention has recently been paid to stochastic approaches capable of handling a sparse regularization Pereyra2016. The list of investigated techniques includes block-coordinate descent strategies Shalev2011; Blondel2013; Fercoq2015; Lu2015, stochastic forward-backward iterations Langford2009; Xiao2010; Richtarik2014; Rosasco2014; Combettes2016, random Douglas-Rachford splitting methods Combettes2015, random primal-dual proximal algorithms Pesquet2015, and stochastic majorization-minimization methods Mairal2013b; Chouzenoux2015.
In this paper, we propose a random-sweeping block-coordinate Douglas-Rachford splitting method. In addition to its stochastic behavior, it presents three distinctive features with respect to related approaches Briceno2011; Bot2013; Combettes2015. Firstly, the matrix to be inverted at the initial step is block-diagonal, whereas in the aforementioned approaches it has no specific structure. This block-diagonal property implies that the inversion step amounts to inverting a set of smaller matrices. Secondly, the proposed algorithm can explicitly take advantage of a strong convexity property possibly fulfilled by some of the functions involved in the optimization problem. Finally, the dual variables appear explicitly in the proposed scheme, making it possible to use clever block-coordinate descent strategies Perekrestenko2017.
Moreover, the proposed algorithm appears to be well tailored to binary logistic regression with sparse regularization. In contrast to gradient-like methods, our approach deals with the logistic loss through its proximity operator. This results in an algorithm that is not tied to the Lipschitz constant of the loss function, possibly allowing larger updates per iteration. In this regard, our second contribution is to show that the proximity operator of the binary logistic loss can be expressed in closed form using the generalized Lambert W function Mezo2014; Maignan2016. We also provide comparisons with state-of-the-art stochastic methods on benchmark datasets.
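As an illustration of this point, the proximity operator of the scalar logistic loss $u\mapsto\log(1+e^{-u})$ can always be evaluated numerically through its one-dimensional optimality condition. The sketch below does so by bisection; it is a generic numerical fallback, not the closed-form Lambert-W-based expression derived in Section 3, and the names prox_logistic, v and gamma are illustrative.

```python
import numpy as np

def prox_logistic(v, gamma, n_bisect=60):
    """Proximity operator of gamma * log(1 + exp(-u)) evaluated at v, i.e. the
    minimizer of gamma * log(1 + exp(-p)) + 0.5 * (p - v)**2.  It is computed by
    bisection on the optimality condition  p - v = gamma / (1 + exp(p)),
    whose solution lies in the interval [v, v + gamma]."""
    lo, hi = v, v + gamma
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if mid - v - gamma / (1.0 + np.exp(mid)) < 0.0:
            lo = mid  # the root is to the right of mid
        else:
            hi = mid  # the root is to the left of mid
    return 0.5 * (lo + hi)

print(prox_logistic(0.3, 1.0))
```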
The paper is organized as follows. In Section 2, we derive the proposed Douglas-Rachford algorithm. In Section 3, we introduce sparse logistic regression, along with the proximal operator of the logistic loss. In Section 4, we evaluate our approach on standard datasets, and compare it to three sparse classification algorithms proposed in the literature Xiao2010 ; Rosasco2014 ; Chierchia2016 . Finally, conclusions are drawn in Section 5.
Notation: $\Gamma_0(\mathcal{H})$ denotes the set of proper, lower semicontinuous, convex functions from a real Hilbert space $\mathcal{H}$ to $\left]-\infty,+\infty\right]$. Let $f\in\Gamma_0(\mathcal{H})$. For every $x\in\mathcal{H}$, the subdifferential of $f$ at $x$ is $\partial f(x)=\{u\in\mathcal{H}\mid(\forall y\in\mathcal{H})\ \langle y-x\mid u\rangle+f(x)\le f(y)\}$, the proximity operator of $f$ at $x$ is $\operatorname{prox}_f(x)=\operatorname{argmin}_{y\in\mathcal{H}}\,f(y)+\frac{1}{2}\|x-y\|^2$, and the conjugate of $f$ is $f^*\colon u\mapsto\sup_{x\in\mathcal{H}}\left(\langle x\mid u\rangle-f(x)\right)$, which belongs to $\Gamma_0(\mathcal{H})$. The adjoint of a bounded linear operator $L$ from $\mathcal{H}$ to a real Hilbert space $\mathcal{G}$ is denoted by $L^*$. Let $(\Omega,\mathcal{F},\mathsf{P})$ be the underlying probability space; the $\sigma$-algebra generated by a family $\Phi$ of random variables is denoted by $\sigma(\Phi)$.
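As a concrete illustration of the proximity operator and the conjugate just introduced, the following minimal sketch numerically checks Moreau's decomposition, $\operatorname{prox}_f(x)+\operatorname{prox}_{f^*}(x)=x$, for $f=\|\cdot\|_1$, whose conjugate is the indicator of the $\ell_\infty$ unit ball; the helper names are illustrative.

```python
import numpy as np

def prox_l1(x):
    """Proximity operator of the l1-norm (soft-thresholding with threshold 1)."""
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

def prox_l1_conjugate(x):
    """Proximity operator of the conjugate of the l1-norm, i.e. the projection
    onto the l_infinity unit ball."""
    return np.clip(x, -1.0, 1.0)

x = np.array([2.5, -0.3, 0.9, -4.0])
assert np.allclose(prox_l1(x) + prox_l1_conjugate(x), x)  # Moreau's decomposition
```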
2 Optimization method
Throughout this section, $\mathcal{H}_1,\ldots,\mathcal{H}_m$ and $\mathcal{G}_1,\ldots,\mathcal{G}_q$ are separable real Hilbert spaces. In addition, $\boldsymbol{\mathcal{H}}=\mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m$ denotes the Hilbertian sum of $\mathcal{H}_1,\ldots,\mathcal{H}_m$. Any vector $\boldsymbol{x}\in\boldsymbol{\mathcal{H}}$ can thus be uniquely decomposed as $\boldsymbol{x}=(x_i)_{1\le i\le m}$ where, for every $i\in\{1,\ldots,m\}$, $x_i\in\mathcal{H}_i$. In the following, a similar notation will be used to denote vectors in any product space (in bold) and their components.

We will now aim at solving the following problem.
Problem 1
For every $i\in\{1,\ldots,m\}$ and for every $k\in\{1,\ldots,q\}$, let $g_i\in\Gamma_0(\mathcal{H}_i)$, let $f_k\colon\mathcal{G}_k\to\mathbb{R}$ be a differentiable convex function with $\nu_k$-Lipschitz gradient, for some $\nu_k\in\left]0,+\infty\right[$, and let $L_{k,i}$ be a bounded linear operator from $\mathcal{H}_i$ to $\mathcal{G}_k$. The problem is to
$$\underset{\boldsymbol{x}\in\boldsymbol{\mathcal{H}}}{\text{minimize}}\;\;\sum_{i=1}^{m} g_i(x_i)+\sum_{k=1}^{q} f_k\Big(\sum_{i=1}^{m} L_{k,i}\,x_i\Big),$$
under the assumption that the set of solutions is nonempty.
In order to address Problem 1, we propose to employ the random-sweeping block-coordinate version of the Douglas-Rachford splitting method with stochastic errors described in Algorithm 1. Let us define
(1)
where and are the positive constants introduced in Algorithm 1.
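For intuition, recall the classical deterministic two-operator Douglas-Rachford iteration, of which Algorithm 1 can be viewed as a random block-coordinate extension. The sketch below applies it to a toy $\ell_1$-regularized denoising problem whose two proximity operators are explicit; the step size gamma, the relaxation lam and the function names are illustrative and do not correspond to the constants of Algorithm 1.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(a, tau, gamma=1.0, lam=1.0, n_iter=200):
    """Classical Douglas-Rachford iteration for the toy problem
        minimize_x  0.5 * ||x - a||^2 + tau * ||x||_1,
    whose two proximity operators are available in closed form."""
    z = np.zeros_like(a)
    for _ in range(n_iter):
        # proximity operator of gamma * f, with f(x) = 0.5 * ||x - a||^2
        x = (z + gamma * a) / (1.0 + gamma)
        # proximity operator of gamma * g, with g = tau * ||.||_1, at the reflected point
        y = soft_threshold(2.0 * x - z, gamma * tau)
        # relaxed update of the governing sequence
        z = z + lam * (y - x)
    return x

a = np.array([3.0, -0.2, 1.5, 0.05])
print(douglas_rachford(a, tau=0.5))
print(soft_threshold(a, 0.5))  # known minimizer of the toy problem, for comparison
```

Each iteration reduces to two closed-form proximal updates; random-sweeping block-coordinate variants such as Algorithm 1 only update a randomly activated subset of coordinates at each iteration.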
The next result establishes the convergence of the proposed algorithm.
Proposition 1
For every , let , and be -valued random variables and, for every , let and be -valued random variables and let be -valued random variables. In addition, let be identically distributed -valued random variables and in Algorithm 1 assume that
(i) and are independent;
(ii) ;
(iii) ;
(iv) and .
Then, the sequence generated by Algorithm 1 converges weakly -a.s. to an -valued random variable.
Proof.
Problem 1 can be reformulated as minimizing where
(2)
(3)
(4)
(5)
and denotes a generic element of with for every . Since , from (Bauschke2017, Theorem 16.47(i)), Problem 1 is equivalent to
(6)
which, from (Briceno2011, Proposition 2.8), is also equivalent to
(7)
where is maximally monotone and
is a skewed linear operator. Note that
and, from (2), (4) and (Bauschke2017, Proposition 13.30 and Proposition 16.9), and . Since, for every , is convex differentiable with a Lipschitzian gradient , it follows from the Baillon-Haddad theorem (Bauschke2017, Corollary 18.17) that is cocoercive and, hence, is strongly monotone. Therefore, for every , is strongly convex. By defining
(8)
it follows from (Bauschke2017, Proposition 10.8) that, for every , and, hence, is maximally monotone. Consequently, Problem 1 can be rewritten equivalently as
(9)
which, for strictly positive constants and , is equivalent to
(10)
where
(11)
Since and are linear and monotone operators in and , respectively, the operator
is maximally monotone in . Therefore, by defining the strongly positive diagonal linear operator
(12)
where and , the operator
(13)
is maximally monotone in , where
(14)
Note that the renormed product space is the Hilbert sum where, for every and , and are endowed with the norm and , respectively. Therefore, since and are maximally monotone in and , respectively, we conclude that (10) is a particular case of the primal inclusion in (Combettes2015, Proposition 5.1).
Now we write Algorithm 1 as a particular case of the random block-coordinate Douglas-Rachford splitting proposed in (Combettes2015, Proposition 5.1) applied to (10) in . Given , let . It follows from (13) that
(15)
which leads to
(16)
In order to derive an explicit formula for the matrix inversion in (16), set . We have and, since (3) and is diagonal, we obtain
and, hence,
(17)
Therefore, (16) can be written equivalently as
(18)
where
(19)
Moreover, from (15), we deduce that
(20)
and, hence, we have , where
(21)
Now, it follows from (Bauschke2017, Proposition 16.44) that, for every and , and and, for every and , we have
(22)
Therefore, if , we have that (22) is equivalent to
(23)
and, from Moreau’s decomposition formula (Bauschke2017, Theorem 14.13(ii)), we obtain
(24)
Noting that, for every , , from (Bauschke2017, Proposition 24.14), (22) and (24), we deduce that
(25)
and, hence,
(26)
Therefore, by defining, for every and , via
(27)
we deduce that Algorithm 1 can be written equivalently as
For
Defining, for every , , we have
(28)
where the last inequality follows from (ii), (iii), (27) and
(29)
Altogether, since the operator is weakly sequentially continuous because it is continuous and linear, the result follows from (Combettes2015, Proposition 5.1) when the error term in the computation of is zero. ∎
Remark 1
- In Proposition 1, the binary variables and signal whether the variables and are activated or not at iteration . Assumption (iv) guarantees that each of the latter variables is activated with a nonzero probability at each iteration. In particular, it must be pointed out that the variables and only need to be computed when .
- Note that Algorithm 1 may look similar to the stochastic approach proposed in (Combettes2015, Corollary 5.5) (see also (Briceno2011, Remark 2.9) and Bot2013 for deterministic variants). It exhibits, however, three key differences. Most importantly, the operator inversions performed at the initial step amount to inverting a set of positive definite self-adjoint operators defined on the spaces . We will see in our application that this reduces to inverting a set of small symmetric positive definite matrices, as illustrated by the sketch following this remark. Another advantage is that the smoothness of the functions is taken into account, and a last one is that the dual variables appear explicitly in the iterations.
- If, for every , and , Algorithm 1 simplifies to Algorithm 2, where unnecessary indices have been dropped and we have set
(30)
In this case,
(31)
When , it turns out that this algorithm is exactly the same as the one resulting from a direct application of (Combettes2015, Corollary 5.5) Chierchia2017.
- The situation when, for a given , is not Lipschitz-differentiable can be seen as the limit case when . It can then be shown that Algorithm 1 remains valid by setting .
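To illustrate the computational advantage pointed out in the second item above, the following sketch (purely illustrative, not part of Algorithm 1) checks that inverting a block-diagonal positive definite matrix amounts to inverting its diagonal blocks independently, at a much lower cost; the block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(d):
    """Random symmetric positive definite matrix of size d x d."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

# Build a block-diagonal operator made of small SPD blocks.
block_sizes = [50, 80, 120]
blocks = [random_spd(d) for d in block_sizes]
n = sum(block_sizes)
full = np.zeros((n, n))
offset = 0
for B in blocks:
    d = B.shape[0]
    full[offset:offset + d, offset:offset + d] = B
    offset += d

# Inverting the full matrix costs on the order of (sum_i d_i)^3 operations ...
inv_full = np.linalg.inv(full)

# ... whereas inverting block by block costs on the order of sum_i d_i^3 operations.
inv_blocks = [np.linalg.inv(B) for B in blocks]

# Both computations agree on every diagonal block.
offset = 0
for B_inv in inv_blocks:
    d = B_inv.shape[0]
    assert np.allclose(inv_full[offset:offset + d, offset:offset + d], B_inv)
    offset += d
```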
3 Sparse logistic regression
The proposed algorithm can be applied in the context of binary linear classification. A binary linear classifier can be modeled as a function that predicts the output associated with a given input . This prediction is defined through a linear combination of the input components, yielding the decision variable
(32)
where
is the weight vector to be estimated. In supervised learning, this weight vector is determined from a set of input-output pairs
(33)
which is called the training set. More precisely, the learning task can be defined as the trade-off between fitting the training data and reducing the model complexity, leading to an optimization problem expressed as
(34)
where is a regularization function and stands for the loss function. In the context of sparse learning, a popular choice for the regularization is the $\ell_1$-norm. Although many choices for the loss function are possible, we are primarily interested in the logistic loss, which is detailed in the next section.
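As a minimal illustration of this trade-off, and anticipating the logistic loss detailed in the next section, the sketch below evaluates an $\ell_1$-regularized logistic objective on synthetic data; the names X, y, w and reg are illustrative and do not follow the notation of (34).

```python
import numpy as np

def logistic_loss(margins):
    """Numerically stable evaluation of log(1 + exp(-m)) for each margin m."""
    return np.logaddexp(0.0, -margins)

def sparse_logistic_objective(w, X, y, reg):
    """L1-regularized logistic objective: data-fitting term plus sparse penalty,
    i.e. sum_n log(1 + exp(-y_n <x_n, w>)) + reg * ||w||_1, with labels y_n in {-1, +1}."""
    margins = y * (X @ w)
    return logistic_loss(margins).sum() + reg * np.abs(w).sum()

# Small synthetic example with a sparse ground-truth weight vector.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(100))
print(sparse_logistic_objective(np.zeros(20), X, y, reg=0.1))
```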
3.1 Logistic regression
Logistic regression aims at maximizing the posterior probability density function of the weights given the training data, here assumed to be a realization of statistically independent input-output random variables. This leads us to
(35)
where
is the weight prior probability density function and, for every
, is the conditional data likelihood of the -th input knowing the weight values, while is the conditional probability of the -th output knowing the -th input and the weights. Let us model this conditional probability with the sigmoid function defined as
$(\forall v\in\mathbb{R})\qquad \sigma(v)=\frac{1}{1+e^{-v}}$ (36)
and assume that the inputs and the weights are statistically independent and that is log-concave. Then, the negative logarithm of the energy in (35) yields an instance of Problem (34) in which
(37)
and, for every , . (The term can be discarded since the inputs and the weights are assumed to be statistically independent.) The function in (37) is called the logistic loss. For completeness, note that other loss functions, leading to different kinds of classifiers, are the hinge loss Cortes1995
(38)
with , and the Huber loss Martins2016