1 Introduction
Many problems in machine learning, statistics and signal processing may be cast as convex optimization problems. In large-scale situations, simple gradient-based algorithms with potentially many cheap iterations are often preferred over methods, such as Newton's method or interior-point methods, that rely on fewer but more expensive iterations. The choice of a first-order method depends on the structure of the problem, in particular (a) the smoothness and/or strong convexity of the objective function, and (b) the computational efficiency of certain operations related to the nonsmooth parts of the objective function, when it is decomposable into a smooth and a nonsmooth part.
In this paper, we consider two classical algorithms, namely (a) subgradient descent and its mirror descent extension [29, 24, 4], and (b) conditional gradient algorithms, sometimes referred to as Frank-Wolfe algorithms [16, 13, 15, 14, 19].
Subgradient algorithms are adapted to nonsmooth unstructured situations, and after $t$ steps have a convergence rate of $O(1/\sqrt{t})$ in terms of objective values. This convergence rate improves to $O(1/t)$ when the objective function is strongly convex [22]. Conditional gradient algorithms are tailored to the optimization of smooth functions on a compact convex set, for which minimizing linear functions is easy (but where orthogonal projections would be hard, so that proximal methods [26, 5] cannot be used efficiently). They also have a convergence rate of $O(1/t)$ [15]. The main results of this paper are (a) to show that for common situations in practice, these two sets of methods are in fact equivalent by convex duality, (b) to recover a previously proposed extension of the conditional gradient method which is more generally applicable [10], and (c) to provide explicit convergence rates for primal and dual iterates. We also review in Appendix A the non-strongly convex case and show that both primal and dual suboptimalities then converge at rate $O(1/\sqrt{t})$.
More precisely, we consider a convex function $f$ defined on $\mathbb{R}^n$, a convex function $h$ defined on $\mathbb{R}^d$, both potentially taking the value $+\infty$, and a matrix $A \in \mathbb{R}^{n \times d}$. We consider the following minimization problem, which we refer to as the primal problem:
$$\min_{x \in \mathbb{R}^d} \; h(x) + f(Ax). \qquad (1)$$
Throughout this paper, we make the following assumptions regarding the problem:

- (A1) $f$ is Lipschitz-continuous and finite on $\mathbb{R}^n$, i.e., there exists a constant $B$ such that for all $z, z' \in \mathbb{R}^n$, $|f(z) - f(z')| \leq B \, \|z - z'\|_2$, where $\|\cdot\|_2$ denotes the Euclidean norm. Note that this implies that the domain of the Fenchel conjugate $f^*(y) = \sup_{z \in \mathbb{R}^n} y^\top z - f(z)$ is bounded. We denote by $C$ the bounded domain of $f^*$. Thus, for all $y \in C$, $\|y\|_2 \leq B$. In many situations, $C$ is also closed, but this is not always the case (in particular, when $f^*$ tends to infinity as $y$ tends to the boundary of $C$).

- (A2) $h$ is $\mu$-strongly convex, i.e., $x \mapsto h(x) - \frac{\mu}{2}\|x\|_2^2$ is convex.

Note that the boundedness of the domain of $f^*$ is crucial and allows for simpler proof techniques with explicit constants (see a generalization in [10]).
Moreover, we assume that the following quantities may be computed efficiently:

Subgradient of $f$: for any $z \in \mathbb{R}^n$, a subgradient of $f$ at $z$ is any maximizer $y$ of $\max_{y \in C} \; y^\top z - f^*(y)$.

Gradient of $h^*$: for any $z \in \mathbb{R}^d$, $h^*(z) = \max_{x \in \mathbb{R}^d} x^\top z - h(x)$ may be computed, and its gradient $(h^*)'(z)$ is equal to the unique maximizer $x$ (unique because $h$ is strongly convex).
The values of the functions $f$, $h$, $f^*$, and $h^*$ will be useful to compute duality gaps but are not needed to run the algorithms. As shown in Section 2, there are many examples of pairs of functions $(f, h)$ with the computational constraints described above. If other operations are possible, in particular the proximal operator of $f^*$, then proximal methods [5, 26] applied to the dual problem converge at rate $O(1/t^2)$. If $f$ and $h$ are smooth, then gradient methods (accelerated [25, Section 2.2] or not) have linear convergence rates.
We denote by $F(x) = h(x) + f(Ax)$ the primal objective in Eq. (1). It is the sum of a Lipschitz-continuous convex function $x \mapsto f(Ax)$ and a strongly convex function $h$, potentially on a restricted domain $K = \operatorname{dom}(h)$. It is thus well adapted to the subgradient method [29].
We have the following primal/dual relationships (obtained from Fenchel duality [8]):
$$\min_{x \in \mathbb{R}^d} h(x) + f(Ax) \;=\; \min_{x \in \mathbb{R}^d} \max_{y \in C} \; h(x) + y^\top A x - f^*(y) \;=\; \max_{y \in C} \; - f^*(y) - h^*(-A^\top y).$$
This leads to the dual maximization problem:
$$\max_{y \in C} \; - f^*(y) - h^*(-A^\top y). \qquad (2)$$
We denote by $G(y) = -f^*(y) - h^*(-A^\top y)$ the dual objective. It has a smooth part $y \mapsto -h^*(-A^\top y)$, defined on $\mathbb{R}^n$, and a potentially nonsmooth part $-f^*(y)$, and the problem is restricted onto the bounded set $C$. When $f^*$ is linear (and more generally smooth) on its support, then we are exactly in the situation where conditional gradient algorithms may be used [16, 13].
Given a pair of primal-dual candidates $(x, y) \in \operatorname{dom}(h) \times C$, we denote by $\operatorname{gap}(x, y)$ the duality gap:
$$\operatorname{gap}(x, y) = F(x) - G(y) = \big[ f(Ax) + f^*(y) - y^\top A x \big] + \big[ h(x) + h^*(-A^\top y) + y^\top A x \big].$$
It is equal to zero if and only if (a) $(Ax, y)$ is a Fenchel-dual pair for $f$ and (b) $(x, -A^\top y)$ is a Fenchel-dual pair for $h$. This quantity serves as a certificate of optimality, as
$$\operatorname{gap}(x, y) = \Big[ F(x) - \min_{x' \in \mathbb{R}^d} F(x') \Big] + \Big[ \max_{y' \in C} G(y') - G(y) \Big].$$
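As a concrete numerical illustration of these primal and dual objectives and of the duality gap (a sketch added here, not part of the original text), consider the least-absolute-deviation loss $f(z) = \|z - c\|_1$, so that $f^*(y) = c^\top y$ on $C = \{y : \|y\|_\infty \leq 1\}$, together with $h(x) = \frac{\mu}{2}\|x\|_2^2$; the data $A$, $c$ and the constant $\mu$ are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 5, 3, 2.0
A = rng.standard_normal((n, d))
c = rng.standard_normal(n)

def F(x):
    # primal objective: h(x) + f(Ax), with h = mu/2 ||.||^2, f = ||. - c||_1
    return 0.5 * mu * x @ x + np.abs(A @ x - c).sum()

def G(y):
    # dual objective: -f*(y) - h*(-A^T y); here f*(y) = c^T y on the
    # l_inf unit ball C, and h*(z) = ||z||^2 / (2 mu)
    assert np.abs(y).max() <= 1 + 1e-12, "y must lie in C"
    return -c @ y - (A.T @ y) @ (A.T @ y) / (2 * mu)

def gap(x, y):
    # duality gap: nonnegative, zero only at an optimal primal-dual pair
    return F(x) - G(y)

x = rng.standard_normal(d)
y = np.clip(rng.standard_normal(n), -1.0, 1.0)
assert gap(x, y) >= 0.0   # weak duality holds for any feasible pair
```

The gap can thus be evaluated at any candidate pair without knowing the optimum, which is how it is used as a stopping criterion below.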
The goal of this paper is to show that for certain problems (i.e., for $f^*$ linear on its domain and $h$ quadratic), the subgradient method applied to the primal problem in Eq. (1) is equivalent to the conditional gradient method applied to the dual problem in Eq. (2); when relaxing the assumptions above, this equivalence is then between mirror descent methods and generalized conditional gradient algorithms.
2 Examples
The nonsmooth strongly convex optimization problem defined in Eq. (1) occurs in many applications in machine learning and signal processing, either because they are formulated directly in this format, or because their dual in Eq. (2) is (i.e., the original problem is the minimization of a smooth function over a compact set).
2.1 Direct formulations
Typical cases for $h$ (often the regularizer in machine learning and signal processing) are the following:

- Squared Euclidean norm: $h(x) = \frac{\mu}{2} \|x\|_2^2$, which is $\mu$-strongly convex.

- Squared Euclidean norm with convex constraints: $h(x) = \frac{\mu}{2} \|x\|_2^2 + I_K(x)$, with $I_K$ the indicator function of a closed convex set $K$, which is also $\mu$-strongly convex.
Typical cases for $f$ (often the data-fitting terms in machine learning and signal processing) are functions of the form $f(z) = \sum_{i=1}^n \ell_i(z_i)$, where $\ell_i$ is the loss associated with the $i$-th observation:

- Least-absolute-deviation: $\ell_i(z_i) = |z_i - b_i|$, with $b_i \in \mathbb{R}$ the $i$-th response. Note that the square loss is not Lipschitz-continuous on $\mathbb{R}^n$ (although it is Lipschitz-continuous when restricted to a bounded set).

- Logistic regression: $\ell_i(z_i) = \log\big(1 + \exp(-b_i z_i)\big)$, with $b_i \in \{-1, 1\}$ the $i$-th label. Here $f^*$ is not linear on its support, and is not smooth, since it is a sum of negative entropies (and the second-order derivative is not bounded). This extends to any "log-sum-exp" functions which occur as a negative log-likelihood from the exponential family (see, e.g., [32] and references therein). Note that $f$ is then smooth, and proximal methods with an exponential convergence rate may be used (which correspond to a constant step size in the algorithms presented below, instead of a decaying step size) [26, 5].

- Support vector machine: $\ell_i(z_i) = \max(1 - b_i z_i, 0)$, with $b_i \in \{-1, 1\}$. Here $f^*$ is linear on its domain (this is a situation where subgradient and conditional gradient methods are exactly equivalent). This extends to more general "max-margin" formulations [31, 30]: in these situations, a combinatorial object (such as a full chain, a graph, a matching, or vertices of the hypercube) is estimated (rather than an element of $\mathbb{R}^n$), and this leads to functions $f$ whose Fenchel conjugates are linear and have domains related to the polytopes associated with the linear programming relaxations of the corresponding combinatorial optimization problems. For these polytopes, often, only linear functions can be maximized, i.e., we can compute a subgradient of $f$, but typically nothing more.
Other examples may be found in signal processing; for example, total-variation denoising, where the loss is strongly convex but the regularizer is nonsmooth [11], or submodular function minimization cast through separable optimization problems [2]. Moreover, many proximal operators for nonsmooth regularizers are of this form, with $h(x) = \frac{1}{2}\|x - z\|_2^2$ (for the point $z$ at which the proximal operator is evaluated) and $f$ a norm (or more generally a gauge function).
2.2 Dual formulations
Another interesting set of examples for machine learning is more naturally described from the dual formulation in Eq. (2): given a smooth loss term (this could be least-squares or logistic regression), a typically nonsmooth penalization or constraint is added, often through a norm $\Omega$. Thus, this corresponds to functions $f^*$ of the form $f^*(y) = \varphi\big(\Omega(y)\big)$, where $\varphi$ is a convex nondecreasing function ($f^*$ is then convex).
Our main assumption is that a subgradient of $f$ may be easily computed. This is equivalent to being able to maximize functions of the form $y \mapsto y^\top z - f^*(y)$ for $z \in \mathbb{R}^n$. If one can compute the dual norm of $z$, $\Omega^*(z) = \max_{\Omega(y) \leq 1} y^\top z$, and in particular a maximizer $y$ in the unit ball of $\Omega$, then one can compute simply a subgradient of $f$. Only being able to compute the dual norm efficiently is a common situation in machine learning and signal processing, for example, for structured regularizers based on submodularity [2], all atomic norms [12], and norms based on matrix decompositions [1]. See additional examples in [19].
Our assumption regarding the compact domain of $f^*$ translates to the assumption that $\varphi$ has compact domain. This includes indicator functions $\varphi = I_{[0, \lambda]}$, which correspond to the constraint $\Omega(y) \leq \lambda$. We may also consider $\varphi(\omega) = \lambda \omega + I_{[0, \rho]}(\omega)$, which corresponds to jointly penalizing and constraining the norm; in practice, $\rho$ may be chosen so that the constraint $\Omega(y) \leq \rho$ is not active at the optimum, and we get the solution of the penalized problem with penalty $\lambda \Omega(y)$. See [17, 34, 1] for alternative approaches.
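For instance (an added illustration, not part of the original text), take $\Omega$ to be the $\ell_1$ norm, so that $\Omega^*$ is the $\ell_\infty$ norm, and $\varphi = I_{[0,\lambda]}$; then $f(z) = \max_{\|y\|_1 \leq \lambda} y^\top z = \lambda \|z\|_\infty$, and a maximizer over the scaled $\ell_1$ ball yields a subgradient of $f$:

```python
import numpy as np

lam = 0.5  # constraint level: f*(y) = 0 if ||y||_1 <= lam (phi = indicator)

# f(z) = max_{||y||_1 <= lam} y^T z = lam * ||z||_inf  (lam times the dual norm)
def f(z):
    return lam * np.abs(z).max()

# A maximizer over the scaled l1 ball gives a subgradient of f: put all the
# mass lam on a coordinate achieving the maximal absolute value.
def subgradient_f(z):
    i = np.argmax(np.abs(z))
    y = np.zeros_like(z)
    y[i] = lam * np.sign(z[i])
    return y

rng = np.random.default_rng(1)
z = rng.standard_normal(6)
y = subgradient_f(z)
assert abs(y @ z - f(z)) < 1e-12   # y attains the maximum, hence y is a subgradient
# subgradient inequality f(z') >= f(z) + y^T (z' - z) at random points
for z2 in rng.standard_normal((5, 6)):
    assert f(z2) >= f(z) + y @ (z2 - z) - 1e-12
```

The same pattern (maximize a linear function over the unit ball of $\Omega$, then rescale) applies to any norm whose dual-norm maximizer is available.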
3 Mirror descent for strongly convex problems
We first assume that the function $h$ is essentially smooth (i.e., differentiable at any point in the interior of its domain $K$, and such that the norm of gradients converges to $+\infty$ when approaching the boundary of $K$); then $h'$ is a bijection from $\operatorname{int}(K)$ to $\mathbb{R}^d$, where $K$ is the domain of $h$, with inverse $(h^*)'$ (see, e.g., [28, 18]). We consider the Bregman divergence
$$D(x, x') = h(x) - h(x') - h'(x')^\top (x - x').$$
It is always defined on $K \times \operatorname{int}(K)$, and is nonnegative. If $x, x' \in \operatorname{int}(K)$, then $D(x, x') = 0$ if and only if $x = x'$. Moreover, since $h$ is assumed $\mu$-strongly convex, we have $D(x, x') \geq \frac{\mu}{2} \|x - x'\|_2^2$. See more details in [4]. For example, when $h(x) = \frac{\mu}{2}\|x\|_2^2$, we have $D(x, x') = \frac{\mu}{2}\|x - x'\|_2^2$.
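The strong-convexity lower bound on the Bregman divergence can be checked numerically (an added sanity check, not from the original text); the quartic term below is a hypothetical choice making $h$ strictly more than quadratic while keeping it $\mu$-strongly convex and differentiable:

```python
import numpy as np

mu = 1.5

# a mu-strongly convex, differentiable h: quadratic plus a convex quartic
def h(x):
    return 0.5 * mu * x @ x + (x ** 4).sum()

def h_prime(x):
    return mu * x + 4 * x ** 3

# Bregman divergence D(x, x') = h(x) - h(x') - h'(x')^T (x - x')
def bregman(x, xp):
    return h(x) - h(xp) - h_prime(xp) @ (x - xp)

rng = np.random.default_rng(2)
for _ in range(100):
    x, xp = rng.standard_normal(3), rng.standard_normal(3)
    # strong convexity: D(x, x') >= mu/2 ||x - x'||^2
    assert bregman(x, xp) >= 0.5 * mu * (x - xp) @ (x - xp) - 1e-10
```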
Subgradient descent for square Bregman divergence
We first consider the common situation where $h(x) = \frac{\mu}{2}\|x\|_2^2$; the primal problem then becomes:
$$\min_{x \in \mathbb{R}^d} \; \frac{\mu}{2} \|x\|_2^2 + f(Ax).$$
The projected subgradient method starts from any $x_0 \in \mathbb{R}^d$, and iterates the following recursion:
$$x_t = x_{t-1} - \frac{\rho_t}{\mu} \big[ A^\top \bar{y}_t + \mu x_{t-1} \big],$$
where $\bar{y}_t$ is any subgradient of $f$ at $A x_{t-1}$, so that $A^\top \bar{y}_t + \mu x_{t-1}$ is a subgradient of the primal objective at $x_{t-1}$. The step size is $\rho_t / \mu$; we will use $\rho_t = 2/(t+1)$.
The recursion may be rewritten as
$$x_t = (1 - \rho_t) \, x_{t-1} - \frac{\rho_t}{\mu} A^\top \bar{y}_t,$$
which is equivalent to $x_t$ being the unique minimizer of
$$(x - x_{t-1})^\top \big[ A^\top \bar{y}_t + \mu x_{t-1} \big] + \frac{\mu}{2 \rho_t} \|x - x_{t-1}\|_2^2, \qquad (3)$$
which is the traditional proximal step, with step size $\rho_t / \mu$.
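The equivalence between the explicit subgradient step and the proximal formulation in Eq. (3) can be verified numerically (an added illustration, with $f$ the $\ell_1$ norm and arbitrary data; the step-size parameter is written as a generic $\rho$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, mu, rho = 4, 3, 2.0, 0.2
A = rng.standard_normal((n, d))
x_prev = rng.standard_normal(d)

# subgradient of f(z) = ||z||_1 at z = A x_prev
y_bar = np.sign(A @ x_prev)
s = A.T @ y_bar + mu * x_prev       # subgradient of the primal objective

# explicit subgradient step with step size rho / mu
x_next = x_prev - (rho / mu) * s

# Eq. (3): x_next should be the unique minimizer of this proximal objective
def prox_obj(x):
    return (x - x_prev) @ s + 0.5 * (mu / rho) * (x - x_prev) @ (x - x_prev)

# check the minimizing property against random perturbations
for _ in range(50):
    x_rand = x_next + 0.1 * rng.standard_normal(d)
    assert prox_obj(x_next) <= prox_obj(x_rand) + 1e-12
```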
Mirror descent
We may interpret the last formulation in Eq. (3) for the square regularizer as the minimization of
$$(x - x_{t-1})^\top \big[ A^\top \bar{y}_t + h'(x_{t-1}) \big] + \frac{1}{\rho_t} D(x, x_{t-1}),$$
with solution $x_t$ defined through (note that $h'$ is a bijection from $\operatorname{int}(K)$ to $\mathbb{R}^d$):
$$h'(x_t) = (1 - \rho_t) \, h'(x_{t-1}) - \rho_t A^\top \bar{y}_t.$$
This leads to the following definition of the mirror descent recursion:
$$x_t = (h^*)' \big[ (1 - \rho_t) \, h'(x_{t-1}) - \rho_t A^\top \bar{y}_t \big], \quad \text{with } \bar{y}_t \in \partial f(A x_{t-1}). \qquad (4)$$
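To make the recursion concrete, here is a small NumPy sketch (an added illustration, not part of the original text) on the least-absolute-deviation example of Section 2, $f(z) = \|z - c\|_1$ with $h(x) = \frac{\mu}{2}\|x\|_2^2$, so that $h'(x) = \mu x$ and $(h^*)'(z) = z/\mu$; a weighted average of the subgradients $\bar{y}_k$ provides a dual candidate in $C$, and the duality gap serves as a certificate:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, mu = 4, 3, 1.0
A = rng.standard_normal((n, d))
c = rng.standard_normal(n)

# primal F(x) = mu/2 ||x||^2 + ||Ax - c||_1; dual variable y lives in
# C = [-1,1]^n with f*(y) = c^T y, so G(y) = -c^T y - ||A^T y||^2/(2 mu)
def F(x):
    return 0.5 * mu * x @ x + np.abs(A @ x - c).sum()

def G(y):
    return -c @ y - (A.T @ y) @ (A.T @ y) / (2 * mu)

x = np.zeros(d)
y_avg = np.zeros(n)          # weighted average of the subgradients
best_gap = np.inf
for t in range(1, 20001):
    rho = 2.0 / (t + 1)
    y_bar = np.sign(A @ x - c)                       # subgradient of f at Ax
    x = (1 - rho) * x - (rho / mu) * (A.T @ y_bar)   # mirror descent, Eq. (4)
    y_avg = (1 - rho) * y_avg + rho * y_bar          # dual candidate in C
    best_gap = min(best_gap, F(x) - G(y_avg))

assert best_gap >= -1e-10    # weak duality
assert best_gap < 0.5        # the gap certifies near-optimality
```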
The following proposition proves the convergence of mirror descent in the strongly convex case with rate $O(1/t)$ (previous results were considering the convex case, with convergence rate $O(1/\sqrt{t})$ [24, 4]).
Proposition 1 (Convergence of mirror descent in the strongly convex case)
Assume that (a) $f$ is Lipschitz-continuous and finite on $\mathbb{R}^n$, with $C$ the domain of $f^*$ and $\|y\|_2 \leq B$ for all $y \in C$, (b) $h$ is essentially smooth and $\mu$-strongly convex. Consider $\rho_t = 2/(t+1)$ and $x_0 = (h^*)'(-A^\top y_0)$ for some $y_0 \in C$. Denoting by $x_*$ the unique minimizer of $F$, after $t$ iterations of the mirror descent recursion of Eq. (4), we have:
$$\min_{0 \leq k \leq t-1} F(x_k) - F(x_*) \;\leq\; \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (t+1)}, \qquad F\Big( \sum_{k=1}^{t} \tfrac{2k}{t(t+1)} \, x_{k-1} \Big) - F(x_*) \;\leq\; \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (t+1)}, \qquad D(x_*, x_t) \;\leq\; \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (t+1)},$$
where $\|A\|_{\mathrm{op}}$ denotes the largest singular value of $A$.
Proof  We follow the proof of [4] and adapt it to the strongly convex case. Denote by $g_t = A^\top \bar{y}_t + h'(x_{t-1})$ the subgradient of $F$ used at iteration $t$, so that the recursion in Eq. (4) reads $h'(x_t) = h'(x_{t-1}) - \rho_t g_t$. We have, by reordering terms and using the definition of the Bregman divergence:
$$g_t^\top (x_{t-1} - x_*) = \frac{1}{\rho_t} \big( h'(x_{t-1}) - h'(x_t) \big)^\top (x_{t-1} - x_*) = \frac{1}{\rho_t} \big[ D(x_*, x_{t-1}) - D(x_*, x_t) + D(x_{t-1}, x_t) \big]. \qquad (5)$$
In order to use Eq. (5), we first consider the following bound (obtained by convexity of $f$ and the definition of $D$):
$$F(x_*) \geq F(x_{t-1}) + g_t^\top (x_* - x_{t-1}) + D(x_*, x_{t-1}),$$
which may be rewritten as:
$$F(x_{t-1}) - F(x_*) \leq g_t^\top (x_{t-1} - x_*) - D(x_*, x_{t-1}),$$
which implies, combined with Eq. (5),
$$F(x_{t-1}) - F(x_*) \leq \Big( \frac{1}{\rho_t} - 1 \Big) D(x_*, x_{t-1}) - \frac{1}{\rho_t} D(x_*, x_t) + \frac{1}{\rho_t} D(x_{t-1}, x_t). \qquad (6)$$
Moreover, by definition of the recursion, $h'(x_t) - h'(x_{t-1}) = -\rho_t g_t$. The function $h$ is $\mu$-strongly convex, and its Fenchel conjugate $h^*$ is thus $(1/\mu)$-smooth. This implies that $D(x_{t-1}, x_t) = D_{h^*}\big( h'(x_t), h'(x_{t-1}) \big) \leq \frac{1}{2\mu} \|h'(x_t) - h'(x_{t-1})\|_2^2 = \frac{\rho_t^2}{2\mu} \|g_t\|_2^2$. Since $\bar{y}_t \in C$, $\|A^\top \bar{y}_t\|_2 \leq \|A\|_{\mathrm{op}} B$, where $\|A\|_{\mathrm{op}}$ is the operator norm of $A$. Moreover, $\|h'(x_{t-1})\|_2 \leq \|A\|_{\mathrm{op}} B$ (because $-h'(x_{t-1})$ is $A^\top$ applied to a convex combination of such elements $\bar{y}_k$ and $y_0$), so that $\|g_t\|_2 \leq 2 \|A\|_{\mathrm{op}} B$.
Overall, plugging the bound on $D(x_{t-1}, x_t)$ into Eq. (6), this implies that
$$F(x_{t-1}) - F(x_*) \leq \Big( \frac{1}{\rho_t} - 1 \Big) D(x_*, x_{t-1}) - \frac{1}{\rho_t} D(x_*, x_t) + \frac{2 \rho_t \|A\|_{\mathrm{op}}^2 B^2}{\mu},$$
that is, with $\rho_t = 2/(t+1)$ and after multiplying by $t$,
$$t \big[ F(x_{t-1}) - F(x_*) \big] \leq \frac{t(t-1)}{2} D(x_*, x_{t-1}) - \frac{t(t+1)}{2} D(x_*, x_t) + \frac{4t}{t+1} \cdot \frac{\|A\|_{\mathrm{op}}^2 B^2}{\mu}.$$
Thus, by summing from $t = 1$ to $t = T$, the Bregman terms telescope and we obtain
$$\sum_{t=1}^{T} t \big[ F(x_{t-1}) - F(x_*) \big] + \frac{T(T+1)}{2} D(x_*, x_T) \leq \frac{4 T \|A\|_{\mathrm{op}}^2 B^2}{\mu},$$
that is, $D(x_*, x_T) \leq \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (T+1)}$.
This implies that the iterates converge (recall that $D(x_*, x_T) \geq \frac{\mu}{2} \|x_T - x_*\|_2^2$). Moreover, using the convexity of $F$,
$$F\Big( \sum_{t=1}^{T} \tfrac{2t}{T(T+1)} \, x_{t-1} \Big) - F(x_*) \leq \sum_{t=1}^{T} \tfrac{2t}{T(T+1)} \big[ F(x_{t-1}) - F(x_*) \big] \leq \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (T+1)},$$
i.e., the objective function at an averaged iterate converges, and
$$\min_{0 \leq t \leq T-1} F(x_t) - F(x_*) \leq \frac{2}{T(T+1)} \sum_{t=1}^{T} t \big[ F(x_{t-1}) - F(x_*) \big] \leq \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (T+1)},$$
i.e., one of the iterates has an objective value that converges.
Averaging
Note that with the step size $\rho_t = 2/(t+1)$, we have
$$h'(x_t) = \Big( 1 - \frac{2}{t+1} \Big) h'(x_{t-1}) - \frac{2}{t+1} A^\top \bar{y}_t,$$
which implies
$$\frac{t(t+1)}{2} h'(x_t) = \frac{(t-1)t}{2} h'(x_{t-1}) - t \, A^\top \bar{y}_t.$$
By summing these equalities, we obtain $\frac{t(t+1)}{2} h'(x_t) = -\sum_{k=1}^{t} k \, A^\top \bar{y}_k$, i.e.,
$$h'(x_t) = - A^\top \sum_{k=1}^{t} \frac{2k}{t(t+1)} \, \bar{y}_k,$$
that is, $-h'(x_t)$ is $A^\top$ applied to a weighted average of subgradients (with more weight on later iterates).
For $\rho_t = 1/t$, with the same techniques, we would obtain a convergence rate proportional to $(1 + \log t)/t$ for the average iterate $\frac{1}{t} \sum_{k=1}^{t} x_{k-1}$, thus with an additional logarithmic factor (see a similar situation in the stochastic case in [20]). We would then have $h'(x_t) = -A^\top \frac{1}{t} \sum_{k=1}^{t} \bar{y}_k$, and this is exactly a form of dual averaging method [27], which also comes with primal-dual guarantees.
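The weighted-average identity above is a purely algebraic consequence of the recursion and can be checked for an arbitrary sequence of vectors $\bar{y}_k$ (an added sanity check, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, T = 4, 3, 30
A = rng.standard_normal((n, d))
ybars = rng.standard_normal((T + 1, n))   # arbitrary "subgradients" ybar_1..T

theta = rng.standard_normal(d)            # theta_t stands for h'(x_t)
thetas = []
for t in range(1, T + 1):
    rho = 2.0 / (t + 1)
    theta = (1 - rho) * theta - rho * (A.T @ ybars[t])
    thetas.append(theta)

# closed form: h'(x_t) = -A^T sum_k 2k/(t(t+1)) ybar_k  (theta_0 drops out,
# since rho_1 = 1 kills the initial condition)
for t in range(1, T + 1):
    w = 2 * np.arange(1, t + 1) / (t * (t + 1))
    avg = (w[:, None] * ybars[1:t + 1]).sum(axis=0)
    assert np.allclose(thetas[t - 1], -(A.T @ avg))
```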
Generalization to nonsmooth $h$
The previous result does not require $h$ to be essentially smooth, i.e., it may be applied to $h(x) = \frac{\mu}{2}\|x\|_2^2 + I_K(x)$, where $K$ is a closed convex set strictly included in $\mathbb{R}^d$. In the mirror descent recursion, $h'(x_{t-1})$ is then not uniquely defined, and there may be multiple choices for it. If we choose, for $h'(x_{t-1})$ at iteration $t$, the subgradient of $h$ obtained at the previous iteration, i.e., such that $h'(x_{t-1}) = (1 - \rho_{t-1}) \, h'(x_{t-2}) - \rho_{t-1} A^\top \bar{y}_{t-1}$, then the proof of Prop. 1 above still holds.
Note that when $h(x) = \frac{\mu}{2}\|x\|_2^2 + I_K(x)$, the algorithm above is not equivalent to classical projected gradient descent. Indeed, the classical algorithm has the iteration
$$x_t = \Pi_K \Big[ (1 - \rho_t) \, x_{t-1} - \frac{\rho_t}{\mu} A^\top \bar{y}_t \Big],$$
where $\Pi_K$ denotes the orthogonal projection onto $K$, and corresponds to the choice $h'(x_{t-1}) = \mu x_{t-1}$ (i.e., a zero subgradient for $I_K$) in the mirror descent recursion, which, when $x_{t-1}$ is on the boundary of $K$, is not the choice that we need for the equivalence in Section 4.
4 Conditional gradient method and extensions
In this section, we first review the classical conditional gradient algorithm, which corresponds to the extra assumption that $f^*$ is linear on its domain.
Conditional gradient method
Given a maximization problem of the following form (i.e., where $f^*$ is linear on its domain, so that it may be taken equal to zero by a simple change of variable):
$$\max_{y \in C} \; - h^*(-A^\top y),$$
the conditional gradient algorithm consists in the following iteration (note that $A (h^*)'(-A^\top y_{t-1})$ below is the gradient of the objective function and that we are maximizing the first-order Taylor expansion to obtain a candidate $\bar{y}_t$ towards which we make a small step):
$$\bar{y}_t \in \arg\max_{y \in C} \; y^\top A \, (h^*)'(-A^\top y_{t-1}), \qquad y_t = (1 - \rho_t) \, y_{t-1} + \rho_t \bar{y}_t.$$
It corresponds to a linearization of the objective and its maximization over the bounded convex set $C$. As we show later, the choice of $\rho_t$ may be done in different ways, through a fixed step size or by (approximate) line search.
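As an added illustration (not from the original text), take $f^* = 0$ on $C = [-1,1]^n$ and, as a hypothetical instance, $h^*(z) = \|z + b\|_2^2 / (2\mu)$, so that the objective is $G(y) = -\|b - A^\top y\|_2^2/(2\mu)$; the linear oracle over the box is a sign vector, and the Frank-Wolfe gap $\max_{s \in C} (s - y)^\top G'(y)$ certifies suboptimality:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, mu = 6, 4, 1.0
A = rng.standard_normal((n, d))
b = rng.standard_normal(d)

# gradient of the smooth dual objective G(y) = -||b - A^T y||^2 / (2 mu)
def grad_G(y):
    return A @ (b - A.T @ y) / mu

y = np.zeros(n)
best_gap = np.inf
for t in range(1, 20001):
    g = grad_G(y)
    y_bar = np.sign(g)                    # maximizer of the linearization on C
    fw_gap = (y_bar - y) @ g              # Frank-Wolfe optimality gap
    best_gap = min(best_gap, fw_gap)
    y += (2.0 / (t + 1)) * (y_bar - y)    # small step towards the candidate

assert best_gap >= -1e-12   # the gap is always nonnegative
assert best_gap < 0.5       # and becomes small after many cheap iterations
```

Each iteration only requires the linear oracle (here, a sign pattern), never a projection onto $C$.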
Generalization
Following [10], the conditional gradient method can be generalized to problems of the form
$$\max_{y \in C} \; - f^*(y) - h^*(-A^\top y),$$
with the following iteration:
$$\bar{y}_t \in \arg\max_{y \in C} \; y^\top A x_{t-1} - f^*(y), \quad \text{with } x_{t-1} = (h^*)'(-A^\top y_{t-1}), \qquad y_t = (1 - \rho_t) \, y_{t-1} + \rho_t \bar{y}_t. \qquad (7)$$
The previous algorithm may be interpreted as follows: (a) perform a first-order Taylor expansion of the smooth part $y \mapsto -h^*(-A^\top y)$, while leaving the other part $-f^*$ intact, (b) maximize the approximation over $C$, and (c) perform a small step towards the maximizer. Note the similarity (and dissimilarity) with proximal methods, which would add a proximal term proportional to $\|y - y_{t-1}\|_2^2$, leading to faster convergence, but with the extra requirement of solving the proximal step [26, 5].
Note that here $y_t$ may be expressed as a convex combination of $y_0$ and all $\bar{y}_k$, $k \leq t$:
$$y_t = \Big( \prod_{j=1}^{t} (1 - \rho_j) \Big) y_0 + \sum_{k=1}^{t} \rho_k \Big( \prod_{j=k+1}^{t} (1 - \rho_j) \Big) \bar{y}_k,$$
and that when we choose $\rho_t = 2/(t+1)$, it simplifies to:
$$y_t = \sum_{k=1}^{t} \frac{2k}{t(t+1)} \, \bar{y}_k.$$
When $h$ is essentially smooth (and thus $h^*$ essentially strictly convex), the iteration in Eq. (7) can be reformulated in terms of $x_t = (h^*)'(-A^\top y_t)$ as follows:
$$h'(x_t) = -A^\top y_t = (1 - \rho_t) \, h'(x_{t-1}) - \rho_t A^\top \bar{y}_t, \quad \text{with } \bar{y}_t \in \arg\max_{y \in C} \; y^\top A x_{t-1} - f^*(y) \subseteq \partial f(A x_{t-1}),$$
which is exactly the mirror descent algorithm described in Eq. (4). This leads to the following proposition:
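This equivalence can be observed numerically (an added illustration, not from the original text): with $f(z) = \|z - c\|_1$, so that $f^*(y) = c^\top y$ is linear on $C = [-1,1]^n$, and $h(x) = \frac{\mu}{2}\|x\|_2^2$, we run Eq. (7) on the dual and Eq. (4) on the primal, with the same subgradient choice, and compare iterates through $x_t = (h^*)'(-A^\top y_t)$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, mu, T = 5, 3, 2.0, 200
A = rng.standard_normal((n, d))
c = rng.standard_normal(n)

# f(z) = ||z - c||_1, hence f*(y) = c^T y on C = [-1,1]^n (linear on its domain)
# h(x) = mu/2 ||x||^2, hence h'(x) = mu x and (h*)'(z) = z / mu

y = np.clip(rng.standard_normal(n), -1.0, 1.0)   # y_0 in C
x_md = -(A.T @ y) / mu                           # matching primal initialization

for t in range(1, T + 1):
    rho = 2.0 / (t + 1)
    # generalized conditional gradient on the dual, Eq. (7)
    x_cg = -(A.T @ y) / mu                       # x_{t-1} = (h*)'(-A^T y_{t-1})
    y_bar = np.sign(A @ x_cg - c)                # argmax_{y in C} y^T A x - f*(y)
    y = (1 - rho) * y + rho * y_bar
    # mirror descent on the primal, Eq. (4), with the same subgradient choice
    x_md = (1 - rho) * x_md - (rho / mu) * (A.T @ np.sign(A @ x_md - c))
    # both produce the same primal iterate x_t = (h*)'(-A^T y_t)
    assert np.allclose(x_md, -(A.T @ y) / mu, atol=1e-8)
```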
Proposition 2 (Equivalence between mirror descent and generalized conditional gradient)
Assume that $h$ is essentially smooth. Then the generalized conditional gradient recursion of Eq. (7) applied to the dual problem in Eq. (2) is equivalent to the mirror descent recursion of Eq. (4) applied to the primal problem in Eq. (1), through the relationship $x_t = (h^*)'(-A^\top y_t)$.
When $h$ is not essentially smooth, then with a particular choice of subgradient (see end of Section 3), the two algorithms are also equivalent. We now provide convergence proofs for the two versions (with adaptive and non-adaptive step sizes); similar rates may be obtained without the boundedness assumptions [10], but our results provide explicit constants and primal-dual guarantees. We first have the following convergence proof for the generalized conditional gradient with no line search (the proof of dual convergence uses standard arguments from [13, 15], while the convergence of gaps is due to [19] for the regular conditional gradient):
Proposition 3 (Convergence of extended conditional gradient, no line search)
Assume that (a) $f$ is Lipschitz-continuous and finite on $\mathbb{R}^n$, with $C$ the domain of $f^*$ and $\|y\|_2 \leq B$ for all $y \in C$, (b) $h$ is $\mu$-strongly convex. Consider $\rho_t = 2/(t+1)$ and any $y_0 \in C$. Denoting by $y_*$ any maximizer of $G$ on $C$, after $t$ iterations of the generalized conditional gradient recursion of Eq. (7), we have:
$$G(y_*) - G(y_t) \;\leq\; \frac{8 \|A\|_{\mathrm{op}}^2 B^2}{\mu (t+1)},$$
where $\|A\|_{\mathrm{op}}$ is the operator norm of $A$; moreover, the best duality gap among the iterates, $\min_{0 \leq k \leq t} \operatorname{gap}(x_k, y_k)$, converges to zero at rate $O(1/t)$ [19].
Proof  We have (using the convexity of $f^*$ and the smoothness of $h^*$):