Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning

02/18/2014 ∙ by Julien Mairal, et al. ∙ Inria

Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function. These upper bounds are tight at the current estimate, and each iteration monotonically drives the objective function downhill. Such a simple principle is widely applicable and has been very popular in various scientific fields, especially in signal processing and statistics. In this paper, we propose an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning. We present convergence guarantees for non-convex and convex optimization when the upper bounds approximate the objective up to a smooth error; we call such upper bounds "first-order surrogate functions". More precisely, we study asymptotic stationary point guarantees for non-convex problems, and for convex ones, we provide convergence rates for the expected objective function value. We apply our scheme to composite optimization and obtain a new incremental proximal gradient algorithm with linear convergence rate for strongly convex functions. In our experiments, we show that our method is competitive with the state of the art for solving machine learning problems such as logistic regression when the number of training samples is large enough, and we demonstrate its usefulness for sparse estimation with non-convex penalties.


1 Introduction

The principle of successively minimizing upper bounds of the objective function is often called majorization-minimization [35] or successive upper-bound minimization [48]. Each upper bound is locally tight at the current estimate, and each minimization step decreases the value of the objective function. Even though this principle does not provide any theoretical guarantee about the quality of the returned solution, it has been very popular and widely used because of its simplicity. Various existing approaches can indeed be interpreted from the majorization-minimization point of view. This is the case of many gradient-based or proximal methods [3, 14, 28, 45, 54], expectation-maximization (EM) algorithms in statistics [dempster, neal], difference-of-convex (DC) programming [horst], boosting [collins, pietra], some variational Bayes techniques used in machine learning [wainwright2], and the mean-shift algorithm for finding modes of a distribution [tomasi]. Majorizing surrogates have also been used successfully in the signal processing literature on sparse estimation [candes4, daubechies, gasso], linear inverse problems in image processing [ahn, erdogan2], and matrix factorization [lee2, mairal7].

In this paper, we are interested in making the majorization-minimization principle scalable for minimizing a large sum of functions:

$$\min_{\theta \in \Theta} \Big[ f(\theta) \triangleq \frac{1}{n} \sum_{i=1}^n f_i(\theta) \Big], \tag{1}$$

where the functions $f_i: \mathbb{R}^p \to \mathbb{R}$ are continuous, and $\Theta$ is a convex subset of $\mathbb{R}^p$. When $f$ is non-convex, exactly solving (1) is intractable in general, and when $f$ is also non-smooth, finding a stationary point of (1) can be difficult. The problem above when $n$ is large can be motivated by machine learning applications, where $\theta$ represents some model parameters and each function $f_i$ measures the adequacy of the parameters $\theta$ to an observed data point indexed by $i$. In this context, minimizing $f$ amounts to finding parameters $\theta$ that explain well some observed data. In the last few years, stochastic optimization techniques have become very popular in machine learning for their empirical ability to deal with a large number of training points [bottou2, duchi2, shalev2, xiao]. Even though these methods have inherent sublinear convergence rates for convex and strongly convex problems [lan, nemirovski], they typically have a cheap computational cost per iteration, enabling them to efficiently find an approximate solution. Recently, incremental algorithms have also been proposed for minimizing finite sums of functions [blatt, defazio2, defazio, schmidt2, shalev2]. At the price of a higher memory cost than stochastic algorithms, these incremental methods enjoy faster convergence rates, while also having a cheap per-iteration computational cost.

Our paper follows this approach: in order to exploit the particular structure of problem (1), we propose an incremental scheme whose cost per iteration is independent of $n$, as soon as the upper bounds of the objective are appropriately chosen. We call the resulting scheme "MISO" (Minimization by Incremental Surrogate Optimization). We present convergence results when the upper bounds are chosen among the class of "first-order surrogate functions", which approximate the objective function up to a smooth error, that is, an error that is differentiable with a Lipschitz continuous gradient. For non-convex problems, we obtain almost sure convergence and asymptotic stationary point guarantees. In addition, when assuming the surrogates to be strongly convex, we provide convergence rates for the expected value of the objective function. Remarkably, the convergence rate of MISO is linear for minimizing strongly convex composite objective functions, a property shared with two other incremental algorithms for smooth and composite convex optimization: the stochastic average gradient method (SAG) of Schmidt, Le Roux, and Bach [schmidt2], and the stochastic dual coordinate ascent method (SDCA) of Shalev-Shwartz and Zhang [shalev2]. Our scheme MISO is inspired in part by these two works, but yields different update rules than SAG or SDCA, and is also appropriate for non-convex optimization problems.
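To preview the incremental idea in its simplest setting, the following Python sketch (our own notation and toy data, not the general algorithm analyzed in this paper) runs such a scheme for smooth unconstrained problems: each $f_i$ keeps a quadratic majorizing surrogate anchored at its own point, and the minimizer of the average surrogate is a running average of per-function gradient steps, so one iteration costs $O(p)$, independently of $n$.

```python
import numpy as np

def incremental_mm(grads, theta0, L, n_iters=100000, seed=0):
    # Each function f_i keeps a quadratic majorizing surrogate anchored
    # at a point z_i; minimizing the average surrogate amounts to
    # averaging the per-function gradient steps z_i - grad f_i(z_i) / L.
    # A running sum makes one iteration cost O(p), independent of n.
    rng = np.random.default_rng(seed)
    n = len(grads)
    steps = np.array([theta0 - g(theta0) / L for g in grads])
    steps_sum = steps.sum(axis=0)
    theta = steps_sum / n
    for _ in range(n_iters):
        i = rng.integers(n)                  # pick one function at random
        new_step = theta - grads[i](theta) / L
        steps_sum += new_step - steps[i]     # refresh surrogate i only
        steps[i] = new_step
        theta = steps_sum / n                # minimizer of the average surrogate
    return theta

# Toy problem (ours): f(theta) = (1/n) sum_i 0.5 * (x_i' theta - y_i)^2.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))
y = rng.standard_normal(10)
grads = [lambda th, x=x, yi=yi: (x @ th - yi) * x for x, yi in zip(X, y)]
L = max(float(np.sum(x ** 2)) for x in X)    # each f_i is ||x_i||^2-smooth
theta = incremental_mm(grads, np.zeros(3), L)
```

On this least-squares instance the scheme converges to the same solution as a direct solver, while touching a single data point per iteration.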

In the experimental section of this paper, we show that MISO can be useful for solving large-scale machine learning problems, and that it matches cutting-edge solvers for large-scale logistic regression [beck, schmidt2]. Then, we show that our approach provides an effective incremental DC programming algorithm, which we apply to sparse estimation problems with non-convex penalties [candes4].

The paper is organized as follows: Section 2 introduces the majorization-minimization principle with first-order surrogate functions. Section 3 is devoted to our incremental scheme MISO. Section 4 presents some numerical experiments, and Section 5 concludes the paper. Some basic definitions are given in Appendix A.

2 Majorization-minimization with first-order surrogate functions

In this section, we present the generic majorization-minimization scheme for minimizing a function $f$ without exploiting its structure, that is, without using the fact that $f$ is a sum of functions. We describe the procedure in Algorithm 1 and illustrate its principle in Figure 1. At iteration $n$, the estimate $\theta_n$ is obtained by minimizing a surrogate function $g_n$ of $f$. When $g_n$ uniformly majorizes $f$ and when $g_n(\theta_{n-1}) = f(\theta_{n-1})$, it is clear that the objective function value monotonically decreases.

0:  $\theta_0 \in \Theta$ (initial estimate); $N$ (number of iterations).
1:  for $n = 1, \ldots, N$ do
2:     Compute a surrogate function $g_n$ of $f$ near $\theta_{n-1}$;
3:     Minimize the surrogate and update the solution: $\theta_n \in \arg\min_{\theta \in \Theta} g_n(\theta)$;
4:  end for
5:  return $\theta_N$ (final estimate);
Algorithm 1 Basic majorization-minimization scheme.
Figure 1: Illustration of the basic majorization-minimization principle. We compute a surrogate $g_n$ of $f$ near the current estimate $\theta_{n-1}$. The new estimate $\theta_n$ is a minimizer of $g_n$. The function $h_n \triangleq g_n - f$ is the approximation error that is made when replacing $f$ by $g_n$.
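To make the procedure concrete, here is a minimal Python sketch of this basic loop (toy objective and all names are ours) with the quadratic majorizing surrogate of an $L$-smooth function, whose exact minimizer is a gradient step of size $1/L$:

```python
import numpy as np

def mm_quadratic(f, grad_f, theta0, L, n_iters=100):
    # Majorization-minimization with the quadratic majorizer
    # g_n(theta) = f(prev) + grad_f(prev)'(theta - prev) + (L/2)||theta - prev||^2,
    # whose exact minimizer is a gradient step of size 1/L.
    theta = theta0.copy()
    values = [f(theta)]
    for _ in range(n_iters):
        theta = theta - grad_f(theta) / L   # argmin of the surrogate g_n
        values.append(f(theta))
    return theta, values

# Toy smooth objective (ours): f(theta) = 0.5 * ||A theta - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
f = lambda th: 0.5 * np.sum((A @ th - b) ** 2)
grad_f = lambda th: A.T @ (A @ th - b)
L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad_f

theta, values = mm_quadratic(f, grad_f, np.zeros(5), L, n_iters=500)
# Majorization guarantees a monotone decrease of the objective.
assert all(v2 <= v1 + 1e-10 for v1, v2 in zip(values, values[1:]))
```

Since each surrogate is tight at the current estimate and majorizes $f$, every iterate decreases the objective, which the final assertion checks numerically.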

For this approach to be effective, intuition tells us that we need functions $g_n$ that are easy to minimize and that approximate well the objective $f$. Therefore, we measure the quality of the approximation through the smoothness of the error $h_n \triangleq g_n - f$, which is a key quantity arising in the convergence analysis. Specifically, we require $h_n$ to be $L$-smooth for some constant $L > 0$ in the following sense:

[$L$-smooth functions] A function $f: \mathbb{R}^p \to \mathbb{R}$ is called $L$-smooth when it is differentiable and when its gradient $\nabla f$ is $L$-Lipschitz continuous.

With this definition in hand, we now introduce the class of “first-order surrogate functions”, which will be shown to have good enough properties for analyzing the convergence of Algorithm 1 and other variants.

[First-order surrogate functions] A function $g: \mathbb{R}^p \to \mathbb{R}$ is a first-order surrogate function of $f$ near $\kappa$ in $\Theta$ when

  • $g(\theta') \geq f(\theta')$ for all minimizers $\theta'$ of $g$ over $\Theta$. When the more general condition $g \geq f$ on $\Theta$ holds, we say that $g$ is a majorizing surrogate;

  • the approximation error $h \triangleq g - f$ is $L$-smooth, $h(\kappa) = 0$, and $\nabla h(\kappa) = 0$.

We denote by $\mathcal{S}_L(f, \kappa)$ the set of first-order surrogate functions and by $\mathcal{S}_{L,\rho}(f, \kappa)$ the subset of $\rho$-strongly convex surrogates.

First-order surrogates are interesting because their approximation error, the difference between the surrogate and the objective, can be easily controlled. This is formally stated in the next lemma, which is a building block of our analysis: [Basic properties of first-order surrogate functions] Let $g$ be a surrogate function in $\mathcal{S}_L(f, \kappa)$ for some $\kappa$ in $\Theta$. Define the approximation error $h \triangleq g - f$, and let $\theta'$ be a minimizer of $g$ over $\Theta$. Then, for all $\theta$ in $\Theta$,

  • $|h(\theta)| \leq \frac{L}{2} \|\theta - \kappa\|_2^2$;

  • $f(\theta') \leq f(\theta) + \frac{L}{2} \|\theta - \kappa\|_2^2$.

Assume that $g$ is $\rho$-strongly convex, i.e., $g$ is in $\mathcal{S}_{L,\rho}(f, \kappa)$. Then, for all $\theta$ in $\Theta$,

  • $f(\theta') + \frac{\rho}{2} \|\theta' - \theta\|_2^2 \leq f(\theta) + \frac{L}{2} \|\theta - \kappa\|_2^2$.

The first inequality is a direct application of a classical result (Lemma 1.2.3 of [nesterov4]) on quadratic upper bounds for $L$-smooth functions, when noticing that $h(\kappa) = 0$ and $\nabla h(\kappa) = 0$. Then, for all $\theta$ in $\Theta$, we have $f(\theta') \leq g(\theta') \leq g(\theta) = f(\theta) + h(\theta)$, and we obtain the second inequality from the first one.

When $g$ is $\rho$-strongly convex, we use the following classical lower bound (see [nesterov]):

$$g(\theta) \geq g(\theta') + \frac{\rho}{2} \|\theta' - \theta\|_2^2.$$

Since $f(\theta') \leq g(\theta')$ by Definition 1 and $g(\theta) = f(\theta) + h(\theta)$, the third inequality follows from the first one.

We now proceed with a convergence analysis including four main results regarding Algorithm 1 with first-order surrogate functions. More precisely, we show in Section 2.1 that, under simple assumptions, the sequence of iterates $(\theta_n)_{n \geq 0}$ asymptotically satisfies a stationary point condition. Then, we present a similar result with relaxed assumptions on the surrogates $g_n$ when $f$ is a composition of two functions, which occurs in practical situations as shown in Section 2.3. Finally, we present non-asymptotic convergence rates when $f$ is convex in Section 2.2. By adapting convergence proofs of proximal gradient methods [nesterov] to our more general setting, we recover classical sublinear rates and linear convergence rates for strongly convex problems.

2.1 Non-convex convergence analysis

For general non-convex problems, proving convergence to a global (or local) minimum is impossible in general, and classical analysis studies instead asymptotic stationary point conditions (see, e.g., [bertsekas]). To do so, we make the following mild assumption when $f$ is non-convex:

(A) $f$ is bounded below and for all $\theta, \theta'$ in $\Theta$, the directional derivative $\nabla f(\theta, \theta' - \theta)$ of $f$ at $\theta$ in the direction $\theta' - \theta$ exists.

The definitions of directional derivatives and stationary points are provided in Appendix A. A necessary first-order condition for $\theta$ to be a local minimum of $f$ is to have $\nabla f(\theta, \theta' - \theta) \geq 0$ for all $\theta'$ in $\Theta$ (see, e.g., [borwein]). In other words, there is no feasible descent direction $\theta' - \theta$ and $\theta$ is a stationary point. Thus, we consider the following condition for assessing the quality of a sequence $(\theta_n)_{n \geq 0}$ for non-convex problems: [Asymptotic stationary point] Under assumption (A), a sequence $(\theta_n)_{n \geq 0}$ satisfies the asymptotic stationary point condition if

$$\liminf_{n \to +\infty} \inf_{\theta \in \Theta} \frac{\nabla f(\theta_n, \theta - \theta_n)}{\|\theta - \theta_n\|_2} \geq 0. \tag{2}$$

Note that if $f$ is differentiable on $\mathbb{R}^p$ and $\Theta = \mathbb{R}^p$, then $\nabla f(\theta_n, \theta - \theta_n) = \nabla f(\theta_n)^\top (\theta - \theta_n)$, and the condition (2) implies that the sequence of gradients $(\nabla f(\theta_n))_{n \geq 0}$ converges to zero. Thus, we recover the classical definition of critical points for the smooth unconstrained case. We now give a first convergence result about Algorithm 1.

[Non-convex analysis for Algorithm 1] Assume that (A) holds and that the surrogates $g_n$ from Algorithm 1 are in $\mathcal{S}_L(f, \theta_{n-1})$ and are either majorizing $f$ or strongly convex. Then, $(f(\theta_n))_{n \geq 0}$ monotonically decreases, and $(\theta_n)_{n \geq 0}$ satisfies the asymptotic stationary point condition. The fact that $(f(\theta_n))_{n \geq 0}$ is non-increasing and convergent because bounded below is clear: for all $n \geq 1$, $f(\theta_n) \leq g_n(\theta_n) \leq g_n(\theta_{n-1}) = f(\theta_{n-1})$, where the first inequality and the last equality are obtained from Definition 1. The second inequality comes from the definition of $\theta_n$.

Let us now denote by $f^\star$ the limit of the sequence $(f(\theta_n))_{n \geq 0}$ and by $h_n \triangleq g_n - f$ the approximation error function at iteration $n$, which is $L$-smooth by Definition 1 and such that $h_n(\theta_{n-1}) = 0$. Then, $h_n(\theta_n) = g_n(\theta_n) - f(\theta_n) \leq f(\theta_{n-1}) - f(\theta_n)$, and

$$\sum_{n=1}^{\infty} h_n(\theta_n) \leq f(\theta_0) - f^\star < +\infty.$$

Thus, the non-negative sequence $(h_n(\theta_n))_{n \geq 1}$ necessarily converges to zero. Then, we have two possibilities (according to the assumptions made in the proposition).

  • If the functions $g_n$ are majorizing $f$, we define $h_n \triangleq g_n - f \geq 0$, and we use the following classical inequality for non-negative $L$-smooth functions [nesterov4]:

$$\|\nabla h_n(\theta_n)\|_2^2 \leq 2 L \, h_n(\theta_n).$$

    Therefore, we may use the fact that $h_n(\theta_n)$ converges to zero, and we obtain that $\|\nabla h_n(\theta_n)\|_2$ converges to zero as well.

  • If instead the functions $g_n$ are $\rho$-strongly convex, the last inequality of Lemma 1 with $\theta = \kappa = \theta_{n-1}$ gives us

$$f(\theta_n) + \frac{\rho}{2} \|\theta_n - \theta_{n-1}\|_2^2 \leq f(\theta_{n-1}).$$

    By summing over $n$, we obtain that $\|\theta_n - \theta_{n-1}\|_2$ converges to zero, and

$$\|\nabla h_n(\theta_n)\|_2 = \|\nabla h_n(\theta_n) - \nabla h_n(\theta_{n-1})\|_2 \leq L \|\theta_n - \theta_{n-1}\|_2,$$

    since $\nabla h_n(\theta_{n-1}) = 0$ according to Definition 1.

We now consider the directional derivative of $f$ at $\theta_n$ in a direction $\theta - \theta_n$, where $n \geq 1$ and $\theta$ is in $\Theta$:

$$\nabla f(\theta_n, \theta - \theta_n) = \nabla g_n(\theta_n, \theta - \theta_n) - \nabla h_n(\theta_n)^\top (\theta - \theta_n).$$

Note that $\theta_n$ minimizes $g_n$ on $\Theta$ and therefore $\nabla g_n(\theta_n, \theta - \theta_n) \geq 0$. Therefore,

$$\nabla f(\theta_n, \theta - \theta_n) \geq -\|\nabla h_n(\theta_n)\|_2 \, \|\theta - \theta_n\|_2$$

by the Cauchy-Schwarz inequality. By dividing by $\|\theta - \theta_n\|_2$, minimizing over $\theta$, and taking the infimum limit, we finally obtain

$$\liminf_{n \to +\infty} \inf_{\theta \in \Theta} \frac{\nabla f(\theta_n, \theta - \theta_n)}{\|\theta - \theta_n\|_2} \geq 0.$$

This proposition provides convergence guarantees for a large class of existing algorithms, including cases where $f$ is non-smooth. In the next proposition, we relax some of the assumptions for objective functions that are compositions $f = f' \circ e$, where $\circ$ is the composition operator. In other words, $f(\theta) = f'(e(\theta))$ for all $\theta$ in $\Theta$. [Non-convex analysis for Algorithm 1 - composition] Assume that (A) holds and that the function $f$ is a composition $f = f' \circ e$, where $e$ is $C$-Lipschitz continuous for some constant $C > 0$, and $f'$ is differentiable. Assume that the function $g_n$ in Algorithm 1 is defined as $g_n \triangleq g'_n \circ e$, where $g'_n$ is a majorizing surrogate of $f'$ in $\mathcal{S}_L(f', e(\theta_{n-1}))$. Then, the conclusions of Proposition 2.1 hold. We follow the same steps as the proof of Proposition 2.1. First, it is easy to show that $(f(\theta_n))_{n \geq 0}$ monotonically decreases and that $g_n(\theta_n) - f(\theta_n)$ converges to zero when $n$ grows to infinity. Note that since we have made the assumptions that $g'_n \geq f'$ and that $g'_n(e(\theta_{n-1})) = f'(e(\theta_{n-1}))$, the function $g_n - f$ can be written as $h_n \circ e$, where $h_n \triangleq g'_n - f'$ is $L$-smooth. Proceeding as in the proof of Proposition 2.1, we can show that $h_n(e(\theta_n))$ converges to zero.

Let us now fix and consider such that is in . We have

where

is a vector whose

-norm is bounded by a universal constant  because the function  is Lipschitz continuous. Since is -smooth, we also have

Plugging this simple relation with , for some and in , into the definition of the directional derivative , we obtain the relation

and since , and ,

In this proposition, $g_n$ is an upper bound of the composition $f' \circ e$, where the part $e$ is Lipschitz continuous but the error $g_n - f$ is not $L$-smooth. This extension of Proposition 2.1 is useful since it provides convergence results for classical approaches that will be described later in Section 2.3. Note that convergence results for non-convex problems are by nature weak, and our non-convex analysis does not provide any convergence rate. This is not the case when $f$ is convex, as shown in the next section.

2.2 Convex analysis

The next proposition is based on a proof technique from Nesterov [nesterov], which was originally designed for the proximal gradient method. By adapting it, we obtain the same convergence rates as in [nesterov].

[Convex analysis for $\mathcal{S}_L$] Assume that $f$ is convex, bounded below, and that there exists a constant $R > 0$ such that

$$\|\theta - \theta^\star\|_2 \leq R \quad \text{for all } \theta \in \Theta \text{ such that } f(\theta) \leq f(\theta_0), \tag{3}$$

where $\theta^\star$ is a minimizer of $f$ on $\Theta$. When the functions $g_n$ in Algorithm 1 are in $\mathcal{S}_L(f, \theta_{n-1})$, we have for all $n \geq 1$,

$$f(\theta_n) - f^\star \leq \frac{2 L R^2}{n+2},$$

where $f^\star \triangleq f(\theta^\star)$. Assume now that $f$ is $\mu$-strongly convex. Regardless of condition (3), we have for all $n \geq 1$,

$$f(\theta_n) - f^\star \leq \beta^n \left( f(\theta_0) - f^\star \right),$$

where $\beta \triangleq 1 - \frac{\mu}{4L}$ if $\mu \leq 2L$ or $\beta \triangleq \frac{L}{\mu}$ otherwise. We successively prove the two parts of the proposition.
Non-strongly convex case: 
Let us consider the function at iteration . By Lemma 1,

Then, following a similar proof technique as Nesterov in nesterov ,

(4)

where the minimization over is replaced by a minimization over the line segment . Since the sequence is monotonically decreasing we may use the bounded level set assumption and we obtain

(5)

To simplify, we introduce the notation , and we consider two cases:

  • first case: if , then the optimal value in (5) is  and we consequently have ;

  • second case: otherwise and . Thus, , where the second inequality comes from the convexity inequality for .

We now apply recursively the previous inequalities, starting with . If , we are in the first case and then ; Then, we will subsequently be in the second case for all and thus . Otherwise, if , we are always in the second case and , which is sufficient to obtain the first part of the proposition.

-strongly convex case: 
Let us now assume that is -strongly convex, and let us drop the bounded level sets assumption. The proof again follows nesterov for computing the convergence rate of proximal gradient methods. We start from (4). We use the strong convexity of  which implies that , and we obtain

At this point, it is easy to show that if , the previous binomial is minimized for , and if , then we have . This yields the desired result.

The result of Proposition 2.2 is interesting because it does not make any strong assumption about the surrogate functions, except the ones from Definition 1. The next proposition shows that slightly better rates can be obtained with additional strong convexity assumptions. [Convex analysis for ] Assume that is convex, bounded below, and let be a minimizer of  on . When the surrogates  of Algorithm 1 are in with , we have for all ,

where . When is -strongly convex, we have for all ,

As before, we successively prove the two parts of the proposition.

Non-strongly convex case: 
From Lemma 1 (with , , , ), we have for all ,

(6)

After summation,

where the first inequality comes from the inequalities $f(\theta_n) \leq f(\theta_k)$ for all $k \leq n$. This is sufficient to prove the first part. Note that proving convergence rates for first-order methods by finding telescopic sums is a classical technique (see, e.g., [beck]).

-strongly convex case: 
Let us now assume that is -strongly convex. The strong convexity implies that for all . Combined with (6), this yields

and thus

Even though the constants obtained in the rates of this proposition are slightly better than those of the previous one, the condition that the surrogates are in $\mathcal{S}_{L,\rho}$ with $\rho \geq L$ is much stronger than the simple assumption that they are in $\mathcal{S}_L$. It can indeed be shown that $f$ is necessarily strongly convex if $\rho > L$, and convex if $\rho = L$. In the next section, we give some examples where such a condition holds.

2.3 Examples of first-order surrogate functions

We now present practical first-order surrogate functions and links between Algorithm 1 and existing approaches. Even though our generic analysis does not always bring new results for each specific case, its main asset is to provide a unified theoretical treatment of all of them.

2.3.1 Lipschitz gradient surrogates

When $f$ is $L$-smooth, it is natural to consider the following surrogate:

$$g: \theta \mapsto f(\kappa) + \nabla f(\kappa)^\top (\theta - \kappa) + \frac{L}{2} \|\theta - \kappa\|_2^2.$$

The function $g$ is an upper bound of $f$, which is a classical result [nesterov4]. It is then easy to see that $g$ is $L$-strongly convex and $L$-smooth. As a consequence, the difference $g - f$ is $2L$-smooth (as a sum of two $L$-smooth functions), and thus $g$ is in $\mathcal{S}_{2L,L}(f, \kappa)$.

When $f$ is convex, it is also possible to show by using Lemma A that $g$ is in fact in $\mathcal{S}_{L,L}(f, \kappa)$, and when $f$ is $\mu$-strongly convex, $g$ is in $\mathcal{S}_{L-\mu,L}(f, \kappa)$. We remark that minimizing $g$ amounts to performing a gradient descent step: $\theta' \leftarrow \kappa - \frac{1}{L} \nabla f(\kappa)$.

2.3.2 Proximal gradient surrogates

Let us now consider a composite optimization problem, meaning that $f$ splits into two parts $f = f_1 + f_2$, where $f_1$ is $L$-smooth. Then, a natural surrogate of $f$ is the following function:

$$g: \theta \mapsto f_1(\kappa) + \nabla f_1(\kappa)^\top (\theta - \kappa) + \frac{L}{2} \|\theta - \kappa\|_2^2 + f_2(\theta).$$

The function $g$ majorizes $f$ and the approximation error $g - f$ is the same as in Section 2.3.1. Thus, $g$ is in $\mathcal{S}_{2L}(f, \kappa)$, or in $\mathcal{S}_{L}(f, \kappa)$ when $f_1$ is convex. Moreover,

  • when $f_2$ is convex, $g$ is in $\mathcal{S}_{2L,L}(f, \kappa)$. If $f_1$ is also convex, $g$ is in $\mathcal{S}_{L,L}(f, \kappa)$;

  • when $f_2$ is $\mu$-strongly convex, $g$ is in $\mathcal{S}_{2L,L+\mu}(f, \kappa)$. If $f_1$ is also convex, $g$ is in $\mathcal{S}_{L,L+\mu}(f, \kappa)$.

Minimizing $g$ amounts to performing one step of the proximal gradient algorithm [beck, nesterov, wright]. It is indeed easy to show that the minimizer $\theta'$ of $g$ (assuming it is unique) can be equivalently obtained as follows:

$$\theta' \in \arg\min_{\theta \in \Theta} \, \frac{1}{2} \left\| \theta - \Big( \kappa - \frac{1}{L} \nabla f_1(\kappa) \Big) \right\|_2^2 + \frac{1}{L} f_2(\theta),$$

which is often written under the form $\theta' = \mathrm{Prox}_{f_2/L}\!\left[\kappa - \frac{1}{L} \nabla f_1(\kappa)\right]$, where "Prox" is called the "proximal operator" [moreau]. In some cases, the proximal operator can be computed efficiently in closed form, for example when $f_2$ is the $\ell_1$-norm; it yields the iterative soft-thresholding algorithm for sparse estimation [daubechies]. For a review of proximal operators and their computation, we refer the reader to [bach8, combettes2005signal].
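For the $\ell_1$-norm case, the step above takes a few lines of Python; the soft-thresholding operator below is the standard closed form of the prox, and the toy lasso data and all names are ours:

```python
import numpy as np

def soft_threshold(v, tau):
    # Closed-form proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_step(theta, grad_f1, L, lam):
    # Minimizes the proximal gradient surrogate of f1 + lam * ||.||_1
    # around theta: gradient step on the smooth part f1, then prox of
    # the scaled l1 penalty (iterative soft-thresholding).
    return soft_threshold(theta - grad_f1(theta) / L, lam / L)

# Tiny lasso example (ours): minimize 0.5||A theta - b||^2 + lam||theta||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
grad_f1 = lambda th: A.T @ (A @ th - b)
L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad_f1
lam = 0.5
objective = lambda th: 0.5 * np.sum((A @ th - b) ** 2) + lam * np.sum(np.abs(th))
theta = np.zeros(8)
values = [objective(theta)]
for _ in range(300):
    theta = proximal_gradient_step(theta, grad_f1, L, lam)
    values.append(objective(theta))
```

Since each surrogate majorizes the composite objective and is tight at the current iterate, the recorded objective values decrease monotonically.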

2.3.3 Linearizing concave functions and DC programming

Assume that $f = f_1 + f_2$, where $f_2$ is concave and $L$-smooth. Then, the following function is a majorizing surrogate in $\mathcal{S}_L(f, \kappa)$:

$$g: \theta \mapsto f_1(\theta) + f_2(\kappa) + \nabla f_2(\kappa)^\top (\theta - \kappa).$$

Such a surrogate appears in DC (difference of convex) programming [horst]. When $f_1$ is convex, $f$ is indeed the difference of two convex functions. It is also used in sparse estimation for dealing with some non-convex penalties [bach8]. For example, consider a cost function of the form $\theta \mapsto f_1(\theta) + \lambda \sum_{j=1}^p \log(|\theta[j]| + \varepsilon)$, where $\theta[j]$ is the $j$-th entry in $\theta$. Even though the functions $\theta \mapsto \log(|\theta[j]| + \varepsilon)$ are not differentiable, they can be written as the composition of a concave smooth function on $\mathbb{R}_+$, and a Lipschitz function $\theta \mapsto |\theta[j]|$. By upper-bounding the logarithm function by its linear approximation, Proposition 2.1 justifies the following surrogate:

$$g: \theta \mapsto f_1(\theta) + \lambda \sum_{j=1}^p \frac{|\theta[j]|}{|\kappa[j]| + \varepsilon} + C, \tag{7}$$

where $C$ is a constant independent of $\theta$, and minimizing $g$ amounts to performing one step of a reweighted-$\ell_1$ algorithm (see [candes4] and references therein). Similarly, other penalty functions are adapted to this framework. For instance, the logarithm can be replaced by any smooth concave non-decreasing function, or group-sparsity penalties [turlach, yuan] can be used, such as $\sum_{g \in \mathcal{G}} \log(\|\theta[g]\|_2 + \varepsilon)$, where $\mathcal{G}$ is a partition of $\{1, \ldots, p\}$ and $\theta[g]$ records the entries of $\theta$ corresponding to the set $g$. Proposition 2.1 indeed applies to this setting.
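The reweighted-$\ell_1$ update can be sketched as follows (toy data and all names are ours): each outer iteration linearizes the concave penalty around the current iterate, and the resulting weighted lasso is approximately solved by proximal gradient steps, so the true objective decreases at every outer iteration:

```python
import numpy as np

def reweighted_l1(A, b, lam=0.1, eps=0.1, outer=5, inner=200):
    # Majorize-minimize for 0.5||A th - b||^2 + lam * sum_j log(|th_j| + eps):
    # each outer step upper-bounds the concave log by its linearization,
    # yielding a weighted-l1 problem solved by proximal gradient steps.
    n, p = A.shape
    L = np.linalg.norm(A.T @ A, 2)
    theta = np.zeros(p)
    for _ in range(outer):
        w = 1.0 / (np.abs(theta) + eps)     # weights from the linearization
        for _ in range(inner):
            v = theta - A.T @ (A @ theta - b) / L
            theta = np.sign(v) * np.maximum(np.abs(v) - lam * w / L, 0.0)
    return theta

# Toy sparse recovery problem (ours).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
theta_true = np.zeros(10)
theta_true[:2] = [2.0, -1.5]
b = A @ theta_true + 0.01 * rng.standard_normal(30)
theta_hat = reweighted_l1(A, b, lam=0.5, eps=0.1)
```

Because each weighted-$\ell_1$ surrogate majorizes the non-convex objective and is tight at the current iterate, the scheme inherits the monotone decrease of Algorithm 1.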

2.3.4 Variational surrogates

Let us now consider a real-valued function $f$ defined on $\mathbb{R}^{p_1} \times \mathbb{R}^{p_2}$. Let $\Theta_1 \subseteq \mathbb{R}^{p_1}$ and $\Theta_2 \subseteq \mathbb{R}^{p_2}$ be two convex sets. Minimizing $f$ over $\Theta_1 \times \Theta_2$ is equivalent to minimizing the function $\tilde{f}$ over $\Theta_1$ defined as $\tilde{f}(\theta_1) \triangleq \min_{\theta_2 \in \Theta_2} f(\theta_1, \theta_2)$. Assume now that

  • $\theta_2 \mapsto f(\theta_1, \theta_2)$ is $\mu$-strongly convex for all $\theta_1$ in $\mathbb{R}^{p_1}$;

  • $\theta_1 \mapsto f(\theta_1, \theta_2)$ is differentiable for all $\theta_2$;

  • $(\theta_1, \theta_2) \mapsto \nabla_1 f(\theta_1, \theta_2)$ is $L$-Lipschitz with respect to $\theta_1$ and $L'$-Lipschitz with respect to $\theta_2$.¹

Let us fix $\kappa_1$ in $\Theta_1$. Then, the following function is a majorizing surrogate of $\tilde{f}$:

$$g: \theta_1 \mapsto f(\theta_1, \kappa_2^\star),$$

with $\kappa_2^\star \triangleq \arg\min_{\theta_2 \in \Theta_2} f(\kappa_1, \theta_2)$. We can indeed apply Lemma A, which ensures that $\tilde{f}$ is differentiable with $\nabla \tilde{f}(\theta_1) = \nabla_1 f(\theta_1, \theta_2^\star(\theta_1))$, where $\theta_2^\star(\theta_1)$ is the minimizer of $f(\theta_1, \cdot)$ over $\Theta_2$, for all $\theta_1$. Moreover, the approximation error $g - \tilde{f}$ is smooth according to Lemma A, with a smoothness constant depending on $L$, $L'$, and $\mu$. Note that a better constant can be obtained when $f$ is convex, as noted in the appendix of [mairal17].

¹The notation $\nabla_1 f$ denotes the gradient of $f$ with respect to $\theta_1$.

The surrogate $g$ leads to an alternate minimization algorithm; it is then interesting to note that Proposition 2.2 provides convergence rates similar to those of another recent analysis [beck2013convergence], which makes slightly different assumptions on the function $f$. Variational surrogates can also be useful for problems involving a single variable. For instance, consider a regression problem with the Huber loss function $H_\delta$ defined for all $u$ in $\mathbb{R}$ as

$$H_\delta(u) \triangleq \sqrt{u^2 + \delta},$$

where $\delta$ is a positive constant.² The Huber loss can be seen as a smoothed version of the $\ell_1$-norm when $\delta$ is small, or simply as a robust variant of the squared loss that asymptotically grows linearly. Then, it is easy to show that

$$H_\delta(u) = \min_{w > 0} \frac{1}{2} \left( \frac{u^2 + \delta}{w} + w \right).$$

Consider now a regression problem with $n$ training data points represented by vectors $x_i$ in $\mathbb{R}^p$, associated to real numbers $y_i$, for $i = 1, \ldots, n$. The robust regression problem with the Huber loss can be formulated as the minimization over $\mathbb{R}^p$ of

$$\tilde{f}(\theta) \triangleq \sum_{i=1}^n H_\delta(y_i - x_i^\top \theta),$$

where $\theta$ is the parameter vector of a linear model. The conditions described at the beginning of this section can be shown to be satisfied; the resulting algorithm is the iteratively reweighted least-squares method, which appears both in the literature about robust statistics [lange2], and in that about sparse estimation, where the Huber loss is used to approximate the $\ell_1$-norm [bach8].

²To simplify the notation, we present a shifted version of the traditional Huber loss, which usually satisfies $H(0) = 0$.
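Alternately minimizing the variational surrogate in the weights and in $\theta$ gives the classical IRLS iteration; here is a minimal Python sketch on toy data with gross outliers, using the shifted loss $\sqrt{u^2 + \delta}$ and names of our own choosing:

```python
import numpy as np

def irls_huber(X, y, delta=1.0, n_iters=100):
    # Iteratively reweighted least squares for sum_i sqrt(r_i^2 + delta),
    # with r_i = y_i - x_i' theta: fixing the variational weights at
    # their optimal value w_i = 1/sqrt(r_i^2 + delta) leaves a weighted
    # least-squares problem in theta, solved here in closed form.
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(n_iters):
        r = y - X @ theta
        w = 1.0 / np.sqrt(r ** 2 + delta)
        Xw = X * w[:, None]                  # diag(w) @ X
        theta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return theta

# Toy robust regression with a few corrupted responses (data ours).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
theta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ theta_true + 0.1 * rng.standard_normal(50)
y[:5] += 20.0                                # five gross outliers
theta_hat = irls_huber(X, y, delta=0.1)
```

Because the loss grows linearly in the tails, the fitted parameters stay close to the ground truth despite the corrupted responses, unlike an ordinary least-squares fit.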

2.3.5 Jensen surrogates

Jensen’s inequality also provides a natural mechanism to obtain surrogates for convex functions. Following the presentation of Lange, Hunter, and Yang [lange2], we consider a convex function $f: \mathbb{R} \to \mathbb{R}$, a vector $x$ in $\mathbb{R}^p$, and define $\tilde{f}: \mathbb{R}^p \to \mathbb{R}$ as $\tilde{f}(\theta) \triangleq f(x^\top \theta)$ for all $\theta$. Let $w$ be a weight vector in $\mathbb{R}_+^p$ such that $\|w\|_1 = 1$ and $w_j \neq 0$ whenever $x_j \neq 0$. Then, we define for any $\kappa$ in $\mathbb{R}^p$:

$$g: \theta \mapsto \sum_{j=1}^p w_j f\left( \frac{x_j}{w_j} (\theta_j - \kappa_j) + x^\top \kappa \right).$$

When $f$ is $L$-smooth, and when the weights $w$ are chosen appropriately, $g$ is in $\mathcal{S}_{L'}(\tilde{f}, \kappa)$ with

  • $L' = L \|x\|_2^2$ for $w_j \triangleq x_j^2 / \|x\|_2^2$;

  • $L' = L \|x\|_1 \|x\|_\infty$ for $w_j \triangleq |x_j| / \|x\|_1$;

  • $L' = L p \|x\|_\infty^2$ for $w_j \triangleq 1/p$.
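The majorization and tightness properties of this surrogate are easy to check numerically; in the following sketch (function choice, data, and names are ours) we take the logistic-type function $f(u) = \log(1 + e^u)$ and the $\ell_1$-proportional weights:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: np.log1p(np.exp(u))      # a convex, smooth scalar function
p = 6
x = rng.standard_normal(p)             # nonzero entries almost surely
kappa = rng.standard_normal(p)
w = np.abs(x) / np.sum(np.abs(x))      # weights w_j = |x_j| / ||x||_1

def jensen_surrogate(theta):
    # g(theta) = sum_j w_j * f((x_j / w_j)(theta_j - kappa_j) + x' kappa)
    z = (x / w) * (theta - kappa) + x @ kappa
    return np.sum(w * f(z))

f_tilde = lambda theta: f(x @ theta)

# Jensen's inequality: the surrogate majorizes f_tilde everywhere...
for _ in range(100):
    theta = rng.standard_normal(p)
    assert jensen_surrogate(theta) >= f_tilde(theta) - 1e-12
# ...and is tight at the anchor point kappa.
assert abs(jensen_surrogate(kappa) - f_tilde(kappa)) < 1e-12
```

The check works because the weighted arguments average back to $x^\top \theta$, so convexity of $f$ gives the majorization, with equality at $\theta = \kappa$.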

To the best of our knowledge, non-asymptotic convergence rates have not been studied before for such surrogates, and thus we believe that our analysis may provide new results in the present case. Jensen surrogates are indeed quite uncommon; they appear nevertheless on a few occasions. In addition to the few examples given in [lange2], they are used for instance in machine learning by Della Pietra et al. [pietra] for interpreting boosting procedures through the concept of auxiliary functions.

Jensen’s inequality is also used in a different fashion in EM algorithms [dempster, neal]. Consider $n$ non-negative functions $f_1, \ldots, f_n$, and, for some $\kappa$ in $\Theta$, define the weights $w_i \triangleq f_i(\kappa) / \sum_{j=1}^n f_j(\kappa)$. By exploiting the concavity of the logarithm, and assuming that $w_i \neq 0$ for all $i$ to simplify, Jensen’s inequality yields

$$-\log\left( \sum_{i=1}^n f_i(\theta) \right) \leq -\sum_{i=1}^n w_i \log\left( \frac{f_i(\theta)}{w_i} \right). \tag{8}$$

The relation (8) is key to EM algorithms minimizing a negative log-likelihood. The right side of this equation can be interpreted as a majorizing surrogate of the left side, since it is easy to show that both terms are equal for $\theta = \kappa$. Unfortunately, the resulting approximation error functions are not $L$-smooth in general, and these surrogates do not satisfy the assumptions of Definition 1. As a consequence, our analysis may apply to some EM algorithms, but not to all of them.
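The inequality (8) and its tightness at $\theta = \kappa$ are easy to verify numerically; in this small sketch (all values are ours), the functions are represented simply by their values at the two points of interest:

```python
import numpy as np

rng = np.random.default_rng(0)
# Values of n non-negative functions f_i at kappa (defining the weights)
# and at some other point theta (where the two sides of (8) are compared).
fi_kappa = rng.uniform(0.1, 1.0, size=5)
fi_theta = rng.uniform(0.1, 1.0, size=5)
w = fi_kappa / fi_kappa.sum()          # w_i = f_i(kappa) / sum_j f_j(kappa)

# Jensen's inequality applied to the concave logarithm:
lhs = -np.log(fi_theta.sum())
rhs = -np.sum(w * np.log(fi_theta / w))
assert lhs <= rhs + 1e-12

# Tightness at theta = kappa: f_i(kappa) / w_i is constant, so both
# sides equal -log(sum_j f_j(kappa)).
lhs0 = -np.log(fi_kappa.sum())
rhs0 = -np.sum(w * np.log(fi_kappa / w))
assert abs(lhs0 - rhs0) < 1e-12
```

This is exactly the computation underlying the E-step of an EM algorithm: the weights play the role of posterior responsibilities computed at the current estimate.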

2.3.6 Quadratic surrogates

When $f$ is twice differentiable and admits a matrix $H$ such that $H - \nabla^2 f(\theta)$ is always positive definite, the following function is a first-order majorizing surrogate:

$$g: \theta \mapsto f(\kappa) + \nabla f(\kappa)^\top (\theta - \kappa) + \frac{1}{2} (\theta - \kappa)^\top H (\theta - \kappa).$$