Semi-Stochastic Gradient Descent Methods

12/05/2013 ∙ by Jakub Konečný, et al.

In this paper we study the problem of minimizing the average of a large number (n) of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs, in each of which a single full gradient and a random number of stochastic gradients are computed, following a geometric law. The total work needed for the method to output an ε-accurate solution in expectation, measured in the number of passes over data, or equivalently, in units equivalent to the computation of a single gradient of the loss, is O((κ/n) log(1/ε)), where κ is the condition number. This is achieved by running the method for O(log(1/ε)) epochs, with a single gradient evaluation and O(κ) stochastic gradient evaluations in each. The SVRG method of Johnson and Zhang arises as a special case. If our method is limited to a single epoch only, it needs to evaluate at most O((κ/ε) log(1/ε)) stochastic gradients. In contrast, SVRG requires O(κ/ε²) stochastic gradients. To illustrate our theoretical results, S2GD only needs the workload equivalent to about 2.1 full gradient evaluations to find a 10⁻⁶-accurate solution for a problem with n = 10⁹ and κ = 10³.



1 Introduction

Many problems in data science (e.g., machine learning, optimization and statistics) can be cast as loss minimization problems of the form

min_{x ∈ R^d} f(x),   (1)

where

f(x) := (1/n) Σ_{i=1}^n f_i(x).   (2)

Here d typically denotes the number of features / coordinates, n the number of examples, and f_i(x) is the loss incurred on example i. That is, we are seeking to find a predictor x ∈ R^d minimizing the average loss f(x). In big data applications, n is typically very large; in particular, n ≫ d.

Note that this formulation includes the more typical formulation of ℓ2-regularized objectives; we hide the regularizer in the functions f_i for the sake of simplicity of the resulting analysis.

1.1 Motivation

Let us now briefly review two basic approaches to solving problem (1).

  1. Gradient Descent. Given x_k ∈ R^d, the gradient descent (GD) method sets

     x_{k+1} = x_k − h ∇f(x_k),

     where h is a stepsize parameter and ∇f(x_k) is the gradient of f at x_k. We will refer to ∇f(x) by the name full gradient. In order to compute ∇f(x_k), we need to compute the gradients of all n functions. Since n is big, it is prohibitive to do this at every iteration.

  2. Stochastic Gradient Descent (SGD). Unlike gradient descent, stochastic gradient descent [6, 17] instead picks an index i (uniformly at random) and updates

     x_{k+1} = x_k − h ∇f_i(x_k).

     Note that this strategy drastically reduces the amount of work that needs to be done in each iteration (by the factor of n). Since

     E[∇f_i(x)] = (1/n) Σ_{i=1}^n ∇f_i(x) = ∇f(x),

     we have an unbiased estimator of the full gradient. Hence, the gradients of the component functions f_1, …, f_n will be referred to as stochastic gradients. A practical issue with SGD is that consecutive stochastic gradients may vary a lot or even point in opposite directions. This slows down the performance of SGD. On balance, however, SGD is preferable to GD in applications where low accuracy solutions are sufficient. In such cases usually only a small number of passes through the data (i.e., work equivalent to a small number of full gradient evaluations) are needed to find an acceptable x; see the sketch following this list. For this reason, SGD is extremely popular in fields such as machine learning.
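To make the contrast concrete, here is a minimal Python sketch of the two update rules on a synthetic least-squares instance of (1). The data, the stepsize, and the helper names (grad_f_i, full_gradient) are our own illustration, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_f_i(x, i):
    # Stochastic gradient of the i-th loss, f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return (A[i] @ x - b[i]) * A[i]

def full_gradient(x):
    # The full gradient costs n stochastic gradient evaluations (one data pass).
    return A.T @ (A @ x - b) / n

h = 0.01  # stepsize
x_gd = np.zeros(d)
x_sgd = np.zeros(d)

for k in range(100):
    # GD step: accurate direction; work equivalent to n stochastic gradients.
    x_gd = x_gd - h * full_gradient(x_gd)
    # SGD step: noisy but unbiased direction (E[grad f_i(x)] = grad f(x));
    # work equivalent to a single stochastic gradient.
    i = rng.integers(n)
    x_sgd = x_sgd - h * grad_f_i(x_sgd, i)
```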

In order to improve upon GD, one needs to reduce the cost of computing a gradient. In order to improve upon SGD, one has to reduce the variance of the stochastic gradients. In this paper we propose and analyze a Semi-Stochastic Gradient Descent (S2GD) method. Our method combines GD and SGD steps and reaps the benefits of both algorithms: it inherits the stability and speed of GD and at the same time retains the work-efficiency of SGD.

1.2 Brief literature review

Several recent papers, e.g., Richtárik & Takáč [8], Le Roux, Schmidt & Bach [11, 12], Shalev-Shwartz & Zhang [13] and Johnson & Zhang [3] proposed methods which achieve such a variance-reduction effect, directly or indirectly. These methods enjoy linear convergence rates when applied to minimizing smooth strongly convex loss functions.

The method in [8], known as Randomized Coordinate Descent for Composite functions (RCDC), can either be applied directly to (1), or to a dual version of (1); the two options differ in the per-iteration cost and in the total number of iterations needed. Application of a coordinate descent method to a dual formulation of (1) is generally referred to as Stochastic Dual Coordinate Ascent (SDCA) [2]. The algorithm in [13] exhibits this duality, and the method in [14] extends the primal-dual framework to the parallel / mini-batch setting. Parallel and distributed stochastic coordinate descent methods were studied in [9, 1, 10].

Stochastic Average Gradient (SAG) [11] is one of the first SGD-type methods, other than coordinate descent methods, shown to exhibit linear convergence. The method of Johnson and Zhang [3], called Stochastic Variance Reduced Gradient (SVRG), arises as a special case in our setting for a suboptimal choice of a single parameter of our method. The Epoch Mixed Gradient Descent (EMGD) method [16] is similar in spirit to SVRG, but achieves a quadratic dependence on the condition number instead of the linear dependence achieved by SAG, SVRG and our method.

For classical work on semi-stochastic gradient descent methods we refer the reader to the papers of Murti and Fuchs [4, 5]. (We thank Zaid Harchaoui, who pointed us to these papers a few days before we posted our work to arXiv.)

1.3 Outline

We start in Section 2 by describing two algorithms: S2GD, which we analyze, and S2GD+, which we do not analyze, but which exhibits superior performance in practice. We then move to summarizing some of the main contributions of this paper in Section 3. Section 4 is devoted to establishing expectation and high probability complexity results for S2GD in the case of a strongly convex loss. The results are generic in that the parameters of the method are set arbitrarily. Hence, in Section 5 we study the problem of choosing the parameters optimally, with the goal of minimizing the total workload (# of processed examples) sufficient to produce a result of sufficient accuracy. In Section 6 we establish high probability complexity bounds for S2GD applied to a non-strongly convex loss function. Finally, in Section 7 we perform very encouraging numerical experiments on real and artificial problem instances. A brief conclusion can be found in Section 8.

2 Semi-Stochastic Gradient Descent

In this section we describe two novel algorithms: S2GD and S2GD+. We analyze the former only. The latter, however, has superior convergence properties in our experiments.

We assume throughout the paper that the functions f_i are convex and L-smooth.

Assumption 1.

The functions f_1, f_2, …, f_n have Lipschitz continuous gradients with constant L > 0 (in other words, they are L-smooth). That is, for all x, z ∈ R^d and all i = 1, 2, …, n,

f_i(z) ≤ f_i(x) + ∇f_i(x)ᵀ(z − x) + (L/2)‖z − x‖².

(This implies that the gradient of f is Lipschitz with constant L, and hence f satisfies the same inequality.)

In one part of the paper (Section 4) we also make the following additional assumption:

Assumption 2.

The average loss f is μ-strongly convex, μ > 0. That is, for all x, z ∈ R^d,

f(z) ≥ f(x) + ∇f(x)ᵀ(z − x) + (μ/2)‖z − x‖².   (3)

(Note that, necessarily, μ ≤ L.)

2.1 S2GD

Algorithm 1 (S2GD) depends on three parameters: stepsize h, constant m limiting the number of stochastic gradients computed in a single epoch, and a ν ∈ [0, μ], where μ is the strong convexity constant of f. In practice, ν would be a known lower bound on μ. Note that the algorithm works also without any knowledge of the strong convexity parameter: this is the case of ν = 0.

parameters: m = max # of stochastic steps per epoch, h = stepsize, ν = lower bound on μ
for j = 0, 1, 2, … do
      g_j ← (1/n) Σ_{i=1}^n ∇f_i(x_j)
      y_{j,0} ← x_j
      Let t_j ← t with probability (1 − νh)^{m−t}/β for t = 1, 2, …, m
      for t = 0 to t_j − 1 do
          Pick i ∈ {1, 2, …, n}, uniformly at random
          y_{j,t+1} ← y_{j,t} − h(g_j + ∇f_i(y_{j,t}) − ∇f_i(x_j))
      end for
      x_{j+1} ← y_{j,t_j}
end for
Algorithm 1 Semi-Stochastic Gradient Descent (S2GD)

The method has an outer loop, indexed by epoch counter j, and an inner loop, indexed by t. In each epoch j, the method first computes g_j, the full gradient of f at x_j. Subsequently, the method produces a random number t_j of steps, following a geometric law, where

β := Σ_{t=1}^m (1 − νh)^{m−t},   (4)

with only two stochastic gradients computed in each step. (It is possible to get away with computing only a single stochastic gradient per inner iteration, namely ∇f_i(y_{j,t}), at the cost of having to store the gradients ∇f_i(x_j) in memory for i = 1, 2, …, n. This, however, will be impractical for big n.) For each t, the stochastic gradient ∇f_i(x_j) is subtracted from g_j, and ∇f_i(y_{j,t}) is added to g_j, which ensures that one has

E[g_j + ∇f_i(y_{j,t}) − ∇f_i(x_j)] = ∇f(y_{j,t}),

where the expectation is with respect to the random variable i.

Hence, the algorithm is stochastic gradient descent – albeit executed in a nonstandard way (compared to the traditional implementation described in the introduction).

Note that, for all j, the expected number of iterations of the inner loop, E[t_j], is equal to

E[t_j] = (1/β) Σ_{t=1}^m t (1 − νh)^{m−t}.   (5)

Also note that E[t_j] ∈ [(m+1)/2, m], with the lower bound attained for ν = 0, and the upper bound approached as νh → 1.
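The following Python sketch implements Algorithm 1, assuming a stochastic gradient oracle grad_f_i(x, i) is available; the interface and all names are ours, and the sketch favors clarity over efficiency.

```python
import numpy as np

def s2gd(grad_f_i, n, d, h, m, nu, epochs, rng):
    """A sketch of Algorithm 1 (S2GD); interface names are ours.

    grad_f_i(x, i) -- stochastic gradient of f_i at x
    h -- stepsize; m -- max # of inner steps per epoch; nu -- lower bound on mu
    rng -- a numpy Generator supplying the randomness.
    """
    x = np.zeros(d)
    # Geometric law: P(t_j = t) = (1 - nu*h)^(m - t) / beta for t = 1, ..., m,
    # with beta the normalizing constant from (4).
    weights = (1.0 - nu * h) ** (m - np.arange(1, m + 1))
    probs = weights / weights.sum()
    for j in range(epochs):
        # Full gradient g_j at x_j: work equivalent to n stochastic gradients.
        g = sum(grad_f_i(x, i) for i in range(n)) / n
        y = x.copy()
        t_j = rng.choice(np.arange(1, m + 1), p=probs)
        for t in range(t_j):
            i = rng.integers(n)
            # Two stochastic gradients per inner step; the direction is an
            # unbiased estimate of the full gradient at y.
            y = y - h * (g + grad_f_i(y, i) - grad_f_i(x, i))
        x = y
    return x
```

For instance, with the toy oracle from Section 1.1 one could call s2gd(grad_f_i, n, d, h=0.01, m=2*n, nu=0.0, epochs=10, rng=np.random.default_rng(0)).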

2.2 S2GD+

We also implement Algorithm 2, which we call S2GD+. In our experiments, the performance of this method is superior to all methods we tested, including S2GD. However, we do not analyze the complexity of this method and leave this as an open problem.

parameters: α ≥ 1 (e.g., α = 1)
1. Run SGD for a single pass over the data (i.e., n iterations); output x̃
2. Starting from x_0 = x̃, run a version of S2GD in which t_j = αn for all j
Algorithm 2 S2GD+

In brief, S2GD+ starts by running SGD for 1 epoch (1 pass over the data) and then switches to a variant of S2GD in which the number of inner iterations, t_j, is not random, but fixed to be n or a small multiple of n.

The motivation for this method is the following. It is common knowledge that SGD is able to progress much more in one pass over the data than GD (where this would correspond to a single gradient step). However, the very first step of S2GD is the computation of the full gradient of f. Hence, by starting with a single pass over the data using SGD and then switching to S2GD, we obtain a superior method in practice. (Using a single pass of SGD as an initialization strategy was already considered in [11]. However, the authors claim that their implementation of vanilla SAG did not benefit from it. S2GD does benefit from such an initialization since it starts, in theory, with a (heavy) full gradient computation.)
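A hypothetical sketch of S2GD+ along these lines, reusing the stochastic gradient oracle interface from the S2GD sketch above (the parameter alpha is the "small multiple" of n mentioned in the text):

```python
import numpy as np

def s2gd_plus(grad_f_i, n, d, h, alpha, epochs, rng):
    # Step 1: a single pass of plain SGD over the data.
    x = np.zeros(d)
    for _ in range(n):
        i = rng.integers(n)
        x = x - h * grad_f_i(x, i)
    # Step 2: S2GD started from the SGD output, with the inner-loop length
    # fixed to t_j = alpha * n instead of being drawn from the geometric law.
    for _ in range(epochs):
        g = sum(grad_f_i(x, i) for i in range(n)) / n
        y = x.copy()
        for _ in range(int(alpha * n)):
            i = rng.integers(n)
            y = y - h * (g + grad_f_i(y, i) - grad_f_i(x, i))
        x = y
    return x
```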

3 Summary of Results

In this section we summarize some of the main results and contributions of this work.

  1. Complexity for strongly convex f. If f is strongly convex, S2GD needs

    W = O((n + κ) log(1/ε))   (6)

    work (measured as the total number of evaluations of the stochastic gradient, accounting for the full gradient evaluations as well) to output an ε-approximate solution (in expectation or in high probability), where κ = L/μ is the condition number. This is achieved by running S2GD with stepsize h = O(1/L), j = O(log(1/ε)) epochs (this is also equal to the number of full gradient evaluations) and m = O(κ) (this is also roughly equal to the number of stochastic gradient evaluations in a single epoch). The complexity results are stated in detail in Sections 4 and 5 (see Theorems 4, 5 and 6; see also (27) and (26)).

  2. Comparison with existing results. The complexity result (6) matches the best-known results obtained for strongly convex losses in recent work such as [11], [3] and [16]. Our treatment is most closely related to [3], and contains their method (SVRG) as a special case. However, our complexity results have better constants, which has a discernible effect in practice. In Table 1 we compare our results in the strongly convex case with existing results for other algorithms.

    Algorithm               Complexity/Work
    Nesterov's algorithm    O(√κ · n log(1/ε))
    EMGD                    O((n + κ²) log(1/ε))
    SAG                     O(n log(1/ε))  (for n ≥ 8κ)
    SDCA                    O((n + κ) log(1/ε))
    SVRG                    O((n + κ) log(1/ε))
    S2GD                    O((n + κ) log(1/ε))
    Table 1: Comparison of performance of selected methods suitable for solving (1). The complexity/work is measured in the number of stochastic gradient evaluations needed to find an ε-solution.

    We should note that the rate of convergence of Nesterov's algorithm [7] is a deterministic result. The EMGD and S2GD results hold with high probability. The remaining results hold in expectation. Complexity results for stochastic coordinate descent methods are also typically analyzed in the high probability regime [8].

  3. Complexity for convex f. If f is not strongly convex, then we propose that S2GD be applied to a perturbed version of the problem, with strong convexity constant μ = O(ε). An ε-accurate solution of the original problem is recovered with arbitrarily high probability (see Theorem 8 in Section 6). The total work in this case is

    O((n + L/ε) log(1/ε)),

    that is, Õ(n + L/ε), which is better than the standard O(1/ε²) rate of SGD.

  4. Optimal parameters. We derive formulas for the optimal parameters of the method which (approximately) minimize the total workload, measured in the number of stochastic gradients computed (counting a single full gradient evaluation as n evaluations of the stochastic gradient). In particular, we show that the method should be run for j = O(log(1/ε)) epochs, with stepsize h = O(1/L) and m = O(κ). No such results were derived for SVRG in [3].

  5. One epoch. In the case when S2GD is run for 1 epoch only, effectively limiting the number of full gradient evaluations to 1, we show that S2GD with ν = μ needs

    O((κ/ε) log(1/ε))

    work only (see Table 2; these one-epoch bounds are also evaluated numerically in the sketch following this list). This compares favorably with the optimal complexity in the ν = 0 case (which reduces to SVRG), where the work needed is

    O(κ/ε²).

    For two epochs, one could simply require a √ε decrease in each epoch, giving a complexity of O((κ/√ε) log(1/√ε)). This is already better than the general rate of SGD.

    Parameters                          Method                      Complexity
    ν = μ, j = O(log(1/ε)), m = O(κ)    Optimal S2GD                O((n + κ) log(1/ε))
    m = 1                               GD
    ν = 0                               SVRG [3]                    O((n + κ) log(1/ε))
    ν = 0, j = 1                        Optimal SVRG with 1 epoch   O(κ/ε²)
    ν = μ, j = 1                        Optimal S2GD with 1 epoch   O((κ/ε) log(1/ε))
    Table 2: Summary of complexity results and special cases. Condition number: κ = L/μ if f is μ-strongly convex; if f is not strongly convex, κ refers to the condition number of the perturbed problem of Section 6.
  6. Special cases. GD and SVRG arise as special cases of S2GD: GD for m = 1, and SVRG for ν = 0. (While S2GD reduces to GD for m = 1, our analysis does not say anything meaningful in the m = 1 case; it is too coarse to cover it. This is also the reason behind the empty space in the "Complexity" column for GD in Table 2.)

  7. Low memory requirements. Note that SDCA and SAG, unlike SVRG and S2GD, need to store all n gradients (or dual variables) throughout the iterative process. While this may not be a problem for a modest-sized optimization task, this requirement makes such methods less suitable for problems with very large n.

  8. S2GD+. We propose a "boosted" version of S2GD, called S2GD+, which we do not analyze. In our experiments, however, it performs vastly better than all other methods we tested, including GD, SGD, SAG and S2GD. S2GD alone is better than both GD and SGD if a highly accurate solution is required. The performance of S2GD and SAG is roughly comparable, even though in our experiments S2GD turned out to have an edge.
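As a rough numeric illustration of the one-epoch and multi-epoch bounds above (see item 5), the following snippet evaluates them at the values used in the abstract. These are O(·) bounds with constants dropped; the optimized constants of Section 5 bring the multi-epoch figure down to about 2.1 passes for this instance.

```python
import math

# Illustrative values from the abstract: n = 1e9, kappa = 1e3, eps = 1e-6.
n, kappa, eps = 1e9, 1e3, 1e-6

bounds = {
    "multi-epoch S2GD/SVRG/SDCA, (n + kappa) log(1/eps)": (n + kappa) * math.log(1 / eps),
    "1-epoch S2GD, (kappa/eps) log(1/eps)": (kappa / eps) * math.log(1 / eps),
    "1-epoch SVRG, kappa/eps^2": kappa / eps ** 2,
}
for name, work in bounds.items():
    print(f"{name}: ~{work / n:.2f} passes over the data")
```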

4 Complexity Analysis: Strongly Convex Loss

For the purpose of the analysis, let

F_{j,t} := σ(y_{j,0}, y_{j,1}, …, y_{j,t})   (7)

be the σ-algebra generated by the relevant history of S2GD up to step t of epoch j. We first isolate an auxiliary result.

Lemma 3.

Consider the S2GD algorithm. For any fixed epoch number j, the following identity holds:

E[f(x_{j+1})] = (1/β) Σ_{t=1}^m (1 − νh)^{m−t} E[f(y_{j,t})].   (8)
Proof.

By the tower property of expectations and the definition of t_j in the algorithm, we obtain

E[f(x_{j+1})] = E[E[f(y_{j,t_j}) | t_j]] = Σ_{t=1}^m P(t_j = t) E[f(y_{j,t})] = (1/β) Σ_{t=1}^m (1 − νh)^{m−t} E[f(y_{j,t})].

We now state and prove the main result of this section.

Theorem 4.

Let Assumptions 1 and 2 be satisfied. Consider the S2GD algorithm applied to solving problem (1). Choose 0 ≤ ν ≤ μ and 0 < h < 1/(2L), and let m be sufficiently large so that

c := (1 − νh)^m / (βμh(1 − 2Lh)) + 2(L − μ)h / (1 − 2Lh) < 1.   (9)

Then we have the following convergence in expectation:

E(f(x_j) − f(x_*)) ≤ c^j (f(x_0) − f(x_*)).   (10)

Before we proceed to proving the theorem, note that in the special case with ν = 0, we recover the result of Johnson and Zhang [3] (with a minor improvement in the second term of c, where L is replaced by L − μ), namely

c = 1/(mμh(1 − 2Lh)) + 2(L − μ)h/(1 − 2Lh).   (11)

If we set ν = μ, then c can be written in the form (see (4))

c = (1 − μh)^m / ((1 − (1 − μh)^m)(1 − 2Lh)) + 2(L − μ)h/(1 − 2Lh).   (12)

Clearly, the latter is a major improvement on the former. We shall elaborate on this further later.
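As a quick numeric sanity check of this comparison, the following snippet evaluates both rates, assuming the expressions (11) and (12) as reconstructed above; the values of μ, L, h and m are illustrative.

```python
import math

# Evaluate the convergence rate c of Theorem 4 for the two extreme choices of nu.
mu, L = 1.0, 100.0        # condition number kappa = L / mu = 100
h = 0.1 / L               # stepsize; must satisfy h < 1/(2L)
m = 2000                  # inner-loop cap

c_common = 2 * (L - mu) * h / (1 - 2 * L * h)

# nu = 0, as in (11): beta = m and (1 - nu*h)^m = 1.
c_svrg = 1.0 / (m * mu * h * (1 - 2 * L * h)) + c_common

# nu = mu, as in (12): beta * mu * h = 1 - (1 - mu*h)^m.
q = (1 - mu * h) ** m
c_s2gd = q / ((1 - q) * (1 - 2 * L * h)) + c_common

print(f"c (nu = 0):  {c_svrg:.4f}")   # ~0.87 for these values
print(f"c (nu = mu): {c_s2gd:.4f}")   # strictly smaller for the same h, m
```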

Proof.

It is well-known [7, Theorem 2.1.5] that, since the functions f_i are L-smooth, they necessarily satisfy the following inequality:

‖∇f_i(x) − ∇f_i(x_*)‖² ≤ 2L(f_i(x) − f_i(x_*) − ∇f_i(x_*)ᵀ(x − x_*)).

By summing these inequalities for i = 1, 2, …, n, and using ∇f(x_*) = 0, we get

(1/n) Σ_{i=1}^n ‖∇f_i(x) − ∇f_i(x_*)‖² ≤ 2L(f(x) − f(x_*)).   (13)

Let G_{j,t} := g_j + ∇f_i(y_{j,t}) − ∇f_i(x_j) be the direction of update at iteration t of the inner loop in epoch j. Taking expectation with respect to i, conditioned on the σ-algebra (7), we obtain (for simplicity, we suppress the conditioning in the notation here)

(14)

Above we have used the bound ‖a + b‖² ≤ 2‖a‖² + 2‖b‖² and the fact that

(15)

We now study the expected distance to the optimal solution (a standard approach in the analysis of gradient methods):

(16)

By rearranging the terms in (16) and taking expectation over the σ-algebra (7), we get the following inequality:

(17)

Finally, we can analyze what happens after one iteration of the outer loop of S2GD, i.e., between two computations of the full gradient. By summing up the inequalities (17) for t = 1, 2, …, m, with the t-th inequality multiplied by (1 − νh)^{m−t}, we get the left-hand side

and the right-hand side

Combining the two sides via Lemma 3 and using the definition of c in (9), we finally conclude with

E(f(x_{j+1}) − f(x_*)) ≤ c E(f(x_j) − f(x_*)).

Since we have established linear convergence of expected values, a high probability result can be obtained in a straightforward way using the Markov inequality.

Theorem 5.

Consider the setting of Theorem 4. Then, for any 0 < ρ < 1, 0 < ε < 1, and

j ≥ log(1/(ερ)) / log(1/c),   (18)

we have

P(f(x_j) − f(x_*) ≤ ε(f(x_0) − f(x_*))) ≥ 1 − ρ.   (19)
Proof.

This follows directly from the Markov inequality and Theorem 4:

P(f(x_j) − f(x_*) > ε(f(x_0) − f(x_*))) ≤ E(f(x_j) − f(x_*)) / (ε(f(x_0) − f(x_*))) ≤ c^j / ε ≤ ρ.

This result will also be useful when treating the non-strongly convex case.
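In practice, (18) translates into a simple epoch-count calculation; the helper below is our own illustration, assuming the form of (18) given above.

```python
import math

def epochs_needed(c, eps, rho):
    # Smallest j satisfying (18): j >= log(1/(eps*rho)) / log(1/c),
    # which by Theorem 5 yields the guarantee (19) with probability >= 1 - rho.
    return math.ceil(math.log(1 / (eps * rho)) / math.log(1 / c))

# Example: per-epoch contraction c = 0.5, accuracy eps = 1e-6, failure prob 1e-3.
print(epochs_needed(0.5, 1e-6, 1e-3))  # -> 30
```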

5 Optimal Choice of Parameters

The goal of this section is to provide insight into the choice of the parameters of S2GD; that is, the number of epochs (equivalently, full gradient evaluations) j, the maximal number of steps in each epoch m, and the stepsize h. The remaining quantities (n, μ, L) are inherent to the problem, and we will hence treat them in this section as given.

In particular, ideally we wish to find parameters j, m and h solving the following optimization problem:

min_{j,m,h} W̃(j, m) := j(n + 2 E[t_j]),   (20)

subject to

E(f(x_j) − f(x_*)) ≤ ε(f(x_0) − f(x_*)).   (21)

Note that W̃ is the expected work, measured by the number of stochastic gradient evaluations, performed by S2GD when running for j epochs. Indeed, the evaluation of g_j is equivalent to n stochastic gradient evaluations, and each epoch further computes on average 2 E[t_j] stochastic gradients (see (5)). Since E[t_j] ≤ m, we can simplify and solve the problem with E[t_j] replaced by the conservative upper estimate m.

In view of (10), accuracy constraint (21) is satisfied if c (which depends on m and h) and j satisfy

c^j ≤ ε.   (22)

We therefore instead consider the parameter fine-tuning problem

min_{j,m,h} W(j, m) := j(n + 2m)  subject to  c ≤ ε^{1/j}.   (23)

In the following we (approximately) solve this problem in two steps. First, we fix j and find (nearly) optimal h = h(j) and m = m(j). The problem reduces to minimizing m subject to c ≤ ε^{1/j} by fine-tuning h. While in the case ν = 0 it is possible to obtain a closed-form solution, this is not possible for ν = μ.

However, it is still possible to obtain a good formula for h(j) leading to an expression for m(j) which depends on j in the correct way. We then plug the formula for m(j) obtained this way back into (23), and study the quantity W(j, m(j)) as a function of j, over which we optimize at the end.
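Before stating the closed forms, it may help to see the tuning problem solved by brute force. The sketch below assumes the workload model j(n + 2m) from (23) and the ν = μ rate (12) as reconstructed above; the grid of stepsizes and all concrete values are illustrative.

```python
import math

def rate(h, m, mu, L):
    # The nu = mu convergence rate c from (12).
    q = (1 - mu * h) ** m
    return q / ((1 - q) * (1 - 2 * L * h)) + 2 * (L - mu) * h / (1 - 2 * L * h)

def smallest_m(h, mu, L, target, m_max=10**7):
    # c decreases in m, so a doubling search finds an m within a factor
    # of 2 of the smallest feasible value (or None if none exists).
    m = 1
    while m <= m_max:
        if rate(h, m, mu, L) <= target:
            return m
        m *= 2
    return None

def tune(n, mu, L, eps, j_max=40):
    best = None
    for j in range(1, j_max + 1):
        target = eps ** (1.0 / j)  # per-epoch contraction needed by (22)
        for h in [f / L for f in (0.01, 0.02, 0.05, 0.1, 0.2, 0.4)]:
            m = smallest_m(h, mu, L, target)
            if m is not None:
                work = j * (n + 2 * m)  # workload model from (23)
                if best is None or work < best[0]:
                    best = (work, j, h, m)
    return best  # (workload, epochs, stepsize, inner-loop cap)

print(tune(n=10**5, mu=1.0, L=100.0, eps=1e-6))
```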

Theorem 6 (Choice of parameters).

Fix the number of epochs j ≥ 1 and the error tolerance 0 < ε < 1, and let Δ := ε^{1/j}. If we run S2GD with the stepsize

(24)

and

(25)

then

E(f(x_j) − f(x_*)) ≤ ε(f(x_0) − f(x_*)).

In particular, if we choose j* = ⌈log(1/ε)⌉, then Δ = ε^{1/j*} ≥ e^{−1}, and hence m(j*) = O(κ), leading to the workload

W = O((n + κ) log(1/ε)).   (26)
Proof.

We only need to show that c ≤ Δ, where c is given by (12) for ν = μ and by (11) for ν = 0. We denote the two summands in the expressions for c by c₁ and c₂. We choose h and m so that both c₁ and c₂ are smaller than Δ/2, resulting in c ≤ Δ.

The stepsize h is chosen so that c₂ = Δ/2, and hence it only remains to verify that c₁ ≤ Δ/2. In the ν = 0 case, m is chosen so that c₁ = Δ/2. In the ν = μ case, c₁ ≤ Δ/2 holds for m ≥ m(j), with m(j) as in (25). We only need to observe that c₁ decreases as m increases, and apply the inequality log(1/(1 − μh)) ≥ μh.

We now comment on the above result:

  1. Workload. Notice that for the choice of parameters j*, m(j*), h(j*) and any 0 < ε < 1, the method needs ⌈log(1/ε)⌉ computations of the full gradient (note this is independent of κ), and O(κ log(1/ε)) computations of the stochastic gradient. This result, and special cases thereof, are summarized in Table 2.

  2. Simpler formulas for m. If needed, one can instead of (25) use the following (slightly worse but) simpler expressions for m, obtained from (25) by applying elementary bounds in appropriate places:

    (27)
  3. Optimal stepsize in the ν = 0 case. Theorem 6 does not claim to have solved problem (23); the problem, in general, does not have a closed-form solution. However, in the ν = 0 case a closed-form formula can easily be obtained:

    (28)

    Indeed, for fixed j, (23) is equivalent to finding h that minimizes m subject to the constraint c ≤ Δ. In view of (11), this is equivalent to searching for h maximizing a simple quadratic in h, which leads to (28).

    Note that both the stepsize h and the resulting m are slightly larger in Theorem 6 than in (28). This is because in the theorem the stepsize was, for simplicity, chosen to satisfy c₂ = Δ/2, and hence is (slightly) suboptimal. Nevertheless, the dependence of m on Δ is of the correct (optimal) order in both the ν = μ and the ν = 0 case.

  4. Stepsize choice. In cases when one does not have a good estimate of the strong convexity constant μ to determine the stepsize via (24), one may choose a suboptimal stepsize that does not depend on μ and derive similar results to those above. For instance, one may choose h = O(1/L).

In Table 3 we provide a comparison of the work needed for small values of ε, and different values of κ and n. Note, for instance, that for a problem with n = 10⁹ and κ = 10³, S2GD outputs a highly accurate solution (ε = 10⁻⁶) in an amount of work equivalent to about 2.1 evaluations of the full gradient of f!

1.06n    2.03n
2.12n    3.48n
3.18n    5.32n
3.77n    6.39n
7.30n    12.7n
10.9n    19.1n
358n     1002n
717n     2005n
1076n    3008n
Table 3: Comparison of work sufficient to solve a problem with n = 10⁹, for various values of κ and ε; the two columns correspond to two different parameter settings. The work was computed using formula (23), with m as in (27).

6 Complexity Analysis: Convex Loss

If f is convex but not strongly convex, we define f̂_i(x) := f_i(x) + (μ/2)‖x − x₀‖², for small enough μ > 0 (we shall see below how the choice of μ affects the results), and consider the perturbed problem

min_{x ∈ R^d} f̂(x),   (29)

where

f̂(x) := (1/n) Σ_{i=1}^n f̂_i(x) = f(x) + (μ/2)‖x − x₀‖².   (30)

Note that f̂ is μ-strongly convex and (L + μ)-smooth.
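A minimal sketch of this perturbation, assuming the reconstruction of (29)-(30) above: make_perturbed_grad is a hypothetical helper that wraps a stochastic gradient oracle so that the result can be handed directly to an S2GD implementation such as the sketch in Section 2.

```python
def make_perturbed_grad(grad_f_i, x0, mu):
    # Gradients of the perturbed losses f_i_hat(x) = f_i(x) + (mu/2)*||x - x0||^2;
    # f_hat is mu-strongly convex, so the strongly convex analysis applies to it.
    # x and x0 are assumed to be numpy arrays of the same shape.
    def grad_f_i_hat(x, i):
        return grad_f_i(x, i) + mu * (x - x0)
    return grad_f_i_hat
```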