Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning

03/23/2017 ∙ by Fanhua Shang, et al. ∙ The Chinese University of Hong Kong

Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems). However, the best-known methods (e.g., Katyusha) require at least two auxiliary variables and two momentum parameters. In this paper, we propose a fast stochastic variance reduced gradient (FSVRG) method, in which we design a novel update rule with Nesterov's momentum and incorporate the technique of growing epoch size. FSVRG has only one auxiliary variable and one momentum weight, and thus it is much simpler and has much lower per-iteration complexity. We prove that FSVRG achieves linear convergence for strongly convex problems and the optimal O(1/T^2) convergence rate for non-strongly convex problems, where T is the number of outer iterations. We also extend FSVRG to directly solve problems with non-smooth component functions, such as SVM. Finally, we empirically study the performance of FSVRG for solving various machine learning problems such as logistic regression, ridge regression, Lasso and SVM. Our results show that FSVRG outperforms the state-of-the-art stochastic methods, including Katyusha.


1 Introduction

In this paper, we consider the following finite-sum composite convex optimization problem:

$\min_{x\in\mathbb{R}^{d}}\; F(x) := f(x) + g(x), \qquad f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x),$    (1)

where $f(x)$ is a convex function that is a finite average of $n$ convex component functions $f_i(x)$, and $g(x)$ is a "simple" possibly non-smooth convex function (referred to as a regularizer), e.g., the $\ell_2$-norm regularizer $\frac{\lambda_2}{2}\|x\|^2$, the $\ell_1$-norm regularizer $\lambda_1\|x\|_1$, and the elastic-net regularizer $\frac{\lambda_2}{2}\|x\|^2+\lambda_1\|x\|_1$, where $\lambda_1,\lambda_2\geq 0$ are the regularization parameters. Such a composite problem (1) naturally arises in many applications of machine learning and data mining, such as regularized empirical risk minimization (ERM) and eigenvector computation [29, 7]. As summarized in [1, 2], there are mainly four interesting categories of Problem (1), as follows (Cases 1 and 2 are instantiated in the short code sketch after this list):

  • Case 1: Each $f_i(\cdot)$ is $L$-smooth and $F(\cdot)$ is $\mu$-strongly convex ($\mu$-SC). Examples: ridge regression and elastic-net regularized logistic regression.

  • Case 2: Each $f_i(\cdot)$ is $L$-smooth and $F(\cdot)$ is non-strongly convex (NSC). Examples: Lasso and $\ell_1$-norm regularized logistic regression.

  • Case 3: Each $f_i(\cdot)$ is non-smooth (but Lipschitz continuous) and $F(\cdot)$ is $\mu$-SC. Example: linear support vector machine (SVM).

  • Case 4: Each $f_i(\cdot)$ is non-smooth (but Lipschitz continuous) and $F(\cdot)$ is NSC. Example: $\ell_1$-norm regularized SVM.
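To make the first two cases concrete, the following Python sketch (our illustration; the data, names, and parameter values are not from the paper) evaluates the composite objective of Problem (1) for the logistic loss with either an $\ell_2$ regularizer (Case 1) or an $\ell_1$ regularizer (Case 2):

    import numpy as np

    def F(x, A, b, lam, reg="l2"):
        # f(x): average of n logistic losses f_i(x) = log(1 + exp(-b_i * a_i^T x))
        f = np.mean(np.log1p(np.exp(-b * (A @ x))))
        # g(x): l2 regularizer (strongly convex, Case 1) or l1 regularizer (NSC, Case 2)
        g = 0.5 * lam * x @ x if reg == "l2" else lam * np.abs(x).sum()
        return f + g

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 20))      # n = 1000 samples, d = 20 features (toy data)
    b = np.sign(rng.standard_normal(1000))   # binary labels in {-1, +1}
    x0 = np.zeros(20)
    print(F(x0, A, b, lam=1e-3, reg="l2"), F(x0, A, b, lam=1e-3, reg="l1"))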

To solve Problem (1) with a large sum of component functions, computing the full (sub)gradient of $f(\cdot)$ (e.g., $\nabla f(x)=\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(x)$ for the smooth case) in first-order methods is expensive, and hence stochastic (sub)gradient descent (SGD), also known as incremental gradient descent, has been widely used in many large-scale problems [33, 39]. SGD approximates the gradient from just one example or a mini-batch, and thus it enjoys a low per-iteration computational complexity. Moreover, SGD is extremely simple and highly scalable, making it particularly suitable for large-scale machine learning, e.g., deep learning [33]. However, the variance of the stochastic gradient estimator may be large in practice [9, 40], which leads to slow convergence and poor performance. Even for Case 1, standard SGD can only achieve a sub-linear convergence rate [21, 30].

Recently, the convergence speed of standard SGD has been dramatically improved by various variance reduced methods, such as SAG [23], SDCA [27], SVRG [9], SAGA [6], and their proximal variants, such as [25], [28], [35] and [10]. Indeed, many of these stochastic methods use past full gradients to progressively reduce the variance of the stochastic gradient estimator, which has led to a revolution in the area of first-order methods. Thus, they are also called semi-stochastic gradient descent [10] or hybrid gradient descent [38] methods. In particular, these recent methods converge linearly for Case 1, and their overall complexity (the total number of component gradient evaluations needed to find an $\epsilon$-accurate solution) is $O\big((n+L/\mu)\log(1/\epsilon)\big)$, where $L$ is the Lipschitz constant of the gradients of the component functions $f_i(\cdot)$, and $\mu$ is the strong convexity constant of $F(\cdot)$. The complexity bound shows that those stochastic methods always converge faster than accelerated deterministic methods (e.g., FISTA [5]) [10]. Moreover, [3] and [22] proved that SVRG with minor modifications can converge asymptotically to a stationary point in the non-convex case. However, there is still a gap between this overall complexity and the theoretical lower bound provided in [34]. For Case 2, these methods converge much slower than accelerated deterministic algorithms, i.e., $O(1/T)$ vs. $O(1/T^{2})$.

More recently, several accelerated stochastic methods have been proposed. Among them, the successful techniques mainly include the Nesterov acceleration technique [13, 14, 20], the choice of a growing epoch length [16, 4], and the momentum acceleration trick [1, 8]. [14] presents an accelerating Catalyst framework and achieves a complexity of $O\big((n+\sqrt{nL/\mu})\log(L/\mu)\log(1/\epsilon)\big)$ for Case 1. However, adding a dummy regularizer hurts the performance of the algorithm both in theory and in practice [4]. The methods in [1, 8] attain the best-known complexity of $O\big(n\log(1/\epsilon)+\sqrt{nL/\epsilon}\big)$ for Case 2. Unfortunately, they require at least two auxiliary variables and two momentum parameters, which leads to complicated algorithm design and high per-iteration complexity.

Contributions: To address the aforementioned weaknesses of existing methods, we propose a fast stochastic variance reduced gradient (FSVRG) method, in which we design a novel update rule with Nesterov's momentum [17]. The key update rule has only one auxiliary variable and one momentum weight. Thus, FSVRG is much simpler and more efficient than [1, 8]. FSVRG is a direct accelerated method that does not use any dummy regularizer, and it also works in non-smooth and proximal settings. Unlike most variance reduced methods such as SVRG, which have convergence guarantees only for Case 1, FSVRG has convergence guarantees for both Cases 1 and 2. In particular, FSVRG uses a flexible growing epoch size strategy as in [16] to speed up its convergence. Impressively, FSVRG converges much faster than the state-of-the-art stochastic methods. We summarize our main contributions as follows.

  • We design a new momentum-accelerated update rule, and present two schemes for selecting the momentum weight, for Cases 1 and 2, respectively.

  • We prove that FSVRG attains linear convergence for Case 1, and achieves an $O(1/T^{2})$ convergence rate with an overall complexity matching the best-known result in [1] for Case 2.

  • Finally, we also extend FSVRG to mini-batch settings and non-smooth settings (i.e., Cases 3 and 4), and provide an empirical study on the performance of FSVRG for solving various machine learning problems.

2 Preliminaries

Throughout this paper, $\|\cdot\|$ denotes the standard Euclidean norm, and $\|\cdot\|_1$ is the $\ell_1$-norm, i.e., $\|x\|_1=\sum_{i=1}^{d}|x_i|$. We denote by $\nabla f(x)$ the full gradient of $f(x)$ if it is differentiable, or by $\partial f(x)$ a sub-gradient of $f(x)$ if $f(x)$ is only Lipschitz continuous. We mostly focus on the case of Problem (1) in which each $f_i(x)$ is $L$-smooth. (In fact, all of our theoretical results below for this case, i.e., when the gradients of all component functions share the same Lipschitz constant $L$, can be extended to the more general case in which some $f_i(x)$ have different degrees of smoothness.) For non-smooth component functions, we can use the proximal operator oracle [2], or the Nesterov smoothing [19] and homotopy smoothing [36] techniques, to smoothen them and thereby obtain smoothed approximations of all the functions $f_i(x)$.
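As an illustration of such a smoothing step, the sketch below implements one standard Nesterov-style smoothed hinge loss; this is our own example of the general idea, not the specific construction used in [19, 36]:

    import numpy as np

    def smoothed_hinge(z, mu=0.1):
        # Nesterov-smoothed hinge: zero above 1, linear below 1 - mu, quadratic in between.
        # Its derivative is (1/mu)-Lipschitz, so the smoothed component is L-smooth with L ~ 1/mu.
        return np.where(z >= 1.0, 0.0,
               np.where(z <= 1.0 - mu, 1.0 - z - 0.5 * mu, (1.0 - z) ** 2 / (2.0 * mu)))

    def smoothed_hinge_grad(z, mu=0.1):
        return np.where(z >= 1.0, 0.0,
               np.where(z <= 1.0 - mu, -1.0, -(1.0 - z) / mu))

    z = np.linspace(-1.0, 2.0, 7)
    print(smoothed_hinge(z), smoothed_hinge_grad(z))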

When the regularizer $g(x)$ is non-smooth (e.g., the $\ell_1$-norm regularizer), the update rule of general SGD is formulated as follows:

$x_{t} = \arg\min_{x}\Big\{ g(x) + \langle \nabla f_{i_t}(x_{t-1}),\, x\rangle + \tfrac{1}{2\eta_t}\|x - x_{t-1}\|^{2} \Big\},$    (2)

where $\eta_t$ is the step size (or learning rate), and the index $i_t$ is chosen uniformly at random from $\{1,\dots,n\}$. When $g(x)\equiv 0$, the update rule in (2) becomes $x_{t}=x_{t-1}-\eta_t\nabla f_{i_t}(x_{t-1})$. If each $f_i(x)$ is non-smooth (e.g., the hinge loss), we need to replace $\nabla f_{i_t}(x_{t-1})$ in (2) with a sub-gradient of $f_{i_t}(\cdot)$ at $x_{t-1}$.
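A minimal sketch of one stochastic step of the form (2) for the $\ell_1$ regularizer, where the inner arg-min reduces to a soft-thresholding (proximal) step; the logistic component, step size, and data below are illustrative assumptions:

    import numpy as np

    def soft_threshold(z, tau):
        # closed-form solution of argmin_x { tau*||x||_1 + 0.5*||x - z||^2 }
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def sgd_step_l1(x, a_i, b_i, eta, lam):
        # stochastic gradient of the logistic component f_i at x
        grad_fi = -b_i * a_i / (1.0 + np.exp(b_i * a_i.dot(x)))
        # gradient step on f_{i_t}, then the l1 regularizer handled via its proximal map
        return soft_threshold(x - eta * grad_fi, eta * lam)

    rng = np.random.default_rng(1)
    a_i, b_i = rng.standard_normal(5), 1.0
    print(sgd_step_l1(np.zeros(5), a_i, b_i, eta=0.1, lam=0.01))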

As representative methods of stochastic variance reduced optimization, SVRG [9] and its proximal variant, Prox-SVRG [35], are particularly attractive because of their low storage requirement compared with [23, 27, 6, 28], which need to store all the gradients of the $n$ component functions (or dual variables), so that $O(nd)$ storage is required in general problems. At the beginning of each epoch of SVRG, the full gradient $\tilde{\mu}=\nabla f(\tilde{x})=\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(\tilde{x})$ is computed at the snapshot point $\tilde{x}$. With a constant step size $\eta$, the update rules for the special case of Problem (1) (i.e., $g(x)\equiv 0$) are given by

$\widetilde{\nabla} f_{i_t}(x_{t-1}) = \nabla f_{i_t}(x_{t-1}) - \nabla f_{i_t}(\tilde{x}) + \tilde{\mu}, \qquad x_{t} = x_{t-1} - \eta\,\widetilde{\nabla} f_{i_t}(x_{t-1}).$    (3)
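The SVRG update (3) can be sketched in Python as follows (a toy least-squares instance of our own; the epoch length and step size are illustrative):

    import numpy as np

    def svrg_epoch(x_tilde, grad_i, n, eta, m, rng):
        # grad_i(i, x) returns the gradient of the i-th component f_i at x
        mu = np.mean([grad_i(i, x_tilde) for i in range(n)], axis=0)  # full gradient at the snapshot
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced gradient estimator: unbiased, and its variance shrinks
            # as both x and x_tilde approach the optimum
            v = grad_i(i, x) - grad_i(i, x_tilde) + mu
            x = x - eta * v
        return x

    # toy usage: f_i(x) = 0.5*(a_i^T x - b_i)^2
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10)); b = rng.standard_normal(200)
    grad_i = lambda i, x: (A[i].dot(x) - b[i]) * A[i]
    x = svrg_epoch(np.zeros(10), grad_i, n=200, eta=0.01, m=400, rng=rng)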

[4] proposed an accelerated SVRG method, SVRG++, with doubling-epoch techniques. Moreover, Katyusha [1] is a direct accelerated stochastic variance reduction method, and its main update rules are formulated as follows:

$x_{t+1} = \tau_1 z_t + \tau_2 \tilde{x} + (1-\tau_1-\tau_2)\, y_t,$
$y_{t+1} = \arg\min_{y}\big\{ \tfrac{3L}{2}\|y - x_{t+1}\|^{2} + \langle \widetilde{\nabla} f_{i_t}(x_{t+1}),\, y\rangle + g(y) \big\},$    (4)
$z_{t+1} = \arg\min_{z}\big\{ \tfrac{1}{2\alpha}\|z - z_t\|^{2} + \langle \widetilde{\nabla} f_{i_t}(x_{t+1}),\, z\rangle + g(z) \big\},$

where $\tau_1,\tau_2\in[0,1]$ are two momentum parameters, and $\tau_2$ is fixed to $1/2$ in [1] to eliminate the need for parameter tuning.

3 Fast SVRG with Momentum Acceleration

In this paper, we propose a fast stochastic variance reduced gradient (FSVRG) method with momentum acceleration for Cases 1 and 2 (e.g., logistic regression) and Cases 3 and 4 (e.g., SVM). The acceleration techniques of classical Nesterov momentum and of the Katyusha momentum in [1] are incorporated explicitly into the well-known SVRG method [9]. Moreover, FSVRG also uses a growing epoch size strategy as in [16] to speed up its convergence.

3.1 Smooth Component Functions

In this part, we consider the case of Problem (1) in which each $f_i(\cdot)$ is smooth, and $F(\cdot)$ is SC or NSC (i.e., Case 1 or 2). Similar to existing stochastic variance reduced methods such as SVRG [9] and Prox-SVRG [35], we design a simple fast stochastic variance reduction algorithm with momentum acceleration for smooth objective functions, as outlined in Algorithm 1. Algorithm 1 is divided into $S$ epochs (as are most variance reduced methods, e.g., SVRG and Katyusha), and the $s$-th epoch consists of $m_s$ stochastic updates, where the epoch size is grown as $m_{s+1}=\lceil\rho\, m_s\rceil$ as in [16], with a given initial value $m_1$ and a constant $\rho>1$. Within each epoch, a full gradient is calculated at the snapshot point $\tilde{x}^{s-1}$. Note that we choose $\tilde{x}^{s}$ to be the average of the past $m_s$ stochastic iterates rather than the last iterate, because the average has been reported to work better in practice [35, 4, 1]. Although our convergence guarantee for the SC case depends on the initialization, other natural choices also work well in practice, especially when the regularization parameter is relatively small, as suggested in [31].

Input:  the number of epochs $S$, the step size $\eta$, the initial epoch size $m_1$, and the constant $\rho>1$.
Initialize:  $\tilde{x}^{0}$ and the momentum weight $\theta_1$.
1:  for $s=1,2,\dots,S$ do
2:     $\tilde{\mu}=\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(\tilde{x}^{s-1})$,  $x^{s}_{0}=y^{s}_{0}=\tilde{x}^{s-1}$;
3:     for $t=1,2,\dots,m_s$ do
4:        Pick $i_t$ uniformly at random from $\{1,\dots,n\}$;
5:        $\widetilde{\nabla} f_{i_t}(x^{s}_{t-1})=\nabla f_{i_t}(x^{s}_{t-1})-\nabla f_{i_t}(\tilde{x}^{s-1})+\tilde{\mu}$;
6:        Update $y^{s}_{t}$ by (5) (or by (6) when the regularizer is non-smooth);
7:        Update $x^{s}_{t}$ by the momentum rule (8);
8:     end for
9:     $\tilde{x}^{s}=\frac{1}{m_s}\sum_{t=1}^{m_s}x^{s}_{t}$,  $m_{s+1}=\lceil\rho\, m_s\rceil$;
10:  end for
Output:  $\tilde{x}^{S}$
Algorithm 1 FSVRG for smooth component functions

3.1.1 Momentum Acceleration

When the regularizer $g(x)$ is smooth, e.g., the $\ell_2$-norm regularizer, the update rule of the auxiliary variable $y$ is

$y^{s}_{t} = y^{s}_{t-1} - \eta\big[\widetilde{\nabla} f_{i_t}(x^{s}_{t-1}) + \nabla g(x^{s}_{t-1})\big].$    (5)

When $g(x)$ is non-smooth, e.g., the $\ell_1$-norm regularizer, the update rule of $y$ is given as follows:

$y^{s}_{t} = \mathrm{prox}_{\eta g}\big(y^{s}_{t-1} - \eta\,\widetilde{\nabla} f_{i_t}(x^{s}_{t-1})\big),$    (6)

and the proximal operator $\mathrm{prox}_{\eta g}(\cdot)$ is defined as

$\mathrm{prox}_{\eta g}(y) = \arg\min_{x}\Big\{ \tfrac{1}{2\eta}\|x - y\|^{2} + g(x) \Big\}.$    (7)

That is, we only need to replace the update rule (5) in Algorithm 1 with (6) for the case of non-smooth regularizers.

Inspired by the momentum acceleration trick for accelerating first-order optimization methods [17, 20, 1], we design the following update rule for $x$:

$x^{s}_{t} = \tilde{x}^{s-1} + \theta_s\big(y^{s}_{t} - \tilde{x}^{s-1}\big),$    (8)

where $\theta_s\in(0,1]$ is the weight of the key momentum term. The first term on the right-hand side of (8) is the snapshot point of the last epoch (also called the Katyusha momentum in [1]), and the second term plays a key role analogous to Nesterov's momentum in deterministic optimization.

When $\theta_s\equiv 1$ and $\rho=2$, Algorithm 1 degenerates to the accelerated SVRG method SVRG++ [4]. In other words, SVRG++ can be viewed as a special case of our FSVRG method. As shown above, FSVRG has only one additional variable $y$, while existing accelerated stochastic variance reduction methods, e.g., Katyusha [1], require two additional variables $y$ and $z$, as shown in (4). In addition, FSVRG has only one momentum weight $\theta_s$, compared with the two weights $\tau_1$ and $\tau_2$ in Katyusha [1]. Therefore, FSVRG is much simpler than the existing accelerated methods [1, 8].
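To illustrate how the single auxiliary variable and single momentum weight fit together, the sketch below runs FSVRG-style epochs assuming the forms of (5), (6), and (8) as reconstructed above; the $\ell_1$ proximal choice, parameter values, and toy data are our assumptions rather than the authors' reference implementation:

    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def fsvrg_epoch(x_tilde, grad_i, n, eta, theta, m, lam_l1, rng):
        # full gradient at the snapshot point (computed once per epoch)
        mu = np.mean([grad_i(i, x_tilde) for i in range(n)], axis=0)
        x = x_tilde.copy()
        y = x_tilde.copy()
        x_sum = np.zeros_like(x_tilde)
        for _ in range(m):
            i = rng.integers(n)
            v = grad_i(i, x) - grad_i(i, x_tilde) + mu      # variance-reduced gradient, cf. (3)
            y = soft_threshold(y - eta * v, eta * lam_l1)   # proximal update of y, cf. (6)
            x = x_tilde + theta * (y - x_tilde)             # single-weight momentum update, cf. (8)
            x_sum += x
        return x_sum / m                                    # snapshot = average of the iterates

    # growing epoch size m_{s+1} = ceil(rho * m_s); data, eta, theta, rho are illustrative
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 20)); b = rng.standard_normal(500)
    grad_i = lambda i, x: (A[i].dot(x) - b[i]) * A[i]
    x_tilde, m, rho, theta = np.zeros(20), 100, 1.5, 0.9
    for s in range(5):
        x_tilde = fsvrg_epoch(x_tilde, grad_i, 500, eta=0.005, theta=theta, m=m, lam_l1=1e-3, rng=rng)
        m = int(np.ceil(rho * m))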

3.1.2 Momentum Weight

For the case of SC objectives, we give a scheme for selecting the momentum weight $\theta_s$. As shown in Theorem 1 below, it is desirable to have a small convergence factor $\alpha_s$, implying fast convergence. The following proposition gives the optimal $\theta_s$, which yields the smallest convergence factor.

Proposition 1

Given an appropriate learning rate $\eta$, the optimal momentum weight $\theta_s$ is given by

(9)

Proof sketch: using Theorem 1 below, the convergence factor $\alpha_s$ can be written as a function of $\theta_s$; minimizing $\alpha_s$ with respect to $\theta_s$ for the given $\eta$ yields the expression in (9).

In fact, we can fix $\theta_s$ to a constant value for the case of SC objectives, as in accelerated SGD [24], which works well in practice. Indeed, larger values of $\theta_s$ can result in better performance when the regularization parameter is relatively large.

Unlike the SC case, we update the momentum weight $\theta_s$ at the beginning of each epoch for the case of NSC objectives. The update rule of $\theta_s$ is defined as follows: $\theta_1$ is given, and for each subsequent epoch,

(10)

The above rule is the same as that used in some accelerated optimization methods [18, 32, 15].

3.1.3 Complexity Analysis

The per-iteration cost of FSVRG is dominated by the computation of $\nabla f_{i_t}(x^{s}_{t-1})$, $\nabla f_{i_t}(\tilde{x}^{s-1})$, and $\nabla g(x^{s}_{t-1})$ or the proximal update (6), which is as low as that of SVRG [9] and SVRG++ [4]. For some ERM problems, we can save the intermediate gradients $\nabla f_i(\tilde{x}^{s-1})$ computed for the full gradient, which requires $O(n)$ additional storage in general. In addition, FSVRG has a much lower per-iteration complexity than other accelerated methods such as Katyusha [1], which have at least one more variable, as analyzed above.
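For ERM with linear predictors, each snapshot gradient is a scalar multiple of the corresponding data point, so only $n$ scalars need to be cached rather than $n$ full $d$-dimensional gradients. A small sketch of this storage-saving trick (our illustration, for the logistic loss):

    import numpy as np

    def cache_snapshot_scalars(A, b, x_tilde):
        # for the logistic loss, grad f_i(x) = s_i(x) * a_i with
        # s_i(x) = -b_i / (1 + exp(b_i * a_i^T x)); caching s_i(x_tilde) costs O(n) memory
        return -b / (1.0 + np.exp(b * (A @ x_tilde)))

    def snapshot_gradient(A, i, cached_s):
        # reconstruct grad f_i(x_tilde) from the cached scalar instead of recomputing the loss
        return cached_s[i] * A[i]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 50)); b = np.sign(rng.standard_normal(1000))
    s = cache_snapshot_scalars(A, b, np.zeros(50))
    print(snapshot_gradient(A, 3, s)[:5])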

3.2 Non-Smooth Component Functions

In this part, we consider the case of Problem (1) in which each $f_i(x)$ is non-smooth (e.g., the hinge loss and the other loss functions listed in [37]), and $F(x)$ is SC or NSC (i.e., Case 3 or 4). As stated in Section 2, these two classes of problems can be transformed into smooth ones as in [19, 2, 36], which can then be efficiently solved by Algorithm 1. However, the smoothing techniques may degrade the performance of the involved algorithms, similar to the case of the reduction from NSC problems to SC problems [2]. Thus, we extend Algorithm 1 to the non-smooth setting and propose a fast stochastic variance reduced sub-gradient algorithm (i.e., Algorithm 2) to solve such problems directly, just as Algorithm 1 directly solves the NSC problems in Case 2.

For each outer iteration $s$ and inner iteration $t$, we denote by $\widetilde{\partial} f_{i_t}(x^{s}_{t-1})$ the stochastic sub-gradient estimator $\widetilde{\partial} f_{i_t}(x^{s}_{t-1}) = \partial f_{i_t}(x^{s}_{t-1}) - \partial f_{i_t}(\tilde{x}^{s-1}) + \tilde{\xi}$, where $\tilde{\xi}=\frac{1}{n}\sum_{i=1}^{n}\partial f_i(\tilde{x}^{s-1})$, and $\partial f_i(x)$ denotes a sub-gradient of $f_i(\cdot)$ at $x$. When the regularizer $g(x)$ is smooth, the update rule of $y$ is given by

$y^{s}_{t} = \Pi_{\Omega}\Big(y^{s}_{t-1} - \eta\big[\widetilde{\partial} f_{i_t}(x^{s}_{t-1}) + \nabla g(x^{s}_{t-1})\big]\Big),$    (11)

where $\Pi_{\Omega}(\cdot)$ denotes the orthogonal projection onto the convex domain $\Omega$. Following the acceleration techniques for stochastic sub-gradient methods [21, 11, 30], a general weighted averaging scheme is formulated as follows:

$\tilde{x}^{s} = \frac{\sum_{t=1}^{m_s} w_t\, x^{s}_{t}}{\sum_{t=1}^{m_s} w_t},$    (12)

where $w_t\geq 0$ is a given weight.
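A minimal sketch of the weighted averaging in (12); the linearly growing weights used below are one common illustrative choice, not necessarily the weights used in the paper:

    import numpy as np

    def weighted_average(iterates, weights=None):
        # tilde_x = sum_t w_t * x_t / sum_t w_t; later iterates can be weighted more heavily
        X = np.asarray(iterates)
        w = np.arange(1, len(X) + 1, dtype=float) if weights is None else np.asarray(weights, float)
        return (w[:, None] * X).sum(axis=0) / w.sum()

    iterates = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.25, 0.75])]
    print(weighted_average(iterates))               # linearly growing weights
    print(weighted_average(iterates, np.ones(3)))   # uniform average as a special case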

Input:  the number of epochs $S$, the step size $\eta$, the initial epoch size $m_1$, the constant $\rho>1$, and the weights $\{w_t\}$.
Initialize:  $\tilde{x}^{0}$ and the momentum weight $\theta_1$.
1:  for $s=1,2,\dots,S$ do
2:     $\tilde{\xi}=\frac{1}{n}\sum_{i=1}^{n}\partial f_i(\tilde{x}^{s-1})$,  $x^{s}_{0}=y^{s}_{0}=\tilde{x}^{s-1}$;
3:     for $t=1,2,\dots,m_s$ do
4:        Pick $i_t$ uniformly at random from $\{1,\dots,n\}$;
5:        $\widetilde{\partial} f_{i_t}(x^{s}_{t-1})=\partial f_{i_t}(x^{s}_{t-1})-\partial f_{i_t}(\tilde{x}^{s-1})+\tilde{\xi}$;
6:        Update $y^{s}_{t}$ by the projected sub-gradient rule (11);
7:        Update $x^{s}_{t}$ by the momentum rule (8);
8:     end for
9:     $\tilde{x}^{s}=\sum_{t=1}^{m_s}w_t\,x^{s}_{t}\big/\sum_{t=1}^{m_s}w_t$,  $m_{s+1}=\lceil\rho\, m_s\rceil$;
10:  end for
Output:  $\tilde{x}^{S}$
Algorithm 2 FSVRG for non-smooth component functions

4 Convergence Analysis

In this section, we provide the convergence analysis of FSVRG for solving the two classes of problems in Cases 1 and 2. Before giving a key intermediate result, we first introduce the following two definitions.

Definition 1 (Smoothness)

A function $f_i:\mathbb{R}^d\to\mathbb{R}$ is $L$-smooth if its gradient is $L$-Lipschitz, that is, $\|\nabla f_i(x)-\nabla f_i(y)\|\leq L\|x-y\|$ for all $x,y\in\mathbb{R}^d$.

Definition 2 (Strong Convexity)

A function $F:\mathbb{R}^d\to\mathbb{R}$ is $\mu$-strongly convex ($\mu$-SC) if there exists a constant $\mu>0$ such that for any $x,y\in\mathbb{R}^d$,

$F(y) \;\geq\; F(x) + \langle \nabla F(x),\, y-x\rangle + \tfrac{\mu}{2}\|y-x\|^{2}.$    (13)

If $F(\cdot)$ is non-smooth, we can revise the inequality (13) by simply replacing $\nabla F(x)$ with an arbitrary sub-gradient $\partial F(x)$.

Lemma 1

Suppose each component function $f_i(\cdot)$ is $L$-smooth. Let $x^{\star}$ be an optimal solution of Problem (1), and let $\{x^{s}_{t}\}$ be the sequence generated by Algorithm 1. Then the following inequality holds for all $s\geq 1$:

(14)

The detailed proof of Lemma 1 is provided in the Appendix. To prove Lemma 1, we first give the following lemmas, which are useful for the convergence analysis of FSVRG.

Lemma 2 (Variance bound, [1])

Suppose each function $f_i(\cdot)$ is $L$-smooth. Then the following inequality holds:

$\mathbb{E}\big[\|\widetilde{\nabla} f_{i_t}(x^{s}_{t-1}) - \nabla f(x^{s}_{t-1})\|^{2}\big] \;\leq\; 2L\big[f(\tilde{x}^{s-1}) - f(x^{s}_{t-1}) - \langle \nabla f(x^{s}_{t-1}),\, \tilde{x}^{s-1} - x^{s}_{t-1}\rangle\big].$

Lemma 2 is essentially identical to Lemma 3.4 in [1]. This lemma provides a tighter upper bound on the expected variance of the variance-reduced gradient estimator than those of [35, 4], e.g., Corollary 3.5 in [35].
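The bound in Lemma 2 can also be checked numerically. The sketch below (our illustration on a toy least-squares instance) computes the exact expectation, over the sampled index, of the squared deviation of the variance-reduced estimator from the full gradient and compares it with the right-hand side of the bound:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 500, 10
    A = rng.standard_normal((n, d)); b = rng.standard_normal(n)

    f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)          # f_i(x) = 0.5*(a_i^T x - b_i)^2
    grad_f = lambda x: A.T @ (A @ x - b) / n
    grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
    L = np.max(np.sum(A ** 2, axis=1))                     # smoothness constant of each f_i

    x_tilde = rng.standard_normal(d)                       # snapshot point
    x = x_tilde + 0.1 * rng.standard_normal(d)             # current iterate
    mu = grad_f(x_tilde)

    # exact expectation over i of || vr-estimator - full gradient ||^2
    devs = [np.linalg.norm(grad_i(i, x) - grad_i(i, x_tilde) + mu - grad_f(x)) ** 2 for i in range(n)]
    lhs = np.mean(devs)
    rhs = 2 * L * (f(x_tilde) - f(x) - grad_f(x) @ (x_tilde - x))
    print(lhs <= rhs, lhs, rhs)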

Lemma 3 (3-point property, [12])

Assume that $z^{\star}$ is an optimal solution of the following problem,

$\min_{z}\ \tfrac{\tau}{2}\|z - z_0\|^{2} + \psi(z),$

where $\psi(z)$ is a convex function (but possibly non-differentiable). Then for any $z\in\mathbb{R}^d$, we have

$\psi(z) + \tfrac{\tau}{2}\|z - z_0\|^{2} \;\geq\; \psi(z^{\star}) + \tfrac{\tau}{2}\|z^{\star} - z_0\|^{2} + \tfrac{\tau}{2}\|z - z^{\star}\|^{2}.$

4.1 Convergence Properties for Case 1

For SC objectives with smooth component functions (i.e., Case 1), we analyze the convergence property of FSVRG.

Theorem 1 (Strongly Convex)

Suppose each $f_i(\cdot)$ is $L$-smooth, $F(\cdot)$ is $\mu$-SC, $\theta_s$ is a constant for Case 1, and $m_1$ is sufficiently large (if $m_1$ is not sufficiently large, the first epoch can be viewed as an initialization step) so that the convergence factor $\alpha_s$ appearing in (15) is smaller than 1.

Then Algorithm 1 has the following convergence in expectation:

(15)

Since $F(\cdot)$ is $\mu$-SC, there exists a constant $\mu>0$ such that for all $x\in\mathbb{R}^d$,

$F(x) \;\geq\; F(x^{\star}) + \langle \partial F(x^{\star}),\, x - x^{\star}\rangle + \tfrac{\mu}{2}\|x - x^{\star}\|^{2}.$

Since $x^{\star}$ is the optimal solution, we have

$F(x) - F(x^{\star}) \;\geq\; \tfrac{\mu}{2}\|x - x^{\star}\|^{2}.$    (16)

Using the inequality in (16) together with Lemma 1, we obtain the bound in (15), where the first inequality in the derivation holds due to Lemma 1, and the second inequality follows from the inequality in (16).

From Theorem 1, it is clear that the convergence factor $\alpha_s$ decreases as $m_s$ increases. Therefore, there exists a positive constant $\alpha<1$ such that $\alpha_s\leq\alpha$ for all $s$. Then the inequality in (15) can be rewritten as $\mathbb{E}\big[F(\tilde{x}^{s})-F(x^{\star})\big]\leq \alpha^{s}\big[F(\tilde{x}^{0})-F(x^{\star})\big]$, which implies that FSVRG attains linear (geometric) convergence.

4.2 Convergence Properties for Case 2

For NSC objectives with smooth component functions (i.e., Case 2), the following theorem gives the convergence rate and overall complexity of FSVRG.

Theorem 2 (Non-Strongly Convex)

Suppose each $f_i(\cdot)$ is $L$-smooth. Then the following inequality holds:

In particular, with a suitable choice of the initial epoch size $m_1$ and step size $\eta$, Algorithm 1 achieves an $\epsilon$-accurate solution, i.e., $\mathbb{E}\big[F(\tilde{x}^{S})\big]-F(x^{\star})\leq\epsilon$, using a number of iterations that matches the best-known result in [1].

Using the update rule of $\theta_s$ in (10), it is easy to verify that

(17)

Dividing both sides of the inequality in (14) by $\theta_s^{2}$ yields an inequality that holds for every epoch. Combining it with the inequality in (17) and summing over all epochs, the intermediate terms telescope, which gives the bound stated in Theorem 2. This completes the proof.

Figure 1: Comparison of SVRG [9], SVRG++ [4], Katyusha [1], and our FSVRG method for $\ell_2$-norm regularized logistic regression problems on the IJCNN, Protein, Covtype, and SUSY datasets. The y-axis represents the objective value minus the minimum, and the x-axis corresponds to the number of effective passes (top) or running time (bottom).

From Theorem 2, we can see that FSVRG achieves the optimal convergence rate of $O(1/T^{2})$ for NSC problems, with an overall complexity consistent with the best-known results in [1, 8]. By adding a proximal term to the problem of Case 2 as in [14, 2], one can achieve faster convergence. However, this reduction hurts the performance of the algorithm both in theory and in practice [4].

Figure 2: Comparison of Prox-SVRG [35], SVRG++ [4], Katyusha [1], and our FSVRG method for elastic-net regularized logistic regression problems on the IJCNN, Protein, Covtype, and SUSY datasets.

4.3 Convergence Properties for Mini-Batch Settings

It has been shown in [26, 20, 10] that mini-batching can effectively decrease the variance of stochastic gradient estimates. We therefore extend FSVRG and its convergence results to the mini-batch setting. Here, we denote by $b$ the mini-batch size and by $I_t$ the random index set selected for each outer iteration $s$ and inner iteration $t$. Then the stochastic gradient estimator becomes

$\widetilde{\nabla} f_{I_t}(x^{s}_{t-1}) \;=\; \frac{1}{b}\sum_{i\in I_t}\big[\nabla f_{i}(x^{s}_{t-1})-\nabla f_{i}(\tilde{x}^{s-1})\big] + \tilde{\mu}.$    (18)
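A sketch of the mini-batch estimator in (18), together with the factor $\delta(b)=\frac{n-b}{b(n-1)}$ as reconstructed here (the toy data and batch size are illustrative):

    import numpy as np

    def minibatch_vr_gradient(x, x_tilde, mu, grads, I):
        # grads(i, x) returns grad f_i(x); I is the sampled index set of size b
        diffs = [grads(i, x) - grads(i, x_tilde) for i in I]
        return np.mean(diffs, axis=0) + mu

    def delta(b, n):
        # variance-reduction factor: delta(1) = 1 (no reduction), delta(n) = 0 (full gradient)
        return (n - b) / (b * (n - 1))

    rng = np.random.default_rng(0)
    n, d, batch = 200, 8, 16
    A = rng.standard_normal((n, d)); y = rng.standard_normal(n)
    grads = lambda i, x: (A[i] @ x - y[i]) * A[i]
    x_tilde = np.zeros(d); x = 0.1 * rng.standard_normal(d)
    mu = A.T @ (A @ x_tilde - y) / n
    I = rng.choice(n, size=batch, replace=False)
    print(minibatch_vr_gradient(x, x_tilde, mu, grads, I)[:3], delta(batch, n))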

The momentum weight $\theta_s$ is then required to satisfy a condition involving the factor $\delta(b)=\frac{n-b}{b(n-1)}$ for the SC and NSC cases. The upper bound on the variance of $\widetilde{\nabla} f_{I_t}(x^{s}_{t-1})$ in Lemma 2 is extended to the mini-batch setting as follows [15].

Corollary 1 (Variance bound, mini-batch setting)

$\mathbb{E}\big[\|\widetilde{\nabla} f_{I_t}(x^{s}_{t-1})-\nabla f(x^{s}_{t-1})\|^{2}\big] \;\leq\; 2L\,\delta(b)\big[f(\tilde{x}^{s-1})-f(x^{s}_{t-1})-\langle\nabla f(x^{s}_{t-1}),\,\tilde{x}^{s-1}-x^{s}_{t-1}\rangle\big].$

It is easy to verify that $0\leq\delta(b)\leq 1$ and that $\delta(b)$ decreases as $b$ grows, which implies that mini-batching is able to reduce the variance upper bound in Lemma 2. Based on the variance upper bound in Corollary 1, we further analyze the convergence properties of our algorithms in the mini-batch setting. Obviously, the number of stochastic iterations in each epoch is reduced from $m_s$ to $\lceil m_s/b\rceil$. For the case of SC objective functions, the mini-batch variant of FSVRG has almost identical convergence properties to those in Theorem 1. In contrast, we need to initialize and update the momentum weight $\theta_s$ by the procedure in (10) for the case of NSC objective functions. Theorem 2 is also extended to the mini-batch setting as follows.

Corollary 2

Suppose each $f_i(\cdot)$ is $L$-smooth, and let the step size and momentum weights be chosen as in Theorem 2. Then the following inequality holds:

(19)

Substituting the mini-batch variance bound of Corollary 1 into the argument used for Theorem 2, we obtain the inequality in (19). This completes the proof.

Remark 1

When $b=1$, we have $\delta(1)=1$, and then Corollary 2 degenerates to Theorem 2. If $b=n$ (i.e., the batch setting), we have $\delta(n)=0$, and the second term on the right-hand side of (19) vanishes. In other words, FSVRG degenerates to an accelerated deterministic method with the optimal convergence rate of $O(1/T^{2})$.