Stop Wasting My Gradients: Practical SVRG

We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method.


1 Introduction

We consider the problem of optimizing the average of a finite but large sum of smooth functions,

$$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x). \qquad (1)$$

A huge proportion of the model-fitting procedures in machine learning can be mapped to this problem. This includes classic models like least squares and logistic regression, but also more advanced methods like conditional random fields and deep neural network models. In the high-dimensional setting (large $d$), the traditional approaches for solving (1) are: full gradient (FG) methods, which have linear convergence rates but need to evaluate the gradient $f_i'$ for all $n$ examples on every iteration, and stochastic gradient (SG) methods, which make rapid initial progress as they only use a single gradient on each iteration but ultimately have slower sublinear convergence rates.

Le Roux et al. [1] proposed the first general method, stochastic average gradient (SAG), that only considers one training example on each iteration but still achieves a linear convergence rate. Other methods have subsequently been shown to have this property [2, 3, 4], but these all require storing a previous evaluation of the gradient or of the dual variables for each example $i$. For many objectives this only requires $O(n)$ space, but for general problems it requires $O(nd)$ space, making them impractical.

Recently, several methods have been proposed with similar convergence rates to SAG but without the memory requirements [5, 6, 7, 8]. They are known as mixed gradient, stochastic variance-reduced gradient (SVRG), and semi-stochastic gradient methods (we will use SVRG). We give a canonical SVRG algorithm in the next section, but the salient features of these methods are that they evaluate two gradients on each iteration and occasionally must compute the gradient on all examples. SVRG methods often dramatically outperform classic FG and SG methods, but these extra evaluations mean that SVRG is slower than SG methods in the important early iterations. They also mean that SVRG methods are typically slower than memory-based methods like SAG.

In this work we first show that SVRG is robust to inexact calculation of the full gradients it requires (§3), provided the accuracy increases over time. We use this to explore growing-batch strategies that require fewer gradient evaluations when far from the solution, and we propose a mixed SG/SVRG method that may improve performance in the early iterations (§4). We next explore using support vectors to reduce the number of gradients required when close to the solution (§5), give a justification for the regularized SVRG update that is commonly used in practice (§6), consider alternative mini-batch strategies (§7), and finally consider the generalization error of the method (§8).

2 Notation and SVRG Algorithm

SVRG assumes $f$ is $\mu$-strongly convex, each $f_i$ is convex, and each gradient $f_i'$ is Lipschitz-continuous with constant $L$. The method begins with an initial estimate $x^0$, sets $x_0 = x^0$, and then generates a sequence of iterates $x_t$ using

$$x_t = x_{t-1} - \eta\big(f_{i_t}'(x_{t-1}) - f_{i_t}'(x^s) + \mu^s\big), \qquad (2)$$

where $\eta$ is the positive step size, we set $\mu^s = f'(x^s)$, and $i_t$ is chosen uniformly from $\{1, \dots, n\}$. After every $m$ steps, we set $x^{s+1} = x_t$ for a random $t \in \{0, \dots, m-1\}$, and we reset $x_0 = x^{s+1}$ with $t = 0$.

To analyze the convergence rate of SVRG, we will find it convenient to define the function

$$\rho(a, b) := \frac{1}{1 - 2ab}\left(\frac{1}{a m \mu} + 2ab\right),$$

as it appears repeatedly in our results. We will use $\rho(b)$ to indicate the value of $\rho(a, b)$ when $a = \eta$, and we will simply use $\rho$ for the special case $\rho(\eta, L)$. Johnson & Zhang [6] show that if $\eta$ and $m$ are chosen such that $\rho < 1$, the algorithm achieves a linear convergence rate of the form

$$\mathbb{E}\big[f(x^{s+1}) - f(x^*)\big] \le \rho\,\mathbb{E}\big[f(x^s) - f(x^*)\big],$$

where $x^*$ is the optimal solution. This convergence rate is very fast for appropriate $\eta$ and $m$. While this result relies on constants we may not know in general, practical choices with good empirical performance include setting $m$ to a multiple of $n$, using a step size on the order of $1/L$, and using $x^{s+1} = x_m$ rather than a random iterate.

Unfortunately, the SVRG algorithm requires $2m + n$ gradient evaluations for every $m$ iterations of (2), since updating $x_t$ requires two gradient evaluations and computing $\mu^s$ requires $n$ gradient evaluations. We can reduce this to $m + n$ if we store the gradients $f_i'(x^s)$, but this is not practical in most applications. Thus, SVRG requires many more gradient evaluations than the corresponding number of classic SG iterations or of memory-based methods like SAG.
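To make the structure of these updates concrete, the following is a minimal Python sketch of the basic SVRG loop described above, assuming we are given a callable `grad(x, i)` that returns $f_i'(x)$; the function and variable names are illustrative rather than taken from any particular implementation.

```python
import numpy as np

def svrg(grad, n, x0, eta, m, num_outer, seed=0):
    """Minimal SVRG sketch: num_outer outer loops of m inner updates each."""
    rng = np.random.default_rng(seed)
    x_snap = np.asarray(x0, dtype=float).copy()        # snapshot x^s
    for _ in range(num_outer):
        # Full pass to form mu^s = f'(x^s): n gradient evaluations.
        mu = np.mean([grad(x_snap, i) for i in range(n)], axis=0)
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)                         # i_t uniform on {0, ..., n-1}
            # Two gradient evaluations per inner iteration.
            d = grad(x, i) - grad(x_snap, i) + mu
            x = x - eta * d
        x_snap = x                                      # use x_m as the next snapshot
    return x_snap
```

With this template, the modifications considered below amount to changing how `mu` is formed (from a subsample), how `i` is drawn, or which branch of the inner update is taken.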

3 SVRG with Error

We first give a result for the SVRG method where we assume that $\mu^s$ is equal to $f'(x^s)$ up to some error $e^s$. This is in the spirit of the analysis of [9], who analyze FG methods under similar assumptions. We assume that $\|x_t - x^*\| \le Z$ for all $t$, which has been used in related work [10] and is reasonable because of the coercivity implied by strong convexity.

Proposition 1.

If $\mu^s = f'(x^s) + e^s$ and we set $\eta$ and $m$ so that $\rho < 1$, then the SVRG algorithm (2) with $x^{s+1}$ chosen randomly from $\{x_0, \dots, x_{m-1}\}$ satisfies

$$\mathbb{E}\big[f(x^{s+1}) - f(x^*)\big] \le \rho\,\mathbb{E}\big[f(x^s) - f(x^*)\big] + \frac{\eta\,\mathbb{E}\|e^s\|^2 + Z\,\mathbb{E}\|e^s\|}{1 - 2L\eta}.$$

We give the proof in Appendix A. This result implies that SVRG does not need a very accurate approximation of $\mu^s$ in the crucial early iterations, since the first term in the bound will dominate. Further, this result implies that we can maintain the exact convergence rate of SVRG as long as the errors $e^s$ decrease at an appropriate rate. For example, we obtain the same convergence rate provided that $\mathbb{E}\|e^s\| \le \gamma\tilde\rho^{\,s}$ for any $\gamma \ge 0$ and some $\tilde\rho < \rho$. Further, we still obtain a linear convergence rate as long as $\|e^s\|$ converges to zero with a linear convergence rate.

3.1 Non-Uniform Sampling

Xiao & Zhang [11] show that non-uniform sampling (NUS) improves the performance of SVRG. They assume each $f_i'$ is $L_i$-Lipschitz continuous, and sample $i_t = i$ with probability $L_i / (n\bar{L})$, where $\bar{L} = \frac{1}{n}\sum_{j=1}^{n} L_j$. The iteration is then changed to

$$x_t = x_{t-1} - \eta\left(\frac{\bar{L}}{L_{i_t}}\big(f_{i_t}'(x_{t-1}) - f_{i_t}'(x^s)\big) + \mu^s\right),$$

which maintains that the search direction is unbiased. In Appendix A, we show that if $\mu^s$ is computed with error $e^s$ for this algorithm, and if we set $\eta$ and $m$ so that $\rho(\bar{L}) < 1$, then we obtain a convergence rate of the same form as in Proposition 1 but with $\rho(\bar{L})$ in place of $\rho$ and $\bar{L}$ in place of $L$, which can be faster since the average $\bar{L}$ may be much smaller than the maximum value $L$.
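As a sketch of how the inner step changes under non-uniform sampling, assuming a vector `L` of per-example Lipschitz constants is available (illustrative names):

```python
import numpy as np

def svrg_nus_step(grad, x, x_snap, mu, eta, L, rng):
    """One inner step with Lipschitz sampling; mu is the (possibly inexact) mu^s."""
    p = L / L.sum()                        # sample i with probability L_i / sum_j L_j
    i = rng.choice(len(L), p=p)
    scale = L.mean() / L[i]                # rescale so the direction stays unbiased
    d = scale * (grad(x, i) - grad(x_snap, i)) + mu
    return x - eta * d
```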

  Input: initial vector $x^0$, update frequency $m$, learning rate $\eta$.
  for $s = 0, 1, 2, \dots$ do
     Choose batch size $B^s$
     $\mathcal{B}^s$ = $B^s$ elements sampled without replacement from $\{1, \dots, n\}$.
     $\mu^s = \frac{1}{B^s}\sum_{i \in \mathcal{B}^s} f_i'(x^s)$
     $x_0 = x^s$
     for $t = 1, \dots, m$ do
        Randomly pick $i_t \in \{1, \dots, n\}$
        $x_t = x_{t-1} - \eta\big(f_{i_t}'(x_{t-1}) - f_{i_t}'(x^s) + \mu^s\big)$   (*)
     end for
     option I: set $x^{s+1} = x_m$
     option II: set $x^{s+1} = x_t$ for random $t \in \{0, \dots, m-1\}$
  end for
Algorithm 1 Batching SVRG

3.2 SVRG with Batching

There are many ways we could allow an error in the calculation of $\mu^s$ to speed up the algorithm. For example, if evaluating each $f_i'$ involves solving an optimization problem, then we could solve this optimization problem inexactly; if we are fitting a graphical model with an iterative approximate inference method, we can terminate the iterations early to save time.

When the $f_i$ are simple but $n$ is large, a natural way to approximate $\mu^s = f'(x^s)$ is to use a subset (or 'batch') $\mathcal{B}^s$ of training examples chosen without replacement,

$$\mu^s = \frac{1}{|\mathcal{B}^s|}\sum_{i \in \mathcal{B}^s} f_i'(x^s).$$

The batch size $|\mathcal{B}^s|$ controls the error in the approximation, and we can drive the error to zero by increasing it to $n$. Existing SVRG methods correspond to the special case where $|\mathcal{B}^s| = n$ for all $s$.
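Such a sub-sampled estimate of $\mu^s$ is straightforward to form; a short numpy sketch (with illustrative names) is:

```python
import numpy as np

def approx_mu(grad, x_snap, n, batch_size, rng):
    """Estimate mu^s from a batch drawn without replacement."""
    batch = rng.choice(n, size=batch_size, replace=False)
    mu = np.mean([grad(x_snap, i) for i in batch], axis=0)
    return mu, set(batch.tolist())   # the batch itself is reused by Algorithm 2 below
```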

Algorithm 1 gives pseudo-code for an SVRG implementation that uses this sub-sampling strategy. If we assume that the sample variance of the norms of the gradients is bounded by $S^2$ for all $x^s$,

$$\frac{1}{n-1}\sum_{i=1}^{n} \big\|f_i'(x^s) - f'(x^s)\big\|^2 \le S^2,$$

then we have that [12, Chapter 2]

$$\mathbb{E}\|e^s\|^2 \le \frac{n - |\mathcal{B}^s|}{n\,|\mathcal{B}^s|}\, S^2.$$

So if we want $\mathbb{E}\|e^s\|^2 \le \gamma\alpha^s$, where $\gamma \ge 0$ is a constant and $\alpha < 1$, we need

$$|\mathcal{B}^s| \ge \frac{n S^2}{\gamma\alpha^s n + S^2}. \qquad (3)$$

If the batch size satisfies the above condition, then $\mathbb{E}\|e^s\|^2 \le \gamma\alpha^s$ and the convergence rate of SVRG is unchanged compared to using the full batch on all iterations.

The condition (3) guarantees a linear convergence rate under any exponentially-increasing sequence of batch sizes, the strategy suggested by [13] for classic SG methods. However, a tedious calculation shows that (3) has an inflection point (as a function of $s$) at the value of $s$ where $|\mathcal{B}^s| = n/2$. This was previously observed empirically [14, Figure 3], and occurs because we are sampling without replacement. This transition means we don't need to keep increasing the batch size exponentially: beyond the inflection point the required batch size grows more slowly as it approaches $n$.
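Putting the pieces together, the following is a sketch of Algorithm 1 with a simple doubling batch-size schedule; doubling is one practical instance of a growing-batch sequence (it is the schedule used by the Grow variant in the experiments of Section 9), and all names are illustrative.

```python
import numpy as np

def batching_svrg(grad, n, x0, eta, m, num_outer, b0=1, seed=0):
    """Sketch of Batching SVRG with a doubling batch-size schedule."""
    rng = np.random.default_rng(seed)
    x_snap = np.asarray(x0, dtype=float).copy()
    b = min(b0, n)
    for _ in range(num_outer):
        batch = rng.choice(n, size=b, replace=False)
        mu = np.mean([grad(x_snap, i) for i in batch], axis=0)   # inexact mu^s
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            x = x - eta * (grad(x, i) - grad(x_snap, i) + mu)
        x_snap = x
        b = min(2 * b, n)      # grow the batch so the error e^s shrinks over time
    return x_snap
```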

4 Mixed SG and SVRG Method

An approximate $\mu^s$ can drastically reduce the computational cost of the SVRG algorithm, but does not affect the two gradient evaluations required by each of the $m$ inner SVRG iterations. This factor of two is significant in the early iterations, since this is when stochastic methods make the most progress and when we typically see the largest reduction in the test error.

To reduce this factor, we can consider a mixed strategy: if $i_t$ is in the batch $\mathcal{B}^s$ then perform an SVRG iteration, but if $i_t$ is not in the current batch then use a classic SG iteration. We illustrate this modification in Algorithm 2. This modification allows the algorithm to take advantage of the rapid initial progress of SG, since it predominantly uses SG iterations when far from the solution. Below we give a convergence rate for this mixed strategy.

  Replace (*) in Algorithm 1 with the following lines:
  if $i_t \in \mathcal{B}^s$ then
     $x_t = x_{t-1} - \eta\big(f_{i_t}'(x_{t-1}) - f_{i_t}'(x^s) + \mu^s\big)$
  else
     $x_t = x_{t-1} - \eta\, f_{i_t}'(x_{t-1})$
  end if
Algorithm 2 Mixed SVRG and SG Method
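A sketch of the mixed inner update, assuming `batch_set` contains the indices used to form the current $\mu^s$ (illustrative names):

```python
def mixed_inner_step(grad, x, x_snap, mu, eta, i, batch_set):
    """SVRG step if f_i'(x^s) contributed to mu^s, otherwise a plain SG step."""
    if i in batch_set:
        return x - eta * (grad(x, i) - grad(x_snap, i) + mu)
    return x - eta * grad(x, i)
```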
Proposition 2.

Assume $\mu^s = f'(x^s) + e^s$, that we set $\eta$ and $m$ so that $\rho < 1$, and that the stochastic gradients are bounded, $\mathbb{E}\|f_{i_t}'(x)\|^2 \le \sigma^2$. Then Algorithm 2 satisfies a bound of the same form as Proposition 1, with an additional term that is proportional to the variance bound $\sigma^2$ and to the fraction of examples not included in the batch $\mathcal{B}^s$ (the exact statement is given in Appendix B).

We give the proof in Appendix B. The extra term depending on the variance $\sigma^2$ is typically the bottleneck for SG methods: classic SG methods require the step size to converge to zero because of this term. However, the mixed SG/SVRG method can keep the fast progress obtained from using a constant $\eta$, since the term depending on $\sigma^2$ converges to zero as the batch fraction $|\mathcal{B}^s|/n$ converges to one. Further, in regimes where this extra variance term is small relative to the current suboptimality, the mixed SG/SVRG method actually converges faster.

Sharing a single step size $\eta$ between the SG and SVRG iterations in Proposition 2 is sub-optimal. For example, if $x_t$ is close to $x^*$ and $i_t \notin \mathcal{B}^s$, then the SG iteration might actually take us far away from the minimizer. Thus, we may want to use a decreasing sequence of step sizes for the SG iterations. In Appendix B, we show that using a smaller step size for the SG iterations can improve the dependence on the error $e^s$ and the variance $\sigma^2$.

5 Using Support Vectors

Using a batch $\mathcal{B}^s$ decreases the number of gradient evaluations required when SVRG is far from the solution, but its benefit diminishes over time. However, for certain objectives we can further reduce the number of gradient evaluations by identifying support vectors. For example, consider minimizing the Huberized hinge loss (HSVM) with threshold $\varepsilon$ [15]. In terms of (1), $f_i(x)$ is the Huberized hinge loss applied to the margin $b_i a_i^\top x$ of training example $(a_i, b_i)$. The performance of this loss function is similar to logistic regression and the hinge loss, but it has the appealing properties of both: it is differentiable like logistic regression, meaning we can apply methods like SVRG, but it has support vectors like the hinge loss, meaning that many examples will have $f_i(x^*) = 0$ and $f_i'(x^*) = 0$. We can also construct Huberized variants of many non-smooth losses for regression and multi-class classification.
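For concreteness, the sketch below gives one common form of a Huberized hinge loss with smoothing threshold $\varepsilon$; the exact definition used in [15] may differ in its thresholds, but any such smoothed hinge has the flat region that creates support vectors.

```python
def huber_hinge(margin, eps=0.5):
    """Loss as a function of the margin tau = b_i * a_i^T x (one common smoothing)."""
    if margin >= 1.0 + eps:
        return 0.0                                    # non-support vector: zero loss
    if margin <= 1.0 - eps:
        return 1.0 - margin                           # linear, hinge-like region
    return (1.0 + eps - margin) ** 2 / (4.0 * eps)    # quadratic smoothing region

def huber_hinge_dmargin(margin, eps=0.5):
    """Derivative w.r.t. the margin; multiply by b_i * a_i for the gradient in x."""
    if margin >= 1.0 + eps:
        return 0.0                                    # zero gradient: can be skipped
    if margin <= 1.0 - eps:
        return -1.0
    return -(1.0 + eps - margin) / (2.0 * eps)
```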

If we knew the support vectors, i.e., the examples with $f_i'(x^*) \ne 0$, we could solve the problem faster by ignoring the non-support vectors. For example, if there are $n$ training examples but only $k$ support vectors in the optimal solution, we could solve the problem roughly $n/k$ times faster. While we typically don't know the support vectors, in this section we outline a heuristic that gives large practical improvements by trying to identify them as the algorithm runs.

Our heuristic has two components. The first component is maintaining a list of the non-support vectors at $x^s$. Specifically, we maintain a list of examples $i$ where $f_i'(x^s) = 0$. When SVRG picks an example $i_t$ that is part of this list, we know that $f_{i_t}'(x^s) = 0$ and thus the iteration only needs one gradient evaluation. This modification is not a heuristic, in that it still applies the exact SVRG algorithm. However, at best it can only cut the runtime in half.

The heuristic part of our strategy is to skip the evaluation of $f_i'(x_t)$ or $f_i'(x^s)$ if our evaluation of $f_i'$ has been zero more than two consecutive times (and to skip it an exponentially larger number of times each time it remains zero). Specifically, for each example $i$ we maintain two variables, $s_i$ (for 'skip') and $p_i$ (for 'pass'). Whenever we need to evaluate $f_i'$ for some $x_t$ or $x^s$, we run Algorithm 3, which may skip the evaluation. This strategy can lead to huge computational savings in later iterations if there are few support vectors, since many iterations will require no gradient evaluations.

  if $s_i \le 0$ then
     compute $f_i'(x)$.
     if $f_i'(x) = 0$ then
        $p_i = p_i + 1$. {Update the number of consecutive times $f_i'$ was zero.}
        $s_i = 2^{p_i}$. {Skip an exponential number of future evaluations if it remains zero.}
     else
        $p_i = 0$. {This could be a support vector, do not skip it next time.}
     end if
     return $f_i'(x)$.
  else
     $s_i = s_i - 1$. {In this case, we skip the evaluation.}
     return 0.
  end if
Algorithm 3 Heuristic for skipping evaluations of $f_i'$ at $x$
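A compact Python sketch of this bookkeeping, where `skip[i]` plays the role of $s_i$ and `passes[i]` the role of $p_i$ (illustrative names):

```python
import numpy as np

def maybe_grad(grad, x, i, skip, passes):
    """Evaluate f_i'(x), or skip it if example i has repeatedly had zero gradient."""
    if skip[i] <= 0:
        g = grad(x, i)
        if np.all(g == 0):
            passes[i] += 1                 # one more consecutive zero evaluation
            if passes[i] > 2:              # zero more than two consecutive times:
                skip[i] = 2 ** passes[i]   # skip exponentially many future evaluations
        else:
            passes[i] = 0                  # possible support vector: keep checking it
        return g
    skip[i] -= 1                           # skip this evaluation and assume zero
    return np.zeros_like(x)
```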

Identifying support vectors to speed up computation has long been an important part of SVM solvers, and is related to the classic shrinking heuristic [16]. While it has previously been explored in the context of dual coordinate ascent methods [17], this is the first work exploring it for linearly-convergent stochastic gradient methods.

6 Regularized SVRG

We are often interested in the special case where problem (1) has the decomposition

$$\min_{x \in \mathbb{R}^d} f(x) \equiv g(x) + \frac{1}{n}\sum_{i=1}^{n} h_i(x). \qquad (4)$$

A common choice of $g$ is a scaled $\ell_1$-norm of the parameter vector, $g(x) = \lambda\|x\|_1$. This non-smooth regularizer encourages sparsity in the parameter vector, and can be addressed with the proximal-SVRG method of Xiao & Zhang [11]. Alternately, if we want an explicit bound $Z$ on $\|x_t - x^*\|$ (as assumed in Section 3), we could set $g$ to the indicator function for a 2-norm ball containing $x^*$. In Appendix C, we give a variant of Proposition 1 that allows errors in the proximal-SVRG method for non-smooth/constrained settings like this.

Another common choice is the $\ell_2$-regularizer, $g(x) = \frac{\lambda}{2}\|x\|^2$. With this regularizer, the SVRG updates can be equivalently written in the form

$$x_t = (1 - \eta\lambda)\, x_{t-1} - \eta\big(h_{i_t}'(x_{t-1}) - h_{i_t}'(x^s) + \mu^s\big), \qquad (5)$$

where $\mu^s = \frac{1}{n}\sum_{i=1}^{n} h_i'(x^s)$. That is, they take an exact gradient step with respect to the regularizer and an SVRG step with respect to the $h_i$ functions. When the $h_i'$ are sparse, this form of the update allows us to implement the iteration without needing full-vector operations. A related update is used by Le Roux et al. to avoid full-vector operations in the SAG algorithm [1, §4]. In Appendix C, we prove the below convergence rate for this update.

Proposition 3.

Consider instances of problem (1) that can be written in the form (4), where $g'$ is $L_g$-Lipschitz continuous and each $h_i'$ is $L_h$-Lipschitz continuous, and assume that we set $\eta$ and $m$ so that $\rho(L_h) < 1$. Then the regularized SVRG iteration (5) achieves a linear convergence rate governed by $\rho(L_h)$ (the exact bound is given in Appendix C).

Since $L_h \le L$, and strictly so in the case of $\ell_2$-regularization, this result shows that for $\ell_2$-regularized problems SVRG actually converges faster than the standard analysis would indicate (a similar result appears in Konečný et al. [18]). Further, this result gives a theoretical justification for using the update (5) for other $g$ functions where it is not equivalent to the original SVRG method.
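A sketch of the update (5) for the $\ell_2$-regularized case, where `grad_h(x, i)` returns $h_i'(x)$ (illustrative names); when the $h_i'$ are sparse, the $(1-\eta\lambda)$ scaling can additionally be applied lazily so that only the nonzero coordinates are touched.

```python
def reg_svrg_step(grad_h, x, x_snap, mu, eta, lam, i):
    """Exact step on the l2 regularizer plus an SVRG step on the data term h_i."""
    return (1.0 - eta * lam) * x - eta * (grad_h(x, i) - grad_h(x_snap, i) + mu)
```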

7 Mini-Batching Strategies

Konečný et al. [18] have also recently considered using batches of data within SVRG. They consider using 'mini-batches' in the inner iteration (the update of $x_t$) to decrease the variance of the method, but still use full passes through the data to compute $\mu^s$. This prior work is thus complementary to the current work (in practice, both strategies can be used to improve performance). In Appendix D we show that sampling the inner mini-batch proportional to the Lipschitz constants $L_i$ achieves a linear convergence rate that improves as the mini-batch size $M$ grows (the precise rate and the assumptions it requires are given in Appendix D). This generalizes the standard rate of SVRG and improves on the result of Konečný et al. [18] in the smooth case. This rate can be faster than the rate of the standard SVRG method at the cost of a more expensive iteration, and may be clearly advantageous in settings where parallel computation allows us to compute several gradients simultaneously.

The regularized SVRG form (5) suggests an alternate mini-batch strategy for problem (1): consider a mini-batch that contains a 'fixed' set $\mathcal{B}_f$ and a 'random' set $\mathcal{B}_r$. Without loss of generality, assume that we sort the $f_i$ based on their Lipschitz constants so that $L_1 \ge L_2 \ge \cdots \ge L_n$. For the fixed set $\mathcal{B}_f$ we always choose the examples with the largest Lipschitz constants. In contrast, we choose the members of the random set $\mathcal{B}_r$ by sampling from the remaining examples proportional to their Lipschitz constants. In Appendix D, we show the following convergence rate for this strategy:

Proposition 4.

Let $\mathcal{B}_f$ denote the fixed set and $\mathcal{B}_r$ the random set described above. If we replace the SVRG update with a mini-batch update that, in analogy with (5), takes an exact gradient step over the examples in $\mathcal{B}_f$ and a variance-reduced stochastic step over the examples sampled in $\mathcal{B}_r$, then the method retains a linear convergence rate; the precise update, rate, and constants are given in Appendix D.

If the examples in the fixed set have substantially larger Lipschitz constants than the remaining examples, then we get a faster convergence rate than SVRG with a mini-batch of the same size; the scenario where this rate is slower than the existing mini-batch SVRG strategy is when the Lipschitz constants of the fixed examples are not much larger than those of the rest. But we could relax this assumption by dividing each element of the fixed set into two functions whose sum is the original function, keeping one part in the fixed set and adding the other part to the random set. This result may be relevant if we have access to a field-programmable gate array (FPGA) or graphics processing unit (GPU) that can compute the gradient for a fixed subset of the examples very efficiently. However, our experiments (Appendix F) indicate this strategy only gives marginal gains.
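A sketch of how the two parts of such a mini-batch could be built, under the reading above that the fixed set holds the examples with the largest Lipschitz constants and the random set is sampled from the rest proportional to $L_i$ (illustrative names):

```python
import numpy as np

def fixed_plus_random_batch(L, n_fixed, n_random, rng):
    """Return (fixed, random) index sets for the mini-batch strategy sketched above."""
    order = np.argsort(L)[::-1]               # indices sorted by decreasing L_i
    fixed = order[:n_fixed]                   # always include the largest-L_i examples
    rest = order[n_fixed:]
    p = L[rest] / L[rest].sum()               # sample the remainder proportional to L_i
    random_part = rng.choice(rest, size=n_random, replace=False, p=p)
    return fixed, random_part
```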

In Appendix F, we also consider constructing mini-batches by sampling proportional to the current function values $f_i(x)$ or gradient norms $\|f_i'(x)\|$. These seemed to work as well as Lipschitz sampling on all but one of the datasets in our experiments, and this strategy is appealing because we have access to these values while we may not know the $L_i$ values. However, these strategies diverged on one of the datasets.

8 Learning efficiency

In this section we compare the performance of SVRG as a large-scale learning algorithm to that of FG and SG methods. Following Bottou & Bousquet [19], we can decompose the generalization error $\mathcal{E}$ of a learning algorithm as the sum of three terms,

$$\mathcal{E} = \mathcal{E}_{\mathrm{app}} + \mathcal{E}_{\mathrm{est}} + \mathcal{E}_{\mathrm{opt}},$$

where the approximation error $\mathcal{E}_{\mathrm{app}}$ measures the effect of using a limited class of models, the estimation error $\mathcal{E}_{\mathrm{est}}$ measures the effect of using a finite training set, and the optimization error $\mathcal{E}_{\mathrm{opt}}$ measures the effect of inexactly solving problem (1). Bottou & Bousquet [19] study the asymptotic performance of various algorithms for a fixed approximation error and under certain conditions on the distribution of the data (characterized by two parameters in their analysis). In Appendix E, we discuss how SVRG can be analyzed in their framework. The table below includes SVRG among their results.

[Table: asymptotic time for FG, SG, and SVRG to reach optimization error $\mathcal{E}_{\mathrm{opt}} \le \epsilon$ and to reach total error $\mathcal{E} = O(\mathcal{E}_{\mathrm{app}} + \epsilon)$, compared with previous results; see Appendix E for the SVRG entries.]

In this table, the condition number is $\kappa = L/\mu$. In this setting, linearly-convergent stochastic gradient methods can obtain better bounds for ill-conditioned problems, with a better dependence on the dimension $d$ and without depending on the noise variance.

9 Experimental Results

Figure 1: Comparison of the training objective (left) and test error (right) on the spam dataset for the logistic regression (top) and HSVM (bottom) losses, under different batch strategies for choosing $\mathcal{B}^s$ (Full, Grow, and Mixed) and whether we attempt to identify support vectors (SV).

In this section, we present experimental results that evaluate our proposed variations on the SVRG method. We focus on logistic regression classification: given a set of training data $(a_i, b_i)$, where $a_i \in \mathbb{R}^d$ and $b_i \in \{-1, +1\}$, the goal is to find the $x$ solving

$$\min_{x \in \mathbb{R}^d} \frac{\lambda}{2}\|x\|^2 + \frac{1}{n}\sum_{i=1}^{n} \log\big(1 + \exp(-b_i a_i^\top x)\big).$$

We consider the datasets used by [1], whose properties are listed in the supplementary material. As in their work, we add a bias variable, normalize dense features, and set the regularization parameter to $\lambda = 1/n$. We used a step size of $\eta = 1/L$ and set $m = n$, which gave good performance across methods and datasets. In our first experiment, we compared three variants of SVRG: the original strategy that uses all $n$ examples to form $\mu^s$ (Full), a growing-batch strategy that doubles the batch size between outer iterations (Grow), and the mixed SG/SVRG method described by Algorithm 2 under this same choice (Mixed). While a variety of practical batching methods have been proposed in the literature [13, 20, 21], we did not find that any of these strategies consistently outperformed the doubling used by the simple Grow strategy. Our second experiment focused on the $\ell_2$-regularized HSVM on the same datasets, and we compared the original SVRG algorithm with variants that try to identify the support vectors (SV).
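For reference, a sketch of the objective and per-example gradient used in these experiments, with `A` the $n \times d$ data matrix, `b` the $\pm 1$ labels, and `lam` the regularization parameter (illustrative names):

```python
import numpy as np

def logistic_objective(x, A, b, lam):
    margins = b * (A @ x)
    return 0.5 * lam * (x @ x) + np.mean(np.logaddexp(0.0, -margins))

def logistic_grad_i(x, A, b, lam, i):
    margin = b[i] * (A[i] @ x)
    # d/dx log(1 + exp(-margin)) = -b_i * a_i / (1 + exp(margin))
    return lam * x - b[i] * A[i] / (1.0 + np.exp(margin))
```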

We plot the experimental results for one run of the algorithms on one dataset in Figure 1, while Appendix F reports results on the other datasets over different runs. In our results, the growing-batch strategy (Grow) always had better test error performance than using the full batch, while for large datasets it also performed substantially better in terms of the training objective. In contrast, the Mixed strategy sometimes helped performance and sometimes hurt it. Utilizing support vectors frequently improved the training objective, often by large margins, but its effect on the test objective was smaller.

10 Discussion

As SVRG is the only memory-free method among the new stochastic linearly-convergent methods, it represents the natural method to use for a huge variety of machine learning problems. In this work we showed that the convergence rate of the SVRG algorithm can be preserved even under an inexact approximation to the full gradient. We also showed that using mini-batches to approximate $\mu^s$ gives a natural way to do this, explored the use of support vectors to further reduce the number of gradient evaluations, gave an analysis of the regularized SVRG update, and considered several new mini-batch strategies. Our theoretical and experimental results indicate that many of these simple modifications should be considered in any practical implementation of SVRG.

Acknowledgements

We would like to thank the reviewers for their helpful comments. This research was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN 312176-2010, RGPIN 311661-08, RGPIN-06068-2015). Jakub Konečný is supported by a Google European Doctoral Fellowship.

Appendix A Convergence Rate of SVRG with Error

We first give the proof of Proposition 1, which gives a convergence rate for SVRG with an error and uniform sampling. We then turn to the case of non-uniform sampling.

A.1 Proof of Proposition 1

We follow a similar argument to Johnson & Zhang [6], but propagate the error $e^s$ through the analysis. We begin by deriving a simple bound on the variance of the sub-optimality of the gradients.

Lemma 1.

For any $x$,

$$\frac{1}{n}\sum_{i=1}^{n}\big\|f_i'(x) - f_i'(x^*)\big\|^2 \le 2L\big[f(x) - f(x^*)\big].$$

Proof.

Because each $f_i'$ is $L$-Lipschitz continuous, we have [22, Theorem 2.1.5]

$$\big\|f_i'(x) - f_i'(y)\big\|^2 \le 2L\big[f_i(x) - f_i(y) - \langle f_i'(y), x - y\rangle\big].$$

Setting $y = x^*$ and summing this inequality over all $i = 1, \dots, n$, the inner-product terms sum to zero because $f'(x^*) = 0$, and we obtain the result. ∎

In this section we use $e$ to denote the error $e^s$, $\mu$ to denote $\mu^s$, and $d_t$ to denote the search direction at iteration $t$,

$$d_t = f_{i_t}'(x_{t-1}) - f_{i_t}'(x^s) + \mu.$$

Note that $\mathbb{E}[d_t] = f'(x_{t-1}) + e$, and the next lemma bounds the variance of this value.

Lemma 2.

In each iteration of the inner loop,

$$\mathbb{E}\|d_t\|^2 \le 4L\big[f(x_{t-1}) - f(x^*)\big] + 4L\big[f(x^s) - f(x^*)\big] + 2\|e\|^2.$$

Proof.

By using the inequality $\|a + b\|^2 \le 2\|a\|^2 + 2\|b\|^2$ and the property $\mathbb{E}\big[f_{i_t}'(x^s) - f_{i_t}'(x^*)\big] = f'(x^s)$, we have

$$\mathbb{E}\|d_t\|^2 \le 2\,\mathbb{E}\big\|f_{i_t}'(x_{t-1}) - f_{i_t}'(x^*)\big\|^2 + 2\,\mathbb{E}\big\|f_{i_t}'(x^s) - f_{i_t}'(x^*) - f'(x^s)\big\|^2 + 2\|e\|^2.$$

If we now use that $\mathbb{E}\|X - \mathbb{E}X\|^2 \le \mathbb{E}\|X\|^2$ for any random variable $X$, we obtain the result by applying Lemma 1 to bound $\mathbb{E}\|f_{i_t}'(x_{t-1}) - f_{i_t}'(x^*)\|^2$ and $\mathbb{E}\|f_{i_t}'(x^s) - f_{i_t}'(x^*)\|^2$. ∎

The following lemma gives a bound on the distance to the optimal solution.

Lemma 3.

In every iteration of the inner loop,

$$\mathbb{E}\|x_t - x^*\|^2 \le \|x_{t-1} - x^*\|^2 - 2\eta(1 - 2L\eta)\big[f(x_{t-1}) - f(x^*)\big] + 4L\eta^2\big[f(x^s) - f(x^*)\big] + 2\eta^2\|e\|^2 + 2\eta Z\|e\|.$$

Proof.

We expand the expectation and bound $\mathbb{E}\|d_t\|^2$ using Lemma 2 to obtain

$$\mathbb{E}\|x_t - x^*\|^2 \le \|x_{t-1} - x^*\|^2 - 2\eta\big\langle f'(x_{t-1}) + e,\, x_{t-1} - x^*\big\rangle + \eta^2\Big(4L\big[f(x_{t-1}) - f(x^*)\big] + 4L\big[f(x^s) - f(x^*)\big] + 2\|e\|^2\Big),$$

and then use $\langle f'(x_{t-1}), x_{t-1} - x^*\rangle \ge f(x_{t-1}) - f(x^*)$, which follows from convexity of $f$. The result follows from applying Cauchy-Schwartz to the linear term in $e$ and using that $\|x_{t-1} - x^*\| \le Z$. ∎

To prove Proposition 1 from the main paper, we first sum the inequality in Lemma 3 for $t = 1, \dots, m$ and take the expectation with respect to the random choice of $x^{s+1}$ (uniform over $\{x_0, \dots, x_{m-1}\}$) to get

$$\mathbb{E}\|x_m - x^*\|^2 + 2\eta(1 - 2L\eta)\, m\, \mathbb{E}\big[f(x^{s+1}) - f(x^*)\big] \le \|x_0 - x^*\|^2 + 4Lm\eta^2\big[f(x^s) - f(x^*)\big] + 2m\eta^2\|e\|^2 + 2m\eta Z\|e\|.$$

Re-arranging, and noting that $\mathbb{E}\|x_m - x^*\|^2 \ge 0$ and $x_0 = x^s$, we have that

$$2\eta(1 - 2L\eta)\, m\, \mathbb{E}\big[f(x^{s+1}) - f(x^*)\big] \le \left(\frac{2}{\mu} + 4Lm\eta^2\right)\big[f(x^s) - f(x^*)\big] + 2m\eta^2\|e\|^2 + 2m\eta Z\|e\|,$$

where the last inequality uses strong convexity, $\|x^s - x^*\|^2 \le \frac{2}{\mu}\big[f(x^s) - f(x^*)\big]$. By dividing both sides by $2\eta(1 - 2L\eta)m$ (which is positive due to the constraint $2L\eta < 1$ implied by $\rho < 1$), we obtain the bound stated in Proposition 1.

A.2 Non-Uniform Sampling

If we sample $i_t$ proportional to the individual Lipschitz constants $L_i$, then we have the following analogue of Lemma 1.

Lemma 4.

For any $x$,

$$\frac{1}{n}\sum_{i=1}^{n} \frac{\bar{L}}{L_i}\big\|f_i'(x) - f_i'(x^*)\big\|^2 \le 2\bar{L}\big[f(x) - f(x^*)\big].$$

Proof.

Because each $f_i'$ is $L_i$-Lipschitz continuous, we have [22, Theorem 2.1.5]

$$\big\|f_i'(x) - f_i'(x^*)\big\|^2 \le 2L_i\big[f_i(x) - f_i(x^*) - \langle f_i'(x^*), x - x^*\rangle\big].$$

Multiplying both sides by $\bar{L}/L_i$, summing over all $i$, and using that $\sum_{i} f_i'(x^*) = 0$, we obtain the result. ∎

With this modified lemma, we can derive the convergence rate under this non-uniform sampling scheme by following an identical sequence of steps, but where each instance of $L$ is replaced by $\bar{L}$.

Appendix B Mixed SVRG and SG Method

We first give the proof of Proposition 2 in the paper, which analyzes a method that mixes SG and SVRG updates using a constant step size. We then consider a variant where the SG and SVRG updates use different step sizes.

B.1 Proof of Proposition 2

Recall that the SG update is

$$x_t = x_{t-1} - \eta\, f_{i_t}'(x_{t-1}).$$

Using this in Lemma 3 and following a similar argument, we have