Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

by Lam M. Nguyen, et al.
Lehigh University

In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient dominated functions, both of which have some advantages compared to other modern stochastic gradient algorithms for nonconvex losses.





1 Introduction

We are interested in the following finite-sum minimization problem

\[ \min_{w \in \mathbb{R}^d} \left\{ F(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w) \right\}, \tag{1} \]

where each $f_i$, $i \in [n] := \{1, \dots, n\}$, is smooth but can be nonconvex, and $F$ is also not necessarily convex. Throughout the paper, we assume that there exists a global optimal solution $w^*$ of (1); in other words, there exists a lower bound $F(w^*)$ of (1); however, we do not assume knowledge of this bound, and we do not seek convergence to $w^*$ in general.

Problems of form (1) cover a wide range of convex and nonconvex problems, including but not limited to logistic regression, multi-kernel learning, conditional random fields, and neural networks. In many of these applications, the number $n$ of individual components is very large, which makes the exact computation of $F$ and its derivatives, and thus the use of gradient descent (GD) Nocedal and Wright (2006) to solve (1), expensive.

A traditional approach is to employ stochastic gradient descent (SGD) 

Robbins and Monro (1951); Shalev-Shwartz et al. (2011). Recently, a large number of improved variants of stochastic gradient algorithms have emerged, including SAG/SAGA Schmidt et al. (2016); Defazio et al. (2014a), MISO/FINITO Mairal (2013); Defazio et al. (2014b), SDCA Shalev-Shwartz and Zhang (2013), SVRG/S2GD Johnson and Zhang (2013); Konečný et al. (2016), and SARAH Nguyen et al. (2017).¹ While nonconvex problems of the form (1) are now widely used due to the recent interest in deep neural networks, the majority of these methods are designed and analyzed for the convex/strongly convex case. Limited results have been developed for nonconvex problems Reddi et al. (2016); Allen-Zhu and Hazan (2016); Allen Zhu (2017); in particular, Reddi et al. (2016); Allen-Zhu and Hazan (2016) introduce nonconvex SVRG, and Natasha Allen Zhu (2017) is a new algorithm, but a variant of SVRG, for nonconvex optimization.

¹Numerous modifications of stochastic gradient algorithms have been proposed, including non-uniform sampling, acceleration, repeated schemes, and asynchronous parallelization. In this paper we refrain from analyzing those variants and compare only the primary methods.

In this paper we develop a convergence rate analysis of a mini-batch variant of SARAH for nonconvex problems of the form (1). SARAH was introduced in Nguyen et al. (2017) and shown to have a sublinear rate of convergence for general convex functions and a linear rate of convergence for strongly convex functions. Like the SVRG method, SARAH has an inner and an outer loop. It has been shown in Nguyen et al. (2017) that, unlike the inner loop of SVRG, the inner loop of SARAH converges. Here we explore the properties of the inner loop of SARAH for general nonconvex functions and show that it converges at the same rate as SGD, but under weaker assumptions and with better constants in the convergence rate. We then analyze the full SARAH algorithm in the case of gradient dominated functions, a special class of nonconvex functions Polyak (1963); Nesterov and Polyak (2006); Reddi et al. (2016), for which we show linear convergence to a global minimum. We provide the definition of a gradient dominated function in Section 3. We also note that this class of functions includes the case where the objective function $F$ is strongly convex, but the component functions $f_i$, $i \in [n]$, are not necessarily convex.

We now summarize the complexity results of SARAH and other existing methods for nonconvex functions in Table 1. All complexity estimates are in terms of the number of calls to the incremental first-order oracle (IFO) defined in Agarwal and Bottou (2015), in other words, computations of $(f_i(w), \nabla f_i(w))$ for some $i \in [n]$. The iteration complexity analysis aims to bound the number of iterations $T$ needed to guarantee that $\|\nabla F(w_T)\|^2 \le \epsilon$; in this case we say that $w_T$ is an $\epsilon$-accurate solution. However, it is common practice for stochastic gradient algorithms to bound instead the number of IFOs after which the algorithm can be terminated with a guaranteed bound in expectation, as follows:

\[ \mathbb{E}\big[\|\nabla F(w_T)\|^2\big] \le \epsilon. \tag{2} \]
It is important to note that for the stochastic algorithms discussed here, the output is not the last iterate computed by the algorithm, but a randomly selected iterate from the computed sequence.

Let us discuss the results in Table 1. The analysis of SGD in Ghadimi and Lan (2013) is performed under the assumption that $\|\nabla f_i(w)\| \le \sigma$ for all $i \in [n]$, for some fixed constant $\sigma$. This limits the applicability of the convergence results for SGD and adds a dependence on $\sigma$, which can be large. In contrast, the convergence rate of SVRG requires only $L$-Lipschitz continuity of the gradient, as does the analysis of SARAH. Convergence of SVRG for general nonconvex functions is better than that of the inner loop of SARAH in terms of its dependence on $\epsilon$, but it is worse in terms of its dependence on $n$. In addition, the bound for SVRG includes an unknown universal constant, whose magnitude is not clear and which can be quite small. The convergence rate of the full SARAH algorithm for general nonconvex functions remains an open question. In the case of $\tau$-gradient dominated functions, the convergence rate of full SARAH dominates that of the other algorithms.

Table 1: Comparisons between different algorithms for general nonconvex and $\tau$-gradient dominated functions (GD: Nesterov (2004); Reddi et al. (2016); SGD: Ghadimi and Lan (2013); Reddi et al. (2016); SVRG: Reddi et al. (2016)).

Our contributions. In summary, in this paper we analyze SARAH with mini-batches for nonconvex optimization. SARAH originates from ideas behind momentum SGD, SAG/SAGA, SVRG, and L-BFGS; it was initially proposed for convex optimization and is here proven effective for minimizing finite-sum problems with general nonconvex functions. We summarize the key contributions of the paper as follows.


  • We study and extend the SARAH framework Nguyen et al. (2017) with mini-batches to solving nonconvex loss functions, which cover popular deep neural network problems. We are able to provide a sublinear convergence rate for the inner loop of SARAH for general nonconvex functions, under milder assumptions than those of SGD.

  • Like SVRG Reddi et al. (2016), the SARAH algorithm is shown to enjoy a linear convergence rate for $\tau$-gradient dominated functions, a special class of possibly nonconvex functions Polyak (1963); Nesterov and Polyak (2006).

  • Similarly to SVRG, SARAH maintains a constant learning rate for nonconvex optimization, and a larger mini-batch size allows the use of a more aggressive learning rate and a smaller inner loop size.

  • Finally, we present numerical results, where a practical version of SARAH, introduced in Nguyen et al. (2017), is shown to be competitive on standard neural network training tasks.

2 Stochastic Recursive Gradient Algorithm

The pivotal idea of SARAH, like that of many existing algorithms, such as SAG, SAGA and BFGS Nocedal and Wright (2006), is to utilize past stochastic gradient estimates to improve convergence. In contrast with SAG, SAGA and BFGS, SARAH does not store past information, thus significantly reducing storage cost. We present SARAH as a two-loop algorithm in Figure 1, with SARAH-IN in Figure 2 describing the inner loop.

  Input: $\tilde{w}_0$, the learning rate $\eta > 0$, the batch size $b$, and the inner loop size $m$.
  Iterate:
  for $s = 1, 2, \dots$ do
      $\tilde{w}_s = \text{SARAH-IN}(\tilde{w}_{s-1}, \eta, b, m)$
  end for
  Output: $\tilde{w}_s$

Figure 1: Algorithm SARAH

  Input: $w_0$, the learning rate $\eta > 0$, the batch size $b$, and the inner loop size $m$.
  Evaluate the full gradient: $v_0 = \frac{1}{n} \sum_{i=1}^{n} \nabla f_i(w_0)$
  Take a gradient descent step: $w_1 = w_0 - \eta v_0$
  Iterate:
  for $t = 1, \dots, m-1$ do
      Choose a mini-batch $I_t \subseteq [n]$ of size $b$ uniformly at random (without replacement)
      Update the stochastic recursive gradient:
      $v_t = \frac{1}{b} \sum_{i \in I_t} \big( \nabla f_i(w_t) - \nabla f_i(w_{t-1}) \big) + v_{t-1}$
      Update the iterate: $w_{t+1} = w_t - \eta v_t$
  end for
  Set $\tilde{w} = w_t$, with $t$ chosen uniformly at random from $\{0, 1, \dots, m\}$
  Output: $\tilde{w}$

Figure 2: Algorithm SARAH within a single outer loop: SARAH-IN($w_0, \eta, b, m$)
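To make the two-loop structure concrete, here is a minimal numpy sketch of the procedure in Figures 1 and 2. The function names, the `grad_i` interface, and the least-squares usage below are illustrative assumptions, not part of the paper:

```python
import numpy as np

def sarah_inner(grad_i, n, w0, eta, b, m, rng):
    """One outer iteration of SARAH (SARAH-IN, Figure 2).

    grad_i(w, idx) returns the average gradient of the component
    functions indexed by idx at the point w.
    """
    # Full gradient at the start of the inner loop (n IFOs).
    v = grad_i(w0, np.arange(n))
    w_prev, w = w0, w0 - eta * v
    iterates = [w0, w]
    for _ in range(m - 1):
        batch = rng.choice(n, size=b, replace=False)
        # Recursive estimator:
        # v_t = (1/b) sum_{i in I_t} (grad f_i(w_t) - grad f_i(w_{t-1})) + v_{t-1}
        v = grad_i(w, batch) - grad_i(w_prev, batch) + v
        w_prev, w = w, w - eta * v
        iterates.append(w)
    # The analysis requires outputting a uniformly random iterate.
    return iterates[rng.integers(len(iterates))]

def sarah(grad_i, n, w0, eta, b, m, outer_iters, seed=0):
    """Full two-loop SARAH (Figure 1)."""
    rng = np.random.default_rng(seed)
    w = w0
    for _ in range(outer_iters):
        w = sarah_inner(grad_i, n, w, eta, b, m, rng)
    return w
```

Note that, as required by the analysis, the inner loop returns a uniformly random iterate rather than the last one.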

Similarly to SVRG, in each outer iteration SARAH proceeds with the evaluation of a full gradient followed by an inner loop of stochastic steps. SARAH requires one computation of the full gradient at the start of its inner loop and then proceeds by updating this gradient information using stochastic gradient estimates over $m$ inner steps. Hence, each outer iteration corresponds to a cost of $n + 2b(m-1)$ component gradient evaluations (or IFOs). For simplicity, let us consider the inner loop update for $b = 1$, as presented in Nguyen et al. (2017):

\[ v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}. \tag{3} \]
Note that unlike SVRG, which uses the gradient updates $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_0) + v_0$, SARAH's gradient estimate iteratively includes all past stochastic gradients; however, SARAH consumes a memory of $O(d)$ instead of $O(nd)$ as in the cases of SAG/SAGA and BFGS, because this past information is simply averaged instead of being stored.

With either $b = n$ or $m = 0$, the algorithm SARAH recovers gradient descent (GD); we remark that the theoretical convergence rate of GD is also recovered in this case. In the following section, we analyze the theoretical convergence properties of SARAH when applied to nonconvex functions.

3 Convergence Analysis

First, we will introduce the sublinear convergence of SARAH-IN for general nonconvex functions. Then we present the linear convergence of SARAH over a special class of gradient dominated functions Polyak (1963); Nesterov and Polyak (2006); Reddi et al. (2016). Before proceeding to the analysis, let us start by stating some assumptions.

Assumption 1 ($L$-smooth).

Each $f_i$, $i \in [n]$, is $L$-smooth, i.e., there exists a constant $L > 0$ such that

\[ \|\nabla f_i(w) - \nabla f_i(w')\| \le L\, \|w - w'\|, \quad \forall\, w, w' \in \mathbb{R}^d. \]

Assumption 1 implies that $F$ is also $L$-smooth. Then, by the property of $L$-smooth functions (see Nesterov (2004)), we have, for all $w, w' \in \mathbb{R}^d$,

\[ F(w) \le F(w') + \nabla F(w')^\top (w - w') + \frac{L}{2}\, \|w - w'\|^2. \]

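The smoothness inequality above can be sanity-checked numerically. The sketch below does so for a quadratic $F(w) = \frac{1}{2} w^\top Q w$, for which $L$ equals the largest eigenvalue of $Q$; the setup and function names are illustrative:

```python
import numpy as np

def smoothness_gap(Q, w, wp):
    """Right-hand side minus left-hand side of the L-smoothness bound
    F(w) <= F(w') + <grad F(w'), w - w'> + (L/2)||w - w'||^2
    for the quadratic F(w) = 0.5 w^T Q w, with L = lambda_max(Q).
    The gap is nonnegative exactly when the inequality holds."""
    L = np.linalg.eigvalsh(Q).max()
    F = lambda v: 0.5 * v @ Q @ v
    rhs = F(wp) + (Q @ wp) @ (w - wp) + 0.5 * L * np.dot(w - wp, w - wp)
    return rhs - F(w)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Q = M @ M.T   # symmetric PSD, so F is L-smooth with L = lambda_max(Q)
# The inequality holds at arbitrary pairs of points:
gaps = [smoothness_gap(Q, rng.standard_normal(4), rng.standard_normal(4))
        for _ in range(100)]
assert min(gaps) >= -1e-9
```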
The following assumption will be made only when appropriate, otherwise, it will be dropped.

Assumption 2 ($\tau$-gradient dominated).

$F$ is $\tau$-gradient dominated, i.e., there exists a constant $\tau > 0$ such that, for all $w \in \mathbb{R}^d$,

\[ F(w) - F(w^*) \le \tau\, \|\nabla F(w)\|^2, \tag{7} \]

where $w^*$ is a global minimizer of $F$.

We can observe that every stationary point of a $\tau$-gradient dominated function is a global minimizer. However, such a function need not be convex. If $F$ is $\mu$-strongly convex (but each $f_i$, $i \in [n]$, is possibly nonconvex), then $\|\nabla F(w)\|^2 \ge 2\mu \big( F(w) - F(w^*) \big)$ for all $w \in \mathbb{R}^d$. Thus, a $\mu$-strongly convex function is also $\frac{1}{2\mu}$-gradient dominated.
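The last claim can be made explicit. Assuming the standard strong convexity inequality and minimizing its right-hand side over the second argument, we have, for all $w$:

```latex
% mu-strong convexity of F, with the right-hand side minimized over u:
F(w^*) \;\ge\; \min_{u}\Big\{ F(w) + \langle \nabla F(w),\, u - w\rangle
        + \tfrac{\mu}{2}\,\|u - w\|^2 \Big\}
       \;=\; F(w) - \tfrac{1}{2\mu}\,\|\nabla F(w)\|^2 .
```

Rearranging gives $F(w) - F(w^*) \le \frac{1}{2\mu}\|\nabla F(w)\|^2$, i.e., $F$ is $\tau$-gradient dominated with $\tau = \frac{1}{2\mu}$.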

The following two results, Lemmas 1 and 2, are essentially the same as Lemmas 1 and 2 in Nguyen et al. (2017), with a slight modification to include the case when the batch size $b$ is not necessarily equal to $1$. We present the proofs in the supplementary material for completeness.

Lemma 1.

Suppose that Assumption 1 holds. Consider SARAH-IN (SARAH within a single outer loop in Figure 2); then we have

\[ \sum_{t=0}^{m} \mathbb{E}\big[\|\nabla F(w_t)\|^2\big] \;\le\; \frac{2}{\eta}\, \mathbb{E}\big[F(w_0) - F(w^*)\big] \;+\; \sum_{t=0}^{m} \mathbb{E}\big[\|\nabla F(w_t) - v_t\|^2\big] \;-\; (1 - L\eta) \sum_{t=0}^{m} \mathbb{E}\big[\|v_t\|^2\big], \]

where $w^*$ is a global minimizer of $F$.

Lemma 2.

Suppose that Assumption 1 holds. Consider $v_t$ defined by (3) in SARAH-IN; then, for any $t \ge 1$,

With the above lemmas, we can derive the following upper bound on $\mathbb{E}\big[\|\nabla F(w_t) - v_t\|^2\big]$.

Lemma 3.

Suppose that Assumption 1 holds. Consider $v_t$ defined by (3) in SARAH-IN. Then, for any $t \ge 1$,

The proof of Lemma 3 is provided in the supplementary material. Using the above lemmas, we are now able to obtain the following convergence rate result for SARAH-IN.

Theorem 1.

Suppose that Assumption 1 holds. Consider SARAH-IN (SARAH within a single outer loop in Figure 2) with


Then we have

where $w^*$ is a global minimizer of $F$, and $\tilde{w} = w_t$, with $t$ chosen uniformly at random from $\{0, 1, \dots, m\}$.

This result shows a sublinear convergence rate for SARAH-IN with increasing $m$. Consequently, with suitable choices of $b$ and $\eta$, to obtain

\[ \mathbb{E}\big[\|\nabla F(\tilde{w})\|^2\big] \le \epsilon, \]

it is sufficient to choose $m$ large enough, from which the total complexity to achieve an $\epsilon$-accurate solution follows. Therefore, we have the following conclusion for the complexity bound.

Corollary 1.

Suppose that Assumption 1 holds. Consider SARAH within a single outer iteration with batch size $b$ and the learning rate $\eta = O\big(\sqrt{b}/(L\sqrt{m})\big)$, where $m$ is the total number of iterations; then $\mathbb{E}\big[\|\nabla F(\tilde{w})\|^2\big]$ converges sublinearly in expectation with a rate of $O\big(1/\sqrt{bm}\big)$, and therefore the total complexity to achieve an $\epsilon$-accurate solution defined in (2) is $O(n + 1/\epsilon^2)$.

Finally, we present the result for SARAH with multiple outer iterations in application to the class of gradient dominated functions defined in (7).

Theorem 2.

Suppose that Assumptions 1 and 2 hold. Consider SARAH (in Figure 1) with $\eta$ and $m$ such that

Then we have


Consider the case when $b$ and $m$ are chosen as in Theorem 2; the learning rate $\eta$ then needs to satisfy the condition stated there. To obtain

\[ \mathbb{E}\big[\|\nabla F(\tilde{w}_s)\|^2\big] \le \epsilon, \]

it is sufficient to have $s = O(\log(1/\epsilon))$ outer iterations. This implies the total complexity to achieve an $\epsilon$-accurate solution, and we can summarize the conclusion as follows.

Corollary 2.

Suppose that Assumptions 1 and 2 hold. Consider SARAH with parameters from Theorem 2 with batch size and the learning rate , then the total complexity to achieve an -accurate solution defined in (2) is .

4 Discussions on the mini-batches sizes

Let us discuss two simple corollaries of Theorem 1.

The first corollary is obtained trivially by substituting the learning rate into the complexity bound in Theorem 1.

Corollary 3.

Suppose that Assumption 1 holds. Consider SARAH-IN (SARAH within a single outer loop in Figure 2) with


Then we have

where $w^*$ is a global minimizer of $F$, and $\tilde{w} = w_t$, with $t$ chosen uniformly at random from $\{0, 1, \dots, m\}$.

Remark 1.

We can clearly observe that the rate of convergence for SARAH-IN depends on the size of $b$. For a larger value of $b$, we can use a more aggressive learning rate, and fewer iterations are required to achieve an $\epsilon$-accurate solution. In particular, when $b = n$, SARAH-IN reduces to the GD method and its convergence rate becomes that of gradient descent,

and the total complexity to achieve an $\epsilon$-accurate solution is $O(n/\epsilon)$. However, the total work in terms of IFOs increases with $b$. When $b = 1$, the total complexity to achieve an $\epsilon$-accurate solution is $O(n + 1/\epsilon^2)$.
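The reduction to GD when $b = n$ can also be checked directly: with a full batch the recursive estimator telescopes, $v_t = \nabla F(w_t) - \nabla F(w_{t-1}) + v_{t-1} = \nabla F(w_t)$ by induction. A small numpy sketch on a least-squares instance (an illustrative choice, not from the paper):

```python
import numpy as np

def sarah_full_batch_estimates(A, y, eta, m):
    """Run the SARAH inner loop with b = n on the least-squares
    components f_i(w) = 0.5 (a_i^T w - y_i)^2 and return, for each
    step, the recursive estimate v_t next to the exact gradient."""
    n = len(y)
    full_grad = lambda w: A.T @ (A @ w - y) / n
    w_prev = np.zeros(A.shape[1])
    v = full_grad(w_prev)
    w = w_prev - eta * v
    pairs = []
    for _ in range(m - 1):
        # With b = n, the "mini-batch" is the whole index set, so
        # v_t = grad F(w_t) - grad F(w_{t-1}) + v_{t-1} telescopes
        # to the exact gradient grad F(w_t).
        v = full_grad(w) - full_grad(w_prev) + v
        pairs.append((v.copy(), full_grad(w)))
        w_prev, w = w, w - eta * v
    return pairs
```

Each returned pair agrees up to floating-point rounding, confirming that the full-batch inner loop performs exact gradient descent steps.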

Let us now fix the inner loop size $m$ in Corollary 3; we can then achieve the following result.

Corollary 4.

Suppose that Assumption 1 holds. Consider SARAH-IN with $m$, $b$, and $\eta$ chosen such that

Then we have

where $w^*$ is a global minimizer of $F$, and $\tilde{w} = w_t$, with $t$ chosen uniformly at random from $\{0, 1, \dots, m\}$.

Remark 2.

For SARAH-IN with the number of iterations $m$ and the learning rate $\eta$ as in Corollary 4, we achieve a sublinear convergence rate. We can observe that the value of $b$ significantly affects the rate; for example, the extreme choices $b = 1$ and $b = n$ yield markedly different convergence rates.

5 Numerical Experiments

We now turn to the numerical study and conduct experiments on the multiclass classification problem with neural networks, a typical challenging nonconvex problem in machine learning.

SARAH+ as a Practical Variant

Nguyen et al. (2017) proposes SARAH+ as a practical variant of SARAH. Here we adapt SARAH+ to nonconvex optimization by running Algorithm SARAH (Figure 1) with the following inner-loop algorithm (Figure 3). Notice that SARAH+ differs from SARAH in that the inner loop is terminated adaptively instead of using a fixed inner loop size $m$. This idea is based on the fact that the norm of the recursive gradient estimate converges to zero in expectation, which has been both proven theoretically and verified numerically for convex optimization in Nguyen et al. (2017). Under the assumption that similar behavior occurs in the nonconvex case, instead of tuning the inner loop size for SARAH, we believe that, with a proper choice of the ratio $\gamma$ below, the automatic loop termination can give superior or competitive performance.

  Input: $w_0$, the learning rate $\eta > 0$, the batch size $b$, the ratio $\gamma$, and the maximum inner loop size $m$.
  Evaluate the full gradient: $v_0 = \frac{1}{n} \sum_{i=1}^{n} \nabla f_i(w_0)$
  Take a gradient descent step: $w_1 = w_0 - \eta v_0$; set $t = 1$
  Iterate:
  while $\|v_{t-1}\|^2 > \gamma \|v_0\|^2$ and $t < m$ do
      Choose a mini-batch $I_t \subseteq [n]$ of size $b$ uniformly at random (without replacement)
      Update the stochastic recursive gradient:
      $v_t = \frac{1}{b} \sum_{i \in I_t} \big( \nabla f_i(w_t) - \nabla f_i(w_{t-1}) \big) + v_{t-1}$
      Update the iterate and index: $w_{t+1} = w_t - \eta v_t$; $t = t + 1$
  end while
  Set $\tilde{w} = w_j$, with $j$ chosen uniformly at random from $\{0, 1, \dots, t\}$
  Output: $\tilde{w}$

Figure 3: Algorithm SARAH within a single outer loop: SARAH-IN($w_0, \eta, b, \gamma, m$)
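A minimal numpy sketch of this adaptive inner loop, mirroring the SARAH-IN sketch's interface; the function name and the `grad_i` interface are illustrative assumptions:

```python
import numpy as np

def sarah_plus_inner(grad_i, n, w0, eta, b, max_m, gamma, rng):
    """SARAH+ inner loop (Figure 3): stop once the estimator norm has
    dropped below a fraction gamma of its initial value, or after
    max_m steps. grad_i(w, idx) averages component gradients."""
    v0 = grad_i(w0, np.arange(n))            # full gradient
    v = v0
    w_prev, w = w0, w0 - eta * v0
    iterates = [w0, w]
    t = 1
    # The loop condition tests v_{t-1} against gamma * ||v_0||^2.
    while np.dot(v, v) > gamma * np.dot(v0, v0) and t < max_m:
        batch = rng.choice(n, size=b, replace=False)
        v = grad_i(w, batch) - grad_i(w_prev, batch) + v
        w_prev, w = w, w - eta * v
        iterates.append(w)
        t += 1
    # Return a uniformly random computed iterate.
    return iterates[rng.integers(len(iterates))]
```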

Networks and Datasets

We perform numerical experiments with neural nets with one fully connected hidden layer, followed by a fully connected output layer which feeds into softmax regression and the cross-entropy objective, with a weight decay regularizer ($\ell_2$-regularizer) with parameter $\lambda$. We test the performance on the datasets MNIST Lecun et al. (1998) and CIFAR10 Krizhevsky and Hinton (2009), with $\lambda =$ 1e-04 and 1e-03, respectively. Both datasets have 10 classes, i.e., 10 softmax output nodes in the network, and are normalized to the interval $[0, 1]$ as a simple data pre-processing step. This network achieves the best performance on MNIST among neural nets with a single hidden layer. Information on both datasets is also available in Table 2.
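For concreteness, the regularized objective just described can be sketched as follows; the tanh activation and the array shapes are illustrative assumptions (the excerpt above does not pin them down):

```python
import numpy as np

def net_loss(params, X, labels, lam):
    """Cross-entropy loss of a one-hidden-layer softmax network with
    an l2 (weight decay) regularizer with parameter lam."""
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)                       # hidden layer
    logits = H @ W2 + b2                           # output layer -> softmax
    logits = logits - logits.max(axis=1, keepdims=True)  # stability shift
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(labels)), labels].mean()   # cross entropy
    reg = 0.5 * lam * (np.sum(W1**2) + np.sum(W2**2))    # weight decay
    return nll + reg
```

Each training example contributes one component $f_i$ to the finite sum (1), so the variance-reduced methods above apply directly.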

Optimization Details

We compare the efficiency of SARAH, SARAH+ Nguyen et al. (2017), SVRG Reddi et al. (2016), AdaGrad Duchi et al. (2011), and SGD-M (momentum SGD Polyak (1964); Sutskever et al. (2013)) numerically in terms of the number of effective data passes; the last two algorithms are efficient SGD variants available in the Google open-source library TensorFlow. (While SARAH, SVRG, and SGD have been proven effective for nonconvex optimization, as far as we know the SGD variants AdaGrad and SGD-M do not have theoretical convergence guarantees for nonconvex optimization.) As the choice of initialization for the weight parameters is very important, we apply a widely used mechanism called normalized initialization Glorot and Bengio (2010), where the weight parameters between layers $j$ and $j+1$ are sampled uniformly from

\[ \left[ -\sqrt{\frac{6}{n_j + n_{j+1}}},\ \sqrt{\frac{6}{n_j + n_{j+1}}} \right], \]

where $n_j$ denotes the number of nodes in layer $j$. In addition, we use the same mini-batch size in all the algorithms.
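The normalized initialization of Glorot and Bengio (2010) can be sketched in a few lines; the function name is ours:

```python
import numpy as np

def normalized_init(n_in, n_out, rng):
    """Normalized ("Xavier"/Glorot) initialization: weights between a
    layer of n_in nodes and a layer of n_out nodes are drawn uniformly
    from [-sqrt(6/(n_in + n_out)), sqrt(6/(n_in + n_out))]."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))
```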

Dataset | Number of Samples (train, test) | Dimensions | SARAH $(m, \eta)$ | SARAH+ $(\eta)$ | SVRG $(m, \eta)$ | AdaGrad | SGD-M
MNIST | (60,000, 10,000) | 784 | (0.1n, 0.08) | 0.2 | (0.4n, 0.08) | (0.01, 0.1) | (0.7, 0.01)
CIFAR10 | (50,000, 10,000) | 3072 | (0.4n, 0.03) | 0.02 | (0.8n, 0.02) | (0.05, 1.0) | (0.7, 0.001)
Table 2: Summary of statistics and best parameters of all the algorithms for the two datasets.
Figure 4: An example of -regularized neural nets on MNIST and CIFAR10 training/testing datasets for SARAH, SARAH+, SVRG, AdaGrad and SGD-M.

Performance and Comparison

We present the optimal choices of optimization parameters for the aforementioned algorithms in Table 2, as well as their performance in Figure 4. As for the optimization parameters, we consistently use the same ratio $\gamma$ in SARAH+, while for all the other algorithms we need to tune two parameters: the learning rate for all of them, the inner loop size for SARAH and SVRG, the initial accumulator value for AdaGrad, and the momentum for SGD-M. For the tuning of the parameters, reasonable ranges have been scanned, and we selected the best parameters in terms of the training error reduction.

Figure 4 compares the training losses (top) and the test errors (bottom) obtained by the tested algorithms on MNIST and CIFAR10, in terms of the number of effective passes through the data. On the MNIST dataset, which is deemed to be easier for training, all the methods achieve similar performance in the end; however, SARAH(+) and SVRG stabilize faster than AdaGrad and SGD-M, two of the most popular SGD variants, and SARAH+ shows superior performance in minimizing the training loss. For the other, more difficult, CIFAR10 dataset, SARAH(+) and SVRG improve the training accuracy considerably in comparison with AdaGrad and SGD-M, and as a result a similar advantage can be seen in the test error reduction.

6 Conclusion

In this paper, we study and extend the SARAH framework to nonconvex optimization, also admitting the practical variant SARAH+. For smooth nonconvex functions, the inner loop of SARAH achieves the best sublinear convergence rate in the literature, while the full variant of SARAH achieves the same linear convergence rate as SVRG for a special class of gradient dominated functions. In addition, we analyze the dependence of the convergence of SARAH on the size of the mini-batches. Finally, we validate SARAH(+) numerically in comparison with SVRG, AdaGrad and SGD-M on the popular nonconvex application of neural networks.


  • Agarwal and Bottou (2015) Alekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. In ICML, pages 78–86, 2015.
  • Allen Zhu (2017) Zeyuan Allen Zhu. Natasha: Faster non-convex stochastic optimization via strongly non-convex parameter. arXiv preprint arXiv:1702.00763, 2017.
  • Allen-Zhu and Hazan (2016) Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In ICML, pages 699–707, 2016.
  • Defazio et al. (2014a) Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, pages 1646–1654, 2014a.
  • Defazio et al. (2014b) Aaron Defazio, Justin Domke, and Tibério Caetano. A faster, permutable incremental gradient method for big data problems. In ICML, pages 1125–1133, 2014b.
  • Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Ghadimi and Lan (2013) Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013. doi: 10.1137/120880811.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
  • Johnson and Zhang (2013) Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
  • Konečný et al. (2016) Jakub Konečný, Jie Liu, Peter Richtárik, and Martin Takáč. Mini-batch semi-stochastic gradient descent in the proximal setting. IEEE Journal of Selected Topics in Signal Processing, 10:242–255, 2016.
  • Krizhevsky and Hinton (2009) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.
  • Lecun et al. (1998) Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
  • Mairal (2013) Julien Mairal. Optimization with first-order surrogate functions. In ICML, pages 783–791, 2013.
  • Nesterov (2004) Yurii Nesterov. Introductory lectures on convex optimization : a basic course. Applied optimization. Kluwer Academic Publ., Boston, Dordrecht, London, 2004. ISBN 1-4020-7553-7.
  • Nesterov and Polyak (2006) Yurii Nesterov and Boris T Polyak. Cubic regularization of newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
  • Nguyen et al. (2017) Lam Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. SARAH: A novel method for machine learning problems using stochastic recursive gradient. To appear in ICML, 2017.
  • Nocedal and Wright (2006) Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, New York, 2nd edition, 2006.
  • Polyak (1963) Boris T Polyak. Gradient methods for the minimisation of functionals. USSR Computational Mathematics and Mathematical Physics, 3(4):864–878, 1963.
  • Polyak (1964) Boris T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
  • Reddi et al. (2016) Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, and Alexander J. Smola. Stochastic variance reduction for nonconvex optimization. In ICML, pages 314–323, 2016.
  • Robbins and Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
  • Schmidt et al. (2016) Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, pages 1–30, 2016.
  • Shalev-Shwartz and Zhang (2013) Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1):567–599, 2013.
  • Shalev-Shwartz et al. (2011) Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
  • Sutskever et al. (2013) Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.

Appendix A Technical Proofs

A.1 Proof of Lemma 1

By Assumption 1 and the update $w_{t+1} = w_t - \eta v_t$, we have

where the last equality follows from the fact that $a^\top b = \frac{1}{2}\big( \|a\|^2 + \|b\|^2 - \|a - b\|^2 \big)$ for any $a, b \in \mathbb{R}^d$.

By summing over $t = 0, \dots, m$, we have

which is equivalent to:

where the last inequality follows since $w^*$ is a global minimizer of $F$. (Note that $w_0$ is given.)

A.2 Proof of Lemma 2

Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by $w_0, I_1, \dots, I_{t-1}$; $\mathcal{F}_0 = \mathcal{F}_1 = \sigma(w_0)$. Note that $\mathcal{F}_t$ also contains all the information of $w_0, w_1, \dots, w_t$ as well as $v_0, v_1, \dots, v_{t-1}$. For $t \ge 1$, we have

where the last equality follows from

By taking the expectation of the above equation, we have

Note that $\nabla F(w_0) = v_0$. By summing over $j = 1, \dots, t$, we have

A.3 Proof of Lemma 3



We have

Hence, by taking expectation, we have

By Lemma 2, for $t \ge 1$,

This completes the proof.

The result follows directly for the case when $b = n$ by the following alternative argument. We have


Hence, by Lemma 2, we have

A.4 Proof of Theorem 1

By Lemma 3, we have

Note that $\nabla F(w_0) = v_0$. Hence, by summing over $t$ ($0 \le t \le m$), we have

We have