1 Introduction
Nonconvex optimization problems are ubiquitous in many fields of engineering, such as control theory, signal processing, and machine learning. While these problems are NP-hard in the worst case, simple heuristics are often remarkably effective in practice. For example, one such heuristic,
stochastic gradient descent (SGD), has been quite successful for various problems in modern machine learning, such as sparse recovery (Blumensath and Davies, 2009), recommender systems (Koren et al., 2009), and supervised or unsupervised learning via deep neural networks
(Goodfellow et al., 2016), etc. Why is it that problems arising in practice can be solved efficiently by simple heuristics even though these problems are NP-hard in the worst case? A series of works, some theoretical and some empirical, have uncovered a nice structure in several problems of practical interest that seems to answer this question. These works show that even though these nonconvex problems can have an enormous number of bad saddle points, all local minima are good. More precisely, they show that, for a large class of interesting nonconvex problems, second-order stationarity (i.e., $\nabla f(x) = 0$ and $\nabla^2 f(x) \succeq 0$), which is a weaker notion of local optimality, already guarantees (approximate) global optimality. Choromanska et al. (2014); Kawaguchi (2016) present such a result for learning multilayer neural networks, Bandeira et al. (2016); Mei et al. (2017) for synchronization and MaxCut, Boumal et al. (2016) for smooth semidefinite programs, Bhojanapalli et al. (2016) for matrix sensing, Ge et al. (2016) for matrix completion, and Ge et al. (2017) for robust PCA. Since gradient descent (GD) is known to converge to second-order stationary points with probability one (Lee et al., 2016), it seems reasonable that simple gradient-based heuristics such as SGD perform quite well in practice.

In order to make the above reasoning rigorous, one key aspect needs to be addressed: the rate of convergence. More precisely, we need to quantify how many iterations of GD or SGD are required to find an $\epsilon$-second-order stationary point (a point $x$ with $\|\nabla f(x)\| \le \epsilon$ and $\lambda_{\min}(\nabla^2 f(x)) \ge -\sqrt{\rho\epsilon}$). This question has been investigated heavily starting from the work of Ge et al. (2015). While initial results such as Ge et al. (2015); Levy (2016) gave polynomial convergence rates, the dependence on the underlying dimension is at least cubic, which is impractical for high-dimensional problems. For the case of GD without stochasticity, Jin et al. (2017a) addresses this issue and shows that if we add a perturbation once in a while, convergence to second-order stationary points requires only $\tilde{O}(\epsilon^{-2})$ iterations, with only polylogarithmic dependence on the dimension. While SGD is much more widely used than GD, obtaining such a result (convergence to second-order stationary points with minimal dependence on dimension) for SGD has so far remained open. This paper considers perturbed stochastic gradient descent (PSGD) and provides a convergence analysis with a sharp dependence on dimension.
Our contributions.

This paper shows that PSGD finds an $\epsilon$-second-order stationary point in $\tilde{O}(d/\epsilon^4)$ iterations, giving the first convergence rate that depends only linearly on the dimension.

Under the additional assumption of Lipschitz stochastic gradients, we show that PSGD finds an $\epsilon$-second-order stationary point in $\tilde{O}(\epsilon^{-4})$ iterations, further reducing the dimension dependence to polylogarithmic.

This paper also devises a transparent proof strategy which, in addition to giving the above results, significantly simplifies the proofs of previously known results (Jin et al., 2017a) for perturbed GD.
1.1 Related Work
In this section we review related work that provides convergence guarantees for finding second-order stationary points. Classical algorithms for finding second-order stationary points require access to the Hessian of the function. The best-known second-order methods here are the cubic regularization method (Nesterov and Polyak, 2006) and trust region methods (Curtis et al., 2014). Since the size of the Hessian matrix scales quadratically with the dimension, these methods are extremely computationally intensive, especially for high-dimensional problems. In the following, we focus on the complexity of first-order methods for finding second-order stationary points.
Full gradient setting.
The basic setting is when the algorithm has access to the exact gradient without error. In this case, Jin et al. (2017a) shows that perturbed gradient descent escapes saddle points and finds second-order stationary points in $\tilde{O}(\epsilon^{-2})$ iterations. Carmon et al. (2016); Agarwal et al. (2017) and Jin et al. (2017b) use acceleration techniques and obtain faster convergence rates of $\tilde{O}(\epsilon^{-7/4})$.
Algorithm | Iterations | Iterations (with Assumption 3) | Simplicity
Noisy GD (Ge et al., 2015) | $\tilde{O}(\mathrm{poly}(d)\,\epsilon^{-4})$ | – | single-loop
CNC-SGD (Daneshmand et al., 2018) | $\tilde{O}(d^{4}\epsilon^{-5})$ | – | single-loop
Natasha 2 (Allen-Zhu, 2018) | – | $\tilde{O}(\epsilon^{-3.5})$ | double-loop
Stochastic Cubic (Tripuraneni et al., 2018) | – | $\tilde{O}(\epsilon^{-3.5})$ | double-loop
SPIDER (Fang et al., 2018) | – | $\tilde{O}(\epsilon^{-3})$ | double-loop
SGD with averaging (Fang et al., 2019) | – | $\tilde{O}(\epsilon^{-3.5})$ | single-loop
Perturbed SGD (this work) | $\tilde{O}(d\,\epsilon^{-4})$ | $\tilde{O}(\epsilon^{-4})$ | single-loop
Stochastic setting.
In this setting, the algorithm only has access to stochastic gradients. Most existing works assume that the stochastic gradients themselves are Lipschitz (or equivalently, that the stochastic functions are gradient Lipschitz). Under this assumption, and with an additional Hessian-vector product oracle, Allen-Zhu (2018); Tripuraneni et al. (2018) designed algorithms with an iteration complexity of $\tilde{O}(\epsilon^{-3.5})$. Xu et al. (2018); Allen-Zhu and Li (2017) obtain similar results without the requirement of a Hessian-vector product oracle. The sharpest rate in this category is due to Fang et al. (2018), who show that the iteration complexity can be further reduced to $\tilde{O}(\epsilon^{-3})$.

In the general case, without assuming Lipschitz stochastic gradients, Ge et al. (2015) provides the first polynomial result for a first-order algorithm, showing that noisy gradient descent finds second-order stationary points in a number of iterations polynomial in both the dimension and $1/\epsilon$. Daneshmand et al. (2018) shows that, assuming the variance of the stochastic gradient along the escaping direction of saddle points is at least $\gamma$ for all saddle points, CNC-SGD finds second-order stationary points in a number of iterations polynomial in $1/\gamma$. We note that in general, $\gamma$ scales as $1/d$, which gives a complexity polynomial in the dimension. Our work is the first result in this setting achieving linear dimension dependence.

While this work was under preparation, a manuscript (Fang et al., 2019) was uploaded to arXiv which analyzes stochastic gradient descent with averaging and obtains a convergence rate of $\tilde{O}(\epsilon^{-3.5})$ for the special case where the stochastic gradient is Lipschitz. We also note that Fang et al. (2019) makes additional structural assumptions on the stochastic gradients, which enables them to analyze SGD directly, without adding a perturbation.
1.2 Paper Organization
In Section 2, we present the preliminaries and assumptions. In Section 3, we present our main results, and in Section 4, we present a simplified proof of the performance of perturbed GD, which illustrates some of our key ideas. The proof of our result for perturbed SGD is presented in the appendix. We conclude in Section 5.
2 Preliminaries
In this paper, we are interested in solving

$$\min_{x \in \mathbb{R}^d} f(x),$$

where $f: \mathbb{R}^d \to \mathbb{R}$ is a smooth function which can be nonconvex. More concretely, we assume that $f$ has Lipschitz gradients and Lipschitz Hessians.
Definition 1.
A differentiable function $f$ is $\ell$-smooth (or $\ell$-gradient Lipschitz) if:

$$\|\nabla f(x_1) - \nabla f(x_2)\| \le \ell \|x_1 - x_2\| \quad \forall\, x_1, x_2.$$
Definition 2.
A twice-differentiable function $f$ is $\rho$-Hessian Lipschitz if:

$$\|\nabla^2 f(x_1) - \nabla^2 f(x_2)\| \le \rho \|x_1 - x_2\| \quad \forall\, x_1, x_2.$$
Assumption 1.
Function $f$ is $\ell$-gradient Lipschitz and $\rho$-Hessian Lipschitz.
Since finding a global (or even local) optimum is NP-hard, our goal will be to find points that approximately satisfy second-order optimality conditions. These points are also called second-order stationary points.
Definition 3.
For a $\rho$-Hessian Lipschitz function $f$, $x$ is an $\epsilon$-second-order stationary point if:

$$\|\nabla f(x)\| \le \epsilon \quad \text{and} \quad \lambda_{\min}\big(\nabla^2 f(x)\big) \ge -\sqrt{\rho\epsilon}.$$
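As an illustration (not from the paper), the two conditions of the definition can be checked numerically at a given point; the helper `is_eps_sosp`, the toy saddle, and the thresholds below are hypothetical examples:

```python
import numpy as np

def is_eps_sosp(grad, hess, eps, rho):
    """Check epsilon-second-order stationarity:
    ||grad f(x)|| <= eps  and  lambda_min(hess f(x)) >= -sqrt(rho * eps)."""
    grad_small = np.linalg.norm(grad) <= eps
    lam_min = np.linalg.eigvalsh(hess).min()  # smallest eigenvalue (hess symmetric)
    return bool(grad_small and lam_min >= -np.sqrt(rho * eps))

# f(x, y) = x^2 - y^2 has a strict saddle at the origin: the gradient
# vanishes, but there is a direction of strictly negative curvature.
print(is_eps_sosp(np.zeros(2), np.diag([2.0, -2.0]), eps=0.01, rho=1.0))  # saddle: rejected
print(is_eps_sosp(np.zeros(2), np.eye(2), eps=0.01, rho=1.0))             # local minimum: accepted
```

Note that the second condition is a relaxation of $\nabla^2 f(x) \succeq 0$: a small amount of negative curvature, up to $-\sqrt{\rho\epsilon}$, is tolerated.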
We consider the stochastic approximation setting, where we may not access $f$ directly. Instead, for any point $x$, a gradient query returns $\nabla f(x; \theta)$, where $f(\cdot\,; \theta)$ is a stochastic function and $\theta$ is a random variable drawn from a distribution $\mathcal{D}$. The key property satisfied by these stochastic gradients is that $\mathbb{E}_{\theta \sim \mathcal{D}}[\nabla f(x; \theta)] = \nabla f(x)$, i.e., the expectation of the stochastic gradient equals the true gradient.

A standard assumption on the stochastic gradients is that of bounded variance, i.e., $\mathbb{E}_{\theta}\|\nabla f(x;\theta) - \nabla f(x)\|^2 \le \sigma^2$ for some $\sigma > 0$. When we are interested in high-probability bounds, one often makes the stronger assumption of sub-Gaussian tails.
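For concreteness, such an oracle commonly arises in the finite-sum setting, where $\theta$ indexes a uniformly sampled component function. The quadratic components below are a toy illustration of the unbiasedness property, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 100
A = rng.standard_normal((n, d))  # row a_i defines the component f(x; i) = 0.5 * (a_i . x)^2

def stochastic_grad(x):
    """Sample theta = i uniformly; return the gradient of f(x; i), i.e. (a_i . x) a_i."""
    a = A[rng.integers(n)]
    return (a @ x) * a

def full_grad(x):
    """grad f(x) = E_i[grad f(x; i)] = (1/n) A^T A x -- the oracle is unbiased."""
    return A.T @ (A @ x) / n

x = rng.standard_normal(d)
est = np.mean([stochastic_grad(x) for _ in range(20000)], axis=0)
print(np.linalg.norm(est - full_grad(x)))  # small Monte Carlo error
```

Averaging many oracle calls recovers the true gradient, which is exactly the property $\mathbb{E}_{\theta}[\nabla f(x;\theta)] = \nabla f(x)$ exploited throughout the analysis.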
Assumption 2.
For any $x$, the stochastic gradient $\nabla f(x;\theta)$ with $\theta \sim \mathcal{D}$ satisfies:

$$\Pr\big(\|\nabla f(x;\theta) - \nabla f(x)\| \ge t\big) \le 2\exp\big(-t^2/(2\sigma^2)\big) \quad \forall\, t.$$
We note that this notion is more general than the standard notion of a sub-Gaussian random vector $X$, which assumes $\mathbb{E}[\exp(\langle v, X - \mathbb{E}X\rangle)] \le \exp(\sigma^2\|v\|^2/2)$ for any $v$. The latter requires the distribution to be “isotropic”, while our assumption does not. By Lemma 24, we know that both bounded random vectors and standard sub-Gaussian random vectors are special cases of our assumption.
In many machine learning applications, the stochastic gradient is realized as the gradient of a stochastic function, where the stochastic function itself can have better smoothness properties; i.e., the stochastic gradient can itself be Lipschitz, which helps improve the convergence rate.
Assumption 3.
(Optional) For any $\theta$, $\nabla f(\cdot\,;\theta)$ is $\tilde{\ell}$-Lipschitz.
3 Main Result
In this section, we present our main results on the efficiency of escaping saddle points. Section 3.1 presents the result for PGD when the algorithm has access to the full gradient, and Section 3.2 presents the main results for PSGD and its minibatch version in the stochastic case.
3.1 Full Gradient Setting
In this setting, we are given an exact gradient oracle: we can query any point $x$, and the oracle returns its gradient $\nabla f(x)$ without any error. Here, we run perturbed gradient descent (Algorithm 1).
At each iteration, Algorithm 1 is almost the same as gradient descent, except that it adds a small isotropic random Gaussian perturbation to the gradient. The perturbation $\xi_t$ is sampled from a zero-mean Gaussian with covariance $(r^2/d)\,\mathbf{I}$, so that $\mathbb{E}\|\xi_t\|^2 = r^2$. We note that Algorithm 1 simplifies the original version in Jin et al. (2017a), which adds a perturbation only when certain conditions hold.
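A minimal sketch of this simplified PGD is below; the step size, perturbation radius, horizon, and the toy strict-saddle objective are illustrative placeholders, not the parameter values prescribed by the theorem:

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.01, r=1e-3, T=2000, seed=0):
    """Gradient descent with a small isotropic Gaussian perturbation at every
    step; the perturbation has covariance (r**2 / d) * I, so E||xi||^2 = r**2."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(T):
        xi = rng.normal(scale=r / np.sqrt(d), size=d)
        x = x - eta * (grad(x) + xi)
    return x

# Toy strict-saddle objective f(x) = x[0]**2 - x[1]**2 + x[1]**4 / 4,
# whose saddle is at the origin and whose minima are at x = (0, +-sqrt(2)).
grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1] + x[1] ** 3])
x = perturbed_gd(grad, x0=np.zeros(2))  # start exactly at the saddle
print(x)  # the perturbation drives the iterate off the saddle toward a minimum
```

Starting exactly at the saddle, plain GD would stay there forever since $\nabla f = 0$; it is the per-step perturbation that pushes the iterate into the negative-curvature direction.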
We are now ready to present our main result, which says that with a suitable choice of the perturbation radius $r$ in Algorithm 1, PGD will find an $\epsilon$-second-order stationary point in $\tilde{O}(\epsilon^{-2})$ iterations, with only polylogarithmic dependence on the dimension.
3.2 Stochastic Setting
We are now ready to present our main result, which guarantees the efficiency of PSGD (Algorithm 2) in finding a second-order stationary point.
Theorem 5.
For any $\epsilon, \delta > 0$: if the function $f$ satisfies Assumption 1, the stochastic gradient satisfies Assumption 2 (and optionally Assumption 3), and we run PSGD (Algorithm 2) with parameters chosen as:
(1) 
Then, with probability at least $1 - \delta$, PSGD will visit an $\epsilon$-second-order stationary point at least once within the following number of iterations: $\tilde{O}(d/\epsilon^4)$ in general, and $\tilde{O}(1/\epsilon^4)$ (up to polylogarithmic factors in $d$) under Assumption 3.
Remark 6.
Remark 7 (Output a secondorder stationary point).
Theorem 5 provides the number of iterations required for PSGD to visit at least one $\epsilon$-second-order stationary point. It can be easily shown, with the same proof, that if we double the number of iterations, at least one half of the iterates will be $\epsilon$-second-order stationary points. Therefore, if we output an iterate uniformly at random, then with at least constant probability, it will be an $\epsilon$-second-order stationary point.
We also observe that when $\sigma = 0$ (i.e., the full gradient case), Theorem 5 recovers Theorem 4. Finally, Theorem 5 can be easily extended to the minibatch setting.
Theorem 8 (Minibatch Version).
For any $\epsilon, \delta > 0$: if the function $f$ satisfies Assumption 1, the stochastic gradient satisfies Assumption 2 (and optionally Assumption 3), and we run minibatch PSGD (Algorithm 3) with parameters chosen as:
(2) 
Then, with probability at least $1 - \delta$, minibatch PSGD will visit an $\epsilon$-second-order stationary point at least once within the following number of iterations:
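As a sketch of the minibatch variant (the oracle, batch size, and parameter values below are illustrative assumptions, not those prescribed by Eq.(2)):

```python
import numpy as np

def minibatch_psgd(stoch_grad, x0, eta=0.01, r=1e-3, m=16, T=2000, seed=0):
    """Perturbed SGD: average m stochastic gradients per step, then add an
    isotropic Gaussian perturbation with covariance (r**2 / d) * I."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(T):
        g = np.mean([stoch_grad(x, rng) for _ in range(m)], axis=0)
        xi = rng.normal(scale=r / np.sqrt(d), size=d)
        x = x - eta * (g + xi)
    return x

# Toy oracle: unbiased gradient of f(x) = 0.5 * ||x||^2 with bounded noise.
stoch_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.size)
x = minibatch_psgd(stoch_grad, x0=np.ones(4))
print(np.linalg.norm(x))  # near the minimizer x* = 0
```

Averaging over a minibatch of size $m$ reduces the variance of the gradient estimate by a factor of $m$, which is what allows the iteration count to shrink in the minibatch theorem.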
4 Simplified Proof for Perturbed Gradient Descent
In this section, we present a simple proof of the iteration complexity of PGD. While it is possible to prove Theorem 4 directly in this context, the addition of a perturbation in each step makes the analysis slightly more complicated than for the version of PGD considered in Jin et al. (2017a), where a perturbation is added only once in a while. In order to illustrate the proof ideas and make the proof transparent, we present a proof of the iteration complexity of Algorithm 4, which is the one considered in Jin et al. (2017a). Theorem 4 can be deduced as a special case of Theorem 5 (whose proof will be presented in Appendix A) by setting $\sigma = 0$.
Algorithm 4 adds a perturbation only when the norm of the gradient at the current iterate is small and the algorithm has not added a perturbation in the preceding iterations. Guarantees similar to Theorem 4 can be shown for this version of PGD, as follows:
Theorem 9.
There is an absolute constant $c$ such that the following holds. If $f$ satisfies Assumption 1, and we run PGD (Variant) (Algorithm 4) with parameters chosen as in Eq.(3), then, with probability at least $1-\delta$, within the following number of iterations at least one half of the iterations of PGD (Variant) will be $\epsilon$-second-order stationary points:
where $\Delta_f := f(x_0) - \min_x f(x)$.
In order to prove this theorem, we first specify our choice of hyperparameters $\eta, r$, and two quantities $\mathscr{F}$ and $\mathscr{T}$ which are frequently used:

(3)
Our high-level proof strategy is a proof by contradiction: when the current iterate is not an $\epsilon$-second-order stationary point, it must either have a large gradient or a strictly negative smallest Hessian eigenvalue, and we prove that in either case, PGD must decrease the function value by a large amount within a reasonable number of iterations. Finally, since the function value cannot decrease by more than $f(x_0) - \min_x f(x)$ in total, iterates that are not second-order stationary points can occupy only a small number of iterations.
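The counting behind this strategy can be summarized as follows (a sketch, where $\mathscr{F}$ denotes the per-escape function-value decrease fixed by Eq.(3), and the per-step decrease for large-gradient steps comes from the descent lemma below):

```latex
\underbrace{\textstyle\sum_{t}\big(f(x_t) - f(x_{t+1})\big)}_{\text{total decrease}}
  \;\le\; f(x_0) - \min_x f(x)
\;\Longrightarrow\;
\#\{\text{escape episodes}\} \le \frac{f(x_0) - \min_x f(x)}{\mathscr{F}},
\quad
\#\{\text{large-gradient steps}\} \le \frac{f(x_0) - \min_x f(x)}{\eta\epsilon^2/2}.
```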
First, we quantify the rate of decrease of the function value when the gradient is large.
Lemma 10 (Descent Lemma).
If $f$ satisfies Assumption 1 and the step size satisfies $\eta \le 1/\ell$, then the gradient descent sequence $\{x_t\}$ satisfies:

$$f(x_{t+1}) - f(x_t) \le -\frac{\eta}{2}\|\nabla f(x_t)\|^2.$$
Proof.
According to the $\ell$-gradient Lipschitz assumption, we have:

$$f(x_{t+1}) \le f(x_t) + \langle \nabla f(x_t), x_{t+1} - x_t\rangle + \frac{\ell}{2}\|x_{t+1} - x_t\|^2 = f(x_t) - \eta\Big(1 - \frac{\eta\ell}{2}\Big)\|\nabla f(x_t)\|^2 \le f(x_t) - \frac{\eta}{2}\|\nabla f(x_t)\|^2.$$
∎
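To sanity-check the lemma numerically, one can verify the claimed inequality on a toy quadratic (illustrative matrix and step size; for a quadratic, the gradient Lipschitz constant $\ell$ is the largest eigenvalue):

```python
import numpy as np

# f(x) = 0.5 * x^T H x is ell-gradient Lipschitz with ell = lambda_max(H).
H = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
ell = np.linalg.eigvalsh(H).max()
eta = 1.0 / ell

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    x_next = x - eta * grad(x)
    # Descent lemma: f(x_{t+1}) <= f(x_t) - (eta / 2) * ||grad f(x_t)||^2
    assert f(x_next) <= f(x) - 0.5 * eta * np.linalg.norm(grad(x)) ** 2 + 1e-12
print("descent lemma verified on 100 random points")
```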
Next is our key lemma, which shows that if the starting point has strictly negative curvature, then adding a perturbation and following it with gradient descent decreases the function value by a large amount within $\mathscr{T}$ iterations.
Lemma 11 (Escaping Saddle).
If $f$ satisfies Assumption 1 and $x_0$ satisfies $\|\nabla f(x_0)\| \le \epsilon$ and $\lambda_{\min}(\nabla^2 f(x_0)) \le -\sqrt{\rho\epsilon}$, then add a perturbation to $x_0$ and run gradient descent starting from the perturbed point $\tilde{x}_0$; with high probability:

$$f(x_{\mathscr{T}}) - f(\tilde{x}_0) \le -\mathscr{F},$$

where $x_{\mathscr{T}}$ is the $\mathscr{T}$-th gradient descent iterate starting from $\tilde{x}_0$.
In order to prove this, we need two further lemmas. The major simplification over Jin et al. (2017a) comes from the following lemma, which says that if the function value does not decrease too much over $t$ iterations, then all the iterates remain in a small neighborhood of $x_0$.
Lemma 12 (Improve or Localize).
Under the setting of Lemma 10, for any $\tau \le t$:

$$\|x_\tau - x_0\| \le \sqrt{2\eta t\,\big(f(x_0) - f(x_t)\big)}.$$
Proof.
Recall the gradient update $x_{t+1} = x_t - \eta\nabla f(x_t)$. Then, for any $\tau \le t$:

$$\|x_\tau - x_0\| \le \sum_{s=0}^{\tau-1}\|x_{s+1} - x_s\| = \eta\sum_{s=0}^{\tau-1}\|\nabla f(x_s)\| \overset{(1)}{\le} \eta\Big[\tau\sum_{s=0}^{\tau-1}\|\nabla f(x_s)\|^2\Big]^{1/2} \overset{(2)}{\le} \sqrt{2\eta t\,\big(f(x_0) - f(x_t)\big)},$$

where step (1) uses the Cauchy-Schwarz inequality, and step (2) is due to Lemma 10. ∎
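The lemma is easy to check numerically along a GD trajectory on a toy quadratic (illustrative constants; the bound uses $t$ total steps and is verified at every intermediate iterate):

```python
import numpy as np

# Check: for all tau <= t, ||x_tau - x_0|| <= sqrt(2 * eta * t * (f(x_0) - f(x_t))).
H = np.diag([1.0, 3.0])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
eta = 1.0 / 3.0  # eta = 1 / ell for this quadratic

xs = [np.array([1.0, -2.0])]
for _ in range(50):
    xs.append(xs[-1] - eta * grad(xs[-1]))

t = len(xs) - 1
bound = np.sqrt(2.0 * eta * t * (f(xs[0]) - f(xs[t])))
assert all(np.linalg.norm(x - xs[0]) <= bound + 1e-12 for x in xs)
print("improve-or-localize bound holds along the whole trajectory")
```

The dichotomy in the name is visible here: either the function value drops a lot (and we have improved), or the right-hand side is small and the iterates are localized near $x_0$.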
Second, we show that the stuck region (the set of points starting from which GD will remain near the saddle point for at least $\mathscr{T}$ iterations) is thin. We show this by tracking any pair of points that differ only along the escaping direction and are sufficiently far apart. We show that at least one of the two sequences is guaranteed to escape the saddle point with high probability, so the stuck region has small width along the escaping direction.
Lemma 13 (Coupling Sequence).
Suppose $f$ satisfies Assumption 1 and $\tilde{x}$ satisfies $\lambda_{\min}(\nabla^2 f(\tilde{x})) \le -\sqrt{\rho\epsilon}$. Let $\{x_t\}$, $\{x'_t\}$ be two gradient descent sequences which satisfy: (1) both $x_0$ and $x'_0$ lie in a small ball around $\tilde{x}$; (2) $x_0 - x'_0 = \omega e_1$, where $e_1$ is the minimum eigenvector direction of $\nabla^2 f(\tilde{x})$ and $\omega$ is sufficiently large. Then:

$$\min\big\{f(x_{\mathscr{T}}) - f(x_0),\; f(x'_{\mathscr{T}}) - f(x'_0)\big\} \le -\mathscr{F}.$$

Proof.
Assume the contrary, that is, neither sequence decreases the function value by $\mathscr{F}$ within $\mathscr{T}$ steps. Lemma 12 then implies localization of both sequences around $\tilde{x}$; that is, for any $t \le \mathscr{T}$:
(4) 
where the last step is due to our choice of $\eta, r$ as in Eq.(3). On the other hand, we can write the update equation for the difference $\hat{x}_t := x_t - x'_t$ as:
where the first term is the leading term, due to the initial difference $\hat{x}_0 = \omega e_1$, and the second term is the error term, resulting from the fact that the function is not quadratic. We now use induction to show that the error term always remains small compared to the leading term. That is:
The claim is true for the base case $t = 0$. Now suppose the induction claim is true up to $t$; we prove it for $t+1$. First, note that the leading term lies along the minimum eigenvector direction of $\nabla^2 f(\tilde{x})$. Thus, for any $\tau \le t$, we have:
By the Hessian Lipschitz property, we can bound the deviation of the Hessian from $\nabla^2 f(\tilde{x})$ along the trajectory; therefore:
where the second-to-last inequality used the induction hypothesis. By our choice of hyperparameters as in Eq.(3), the error term remains dominated by the leading term, which finishes the proof of the induction step.
Finally, the induction claim implies:
where step (1) uses the fact that the error term is dominated by the leading term for any $\tau \le \mathscr{T}$. This contradicts the localization bound Eq.(4), which finishes the proof. ∎
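The coupling phenomenon can be seen numerically on a quadratic saddle: two GD sequences that differ only along the minimum eigenvector direction separate at the geometric rate $(1+\eta\gamma)^t$, so at least one of them must leave any small neighborhood (toy example with illustrative constants):

```python
import numpy as np

# Quadratic saddle f(x) = 0.5 * x^T H x, with one negative eigenvalue -gamma.
gamma = 1.0
H = np.diag([2.0, -gamma])
grad = lambda x: H @ x
eta = 0.1

w0 = 1e-6  # initial separation along the escaping (minimum eigenvector) direction
x, xp = np.array([0.5, w0]), np.array([0.5, -w0])
for _ in range(30):
    x, xp = x - eta * grad(x), xp - eta * grad(xp)

# For a quadratic, the separation grows exactly like (1 + eta * gamma)^t.
sep = np.linalg.norm(x - xp)
print(sep / (2 * w0))  # equals (1 + eta * gamma)**30 up to floating point
```

Since both sequences cannot simultaneously stay in a ball whose diameter is smaller than this growing separation, at least one of them escapes; this is exactly the contradiction with the localization bound used in the proof.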
Proof of Lemma 11.
Proof of Theorem 9.
First, we set the total number of iterations $T$ to be:
Next, we choose a large enough absolute constant so that:
Then, we argue that, with high probability, Algorithm 4 will add a perturbation at most a bounded number of times. Otherwise, we could apply Lemma 11 every time we add a perturbation, and:
which cannot happen, as the total function value decrease is bounded by the initial suboptimality $f(x_0) - \min_x f(x)$. Finally, excluding those iterations that are within $\mathscr{T}$ steps after adding a perturbation, a constant fraction of the steps remain. Each of them is either a large-gradient step or an $\epsilon$-second-order stationary point. Among them, the number of large-gradient steps is bounded as well, because otherwise, by Lemma 10:
which again cannot happen. Therefore, we conclude that at least half of the iterations must be $\epsilon$-second-order stationary points. ∎
5 Conclusion
In this paper, we considered the problem of finding second-order stationary points with a stochastic gradient oracle, and presented the first result with linear dependence on the dimension. In the special case where the stochastic gradients are Lipschitz, the dependence on the dimension is improved to polylogarithmic. Further improvement over these bounds, especially in the dependence on the accuracy $\epsilon$, is an interesting open problem.
References

Agarwal et al. [2017] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima faster than gradient descent. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1195–1199. ACM, 2017.
Allen-Zhu [2018] Zeyuan Allen-Zhu. Natasha 2: Faster non-convex optimization than SGD. In Advances in Neural Information Processing Systems, pages 2680–2691, 2018.
Allen-Zhu and Li [2017] Zeyuan Allen-Zhu and Yuanzhi Li. Neon2: Finding local minima via first-order oracles. arXiv preprint arXiv:1711.06673, 2017.
 Bandeira et al. [2016] Afonso S Bandeira, Nicolas Boumal, and Vladislav Voroninski. On the lowrank approach for semidefinite programs arising in synchronization and community detection. In Conference on Learning Theory, pages 361–382, 2016.
 Bhojanapalli et al. [2016] Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Global optimality of local search for low rank matrix recovery. In Advances in Neural Information Processing Systems, pages 3873–3881, 2016.
 Blumensath and Davies [2009] Thomas Blumensath and Mike E Davies. Iterative hard thresholding for compressed sensing. Applied and computational harmonic analysis, 27(3):265–274, 2009.
 Boumal et al. [2016] Nicolas Boumal, Vlad Voroninski, and Afonso Bandeira. The nonconvex BurerMonteiro approach works on smooth semidefinite programs. In Advances in Neural Information Processing Systems, pages 2757–2765, 2016.
 Carmon et al. [2016] Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Accelerated methods for nonconvex optimization. arXiv preprint arXiv:1611.00756, 2016.
 Choromanska et al. [2014] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv:1412.0233, 2014.
Curtis et al. [2014] Frank E Curtis, Daniel P Robinson, and Mohammadreza Samadi. A trust region algorithm with a worst-case iteration complexity of $O(\epsilon^{-3/2})$ for nonconvex optimization. Mathematical Programming, pages 1–32, 2014.
 Daneshmand et al. [2018] Hadi Daneshmand, Jonas Kohler, Aurelien Lucchi, and Thomas Hofmann. Escaping saddles with stochastic gradients. arXiv preprint arXiv:1803.05999, 2018.

Fang et al. [2018] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In Advances in Neural Information Processing Systems, pages 687–697, 2018.
Fang et al. [2019] Cong Fang, Zhouchen Lin, and Tong Zhang. Sharp analysis for nonconvex SGD escaping from saddle points. arXiv preprint arXiv:1902.00247, 2019.

Ge et al. [2015] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Conference on Computational Learning Theory (COLT), 2015.
Ge et al. [2016] Rong Ge, Jason D Lee, and Tengyu Ma. Matrix completion has no spurious local minimum. In Advances in Neural Information Processing Systems, pages 2973–2981, 2016.
 Ge et al. [2017] Rong Ge, Chi Jin, and Yi Zheng. No spurious local minima in nonconvex low rank problems: A unified geometric analysis. arXiv preprint arXiv:1704.00708, 2017.
Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, Cambridge, 2016.
 Jin et al. [2017a] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In International Conference on Machine Learning (ICML), 2017a.
 Jin et al. [2017b] Chi Jin, Praneeth Netrapalli, and Michael I Jordan. Accelerated gradient descent escapes saddle points faster than gradient descent. arXiv preprint arXiv:1711.10456, 2017b.
 Jin et al. [2019] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, and Michael I. Jordan. A short note on concentration inequalities for random vectors with subgaussian norm. arXiv preprint arXiv:1902.03736, 2019.
 Kawaguchi [2016] Kenji Kawaguchi. Deep learning without poor local minima. In Advances In Neural Information Processing Systems, pages 586–594, 2016.
 Koren et al. [2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, (8):30–37, 2009.
 Lee et al. [2016] Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In Conference on Learning Theory, pages 1246–1257, 2016.
 Levy [2016] Kfir Y Levy. The power of normalization: Faster evasion of saddle points. arXiv preprint arXiv:1611.04831, 2016.
 Mei et al. [2017] Song Mei, Theodor Misiakiewicz, Andrea Montanari, and Roberto I Oliveira. Solving SDPs for synchronization and maxcut problems via the Grothendieck inequality. In Conference on Learning Theory (COLT), pages 1476–1515, 2017.
 Nesterov and Polyak [2006] Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
 Tripuraneni et al. [2018] Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, and Michael I Jordan. Stochastic cubic regularization for fast nonconvex optimization. In Advances in Neural Information Processing Systems, pages 2904–2913, 2018.
Xu et al. [2018] Yi Xu, Rong Jin, and Tianbao Yang. First-order stochastic algorithms for escaping from saddle points in almost linear time. In Advances in Neural Information Processing Systems, pages 5535–5545, 2018.
Appendix A Proof for Stochastic Case
In this section, we give the proof of Theorem 5.
A.1 Notation
Recall that the update equation of Algorithm 2 is $x_{t+1} = x_t - \eta\,(g(x_t) + \xi_t)$, where $g(x_t) = \nabla f(x_t;\theta_t)$ is the stochastic gradient and $\xi_t$ is the added Gaussian perturbation. Throughout this section, we write the combined noise at step $t$ compactly, so that the update equation can be rewritten as the true gradient step plus a zero-mean noise term. We also denote by $\mathcal{F}_t$ the corresponding filtration up to time step $t$. Recall our choice of parameters:
(5) 
where the log factor is defined as follows:
Here the constant is a sufficiently large absolute constant to be determined later. We also note that, in this section, the unspecified constants are absolute constants that do not depend on our choice of parameters; their values may change from line to line.
A.2 Descent Lemma
Lemma 14 (Descent Lemma).
Proof.
Since Algorithm 2 is Markovian, the operations in each iteration do not depend on the time step $t$. Thus, it suffices to prove Lemma 14 for the special case $t = 0$. Recall the update equation:
where the noise splits into the stochastic-gradient part and the perturbation part. By assumption, the stochastic-gradient noise is zero-mean and sub-Gaussian. The perturbation comes from a Gaussian, and thus by Lemma 24 is zero-mean norm-sub-Gaussian for some absolute constant. By Taylor expansion and the gradient Lipschitz property, we know:
Summing the inequality above over iterations, we have the following:
(6) 
For the second term on the RHS, applying Lemma 30, there exists an absolute constant such that, with high probability:
For the third term on the RHS of Eq.(6), applying Lemma 29, with high probability:
Substituting both of the above inequalities into Eq.(6), we have, with high probability:
This finishes the proof. ∎
Lemma 15 (Improve or Localize).
Proof.
By a similar argument as in the proof of Lemma 14, it suffices to prove Lemma 15 in the special case $t = 0$. According to Lemma 14, with high probability, for some absolute constant:
Therefore, for any fixed $\tau$, with high probability:
where in step (1) we use the Cauchy-Schwarz inequality and Lemma 27. Finally, applying a union bound over all $\tau$, we finish the proof. ∎
A.3 Escaping Saddle Points
This entire subsection is devoted to proving the following lemma: