# SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator

In this paper, we propose a new technique named Stochastic Path-Integrated Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of interest with significantly reduced computational cost. Combining SPIDER with the method of normalized gradient descent, we propose two new algorithms, namely SPIDER-SFO and SPIDER-SSO, that solve non-convex stochastic optimization problems using stochastic gradients only. We provide sharp error-bound results on their convergence rates. Specifically, we prove that the SPIDER-SFO and SPIDER-SSO algorithms achieve a record-breaking Õ(ϵ^-3) gradient computation cost to find an ϵ-approximate first-order and (ϵ, O(ϵ^0.5))-approximate second-order stationary point, respectively. In addition, we prove that SPIDER-SFO nearly matches the algorithmic lower bound for finding stationary points under the gradient Lipschitz assumption in the finite-sum setting.


## 1 Introduction

In this paper, we study the optimization problem

 minimize_{x∈R^d}    f(x) ≡ E[F(x;ζ)] (1.1)

where the stochastic component F(x;ζ), indexed by some random vector ζ, is smooth and possibly non-convex. Non-convex optimization problems of form (1.1) contain many large-scale statistical learning tasks, and optimization methods that solve (1.1) are gaining tremendous popularity due to their favorable computational and statistical efficiencies (Bottou, 2010; Bubeck et al., 2015; Bottou et al., 2018). Typical examples of form (1.1) include principal component analysis, estimation of graphical models, as well as training deep neural networks (Goodfellow et al., 2016). The expectation-minimization structure of the stochastic optimization problem (1.1) allows us to perform iterative updates and minimize the objective using a stochastic gradient as an estimator of its deterministic counterpart.

A special case of central interest is when the stochastic vector ζ is finitely sampled. In this finite-sum (or offline) case, we denote each component function as f_i(x), and (1.1) can be restated as

 minimize_{x∈R^d}    f(x) = (1/n)∑_{i=1}^n f_i(x) (1.2)

where n is the number of individual functions. Another case is when n is reasonably large or even infinite, so that running across the whole dataset is expensive or impossible; we refer to this as the online (or streaming) case. For simplicity of notation, we study the optimization problem of form (1.2) in both the finite-sum and on-line cases for the rest of this paper.
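As a minimal numerical companion to the finite-sum structure (1.2), the sketch below builds a toy one-dimensional instance in Python; the quadratic components f_i(x) = ½(x − a_i)² and the list `a` are hypothetical stand-ins, chosen only so that gradients are explicit. It checks that a sampled stochastic gradient is an unbiased estimator of the full (deterministic) gradient:

```python
import random

# Hypothetical 1-D finite-sum instance: f_i(x) = 0.5 * (x - a[i])**2,
# so f(x) = (1/n) * sum_i f_i(x) and the full gradient is x - mean(a).
a = [0.5, 1.5, 2.0, 4.0]
n = len(a)

def grad_i(x, i):
    """Stochastic gradient of one component f_i."""
    return x - a[i]

def full_grad(x):
    """Deterministic gradient of the finite-sum objective."""
    return sum(grad_i(x, i) for i in range(n)) / n

# Averaging many uniformly sampled stochastic gradients approaches
# the full gradient, illustrating unbiasedness.
random.seed(0)
x = 0.0
est = sum(grad_i(x, random.randrange(n)) for _ in range(20000)) / 20000
print(abs(est - full_grad(x)) < 0.05)
```

Each `grad_i` call touches a single component, which is the whole point of the stochastic oracle model: one never needs a full pass over the data to obtain an unbiased gradient estimate.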

One important task for non-convex optimization is to search for, given a precision accuracy ϵ, an ϵ-approximate first-order stationary point, i.e. a point x satisfying ∥∇f(x)∥ ≤ ϵ. In this paper, we aim to propose a new technique, called the Stochastic Path-Integrated Differential EstimatoR (Spider), which enables us to construct an estimator that tracks a deterministic quantity with significantly lower sampling costs. As the readers will see, the Spider technique further allows us to design an algorithm with a faster rate of convergence for non-convex problem (1.2), in which we utilize the idea of Normalized Gradient Descent (NGD) (Nesterov, 2004; Hazan et al., 2015). NGD is a variant of Gradient Descent (GD) where the stepsize is picked to be inverse-proportional to the norm of the full gradient. Compared to GD, NGD exemplifies faster convergence, especially in the neighborhood of stationary points (Levy, 2016). However, NGD has been less popular due to its requirement of accessing the full gradient and its norm at each update. In this paper, we estimate and track the gradient and its norm via the Spider technique and then hybridize it with NGD. Measured by gradient cost, i.e. the total number of stochastic gradient computations, our proposed Spider-SFO algorithm achieves a faster rate of convergence of Õ(min(n^{1/2}ϵ⁻², ϵ⁻³)), which outperforms the previous best-known results in both the finite-sum (Allen-Zhu & Hazan, 2016; Reddi et al., 2016) and on-line cases (Lei et al., 2017) by a factor of Õ(min(n^{1/6}, ϵ^{-1/3})).

For the task of finding stationary points, for which we have already achieved a faster convergence rate via our proposed Spider-SFO algorithm, a natural follow-up question is: is Spider-SFO optimal for an appropriate class of smooth functions? In this paper, we provide an affirmative answer to this question in the finite-sum case. To be specific, inspired by a counterexample proposed by Carmon et al. (2017b), we prove that the gradient cost upper bound of the Spider-SFO algorithm matches an algorithmic lower bound. Put differently, the gradient cost of Spider-SFO cannot be further improved for finding stationary points of some particular non-convex functions.

Nevertheless, it has been shown that for machine learning methods such as deep learning, approximate stationary points that have at least one negative Hessian direction, including saddle points and local maximizers, are often not sufficient and need to be avoided or escaped from (Dauphin et al., 2014; Ge et al., 2015). Specifically, under the smoothness condition for f and an additional Hessian-Lipschitz condition, we aim to find an (ϵ, δ)-approximate second-order stationary point, i.e. a point x satisfying ∥∇f(x)∥ ≤ ϵ and λ_min(∇²f(x)) ≥ −δ (Nesterov & Polyak, 2006). As a side result, we propose a variant of our Spider-SFO algorithm, named Spider-SFO+ (Algorithm 2), for finding an approximate second-order stationary point, based on a so-called Negative-Curvature-Search method. Under the additional Hessian-Lipschitz assumption, Spider-SFO+ achieves an (ϵ, O(ϵ^{0.5}))-approximate second-order stationary point at a gradient cost of Õ(ϵ⁻³). This indicates that our algorithm improves upon the best-known gradient cost in the on-line case (Allen-Zhu & Li, 2018). For the finite-sum case, the gradient cost of Spider is sharper than that of the state-of-the-art Neon+FastCubic/CDHS algorithm in Agarwal et al. (2017); Carmon et al. (2016) when n is relatively large. (In the finite-sum case with small n, Spider-SFO+ has a slower rate than the state-of-the-art rate achieved by Neon+FastCubic/CDHS (Allen-Zhu & Li, 2018): Neon+FastCubic/CDHS exploits appropriate acceleration techniques, which have not been considered for Spider.)

### 1.1 Related Works

In recent years, there has been a surge of literature in the machine learning community analyzing the convergence properties of non-convex optimization algorithms. Limited by space, we list the works that we believe are most related to this one. We refer the readers to the monograph by Jain et al. (2017) and the references therein for recent general and model-specific convergence rate results in non-convex optimization.

#### First- and Zeroth-Order Optimization and Variance Reduction

For the general problem of finding approximate stationary points, under the smoothness condition on f, it is known that vanilla Gradient Descent (GD) and Stochastic Gradient Descent (SGD), which trace back to Cauchy (1847) and Robbins & Monro (1951), achieve an ϵ-approximate stationary point at gradient costs of O(ϵ⁻²) and O(ϵ⁻⁴), respectively (Nesterov, 2004; Ghadimi & Lan, 2013; Nesterov & Spokoiny, 2011; Shamir, 2017).

Recently, the convergence rates of GD and SGD have been improved by variance-reduction-type algorithms (Johnson & Zhang, 2013; Schmidt et al., 2017). In particular, the finite-sum Stochastic Variance-Reduced Gradient (SVRG) and on-line Stochastically Controlled Stochastic Gradient (SCSG) methods reduce the gradient cost to O(n^{2/3}ϵ⁻²) and O(ϵ^{-10/3}), respectively (Allen-Zhu & Hazan, 2016; Reddi et al., 2016; Lei et al., 2017).

#### First-order methods for finding approximate second-order stationary points

Recently, a large body of literature has studied the problem of how to avoid or escape saddle points and achieve an approximate second-order stationary point at a polynomial gradient cost (Ge et al., 2015; Jin et al., 2017a; Xu et al., 2017; Allen-Zhu & Li, 2018; Hazan et al., 2015; Levy, 2016; Allen-Zhu, 2018; Reddi et al., 2018; Tripuraneni et al., 2018; Jin et al., 2017b; Lee et al., 2016; Agarwal et al., 2017; Carmon et al., 2016; Paquette et al., 2018). Among them, Ge et al. (2015); Jin et al. (2017a) proposed noise-perturbed variants of Gradient Descent (PGD) and Stochastic Gradient Descent that escape from all saddle points and achieve an approximate second-order stationary point at a polynomial cost in stochastic gradients. Levy (2016) proposed a noise-perturbed variant of NGD, which yields faster evasion of saddle points than GD.

#### Online PCA and the NEON method

In late 2017, two groups, Xu et al. (2017) and Allen-Zhu & Li (2018), proposed a generic saddle-point-escaping method called Neon, a Negative-Curvature-Search method using stochastic gradients. Using the Neon method, one can convert a series of optimization algorithms whose update rules use stochastic gradients and Hessian-vector products (GD, SVRG, FastCubic/CDHS, SGD, SCSG, Natasha2, etc.) into ones using only stochastic gradients, without increasing the gradient cost. The idea of Neon was built upon Oja's iteration for principal component estimation (Oja, 1982), and its global convergence rate was proved to be near-optimal (Li et al., 2017; Jain et al., 2016). Allen-Zhu & Li (2017) later extended this analysis to the rank-k case as well as the gap-free case, the latter of which serves as the pillar of the Neon method.

#### Other concurrent works

As the current work was carried out in its final phase, the authors became aware that an idea of resemblance was earlier presented in an algorithm named the StochAstic Recursive grAdient algoritHm (SARAH) (Nguyen et al., 2017a, b). Both our Spider-type algorithms and theirs adopt the recursive stochastic gradient update framework. Nevertheless, our techniques essentially differ from the works Nguyen et al. (2017a, b) in two aspects:

1. The version of SARAH proposed by Nguyen et al. (2017a, b) can be seen as a variant of gradient descent, while ours hybridizes the Spider technique with a stochastic version of NGD.

2. Nguyen et al. (2017a, b) adopt a large-stepsize setting (in fact, their goal was to design a memory-saving variant of SAGA (Defazio et al., 2014)), while our algorithms adopt a small stepsize that is proportional to ϵ.

Soon after the initial submission to NIPS and the arXiv release of this paper, we became aware that similar convergence rate results for stochastic first-order methods were also achieved independently by the so-called SNVRG algorithm (Zhou et al., 2018b, a). (To our best knowledge, the works by Zhou et al. (2018b, a) appeared on-line on June 20, 2018 and June 22, 2018, respectively.) SNVRG (Zhou et al., 2018b) obtains a gradient complexity of Õ(ϵ⁻³) for finding an approximate first-order stationary point, and achieves an analogous gradient complexity for finding an approximate second-order stationary point (Zhou et al., 2018a) for a wide range of δ. By exploiting a third-order smoothness condition, SNVRG can also achieve an (ϵ, O(ϵ^{0.5}))-approximate second-order stationary point at a comparable gradient cost.

### 1.2 Our Contributions

In this work, we propose the Stochastic Path-Integrated Differential Estimator (Spider) technique, which significantly avoids excessive access to the stochastic oracle and reduces the time complexity. This technique can potentially be applied to many stochastic estimation problems.

1. As a first application of our Spider technique, we propose the Spider-SFO algorithm (Algorithm 1) for finding an approximate first-order stationary point for the non-convex stochastic optimization problem (1.2), and prove the optimality of this rate in at least one case. Inspired by the recent works Johnson & Zhang (2013); Carmon et al. (2016, 2017b), and independently of Zhou et al. (2018b, a), this is the first time that a gradient cost of Õ(min(n^{1/2}ϵ⁻², ϵ⁻³)), in both upper and (finite-sum only) lower bound, for finding first-order stationary points of problem (1.2) has been obtained.

2. Following Carmon et al. (2016); Allen-Zhu & Li (2018); Xu et al. (2017), we propose the Spider-SFO+ algorithm (Algorithm 2) for finding an approximate second-order stationary point for non-convex stochastic optimization. To the best of our knowledge, this is also the first time that such a gradient cost has been achieved under standard assumptions.

3. As a second application of our Spider technique, we apply it to zeroth-order optimization for problem (1.2), obtaining a correspondingly reduced number of individual function accesses. To the best of our knowledge, this is also the first time that a variance-reduction technique (Schmidt et al., 2017; Johnson & Zhang, 2013) has been used to reduce the number of individual function accesses for non-convex problems to such a complexity.

4. We propose a much simpler analysis for proving convergence to a stationary point. One can flexibly apply our proof techniques to analyze other algorithms, e.g. SGD, SVRG (Johnson & Zhang, 2013), and SAGA (Defazio et al., 2014).

Organization. The rest of this paper is organized as follows. §2 presents the core idea of the stochastic path-integrated differential estimator, which can track certain quantities with much-reduced computational costs. §3 provides the Spider method for stochastic first-order optimization, states the convergence rate theorems for finding approximate first-order and second-order stationary points, and details a comparison with concurrent works. §4 provides the Spider method for stochastic zeroth-order optimization and the relevant convergence rate theorems. §5 concludes the paper with future directions. All detailed proofs are deferred to the appendix, in their order of appearance.

Notation. Throughout this paper, we treat the problem parameters, to be specified later, as global constants. Let ∥·∥ denote the Euclidean norm of a vector or the spectral norm of a square matrix. For a sequence of vectors or positive scalars, denote a_n = O(b_n) if there is a global constant C such that a_n ≤ C·b_n, and let Õ(·) additionally hide a poly-logarithmic factor of the parameters. Denote a_n = Ω(b_n) if there is a global constant C such that a_n ≥ C·b_n. Let λ_min(A) denote the least eigenvalue of a real symmetric matrix A. For fixed k ≥ 0, let x_{0:k} denote the sequence {x_0, …, x_k}. Let S denote a multi-set of samples (a generic set that allows elements of multiple instances) and |S| denote its cardinality. For simplicity, we further denote the averaged sub-sampled stochastic estimator B_S(x) := (1/|S|)∑_{i∈S} B_i(x) and the averaged sub-sampled gradient ∇f_S(x) := (1/|S|)∑_{i∈S} ∇f_i(x). Other notations are explained at their first appearance.

## 2 Stochastic Path-Integrated Differential Estimator: Core Idea

In this section, we present in detail the underlying idea of our Stochastic Path-Integrated Differential Estimator (Spider) technique behind the algorithm design. As the readers will see, such technique significantly avoids excessive access of the stochastic oracle and reduces the complexity, which is of independent interest and has potential applications in many stochastic estimation problems.

Let us consider an arbitrary deterministic vector quantity Q(x). Assume that we observe a sequence x̂_{0:K}, and that we want to dynamically track Q(x̂_k) for k = 0, 1, …, K. Assume further that we have an initial estimate Q̃(x̂_0) of Q(x̂_0), and an unbiased estimate ξ_k(x̂_{0:k}) of the difference Q(x̂_k) − Q(x̂_{k−1}) such that for each k = 1, …, K,

 E[ξk(^x0:k)∣^x0:k]=Q(^xk)−Q(^xk−1).

Then we can integrate (in the discrete sense) the stochastic differential estimate as

 ~Q(^x0:K):=~Q(^x0)+K∑k=1ξk(^x0:k). (2.1)

We call the estimator Q̃(x̂_{0:K}) the Stochastic Path-Integrated Differential EstimatoR, or Spider for brevity. We conclude the following proposition, which bounds the error of our estimator in terms of both expectation and high probability:

###### Proposition 1.

We have

1. The martingale variance bound:

 E∥~Q(^x0:K)−Q(^xK)∥2=E∥~Q(^x0)−Q(^x0)∥2+K∑k=1E∥ξk(^x0:k)−(Q(^xk)−Q(^xk−1))∥2. (2.2)
2. Suppose

 ∥~Q(^x0)−Q(^x0)∥≤b0 (2.3)

and for each k = 1, …, K,

 ∥ξk(^x0:k)−(Q(^xk)−Q(^xk−1))∥≤bk. (2.4)

Then for each k = 0, …, K and any given γ ∈ (0, 1), we have, with probability at least 1 − γ,

 ∥~Q(^x0:k)−Q(^xk)∥ ≤ 2√((∑_{s=0}^k b_s²)·log(1/γ)). (2.5)

Proposition 1(i) can be easily concluded using the property of square-integrable martingales. To prove the high-probability bound in Proposition 1(ii), we apply an Azuma-Hoeffding-type concentration inequality (Pinelis, 1994). See §A in the Appendix for more details.
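To make the path-integration (2.1) concrete, here is a minimal numerical sketch that tracks a toy deterministic quantity Q(x) along a path using only sampled component differences; the linear components c_i·x are hypothetical, chosen so that Q and its increments are explicit:

```python
import random

random.seed(1)
# Hypothetical setup: track Q(x) = mean_i (c[i] * x) along a path
# x_0, ..., x_K, using one sampled component difference per step,
# in the spirit of the path-integrated estimator (2.1).
c = [0.5, 1.0, 1.5, 2.0]
n = len(c)

def Q(x):
    """The deterministic quantity of interest."""
    return sum(ci * x for ci in c) / n

path = [0.0, 0.1, 0.25, 0.3, 0.5]

# The initial estimate is exact here; each xi_k is an unbiased
# estimate of Q(x_k) - Q(x_{k-1}) built from a single sampled component.
Q_tilde = Q(path[0])
for k in range(1, len(path)):
    i = random.randrange(n)                       # sample one component
    xi_k = c[i] * path[k] - c[i] * path[k - 1]    # sampled difference
    Q_tilde += xi_k                               # path-integrate

# The estimate is unbiased but noisy; with small steps the accumulated
# error stays bounded, as quantified by Proposition 1.
print(abs(Q_tilde - Q(path[-1])) < 1.0)
```

Each step samples only one component instead of recomputing the full average, which is exactly the sampling-cost saving the Spider idea exploits.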

Now, let B_i map any x to a random estimate B_i(x) such that, conditioning on the observed sequence x_{0:k}, we have, for each i,

 E[Bi(xk)−Bi(xk−1)∣x0:k]=Vk−Vk−1. (2.6)

At each step, let S_* be a multi-set that samples |S_*| elements of [n] with replacement, and let the stochastic estimators satisfy

 E∥Bi(x)−Bi(y)∥2≤L2B∥x−y∥2, (2.7)

for all x, y. Finally, we set our estimator V_k of B(x_k) as

 Vk=BS∗(xk)−BS∗(xk−1)+Vk−1.

Applying Proposition 1 immediately concludes the following lemma, which gives an error bound for the estimator V_k in terms of the second moment of the estimation error:

###### Lemma 1.

Suppose that ∥x_k − x_{k−1}∥ ≤ ϵ₁ for all k. We then have, under condition (2.7), that for all k,

 E∥Vk−B(xk)∥2≤kL2Bϵ21S∗+E∥V0−B(x0)∥2. (2.8)

It turns out that one can use Spider to track many quantities of interest, such as stochastic gradients, function values, zeroth-order gradient estimates, functionals of Hessian matrices, etc. Our proposed Spider-based algorithms in this paper take B_i to be the stochastic gradient and the zeroth-order gradient estimate, respectively.

## 3 SPIDER for Stochastic First-Order Method

In this section, we apply Spider to the task of finding both first-order and second-order stationary points in non-convex stochastic optimization. The main advantage of Spider-SFO lies in using Spider to estimate the gradient at a low computation cost. We introduce the basic settings and assumptions in §3.1, and propose the main error-bound theorems for finding approximate first-order and second-order stationary points in §3.2 and §3.3, respectively.

### 3.1 Settings and Assumptions

We first introduce the formal definition of approximate first-order and second-order stationary points, as follows.

###### Definition 1.

We call x an ϵ-approximate first-order stationary point, or simply an FSP, if

 ∥∇f(x)∥≤ϵ. (3.1)

Also, we call x an (ϵ, δ)-approximate second-order stationary point, or simply an SSP, if

 ∥∇f(x)∥≤ϵ,λmin(∇2f(x))≥−δ. (3.2)
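As a quick numerical companion to Definition 1 (a sketch with hypothetical helper names, not part of the paper's algorithms), the snippet below checks the FSP and SSP conditions (3.1)-(3.2) for an explicit two-dimensional saddle:

```python
import math

def is_fsp(grad, eps):
    """FSP check (3.1): gradient norm at most eps."""
    return math.hypot(grad[0], grad[1]) <= eps

def min_eig_2x2(H):
    """Least eigenvalue of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = H[0][0], H[0][1], H[1][1]
    tr, det = a + d, a * d - b * b
    return tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))

def is_ssp(grad, H, eps, delta):
    """SSP check (3.2): small gradient and no very negative curvature."""
    return is_fsp(grad, eps) and min_eig_2x2(H) >= -delta

# Saddle of f(x, y) = x^2 - y^2 at the origin: the gradient vanishes,
# so it is an FSP, but the Hessian has eigenvalue -2, so it is not an
# (eps, delta)-SSP for small delta.
g = (0.0, 0.0)
H = [[2.0, 0.0], [0.0, -2.0]]
print(is_fsp(g, 1e-3), is_ssp(g, H, 1e-3, 0.1))
```

This is exactly the situation the second-order analysis targets: a point can pass the first-order test while still sitting on a strongly negative curvature direction.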

The definition of an (ϵ, δ)-approximate second-order stationary point generalizes the classical version, in which δ = (ρϵ)^{1/2}; see e.g. Nesterov & Polyak (2006). For our purpose of analysis, we also pose the following additional assumption:

###### Assumption 1.

We assume the following

1. Δ := f(x⁰) − f* < ∞, where f* is the global infimum value of f;

2. the component function f_i has an averaged L-Lipschitz gradient, i.e. for all x, y,

 E∥∇fi(x)−∇fi(y)∥2≤L2∥x−y∥2;
3. (for the on-line case only) the stochastic gradient has a finite variance bounded by σ², i.e. for all x,

 E∥∇fi(x)−∇f(x)∥2≤σ2.

Alternatively, to obtain high-probability results using concentration inequalities, we propose the following more stringent assumptions:

###### Assumption 2.

We assume that Assumption 1 holds and, in addition,

1. (ii') (Optional) each component function f_i has an L-Lipschitz continuous gradient, i.e. for all x, y,

 ∥∇fi(x)−∇fi(y)∥≤L∥x−y∥.

Note that when each f_i is twice continuously differentiable, Assumption 2 (ii') is equivalent to ∥∇²f_i(x)∥ ≤ L for all x. Assumption 1 (ii) is weaker than the additional Assumption 2 (ii'), since the absolute norm squared bounds the variance for any random vector.

2. (iii') (for the on-line case only) the gradient of each component function has variance bounded by σ² with probability 1, i.e. for all x,

 ∥∇fi(x)−∇f(x)∥2≤σ2.

Assumption 2 is common when applying concentration laws to obtain high-probability results. (In this paper, we use an Azuma-Hoeffding-type concentration inequality to obtain high-probability results, as in Xu et al. (2017); Allen-Zhu & Li (2018). By applying a Bernstein inequality under Assumption 1, the parameters in Assumption 2 are allowed to be larger without hurting the convergence rate.)

For the problem of finding an (ϵ, δ)-approximate second-order stationary point, we pose the following additional assumption:

###### Assumption 3.

We assume that Assumption 2 (including (ii')) holds and, in addition, each component function f_i has a ρ-Lipschitz continuous Hessian, i.e. for all x, y,

 ∥∇2fi(x)−∇2fi(y)∥≤ρ∥x−y∥.

We emphasize that Assumptions 1, 2, and 3 are standard for non-convex stochastic optimization (Agarwal et al., 2017; Carmon et al., 2017b; Jin et al., 2017a; Xu et al., 2017; Allen-Zhu & Li, 2018).

### 3.2 First-Order Stationary Point

Recall that NGD has the iteration update rule

 xk+1=xk−η∇f(xk)∥∇f(xk)∥, (3.3)

where η is a constant stepsize. The NGD update rule (3.3) ensures that ∥x_{k+1} − x_k∥ is constantly equal to the stepsize η, and it can quickly escape saddle points and converge to a second-order stationary point (Levy, 2016). We propose Spider-SFO in Algorithm 1, which can be seen as a stochastic variant of NGD with the Spider technique applied, so as to maintain an estimator v_k of ∇f(x_k) in each epoch at high accuracy under a limited gradient budget.
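The plain NGD update (3.3), which Spider-SFO builds upon, can be sketched on a toy smooth function as follows; the quadratic objective and stepsize are illustrative only. Note that every step has length exactly η:

```python
import math

# Toy smooth objective f(x, y) = 0.5*(x^2 + 4*y^2) with explicit gradient.
def grad(p):
    return (p[0], 4.0 * p[1])

eta = 0.1          # constant stepsize; each NGD step travels exactly eta
p = (2.0, 1.0)
for _ in range(200):
    g = grad(p)
    norm = math.hypot(g[0], g[1])
    if norm < 1e-8:
        break
    # NGD update (3.3): move eta along the unit gradient direction
    p = (p[0] - eta * g[0] / norm, p[1] - eta * g[1] / norm)

print(math.hypot(*grad(p)) < 1.0)  # near a stationary point
```

Because the travel distance per step is fixed, the iterate cannot stall in flat regions; the price, as noted above, is that the full gradient norm must be known, which is what the Spider estimate supplies in the stochastic setting.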

To analyze the convergence rate of Spider-SFO, let us first consider the on-line case for Algorithm 1. We let the input parameters be

 S1 = 2σ²/ϵ², S2 = 2σ/(ϵn0), η = ϵ/(Ln0), ηk = min(ϵ/(Ln0∥vk∥), 1/(2Ln0)), q = σ/(n0ϵ), (3.4)

where n0 ≥ 1 is a free parameter to choose. (When n0 = 1, the mini-batch size is S2 = 2σ/ϵ, which is the largest mini-batch size that Algorithm 1 allows to choose.) In this case, v_k in Line 5 of Algorithm 1 is a Spider for ∇f(x_k). To see this, recall that ∇f_{S2}(x_k) is the averaged stochastic gradient drawn at step k, and

 E[∇fS2(xk)−∇fS2(xk−1)∣x0:k]=∇f(xk)−∇f(xk−1). (3.5)

Plugging B_i = ∇f_i and V_k = v_k into Lemma 1 of §2, we can use v_k in Algorithm 1 as the Spider of ∇f(x_k), and we conclude the following lemma, which is pivotal to our analysis.

###### Lemma 2.

Set the parameters S1, S2, ηk, and q as in (3.4), and let k0 = ⌊k/q⌋·q. Then under Assumption 1, we have

 E[∥vk−∇f(xk)∥2∣x0:k0]≤ϵ2.

Here we compute the conditional expectation over the randomness of the iterates after step k0.

Lemma 2 shows that our Spider v_k maintains an estimation error of O(ϵ²) for ∇f(x_k). Using this lemma, we are ready to present the following results for the Stochastic First-Order (SFO) method for finding first-order stationary points of (1.2).

#### Upper Bound for Finding First-Order Stationary Points, in Expectation

###### Theorem 1 (First-Order Stationary Point, on-line setting, expectation).

For the on-line case, set the parameters S1, S2, ηk, and q as in (3.4). Then under Assumption 1, for Algorithm 1 (with the corresponding OPTION), after the prescribed number of iterations we have

 E[∥∇f(~x)∥]≤5ϵ. (3.6)

The gradient cost is bounded by O(ΔLσϵ⁻³ + σ²ϵ⁻²) for any choice of n0. Treating Δ, L, and σ as positive constants, the stochastic gradient complexity is O(ϵ⁻³).

The relatively reduced mini-batch size serves as a key ingredient for the superior performance of Spider-SFO. For illustration, let us compare the sampling efficiency of SGD, SCSG and Spider-SFO in their special cases. With some involved analysis of these algorithms, we can conclude that, to ensure a sufficient function value decrease at each iteration,

1. for SGD, the choice of mini-batch size is O(ϵ⁻²);

2. for SCSG (Lei et al., 2017) and Natasha2 (Allen-Zhu, 2018), the mini-batch size is O(ϵ⁻¹);

3. our Spider-SFO only needs a reduced mini-batch size of S2 = 2σ/(ϵn0).

Turning to the finite-sum case, analogously to the on-line case, we let

 S2 = n^{1/2}/n0, η = ϵ/(Ln0), ηk = min(ϵ/(Ln0∥vk∥), 1/(2Ln0)), q = n0·n^{1/2}, (3.7)

where n0 ∈ [1, n^{1/2}]. In this case, one computes the full gradient in Line 3 of Algorithm 1. We conclude our second upper-bound result:

###### Theorem 2 (First-Order Stationary Point, finite-sum setting).

In the finite-sum case, set the parameters S2, ηk, and q as in (3.7), and let S1 = [n], i.e. we obtain the full gradient in Line 3. Then under Assumption 1, the gradient cost is bounded by O(n + ΔLn^{1/2}ϵ⁻²) for any choice of n0. Treating Δ and L as positive constants, the stochastic gradient complexity is O(n + n^{1/2}ϵ⁻²).
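For intuition, here is a simplified one-dimensional sketch of the Spider-SFO recursion: every q steps the estimate is reset to the full gradient, and in between it is updated with sampled gradient differences while the stepsize mimics normalized gradient descent. The quadratic components f_i(x) = ½(x − a_i)² and all parameter values are illustrative; this is a toy sketch, not the full Algorithm 1:

```python
import random

random.seed(0)
# Hypothetical finite-sum instance: f_i(x) = 0.5 * (x - a[i])**2, L = 1.
a = [random.gauss(0.0, 1.0) for _ in range(64)]
n = len(a)

def full(x):
    """Full gradient of f(x) = (1/n) * sum_i f_i(x)."""
    return x - sum(a) / n

eps, n0, L, q, S2 = 0.01, 1.0, 1.0, 8, 8   # illustrative parameters
x, x_prev, v = 2.0, 2.0, 0.0
for k in range(400):
    if k % q == 0:
        v = full(x)                          # epoch restart: exact gradient
    else:
        S = [random.randrange(n) for _ in range(S2)]
        diff = sum((x - a[i]) - (x_prev - a[i]) for i in S) / S2
        v = diff + v                         # Spider update of the estimator
    if abs(v) <= 2 * eps:                    # approximate stationarity
        break
    # normalized-gradient-style stepsize: each step moves at most eps/(L*n0)
    eta = min(eps / (L * n0 * abs(v)), 1.0 / (2 * L * n0))
    x_prev, x = x, x - eta * v

print(abs(full(x)) < 0.1)
```

The short, bounded step length is what keeps the accumulated estimator error small between epoch restarts (cf. Lemma 1 and Lemma 2).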

#### Lower Bound for Finding First-Order Stationary Points

To conclude the optimality of our algorithm, we need an algorithmic lower-bound result (Carmon et al., 2017b; Woodworth & Srebro, 2016). Consider the finite-sum case and any randomized algorithm A that maps functions f to a sequence of iterates in R^d, paired with chosen indices, with

 [xk;ik]=Ak−1(ξ,∇fi0(x0),∇fi1(x1),…,∇fik−1(xk−1)),k≥1, (3.8)

where the A^k are measurable mappings into R^d × [n], i_k is the index of the individual function chosen by A at iteration k, and ξ is a uniform random vector on [0, 1]. Moreover, x⁰ = A⁰(ξ), where A⁰ is a measurable mapping.

###### Theorem 3 (Lower bound for SFO for the finite-sum setting).

For any L > 0, Δ > 0, ϵ > 0, any n ≤ O(Δ²L²ϵ⁻⁴), and any algorithm A satisfying (3.8), there exists a dimension d and a function f satisfying Assumption 1 in the finite-sum case, such that in order to find a point x̃ for which ∥∇f(x̃)∥ ≤ ϵ, A must cost at least Ω(ΔLn^{1/2}ϵ⁻²) stochastic gradient accesses.

Note that the condition on n in Theorem 3 ensures that the lower bound Ω(ΔLn^{1/2}ϵ⁻²) is at least Ω(n); hence our upper bound in Theorem 2 matches the lower bound in Theorem 3 up to a constant factor of relevant parameters, and is hence near-optimal. Inspired by Carmon et al. (2017b), our proof of Theorem 3 utilizes a specific counterexample function that requires at least Ω(ΔLn^{1/2}ϵ⁻²) stochastic gradient accesses. Note that Carmon et al. (2017b) analyzed this counterexample in the deterministic case, and we generalize the analysis to the finite-sum case.

###### Remark 1.

Note that by taking n as large as the condition in Theorem 3 permits, the lower-bound complexity in Theorem 3 can exceed the on-line upper bound. We emphasize that this does not violate the upper bound in the on-line case [Theorem 1], since the counterexample established in the lower bound depends not on the stochastic gradient variance σ² specified in Assumption 1(iii), but on the component number n. To obtain a lower-bound result for the on-line case under the additional Assumption 1(iii), with more effort one might be able to construct a second counterexample that requires Ω(ϵ⁻³) stochastic gradient accesses with knowledge of σ instead of n. We leave this as future work.

#### Upper Bound for Finding First-Order Stationary Points, in High-Probability

We now consider obtaining high-probability results. With Theorem 1 and Theorem 2 in hand, by Markov's inequality we have ∥∇f(x̃)∥ ≤ O(ϵ/γ) with probability at least 1 − γ. Thus a straightforward way to obtain a high-probability result is to add a verification step at the end of Algorithm 1, in which we check whether x̃ satisfies ∥∇f(x̃)∥ ≤ O(ϵ) (for the on-line case, when exact gradients are inaccessible, under Assumption 2 (iii') we can draw enough samples to estimate ∇f(x̃) to high accuracy). If the check fails, we can restart Algorithm 1 (at most logarithmically many times) until it finds a desired solution. However, because this requires running Algorithm 1 multiple times, we show in the following that, under Assumption 2 (including (ii')), the original Algorithm 1 obtains a solution, at the cost of an additional poly-logarithmic factor, with high probability.
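The verify-and-restart scheme described above can be sketched abstractly as follows; the inner `solver` is a hypothetical stand-in for a randomized routine that succeeds with constant probability, not Algorithm 1 itself:

```python
import random

random.seed(0)

def solver():
    """Hypothetical randomized solver: returns a gradient-norm proxy.

    It succeeds (small gradient norm) with probability 1/2, modeling a
    Markov-inequality-style constant-probability guarantee.
    """
    return 0.05 if random.random() < 0.5 else 3.0

def verified_solve(eps, max_restarts=20):
    """Verify the output and restart on failure.

    Each restart fails independently with probability 1/2, so the overall
    failure probability decays geometrically: 2 ** (-max_restarts).
    """
    for _ in range(max_restarts):
        g = solver()
        if g <= eps:        # verification step: check the gradient norm
            return g
    return None

result = verified_solve(0.1)
print(result is not None)
```

Logarithmically many restarts therefore suffice to turn a constant-probability guarantee into a high-probability one, which is the mechanism invoked in the paragraph above.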

###### Theorem 4 (First-Order Stationary Point, on-line setting, high probability).

For the on-line case, set the parameters S1, S2, ηk, and q as in (3.4) (with ϵ rescaled to ϵ̃). Then under Assumption 2 (including (ii')), with probability at least 1 − γ, Algorithm 1 terminates and outputs an x_K satisfying

 ∥vK∥≤2~ϵand∥∇f(xK)∥≤3~ϵ. (3.9)

The gradient cost to find an FSP satisfying (3.9) with probability 1 − γ is bounded by Õ(ΔLσϵ⁻³ + σ²ϵ⁻²) for any choice of n0. Treating Δ, L, and σ as constants, the stochastic gradient complexity is Õ(ϵ⁻³).

###### Theorem 5 (First-Order Stationary Point, finite-sum setting).

In the finite-sum case, set the parameters S2, ηk, and q as in (3.7) (with ϵ rescaled to ϵ̃), and let S1 = [n], i.e. we obtain the full gradient in Line 3. Then under Assumption 2 (including (ii')), with probability at least 1 − γ, Algorithm 1 terminates and outputs an x_K satisfying

 ∥vK∥≤2~ϵand∥∇f(xK)∥≤3~ϵ. (3.10)

The gradient cost to find an FSP satisfying (3.10) with probability 1 − γ is bounded by Õ(n + ΔLn^{1/2}ϵ⁻²) for any choice of n0. Treating Δ and L as constants, the stochastic gradient complexity is Õ(n + n^{1/2}ϵ⁻²).

### 3.3 Second-Order Stationary Point

To find a second-order stationary point satisfying (3.2), we can fuse our Spider-SFO in Algorithm 1 with a Negative-Curvature-Search (NC-Search) iteration that solves the following task: given a point x, decide whether λ_min(∇²f(x)) ≥ −δ, or find a unit vector w such that wᵀ∇²f(x)w ≤ −δ/2 (for numerical reasons, one has to leave some room between the two bounds). For the on-line case, NC-Search can be efficiently solved by Oja's algorithm (Oja, 1982; Allen-Zhu, 2018) and also by Neon (Allen-Zhu & Li, 2018; Xu et al., 2017) using stochastic gradients. (Recall that the NEgative-curvature-Originated-from-Noise method, or Neon method for short, proposed independently by Allen-Zhu & Li (2018); Xu et al. (2017), is a generic procedure that converts an algorithm that finds approximate first-order stationary points into one that finds approximate second-order stationary points.) When such a w is found, one can set x_{k+1} = x_k ± (δ/ρ)w, where the sign ± is chosen uniformly at random. Then under Assumption 3, Taylor's expansion implies (Allen-Zhu & Li, 2018)

 f(x_{k+1}) ≤ f(x_k) + ⟨∇f(x_k), x_{k+1}−x_k⟩ + (1/2)(x_{k+1}−x_k)ᵀ∇²f(x_k)(x_{k+1}−x_k) + (ρ/6)∥x_{k+1}−x_k∥³. (3.11)

Taking expectation over the random sign, the first-order term vanishes, and one has E[f(x_{k+1})] − f(x_k) ≤ −δ³/(4ρ²) + δ³/(6ρ²) = −δ³/(12ρ²). This indicates that when we find a direction of negative curvature of the Hessian, updating along it decreases the function value by Ω(δ³ρ⁻²) in expectation. Our Spider-SFO algorithm fused with NC-Search is described in the following steps:


1. Run an efficient NC-Search iteration, e.g. Neon2 (Allen-Zhu & Li, 2018), to find an approximate negative-curvature direction w using stochastic gradients only.

2. If NC-Search finds such a w, update x along ±w in many equal-length mini-steps, and simultaneously use Spider to maintain an estimate of the gradient. Then go to Step 1.

3. If not, run Spider-SFO directly for a batch of steps, reusing the Spider estimate maintained in Step 2 (without restart). Then go to Step 1.

4. During Step 3, if we find ∥v_k∥ ≤ ϵ̃, return x_k.
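The basic negative-curvature update x ± (δ/ρ)w used in Step 2 (before splitting it into mini-steps) can be sketched on an explicit saddle; δ, ρ and the test function are illustrative:

```python
import random

random.seed(0)

# At the saddle of f(x, y) = 0.5*x^2 - 0.5*y^2, the direction w = (0, 1)
# has curvature -1, i.e. strictly negative. Stepping x +/- (delta/rho)*w
# with a random sign decreases f in expectation; for this exact quadratic
# it decreases deterministically for either sign.
def f(p):
    return 0.5 * p[0] ** 2 - 0.5 * p[1] ** 2

p = (0.0, 0.0)                   # the saddle point
delta, rho = 0.5, 1.0            # illustrative curvature/Hessian-Lipschitz
w = (0.0, 1.0)                   # unit negative-curvature direction
s = random.choice([-1.0, 1.0])   # random sign
step = delta / rho
p_new = (p[0] + s * step * w[0], p[1] + s * step * w[1])

print(f(p_new) < f(p))           # function value strictly decreases
```

Splitting this single displacement into many short mini-steps, as Spider-SFO+ does, keeps each move small enough that the Spider gradient estimate remains accurate throughout.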

The formal pseudocode of the algorithm described above, which we refer to as Spider-SFO+, is detailed in Algorithm 2. (In our initial version, Spider-SFO+ first found an FSP and then ran the NC-Search iteration to find an SSP, which also ensures a competitive rate. The new Spider-SFO+ is easier to fuse with the momentum technique when δ is small; please see the discussion later.) The core reason that Spider-SFO+ enjoys a highly competitive convergence rate is that, instead of performing a single large step in the approximate direction of negative curvature as in Neon2 (Allen-Zhu & Li, 2018), we split this one large step into many small, equal-length mini-steps in Step 2, where each mini-step moves the iterate by only a short distance. This allows the algorithm to successively maintain the Spider estimate of the current gradient in Step 3 and to avoid re-computing the gradient in Step 1.

Our final result on the convergence rate of Algorithm 2 is stated as:

###### Theorem 6 (Second-Order Stationary Point).

Let Assumption 3 hold. For the on-line case, set the parameters as in (3.4), with any choice of n0. Then with probability at least 1 − γ (by verifying and restarting Algorithm 2 at most logarithmically many times, one can also obtain a high-probability result), Algorithm 2 outputs an x_k satisfying

 ∥∇f(xk)∥≤~ϵandλmin(∇2f(xk))≥−3δ, (3.12)

The gradient cost to find a second-order stationary point satisfying (3.12) with probability at least 1 − γ is upper bounded by

 ~O(ΔLσ/ϵ³ + ΔσLρ/(ϵ²δ²) + ΔL²ρ²/δ⁵ + ΔL²ρ/(ϵδ³) + σ²/ϵ² + L²/δ² + Lσδ/(ρϵ²)).

Analogously, for the finite-sum case, under the same setting as Theorem 2, set the parameters as in (3.7); then with probability at least 1 − γ, Algorithm 2 outputs an x_k satisfying (3.12), with a gradient cost of

 ~O(ΔLn^{1/2}/ϵ² + ΔρLn^{1/2}/(ϵδ²) + ΔL²ρ²/δ⁵ + ΔL²ρ/(ϵδ³) + n + L²/δ² + Ln^{1/2}δ/(ρϵ)).
###### Corollary 7.

Treating Δ, L, σ, and ρ as positive constants, with high probability the gradient cost for finding an (ϵ, δ)-approximate second-order stationary point is Õ(ϵ⁻³ + ϵ⁻²δ⁻² + ϵ⁻¹δ⁻³ + δ⁻⁵) for the on-line case and Õ(n^{1/2}ϵ⁻² + n^{1/2}ϵ⁻¹δ⁻² + ϵ⁻¹δ⁻³ + δ⁻⁵ + n) for the finite-sum case, respectively. When δ = O(ϵ^{1/2}), the on-line gradient cost is Õ(ϵ⁻³).

Notice that one may instead directly apply an on-line variant of the Neon method to the Spider-SFO Algorithm 1, alternating between Second-Order Descent (but not maintaining Spider) and First-Order Descent (running a fresh Spider-SFO). Simple analysis suggests that this Neon+Spider-SFO algorithm incurs a strictly larger gradient cost in both the on-line and finite-sum cases (Allen-Zhu & Li, 2018; Xu et al., 2017). We discuss the differences in detail.

• The dominant term in the gradient cost of Neon+Spider-SFO is the so-called coupling term in the regime of interest, for both the on-line and finite-sum cases. Due to this term, most convergence rate results in concurrent works for the on-line case, such as Reddi et al. (2018); Tripuraneni et al. (2018); Xu et al. (2017); Allen-Zhu & Li (2018); Zhou et al. (2018a), have gradient costs that cannot break the Õ(ϵ^{-3.5}) barrier when δ is chosen to be O(ϵ^{1/2}). Observe that one always needs to run a new Spider-SFO, which costs at least S1 = O(σ²ϵ⁻²) stochastic gradient accesses.

• Our analysis sharpens this seemingly non-improvable coupling term by replacing the single large Neon step with many mini-steps. This modification enables us to maintain the Spider estimates and yields a strictly smaller coupling term for Spider-SFO+ than the Neon coupling term.

• For the finite-sum case, Spider-SFO+ enjoys a convergence rate that is faster than existing methods only in the regime of relatively large n [Table 1]. When n is small, using Spider to track the gradient in the Neon procedure can be more costly than applying appropriate acceleration techniques (Agarwal et al., 2017; Carmon et al., 2016). (Spider-SFO+ enjoys a faster rate than Neon+Spider-SFO when computing the "full" gradient dominates the gradient cost, namely the S1 samples in the on-line case and the n component gradients in the finite-sum case.) Because it is well known that the momentum technique (Nesterov, 1983) provably ensures faster convergence rates when ϵ is sufficiently small (Shalev-Shwartz & Zhang, 2016), one can also apply momentum to solve the sub-problems in Steps 1 and 3, as in Carmon et al. (2016); Allen-Zhu & Li (2018), when n is small, and thus achieve the state-of-the-art gradient cost of

 ~O(min(nϵ−1.5+n3/4ϵ−1.75,n1/2ϵ−2+n1/2ϵ−1δ−2)+min(n+n3/4δ−0.5,δ−2)δ−3),

in all scenarios.