 # Gradient Descent Converges to Minimizers

We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.


## 1 Introduction

Saddle points have long been regarded as a tremendous obstacle for continuous optimization. There are many well-known examples where worst-case initialization of gradient descent provably converges to saddle points [20, Section 1.2.3], and hardness results show that finding even a local minimizer of a non-convex function is NP-hard in the worst case [19]. However, such worst-case analyses have not daunted practitioners, and high-quality solutions of continuous optimization problems are readily found by a variety of simple algorithms. Building on tools from the theory of dynamical systems, this paper demonstrates that, under very mild regularity conditions, saddle points are indeed of little concern for the gradient method.

More precisely, let f: ℝⁿ → ℝ be twice continuously differentiable, and consider the classic gradient method with constant step size α:

 x_{k+1} = x_k − α∇f(x_k). (1)

We call x* a critical point of f if ∇f(x*) = 0, and say that f satisfies the strict saddle property if each critical point of f is either a local minimizer or a “strict saddle”, i.e., a critical point at which ∇²f has at least one strictly negative eigenvalue. We prove:

If f is twice continuously differentiable and satisfies the strict saddle property, then gradient descent (Equation 1) with a random initialization and sufficiently small constant step size converges to a local minimizer or negative infinity almost surely.

Here, by sufficiently small, we simply mean less than the inverse of the Lipschitz constant L of the gradient. As we discuss below, such step sizes are standard for the gradient method. We remark that the strict saddle assumption is necessary in the worst case, due to hardness results regarding testing the local optimality of functions whose Hessians are highly degenerate at critical points (e.g., quartic polynomials) [19].
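As a concrete illustration (ours, not the paper's), the iteration in Equation 1 with a step size below 1/L can be sketched as follows; the quadratic objective and step-size choice here are hypothetical examples.

```python
import numpy as np

def gradient_descent(grad, x0, alpha, iters=1000):
    """Run x_{k+1} = x_k - alpha * grad(x_k) for a fixed number of steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# Example: f(x) = 0.5 * x^T A x with A positive definite, minimizer at 0.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
grad_f = lambda x: A @ x
L = np.linalg.eigvalsh(A).max()        # Lipschitz constant of the gradient
x_final = gradient_descent(grad_f, [1.0, -1.0], alpha=0.9 / L)
# With alpha < 1/L every coordinate contracts, so the iterates converge.
```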

### 1.1 Related work

Prior work has shown that first-order descent methods can circumvent strict saddle points, provided that they are augmented with unbiased noise whose variance is sufficiently large along each direction. For example, [23] establishes convergence of the Robbins-Monro stochastic approximation to local minimizers for strict saddle functions. More recently, [13] gives quantitative rates on the convergence of noise-added stochastic gradient descent to local minimizers, for strict saddle functions. The condition that the noise have large variance along all directions is often not satisfied by the randomness which arises in sample-wise or coordinate-wise stochastic updates. In fact, it generally requires that additional, near-isotropic noise be added at each iteration, which yields convergence rates that depend heavily on problem parameters like dimension. In contrast, our results hold for the simplest implementation of gradient descent and thus do not suffer from the slow convergence associated with adding high-variance noise to each iterate.

But is this strict saddle property reasonable? Many works have answered in the affirmative by demonstrating that many objectives of interest do in fact satisfy the “strict saddle” property: PCA, a fourth-order tensor factorization [13], formulations of dictionary learning [27, 26], and phase retrieval [28].

To obtain provable guarantees, the authors of [27, 26] and [28] adopt trust-region methods which leverage Hessian information in order to circumvent saddle points. This approach joins a long line of related strategies, including a modified Newton’s method with curvilinear line search [18], the modified Cholesky method [14], trust-region methods [11], and the related cubic regularized Newton’s method [21], to name a few. Specialized to deep learning applications, [12, 22] have introduced a saddle-free Newton method.

Unfortunately, such curvature-based optimization algorithms have a per-iteration computational complexity which scales quadratically or even cubically in the dimension n, rendering them unsuitable for optimization of high-dimensional functions. In contrast, the complexity of an iteration of gradient descent is linear in the dimension. We also remark that the authors of [28] empirically observe that gradient descent with random initialization on the phase retrieval problem reliably converges to a local minimizer, one whose quality matches that of the solution found using more costly trust-region techniques.

More broadly, many recent works have shown that gradient descent plus smart initialization provably converges to the global minimum for a variety of non-convex problems: such settings include matrix factorization [16, 30], phase retrieval [9, 8], dictionary learning [3], and latent-variable models [29]. While our results only guarantee convergence to local minimizers, they eschew the need for complex and often computationally prohibitive initialization procedures.

Finally, some preliminary results have shown that there are settings in which an algorithm that converges to a saddle point necessarily has a small objective value. For example, [10] studies the loss surface of a particular Gaussian random field as a proxy for understanding the objective landscape of deep neural nets. The results leverage the Kac-Rice Theorem [2, 6], and establish that critical points with more positive eigenvalues have lower expected function value, often close to that of the global minimizer. We remark that functions drawn from this Gaussian random field model share the strict saddle property defined above, and so our results apply in this setting. On the other hand, our results are considerably more general, as they do not place stringent generative assumptions on the objective function.

### 1.2 Organization

The rest of the paper is organized as follows. Section 2 introduces the notation and definitions used throughout the paper. Section 3 provides an intuitive explanation for why it is unlikely that gradient descent converges to a saddle point, by studying a non-convex quadratic and emphasizing the analogy with power iteration. Section 4 states our main results, which guarantee that gradient descent converges only to local minimizers, and also establishes rates of convergence depending on the local geometry of the minimizer. The primary tool we use is the local stable manifold theorem, accompanied by inversion of gradient descent via the proximal point algorithm. Finally, we conclude in Section 5 by suggesting several directions of future work.

## 2 Preliminaries

Throughout the paper, f denotes a real-valued function in C²(ℝⁿ), the space of twice continuously differentiable functions, and g denotes the corresponding gradient map with step size α,

 g(x) = x − α∇f(x). (2)

The Jacobian of g is given by Dg(x) = I − α∇²f(x). In addition to being C², our main regularity assumption on f is that it has an L-Lipschitz gradient: ∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥ for all x, y.

The k-fold composition gᵏ of the gradient map corresponds to performing k steps of gradient descent initialized at x₀. The iterates of gradient descent will be denoted x_k = gᵏ(x₀). All probability statements are with respect to the distribution of x₀, which we assume is absolutely continuous with respect to Lebesgue measure.

A fixed point of the gradient map g is a critical point of the function f. Critical points can be saddle points, local minima, or local maxima. In this paper, we will study the critical points of f via the fixed points of g, and then apply dynamical systems theory to g.

###### Definition 2.1.
1. A point x* is a critical point of f if it is a fixed point of the gradient map, g(x*) = x*, or equivalently ∇f(x*) = 0.

2. A critical point x* is isolated if there is a neighborhood U around x* in which x* is the only critical point.

3. A critical point x* is a local minimum if there is a neighborhood U around x* such that f(x*) ≤ f(x) for all x ∈ U, and a local maximum if f(x*) ≥ f(x) for all x ∈ U.

4. A critical point x* is a saddle point if for every neighborhood U around x*, there are points x, y ∈ U such that f(x) ≤ f(x*) ≤ f(y).

As mentioned in the introduction, we will be focused on saddle points that have directions of strictly negative curvature. This notion is made precise by the following definition.

###### Definition 2.2 (Strict Saddle).

A critical point x* of f is a strict saddle if λ_min(∇²f(x*)) < 0.

Since we are interested in the attraction region of a critical point, we define the stable set.

###### Definition 2.3 (Global Stable Set).

The global stable set W^s(x*) of a critical point x* is the set of initial conditions of gradient descent that converge to x*:

 W^s(x*) = {x : lim_k gᵏ(x) = x*}.

## 3 Intuition

To illustrate why gradient descent does not converge to saddle points, consider the case of a non-convex quadratic, f(x) = ½xᵀHx. Without loss of generality, assume H = diag(λ₁, …, λₙ) with λ₁, …, λ_j > 0 and λ_{j+1}, …, λₙ < 0. The origin x* = 0 is the unique critical point of this function, and the Hessian at x* is H. Gradient descent initialized from x₀ has iterates

 x_{k+1} = ∑_{i=1}^{n} (1 − αλᵢ)^{k+1} ⟨eᵢ, x₀⟩ eᵢ,

where e₁, …, eₙ denote the standard basis vectors. This iteration resembles power iteration with the matrix I − αH.

The gradient method is guaranteed to converge with a constant step size provided α < 2/L. For this quadratic f, L is equal to maxᵢ|λᵢ|. Suppose α < 1/maxᵢ|λᵢ|, a slightly stronger condition. Then we will have 0 < 1 − αλᵢ < 1 for i ≤ j and 1 − αλᵢ > 1 for i > j. If x₀ ∈ E_s := span(e₁, …, e_j), then x_k converges to the saddle point at 0, since each of its components shrinks geometrically. However, if x₀ has a component outside E_s, then gradient descent diverges to −∞. For this simple quadratic function, we see that the global stable set (attractive set) of 0 is the subspace E_s. Now, if we choose our initial point at random, the probability of that point landing in E_s is zero.
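This dichotomy can be checked numerically; the sketch below (ours, not the paper's) uses a hypothetical two-dimensional quadratic with λ₁ = 1 and λ₂ = −1, so E_s = span(e₁).

```python
import numpy as np

# f(x) = 0.5 * (lam[0]*x1**2 + lam[1]*x2**2), with a saddle at the origin.
lam = np.array([1.0, -1.0])
alpha = 0.5 / np.abs(lam).max()        # step size below 1/max|lambda_i|

def iterate(x0, k):
    """k steps of x <- x - alpha * H x with H = diag(lam), in closed form."""
    return (1.0 - alpha * lam) ** k * np.asarray(x0, dtype=float)

x_stable = iterate([1.0, 0.0], 200)    # initialized exactly in E_s = span(e1)
x_generic = iterate([1.0, 1e-6], 200)  # tiny component along e2

# The first sequence contracts to the saddle; the second escapes, because
# the factor 1 - alpha*lam[1] = 1.5 > 1 amplifies the e2 component.
```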

As an example of this phenomenon for a non-quadratic function, consider the following example from [20, Section 1.2.3]. Letting f(x, y) = ½x² + ¼y⁴ − ½y², the corresponding gradient mapping is

 g(x, y) = [(1 − α)x, (1 + α)y − αy³].

The critical points are

 z₁ = [0, 0], z₂ = [0, −1], z₃ = [0, 1].

The points z₂ and z₃ are isolated local minima, and z₁ is a saddle point.

Gradient descent initialized from any point of the form (x, 0) converges to the saddle point z₁. Any other initial point either diverges or converges to a local minimum, so the stable set of z₁ is the x-axis, which is a zero-measure set in ℝ². By computing the Hessian,

 ∇²f(x, y) = diag(1, 3y² − 1),

we find that ∇²f(z₁) has one positive eigenvalue, whose eigenvector spans the x-axis, agreeing with our characterization of the stable set above. If the initial point is chosen randomly, there is zero probability of initializing on the x-axis, and thus zero probability of converging to the saddle point z₁.

In the general case, the local stable set of a critical point x* is well-approximated by the span of the eigenvectors of ∇²f(x*) corresponding to positive eigenvalues. By an application of Taylor’s theorem, one can see that if the initial point x₀ is uniformly random in a small neighborhood around x*, then the probability of initializing in the span of these eigenvectors is zero whenever ∇²f(x*) has a negative eigenvalue. Thus, gradient descent initialized at x₀ will leave the neighborhood. The primary difficulty is that x₀ is randomly distributed over the entire domain, not a small neighborhood around x*, and Taylor’s theorem does not provide any global guarantees.

However, the global stable set can be found by inverting the gradient map via g⁻¹. Indeed, the global stable set is precisely ⋃_{k≥0} g⁻ᵏ(W), where W is the local stable set. This follows because if a point converges to x*, then for some sufficiently large k it must enter the local stable set. That is, x₀ converges to x* if and only if gᵏ(x₀) ∈ W for all sufficiently large k. If W is of measure zero, then each g⁻ᵏ(W) is also of measure zero, and hence the global stable set is of measure zero. Thus, gradient descent will never converge to x* from a random initialization.

In Section 4, we formalize the above arguments by showing the existence of an inverse gradient map. The case of degenerate critical points, i.e., critical points with zero eigenvalues, is more delicate; the geometry of the global stable set is no longer characterized by only the number of positive eigenvalues. However, in Section 4 we show that if a critical point has at least one negative eigenvalue, then the global stable set is of measure zero.

## 4 Main Results

We now state and prove our main theorem, making our intuition rigorous.

###### Theorem 4.1.

Let f be a C² function and x* be a strict saddle. Assume 0 < α < 1/L. Then

 Pr(lim_k x_k = x*) = 0.

That is, the gradient method never converges to saddle points, provided the step size is not chosen aggressively. Greedy methods that use precise line search may still get stuck at stationary points. However, a short-step gradient method will only converge to minimizers.

###### Remark 4.2.

Note that even for convex functions, a constant step size slightly less than 1/L is a nearly optimal choice. Indeed, for a convex function f with L-Lipschitz gradient, running the gradient method with step size 1/L attains a convergence rate of O(L∥x₀ − x*∥²/k) in function value.

###### Remark 4.3.

When lim_k x_k does not exist, the above theorem is trivially true.

To prove Theorem 4.1, our primary tool will be the theory of invariant manifolds. Specifically, we will use the Stable-Center Manifold Theorem developed in [25, 24, 15], which allows for a local characterization of the stable set. Recall that a map g is a diffeomorphism if g is a bijection, and g and g⁻¹ are continuously differentiable.

###### Theorem 4.4 (Theorem III.7 of [24]).

Let 0 be a fixed point for the Cʳ local diffeomorphism φ: U → E, where U is a neighborhood of 0 in the Banach space E. Suppose that E = E_s ⊕ E_u, where E_s is the span of the eigenvectors of Dφ(0) corresponding to eigenvalues less than or equal to 1, and E_u is the span of the eigenvectors of Dφ(0) corresponding to eigenvalues greater than 1. Then there exists a Cʳ embedded disk W^{sc}_{loc} that is tangent to E_s at 0, called the local stable center manifold. Moreover, there exists a neighborhood B of 0 such that φ(W^{sc}_{loc}) ∩ B ⊂ W^{sc}_{loc}, and ⋂_{k=0}^∞ φ⁻ᵏ(B) ⊂ W^{sc}_{loc}.

To unpack this terminology: the stable manifold theorem says that if a map diffeomorphically deforms a neighborhood of a critical point, then there exists a local stable center manifold W^{sc}_{loc} containing the critical point. This manifold has dimension equal to the number of eigenvalues of the Jacobian at the critical point that are less than or equal to 1. Moreover, W^{sc}_{loc} contains all points that are locally forward non-escaping, meaning that, within a smaller neighborhood B, a point converges to the critical point under iteration only if it lies in W^{sc}_{loc}.

Relating this back to the gradient method, replace φ with our gradient map g and let x* be a strict saddle point. We first record a very useful fact:

###### Proposition 4.5.

The gradient mapping g with step size α < 1/L is a diffeomorphism.

We will prove this proposition below. But let us first continue to apply the stable manifold theorem. Note that Dg(x*) = I − α∇²f(x*), so an eigenvalue of Dg(x*) is at most 1 exactly when the corresponding eigenvalue of ∇²f(x*) is non-negative. Thus, the set W^{sc}_{loc} is a manifold of dimension equal to the number of non-negative eigenvalues of ∇²f(x*). Note that by the strict saddle assumption, this manifold has strictly positive codimension and hence has measure zero.
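As a concrete check (our sketch), consider the saddle z₁ = (0, 0) of the example in Section 3, where the Hessian is diag(1, −1); the dimension of E_s computed from Dg matches the one-dimensional stable set (the x-axis) found there. The step size α = 0.1 is a hypothetical choice.

```python
import numpy as np

alpha = 0.1
hess = np.array([[1.0, 0.0], [0.0, -1.0]])   # Hessian of f at the saddle (0, 0)
Dg = np.eye(2) - alpha * hess                # Jacobian of the gradient map

eigvals = np.linalg.eigvalsh(Dg)             # here: 0.9 and 1.1
stable_dim = int(np.sum(eigvals <= 1.0))     # dimension of E_s
codim = Dg.shape[0] - stable_dim             # strictly positive for a strict saddle
```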

Let B be the neighborhood of x* promised by the Stable Manifold Theorem. If x₀ converges to x* under the gradient map, then there exists an l such that gᵏ(x₀) ∈ B for all k ≥ l. This means that gˡ(x₀) ∈ ⋂_{k=0}^∞ g⁻ᵏ(B), and hence x₀ ∈ g⁻ˡ(⋂_{k=0}^∞ g⁻ᵏ(B)). That is, we have shown that

 W^s(x*) ⊆ ⋃_{l≥0} g⁻ˡ(⋂_{k=0}^∞ g⁻ᵏ(B)).

Since diffeomorphisms map sets of measure zero to sets of measure zero, and countable unions of measure-zero sets have measure zero, we conclude that W^s(x*) has measure zero. This proves Theorem 4.1.

### 4.1 Proof of Proposition 4.5

We first check that g is injective for α < 1/L. Suppose there exist points x and y such that g(x) = g(y). Then x − α∇f(x) = y − α∇f(y), and hence

 ∥x − y∥ = α∥∇f(x) − ∇f(y)∥ ≤ αL∥x − y∥.

Since αL < 1, this means x = y.

To show the gradient map is surjective, we construct an explicit inverse function. The inverse of the gradient mapping is given by performing the proximal point algorithm on the function −f. The proximal point mapping of −f centered at y is given by

 x_y = argmin_x ½∥x − y∥² − αf(x).

For α < 1/L, the function above is strongly convex in x, so there is a unique minimizer. Letting x_y be that minimizer, the first-order optimality condition gives

 y = x_y − α∇f(x_y) = g(x_y).

Hence x_y is mapped to y by the gradient map.

We have already shown that g is a bijection and continuously differentiable. Since Dg(x) = I − α∇²f(x) is invertible for α < 1/L, the inverse function theorem guarantees that g⁻¹ is continuously differentiable, completing the proof that g is a diffeomorphism.
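The inverse promised by this proof can be sketched numerically. Our illustration below uses a hypothetical one-dimensional f(x) = cos(x) (so L = 1); the strongly convex proximal subproblem is solved here by Newton's method on its optimality condition g(x) = y, whose derivative 1 + α·cos(x) is bounded below by 1 − αL > 0.

```python
import math

alpha = 0.5                              # alpha < 1/L for f(x) = cos(x), L = 1
grad_f = lambda x: -math.sin(x)
g = lambda x: x - alpha * grad_f(x)      # gradient map: g(x) = x + 0.5*sin(x)

def g_inverse(y, iters=20):
    """Invert g: the minimizer of 0.5*(x - y)**2 - alpha*cos(x) satisfies
    g(x) = y, which we solve by Newton's method starting from y."""
    x = y
    for _ in range(iters):
        x -= (g(x) - y) / (1.0 + alpha * math.cos(x))
    return x

x0 = 0.7
roundtrip = g_inverse(g(x0))             # recovers x0
```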

### 4.2 Further consequences of Theorem 4.1

###### Corollary 4.6.

Let C be the set of saddle points, and assume they are all strict. If C has at most countably infinite cardinality, then

 Pr(lim_k x_k ∈ C) = 0.
###### Proof.

By applying Theorem 4.1 to each point x* ∈ C, we have that Pr(lim_k x_k = x*) = 0. Since the critical points in C are countable, the conclusion follows because a countable union of null sets is a null set. ∎

###### Remark 4.7.

If the saddle points are isolated points, then the set of saddle points is at most countably infinite.

###### Theorem 4.8.

Assume the same conditions as in Corollary 4.6 and that lim_k x_k exists. Then lim_k x_k = x*, where x* is a local minimizer.

###### Proof.

Using the previous corollary, Pr(lim_k x_k ∈ C) = 0. Since lim_k x_k exists and there is zero probability of converging to a saddle, lim_k x_k = x* almost surely, where x* is a local minimizer. ∎

We now discuss two sufficient conditions for lim_k x_k to exist. The following proposition prevents x_k from escaping to −∞, by enforcing that f has compact sublevel sets {x : f(x) ≤ c}. This is true for any coercive function, i.e., one with f(x) → ∞ as ∥x∥ → ∞, which holds in most machine learning applications since f is usually a loss function.

###### Proposition 4.9 (Proposition 12.4.4 of [17]).

Assume that f is continuously differentiable with isolated critical points and compact sublevel sets. Then lim_k x_k exists, and the limit is a critical point of f.

The second sufficient condition for lim_k x_k to exist is based on the Lojasiewicz gradient inequality, which characterizes the steepness of the gradient near a critical point. The Lojasiewicz inequality ensures that the length traveled by the iterates of gradient descent is finite. This will also allow us to derive rates of convergence to a local minimum.

###### Definition 4.10 (Lojasiewicz Gradient Inequality).

A critical point x* satisfies the Lojasiewicz gradient inequality if there exist a neighborhood U, m > 0, and a ∈ [1/2, 1) such that

 ∥∇f(x)∥ ≥ m·|f(x) − f(x*)|^a (3)

for all x ∈ U.

The Lojasiewicz inequality is very general, as discussed in [7, 4, 5]. In fact, every analytic function satisfies the Lojasiewicz inequality. Also, if f is μ-strongly convex in a neighborhood of the solution, then the Lojasiewicz inequality is satisfied with parameters a = 1/2 and m = √(2μ).
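As a quick numerical sanity check (ours, not the paper's): for the hypothetical example f(x) = ½μ∥x∥² with minimizer x* = 0, inequality (3) with a = 1/2 and m = √(2μ) holds with equality, since ∥∇f(x)∥ = μ∥x∥ and √(2μ)·(½μ∥x∥²)^{1/2} = μ∥x∥.

```python
import numpy as np

mu = 3.0
m, a = np.sqrt(2 * mu), 0.5                # claimed Lojasiewicz parameters

rng = np.random.default_rng(0)
xs = rng.standard_normal((100, 4))         # random test points
f_gap = 0.5 * mu * np.sum(xs**2, axis=1)   # f(x) - f(x*) for f = 0.5*mu*||x||^2
grad_norm = mu * np.linalg.norm(xs, axis=1)

# For this f the inequality ||grad f(x)|| >= m*|f(x) - f(x*)|^a is tight,
# so the violation should be zero up to floating-point error.
max_violation = np.max(m * f_gap**a - grad_norm)
```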

###### Proposition 4.11.

Assume the same conditions as in Corollary 4.6, and that the iterates do not escape to −∞, i.e., (x_k) is a bounded sequence. Then lim_k x_k exists and x_k → x* for a local minimum x*.

Furthermore, if x* satisfies the Lojasiewicz gradient inequality (3), then for some constant C independent of k and some b ∈ (0, 1): for a = 1/2,

 ∥x_k − x*∥ ≤ C·bᵏ,

and for a ∈ (1/2, 1),

 ∥x_k − x*∥ ≤ C·k^{−(1−a)/(2a−1)}.
###### Proof.

The first part of the theorem follows from [1], which shows that lim_k x_k exists. By Theorem 4.8, the limit x* is a local minimizer. Without loss of generality, we may assume f(x*) = 0 by shifting the function.

The results of [1] also establish

 ∑_{j=k}^∞ ∥x_{j+1} − x_j∥ ≤ (2/(αm(1−a))) · f(x_k)^{1−a}.

Define e_k := ∑_{j=k}^∞ ∥x_{j+1} − x_j∥; since ∥x_k − x*∥ ≤ e_k, it suffices to upper bound e_k.

Since we have established that f(x_k) converges to f(x*) = 0, for k large enough we can use the gradient inequality (3) together with the identity e_k − e_{k+1} = ∥x_{k+1} − x_k∥ = α∥∇f(x_k)∥:

 e_k ≤ (2/(αm(1−a))) · f(x_k)^{1−a} ≤ (2/(αm^{1/a}(1−a))) · ∥∇f(x_k)∥^{(1−a)/a} ≤ (2/((1−a)(mα)^{1/a})) · (e_k − e_{k+1})^{(1−a)/a}.

Define d := a/(1−a) and β := 2/((1−a)(mα)^{1/a}), so the display above reads e_k ≤ β(e_k − e_{k+1})^{1/d}. Rearranging,

 e_{k+1} ≤ e_k − (e_k/β)^d.

First consider the case a = 1/2, so that d = 1. Then e_{k+1} ≤ (1 − 1/β)·e_k, which gives the geometric rate e_k ≤ C·bᵏ with b = 1 − 1/β.

For a ∈ (1/2, 1), we have established e_{k+1} ≤ e_k − (e_k/β)^d with d = a/(1−a) > 1. We show by induction that e_k ≤ C·k^{−(1−a)/(2a−1)} for a sufficiently large constant C. Since d·(1−a)/(2a−1) = a/(2a−1), the inductive hypothesis gives

 e_{k+1} ≤ C·k^{−(1−a)/(2a−1)} − (C/β)^d·k^{−a/(2a−1)} = C·k^{−(1−a)/(2a−1)} · (1 − (C^{d−1}/β^d)·k^{−1}),

and for C chosen large enough the right-hand side is at most C·(k+1)^{−(1−a)/(2a−1)}, completing the induction. Combined with ∥x_k − x*∥ ≤ e_k, this yields the stated rate. ∎
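To illustrate the sublinear regime with a concrete example of our own: f(x) = x⁴ satisfies inequality (3) with m = 4 and a = 3/4, since ∥∇f(x)∥ = 4|x|³ = 4·(x⁴)^{3/4}. The predicted rate is then ∥x_k − x*∥ ≲ k^{−(1−a)/(2a−1)} = k^{−1/2}, which the sketch below checks numerically (the step size 0.05 is a hypothetical choice).

```python
import numpy as np

alpha = 0.05
x = 1.0
norms = []
for k in range(1, 100001):
    x = x - alpha * 4 * x**3           # gradient step on f(x) = x**4
    norms.append(abs(x))

# Heuristically 1/x_k^2 grows by about 8*alpha = 0.4 per step, so
# |x_k| ~ (0.4*k)**-0.5; |x_k| * sqrt(k) should level off near 1.58.
scaled_tail = np.array(norms[-1000:]) * np.sqrt(np.arange(99001, 100001))
```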

## 5 Conclusion

We have shown that gradient descent with random initialization and appropriate constant step size does not converge to a saddle point. Our analysis relies on a characterization of the local stable set from the theory of invariant manifolds. The geometric characterization is not specific to the gradient descent algorithm. To use Theorem 4.1, we simply need the update step of the algorithm to be a diffeomorphism. For example, if g is the mapping induced by the proximal point algorithm, then g is a diffeomorphism with inverse given by gradient ascent on −f. Thus the results in Section 4 also apply to the proximal point algorithm. That is, the proximal point algorithm does not converge to saddles. We expect that similar arguments can be used to show that ADMM, mirror descent, and coordinate descent do not converge to saddle points under appropriate choices of step size. Indeed, convergence to minimizers has been empirically observed for the ADMM algorithm.

It is not clear if the step size restriction (α < 1/L) is necessary to avoid saddle points. Most of the constructions where the gradient method converges to saddle points require fragile initial conditions, as discussed in Section 3. It remains a possibility that methods that choose step sizes greedily, by Wolfe line search or backtracking, may still avoid saddle points provided the initial point is chosen at random. We leave such investigations for future work.

Another important piece of future work would be relaxing the conditions on isolated saddle points. It is possible that for the structured problems that arise in machine learning, whether in matrix factorization or convolutional neural networks, that saddle points are isolated after taking a quotient with respect to the associated symmetry group of the problem. Techniques from dynamical systems on manifolds may be applicable to understand the behavior of optimization algorithms on problems with a high degree of symmetry.

It is also important to understand how stringent the strict saddle assumption is. Will a perturbation of a function always satisfy the strict saddle property? Adler and Taylor [2] provide very general sufficient conditions for a random function to be Morse, meaning the eigenvalues of the Hessian at critical points are non-zero, which implies the strict saddle condition. These conditions amount to checking that the density of ∇²f(x) has full support conditioned on the event ∇f(x) = 0. This can be explicitly verified for functions that arise from learning problems.

However, we note that there are very difficult unconstrained optimization problems where the strict saddle condition fails. Perhaps the simplest is optimization of quartic polynomials. Indeed, checking whether x = 0 is a local minimizer of the quartic

 f(x) = ∑_{i,j=1}^{n} q_{ij} x_i² x_j²

is equivalent to checking whether the matrix Q = [q_{ij}] is copositive, a co-NP complete problem. For this f, the Hessian at 0 is the zero matrix, so 0 is not a strict saddle. Interestingly, the failure of the strict saddle property is analogous in dynamical systems to the existence of a slow manifold, where complex dynamics may emerge. Slow manifolds give rise to metastability, bifurcation, and other chaotic dynamics, and it would be intriguing to see how the analysis of chaotic systems could be applied to understand the behavior of optimization algorithms around these difficult critical points.

## Acknowledgements

The authors would like to thank Chi Jin, Tengyu Ma, Robert Nishihara, Mahdi Soltanolkotabi, Yuekai Sun, Jonathan Taylor, and Yuchen Zhang for their insightful feedback. MS is generously supported by an NSF Graduate Research Fellowship. BR is generously supported by ONR awards N00014-14-1-0024, N00014-15-1-2620, and N00014-13-1-0129, and NSF awards CCF-1148243 and CCF-1217058. MIJ is generously supported by ONR award N00014-11-1-0688 and by the ARL and the ARO under grant number W911NF-11-1-0391. This research is supported in part by NSF CISE Expeditions Award CCF-1139158, DOE Award SN10040 DE-SC0012463, and DARPA XData Award FA8750-12-2-0331, and gifts from Amazon Web Services, Google, IBM, SAP, The Thomas and Stacey Siebel Foundation, Adatao, Adobe, Apple Inc., Blue Goji, Bosch, Cisco, Cray, Cloudera, Ericsson, Facebook, Fujitsu, Guavus, HP, Huawei, Intel, Microsoft, Pivotal, Samsung, Schlumberger, Splunk, State Farm, Virdata and VMware.

## References

•  Pierre-Antoine Absil, Robert Mahony, and Benjamin Andrews. Convergence of the iterates of descent methods for analytic cost functions. SIAM Journal on Optimization, 16(2):531–547, 2005.
•  Robert J Adler and Jonathan E Taylor. Random fields and geometry. Springer Science & Business Media, 2009.
•  Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. In Proceedings of The 28th Conference on Learning Theory, pages 113–149, 2015.
•  Hédy Attouch, Jérôme Bolte, Patrick Redont, and Antoine Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Lojasiewicz inequality. Mathematics of Operations Research, 35(2):438–457, 2010.
•  Hedy Attouch, Jérôme Bolte, and Benar Fux Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Mathematical Programming, 137(1-2):91–129, 2013.
•  Antonio Auffinger, Gérard Ben Arous, and Jiří Černỳ. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66(2):165–201, 2013.
•  Jérôme Bolte, Aris Daniilidis, Olivier Ley, Laurent Mazet, et al. Characterizations of Lojasiewicz inequalities: subgradient flows, talweg, convexity. Trans. Amer. Math. Soc, 362(6):3319–3363, 2010.
•  T Tony Cai, Xiaodong Li, and Zongming Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. arXiv preprint arXiv:1506.03382, 2015.
•  Emmanuel J Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
•  Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv:1412.0233, 2014.
•  Andrew R Conn, Nicholas IM Gould, and Ph L Toint. Trust region methods, volume 1. SIAM, 2000.
•  Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pages 2933–2941, 2014.
•  Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. arXiv:1503.02101, 2015.
•  Philip E Gill and Walter Murray. Newton-type methods for unconstrained and linearly constrained optimization. Mathematical Programming, 7(1):311–350, 1974.
•  M.W. Hirsch, C.C. Pugh, and M. Shub. Invariant Manifolds. Number no. 583 in Lecture Notes in Mathematics. Springer-Verlag, 1977.
•  Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2009.
•  Kenneth Lange. Optimization. Springer Texts in Statistics. Springer, 2013.
•  Jorge J Moré and Danny C Sorensen. On the use of directions of negative curvature in a modified Newton method. Mathematical Programming, 16(1):1–20, 1979.
•  Katta G Murty and Santosh N Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical programming, 39(2):117–129, 1987.
•  Yurii Nesterov. Introductory lectures on convex optimization, volume 87. Springer Science & Business Media, 2004.
•  Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
•  Razvan Pascanu, Yann N Dauphin, Surya Ganguli, and Yoshua Bengio. On the saddle point problem for non-convex optimization. arXiv:1405.4604, 2014.
•  Robin Pemantle. Nonconvergence to unstable points in urn models and stochastic approximations. The Annals of Probability, pages 698–712, 1990.
•  Michael Shub. Global stability of dynamical systems. Springer Science & Business Media, 1987.
•  Stephen Smale. Differentiable dynamical systems. Bulletin of the American mathematical Society, 73(6):747–817, 1967.
•  Ju Sun, Qing Qu, and John Wright. Complete dictionary recovery over the sphere I: Overview and the geometric picture. arXiv:1511.03607, 2015.
•  Ju Sun, Qing Qu, and John Wright. Complete dictionary recovery over the sphere II: Recovery by Riemannian trust-region method. arXiv:1511.04777, 2015.
•  Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. Forthcoming, 2016.
•  Yuchen Zhang, Xi Chen, Denny Zhou, and Michael I Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In Advances in neural information processing systems, pages 1260–1268, 2014.
•  Tuo Zhao, Zhaoran Wang, and Han Liu. Nonconvex low rank matrix factorization via inexact first order oracle.