First-order Methods Almost Always Avoid Saddle Points

10/20/2017 ∙ by Jason D. Lee, et al. ∙ MIT, University of Southern California, Singapore University of Technology and Design, UC Berkeley

We establish that first-order methods avoid saddle points for almost all initializations. Our results apply to a wide variety of first-order methods, including gradient descent, block coordinate descent, mirror descent and variants thereof. The connecting thread is that such algorithms can be studied from a dynamical systems perspective in which appropriate instantiations of the Stable Manifold Theorem allow for a global stability analysis. Thus, neither access to second-order derivative information nor randomness beyond initialization is necessary to provably avoid saddle points.


1 Introduction

Saddle points have long been regarded as a major obstacle for non-convex optimization over continuous spaces. It is well understood that in many applications of interest, saddle points significantly outnumber local minima, which is especially problematic when the solutions associated with worst-case saddle points are considerably worse than those associated with worst-case local minima [14, 34, 12]. Moreover, it is not hard to construct examples where a worst-case initialization of gradient descent (or another first-order method) provably converges to a saddle point [30, Section 1.2.3].

The main message of our paper is that, under very mild regularity conditions, saddle points have little effect on the asymptotic behavior of first-order methods. Building on tools from the theory of dynamical systems, we generalize recent analysis of gradient descent [24, 33] to establish that a wide variety of first-order methods — including gradient descent, proximal point algorithm, block coordinate descent, mirror descent — avoid so-called “strict” saddle points for almost all initializations; that is, saddle points where the Hessian of the objective function admits at least one direction of negative curvature (see Definition 1).

Our results provide a unified theoretical framework for analyzing the asymptotic behavior of a wide variety of classic optimization heuristics in non-convex optimization. Furthermore, we believe that a deeper understanding of the behavior and geometry of deterministic optimization techniques with random initialization can aid the development of stochastic algorithms that improve upon their deterministic counterparts and achieve strong convergence-rate results; indeed, such insights have already led to significant improvements in modifying gradient descent to navigate saddle-point geometry [15, 21].

1.1 Related work

In recent years, the optimization and machine learning communities have dedicated much effort to understanding the geometry of non-convex landscapes by searching for unified geometric properties that could be leveraged by general-purpose optimization techniques. The strict saddle property (Definition 1) is one such property, which has been shown to hold in a wide and diverse range of salient objective functions: PCA, a fourth-order tensor factorization [17], formulations of dictionary learning [45, 44], phase retrieval [43], low-rank matrix factorizations [19, 18, 8], and simple neural networks [41, 16, 9]. It is also known that, in the worst case, the strict saddle property is unavoidable, as finding descent directions at critical points with degenerate Hessians is NP-hard in general [29].

Earlier work had shown that first-order descent methods can circumvent strict saddle points, provided that they are augmented with unbiased noise whose variance is sufficiently large in each direction. For example, [35] establishes convergence of the Robbins-Monro stochastic approximation to local minimizers for strict saddle functions. More recently, [17] give quantitative rates on the convergence of noisy gradient descent to local minimizers for strict saddle functions.

To obtain provable guarantees without the addition of stochastic noise, [45, 44] and [43] adopt trust-region methods which leverage Hessian information in order to circumvent saddle points. This approach represents a refinement of a long tradition of related, “second-order” strategies, including a modified Newton’s method with curvilinear line search [28], the modified Cholesky method [20], trust-region methods [13], and the related cubic regularized Newton’s method [31], to name a few. Specialized to deep learning applications, [14, 34] have introduced a saddle-free Newton method.

However, such curvature-based optimization algorithms have a per-iteration computational complexity that scales quadratically or even cubically in the dimension, rendering them unsuitable for optimization of high-dimensional functions. More recently, several works have presented faster curvature-based methods [39, 26, 36] that combine fast first-order methods with fast eigenvector algorithms to obtain lower per-iteration complexity.

Fortunately, it appears that neither the addition of isotropic noise nor the use of second-order methods is necessary for circumventing saddle points. For example, recent work by [21] showed that carefully perturbing the iterates of gradient descent in the vicinity of possible saddles yields a first-order method which converges to local minimizers in a number of iterations with only poly-logarithmic dimension dependence. Moreover, many recent works have shown that, even without any random perturbations, a combination of gradient descent and smart initialization provably converges to the global minimum for a variety of non-convex problems: such settings include matrix factorization [22, 47], phase retrieval [11, 10], dictionary learning [5], and latent-variable models [46, 7]. While our results only guarantee convergence to local minimizers, they eschew the need for complex and often computationally prohibitive initialization procedures.

In addition to what has been established theoretically, there is broadly-accepted folklore in the field that running gradient descent with a random initialization is sufficient to identify a local optimum. For example, the authors of [43] empirically observe that gradient descent with random initialization on the phase retrieval problem always converges to a local minimizer, one whose quality matches that of the solution found using more costly trust-region techniques. It is the purpose of this work to place these intuitions on firm mathematical footing.

Finally, we emphasize that there are many settings in which all local optima (but not saddles!) have objective values which are nearly as small as those of the global minima; see for example [19, 18, 41, 42, 44]. Some preliminary results have suggested that this may be a quite general phenomenon. For example, [12] study the loss surface of a particular Gaussian random field as a proxy for understanding the objective landscape of deep neural nets. The results leverage the Kac-Rice Theorem [4, 6], and establish that critical points with more positive eigenvalues have lower expected function value, often close to that of the global minimizer. We remark that functions drawn from this Gaussian random field model share the strict saddle property defined above, and so our results apply in this setting. On the other hand, our results are considerably more general, as they do not place stringent generative assumptions on the objective function.

1.2 Organization

The rest of the paper is organized as follows. Section 2 introduces the notation and definitions used throughout the paper. Section 3 provides an intuitive explanation for why it is unlikely that gradient descent converges to a saddle point, by studying a non-convex quadratic and emphasizing the analogy with power iteration. Section 4 develops the main technical theorem, which uses the stable manifold theorem to show that the stable set of unstable fixed points has measure zero. Section 5 applies the main theorem to show that gradient descent, block coordinate descent, proximal point, manifold gradient descent, and mirror descent all avoid saddle points. Finally, we conclude in Section 6 by suggesting several directions of future work.

2 Preliminaries

Throughout the paper, we will use $f$ to denote a real-valued function in $C^2$, the space of twice-continuously differentiable functions.

Definition 1 (Strict Saddle).

When $\mathcal{X} = \mathbb{R}^n$,

  1. A point $x^*$ is a critical point of $f$ if $\nabla f(x^*) = 0$.

  2. A point $x^*$ is a strict saddle point of $f$ if $x^*$ is a critical point and $\lambda_{\min}(\nabla^2 f(x^*)) < 0$. (For the purposes of this paper, strict saddle points include local maximizers.) Let $\mathcal{X}^*$ denote the set of strict saddle points.

When $\mathcal{X}$ is a manifold, the same definition applies, but with the gradient and Hessian replaced by the Riemannian gradient $\operatorname{grad} f(x)$ and Riemannian Hessian $\operatorname{Hess} f(x)$. See Section 5.5 for details, and Chapter 5.5 of [1].

Our interest is in the attraction region of an optimization algorithm $g$, viewed as a mapping from $\mathcal{X}$ to $\mathcal{X}$. The iterates of the algorithm are generated by the sequence

$x_{k+1} = g(x_k) = g^{k+1}(x_0),$

where $g^k$ is the $k$-fold composition of $g$. As an example, gradient descent corresponds to $g(x) = x - \alpha \nabla f(x)$.
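To make the dynamical-systems viewpoint concrete, the following minimal Python sketch (ours, not from the paper) iterates an algorithm map $g$ by $k$-fold composition; the map shown is the gradient descent update applied to a hypothetical quadratic objective.

```python
import numpy as np

# Hypothetical objective: f(x) = 0.5 * x^T H x for a symmetric matrix H.
H = np.diag([1.0, -0.5])          # one positive and one negative eigenvalue
alpha = 0.1                        # step size

def grad_f(x):
    return H @ x

def g(x):
    """One step of the algorithm, viewed as a map g: R^n -> R^n."""
    return x - alpha * grad_f(x)   # gradient descent update

def iterate(g, x0, k):
    """k-fold composition g^k(x0) = g(g(...g(x0)))."""
    x = x0
    for _ in range(k):
        x = g(x)
    return x

print(iterate(g, np.array([1.0, 1e-3]), 50))  # iterates x_{k+1} = g(x_k)
```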

Since we are interested in the region of attraction of a critical point, we provide the definition of the stable set.

Definition 2 (Global Stable Set).

The global stable set $W_g(\mathcal{X}^*)$ of the strict saddle points is the set of initial conditions where iteration of the mapping $g$ converges to a strict saddle point. This is defined as

$W_g(\mathcal{X}^*) = \{x_0 : \lim_{k \to \infty} g^k(x_0) \in \mathcal{X}^*\}.$

3 Intuition

To illustrate why gradient descent and related first-order methods do not converge to saddle points, consider the case of a non-convex quadratic, $f(x) = \frac{1}{2} x^\top H x$. Without loss of generality, assume $H = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ with $\lambda_1, \ldots, \lambda_k > 0$ and $\lambda_{k+1}, \ldots, \lambda_n < 0$. $x^* = 0$ is the unique critical point of this function and the Hessian at $x^*$ is $H$. Gradient descent initialized from $x_0$ has iterates

$x_{t+1} = (I - \alpha H)^{t+1} x_0 = \sum_{i=1}^{n} (1 - \alpha \lambda_i)^{t+1} \langle e_i, x_0\rangle\, e_i,$

where $e_1, \ldots, e_n$ denote the standard basis vectors. This iteration resembles power iteration with the matrix $I - \alpha H$.

Let $E_s = \mathrm{span}(e_1, \ldots, e_k)$, and suppose $\alpha < 1/\max_i |\lambda_i|$. Thus we have $0 < 1 - \alpha\lambda_i < 1$ for $i \le k$ and $1 - \alpha\lambda_i > 1$ for $i > k$. If $x_0 \in E_s$, then $x_t$ converges to the saddle point at zero since $(1 - \alpha\lambda_i)^{t} \to 0$ for $i \le k$. However, if $x_0$ has a component outside $E_s$ then gradient descent diverges to $\infty$. For this simple quadratic function, we see that the global stable set (attractive set) of zero is the subspace $E_s$. Now, if we choose our initial point at random, the probability of that point landing in $E_s$ is zero as long as $k < n$ (i.e., $E_s$ is not full dimensional).
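The following short numerical sketch (ours, with arbitrarily chosen eigenvalues $\lambda_1 = 1 > 0$ and $\lambda_2 = -1 < 0$) illustrates the dichotomy: an initial point inside $E_s$ converges to the saddle at zero, while a generic random initialization has a component outside $E_s$ and diverges.

```python
import numpy as np

# f(x) = 0.5 * x^T H x with H = diag(1, -1): the origin is a strict saddle.
H = np.diag([1.0, -1.0])
alpha = 0.1                       # alpha < 1/max|lambda_i|
A = np.eye(2) - alpha * H         # gradient descent is power iteration with I - alpha*H

def run(x0, steps=200):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = A @ x
    return x

print(run([1.0, 0.0]))                # x0 in E_s = span(e1): converges to the saddle at 0
rng = np.random.default_rng(0)
print(run(rng.standard_normal(2)))    # generic x0: the e2-component blows up
```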

As an example of this phenomenon for a non-quadratic function, consider the following example from [30, Section 1.2.3]. Letting $f(x, y) = \frac{1}{2}x^2 + \frac{1}{4}y^4 - \frac{1}{2}y^2$, the corresponding gradient descent mapping is

$g(x, y) = \big((1 - \alpha)x,\ (1 + \alpha)y - \alpha y^3\big).$

The critical points are

$z_1 = (0, 0), \qquad z_2 = (0, -1), \qquad z_3 = (0, 1).$

The points $z_2$ and $z_3$ are isolated local minima, and $z_1$ is a saddle point.

Gradient descent initialized from any point of the form $(x, 0)$ converges to the saddle point $z_1$. Any other initial point either diverges, or converges to a local minimum, so the stable set of $z_1$ is the $x$-axis, which is a zero-measure set in $\mathbb{R}^2$. By computing the Hessian,

$\nabla^2 f(x, y) = \begin{bmatrix} 1 & 0 \\ 0 & 3y^2 - 1 \end{bmatrix},$

we find that $\nabla^2 f(0, 0)$ has one positive eigenvalue with eigenvector that spans the $x$-axis, thus agreeing with our above characterization of the stable set. If the initial point is chosen randomly, there is zero probability of initializing on the $x$-axis and thus zero probability of converging to the saddle point $z_1$.
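A small numerical sketch of this example (our code, with an arbitrarily chosen step size $\alpha = 0.1$): initializations on the $x$-axis converge to the saddle $(0, 0)$, while a typical random initialization converges to one of the minima $(0, \pm 1)$.

```python
import numpy as np

alpha = 0.1

def g(z):
    """Gradient descent map for f(x, y) = x^2/2 + y^4/4 - y^2/2."""
    x, y = z
    return np.array([(1 - alpha) * x, (1 + alpha) * y - alpha * y**3])

def run(z0, steps=500):
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = g(z)
    return z

print(run([2.0, 0.0]))                 # on the x-axis: converges to the saddle (0, 0)
rng = np.random.default_rng(1)
print(run(rng.standard_normal(2)))     # generic init: converges to (0, 1) or (0, -1)
```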

For gradient descent, the local attractive set of a critical point $x^*$ is well-approximated by the span of the eigenvectors corresponding to positive eigenvalues of the Hessian. By an application of Taylor’s theorem, one can see that if the initial point $x_0$ is uniformly random in a small neighborhood around $x^*$, then the probability of initializing in the span of these eigenvectors is zero whenever there is a negative eigenvalue. Thus, gradient descent initialized at $x_0$ will leave the neighborhood of $x^*$. Although this argument provides valuable intuition, there are several difficulties with formalizing it: 1) $x_0$ is randomly distributed over the entire domain, not a small neighborhood around $x^*$, and Taylor’s theorem does not provide any global guarantees; and 2) it does not rule out converging to a different saddle point.

4 Stable Manifold Theorem and Unstable Fixed Points

4.1 Setup

For the rest of this paper, $g$ is a mapping from $\mathcal{X}$ to itself, and $\mathcal{X}$ is a $d$-dimensional manifold without boundary. Recall that a $C^k$-smooth, $d$-dimensional manifold is a space $\mathcal{X}$, together with a collection of charts $\{(U_\alpha, \varphi_\alpha)\}_{\alpha \in A}$, called an atlas, where each $\varphi_\alpha$ is a homeomorphism from an open subset $U_\alpha \subset \mathcal{X}$ to $\mathbb{R}^d$. The charts are required to be compatible in the sense that, whenever $U_\alpha \cap U_\beta \neq \emptyset$, the transition map $\varphi_\alpha \circ \varphi_\beta^{-1}$ is a $C^k$ map from $\varphi_\beta(U_\alpha \cap U_\beta)$ to $\varphi_\alpha(U_\alpha \cap U_\beta)$. We also require that $\bigcup_{\alpha \in A} U_\alpha = \mathcal{X}$, and that $\mathcal{X}$ is second countable, which means that for any set $S$ contained in $\bigcup_{\alpha \in I} U_\alpha$ for some index set $I \subset A$, there exists a countable set $J \subset I$ such that $S \subset \bigcup_{\alpha \in J} U_\alpha$. We can now recall the definition of a measure zero subset of a manifold:

Definition 3 (Section 5.4 of [27]).

Given a $d$-dimensional manifold $\mathcal{X}$, we say that a set $S \subset \mathcal{X}$ is measure zero if there is an atlas $\{(U_\alpha, \varphi_\alpha)\}_{\alpha \in A}$ such that $\varphi_\alpha(S \cap U_\alpha)$ has Lebesgue-measure zero as a subset of $\mathbb{R}^d$ for every $\alpha$. In this case, we use the shorthand $\mu(S) = 0$. The measure zero property is independent of the choice of atlas [27, Chapter 5].

Definition 4 (Chapter 3 of [1]).

The differential of the mapping $g$, denoted as $\mathrm{D}g(x)$, is a linear operator from $T_x\mathcal{X}$ to $T_{g(x)}\mathcal{X}$, where $T_x\mathcal{X}$ is the tangent space of $\mathcal{X}$ at point $x$. Given a curve $\gamma$ in $\mathcal{X}$ with $\gamma(0) = x$ and $\frac{d\gamma}{dt}(0) = v \in T_x\mathcal{X}$, the linear operator is defined as $\mathrm{D}g(x)\,v = \frac{d(g \circ \gamma)}{dt}(0) \in T_{g(x)}\mathcal{X}$. The determinant of the linear operator, $\det(\mathrm{D}g(x))$, is the determinant of the matrix representing $\mathrm{D}g(x)$ with respect to an arbitrary basis. (The determinant is invariant under similarity transformations, so it is independent of the choice of basis.)

Lemma 1.

Let $S \subset \mathcal{X}$ be a measure zero subset. If $\det(\mathrm{D}g(x)) \neq 0$ for all $x \in \mathcal{X}$, then $g^{-1}(S)$ has measure zero.

Proof.

For clarity, let $E = g^{-1}(S)$. Let $\{(U_\beta, \varphi_\beta)\}$ be a countable collection of charts of the co-domain of $g$. By countable additivity of measure, it suffices to show that each $g^{-1}(S \cap U_\beta)$ is measure zero. Without loss of generality, we may assume that $S$ is contained in a single chart $(U, \varphi)$, else we could repeat the same argument for each element of the collection.

We wish to show that $\mu(g^{-1}(S)) = 0$. Let $\{(V_\gamma, \psi_\gamma)\}$ be another countable collection of charts of the domain of $g$. Define $h_\gamma = \varphi \circ g \circ \psi_\gamma^{-1}$, and note that $g^{-1}(S) = \bigcup_\gamma \big(g^{-1}(S) \cap V_\gamma\big)$. Thus

$\psi_\gamma\big(g^{-1}(S) \cap V_\gamma\big) = h_\gamma^{-1}\big(\varphi(S \cap U)\big).$

By assumption, $\varphi(S \cap U)$ is measure zero. The function $h_\gamma^{-1}$ is (locally) $C^1$ if $\det(\mathrm{D}g(x)) \neq 0$, by the inverse function theorem, and thus locally Lipschitz, so it preserves measure zero sets. By countable additivity and the displayed equation above, $E = g^{-1}(S)$ has measure zero. ∎

4.2 Unstable Fixed Points

Definition 5 (Unstable fixed point).

Let

$\mathcal{A}^*_g = \big\{x : g(x) = x,\ \max_i |\lambda_i(\mathrm{D}g(x))| > 1\big\}$

be the set of fixed points where the differential $\mathrm{D}g(x)$ has at least a single eigenvalue with magnitude greater than one. These are the unstable fixed points.

Theorem 1 (Theorem III.7, [40]).

Let $x^*$ be a fixed point for the $C^1$ local diffeomorphism $g: U \to \mathcal{X}$, where $U$ is a neighborhood of $x^*$ in $\mathcal{X}$. Suppose that $T_{x^*}\mathcal{X} = E_s \oplus E_u$, where $E_s$ is the span of the eigenvectors of $\mathrm{D}g(x^*)$ corresponding to eigenvalues of magnitude less than or equal to one, and $E_u$ is the span of the eigenvectors of $\mathrm{D}g(x^*)$ corresponding to eigenvalues of magnitude greater than one. Then there exists a $C^1$ embedded disk $W^{cs}_{\mathrm{loc}}$ that is tangent to $E_s$ at $x^*$, called the local stable center manifold. Moreover, there exists a neighborhood $B$ of $x^*$, such that $g(W^{cs}_{\mathrm{loc}}) \cap B \subset W^{cs}_{\mathrm{loc}}$, and $\bigcap_{k=0}^{\infty} g^{-k}(B) \subset W^{cs}_{\mathrm{loc}}$.

Theorem 2.

Let $g$ be a $C^1$ mapping from $\mathcal{X}$ to $\mathcal{X}$ with $\det(\mathrm{D}g(x)) \neq 0$ for all $x \in \mathcal{X}$. Then the set of initial points that converge to an unstable fixed point has measure zero: $\mu\big(\{x_0 : \lim_{k} g^k(x_0) \in \mathcal{A}^*_g\}\big) = 0$.

Proof.

For each $x^* \in \mathcal{A}^*_g$, there is an associated open neighborhood $B_{x^*}$ promised by the Stable Manifold Theorem 1. $\bigcup_{x^* \in \mathcal{A}^*_g} B_{x^*}$ forms an open cover, and since $\mathcal{X}$ is second-countable we can extract a countable subcover, so that $\bigcup_{x^* \in \mathcal{A}^*_g} B_{x^*} = \bigcup_{i=1}^{\infty} B_{x^*_i}$.

Define $W = \{x_0 : \lim_k g^k(x_0) \in \mathcal{A}^*_g\}$. Fix a point $x_0 \in W$. Since $g^k(x_0)$ converges to some $x^* \in \mathcal{A}^*_g$, then for some non-negative integer $T$ and all $t \ge T$, $g^t(x_0) \in \bigcup_{x^* \in \mathcal{A}^*_g} B_{x^*}$. Since we have a countable sub-cover, $g^t(x_0) \in B_{x^*_i}$ for some $i$ and all $t \ge T$. This implies that $g^t(x_0) \in \bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})$ for all $t \ge T$. By Theorem 1, $\bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})$ is a subset of the local center stable manifold, which has co-dimension at least one, and is thus measure zero.

Finally, $g^{T}(x_0) \in \bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})$ implies that $x_0 \in g^{-T}\big(\bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})\big)$. Since $T$ is unknown we union over all non-negative integers, to obtain $x_0 \in \bigcup_{T=0}^{\infty} g^{-T}\big(\bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})\big)$. Since $x_0 \in W$ was arbitrary, we have shown that $W \subset \bigcup_{i=1}^{\infty}\bigcup_{T=0}^{\infty} g^{-T}\big(\bigcap_{k=0}^{\infty} g^{-k}(B_{x^*_i})\big)$. Using Lemma 1 and that a countable union of measure zero sets is measure zero, $W$ has measure zero. ∎

Next, we state a simple corollary that only requires verifying $\det(\mathrm{D}g(x)) \neq 0$ and $\mathcal{X}^* \subset \mathcal{A}^*_g$.

Corollary 1.

Under the same conditions as Theorem 2, and assuming in addition that $\mathcal{X}^* \subset \mathcal{A}^*_g$, we have $\mu(W_g(\mathcal{X}^*)) = 0$.

Proof.

Since $\mathcal{X}^* \subset \mathcal{A}^*_g$, then $W_g(\mathcal{X}^*) \subset \{x_0 : \lim_k g^k(x_0) \in \mathcal{A}^*_g\}$. Using Theorem 2, $\mu(W_g(\mathcal{X}^*)) = 0$. ∎

5 Application to Optimization

5.1 Gradient Descent

As an application of Theorem 2, we show that gradient descent avoids saddle points. Consider the gradient descent algorithm with step-size $\alpha$:

$g(x) = x - \alpha \nabla f(x).$   (1)
Assumption 1 (Lipschitz Gradient).

Let $f \in C^2$, and assume the gradient is $L$-Lipschitz: $\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|$ for all $x, y \in \mathbb{R}^n$.

Proposition 1.

Every strict saddle point is an unstable fixed point of gradient descent, meaning $\mathcal{X}^* \subset \mathcal{A}^*_g$.

Proof.

First we verify that critical points of $f$ are fixed points of $g$. Since $\nabla f(x^*) = 0$, then $g(x^*) = x^* - \alpha \nabla f(x^*) = x^*$, and $x^*$ is a fixed point.

At a strict saddle $x^*$, $\mathrm{D}g(x^*) = I - \alpha \nabla^2 f(x^*)$, with eigenvalues $1 - \alpha\lambda_i$, where $\lambda_i$ are the eigenvalues of $\nabla^2 f(x^*)$. Since $x^*$ is a strict saddle, there is at least one eigenvalue $\lambda_j < 0$, and $1 - \alpha\lambda_j > 1$. Thus $x^* \in \mathcal{A}^*_g$. ∎
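As a quick numerical check of this computation (our sketch, with a hypothetical diagonal Hessian), the differential $\mathrm{D}g(x^*) = I - \alpha\nabla^2 f(x^*)$ has an eigenvalue larger than one precisely because the Hessian has a negative eigenvalue.

```python
import numpy as np

# Hypothetical Hessian at a strict saddle: one negative eigenvalue.
hess = np.diag([2.0, 0.5, -1.0])
L = np.max(np.abs(np.linalg.eigvalsh(hess)))   # Lipschitz constant of the gradient
alpha = 0.9 / L                                 # step size alpha < 1/L

Dg = np.eye(3) - alpha * hess                   # differential of g(x) = x - alpha * grad f(x)
eigs = np.linalg.eigvals(Dg)
print(eigs)                       # eigenvalues 1 - alpha * lambda_i
print(np.max(np.abs(eigs)) > 1)   # True: the negative Hessian eigenvalue gives 1 - alpha*lambda > 1
```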

Proposition 2.

Under Assumption 1 and $\alpha < 1/L$, $\det(\mathrm{D}g(x)) \neq 0$ for all $x$.

Proof.

By a straightforward calculation,

$\mathrm{D}g(x) = I - \alpha \nabla^2 f(x),$

where the eigenvalues of $\mathrm{D}g(x)$ are $1 - \alpha\lambda_i(\nabla^2 f(x))$, and so

$\det(\mathrm{D}g(x)) = \prod_{i=1}^{n}\big(1 - \alpha\lambda_i(\nabla^2 f(x))\big).$

Using the Lipschitz gradient assumption, $|\lambda_i(\nabla^2 f(x))| \le L < 1/\alpha$, and each term in the product is positive, so $\det(\mathrm{D}g(x)) > 0$. ∎

Corollary 2.

Let $g$ be the gradient descent algorithm as defined in Equation (1). Under Assumption 1 and $\alpha < 1/L$, the stable set of the strict saddle points has measure zero, meaning $\mu(W_g(\mathcal{X}^*)) = 0$.

Proof.

The proof is a straightforward application of the previous two propositions and Corollary 1. Proposition 1 shows that $\mathcal{X}^* \subset \mathcal{A}^*_g$, and Proposition 2 shows that $\det(\mathrm{D}g(x)) \neq 0$ for all $x$. By applying Corollary 1, we conclude that $\mu(W_g(\mathcal{X}^*)) = 0$. ∎

5.2 Proximal Point

The proximal point algorithm is given by the iteration

$x_{k+1} = g(x_k) = \arg\min_{z}\ f(z) + \frac{1}{2\alpha}\|z - x_k\|^2.$   (2)
Proposition 3.

Under Assumption 1 and $\alpha < 1/L$:

  1. $\det(\mathrm{D}g(x)) \neq 0$ for all $x$.

  2. Every strict saddle point is an unstable fixed point of the proximal point algorithm, meaning $\mathcal{X}^* \subset \mathcal{A}^*_g$.

Proof.

Since $\nabla f$ is $L$-Lipschitz, the function $z \mapsto f(z) + \frac{1}{2\alpha}\|z - x\|^2$ is strongly convex for $\alpha < 1/L$, and the $\arg\min$ is well-defined and unique. By the optimality conditions, $\nabla f(g(x)) + \frac{1}{\alpha}\big(g(x) - x\big) = 0$. By implicit differentiation, $\nabla^2 f(g(x))\,\mathrm{D}g(x) + \frac{1}{\alpha}\big(\mathrm{D}g(x) - I\big) = 0$, and so

$\mathrm{D}g(x) = \big(I + \alpha\nabla^2 f(g(x))\big)^{-1}.$

At a strict saddle $x^*$, $g(x^*) = x^*$ and $\mathrm{D}g(x^*) = \big(I + \alpha\nabla^2 f(x^*)\big)^{-1}$, and thus $\mathrm{D}g(x^*)$ has an eigenvalue $\frac{1}{1 + \alpha\lambda_{\min}(\nabla^2 f(x^*))}$ greater than one. For $\alpha < 1/L$, $I + \alpha\nabla^2 f(g(x))$ is invertible, and thus $\det(\mathrm{D}g(x)) \neq 0$. ∎
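For a quadratic objective the proximal step has a closed form, which gives a quick check of Proposition 3 (our sketch, with a hypothetical quadratic): the proximal map is $x \mapsto (I + \alpha H)^{-1}x$, so $\mathrm{D}g = (I + \alpha H)^{-1}$ has an eigenvalue $1/(1 + \alpha\lambda_{\min}) > 1$ at a strict saddle, while $\det(\mathrm{D}g) \neq 0$.

```python
import numpy as np

# f(x) = 0.5 * x^T H x, with a strict saddle at the origin.
H = np.diag([2.0, -1.0])
L = np.max(np.abs(np.linalg.eigvalsh(H)))
alpha = 0.9 / L                                 # alpha < 1/L

# argmin_z 0.5*z^T H z + ||z - x||^2/(2*alpha)  =  (I + alpha*H)^{-1} x
Dg = np.linalg.inv(np.eye(2) + alpha * H)       # differential of the proximal map
print(np.linalg.eigvals(Dg))                    # 1/(1 + alpha*lambda_i); one entry > 1
print(np.linalg.det(Dg) != 0)                   # True: det(Dg) is nonzero
```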

By combining Proposition 3 and Corollary 1, we have the following:

Corollary 3 (Proximal Point).

Let $g$ be the proximal point algorithm as defined in Equation (2). Under Assumption 1 and $\alpha < 1/L$, the stable set of the strict saddle points has measure zero, meaning $\mu(W_g(\mathcal{X}^*)) = 0$.

5.3 Coordinate Descent

1 Input: Function $f$, step size $\alpha$, initial point $x_0$
2 For $t = 0, 1, 2, \ldots$
3   For index $i = 1, \ldots, n$:
      $x \leftarrow x - \alpha \nabla_i f(x)\, e_i$, where $\nabla_i f(x) = \langle e_i, \nabla f(x)\rangle$   (3)
Algorithm 1 Coordinate Descent

We define $g_i(x) = x - \alpha \nabla_i f(x)\, e_i$ to be the coordinate descent update of index $i$ in Algorithm 1. One iteration of coordinate gradient descent corresponds to the update

$g(x) = g_n \circ g_{n-1} \circ \cdots \circ g_1(x).$   (4)
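The sketch below (ours, with a hypothetical quadratic objective and an arbitrary step size satisfying $\alpha < 1/L_{\max}$) implements one iteration of coordinate descent as the composition $g = g_n \circ \cdots \circ g_1$ of the single-coordinate updates.

```python
import numpy as np

# Hypothetical objective f(x) = 0.5 * x^T H x with Hessian H.
H = np.array([[2.0, 0.3], [0.3, -1.0]])
alpha = 0.4                        # alpha < 1/L_max, with L_max = max_i |H_ii| = 2

def grad_f(x):
    return H @ x

def g_i(x, i):
    """Coordinate descent update of index i: x <- x - alpha * (grad f(x))_i * e_i."""
    x = x.copy()
    x[i] -= alpha * grad_f(x)[i]
    return x

def g(x):
    """One full sweep: g = g_n o ... o g_1 (Equation (4))."""
    for i in range(len(x)):
        x = g_i(x, i)
    return x

x = np.array([1.0, 0.5])
print(g(x))                        # iterate produced by one coordinate descent sweep
```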
Assumption 2 (Lipschitz Coordinate Gradient).

Let $f \in C^2$, and assume each coordinate gradient is Lipschitz: $|\nabla_i f(x + t\, e_i) - \nabla_i f(x)| \le L_i |t|$ for all $x \in \mathbb{R}^n$ and $t \in \mathbb{R}$. Define $L_{\max} = \max_i L_i$.

Lemma 2.

The differential is

$\mathrm{D}g(x) = \prod_{i=n}^{1}\big(I - \alpha\, e_i e_i^\top \nabla^2 f(y_i)\big),$   (5)

where $e_i$ is a standard basis vector and $y_i = g_{i-1} \circ \cdots \circ g_1(x)$ (with $y_1 = x$).

Proof.

This is an application of the chain rule. The differential of the composition of two functions is just $\mathrm{D}(g_i \circ g_j)(x) = \mathrm{D}g_i(g_j(x))\,\mathrm{D}g_j(x)$. By repeatedly applying this and observing that $\mathrm{D}g_i(y) = I - \alpha\, e_i e_i^\top \nabla^2 f(y)$, we have the result. ∎

Proposition 4.

Under Assumption 2 and $\alpha < 1/L_{\max}$, $\det(\mathrm{D}g(x)) \neq 0$ for all $x$.

Proof.

It suffices to prove that every term of Equation (5) is an invertible matrix. Using the matrix determinant lemma, the characteristic polynomial of the matrix $I - \alpha\, e_i e_i^\top \nabla^2 f(y_i)$ is equal to $(1 - \lambda)^{n-1}\big(1 - \lambda - \alpha\nabla^2_{ii} f(y_i)\big)$. For $\alpha < 1/L_{\max}$, the eigenvalues of $I - \alpha\, e_i e_i^\top \nabla^2 f(y_i)$ are all positive (they equal $1$ or $1 - \alpha\nabla^2_{ii} f(y_i) > 0$), and thus the matrix is invertible. ∎
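The eigenvalue claim can be checked numerically (our sketch, with a hypothetical symmetric Hessian): the matrix $I - \alpha e_i e_i^\top H$ has $n-1$ eigenvalues equal to one and a single eigenvalue $1 - \alpha H_{ii}$.

```python
import numpy as np

# Check: eigenvalues of I - alpha * e_i e_i^T H are 1 (n-1 times) and 1 - alpha*H_ii.
H = np.array([[2.0, 0.4, -0.3],
              [0.4, -1.0, 0.1],
              [-0.3, 0.1, 0.5]])
alpha, i, n = 0.4, 1, 3
E = np.zeros((n, n)); E[i, i] = 1.0
M = np.eye(n) - alpha * E @ H
print(np.sort(np.linalg.eigvals(M).real))     # two eigenvalues equal to 1, one equal to 1 - alpha*H[i,i]
print(1 - alpha * H[i, i])                    # the non-unit eigenvalue (1.4 for this H)
```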

Proposition 5 (Instability at saddle points).

Every strict saddle point is an unstable fixed point of coordinate descent, meaning $\mathcal{X}^* \subset \mathcal{A}^*_g$.

Proof.

Let $x^* \in \mathcal{X}^*$, $H = \nabla^2 f(x^*)$, and let $v$ be the eigenvector corresponding to the smallest eigenvalue of $H$, normalized so that $\|v\| = 1$. Since $\nabla f(x^*) = 0$, each intermediate point $y_i$ in Lemma 2 equals $x^*$, so $\mathrm{D}g(x^*) = \prod_{i=n}^{1}(I - \alpha\, e_i e_i^\top H)$.

We shall prove that $\|\mathrm{D}g(x^*)^t v\| \ge c\,(1 + \epsilon)^{t/2}$ for some $\epsilon > 0$ and $c > 0$ which depend on $\alpha$ and $H$, but not on $t$. Applying Gelfand’s theorem,

$\rho\big(\mathrm{D}g(x^*)\big) = \lim_{t \to \infty}\|\mathrm{D}g(x^*)^t\|^{1/t} \ge (1 + \epsilon)^{1/2} > 1,$

and thus $\mathrm{D}g(x^*)$ has an eigenvalue of magnitude greater than $1$.

We fix some arbitrary iteration $t$ and let $y_t = \mathrm{D}g(x^*)^t v$ (so $y_0 = v$). We will first show that there exists an $\epsilon > 0$ so that

$\langle y_{t+1}, H y_{t+1}\rangle \le (1 + \epsilon)\,\langle y_t, H y_t\rangle$   (6)

for all $t$. Let $w_1 = y_t$ and $w_{i+1} = (I - \alpha\, e_i e_i^\top H)\, w_i = w_i - \alpha\langle e_i, H w_i\rangle e_i$, so that $w_{n+1} = \mathrm{D}g(x^*)\, y_t = y_{t+1}$. We see that the sequence $\langle w_1, H w_1\rangle, \ldots, \langle w_{n+1}, H w_{n+1}\rangle$ is decreasing (non-increasing),

$\langle w_{i+1}, H w_{i+1}\rangle = \langle w_i, H w_i\rangle - 2\alpha\langle e_i, H w_i\rangle^2 + \alpha^2 H_{ii}\langle e_i, H w_i\rangle^2 \le \langle w_i, H w_i\rangle - \alpha\langle e_i, H w_i\rangle^2,$   (7)

where the last inequality uses that $\alpha H_{ii} \le \alpha L_{\max} < 1$.

Next we use the claim to show a sufficient decrease by lower bounding $\max_i |\langle e_i, H w_i\rangle|$.

Claim 1.

Let $y$ be in the range of $H$, and let $z_1 = y$ and $z_{i+1} = z_i - \alpha\langle e_i, H z_i\rangle e_i$. There exists an index $i$ so that $|\langle e_i, H z_i\rangle| \ge c\,\|y\|$ for some global constant $c > 0$ that depends on $\alpha$ and $H$, but not on $y$.

Proof.

We assume that $|\langle e_i, H z_i\rangle| < \delta\|y\|$ for all $i$, for some $\delta$ to be chosen later. For $i = 1$, it holds that $z_1 = y$ and $\|z_1 - y\| = 0$. Suppose for induction that $\|z_i - y\| \le (i-1)\,\alpha\delta\|y\|$, and thus $\|z_{i+1} - y\| \le \|z_i - y\| + \alpha|\langle e_i, H z_i\rangle|$. Using induction and the triangle inequality we get

$\|z_{i+1} - y\| \le (i-1)\,\alpha\delta\|y\| + \alpha\delta\|y\| = i\,\alpha\delta\|y\|,$

where we assume $\delta \le \frac{1}{n\alpha}$ so that $\|z_i - y\| \le \|y\|$ for all $i$. Using the above calculation,

$|\langle e_i, H y\rangle| \le |\langle e_i, H z_i\rangle| + \|H\|\,\|z_i - y\| \le \big(1 + n\alpha\|H\|\big)\delta\|y\|.$

Thus $\|H y\| \le \sqrt{n}\,\big(1 + n\alpha\|H\|\big)\delta\|y\|$, and

$\|H y\| \ge \sigma_{\min}(H)\,\|y\|,$

where $\sigma_{\min}(H)$ is the smallest non-zero singular value of $H$ (using that $y$ lies in the range of $H$). Thus by choosing $\delta$ small enough such that

$\sqrt{n}\,\big(1 + n\alpha\|H\|\big)\delta < \sigma_{\min}(H),$

we have obtained a contradiction. ∎

Decompose $y_t$ into the orthogonal components defined by the nullspace $N(H)$ and range space $R(H)$, writing $y_t = y_t^{N} + y_t^{R}$. Notice that each factor $I - \alpha\, e_i e_i^\top H$ acts as the identity on $N(H)$, and since $H y_t^{N} = 0$,

$\langle y_t, H y_t\rangle = \langle y_t^{R}, H y_t^{R}\rangle \ge -\|H\|\,\|y_t^{R}\|^2.$

Define an auxiliary sequence $z_1 = y_t^{R}$, and $z_{i+1} = z_i - \alpha\langle e_i, H z_i\rangle e_i$ as in Claim 1. Similarly, $w_1 = y_t$, $w_{i+1} = w_i - \alpha\langle e_i, H w_i\rangle e_i$, and $w_i = z_i + y_t^{N}$, so that $\langle e_i, H w_i\rangle = \langle e_i, H z_i\rangle$ for every $i$. Applying Claim 1 to $z_1 = y_t^{R}$, there is an index $i$ with $|\langle e_i, H w_i\rangle| \ge c\,\|y_t^{R}\|$. It follows that

$\langle y_{t+1}, H y_{t+1}\rangle \le \langle w_{i+1}, H w_{i+1}\rangle$   (non-increasing property in Equation (7))
$\le \langle w_i, H w_i\rangle - \alpha\langle e_i, H w_i\rangle^2$   (using Equation (7))
$\le \langle w_i, H w_i\rangle - \alpha c^2\,\|y_t^{R}\|^2$   (using Claim 1)
$\le \langle w_i, H w_i\rangle + \frac{\alpha c^2}{\|H\|}\,\langle y_t, H y_t\rangle$   (since $\langle y_t, H y_t\rangle \ge -\|H\|\,\|y_t^{R}\|^2$)
$\le \Big(1 + \frac{\alpha c^2}{\|H\|}\Big)\langle y_t, H y_t\rangle.$   (non-increasing property)

Let $\epsilon = \frac{\alpha c^2}{\|H\|}$. By inducting, and noting that $\langle y_0, H y_0\rangle = \langle v, H v\rangle = \lambda_{\min}(H) < 0$,

$\langle y_T, H y_T\rangle \le (1 + \epsilon)^{T}\lambda_{\min}(H).$

Using $\|\mathrm{D}g(x^*)^{T}\| \ge \|\mathrm{D}g(x^*)^{T} v\| = \|y_T\|$,

$\|\mathrm{D}g(x^*)^{T}\|^2 \ge \|y_T\|^2 \ge \frac{-\langle y_T, H y_T\rangle}{\|H\|} \ge \frac{|\lambda_{\min}(H)|}{\|H\|}\,(1 + \epsilon)^{T},$

where the last inequality uses that $\langle y_T, H y_T\rangle \le (1 + \epsilon)^{T}\lambda_{\min}(H)$. By Gelfand’s theorem, we have established

$\rho\big(\mathrm{D}g(x^*)\big) = \lim_{T \to \infty}\|\mathrm{D}g(x^*)^{T}\|^{1/T} \ge (1 + \epsilon)^{1/2} > 1,$

and thus $\mathrm{D}g(x^*)$ has an eigenvalue of magnitude greater than one. Thus $x^* \in \mathcal{A}^*_g$. ∎
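A numerical illustration of Proposition 5 (our sketch, with a hypothetical Hessian): the differential of a full coordinate-descent sweep at a strict saddle is the product $\prod_{i=n}^{1}(I - \alpha e_i e_i^\top H)$, and its spectral radius exceeds one.

```python
import numpy as np

# Hypothetical Hessian of f at a strict saddle: one negative eigenvalue.
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, -0.8]])
n = H.shape[0]
alpha = 0.9 / np.max(np.abs(np.diag(H)))       # alpha < 1/L_max for a quadratic

# Dg(x*) = (I - alpha*e_n e_n^T H) ... (I - alpha*e_1 e_1^T H), Equation (5) at x*.
Dg = np.eye(n)
for i in range(n):
    E = np.zeros((n, n))
    E[i, i] = 1.0
    Dg = (np.eye(n) - alpha * E @ H) @ Dg      # left-multiply: later coordinates applied last

print(np.max(np.abs(np.linalg.eigvals(Dg))))   # spectral radius > 1 at a strict saddle
```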

By combining Propositions 5, 4, and Corollary 1, we have the following:

Corollary 4 (Coordinate Descent).

Let $g$ be the coordinate descent algorithm as defined in Equation (4). Under Assumption 2 and $\alpha < 1/L_{\max}$, the stable set of the strict saddle points has measure zero, meaning $\mu(W_g(\mathcal{X}^*)) = 0$.

Remark 1.

In the worst case, $L_{\max} = L$, but in many instances $L_{\max} \ll L$, so coordinate descent can use more aggressive step-sizes. The step-size choice $\alpha < 1/L_{\max}$ is standard for coordinate-descent methods [37].

5.4 Block Coordinate Descent

The results of this section are a strict generalization of the previous section, but we present the coordinate descent case separately, since the proofs are considerably shorter.

We partition the set $\{1, \ldots, n\}$ into $b$ blocks $S_1, \ldots, S_b$ such that $\bigcup_{i=1}^{b} S_i = \{1, \ldots, n\}$. For ease of notation, we write $\nabla_{S_i} f(x)$ for the vector of partial derivatives $\{\nabla_j f(x)\}_{j \in S_i}$.

1 Input: Function $f$, step size $\alpha$, initial point $x_0$
2 For $t = 0, 1, 2, \ldots$
3   For block $i = 1, \ldots, b$
4     For index $j$ in block $S_i$:
        $x^{j} \leftarrow x^{j} - \alpha \nabla_j f(x)$, where $\nabla_{S_i} f(x)$ is evaluated at the iterate available before any coordinate in the block is updated   (8)
Algorithm 2 Block Coordinate Descent

We define $g_i(x) = x - \alpha P_{S_i}\nabla f(x)$ to be the block coordinate descent update of block $S_i$ in Algorithm 2. Block coordinate gradient descent is a dynamical system

$x_{k+1} = g(x_k),$   (9)

where $g = g_b \circ g_{b-1} \circ \cdots \circ g_1$. We define the matrix $P_{S_i} = \sum_{j \in S_i} e_j e_j^\top$, i.e., the projector onto the entries in $S_i$.
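A short sketch (ours, with hypothetical blocks and a hypothetical quadratic objective) of the block update $g_i(x) = x - \alpha P_{S_i}\nabla f(x)$ and the full sweep of Equation (9), with the projector $P_{S_i} = \sum_{j \in S_i} e_j e_j^\top$ built explicitly.

```python
import numpy as np

# Hypothetical quadratic objective and a partition of the (0-indexed) coordinates into two blocks.
H = np.diag([2.0, 1.0, -0.5, 0.7])
blocks = [[0, 1], [2, 3]]
alpha = 0.3

def grad_f(x):
    return H @ x

def projector(S, n):
    """P_S = sum_{j in S} e_j e_j^T: projects onto the coordinates in S."""
    P = np.zeros((n, n))
    for j in S:
        P[j, j] = 1.0
    return P

def g(x):
    """One sweep of block coordinate descent: g = g_b o ... o g_1 (Equation (9))."""
    n = len(x)
    for S in blocks:
        x = x - alpha * projector(S, n) @ grad_f(x)   # g_i(x) = x - alpha * P_{S_i} grad f(x)
    return x

print(g(np.array([1.0, -1.0, 0.5, 0.2])))
```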

Lemma 3.

The differential is

$\mathrm{D}g(x) = \prod_{i=b}^{1}\big(I - \alpha P_{S_i}\nabla^2 f(y_i)\big),$   (10)

where $y_i = g_{i-1} \circ \cdots \circ g_1(x)$ (with $y_1 = x$).
Proof.

This is an application of the chain rule. The differential of the composition of two functions is just $\mathrm{D}(g_i \circ g_j)(x) = \mathrm{D}g_i(g_j(x))\,\mathrm{D}g_j(x)$. By repeatedly applying this and observing that $\mathrm{D}g_i(y) = I - \alpha P_{S_i}\nabla^2 f(y)$, we obtain the result. ∎

Assumption 3.

Let $f \in C^2$, and let $\nabla^2_{S_i} f(x)$ be the submatrix of $\nabla^2 f(x)$ obtained by extracting the rows and columns indexed by $S_i$. Let $L_i = \sup_x \|\nabla^2_{S_i} f(x)\|_2$ and $L_{\max} = \max_i L_i$.

Proposition 6.

Under Assumption 3 and $\alpha < 1/L_{\max}$, $\det(\mathrm{D}g(x)) \neq 0$ for all $x$.

Proof.

It suffices to prove that every term of the product (10) is an invertible matrix. Every matrix of the form $I - \alpha P_{S_i}\nabla^2 f(y_i)$ has $n - |S_i|$ eigenvalues equal to one, and the rest of its eigenvalues correspond to eigenvalues of $I - \alpha\nabla^2_{S_i} f(y_i)$. Since $\alpha < 1/L_{\max}$, the eigenvalues of $I - \alpha\nabla^2_{S_i} f(y_i)$ are all positive, and thus $I - \alpha P_{S_i}\nabla^2 f(y_i)$ is invertible. ∎