Depth Creates No Bad Local Minima

02/27/2017 ∙ by Haihao Lu, et al.

In deep learning, depth, as well as nonlinearity, creates non-convex loss surfaces. Does depth alone, then, create bad local minima? In this paper, we prove that, without nonlinearity, depth alone does not create bad local minima, although it does induce a non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof showing that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and the analysis provides a method for showing similar results beyond the square loss in deep linear models.


1 Introduction

Deep learning has recently had a profound impact on the machine learning, computer vision, and artificial intelligence communities. In addition to its practical successes, previous studies have revealed several reasons why deep learning has been successful from the viewpoint of its model classes. An (over-)simplified explanation is the harmony of its great expressivity and big data: because of its great expressivity, deep learning can have less bias, while a large training dataset leads to less variance. The great expressivity can also be seen from the perspective of representation learning: whereas traditional machine learning makes use of features designed by human users or experts as a type of prior, deep learning tries to learn features from the data as well. More accurately, a key aspect of the model classes in deep learning is their generalization property: despite their great expressivity, deep learning model classes can maintain good generalization properties (Livni et al., 2014; Mhaskar et al., 2016; Poggio et al., 2016). This distinguishes deep learning from other, possibly too flexible, methods, such as shallow neural networks with too many hidden units and traditional kernel methods with too powerful a kernel. Therefore, the practical success of deep learning seems to be supported by the great quality of its model classes.

However, having a great model class is not so useful if we cannot find a good model in that class via training. Training a deep model is typically framed as non-convex optimization. Because of its non-convexity and high dimensionality, it has been unclear whether we can efficiently train a deep model. Note that the difficulty comes from the combination of non-convexity and high dimensionality in the weight parameters. If we could reformulate the training problem into several decoupled training problems, each with a small number of weight parameters, we could effectively train a model via non-convex optimization, as theoretically shown in the Bayesian optimization and global optimization literature (Kawaguchi et al., 2015; Wang et al., 2016; Kawaguchi et al., 2016). As a result of non-convexity and high dimensionality, it was shown that training a general neural network model is NP-hard (Blum and Rivest, 1992). However, such a worst-case hardness result does not tightly capture what is going on in practice, as we seem to be able to train deep models efficiently in practice.

To understand its practical success beyond worst case analysis, theoretical and practical investigations on the training of deep models have recently become an active research area (Saxe et al., 2014; Dauphin et al., 2014; Choromanska et al., 2015; Haeffele and Vidal, 2015; Shamir, 2016; Kawaguchi, 2016; Swirszcz et al., 2016; Arora et al., 2016; Freeman and Bruna, 2016; Soudry and Hoffer, 2017).

An important property of a deep model is that the non-convexity comes from depth as well as from nonlinearity: indeed, depth by itself creates highly non-convex optimization problems. One way to see a property of the non-convexity induced by depth is the non-uniqueness owing to weight-space symmetries (Kůrková and Kainen, 1994): the model represents the same function mapping from the input to the output with distinct settings in the weight space. Accordingly, there are many distinct globally optimal points and many distinct points with the same loss value due to weight-space symmetries, which results in a non-convex epigraph (i.e., a non-convex function) as well as non-convex sublevel sets (i.e., a non-quasiconvex function). Thus, it has been unclear whether depth by itself creates a difficult non-convex loss surface. The recent work (Kawaguchi, 2016) indirectly showed, as a consequence of its main theoretical results, that depth does not create bad local minima for deep linear models with the square error (Frobenius norm) loss, although it does create potentially bad saddle points.
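As a minimal illustration of this point (a numerical sketch we add here for exposition; the scalar example and the code are not from the original paper), consider the one-dimensional target $r = 1$: the shallow objective $(r - 1)^2$ is convex, while its two-layer reparameterization $(w_2 w_1 - 1)^2$ is not, since its Hessian has a negative eigenvalue at the origin.

```python
import numpy as np

# Two-layer scalar "deep linear network": f(w1, w2) = (w2*w1 - 1)**2.
# The shallow counterpart g(r) = (r - 1)**2 is convex, but f is not.
def hessian(w1, w2):
    # Hessian of f at (w1, w2), computed by hand:
    # df/dw1 = 2*(w2*w1 - 1)*w2,  df/dw2 = 2*(w2*w1 - 1)*w1
    r = w2 * w1 - 1.0
    return np.array([[2 * w2 ** 2,          2 * r + 2 * w1 * w2],
                     [2 * r + 2 * w1 * w2,  2 * w1 ** 2        ]])

# At the origin the Hessian is [[0, -2], [-2, 0]]: one negative eigenvalue,
# so f is non-convex (the origin is a saddle point created purely by depth).
print(np.linalg.eigvalsh(hessian(0.0, 0.0)))   # approx [-2.,  2.]
```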

In this paper, we directly prove that every local minimum of the deep linear model corresponds to a local minimum of the shallow model. Building upon this new theoretical insight, we propose a simpler proof for one of the main results in the recent work (Kawaguchi, 2016): all of the local minima of feedforward deep linear neural networks with the Frobenius norm loss are global minima. The power of this proof goes beyond the Frobenius norm: as long as the loss function satisfies the conclusion of Theorem 3.2, every local minimum of the deep linear model corresponds to a local minimum of the shallow model.

2 Main Result

To examine the effect of depth alone, we consider the following optimization problem of feedforward deep linear neural networks with the square error loss:

$$\min_{W_1, \dots, W_H} \; f(W_1, \dots, W_H) := \frac{1}{2} \left\| W_H W_{H-1} \cdots W_1 X - Y \right\|_F^2 \qquad (1)$$

where $W_i \in \mathbb{R}^{d_i \times d_{i-1}}$ is the weight matrix of the $i$-th layer, $X \in \mathbb{R}^{d_0 \times m}$ is the input training data, and $Y \in \mathbb{R}^{d_H \times m}$ is the target training data. Let $p \in \arg\min_{0 \le i \le H} d_i$ be the index corresponding to the smallest width. Note that for any $i$, we have $d_i \ge d_p$. To analyze optimization problem (1), we also consider the following optimization problem with a "shallow" linear model, which is equivalent to problem (1) in terms of the global minimum value:

$$\min_{R \in \mathbb{R}^{d_H \times d_0}} \; \tilde{f}(R) := \frac{1}{2} \left\| R X - Y \right\|_F^2 \quad \text{subject to} \quad \operatorname{rank}(R) \le d_p \qquad (2)$$

where $R$ plays the role of the product $W_H W_{H-1} \cdots W_1$. Note that problem (2) is non-convex unless $d_p \ge \min\{d_0, d_H\}$ (in which case the rank constraint is vacuous), whereas problem (1) is non-convex even when $d_p \ge \min\{d_0, d_H\}$ with $H \ge 2$. In other words, deep parameterization creates a non-convex loss surface even without nonlinearity.
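The correspondence between the two problems can be checked numerically. The following sketch (with arbitrarily chosen widths and random data; the dimensions and variable names are ours, not the paper's) verifies that the deep objective (1) equals the shallow objective (2) at $R = W_H W_{H-1} \cdots W_1$, and that this product automatically satisfies the rank constraint $\operatorname{rank}(R) \le d_p$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = [4, 6, 2, 5, 3]          # widths d_0, ..., d_H (smallest width d_p = 2)
m = 20                       # number of training samples
X = rng.normal(size=(d[0], m))
Y = rng.normal(size=(d[-1], m))
W = [rng.normal(size=(d[i + 1], d[i])) for i in range(len(d) - 1)]

def deep_loss(W, X, Y):
    """Objective (1): 0.5 * || W_H ... W_1 X - Y ||_F^2."""
    out = X
    for Wi in W:
        out = Wi @ out
    return 0.5 * np.linalg.norm(out - Y) ** 2

def shallow_loss(R, X, Y):
    """Objective (2) without the rank constraint: 0.5 * || R X - Y ||_F^2."""
    return 0.5 * np.linalg.norm(R @ X - Y) ** 2

R = W[-1]
for Wi in reversed(W[:-1]):
    R = R @ Wi                                   # R = W_H ... W_1
print(np.isclose(deep_loss(W, X, Y), shallow_loss(R, X, Y)))  # True
print(np.linalg.matrix_rank(R) <= min(d))        # True: rank(R) <= d_p
```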

Although we consider only the Frobenius norm loss here, the proof holds for more general losses: as long as the loss function satisfies the conclusion of Theorem 3.2, every local minimum of the deep linear model corresponds to a local minimum of the shallow model.

Our first main result states that even though deep parameterization creates a non-convex loss surface, it does not create new bad local minima. In other words, every local minimum in problem (1) corresponds to a local minimum in problem (2).

Theorem 2.1.

(Depth creates no new bad local minima) Assume that $X$ and $Y$ have full row rank. If $(W_1, \dots, W_H)$ is a local minimum of problem (1), then $R = W_H W_{H-1} \cdots W_1$ achieves the value of a local minimum of problem (2).

Therefore, we can deduce the properties of the local minima of problem (1) from those of problem (2). Accordingly, we first analyze the local minima of problem (2) and obtain the following statement.

Theorem 2.2.

(No bad local minima for the rank-restricted shallow model) If $X$ has full row rank, then all local minima of optimization problem (2) are global minima.
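For intuition, problem (2) is the classical reduced-rank regression problem, and its global minimum value can be computed in closed form via a truncated SVD (an Eckart-Young argument). The following sketch assumes generic random data and uses our own variable names; it is an illustration, not part of the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(1)
d0, dH, m, p = 6, 5, 30, 2          # input dim, output dim, samples, rank budget d_p
X = rng.normal(size=(d0, m))        # full row rank with probability 1
Y = rng.normal(size=(dH, m))

def shallow_loss(R):
    return 0.5 * np.linalg.norm(R @ X - Y) ** 2

# Global minimum of (2): with X = U S V^T (full row rank), minimizing ||R X - Y||_F
# over rank(R) <= p reduces to a best rank-p approximation of Y V, which is solved
# by a truncated SVD (Eckart-Young); then map back via R = M_p S^{-1} U^T.
U, S, Vt = np.linalg.svd(X, full_matrices=False)     # X = U @ diag(S) @ Vt
M = Y @ Vt.T                                         # Y V  (dH x d0)
Um, Sm, Vmt = np.linalg.svd(M, full_matrices=False)
M_p = Um[:, :p] @ np.diag(Sm[:p]) @ Vmt[:p, :]       # best rank-p approximation of M
R_star = M_p @ np.diag(1.0 / S) @ U.T                # candidate global minimizer of (2)

print(np.linalg.matrix_rank(R_star))                 # p
print(shallow_loss(R_star))                          # global minimum value of (2)
```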

By combining Theorems 2.1 and 2.2, we conclude that every local minimum is a global minimum for feedforward deep linear networks with a square error loss.

Theorem 2.3.

(No bad local minima for deep linear neural networks) If $X$ and $Y$ have full row rank, then all local minima of problem (1) are global minima.
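Theorem 2.3 can be sanity-checked numerically: on a small random instance, plain gradient descent on problem (1) typically drives the deep objective down to the global minimum value of the rank-constrained shallow problem (2), which we compute in closed form as in the previous sketch. The step size, initialization scale, and iteration count below are arbitrary choices for this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d = [6, 4, 2, 5]                     # widths d_0, d_1, d_2, d_3  (d_p = 2)
m = 30
X = rng.normal(size=(d[0], m))
Y = rng.normal(size=(d[-1], m))
W = [0.3 * rng.normal(size=(d[i + 1], d[i])) for i in range(len(d) - 1)]

def forward(W):
    outs = [X]
    for Wi in W:
        outs.append(Wi @ outs[-1])
    return outs                       # outs[i] = W_i ... W_1 X

lr = 1e-3
for _ in range(50000):                # plain gradient descent on objective (1)
    outs = forward(W)
    E = outs[-1] - Y                  # residual W_H ... W_1 X - Y
    # back-propagate: grad_i = (W_H ... W_{i+1})^T E (W_{i-1} ... W_1 X)^T
    G = E
    for i in range(len(W) - 1, -1, -1):
        grad = G @ outs[i].T
        G = W[i].T @ G                # uses the pre-update W[i]
        W[i] -= lr * grad

print(0.5 * np.linalg.norm(forward(W)[-1] - Y) ** 2)   # deep loss after training

# Closed-form global minimum value of the rank-constrained shallow problem (2):
U, S, Vt = np.linalg.svd(X, full_matrices=False)
M = Y @ Vt.T
Um, Sm, Vmt = np.linalg.svd(M, full_matrices=False)
p = min(d)
M_p = Um[:, :p] @ np.diag(Sm[:p]) @ Vmt[:p, :]
R_star = M_p @ np.diag(1.0 / S) @ U.T
# The two printed values should roughly agree (up to optimization error).
print(0.5 * np.linalg.norm(R_star @ X - Y) ** 2)
```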

Theorem 2.3 generalizes one of the main results in (Kawaguchi, 2016) with fewer assumptions. Following theoretical work based on random matrix theory (Dauphin et al., 2014; Choromanska et al., 2015), the recent work (Kawaguchi, 2016) showed that, under some strong assumptions, all of the local minima are global minima for a class of nonlinear deep networks. Furthermore, the recent work (Kawaguchi, 2016) proved the following properties for a class of general deep linear networks with arbitrary depth and width: 1) the objective function is non-convex and non-concave; 2) all of the local minima are global minima; 3) every other critical point is a saddle point; and 4) there is no saddle point whose Hessian has no negative eigenvalue for shallow networks with one hidden layer, whereas such saddle points exist for deeper networks. Theorem 2.3 generalizes the second statement with fewer assumptions; the previous papers (Baldi, 1989; Kawaguchi, 2016) assume that the data matrix has distinct eigenvalues, whereas we do not.

3 Proof

In this section, we provide the proofs of Theorems 2.1, 2.2, and 2.3.

3.1 Proof of Theorem 2.1

To prove Theorem 2.1, we need some fundamental facts from linear algebra. The next two lemmas recall basic facts from perturbation theory for the singular value decomposition (SVD).

Let $A$ and $\tilde{A} = A + E$ be two $m \times n$ ($m \ge n$) matrices with SVDs

$$A = \begin{bmatrix} U_1 & U_2 & U_3 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}^T, \qquad \tilde{A} = \begin{bmatrix} \tilde{U}_1 & \tilde{U}_2 & \tilde{U}_3 \end{bmatrix} \begin{bmatrix} \tilde{\Sigma}_1 & 0 \\ 0 & \tilde{\Sigma}_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{V}_1 & \tilde{V}_2 \end{bmatrix}^T,$$

where $[U_1 \ U_2 \ U_3]$, $[V_1 \ V_2]$, $[\tilde{U}_1 \ \tilde{U}_2 \ \tilde{U}_3]$, and $[\tilde{V}_1 \ \tilde{V}_2]$ are orthogonal matrices.

Lemma 3.1.

(Continuity of singular values) The singular values of a matrix are continuous functions of the entries of the matrix.
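Quantitatively, Weyl's inequality for singular values gives $|\sigma_i(A + E) - \sigma_i(A)| \le \|E\|_2$ for every $i$, which implies Lemma 3.1. A small numerical check (random matrices of our own choosing, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(7, 5))
E = 1e-6 * rng.normal(size=(7, 5))      # small perturbation

s_A = np.linalg.svd(A, compute_uv=False)
s_AE = np.linalg.svd(A + E, compute_uv=False)

# Weyl's inequality: each singular value moves by at most the spectral norm of E.
print(np.max(np.abs(s_AE - s_A)) <= np.linalg.norm(E, 2))   # True
```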

Lemma 3.2.

((Wedin, 1972); continuity of singular subspaces) If there exist $\alpha > 0$ and $\delta > 0$ such that

$$\sigma_{\min}(\tilde{\Sigma}_1) \ge \alpha + \delta \quad \text{and} \quad \sigma_{\max}(\Sigma_2) \le \alpha,$$

then

$$\max\{\|\sin\Phi\|_2,\ \|\sin\Theta\|_2\} \le \frac{\|E\|_2}{\delta},$$

where $\Phi$ and $\Theta$ denote the matrices of canonical angles between the column spaces of $U_1$ and $\tilde{U}_1$ and between the column spaces of $V_1$ and $\tilde{V}_1$, respectively.

For a fixed matrix $A$, we say that a matrix $\tilde{A}$ is a perturbation of $A$ if $\tilde{A} - A$ is infinitesimal, which means that the difference between $\tilde{A}$ and $A$ is much smaller than any non-zero entry of $A$.

Lemma 3.2 implies that any SVD of a perturbed matrix is a perturbation of some SVD of the original matrix, under a full-rank condition. More formally:

Lemma 3.3.

Let $A$ be a full-rank matrix with singular value decomposition $A = U \Sigma V^T$, and let $\tilde{A}$ be a perturbation of $A$. Then, there exists an SVD of $\tilde{A}$, $\tilde{A} = \tilde{U} \tilde{\Sigma} \tilde{V}^T$, such that $\tilde{U}$ is a perturbation of $U$, $\tilde{\Sigma}$ is a perturbation of $\Sigma$, and $\tilde{V}$ is a perturbation of $V$. (Note that the SVD of a matrix may not be unique, due to rotations of the singular subspace corresponding to a repeated singular value.)

Proof: Under a small perturbation of the matrix , Lemma 3.1 shows that the singular values do not change much. Thus, if is small enough, is also small for all . Recall that all singular values of are positive. By letting contain only the singular value (which may have multiplicity greater than one, in which case and are the singular spaces corresponding to the singular value ), we have in Lemma 3.2; thus Lemma 3.2 implies that the singular space of the perturbed matrix corresponding to the singular value in the initial matrix does not change much. The statement of the lemma follows by combining this result for the different singular values together (i.e., consider each index for a different in the above argument). ∎
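The content of Lemma 3.3 can be observed numerically: for a generic full-rank matrix, a small perturbation moves each singular subspace only slightly, which we can measure through the corresponding orthogonal projectors. The sketch below uses a leading subspace of dimension $k = 2$; the dimensions and the choice of $k$ are ours and only illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 4))             # full rank with probability 1
E = 1e-8 * rng.normal(size=(6, 4))

U, S, Vt = np.linalg.svd(A)
Ue, Se, Vte = np.linalg.svd(A + E)

k = 2                                    # leading singular subspace of dimension k
P  = U[:, :k] @ U[:, :k].T               # projector onto span of top-k left singular vectors
Pe = Ue[:, :k] @ Ue[:, :k].T             # same for the perturbed matrix

# The projectors (hence the subspaces) are close, even though individual singular
# vectors may flip sign or rotate within the subspace of a repeated singular value.
print(np.linalg.norm(P - Pe, 2))         # tiny, on the order of ||E|| / singular gap
```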

We say that $(W_1, \dots, W_H)$ satisfies the rank condition if $\operatorname{rank}(W_H W_{H-1} \cdots W_1) = d_p$. Any perturbation of the product of the matrices is then the product of perturbed matrices, as long as the original tuple satisfies the rank condition. More formally:

Theorem 3.1.

Let $\bar{R} = W_H W_{H-1} \cdots W_1$ with $\operatorname{rank}(\bar{R}) = d_p$. Then, for any $\tilde{R}$ such that $\tilde{R}$ is a perturbation of $\bar{R}$ and $\operatorname{rank}(\tilde{R}) \le d_p$, there exist $(\tilde{W}_1, \dots, \tilde{W}_H)$ such that $\tilde{W}_i$ is a perturbation of $W_i$ for all $i$ and $\tilde{W}_H \tilde{W}_{H-1} \cdots \tilde{W}_1 = \tilde{R}$.

We prove the theorem by induction on $H$. When $H = 2$, we can show directly that a perturbation of the product of two matrices is the product of one matrix and a perturbation of the other. When $H > 2$, we regard the product as a product of $H - 1$ matrices by merging two specific adjacent factors into one; by the inductive hypothesis, a perturbation of the product is the product of a perturbation of the merged matrix and perturbations of the remaining matrices. A perturbation of the merged matrix is, in turn, the product of perturbations of its two factors, which proves the statement for $H$.

Proof: The case with $H = 1$ holds by setting $\tilde{W}_1 = \tilde{R}$. We prove the lemma for $H \ge 2$ by induction.

We first consider the base case where $H = 2$ with $\operatorname{rank}(W_2 W_1) = d_p$.

Let be the SVD of . It follows from Lemma 3.3 that there exists an SVD of , , such that is a perturbation of , is a perturbation of , and is a perturbation of . Because , with a small perturbation, the positive singular values remain strictly positive, whereby . Together with the assumption , we have . Let and . Note that . Hence, is a diagonal matrix. Recall that is a perturbation of ; thus there is an , which is a perturbation of (each row of is a scaling of the corresponding row of ), such that . Let and . Then, is a perturbation of , is a perturbation of , and , which proves the case when .

For the inductive step, given that the lemma holds for the case with , let us consider the case when with . Let be an index set defined as if , if or . We denote the -th element of a set by . Then, exists as . Note that can be written as a product of matrices with (for example, ). Thus, from the inductive hypothesis, for any , such that is a perturbation of and , there exists a set of desired matrices and for , such that is a perturbation of for all , is a perturbation of , and the product is equal to . Meanwhile, because is either a -by- matrix or a -by- matrix, we have and , and it follows that . Thus, by setting and (note that in is equal to in ), we can apply the proof for the case of to conclude: there exists , such that is a perturbation of for all , and . Combined with the above statement from the inductive hypothesis, this implies the lemma with , whereby we finish the proof by induction. ∎

The next two theorems show that, for any local minimum of problem (1), there is another local minimum of problem (1) whose function value is the same as the original and which satisfies the rank condition.

Theorem 3.2.

Let be a local minimum of problem (1) and . If is not of full rank, then there exists a , such that is of full rank, is a perturbation of , is a local minimum of problem (1), and .

The idea of the proof is that if we change only one weight matrix and keep all of the others fixed, the problem becomes a convex least squares problem. We are then able to perturb that weight matrix so as to maintain the objective value while making the perturbed matrix full rank.
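The first part of this idea can be made concrete as follows. With all weights other than a single $W_i$ frozen, the objective takes the form $\frac{1}{2}\|A W_i B - Y\|_F^2$, where $A$ collects the layers above $W_i$ and $B$ collects the layers below $W_i$ multiplied by $X$; one minimizer of this convex least squares problem is $W_i = A^+ Y B^+$. The code below is a sketch with generic random data and our own variable names, verifying the normal equation numerically.

```python
import numpy as np

rng = np.random.default_rng(5)
# Freeze everything except one weight: objective  0.5 * ||A @ Wi @ B - Y||_F^2,
# where A collects the layers above Wi and B collects the layers below Wi times X.
A = rng.normal(size=(5, 3))              # stands in for W_H ... W_{i+1}
B = rng.normal(size=(4, 20))             # stands in for W_{i-1} ... W_1 X
Y = rng.normal(size=(5, 20))

# Convex least squares in Wi; one minimizer is given by pseudo-inverses.
Wi = np.linalg.pinv(A) @ Y @ np.linalg.pinv(B)

# Check the normal equation  A^T (A Wi B - Y) B^T = 0  (gradient of the objective).
residual = A @ Wi @ B - Y
print(np.linalg.norm(A.T @ residual @ B.T))    # ~ 0 up to floating point error
```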

Proof of Theorem 3.2: For notational convenience, let and , and let . Because is a local minimum of , is a local minimum of . Let and be the SVDs of and , respectively, where is a diagonal matrix whose first terms are strictly positive, . Minimizing over is a least squares problem, and the normal equation is

(3)

hence

where denotes the Moore–Penrose pseudo-inverse and is a matrix of suitable dimensions whose entries in the top-left rectangular block are .

Since is of full rank,

Thus, we can choose a proper (which contains 1s at appropriate positions, with all other entries being 0s) such that is of full rank, whereby is of full rank. Therefore, there is a full-rank that satisfies the normal equation (3).

Let . Then, also satisfies the normal equation, and , for any .

Note that is a local minimum of . Thus, there exists a , such that for any satisfying , we have . It follows from being full rank that there exists a small enough , such that is full rank and is arbitrarily small (in particular, ), because the non-full-rank matrices are isolated on the line of with parameter , as can be seen by considering the determinant of or as a polynomial of . Therefore, for any , such that , we have

whereby

This shows that is also a local minimum of problem (1) for some small enough . ∎
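The second part of the argument, keeping the objective value fixed while gaining rank, can also be illustrated numerically. Minimizers of the least squares problem above form an affine set: any matrix $D$ with $A D B = 0$ can be added to a minimizer without changing the objective, and a generic such $D$ raises the rank of a rank-deficient minimizer. The construction below (a rank-deficient $A$ and a $D$ supported on the null space of $A$) is our illustrative choice, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(6)
# A is deliberately rank-deficient (rank 2) so that the least squares problem in Wi
# has a non-trivial solution set: minimizers form an affine set  Wi + {D : A D B = 0}.
A = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 3))   # rank 2, shape (5, 3)
B = rng.normal(size=(4, 20))                            # full row rank
Y = rng.normal(size=(5, 20))

Wi = np.linalg.pinv(A) @ Y @ np.linalg.pinv(B)          # a minimizer; rank(Wi) <= 2 < 3

def loss(W):
    return 0.5 * np.linalg.norm(A @ W @ B - Y) ** 2

# Any D with A D = 0 leaves the objective unchanged; pick D in the null space of A.
D = (np.eye(3) - np.linalg.pinv(A) @ A) @ rng.normal(size=(3, 4))
Wi_full = Wi + 1e-3 * D                                 # small perturbation along D

print(np.linalg.matrix_rank(Wi), np.linalg.matrix_rank(Wi_full))   # 2 3
print(np.isclose(loss(Wi), loss(Wi_full)))                          # True
```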

Lemma 3.4.

Let $C = AB$ for two given matrices $A$ and $B$. If $A$ has full row rank, then any perturbation of $C$ is the product of $A$ and a perturbation of $B$.

Proof: Let $A = U \Sigma V^T$ be the SVD of $A$; then $A^+ = V \Sigma^{-1} U^T$ and $A A^+ = I$, as $A$ has full row rank. Let $\tilde{C}$ be a perturbation of $C$ and let $\tilde{B} = B + A^+ (\tilde{C} - C)$. Then, $\tilde{B}$ is a perturbation of $B$, and $A \tilde{B} = AB + A A^+ (\tilde{C} - C) = \tilde{C}$, by noticing $A A^+ = I$. ∎
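The lemma can be checked numerically in the full-row-rank case: the update $\tilde{B} = B + A^+(\tilde{C} - C)$ is one explicit way to absorb a perturbation of the product into the second factor. The dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 5))           # full row rank, so A @ pinv(A) = I
B = rng.normal(size=(5, 4))
C = A @ B

C_tilde = C + 1e-6 * rng.normal(size=C.shape)        # perturbation of the product
B_tilde = B + np.linalg.pinv(A) @ (C_tilde - C)      # absorb it into the B factor

print(np.allclose(A @ B_tilde, C_tilde))             # True: A @ B_tilde equals C_tilde
print(np.linalg.norm(B_tilde - B))                   # tiny: B_tilde is a perturbation of B
```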

Theorem 3.3.

If is a local minimum with being full rank, then, there exists , such that is a perturbation of for all , is a local minimum, , and .

In the proof of Theorem 3.3, we use Theorem 3.2 and Lemma 3.4 to show that we can perturb the weight matrices in sequence while ensuring that the perturbed weights remain a local minimum and that the relevant partial product has full rank. A similar strategy handles the remaining partial product, which then proves the whole theorem.

Proof of Theorem 3.3: If , consider

Then, it follows from Lemma 3.4 and the fact that is a local minimum of that is a local minimum of , where . It follows from Theorem 3.2 that there exists , such that is close enough to , is a local minimum of , , and . Note that is a perturbation of , whereby, from Lemma 3.4, there exist , , which are perturbations of and , respectively, such that . Thus, is a local minimum of , and .

Continuing in this way, we can find , such that is a local minimum of , is a perturbation of for , and .

Similarly, we can find , such that is a local minimum of , is a perturbation of for , and .

Noticing that

and , we have , which completes the proof. ∎

Proof of Theorem 2.1: It follows from Theorems 3.2 and 3.3 that there exists another local minimum , such that and . Recall that . It then follows from Theorem 3.1 that for any , such that is a perturbation of and , we have , where is a perturbation of . Therefore, by noticing that is a local minimum of (1), we have

which shows that is a local minimum of (2). ∎

In the proof of Theorem 2.2, we first show that it suffices to consider the case where the input data matrix is an identity matrix and the target data matrix is a diagonal matrix, by noticing that the Frobenius norm is invariant under rotations. We then show that a local minimum must be a block-diagonal, symmetric matrix in which each block is a projection matrix onto a space corresponding to the same eigenvalue of the diagonal matrix. Finally, we show that those projection matrices must project onto the eigenspaces corresponding to the largest possible eigenvalues, which shows that all local minima share the same function value.

3.2 Proof of Theorem 2.2

Let be the SVD of , where is a diagonal matrix with full row rank. Then,

where is a constant in and is a submatrix of , which contains the to row and to column of . If is a local minimum of (2), then is a local minimum of

(4)

where , and the difference of objective function values of (2) and (4) is a constant. Let be the SVD of , then

and if is a local minimum of , then is a local minimum of

(5)

and the objective function values of (4) and (5) are the same at corresponding points. Let have distinct positive diagonal terms with multiplicities . Let be a local minimum of (5), and

be the SVD of , where are positive singular values. Let and be the projection matrices onto the spaces spanned by and , respectively. Note that ; thus, is also a local minimum of

(6)

which is a convex problem, and it can be shown by the first order optimality condition that the only local minimum of (6) is . Similarly, we have . Then, is a diagonal matrix, with distinct non-zero diagonal terms with multiplicities . Therefore,

Note that the left-hand side is a symmetric matrix; thus, is also a symmetric matrix. Meanwhile, is a symmetric matrix, whereby is a -block diagonal matrix with each block corresponding to the same diagonal terms of . Therefore, is also a -block diagonal matrix.

Let

where is a matrix, then implies . Thus, is a symmetric matrix and is a projection matrix. Let , then, and , whereby,

Let be the largest number such that . Then, it is easy to verify that the global minima of (6) satisfy for , and for , which gives all of the global minima.

Now, let us show that all local minima must be global minima. Since a local minimum is a block-diagonal matrix, we can assume without loss of generality that both and are square matrices, because the all-zero rows and columns in and do not change anything. Thus, it follows from being symmetric that is a symmetric matrix. Recall that is a projection matrix; thus the eigenvalues of are either 0 or 1, whereby

where is the -th normalized orthogonal eigenvector of corresponding to eigenvalue .

It is easy to see that, at a local minimum, we have ; otherwise, there is a descent direction obtained by adding a rank-one matrix to corresponding to a positive eigenvalue. If there exists , such that , , and , then there exists , such that for . Let

Then, , and

It is easy to check that is monotonically decreasing in , which gives a descent direction at , contradicting the assumption that is a local minimum. Therefore, there is no such and , which shows that is a global minimum. ∎
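The descent-direction argument at the end of this proof can be visualized in a two-dimensional toy instance (our own construction, for illustration only): projecting onto the eigenvector of the smaller eigenvalue is a critical point of the reduced problem, but rotating the projection toward the larger eigenvalue strictly decreases the objective, so such a point cannot be a local minimum.

```python
import numpy as np

# Reduced toy problem: minimize ||L - P @ L||_F^2 over rank-1 orthogonal projections P,
# where L = diag(2, 1).  Projecting onto e2 (the smaller eigenvalue) is a critical point,
# but rotating the projection direction toward e1 monotonically decreases the loss.
L = np.diag([2.0, 1.0])

def loss(theta):
    v = np.array([np.sin(theta), np.cos(theta)])   # theta = 0 -> projection onto e2
    P = np.outer(v, v)
    return np.linalg.norm(L - P @ L) ** 2

thetas = np.linspace(0.0, np.pi / 2, 6)
print([round(loss(t), 4) for t in thetas])
# The values decrease monotonically from 4.0 (projection onto e2) to 1.0 (onto e1),
# so the e2 projection admits a descent direction and is not a local minimum.
```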

3.3 Proof of Theorem 2.3

The statement follows directly from Theorems 2.1 and 2.2.

4 Conclusion

We have proven that, even though depth creates a non-convex loss surface, it does not create new bad local minima. Based on this new insight, we have given a new, simple proof of the fact that all of the local minima of feedforward deep linear neural networks are global minima.

The benefits of these new results are not limited to simplifying the previous proof. For example, our results apply to problems beyond the square loss. Let us consider a constrained shallow problem (S) and its deep parameterization counterpart (D)