1 Introduction
Despite its huge empirical success, deep learning still preserves many features of alchemy [Rahimi, 2017]: progress in this field is obtained mainly by trial and error, and our intuition about how neural networks actually work often misleads us.
For alchemy to become ordinary chemistry, it needs a theoretical ground. For now, a solid theoretical ground for deep learning is lacking; fortunately, however, many pieces of theory have appeared from different directions over the past several years. The purpose of this essay is not to provide a comprehensive review, but to draw connections between some works on this topic. The list of works mentioned here is by no means representative or, all the more so, complete.
Since the theory of deep learning is lacking, some features of neural network learning seem "mysterious". We emphasize two mysteries of deep learning:

Generalization mystery. It is very common for contemporary neural networks to have many more parameters than there are training examples at hand. Such an abundance of parameters results in the existence of "bad" local minima in terms of test error [Zhang et al., 2016]. Classic generalization bounds based on the VC-dimension are pessimistic, and consequently vacuous, in this regime. However, stochastic gradient descent seems to avoid these bad local minima; hence new, neural-net-specific bounds have to be developed.

Optimization mystery. It is indeed surprising that (stochastic) gradient descent does not get trapped in "bad" local minima in terms of test error. However, it is also quite surprising that gradient descent (a local optimization method!) does not get trapped in bad local minima in terms of train error. Since the number of parameters in modern neural networks is usually much greater than the number of training samples, it is plausible that a parameter configuration exists for which the network fits the data perfectly; however, it is far from obvious why gradient descent finds this configuration.
In this essay we focus on the optimization mystery.
To illustrate the above-mentioned phenomena, we trained a modern convolutional architecture, Conv-Small from [Miyato et al., 2017], on a subset of 4000 examples of the CIFAR-10 dataset. We also multiplied the number of filters in each convolutional layer by a width factor, in order to illustrate how the number of parameters affects optimization and generalization.
The results are shown in Figure 1. One can see that the network indeed generalizes well, despite severe overparameterization. Moreover, the test accuracy does not decrease as the network grows larger. On the optimization side, we see that SGD indeed finds a global minimum in terms of train error (note that the cross-entropy loss has no global minima in any finite ball in weight space). Moreover, as the network grows larger, optimization becomes faster (in terms of the number of iterations).
2 Optimization mystery
2.1 Loss landscape
Let us first deal with one of these observations: randomly initialized gradient descent finds a global minimum. It is known that randomly initialized gradient descent converges to a local minimum almost surely (Theorem 4.8 of [Lee et al., 2016]). Hence it is tempting to propose the following hypothesis:
All local minima of the loss landscape of a neural net are global.  (1)
We begin with the simplest optimization problem, linear regression:

$$ L(W) = \tfrac{1}{2}\, \mathbb{E}_{(x,y)}\, \|Wx - y\|^2 \;\to\; \min_W. \qquad (2) $$
One can easily show that all local minima of this problem are indeed global, and in the case of a non-degenerate data covariance matrix $\mathbb{E}\,[xx^\top]$, this minimum is unique.
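To make this concrete, here is a minimal numpy sketch (our own illustration on a synthetic finite sample, not taken from any of the cited papers) verifying that the unique critical point of a linear regression is its global minimum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sample version of problem (2): L(w) = (1/2n) * ||X w - y||^2.
n, d = 50, 5
X = rng.normal(size=(n, d))            # rows are data points
w_true = rng.normal(size=d)
y = X @ w_true                         # noiseless labels for simplicity

# The unique critical point solves the normal equations X^T X w = X^T y.
w_star = np.linalg.solve(X.T @ X, X.T @ y)

# The gradient vanishes there, and the Hessian X^T X / n is positive definite
# (the empirical covariance is non-degenerate), so this is the global minimum.
grad = X.T @ (X @ w_star - y) / n
assert np.allclose(grad, 0.0, atol=1e-8)
assert np.linalg.eigvalsh(X.T @ X / n).min() > 0
```

With noiseless labels, the recovered `w_star` coincides with `w_true`, so the global minimum has zero loss.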
There are two possible ways to depart from this trivial setting: adding depth, and adding a nonlinearity.
2.1.1 Deep linear nets
Let us consider a deep linear net first. The corresponding regression problem is given as follows:

$$ L(W_1, \dots, W_H) = \tfrac{1}{2}\, \mathbb{E}_{(x,y)}\, \|W_H \cdots W_1 x - y\|^2 \;\to\; \min_{W_1, \dots, W_H}. \qquad (3) $$
This setting looks odd, since in terms of the class of realizable functions a deep linear net is equivalent to a shallow one with a rank restriction. Indeed, let $A = W_H \cdots W_1$, and consider the following shallow problem:

$$ L(A) = \tfrac{1}{2}\, \mathbb{E}_{(x,y)}\, \|Ax - y\|^2 \;\to\; \min_{\operatorname{rank} A \le n_{k^*}}, \qquad (4) $$

where $k^*$ is the index of the "bottleneck" (narrowest) layer. Then we have a many-to-one correspondence $(W_1, \dots, W_H) \mapsto A = W_H \cdots W_1$, and all minima of the deep problem have the same value as those of the shallow one: $\min L_{\text{deep}} = \min L_{\text{shallow}}$.
Note, however, that both problems are (in general) non-convex: the shallow problem is non-convex because its optimization domain is non-convex (for a nontrivial rank constraint), and the deep problem is non-convex due to weight-space symmetries. Indeed, if we multiply one of the weight matrices by a nonzero factor and simultaneously divide another weight matrix by the same factor, the realized function will not change.
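This rescaling symmetry is easy to verify numerically; the following sketch (our own toy example, with arbitrary dimensions) checks that the realized function, and hence the loss, is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

# A two-layer linear net x -> W2 @ W1 @ x. Rescaling one factor and inversely
# rescaling the other leaves the realized function (hence the loss) unchanged,
# so the deep parameterization carries continuous symmetries and is non-convex.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
c = 7.3                                # any nonzero factor

x = rng.normal(size=3)
assert np.allclose(W2 @ W1 @ x, (W2 / c) @ (c * W1) @ x)
```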
The hypothesis (1) proposed above is false for a general non-convex problem; however, it was proven in [Lu and Kawaguchi, 2017] (see Theorem 2.2 there) that it is true for the rank-constrained shallow problem. (More precisely, Theorem 2.2 of [Lu and Kawaguchi, 2017] assumes a finite dataset; it is not obvious how to generalize it to a general data distribution.) Moreover, [Lu and Kawaguchi, 2017] prove that the same is true for the deep problem too.
Consider a finite dataset $\{(x_i, y_i)\}_{i=1}^n$ of size $n$, and let $X$ be the matrix whose columns are the $x_i$. Assume $X$ to be of full rank (this corresponds to non-degeneracy of the corresponding covariance matrix). At a high level, the proof is based on three theorems:

Theorem 2.1: If $(W_1, \dots, W_H)$ is a local minimum of the deep problem (3), then $A = W_H \cdots W_1$ is a local minimum of the shallow problem (4).

Theorem 2.2: Every local minimum of the shallow problem (4) is global.

Theorem 2.3: Every local minimum of the deep problem (3) is global.
Note that Theorem 2.3 is a simple corollary of the two previous theorems. Indeed, let $(W_1, \dots, W_H)$ be a local minimum of the deep problem; then, by Theorem 2.1, $A = W_H \cdots W_1$ is a local minimum of the shallow problem. Since all local minima of the shallow problem have the same value (Theorem 2.2), all local minima of the deep problem have the same value too. Hence they are all global.
The proof of Theorem 2.1 starts with the following observation. Let $(W_1, \dots, W_H)$ be a local minimum of the deep problem, and let $A = W_H \cdots W_1$. In order to prove that $A$ is a local minimum of the shallow problem, we have to show that $L(A + \Delta A) \ge L(A)$ for any small enough perturbation $\Delta A$ respecting the rank constraint. Our aim, then, is to prove that for any such $\Delta A$ there exist perturbations $\Delta W_1, \dots, \Delta W_H$ with $(W_H + \Delta W_H) \cdots (W_1 + \Delta W_1) = A + \Delta A$. Once this is proven, we can conclude that

$$ L(A + \Delta A) = L\big((W_H + \Delta W_H) \cdots (W_1 + \Delta W_1)\big) \ge L(W_H \cdots W_1) = L(A). $$
The perturbation $(\Delta W_1, \dots, \Delta W_H)$ corresponding to a perturbation $\Delta A$ is explicitly constructed for the case of two layers and a full-rank product $A$. The generalization to arbitrary depth is given by induction (Theorem 3.1). If $A$ is rank-deficient, it is proven in Theorem 3.2 and Theorem 3.3 that one can perturb the weights in such a way that the product becomes full rank, the point remains a local minimum, and the loss does not change. In other words, we can always perturb a local minimum to make it full rank while retaining local minimality.
2.1.2 Nonlinear nets
So, our hypothesis is true for linear nets. However, a linear net is not the practical case, and we have to move on to nonlinear nets. The loss of the simplest nonlinear net is given as follows:

$$ L(W) = \tfrac{1}{2}\, \mathbb{E}_{(x,y)}\, \|\sigma(Wx) - y\|^2 \;\to\; \min_W, \qquad (5) $$

where $\sigma$ is an elementwise nonlinearity. Unfortunately, even this simple one-layer nonlinear regression has a quite hard-to-analyze loss landscape, even for a Gaussian data distribution and labels generated by a teacher net of the same architecture (see, e.g., [Tian, 2017]). Hence one can hardly hope to obtain a simple, analyzable loss landscape for a nonlinear net with one hidden layer:

$$ L(W_1, W_2) = \tfrac{1}{2}\, \mathbb{E}_{(x,y)}\, \|W_2\, \sigma(W_1 x) - y\|^2 \;\to\; \min_{W_1, W_2}. \qquad (6) $$
Wide shallow sigmoid nets.
Surprisingly, our hypothesis becomes true for a nonlinear net as its hidden layer becomes sufficiently wide. More precisely, consider the following variant of the previous problem:
$$ L(W_1, W_2) = \tfrac{1}{2}\, \|W_2\, \sigma(W_1 X) - Y\|_F^2 \;\to\; \min_{W_1, W_2}, \qquad (7) $$

where the index $F$ denotes the Frobenius norm, $(X, Y)$ is a finite dataset of size $n$ whose data points form the columns of $X$, and $\sigma$ is the elementwise sigmoid. Note that here $n_1$ denotes the width of the hidden layer. [Yu and Chen, 1995] proved that as long as $n_1 \ge n$ and the columns of $X$ (the data points) are distinct, all local minima are global. (Actually, [Yu and Chen, 1995] proved their theorem for the case $n_1 = n$, but it is not hard to generalize it to $n_1 \ge n$.)
The proof is based on the following observation. Let $(W_1^*, W_2^*)$ be a local minimum of (7). Let us fix $W_1 = W_1^*$ and consider the following surrogate loss:

$$ \tilde{L}(W_2) = \tfrac{1}{2}\, \|W_2\, \sigma(W_1^* X) - Y\|_F^2. $$

Note that $\tilde{L}$ corresponds to the loss of a linear regression (2) with the modified dataset $(\sigma(W_1^* X), Y)$. Since this problem is convex, all of its critical points are minima of the same value. Obviously, if $\sigma(W_1^* X)$ has rank $n$ (this necessitates the hidden layer to be at least as wide as the dataset, $n_1 \ge n$), we can perfectly fit the modified dataset, and all local minima of $\tilde{L}$ have zero value. In this case $L(W_1^*, W_2^*) = \tilde{L}(W_2^*) = 0$, and $(W_1^*, W_2^*)$ is indeed a global minimum of (7).
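The following numpy sketch (our own toy illustration under the stated assumptions; the dimensions are arbitrary choices) demonstrates this key step: for a generic fixed first layer, the sigmoid features attain full rank $n$ even when the pre-activations do not, and the convex surrogate problem in the output weights then fits the data exactly:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, n, n1 = 4, 8, 24                    # hidden width n1 >= dataset size n
X = rng.normal(size=(d, n))            # columns are (distinct, w.p. 1) data points
Y = rng.normal(size=(1, n))            # arbitrary labels

# Fix W1 at random; for generic W1 the sigmoid features have full rank n,
# even though W1 @ X itself has rank at most d < n.
W1 = rng.normal(size=(n1, d))
Phi = sigmoid(W1 @ X)                  # n1 x n feature matrix
assert np.linalg.matrix_rank(Phi) == n

# The surrogate problem in W2 is an ordinary linear regression on Phi:
# it is convex, and with rank(Phi) = n it fits the modified dataset exactly.
W2 = Y @ np.linalg.pinv(Phi)
assert np.allclose(W2 @ Phi, Y, atol=1e-6)
```

Note that the full-rank step relies on the nonlinearity: the pre-activation matrix `W1 @ X` has rank at most $d = 4$, while the sigmoid features reach rank $n = 8$.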
Unfortunately, the condition $n_1 \ge n$ is not sufficient for $\sigma(W_1^* X)$ to always have rank $n$. However, it is proven in the paper that as long as all columns of $X$ are distinct, the set of $W_1$ for which $\operatorname{rank} \sigma(W_1 X) < n$ has Lebesgue measure zero. ([Yu and Chen, 1995] gave an incorrect proof of this statement; a correct proof of a more general statement can be found in Lemma 4.4 of [Nguyen and Hein, 2017].) The proof essentially relies on the fact that the sigmoid is an analytic function; the statement is false for ReLU.
The proof of the main theorem proceeds as follows. We have already proven the theorem for the case $\operatorname{rank} \sigma(W_1^* X) = n$. If instead $\operatorname{rank} \sigma(W_1^* X) < n$, then, since the set of such $W_1$ has measure zero, we can find an arbitrarily small perturbation $\Delta W_1$ such that $\operatorname{rank} \sigma((W_1^* + \Delta W_1) X) = n$. Suppose $L(W_1^*, W_2^*) > 0$. Then, for a small enough perturbation, $L(W_1^* + \Delta W_1, W_2^*) > 0$ as well. Launch a gradient flow on $W_2$ from this perturbed point. It will find a point with zero loss, since the surrogate problem is convex and the full-rank feature matrix admits a perfect fit. Hence the gradient flow leaves some fixed vicinity of $(W_1^*, W_2^*)$ (a vicinity of positive loss), no matter how small the perturbation is. This means that the point is unstable in the Lyapunov sense, so it cannot be a local minimum. Hence all local minima of (7) have zero value, regardless of the rank of $\sigma(W_1^* X)$; hence all of them are global.
Deep and wide sigmoid nets.
We see that our hypothesis (1) is true for wide-enough shallow nets. What about deep nets? Consider a problem similar to (7):
$$ L(W_1, \dots, W_H) = \tfrac{1}{2}\, \big\| W_H\, \sigma\big(W_{H-1} \cdots \sigma(W_1 X)\big) - Y \big\|_F^2 \;\to\; \min_{W_1, \dots, W_H}, \qquad (8) $$

where, as before, $(X, Y)$ denotes a finite dataset of size $n$ with distinct data points as columns of $X$, $n_l$ denotes the width of the $l$-th layer, and $\sigma$ is an elementwise sigmoid nonlinearity. In the previous theorem the main assumption was that the hidden layer was wide enough. Here we assume a similar restriction: some hidden layer $k$ is wide enough, $n_k \ge n$.

These two assumptions are enough to prove that the set of weights $(W_1, \dots, W_k)$ for which the $k$-th layer's output $F_k = \sigma(W_k \cdots \sigma(W_1 X))$ has rank less than $n$ has Lebesgue measure zero (Lemma 4.4 of [Nguyen and Hein, 2017]).
Let us try to directly apply the logic of the previous theorem. For a shallow net, $\operatorname{rank} \sigma(W_1^* X) = n$ implied zero loss at a local minimum. This was due to the fact that the problem in the last layer was convex (it was a linear regression). Unfortunately, given $\operatorname{rank} F_k = n$, the problem in the subsequent layers is not generally convex (for $k < H - 1$ it is not a linear regression any more). Hence it is no longer true that all local minima have zero loss, even given $\operatorname{rank} F_k = n$.
However, if we additionally assume that our local minimum is non-degenerate in the following sense:

$$ \operatorname{rank} W_l = n_l \quad \text{for all } l = k+2, \dots, H, $$

then $\operatorname{rank} F_k = n$ at a local minimum does imply that the loss at this local minimum is zero (Lemma 3.5 of [Nguyen and Hein, 2017]). Note that for a shallow net the range $l = k+2, \dots, H$ is empty, so this assumption is trivially true. Note also that this assumption implies $n_{k+1} \ge n_{k+2} \ge \dots \ge n_H$. In other words, the width of the net should not increase after the $(k{+}1)$-th layer.
The condition $\operatorname{rank} F_k = n$ does not generally hold, even given $n_k \ge n$. Similarly to the theorem concerning shallow nets, we have to deal with the case $\operatorname{rank} F_k < n$ at our local minimum. Previously, we used the convexity of the surrogate problem, which implied that continuous-time gradient descent finds its global minimum. Now the surrogate problem in the layers above the $k$-th is not generally convex, and we no longer have a guarantee that continuous-time gradient descent finds its global minimum.
In their Theorem 4.6, [Nguyen and Hein, 2017] use the following technique. They assume the Hessian to be non-degenerate at the local minimum under consideration. This allows them to use the implicit function theorem to deduce that if the point is a critical point, then for any small perturbation of the first $k$ weight matrices one can find a perturbation of the remaining matrices such that the perturbed point is also a critical point. Take a perturbation such that $F_k$ has rank $n$ (such a perturbation exists by Lemma 4.4, mentioned previously). The perturbed matrices $W_l$ for $l \ge k+2$ retain ranks $n_l$ as long as the perturbations are small enough. Hence at the perturbed point we find ourselves in the simple, previously discussed case, and conclude that the loss there is zero. However, since the loss function is continuous with respect to the weights, we further conclude that the loss at the original point is zero as well, and the considered minimum is global.

To sum up, the main theorem (Corollary 3.9 of [Nguyen and Hein, 2017]) states the following. Let $(W_1, \dots, W_H)$ be a local minimum of problem (8). Assume the following conditions hold:

Columns of $X$ (the data points) are distinct;

One of the hidden layers is wide enough: $n_k \ge n$;

Our minimum is non-degenerate: $\operatorname{rank} W_l = n_l$ for all $l = k+2, \dots, H$;

The loss function has a non-degenerate Hessian at our minimum (taken with respect to the weights of the layers above the $k$-th).
Then $L(W_1, \dots, W_H) = 0$, and $(W_1, \dots, W_H)$ is a global minimum of (8).
Unfortunately, as we see, this theorem does not state that all local minima of (8) are global. It states only that minima that are non-degenerate in a certain sense are global. We hypothesize that the third condition could be relaxed, but we also suspect that minima that are degenerate in terms of the Hessian can exist, and that our initial hypothesis (1) is generally false.
Remark on non-smooth nonlinearities.
The above theorem can be generalized to all commonly used smooth nonlinearities, e.g. tanh or softplus; however, it is unlikely to generalize to non-smooth ones such as ReLU or LeakyReLU. Indeed, when some units of a ReLU network are deactivated, we effectively obtain a smaller network, and we have no guarantees for nets that are not wide enough. Hence, in the worst case, the smaller network could have bad local minima. These local minima could become (degenerate) saddles in the initial network, or again (degenerate) local minima; it is not obvious whether the second alternative is possible or not. The same reasoning applies to the theorem of [Yu and Chen, 1995].
Remark on criticality with respect to the first $k$ layers.
One can notice that in the proof of the theorem of [Nguyen and Hein, 2017] we never used the fact that the local minimum is a critical point of the loss function with respect to the first $k$ weight matrices, i.e. that $\nabla_{W_1, \dots, W_k} L = 0$. This means that, given the four above-stated conditions, we only need to assume the point to be a local minimum of $L$ with respect to the remaining matrices; according to the theorem, it then becomes a global minimum of $L$ automatically. It is worth thinking about how the condition $\nabla_{W_1, \dots, W_k} L = 0$ could be used to relax some of the four above-stated assumptions. The same reasoning applies to the theorem of [Yu and Chen, 1995].
2.2 Gradient descent dynamics
As we have seen in the previous section, despite the fact that gradient descent is guaranteed to converge to a local minimum, it is not obvious whether all local minima are global, especially for ReLU networks, where the theorem of [Nguyen and Hein, 2017] does not hold. However, as we can see in Figure 1, (stochastic) gradient descent indeed converges to a global minimum on the train set.
A prominent result in this direction was obtained by [Du et al., 2019]. Consider a ReLU net with one hidden layer and a single output:
$$ f(x; W, a) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r\, \sigma(w_r^\top x), \qquad (9) $$

where $x \in \mathbb{R}^d$ is an input, $W = (w_1, \dots, w_m)$ are the weights of the hidden layer, $a = (a_1, \dots, a_m)$ are the weights of the output layer, and $\sigma(\cdot) = \max(\cdot, 0)$ denotes the elementwise ReLU. Note that the width of the hidden layer is denoted by $m$ here. We consider regression on a finite dataset $\{(x_i, y_i)\}_{i=1}^n$ of size $n$. This leads to the following loss function:

$$ L(W) = \frac{1}{2} \sum_{i=1}^{n} \big( f(x_i; W, a) - y_i \big)^2. \qquad (10) $$
Note that the corresponding optimization problem is very similar to (7). It differs in the following aspects:

ReLU instead of sigmoid,

optimization is performed with respect to $W$ only (the output weights $a$ are held fixed).
Note also the change of notation (in order to be compatible with the paper).
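The model and its loss can be sketched in a few lines of numpy (our own toy re-implementation of the setup just described, with arbitrary synthetic data; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)

n, d, m = 8, 5, 200                    # dataset size, input dim, hidden width

def f(W, a, x):
    # Two-layer ReLU net (9): f(x; W, a) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x)
    return (a * np.maximum(W @ x, 0.0)).sum() / np.sqrt(len(a))

def loss(W, a, X, y):
    # Squared loss (10): L(W) = (1/2) * sum_i (f(x_i; W, a) - y_i)^2
    preds = np.array([f(W, a, x) for x in X])
    return 0.5 * np.sum((preds - y) ** 2)

# Initialization of the paper: w_r ~ N(0, I), a_r ~ unif{-1, +1}.
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # data points on the unit sphere
y = rng.normal(size=n)

L0 = loss(W, a, X, y)
assert np.isfinite(L0) and L0 >= 0.0
```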
The result of [Du et al., 2019] transfers easily to the sigmoid, but it is unlikely that the result of [Yu and Chen, 1995] can be generalized to ReLU. Hence the first point is by no means a drawback; it is an advantage.
The last point is even more crucial. The proof of the theorem of [Yu and Chen, 1995] relied critically on optimization with respect to the output weights $W_2$ (denoted $a$ here). The proof of [Du et al., 2019] does not rely on it: the main result (which we have not stated yet) can be obtained with or without optimization with respect to $a$.
Before stating the main result of [Du et al., 2019], we need several assumptions and definitions. Assume that the weights are initialized as follows:

$$ w_r \sim \mathcal{N}(0, I), \qquad a_r \sim \operatorname{unif}\{-1, +1\}, \quad \text{independently for all } r. $$

Assume also that the data points lie on the unit sphere and the labels are bounded:

$$ \|x_i\| = 1, \qquad |y_i| \le C \quad \text{for all } i. $$
We consider the dynamics of continuous-time gradient descent:

$$ \frac{dW(t)}{dt} = -\nabla_W L(W(t)). $$

In the proof we are going to reason about the dynamics of the network's individual predictions:

$$ u_i(t) = f(x_i; W(t), a), \qquad u(t) = (u_1(t), \dots, u_n(t)). $$
We now proceed to the main result. Theorem 3.2 of [Du et al., 2019] states the following. Let $\delta \in (0, 1)$ and $m = \Omega\big(\tfrac{n^6}{\lambda_0^4 \delta^3}\big)$; then with probability at least $1 - \delta$ over the initialization we have:

$$ \|u(t) - y\|_2^2 \le e^{-\lambda_0 t}\, \|u(0) - y\|_2^2. \qquad (11) $$
Here $\lambda_0$ is a data-dependent constant, which we will define soon. Theorem 3.1 of [Du et al., 2019] states that as long as no two data points are parallel to each other, this constant is strictly positive.
Let us take a closer look at this result. It states, basically, that as long as the hidden layer is wide enough, the network's predictions on the data points converge to the ground-truth labels exponentially fast with high probability. A trivial consequence is that continuous-time gradient descent indeed converges to a global minimum with high probability.
Note that continuous-time gradient descent is assumed only for convenience. A similar result (with a proper learning rate) can be obtained for discrete-time gradient descent as well (Theorem 4.1 of [Du et al., 2019]).
Proof sketch.
The proof starts with the following reasoning. We want to know how the network predictions change with time:

$$ \frac{du_i(t)}{dt} = \sum_{r=1}^{m} \Big\langle \frac{\partial f(x_i)}{\partial w_r}, \frac{dw_r(t)}{dt} \Big\rangle = \sum_{j=1}^{n} (y_j - u_j(t))\, \frac{1}{m} \sum_{r=1}^{m} a_r^2\, x_i^\top x_j\, [w_r^\top x_i \ge 0]\, [w_r^\top x_j \ge 0], $$

where square brackets denote indicators. Note that $a_r^2 = 1$ at initialization, and $a$ does not change during the training process; hence we shall omit it from now on. If we define

$$ H_{ij}(t) = \frac{1}{m} \sum_{r=1}^{m} x_i^\top x_j\, [w_r(t)^\top x_i \ge 0]\, [w_r(t)^\top x_j \ge 0], \qquad (12) $$

then we can rewrite the previous equation as

$$ \frac{du(t)}{dt} = H(t)\,(y - u(t)). $$
This differential equation governs the evolution of the network's predictions on the train set. One can easily show that $H(t)$ is a Gram matrix, hence it is positive semidefinite. Our main goal is to show that its minimal eigenvalue remains bounded below by $\frac{\lambda_0}{2}$ throughout the whole process of optimization:

$$ \text{Our goal is to prove:} \quad \lambda_{\min}(H(t)) \ge \frac{\lambda_0}{2} \quad \text{for all } t \ge 0. \qquad (13) $$
If this holds, we easily obtain the desired result:

$$ \frac{d}{dt}\, \|u(t) - y\|_2^2 = -2\,(u(t) - y)^\top H(t)\,(u(t) - y) \le -\lambda_0\, \|u(t) - y\|_2^2. $$

Solving this differential inequality, we get

$$ \|u(t) - y\|_2^2 \le e^{-\lambda_0 t}\, \|u(0) - y\|_2^2, $$

which is exactly (11). Before giving a sketch of the proof of (13), we have to define $\lambda_0$. Let $H^\infty$ be the expectation of the Gram matrix at initialization:

$$ H^\infty_{ij} = \mathbb{E}_{w \sim \mathcal{N}(0, I)}\Big[ x_i^\top x_j\, [w^\top x_i \ge 0]\, [w^\top x_j \ge 0] \Big]. $$

We define $\lambda_0$ as its minimal eigenvalue, which is nonnegative, since $H^\infty$ is the expectation of a Gram matrix. Theorem 3.1 of [Du et al., 2019] states that it is strictly positive as long as no two data points are parallel to each other.
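This constant is easy to estimate numerically. The sketch below (our own illustration on a small synthetic dataset; dimensions and sample size are arbitrary) forms a Monte Carlo estimate of the expected Gram matrix and checks that its minimal eigenvalue is positive:

```python
import numpy as np

rng = np.random.default_rng(5)

n, d = 6, 8
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # unit-norm, pairwise non-parallel w.p. 1

def gram(W):
    # Empirical Gram matrix (12): H_ij = x_i.x_j * (1/m) * sum_r [w_r.x_i>=0][w_r.x_j>=0]
    act = (X @ W.T >= 0).astype(float)             # n x m activation indicators
    return (X @ X.T) * (act @ act.T) / W.shape[0]

# Monte Carlo estimate of the expectation over w ~ N(0, I) with a large sample.
H_inf = gram(rng.normal(size=(100_000, d)))
lam0 = np.linalg.eigvalsh(H_inf).min()
assert lam0 > 0                                    # positive for non-parallel data points
```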
The proof of (13) is divided into two parts. In the first part we prove that as long as $m$ is large enough, $H(0)$ is close to its expectation $H^\infty$ in norm. As a consequence, the spectrum of $H(0)$ does not differ much from the spectrum of $H^\infty$, and $\lambda_{\min}(H(0)) \ge \frac{3}{4}\lambda_0$. In the second part we prove that as long as $m$ is large enough, $H(t)$ does not change much in norm throughout the optimization process. As a consequence, its spectrum also does not change much, and $\lambda_{\min}(H(t)) \ge \frac{\lambda_0}{2}$.
The first part is covered by Lemma 3.1 of [Du et al., 2019]. Essentially, it uses a concentration bound (Hoeffding's inequality) to bound the entries of $H(0) - H^\infty$, and hence the deviation $\|H(0) - H^\infty\|$ itself.
The second part is a bit more involved. Lemma 3.2 of [Du et al., 2019] states that as long as the weights stay close enough to initialization, the Gram matrix does not change much; this is due to the fact that a small perturbation of the initial weights flips few of the indicators in definition (12). Lemma 3.3 states that as long as $\lambda_{\min}(H(s)) \ge \frac{\lambda_0}{2}$ on $[0, t]$, the weights stay sufficiently close to initialization. Lemmas 3.2 and 3.3 together imply that the eigenvalue bound on $[0, t]$ is equivalent to the weights staying close to initialization on $[0, t]$. Since at $t = 0$ both statements necessarily hold, Lemma 3.4 shows, by a continuity argument, that they hold for all $t \ge 0$.
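The first, concentration step can also be illustrated numerically: the deviation of the initial Gram matrix from its expectation shrinks as the width grows. In the sketch below (our own illustration; we use the Frobenius norm and a Monte Carlo proxy for the expectation, which suffices for a picture):

```python
import numpy as np

rng = np.random.default_rng(6)

n, d = 6, 8
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

def gram(W):
    # Empirical Gram matrix (12) for a sample of hidden weights W.
    act = (X @ W.T >= 0).astype(float)
    return (X @ X.T) * (act @ act.T) / W.shape[0]

H_inf = gram(rng.normal(size=(200_000, d)))        # Monte Carlo proxy for the expectation

# The deviation of H(0) from its expectation shrinks as the width m grows,
# at the usual O(1/sqrt(m)) Monte Carlo rate.
devs = [np.linalg.norm(gram(rng.normal(size=(m, d))) - H_inf)
        for m in (10, 100, 1000, 10_000)]
assert devs[0] > devs[-1]
```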
Remark on convergence rate.
The bound (11) states that the network's predictions on the train set converge to the train labels exponentially fast, which is a very strong result. However, in optimization theory the convergence rate is usually measured by how fast an algorithm converges to a stationary point. A (first-order) stationary point is a critical point of the loss function:

$$ \nabla L(W) = 0. $$

A second-order stationary point is a stationary point at which the Hessian of the loss is positive semidefinite:

$$ \nabla L(W) = 0, \qquad \nabla^2 L(W) \succeq 0. $$
It is well known in optimization theory that, for a general gradient-Lipschitz loss function, gradient descent converges to a stationary point in time independent of the number of parameters. Let us look at how fast the gradient of the loss decays to zero in our case:

$$ \left\| \frac{\partial L(W(t))}{\partial w_r} \right\|_2 \le \frac{1}{\sqrt{m}} \sum_{i=1}^{n} |u_i(t) - y_i| \le \sqrt{\frac{n}{m}}\; \|u(t) - y\|_2 \le \sqrt{\frac{n}{m}}\; e^{-\lambda_0 t / 2}\, \|u(0) - y\|_2. $$

We see that as $m$ increases (which is related to the number of parameters), the gradient norm decreases, hence the time needed to reduce the gradient norm below some $\varepsilon$ also decreases, which was not the case in the general setting.
What about second-order stationary points? Theorem 4.8 of [Lee et al., 2016] states that randomly initialized gradient descent converges to a second-order stationary point almost surely. However, this theorem says nothing about the convergence rate, and [Du et al., 2017] construct an example of a loss function for which randomly initialized gradient descent requires time exponential in the dimension in order to converge to a second-order stationary point. It is not hard to check that in our case the Hessian of the loss function is positive semidefinite almost everywhere (except at the points where ReLU is not differentiable). Indeed, since $f(x_i; W, a)$ is piecewise linear in $W$, at any point of differentiability we have

$$ \nabla^2_W L(W) = \sum_{i=1}^{n} \nabla_W f(x_i; W, a)\, \nabla_W f(x_i; W, a)^\top. $$

We see that the Hessian is a sum of (rank-one) Gram matrices, which are positive semidefinite, hence the whole Hessian is positive semidefinite too. Given that, our method converges to a second-order stationary point in time that decreases with the number of parameters.
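This claim about the Hessian can be checked directly. Away from the ReLU kinks the predictions are linear in $W$, so the Hessian of the loss equals $J^\top J$ for the prediction Jacobian $J$; the sketch below (our own toy check, with arbitrary small dimensions) builds it explicitly and verifies positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(7)

n, d, m = 4, 3, 20
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

# Away from the ReLU kinks, f(x_i; W, a) is linear in W, so the Hessian of the
# loss wrt vec(W) equals J^T J, where J is the Jacobian of the predictions:
# J[i, (r, k)] = (1/sqrt(m)) * a_r * [w_r . x_i >= 0] * x_ik.
act = (X @ W.T >= 0).astype(float)                       # n x m indicators
J = (act * a)[:, :, None] * X[:, None, :] / np.sqrt(m)   # n x m x d
J = J.reshape(n, m * d)
H = J.T @ J                                              # (m*d) x (m*d) Hessian

# A sum of outer products is positive semidefinite (up to round-off).
assert np.linalg.eigvalsh(H).min() >= -1e-10
```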
3 Conclusion
As we have seen, the optimization mystery can be resolved, but only either for linear networks (the theorem of [Lu and Kawaguchi, 2017]), which are not commonly used, or for unrealistically wide nonlinear nets (the theorems of [Yu and Chen, 1995], [Nguyen and Hein, 2017], and [Du et al., 2019]). We suspect that assuming a cluster structure in the dataset (e.g., the classes of MNIST are separated from each other in pixel space) would allow one to reduce the required amount of overparameterization substantially (this idea is in fact elaborated in [Nguyen and Hein, 2017]). However, it seems that even for unstructured data the amount of overparameterization required by the theorem of [Du et al., 2019] is far larger than what suffices in practice [Zhang et al., 2016]. We suspect that these bounds could be tightened even in the general case using more involved techniques.
References
 [Du et al., 2017] Du, S. S., Jin, C., Lee, J. D., Jordan, M. I., Poczos, B., and Singh, A. (2017). Gradient Descent Can Take Exponential Time to Escape Saddle Points. arXiv e-prints.
 [Du et al., 2019] Du, S. S., Zhai, X., Poczos, B., and Singh, A. (2019). Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations.
 [Lee et al., 2016] Lee, J. D., Simchowitz, M., Jordan, M. I., and Recht, B. (2016). Gradient Descent Converges to Minimizers. arXiv e-prints.
 [Lu and Kawaguchi, 2017] Lu, H. and Kawaguchi, K. (2017). Depth Creates No Bad Local Minima. arXiv e-prints.

 [Miyato et al., 2017] Miyato, T., Maeda, S.-i., Koyama, M., and Ishii, S. (2017). Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning. arXiv e-prints.
 [Nguyen and Hein, 2017] Nguyen, Q. and Hein, M. (2017). The loss surface of deep and wide neural networks. arXiv e-prints.
 [Rahimi, 2017] Rahimi, A. (2017). Machine Learning has become Alchemy. Presented at NIPS 2017, Test of Time Award.
 [Tian, 2017] Tian, Y. (2017). An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis. arXiv e-prints.

 [Yu and Chen, 1995] Yu, X.-H. and Chen, G.-A. (1995). On the Local Minima Free Condition of Backpropagation Learning. IEEE Transactions on Neural Networks, pages 1300–1303.
 [Zhang et al., 2016] Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv e-prints.