1 Introduction
Deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance in many applications such as computer vision (Krizhevsky et al., 2012), natural language processing (Dauphin et al., 2016), and reinforcement learning applied to classic games like Go (Silver et al., 2016). Despite the highly non-convex nature of the objective function, simple first-order algorithms like stochastic gradient descent and its variants often train such networks successfully. Why these simple methods succeed in learning DCNNs remains elusive from the optimization perspective.
Recently, a line of research (Tian, 2017; Brutzkus & Globerson, 2017; Li & Yuan, 2017; Soltanolkotabi, 2017; Shalev-Shwartz et al., 2017b) assumed the input distribution is Gaussian and showed that stochastic gradient descent with random or zero initialization is able to train a neural network with ReLU activation in polynomial time. However, these results all assume there is only one unknown layer, while the other layer is a fixed vector. A natural question thus arises:
Does randomly initialized (stochastic) gradient descent learn neural networks with multiple layers?
In this paper, we take an important step by showing that randomly initialized gradient descent learns a nonlinear convolutional neural network with two unknown layers, w and a. To our knowledge, our work is the first of its kind.
Formally, we consider the convolutional case in which a filter w is shared among different hidden nodes. Let x be an input sample, e.g., an image. We generate k patches from x, each of size p, and collect them into a matrix Z whose i-th column Z_i is the i-th patch, generated by selecting some coordinates of x. We further assume there is no overlap between patches. Thus, the neural network function has the following form:
$$f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_{j=1}^{k} a_j \,\sigma\!\left(\mathbf{w}^\top \mathbf{Z}_j\right),$$
where σ denotes the ReLU activation.
We focus on the realizable case, i.e., the label is generated according to y = f(Z, w*, a*) for some true parameters w* and a*, and we use the ℓ2 loss to learn the parameters:
$$\ell(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \tfrac{1}{2}\left(f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) - f(\mathbf{Z}, \mathbf{w}^*, \mathbf{a}^*)\right)^2.$$
We assume each patch of Z is sampled from a Gaussian distribution and there is no overlap between patches. This assumption is equivalent to assuming each entry of Z is sampled from a Gaussian distribution (Brutzkus & Globerson, 2017; Zhong et al., 2017b). Following (Zhong et al., 2017a, b; Li & Yuan, 2017; Tian, 2017; Brutzkus & Globerson, 2017; Shalev-Shwartz et al., 2017b), in this paper we mainly focus on the population loss:
$$\ell(\mathbf{w}, \mathbf{a}) = \mathbb{E}_{\mathbf{Z}}\!\left[\tfrac{1}{2}\left(f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) - f(\mathbf{Z}, \mathbf{w}^*, \mathbf{a}^*)\right)^2\right].$$
We study whether global convergence to w* and a* can be achieved when optimizing this loss using randomly initialized gradient descent.
A crucial difference between our two-layer network and previous one-layer models is a positive-homogeneity issue: for any c > 0, f(Z, cw, a/c) = f(Z, w, a). This interesting property allows the network to be rescaled without changing the function it computes. As reported by Neyshabur et al. (2015), it is desirable to have a scaling-invariant learning algorithm to stabilize the training process.
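To make the homogeneity concrete, with σ denoting the ReLU and the network written as a second-layer-weighted sum of filter responses (our notation for the elided display), the identity follows in one line from σ(cx) = cσ(x) for c > 0:

```latex
f(\mathbf{Z};\, c\mathbf{w},\, \mathbf{a}/c)
  = \sum_{j=1}^{k} \frac{a_j}{c}\,\sigma\!\left(c\,\mathbf{w}^\top \mathbf{Z}_j\right)
  = \sum_{j=1}^{k} \frac{a_j}{c}\cdot c\,\sigma\!\left(\mathbf{w}^\top \mathbf{Z}_j\right)
  = f(\mathbf{Z};\, \mathbf{w},\, \mathbf{a}), \qquad \forall\, c > 0.
```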
One commonly used technique to achieve such stability is weight normalization, introduced by Salimans & Kingma (2016). As reported in (Salimans & Kingma, 2016), this reparametrization improves the conditioning of the gradient because it decouples the magnitude of the weight vector from its direction, and it empirically accelerates stochastic gradient descent optimization.
In our setting, we reparametrize the first layer as w = v/‖v‖, and the prediction function becomes
$$f(\mathbf{Z}, \mathbf{v}, \mathbf{a}) = \sum_{j=1}^{k} a_j \,\sigma\!\left(\frac{\mathbf{Z}_j^\top \mathbf{v}}{\lVert \mathbf{v} \rVert}\right). \tag{1}$$
The loss function is
$$\ell(\mathbf{v}, \mathbf{a}) = \mathbb{E}_{\mathbf{Z}}\!\left[\tfrac{1}{2}\left(f(\mathbf{Z}, \mathbf{v}, \mathbf{a}) - f(\mathbf{Z}, \mathbf{w}^*, \mathbf{a}^*)\right)^2\right]. \tag{2}$$
In this paper we focus on using randomly initialized gradient descent to learn this convolutional neural network. The pseudocode is listed in Algorithm 1.^1

^1 With some simple calculations, we can see that the optimal solution for a is unique, which we denote by a*, whereas the optimal solution for v is not: for every optimal v, cv for any c > 0 is also optimal. In this paper, with a slight abuse of notation, we use v* to denote the equivalence class of optimal solutions.
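As an illustration (not the paper's exact pseudocode), the following Python sketch runs weight-normalized gradient descent on freshly sampled Gaussian patches; the model sizes, the choice of ground truth, the gradient formulas, and all hyperparameters here are our own:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(Z, v, a):
    """Batched prediction with the weight-normalized filter w = v/||v||.
    Z has shape (n, p, k): n samples, each with k non-overlapping patches of size p."""
    w = v / np.linalg.norm(v)
    return relu(np.einsum("npk,p->nk", Z, w)) @ a

def gd_step(Z, y, v, a, lr):
    """One gradient-descent step on the empirical squared loss (our reconstruction)."""
    norm_v = np.linalg.norm(v)
    w = v / norm_v
    Zw = np.einsum("npk,p->nk", Z, w)
    h = relu(Zw)
    err = h @ a - y                                   # residuals f - y, shape (n,)
    g_a = (err[:, None] * h).mean(axis=0)
    df_dw = np.einsum("npk,nk->np", Z, a * (Zw > 0))  # ReLU-gated patch sums
    g_w = (err[:, None] * df_dw).mean(axis=0)
    # chain rule through the normalization: dw/dv = (I - w w^T) / ||v||
    g_v = (g_w - (g_w @ w) * w) / norm_v
    return v - lr * g_v, a - lr * g_a

def train(p=8, k=5, iters=500, batch=512, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w_star = np.eye(p)[0]        # ground-truth filter (unit norm), our choice
    a_star = np.ones(k)          # ground-truth second layer, our choice
    v = rng.normal(size=p)
    if v @ w_star < 0:           # illustrative shortcut; Theorem 4.2 instead tries all signs
        v = -v
    a = 0.1 * np.ones(k)
    Z_test = rng.normal(size=(4000, p, k))
    y_test = forward(Z_test, w_star, a_star)
    loss0 = 0.5 * np.mean((forward(Z_test, v, a) - y_test) ** 2)
    for _ in range(iters):
        Z = rng.normal(size=(batch, p, k))
        y = forward(Z, w_star, a_star)
        v, a = gd_step(Z, y, v, a, lr)
    loss1 = 0.5 * np.mean((forward(Z_test, v, a) - y_test) ** 2)
    return loss0, loss1
```

Running `train()` should show the held-out squared error dropping far below its initial value, in line with the global-convergence behavior discussed below.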
Main Contributions. Our paper makes three contributions. First, we show that if (v, a) is initialized by a specific random initialization scheme, then with high probability, gradient descent from (v, a) converges to the teacher's parameters (w*, a*). We can further boost the success rate with more trials.
Second, perhaps surprisingly, we prove that the objective function (Equation (2)) does have a spurious local minimum: using the same random initialization scheme, there exists a pair (v, a) such that gradient descent from it converges to this bad local minimum. In contrast to previous works on guarantees for non-convex objective functions whose landscapes satisfy the "no spurious local minima" property (Li et al., 2016; Ge et al., 2017a, 2016; Bhojanapalli et al., 2016; Ge et al., 2017b; Kawaguchi, 2016), our result provides a concrete counterexample and highlights a conceptually surprising phenomenon:
Randomly initialized local search can find a global minimum in the presence of spurious local minima.
Finally, we conduct a quantitative study of the dynamics of gradient descent. We show that the dynamics of Algorithm 1 has two phases. At the beginning (around the first 50 iterations in Figure 1(b)), because the magnitude of the initial signal (governed by the angle between v and w*) is small, the prediction error drops slowly. After that, once the signal becomes stronger, gradient descent converges at a much faster rate and the prediction error drops quickly.
Technical Insights. The main difficulty in analyzing the convergence is the presence of the local minimum. Note that the local minimum and the global minimum are disjoint (cf. Figure 1(b)). The key technique we adopt is to characterize the attraction basin of each minimum. Consider the sequence generated by Algorithm 1 with a given step size from an initialization point. The attraction basin of a minimum is the set of initialization points from which this sequence converges to that minimum.
The goal is to find a distribution for the weight initialization such that the probability that the initial weights fall in the attraction basin of the global minimum is bounded below by some absolute constant.
While it is hard to characterize the attraction basin of the global minimum exactly, we find that it contains a set defined by simple conditions on the iterates (cf. Lemmas 5.2-5.4). Furthermore, when the learning rate is sufficiently small, we can design a specific initialization distribution that places the initial weights in this set with constant probability.
This analysis emphasizes that for non-convex optimization problems, we need to carefully characterize both the trajectory of the algorithm and the initialization. We believe this idea is applicable to other non-convex problems.
To obtain the convergence rate, we propose a potential function (also called a Lyapunov function in the literature). For this problem we consider a quantity measuring the angle between v and w*, and we show that it shrinks at a geometric rate (cf. Lemma 5.5).
Organization. This paper is organized as follows. In Section 3 we introduce the necessary notations and the analytical formulas of the gradient updates in Algorithm 1. In Section 4, we provide our main theorems on the performance of the algorithm and their implications. In Section 5, we give a proof sketch of our main theorem. In Section 6, we use simulations to verify our theories. We conclude and list future directions in Section 7. We place most of the detailed proofs in the appendix.
2 Related Works
From the point of view of learning theory, it is well known that training a neural network is hard in the worst case (Blum & Rivest, 1989; Livni et al., 2014; Šíma, 2002; Shalev-Shwartz et al., 2017a, b). Recently, Shamir (2016) showed that assumptions on both the target function and the input distribution are needed for optimization algorithms used in practice to succeed.
Learning neural networks without gradient descent. Under additional assumptions, many works have designed algorithms that provably learn a neural network with polynomial time and sample complexity (Goel et al., 2016; Zhang et al., 2015; Sedghi & Anandkumar, 2014; Janzamin et al., 2015; Goel & Klivans, 2017a, b). However, these algorithms are specially designed for certain architectures and cannot explain why (stochastic) gradient-based optimization algorithms work well in practice.
Gradient-based optimization with Gaussian input. Focusing on gradient-based algorithms, a line of research has analyzed the behavior of (stochastic) gradient descent under a Gaussian input distribution. Tian (2017) showed that population gradient descent is able to find the true weight vector with random initialization for a one-layer, one-neuron model. Soltanolkotabi (2017) later improved this result by showing that the true weights can be exactly recovered by empirical projected gradient descent with enough samples in linear time. Brutzkus & Globerson (2017) showed that population gradient descent recovers the true weights of a convolution filter with non-overlapping input in polynomial time. Zhong et al. (2017b, a) proved that with sufficiently good initialization, which can be implemented by a tensor method, gradient descent can find the true weights of one-hidden-layer fully connected and convolutional neural networks.
Li & Yuan (2017) showed that SGD can recover the true weights of a one-layer ResNet model with ReLU activation under the assumption that the spectral norm of the true weights is within a small constant of the identity mapping. Panigrahy et al. (2018) also analyzed gradient descent for learning a two-layer neural network, but with different activation functions. This paper follows the same line of approach, studying the behavior of the gradient descent algorithm with Gaussian inputs.

Local minima and global minima. Finding the optimal weights of a neural network is a non-convex problem. Recently, researchers found that if the objective function satisfies the following two key properties, (1) all saddle points and local maxima are strict (i.e., there exists a direction with negative curvature), and (2) all local minima are global (no spurious local minima), then perturbed (stochastic) gradient descent (Ge et al., 2015) or methods with second-order information (Carmon et al., 2016; Agarwal et al., 2017) can find a global minimum in polynomial time.^2 Combined with geometric analyses, these algorithmic results have shown that a large number of problems, including tensor decomposition (Ge et al., 2015), dictionary learning (Sun et al., 2017), matrix sensing (Bhojanapalli et al., 2016; Park et al., 2017), matrix completion (Ge et al., 2017a, 2016) and matrix factorization (Li et al., 2016), can be solved in polynomial time with local search algorithms.

^2 Lee et al. (2016) showed that vanilla gradient descent converges only to minimizers, with no convergence rate guarantees. Recently, Du et al. (2017a) gave an exponential time lower bound for vanilla gradient descent. In this paper, we give a polynomial convergence guarantee for vanilla gradient descent.
This motivates research studying the landscape of neural networks (Kawaguchi, 2016; Choromanska et al., 2015; Hardt & Ma, 2016; Haeffele & Vidal, 2015; Mei et al., 2016; Freeman & Bruna, 2016; Safran & Shamir, 2016; Zhou & Feng, 2017; Nguyen & Hein, 2017a, b; Ge et al., 2017b; Safran & Shamir, 2017). In particular, Kawaguchi (2016); Hardt & Ma (2016); Zhou & Feng (2017); Nguyen & Hein (2017a, b); Feizi et al. (2017) showed that under some conditions, all local minima are global. Recently, Ge et al. (2017b) showed that, using a modified objective function satisfying the two properties above, a one-hidden-layer neural network can be learned by noisy perturbed gradient descent. However, for non-linear activation functions, in the regime where the number of samples is larger than the number of nodes at every layer (which is usually the case in most deep neural networks), and for natural objective functions like the ℓ2 loss, it is still unclear whether the strict-saddle and "all local minima are global" properties are satisfied. In this paper, we show that even for a one-hidden-layer neural network with ReLU activation, there exists a spurious local minimum. We further show that, nevertheless, randomly initialized local search achieves the global minimum with constant probability.
3 Preliminaries
We use boldfaced letters for vectors and matrices. We use ‖·‖ to denote the Euclidean norm of a finite-dimensional vector. We let v_t and a_t be the parameters at the t-th iteration, and w* and a* be the optimal weights. For two vectors v and w, we use θ(v, w) to denote the angle between them. a_i is the i-th coordinate of a, and Z_i is the i-th patch (thus a column vector). We denote by S^(p-1) the unit sphere and by B(0, r) the ball centered at the origin with radius r.
In this paper we assume every patch is a vector of i.i.d. Gaussian random variables. The following theorem gives an explicit formula for the population loss. The proof uses the basic rotational-invariance property and the polar decomposition of Gaussian random variables. See Section A for details.

Theorem 3.1.
If every entry of Z is i.i.d. sampled from a Gaussian distribution with mean 0 and variance 1, then the population loss is
(3)
where the closed-form expression depends on v and w* only through the angle between them, together with norms and inner products involving the second-layer vectors a and a*.
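The rotational-invariance computation behind Theorem 3.1 reduces to Gaussian expectations of products of ReLUs, for which a classical closed form is known (Cho & Saul, 2009): for unit vectors w, w* at angle θ and g ~ N(0, I), E[σ(g·w)σ(g·w*)] = (sin θ + (π − θ) cos θ)/(2π). The following check (our own illustration, not the paper's Equation (3)) verifies this identity by Monte Carlo:

```python
import numpy as np

def closed_form(theta):
    # E[relu(g.w) relu(g.w*)] for unit w, w* at angle theta, g ~ N(0, I)
    return (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def monte_carlo(theta, n=400_000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.array([1.0, 0.0])
    w_star = np.array([np.cos(theta), np.sin(theta)])
    g = rng.normal(size=(n, 2))  # 2 dimensions suffice by rotational invariance
    return np.mean(np.maximum(g @ w, 0) * np.maximum(g @ w_star, 0))
```

For example, at θ = π/3 the Monte Carlo estimate matches the closed form to within sampling error.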
Using similar techniques, we can show the gradient also has an analytical form.
Theorem 3.2.
Suppose every entry of Z is i.i.d. sampled from a Gaussian distribution with mean 0 and variance 1. Denote by φ the angle between v and w*. Then the expected gradients with respect to v and a can be written in closed form.
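Although the closed-form population gradients of Theorem 3.2 are not reproduced here, the per-sample gradients of the weight-normalized loss can be checked numerically. The sketch below (our own construction, with hypothetical helper names) verifies the chain rule through w = v/‖v‖ against central finite differences:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def loss(Z, y, v, a):
    """Per-sample squared loss with weight-normalized first layer."""
    w = v / np.linalg.norm(v)
    return 0.5 * (a @ relu(Z.T @ w) - y) ** 2

def manual_grads(Z, y, v, a):
    norm_v = np.linalg.norm(v)
    w = v / norm_v
    h = relu(Z.T @ w)
    err = a @ h - y
    g_a = err * h
    df_dw = Z @ (a * (Z.T @ w > 0))          # ReLU gates select active patches
    # dw/dv = (I - w w^T)/||v||, so project df_dw and rescale
    g_v = err * (df_dw - (df_dw @ w) * w) / norm_v
    return g_v, g_a

def numeric_grad(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g
```

Comparing `manual_grads` with `numeric_grad` on random inputs confirms the analytical expressions used in our illustrations.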
As a remark, if the second layer a is fixed, then upon proper scaling, the formulas for the population loss and for the gradient with respect to v are equivalent to the corresponding formulas derived in (Brutzkus & Globerson, 2017; Cho & Saul, 2009). However, when the second layer is not fixed, the gradient with respect to v depends on a, which plays an important role in deciding whether gradient descent converges to the global or the spurious local minimum.
4 Main Result
We begin with our main theorem about the convergence of gradient descent.
Theorem 4.1.
Suppose the initialization (v_0, a_0) satisfies the conditions guaranteed by our initialization scheme (Theorem 4.2) and the step size is sufficiently small.
Then the convergence of gradient descent has two phases.
(Phase I: Slow Initial Rate) There exists an iteration count T_1, depending on the inverse of the initial signal, such that after T_1 iterations the angle between v and w* is bounded away from π/2 and the second-layer signal is bounded below by a positive constant.
(Phase II: Fast Rate) Suppose the conclusions of Phase I hold at the T_1-th iteration. Then there exists T_2 = Õ(log(1/ε)), where Õ hides logarithmic factors in the other problem parameters, such that after T_1 + T_2 iterations the loss is at most ε.
Theorem 4.1 shows that, under certain conditions on the initialization, gradient descent converges to the global minimum. The convergence has two phases. At the beginning, because the initial signal is small, the convergence is quite slow. After T_1 iterations, the signal becomes stronger and we enter a regime with a much faster convergence rate. See Lemma 5.5 for technical details.
Initialization plays an important role in the convergence. First, Theorem 4.1 requires the initialization to satisfy certain sign and magnitude conditions. Second, the step size and the convergence rate in the first phase also depend on the initialization. If the initial signal is very small, for example if the angle between v_0 and w* is close to π/2, we can only choose a very small step size, and because T_1 depends on the inverse of the initial signal, we need a large number of iterations to enter Phase II. We provide the following initialization scheme, which ensures the conditions required by Theorem 4.1 together with a large enough initial signal.
Theorem 4.2.
Sample v_0 uniformly from the unit sphere and a_0 uniformly from a ball of appropriate radius. Then there exists a sign combination among (±v_0, ±a_0) that satisfies the conditions required by Theorem 4.1. Further, with high probability, the initialization also provides a sufficiently large initial signal.
Theorem 4.2 shows that after generating a pair of random vectors (v_0, a_0) and trying out all sign combinations of (±v_0, ±a_0), we can find the global minimum by gradient descent. Further, because the initial signal is not too small, we only need a moderately small step size, and the number of iterations in Phase I is polynomial. Therefore, Theorem 4.1 and Theorem 4.2 together show that randomly initialized gradient descent learns a one-hidden-layer convolutional neural network in polynomial time. The proof of the first part of Theorem 4.2 uses the symmetry of the unit sphere and the ball, and the second part is a standard argument about random vectors in high-dimensional spaces. See Lemma 2.5 of (Hardt & Price, 2014) for an example.
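The sign-trial procedure suggested by Theorem 4.2 can be sketched as follows (a hypothetical implementation with our own model sizes and hyperparameters: draw one random pair, run gradient descent from each of the four sign combinations, and keep the run with the smallest final loss):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(Z, v, a):
    w = v / np.linalg.norm(v)
    return relu(np.einsum("npk,p->nk", Z, w)) @ a

def gd(v, a, w_star, a_star, rng, iters=300, batch=256, lr=0.05):
    """Weight-normalized gradient descent on fresh Gaussian batches."""
    p, k = len(v), len(a)
    for _ in range(iters):
        Z = rng.normal(size=(batch, p, k))
        y = forward(Z, w_star, a_star)
        norm_v = np.linalg.norm(v)
        w = v / norm_v
        Zw = np.einsum("npk,p->nk", Z, w)
        h = relu(Zw)
        err = h @ a - y
        g_a = (err[:, None] * h).mean(axis=0)
        df_dw = np.einsum("npk,nk->np", Z, a * (Zw > 0))
        g_w = (err[:, None] * df_dw).mean(axis=0)
        v = v - lr * (g_w - (g_w @ w) * w) / norm_v
        a = a - lr * g_a
    return v, a

def best_of_sign_trials(p=8, k=5, seed=0):
    rng = np.random.default_rng(seed)
    w_star = np.eye(p)[0]            # our choice of ground truth
    a_star = np.ones(k)
    v0 = rng.normal(size=p)
    a0 = 0.1 * rng.normal(size=k)
    Z_test = rng.normal(size=(4000, p, k))
    y_test = forward(Z_test, w_star, a_star)
    best_loss = np.inf
    for sv in (1.0, -1.0):           # the four sign combinations of Theorem 4.2
        for sa in (1.0, -1.0):
            v, a = gd(sv * v0, sa * a0, w_star, a_star, rng)
            loss = 0.5 * np.mean((forward(Z_test, v, a) - y_test) ** 2)
            best_loss = min(best_loss, loss)
    return best_loss
```

The best of the four runs should reach a near-zero held-out loss, illustrating how sign trials sidestep the bad basin.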
Remark 1: For the second layer we use a small-norm random initialization, in line with common initialization techniques (Glorot & Bengio, 2010; He et al., 2015; LeCun et al., 1998).
Remark 2: The Gaussian input assumption is not necessarily true in practice, although it is a common assumption in previous papers (Brutzkus & Globerson, 2017; Li & Yuan, 2017; Zhong et al., 2017a, b; Tian, 2017; Xie et al., 2017; Shalev-Shwartz et al., 2017b) and is also considered plausible in (Choromanska et al., 2015). Our result can easily be generalized to rotation-invariant distributions. However, extending it to more general distributional assumptions, e.g., the structural conditions used in (Du et al., 2017b), remains a challenging open problem.
Remark 3: We only require the initialization to be smaller than certain quantities determined by w* and a*. In practice, if the optimization fails, i.e., the initialization is too large, one can halve the initialization size; eventually these conditions will be met.
4.1 Gradient Descent Can Converge to the Spurious Local Minimum
Theorem 4.2 shows that among the sign combinations (±v_0, ±a_0), there is a pair that enables gradient descent to converge to the global minimum. Perhaps surprisingly, the next theorem shows that under some conditions on the underlying truth, there is also a pair that makes gradient descent converge to the spurious local minimum.
Theorem 4.3.
Without loss of generality, we normalize the ground truth. Suppose a certain ratio determined by a* (discussed below) is sufficiently small. Using the same sampling scheme as in Theorem 4.2, with high probability there exists a sign combination such that, if it is used as the initialization, then when Algorithm 1 converges, the iterates are at the spurious local minimum rather than at (w*, a*).
Unlike Theorem 4.1, which requires no assumption on the underlying truth, Theorem 4.3 assumes this ratio is small. This technical condition comes from the proof, which requires an invariance to hold for all iterations. To ensure there exists an initialization from which gradient descent converges to the local minimum, we need the ratio to be relatively small. See Section E for more technical insights.
A natural question is whether, as this ratio becomes larger, the probability that randomly initialized gradient descent converges to the global minimum becomes larger as well. We verify this phenomenon empirically in Section 6.
5 Proof Sketch
In Section 5.1, we give high-level intuition for why the initial conditions are sufficient for gradient descent to converge to the global minimum. In Section 5.2, we explain why gradient descent has two phases.
5.1 Qualitative Analysis of Convergence
The convergence to the global optimum relies on a geometric characterization of the stationary points and on a series of invariants maintained throughout the gradient descent dynamics. The next lemma gives the analysis of stationary points. The main step is to check the first-order condition of stationarity using Theorem 3.2.
Lemma 5.1 (Stationary Point Analysis).
When gradient descent converges and v and w* are not orthogonal, the limit point is either the global minimum or the spurious local minimum.
This lemma shows that if the algorithm converges and v and w* are not orthogonal, then we arrive either at the global minimum or at the spurious local minimum. Now recall the gradient formula for v from Theorem 3.2. Notice that (I - ww^T), with w = v/‖v‖, is just the projection matrix onto the complement of w. Therefore, the sign of the inner product between a and a* plays a crucial role in the dynamics of Algorithm 1: if the inner product is positive, the gradient update decreases the angle between v and w*, and if it is negative, the angle increases. This observation is formalized in the lemma below.
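One standard consequence of this projection structure (stated here for intuition; it is a generic property of weight-normalized gradient descent rather than a lemma quoted from this paper) is that the norm of v never decreases: the gradient with respect to v lies in the range of (I − ww^T) and is therefore orthogonal to v_t, so

```latex
\lVert \mathbf{v}_{t+1} \rVert^2
  = \lVert \mathbf{v}_t - \eta\, \nabla_{\mathbf{v}} \ell(\mathbf{v}_t, \mathbf{a}_t) \rVert^2
  = \lVert \mathbf{v}_t \rVert^2 + \eta^2 \lVert \nabla_{\mathbf{v}} \ell(\mathbf{v}_t, \mathbf{a}_t) \rVert^2
  \;\ge\; \lVert \mathbf{v}_t \rVert^2 ,
```

because the cross term vanishes: (I − w_t w_t^T) v_t = 0.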
Lemma 5.2 (Invariance I: The angle between v and w* always decreases).
If a_t^T a* > 0, then θ(v_{t+1}, w*) ≤ θ(v_t, w*).
This lemma shows that if a_t^T a* > 0 for all t, then gradient descent converges to the global minimum. Thus, we need to study the dynamics of a_t^T a*. For ease of presentation, without loss of generality, we normalize the ground truth. By the gradient formula for a, the change of a_t^T a* over one iteration decomposes into two terms:
(4)
We can use induction to prove the invariance. If the invariances hold at iteration t, the first term of Equation (4) is nonnegative. For the second term, notice that if the angle between v_t and w* is at most π/2, the relevant inner product is nonnegative, so the second term is nonnegative as well. Therefore, as long as the remaining quantity is also nonnegative, we have the desired invariance. The next lemma summarizes the above analysis.
Lemma 5.3 (Invariance II: Positive signal from the second layer).
If a_t^T a* > 0, the angle between v_t and w* is at most π/2, and the summation of the second layer is controlled as in Lemma 5.4, then a_{t+1}^T a* > 0.
It remains to control the summation of the second layer. Again, we study the dynamics of this quantity. Using the gradient formula and some algebra, we can bound its change over one iteration, where we use the fact that the angle between v_t and w* is at most π/2 for all iterations. These bounds imply the third invariance.
Lemma 5.4 (Invariance III: Summation of the second layer stays small).
If the first two invariances hold at iteration t, then the summation of the second layer remains small at iteration t+1.
5.2 Quantitative Analysis of Two Phase Phenomenon
In this section we explain why there is a two-phase phenomenon. Throughout this section, we assume the conditions in Section 5.1 hold. We first consider the convergence of the first layer. Because we are using weight normalization, only the angle between v and w* affects the prediction. Therefore, we study the dynamics of θ(v_t, w*). The following lemma quantitatively characterizes the shrinkage of this quantity over one iteration.
Lemma 5.5 (Convergence of the angle between v and w*).
This lemma shows that the convergence rate depends on two crucial quantities: the signal from the second layer and the angle itself. At the beginning, both are small. Nevertheless, Lemma C.3 shows the second-layer signal is universally lower bounded, so after polynomially many iterations the angle between v_t and w* is bounded away from π/2. Once this happens, Lemma C.2 shows that, after further iterations, the second-layer signal becomes a positive constant. Combining these facts, the convergence rate becomes a constant, and we enter Phase II.
In Phase II, Lemma 5.5 shows that the angle shrinks geometrically with a positive absolute constant rate. Therefore, we have a much faster convergence rate than in Phase I: after only logarithmically many further iterations, we obtain an ε-accurate solution.
6 Experiments
In this section, we illustrate our theoretical results with numerical experiments. Again, without loss of generality, we use the same normalization of the ground truth as in Section 5.
6.1 Multiphase Phenomenon
In Figure 2, we fix the problem dimensions and consider four key quantities that arise in proving Theorem 4.1, namely, the angle between v_t and w* (cf. Lemma 5.5), two quantities tracking the second layer (cf. Lemmas C.4 and C.5), and the prediction error (cf. Lemma C.6).
When we achieve the global minimum, all these quantities are zero. At the beginning (the first few iterations), the second-layer quantities and the prediction error drop quickly. This is because the dominating term in the gradient of a at this stage quickly moves the second layer toward the correct direction.
After that, for a long stretch of iterations, all quantities decrease at a slow rate. This corresponds to the Phase I stage in Theorem 4.1: the rate is slow because the initial signal is small.
Later, all quantities drop at a much faster rate. This is because the signal has become strong, and since the convergence rate is proportional to this signal, convergence is much faster (cf. Phase II of Theorem 4.1).
6.2 Probability of Converging to the Global Minimum
In this section we test the probability of converging to the global minimum using the random initialization scheme described in Theorem 4.2. We fix the input dimension, vary k and the ground truth a*, run 5000 random initializations for each configuration, and compute the probability of converging to the global minimum.
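A small-scale version of this experiment can be sketched as follows (our own simplified reproduction, not the paper's setup: a single random initialization per trial without sign trials, short runs, few trials, and an arbitrary success threshold):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(Z, v, a):
    w = v / np.linalg.norm(v)
    return relu(np.einsum("npk,p->nk", Z, w)) @ a

def success_probability(p=8, k=5, trials=20, iters=300, batch=256, lr=0.05,
                        tol=0.05, seed=0):
    """Fraction of random initializations whose final held-out loss is below tol."""
    rng = np.random.default_rng(seed)
    w_star = np.eye(p)[0]        # our choice of ground truth
    a_star = np.ones(k)
    Z_test = rng.normal(size=(2000, p, k))
    y_test = forward(Z_test, w_star, a_star)
    successes = 0
    for _ in range(trials):
        v = rng.normal(size=p)
        a = 0.1 * rng.normal(size=k)
        for _ in range(iters):
            Z = rng.normal(size=(batch, p, k))
            y = forward(Z, w_star, a_star)
            norm_v = np.linalg.norm(v)
            w = v / norm_v
            Zw = np.einsum("npk,p->nk", Z, w)
            h = relu(Zw)
            err = h @ a - y
            g_a = (err[:, None] * h).mean(axis=0)
            df_dw = np.einsum("npk,nk->np", Z, a * (Zw > 0))
            g_w = (err[:, None] * df_dw).mean(axis=0)
            v = v - lr * (g_w - (g_w @ w) * w) / norm_v
            a = a - lr * g_a
        final = 0.5 * np.mean((forward(Z_test, v, a) - y_test) ** 2)
        successes += final < tol
    return successes / trials
```

Varying k and the ground truth a* in this sketch reproduces the qualitative trends reported in Table 1.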
In Theorem 4.3, we showed that if a certain ratio determined by a* is sufficiently small, randomly initialized gradient descent converges to the spurious local minimum with constant probability. Table 1 empirically verifies the importance of this assumption: for every fixed k, as the ratio becomes larger, the probability of converging to the global minimum becomes larger.
An interesting phenomenon is that for every fixed ratio, as k becomes larger, the probability of converging to the global minimum becomes smaller. How to quantitatively characterize the relationship between the success probability and the dimension of the second layer is an open problem.
7 Conclusion and Future Works
In this paper we proved the first polynomial-time convergence guarantee for a randomly initialized gradient descent algorithm learning a one-hidden-layer convolutional neural network. Our result reveals an interesting phenomenon: a randomly initialized local search algorithm can converge to a global minimum or to a spurious local minimum. We gave a quantitative characterization of the gradient descent dynamics to explain the two-phase convergence phenomenon. Experimental results also verify our theoretical findings. Here we list some future directions.
Our analysis focused on the population loss with Gaussian input. In practice one uses (stochastic) gradient descent on the empirical loss. Concentration results of the kind in (Mei et al., 2016; Soltanolkotabi, 2017) are useful for generalizing our results to the empirical version. A more challenging question is how to extend the analysis of the gradient dynamics beyond rotationally invariant input distributions. Du et al. (2017b) proved the convergence of gradient descent under some structural input distribution assumptions for a one-layer convolutional neural network. It would be interesting to bring their insights to our setting.
Another interesting direction is to generalize our result to deeper and wider architectures. Specifically, an open problem is under what conditions randomly initialized gradient descent algorithms can learn a one-hidden-layer fully-connected neural network or a convolutional neural network with multiple kernels. Existing results often require a sufficiently good initialization (Zhong et al., 2017a, b). We believe the insights from this paper, especially the invariance principles in Section 5.1, are helpful for understanding the behavior of gradient-based algorithms in these settings.
k    | 0    | 1    | 4    | 9    | 16   | 25
-----|------|------|------|------|------|-----
25   | 0.50 | 0.55 | 0.73 | 1    | 1    | 1
36   | 0.50 | 0.53 | 0.66 | 0.89 | 1    | 1
49   | 0.50 | 0.53 | 0.61 | 0.78 | 1    | 1
64   | 0.50 | 0.51 | 0.59 | 0.71 | 0.89 | 1
81   | 0.50 | 0.53 | 0.57 | 0.66 | 0.81 | 0.97
100  | 0.50 | 0.50 | 0.57 | 0.63 | 0.75 | 0.90

Table 1: Empirical probability of converging to the global minimum (rows: k; columns: values of the ground-truth quantity varied in the experiment).
8 Acknowledgment
This research was partly funded by NSF grant IIS-1563887, AFRL grant FA8750-17-2-0212, and DARPA grant D17AP00001. J.D.L. acknowledges support of the ARO under MURI Award W911NF-11-1-0303, which is part of the collaboration between the US DOD, the UK MOD, and the UK Engineering and Physical Sciences Research Council (EPSRC) under the Multidisciplinary University Research Initiative. The authors thank Xiaolong Wang and Kai Zhong for useful discussions.
References
 Agarwal et al. (2017) Agarwal, Naman, Allen-Zhu, Zeyuan, Bullins, Brian, Hazan, Elad, and Ma, Tengyu. Finding approximate local minima faster than gradient descent. In STOC, 2017. Full version available at http://arxiv.org/abs/1611.01146.
 Bhojanapalli et al. (2016) Bhojanapalli, Srinadh, Neyshabur, Behnam, and Srebro, Nati. Global optimality of local search for low rank matrix recovery. In Advances in Neural Information Processing Systems, pp. 3873–3881, 2016.
 Blum & Rivest (1989) Blum, Avrim and Rivest, Ronald L. Training a 3-node neural network is NP-complete. In Advances in neural information processing systems, pp. 494–501, 1989.
 Brutzkus & Globerson (2017) Brutzkus, Alon and Globerson, Amir. Globally optimal gradient descent for a Convnet with Gaussian inputs. arXiv preprint arXiv:1702.07966, 2017.
 Carmon et al. (2016) Carmon, Yair, Duchi, John C, Hinder, Oliver, and Sidford, Aaron. Accelerated methods for nonconvex optimization. arXiv preprint arXiv:1611.00756, 2016.

 Cho & Saul (2009) Cho, Youngmin and Saul, Lawrence K. Kernel methods for deep learning. In Advances in neural information processing systems, pp. 342–350, 2009.
 Choromanska et al. (2015) Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pp. 192–204, 2015.
 Dauphin et al. (2016) Dauphin, Yann N, Fan, Angela, Auli, Michael, and Grangier, David. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
 Du et al. (2017a) Du, Simon S, Jin, Chi, Lee, Jason D, Jordan, Michael I, Poczos, Barnabas, and Singh, Aarti. Gradient descent can take exponential time to escape saddle points. arXiv preprint arXiv:1705.10412, 2017a.
 Du et al. (2017b) Du, Simon S, Lee, Jason D, and Tian, Yuandong. When is a convolutional filter easy to learn? arXiv preprint arXiv:1709.06129, 2017b.
 Feizi et al. (2017) Feizi, Soheil, Javadi, Hamid, Zhang, Jesse, and Tse, David. Porcupine neural networks:(almost) all local optima are global. arXiv preprint arXiv:1710.02196, 2017.
 Freeman & Bruna (2016) Freeman, C Daniel and Bruna, Joan. Topology and geometry of half-rectified network optimization. arXiv preprint arXiv:1611.01540, 2016.
 Ge et al. (2015) Ge, Rong, Huang, Furong, Jin, Chi, and Yuan, Yang. Escaping from saddle points online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pp. 797–842, 2015.
 Ge et al. (2016) Ge, Rong, Lee, Jason D, and Ma, Tengyu. Matrix completion has no spurious local minimum. In Advances in Neural Information Processing Systems, pp. 2973–2981, 2016.

 Ge et al. (2017a) Ge, Rong, Jin, Chi, and Zheng, Yi. No spurious local minima in nonconvex low rank problems: A unified geometric analysis. In Proceedings of the 34th International Conference on Machine Learning, pp. 1233–1242, 2017a.
 Ge et al. (2017b) Ge, Rong, Lee, Jason D, and Ma, Tengyu. Learning one-hidden-layer neural networks with landscape design. arXiv preprint arXiv:1711.00501, 2017b.
 Glorot & Bengio (2010) Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
 Goel & Klivans (2017a) Goel, Surbhi and Klivans, Adam. Eigenvalue decay implies polynomialtime learnability for neural networks. arXiv preprint arXiv:1708.03708, 2017a.
 Goel & Klivans (2017b) Goel, Surbhi and Klivans, Adam. Learning depththree neural networks in polynomial time. arXiv preprint arXiv:1709.06010, 2017b.
 Goel et al. (2016) Goel, Surbhi, Kanade, Varun, Klivans, Adam, and Thaler, Justin. Reliably learning the ReLU in polynomial time. arXiv preprint arXiv:1611.10258, 2016.
 Haeffele & Vidal (2015) Haeffele, Benjamin D and Vidal, René. Global optimality in tensor factorization, deep learning, and beyond. arXiv preprint arXiv:1506.07540, 2015.
 Hardt & Ma (2016) Hardt, Moritz and Ma, Tengyu. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016.
 Hardt & Price (2014) Hardt, Moritz and Price, Eric. The noisy power method: A meta algorithm with applications. In Advances in Neural Information Processing Systems, pp. 2861–2869, 2014.

 He et al. (2015) He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
 Janzamin et al. (2015) Janzamin, Majid, Sedghi, Hanie, and Anandkumar, Anima. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
 Kawaguchi (2016) Kawaguchi, Kenji. Deep learning without poor local minima. In Advances In Neural Information Processing Systems, pp. 586–594, 2016.
 Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
 LeCun et al. (1998) LeCun, Yann, Bottou, Léon, Orr, Genevieve B, and Müller, Klaus-Robert. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9–50. Springer, 1998.
 Lee et al. (2016) Lee, Jason D, Simchowitz, Max, Jordan, Michael I, and Recht, Benjamin. Gradient descent only converges to minimizers. In Conference on Learning Theory, pp. 1246–1257, 2016.
 Li et al. (2016) Li, Xingguo, Wang, Zhaoran, Lu, Junwei, Arora, Raman, Haupt, Jarvis, Liu, Han, and Zhao, Tuo. Symmetry, saddle points, and global geometry of nonconvex matrix factorization. arXiv preprint arXiv:1612.09296, 2016.
 Li & Yuan (2017) Li, Yuanzhi and Yuan, Yang. Convergence analysis of twolayer neural networks with ReLU activation. arXiv preprint arXiv:1705.09886, 2017.
 Livni et al. (2014) Livni, Roi, Shalev-Shwartz, Shai, and Shamir, Ohad. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pp. 855–863, 2014.
 Mei et al. (2016) Mei, Song, Bai, Yu, and Montanari, Andrea. The landscape of empirical risk for nonconvex losses. arXiv preprint arXiv:1607.06534, 2016.
 Neyshabur et al. (2015) Neyshabur, Behnam, Salakhutdinov, Ruslan R, and Srebro, Nati. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422–2430, 2015.
 Nguyen & Hein (2017a) Nguyen, Quynh and Hein, Matthias. The loss surface of deep and wide neural networks. arXiv preprint arXiv:1704.08045, 2017a.
 Nguyen & Hein (2017b) Nguyen, Quynh and Hein, Matthias. The loss surface and expressivity of deep convolutional neural networks. arXiv preprint arXiv:1710.10928, 2017b.
 Panigrahy et al. (2018) Panigrahy, Rina, Rahimi, Ali, Sachdeva, Sushant, and Zhang, Qiuyi. Convergence results for neural networks via electrodynamics. In LIPIcs-Leibniz International Proceedings in Informatics, volume 94. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
 Park et al. (2017) Park, Dohyung, Kyrillidis, Anastasios, Caramanis, Constantine, and Sanghavi, Sujay. Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach. In Artificial Intelligence and Statistics, pp. 65–74, 2017.
 Safran & Shamir (2016) Safran, Itay and Shamir, Ohad. On the quality of the initial basin in overspecified neural networks. In International Conference on Machine Learning, pp. 774–782, 2016.
 Safran & Shamir (2017) Safran, Itay and Shamir, Ohad. Spurious local minima are common in two-layer ReLU neural networks. arXiv preprint arXiv:1712.08968, 2017.
 Salimans & Kingma (2016) Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–909, 2016.
 Sedghi & Anandkumar (2014) Sedghi, Hanie and Anandkumar, Anima. Provable methods for training neural networks with sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
 Shalev-Shwartz et al. (2017a) Shalev-Shwartz, Shai, Shamir, Ohad, and Shammah, Shaked. Failures of gradient-based deep learning. In International Conference on Machine Learning, pp. 3067–3075, 2017a.
 Shalev-Shwartz et al. (2017b) Shalev-Shwartz, Shai, Shamir, Ohad, and Shammah, Shaked. Weight sharing is crucial to successful optimization. arXiv preprint arXiv:1706.00687, 2017b.
 Shamir (2016) Shamir, Ohad. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037, 2016.
 Silver et al. (2016) Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
 Šíma (2002) Šíma, Jiří. Training a single sigmoidal neuron is hard. Neural Computation, 14(11):2709–2728, 2002.
 Soltanolkotabi (2017) Soltanolkotabi, Mahdi. Learning ReLUs via gradient descent. arXiv preprint arXiv:1705.04591, 2017.
 Sun et al. (2017) Sun, Ju, Qu, Qing, and Wright, John. Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory, 63(2):853–884, 2017.
 Tian (2017) Tian, Yuandong. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560, 2017.
 Xie et al. (2017) Xie, Bo, Liang, Yingyu, and Song, Le. Diverse neural network learns true target functions. In Artificial Intelligence and Statistics, pp. 1216–1224, 2017.
 Zhang et al. (2015) Zhang, Yuchen, Lee, Jason D, Wainwright, Martin J, and Jordan, Michael I. Learning halfspaces and neural networks with random initialization. arXiv preprint arXiv:1511.07948, 2015.
 Zhong et al. (2017a) Zhong, Kai, Song, Zhao, and Dhillon, Inderjit S. Learning non-overlapping convolutional neural networks with multiple kernels. arXiv preprint arXiv:1711.03440, 2017a.
 Zhong et al. (2017b) Zhong, Kai, Song, Zhao, Jain, Prateek, Bartlett, Peter L, and Dhillon, Inderjit S. Recovery guarantees for one-hidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017b.
 Zhou & Feng (2017) Zhou, Pan and Feng, Jiashi. The landscape of deep learning algorithms. arXiv preprint arXiv:1705.07038, 2017.
Appendix A Proofs of Section 3
Proof of Theorem 3.1.
We first expand the loss function directly.
where, for simplicity, we denote
(5)  
(6) 
For , using the second identity of Lemma A.1, we can compute
For , using the second moment formula of the half-Gaussian distribution, we can compute
Therefore
Now let us compute . For , arguing as before and using the independence property of the Gaussian distribution, we have
Next, using the fourth identity of Lemma A.1, we have
Therefore, we can also write in a compact form
Plugging in these formulas, we obtain the desired result. ∎
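The proof above relies on closed-form moments of rectified Gaussian variables. As a sanity check, the following sketch numerically verifies two standard identities of this kind: the second moment of the half-Gaussian distribution, E[ReLU(z)²] = 1/2 for z ~ N(0, 1), and the arc-cosine kernel formula E[ReLU(w⊤z)ReLU(v⊤z)] = (sin θ + (π − θ) cos θ)/(2π) for unit vectors w, v at angle θ (Cho & Saul, 2009). The exact statements of Lemma A.1 are not reproduced here, so these are assumed to be representative of the identities it contains, not a verification of the lemma itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000


def relu(t):
    return np.maximum(t, 0.0)


# (1) Half-Gaussian second moment: for z ~ N(0, 1),
#     E[relu(z)^2] = E[z^2] / 2 = 1/2, since z^2 is symmetric in z.
z = rng.standard_normal(n)
half_gaussian_moment = np.mean(relu(z) ** 2)
assert abs(half_gaussian_moment - 0.5) < 1e-2

# (2) Arc-cosine kernel identity (Cho & Saul, 2009): for z ~ N(0, I_2)
#     and unit vectors w, v with angle theta between them,
#     E[relu(w^T z) relu(v^T z)] = (sin(theta) + (pi - theta) cos(theta)) / (2 pi).
theta = np.pi / 3
w = np.array([1.0, 0.0])
v = np.array([np.cos(theta), np.sin(theta)])
Z = rng.standard_normal((n, 2))
empirical = np.mean(relu(Z @ w) * relu(Z @ v))
closed_form = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
assert abs(empirical - closed_form) < 1e-2

print("moment identities verified")
```

Setting θ = 0 in the second identity recovers the first (π cos 0 / 2π = 1/2), which is a quick consistency check between the two formulas.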