1 Introduction
Deep neural networks have achieved great success in many applications such as image processing (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012) and the game of Go (Silver et al., 2016). However, the reason why deep networks work so well in these fields has remained a mystery for a long time. Different lines of research try to understand the mechanism of deep neural networks from different aspects. For example, a series of works tries to understand how the expressive power of deep neural networks is related to their architecture, including the width of each layer and the depth of the network (Telgarsky, 2015, 2016; Lu et al., 2017; Liang and Srikant, 2016; Yarotsky, 2017, 2018; Hanin, 2017; Hanin and Sellke, 2017). These works show that multilayer networks with sufficiently wide layers can approximate arbitrary continuous functions.
In this paper, we mainly focus on the optimization perspective of deep neural networks. It is well known that, without any additional assumptions, even training a shallow neural network is an NP-hard problem (Blum and Rivest, 1989). Researchers have made various assumptions to get a better theoretical understanding of training neural networks, such as the Gaussian input assumption (Brutzkus et al., 2017; Du et al., 2017a; Zhong et al., 2017) and the independent activation assumption (Choromanska et al., 2015; Kawaguchi, 2016). A recent line of work tries to understand the optimization process of training deep neural networks from two aspects: over-parameterization and random weight initialization. It has been observed that over-parameterization and proper random initialization can help the optimization in training neural networks, and various theoretical results have been established (Safran and Shamir, 2017; Du and Lee, 2018; Arora et al., 2018a; Allen-Zhu et al., 2018c; Du et al., 2018b; Li and Liang, 2018). More specifically, Safran and Shamir (2017) showed that over-parameterization can help reduce the number of spurious local minima in one-hidden-layer neural networks with the Rectified Linear Unit (ReLU) activation function. Du and Lee (2018) showed that with over-parameterization, all local minima in one-hidden-layer networks with quadratic activation function are global minima. Arora et al. (2018b) showed that the over-parameterization introduced by depth can accelerate the training process when using gradient descent (GD). Allen-Zhu et al. (2018c) showed that with over-parameterization and random weight initialization, both gradient descent and stochastic gradient descent (SGD) can find the global minima of recurrent neural networks.
The works most related to ours are Li and Liang (2018) and Du et al. (2018b). Li and Liang (2018) showed that for a one-hidden-layer network with ReLU activation function, using over-parameterization and random initialization, GD and SGD can find near globally optimal solutions in polynomial time with respect to the accuracy parameter, the training sample size, and the data separation parameter (more precisely, Li and Liang (2018) assumed that each data point is generated from one of a collection of distributions, and defined the separation parameter in terms of these distributions). Du et al. (2018b) showed that, under the assumption that the population Gram matrix is not degenerate (more precisely, that its minimal singular value is greater than a constant, where the Gram matrix is defined over the data points), randomly initialized GD converges to a globally optimal solution of a one-hidden-layer network with ReLU activation function and quadratic loss function. However, both Li and Liang (2018) and Du et al. (2018b) only characterized the behavior of gradient-based methods on one-hidden-layer shallow neural networks, rather than on the deep neural networks that are widely used in practice.

In this paper, we aim to advance this line of research by studying the optimization properties of gradient-based methods for deep ReLU neural networks. Specifically, we consider an L-hidden-layer fully-connected neural network with ReLU activation function. Similar to the one-hidden-layer case studied in Li and Liang (2018) and Du et al. (2018b), we study the binary classification problem and show that both GD and SGD can achieve the global minima of the training loss to arbitrary accuracy, with the aid of over-parameterization and random initialization. At the core of our analysis is showing that Gaussian random initialization followed by (stochastic) gradient descent generates a sequence of iterates that stay within a small perturbation region centered around the initial weights. In addition, we show that the empirical loss function of deep ReLU networks has very good local curvature properties inside the perturbation region, which guarantees the global convergence of (stochastic) gradient descent. More specifically, our main contributions are summarized as follows:

We show that with Gaussian random initialization on each layer, GD can achieve zero training error within a number of iterations that is polynomial in the number of training examples, the data separation parameter and the number of hidden layers, provided the number of hidden nodes per layer is large enough (again polynomial in these quantities). Our result applies to a broad family of loss functions, as opposed to the cross-entropy loss studied in Li and Liang (2018) and the quadratic loss considered in Du et al. (2018b).

We also prove a similar convergence result for SGD: with Gaussian random initialization on each layer and a sufficiently large (again polynomial) number of hidden nodes per layer, SGD can also achieve zero training error within a polynomial number of iterations.

In terms of the data distribution, we only make the so-called data separation assumption, which is more realistic than the assumption on the Gram matrix made in Du et al. (2018b). The data separation assumption in this work is similar to, but slightly milder than, that in Li and Yuan (2017), in the sense that it holds as long as the data are sampled from a distribution with a constant margin separating the different classes.
When we were preparing this manuscript, we were informed that two concurrent works (Allen-Zhu et al., 2018b; Du et al., 2018a) had appeared online very recently. Our work bears a similarity to Allen-Zhu et al. (2018b) in the high-level proof idea, which is to extend the results for two-layer ReLU networks in Li and Liang (2018) to deep ReLU networks. However, while Allen-Zhu et al. (2018b) mainly focuses on the regression problem with least squares loss, we study the classification problem for a broad class of loss functions under a milder data distribution assumption. Du et al. (2018a) also studies the regression problem. Compared to their work, ours is based on a different assumption on the training data and is able to deal with the non-smooth ReLU activation function.
The remainder of this paper is organized as follows. In Section 2, we discuss the literature most related to our work. In Section 3, we introduce the problem setup and preliminaries. In Sections 4 and 5, we present our main theoretical results and their proofs, respectively. We conclude and discuss future work in Section 6.
2 Related Work
Due to the huge amount of literature on deep learning theory, we cannot include all papers in this vein here. Instead, we review the following three major lines of research, which are most related to our work.
One-hidden-layer neural networks with ground truth parameters Recently, a series of works (Tian, 2017; Brutzkus and Globerson, 2017; Li and Yuan, 2017; Du et al., 2017a, b; Zhang et al., 2018) studied a specific class of shallow two-layer (one-hidden-layer) neural networks whose training data are generated by a ground truth network called the “teacher network”. This series of works aims to provide recovery guarantees for gradient-based methods to learn the teacher networks based on either the population or the empirical loss function. More specifically, Tian (2017) proved that for two-layer ReLU networks with only one hidden neuron, GD with arbitrary initialization on the population loss is able to recover the hidden teacher network.
Brutzkus and Globerson (2017) proved that GD can learn the true parameters of a two-layer network with a convolution filter. Li and Yuan (2017) proved that SGD can recover the underlying parameters of a two-layer residual network in polynomial time. Moreover, Du et al. (2017a, b) proved that both GD and SGD can recover the teacher network of a two-layer CNN with ReLU activation function. Zhang et al. (2018) showed that GD on the empirical loss function can recover the ground truth parameters of one-hidden-layer ReLU networks at a linear rate.

Deep linear networks Beyond shallow one-hidden-layer neural networks, a series of recent works (Hardt and Ma, 2016; Kawaguchi, 2016; Bartlett et al., 2018; Arora et al., 2018a, b) focuses on the optimization landscape of deep linear networks. More specifically, Hardt and Ma (2016) showed that deep linear residual networks have no spurious local minima. Kawaguchi (2016) proved that all local minima are global minima in deep linear networks. Arora et al. (2018b) showed that depth can accelerate the optimization of deep linear networks. Bartlett et al. (2018) proved that with identity initialization and a proper regularizer, GD converges to the least squares solution on a residual linear network with quadratic loss function, while Arora et al. (2018a) proved the same properties for general deep linear networks.
Generalization bounds for deep neural networks The phenomenon that deep neural networks generalize better than shallow ones has been observed in practice for a long time (Langford and Caruana, 2002). Besides classical VC-dimension based results (Vapnik, 2013; Anthony and Bartlett, 2009), a vast recent literature has studied the connection between the generalization performance of deep neural networks and their architectures (Neyshabur et al., 2015, 2017a, 2017b; Bartlett et al., 2017; Golowich et al., 2017; Arora et al., 2018c; Allen-Zhu et al., 2018a). More specifically, Neyshabur et al. (2015) derived the Rademacher complexity of a class of norm-constrained feed-forward neural networks with ReLU activation function. Bartlett et al. (2017) derived margin bounds for deep ReLU networks based on Rademacher complexity and covering numbers. Neyshabur et al. (2017a, b) derived similar spectrally-normalized margin bounds for deep neural networks with ReLU activation function using a PAC-Bayes approach. Golowich et al. (2017) studied the size-independent sample complexity of deep neural networks and showed that the sample complexity can be independent of both depth and width under additional assumptions. Arora et al. (2018c) proved generalization bounds via a compression-based framework. Allen-Zhu et al. (2018a) showed that an over-parameterized one-hidden-layer neural network can learn, using SGD and up to a small generalization error, a one-hidden-layer network with fewer parameters; similar results also hold for over-parameterized two-hidden-layer neural networks.

3 Problem Setup and Preliminaries
3.1 Notation
We use lower case, lower case bold face, and upper case bold face letters to denote scalars, vectors and matrices respectively. For a positive integer n, we denote [n] = {1, …, n}. For a vector x, we denote by ‖x‖_2 its ℓ_2 norm, ‖x‖_1 its ℓ_1 norm, and ‖x‖_∞ its ℓ_∞ norm. We use diag(x) to denote the square diagonal matrix with the elements of the vector x on its main diagonal. For a matrix A, we use ‖A‖_F to denote its Frobenius norm, ‖A‖_2 to denote its spectral norm (maximum singular value), and ‖A‖_0 to denote the number of its non-zero entries. We denote by S^{d-1} the unit sphere in R^d.

For two sequences {a_n} and {b_n}, we write a_n = O(b_n) if a_n ≤ C b_n for some absolute constant C, and a_n = Ω(b_n) if a_n ≥ C b_n for some absolute constant C. In addition, we use Õ(·) and Ω̃(·) to hide logarithmic factors in the Big-O and Big-Omega notation. We also use the following matrix product notation: for indices l_1, l_2 and a collection of matrices {A_l}, we denote

∏_{l=l_1}^{l_2} A_l := A_{l_2} A_{l_2 - 1} ⋯ A_{l_1} if l_1 ≤ l_2, and the identity matrix otherwise. (1)
3.2 Problem Setup
Let {(x_i, y_i) : i ∈ [n]} be a set of n training examples, where x_i ∈ R^d and y_i ∈ {−1, +1}. We consider L-hidden-layer neural networks of the form

f(x) = v^T σ(W_L^T σ(W_{L−1}^T ⋯ σ(W_1^T x) ⋯ )),

where σ(·) is the entry-wise ReLU activation function, W_1, …, W_L are the weight matrices, and v is a fixed output layer weight vector with half +1 and half −1 entries. Letting W denote the collection of matrices {W_l}_{l=1}^L, we consider solving the following empirical risk minimization problem:
min_W L_S(W) := (1/n) ∑_{i=1}^n ℓ(y_i · f(x_i)), (2)

where f(x_i) denotes the network output on x_i and ℓ(·) is the loss function, on which we make the following assumptions.
The loss function ℓ(·) is continuous and satisfies standard monotonicity and limiting conditions. This assumption has been widely made in studies of training binary classifiers (Soudry et al., 2017; Nacson et al., 2018; Ji and Telgarsky, 2018). In addition, we require the following assumption, which provides an upper bound on the derivative of ℓ(·): there exist positive constants such that, for any argument, the derivative of the loss is bounded in terms of a power of the loss value itself. This assumption holds for a large class of loss functions, including the hinge loss, cross-entropy loss and exponential loss. It is worth noting that in a special case this condition reduces to the Polyak-Łojasiewicz (PL) condition (Polyak, 1963).

The loss function ℓ(·) is smooth, i.e., its derivative is Lipschitz continuous.
In addition, we make the following assumptions on the training data. Each input x_i has unit ℓ_2 norm, and its last entry equals a positive constant. As stated in this assumption, the last entry of each input is a constant, which introduces the bias term in the input layer of the network.
For all i, j ∈ [n], if y_i ≠ y_j, then the distance between x_i and x_j is at least a constant separation parameter. Assumption 3.2 is a weaker version of Assumption 2.1 in Allen-Zhu et al. (2018b), which assumes that every pair of distinct data points is separated by a constant. In comparison, Assumption 3.2 only requires that inputs with different labels are separated, which is a much more practical assumption since it holds for all data distributions with a constant margin, while the data separation distance in Allen-Zhu et al. (2018b) usually depends on the sample size when the examples are generated independently.
We assume that the widths of all hidden layers are of the same order, i.e., the maximum layer width is at most a constant multiple of the minimum layer width. The specific constant is not essential and can be replaced with any other absolute constant.
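To fix ideas, the architecture of Section 3.2 can be sketched in a few lines of code. All names, dimensions and the initialization scale below are our own illustrative choices (in particular, all hidden layers share the same width m, consistent with the assumption above); this is a sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, Ws, v):
    """Output of an L-hidden-layer ReLU network: each layer applies
    h <- sigma(W_l^T h), then the fixed output vector v is applied."""
    h = x
    for W in Ws:
        h = np.maximum(W.T @ h, 0.0)  # entry-wise ReLU
    return float(v @ h)

d, m, L = 4, 8, 3
# illustrative Gaussian weights; the sqrt(2/m) scale is an assumption
Ws = [rng.normal(0, np.sqrt(2.0 / m), size=(d if l == 0 else m, m))
      for l in range(L)]
v = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])  # half +1, half -1
print(forward(rng.normal(size=d), Ws, v))
```

Note that the output is a single scalar, whose sign is used for binary classification.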
3.3 Optimization Algorithms
In this paper, we consider training a deep neural network with Gaussian initialization followed by gradient descent/stochastic gradient descent.
Gaussian initialization. We say that the weight matrices W_1, …, W_L are generated from Gaussian initialization if each column of each W_l is generated independently from a zero-mean Gaussian distribution.

Gradient descent. We consider solving the empirical risk minimization problem (2) by gradient descent with Gaussian initialization: let the initial weight matrices be generated from Gaussian initialization; we then apply the following gradient descent update rule:

where the step size is also known as the learning rate.
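A minimal sketch of Gaussian initialization followed by a gradient descent step is given below. The column variance 2/m is our assumption (He-style scaling); the exact constant of the paper's initialization is not reproduced here, and `grads` stands in for whatever gradients are supplied.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_init(d, m, L):
    """Generate weight matrices by Gaussian initialization.

    Each column is drawn independently from a zero-mean Gaussian;
    the variance 2/m is an assumed He-style scaling.
    """
    return [rng.normal(0.0, np.sqrt(2.0 / m), size=(d if l == 0 else m, m))
            for l in range(L)]

def gd_update(Ws, grads, eta):
    """One gradient descent step: W_l <- W_l - eta * grad_l, for every layer."""
    return [W - eta * g for W, g in zip(Ws, grads)]

Ws = gaussian_init(d=4, m=16, L=3)
Ws = gd_update(Ws, [np.zeros_like(W) for W in Ws], eta=0.1)  # no-op step
```

The update touches every layer simultaneously, matching the full-batch update rule above.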
Stochastic gradient descent. We also consider solving (2) using stochastic gradient descent with Gaussian initialization. Again, let the initial weight matrices be generated from Gaussian initialization. At each iteration, a mini-batch of training examples of fixed batch size is sampled from the training set, and the stochastic gradient is calculated as follows:
The update rule for stochastic gradient descent is then defined as follows:
where the step size plays the same role as in gradient descent.
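A generic mini-batch SGD loop matching the update rule above can be sketched as follows. Here `grad_fn` stands in for the mini-batch stochastic gradient defined above, and sampling without replacement is our own choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(Ws, grad_fn, n, batch_size, eta, iters):
    """Mini-batch SGD: at each iteration, sample a batch of indices and
    take a step against the averaged mini-batch gradient.

    grad_fn(batch_indices, Ws) should return one gradient array per
    weight matrix, averaged over the sampled mini-batch.
    """
    for _ in range(iters):
        batch = rng.choice(n, size=batch_size, replace=False)
        grads = grad_fn(batch, Ws)
        Ws = [W - eta * g for W, g in zip(Ws, grads)]
    return Ws
```

As a quick sanity check, plugging in the gradient of the quadratic ‖W‖_F²/2 (which is W itself) makes the iterates contract geometrically toward zero.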
3.4 Preliminaries
Here we briefly introduce some useful notations and provide some basic calculations regarding the neural network under our setting.

Output after the l-th layer: Given an input x, the output of the neural network after the l-th layer is
where we slightly abuse notation and apply σ(·) entry-wise to vectors, and the output of the zeroth layer is defined to be the network input.

Output of the neural network: The output of the neural network with input x is as follows:
where the last equality holds for any choice of the intermediate layer index.

Gradient of the neural network: The partial gradient of the training loss with respect to the weight matrix of each layer is as follows:
where
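The gradient expression above involves, for each layer, one binary ReLU mask (a diagonal 0/1 matrix recording which units are active) and one weight matrix. The same recursion can be instantiated in a short hand-rolled backpropagation routine and checked against finite differences; the logistic loss and all dimensions below are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, Ws, v):
    """Return the network output f(x) and the per-layer pre-activations."""
    h, pres = x, []
    for W in Ws:
        pres.append(W.T @ h)
        h = np.maximum(pres[-1], 0.0)
    return v @ h, pres

def gradients(x, y, Ws, v):
    """Hand-rolled backpropagation for the logistic loss log(1 + exp(-y f(x))).

    The backward vector picks up one binary ReLU mask (pres > 0, the
    diagonal of the activation matrix) and one weight matrix per layer,
    mirroring the product structure of the gradient formula above.
    """
    f, pres = forward(x, Ws, v)
    hs = [x] + [np.maximum(p, 0.0) for p in pres]
    back = (-y / (1.0 + np.exp(y * f))) * v * (pres[-1] > 0)
    grads = [None] * len(Ws)
    for l in range(len(Ws) - 1, -1, -1):
        grads[l] = np.outer(hs[l], back)
        if l > 0:
            back = (Ws[l] @ back) * (pres[l - 1] > 0)
    return grads

# illustrative sizes (our own choice)
d, m, L = 3, 6, 2
Ws = [rng.normal(0, np.sqrt(2.0 / m), size=(d if l == 0 else m, m))
      for l in range(L)]
v = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])
x, y = rng.normal(size=d), 1.0
g = gradients(x, y, Ws, v)
```

The finite-difference check below perturbs a single weight entry and compares the numerical derivative of the loss with the backpropagated one.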
4 Main Theory
In this section, we show that with random Gaussian initialization, over-parameterization helps gradient-based algorithms, including gradient descent and stochastic gradient descent, converge to the global minimum, i.e., find a point with arbitrarily small training loss.
4.1 Gradient Descent
We provide the following theorem, which characterizes the required numbers of hidden nodes and iterations such that gradient descent can attain the global minimum of the empirical training loss function. Suppose the weight matrices are generated by Gaussian initialization. Then under the assumptions in Section 3.2, if the step size, the number of hidden nodes per layer and the maximum number of iterations are chosen appropriately, then with high probability, gradient descent finds a point whose training loss is at most the target accuracy.

Theorem 4.1 suggests that the required number of hidden nodes and the number of iterations are both polynomial in the number of training examples and the separation parameter. This is consistent with recent work on global convergence in training neural networks (Li and Yuan, 2017; Du et al., 2018b; Allen-Zhu et al., 2018c; Du et al., 2018a; Allen-Zhu et al., 2018b). Moreover, we prove that the dependence on the number of hidden layers is also polynomial, which is similar to Allen-Zhu et al. (2018b) and strictly better than Du et al. (2018a), where the dependence on the number of hidden layers is proved to be exponential. The dependence on the target accuracy varies with the choice of loss function (through the constants in Assumption 3.2).

Based on the results in Theorem 4.1, we can characterize in the following corollary the number of hidden nodes per layer required for gradient descent to find a point with zero training error.
Under the same assumptions as in Theorem 4.1, if the target accuracy is sufficiently small, then gradient descent can find a point with zero training error provided the number of hidden nodes per layer is large enough.
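As a toy numerical illustration of this regime (not the paper's experiment; data, width and step size are our own choices, with the step size scaled like 1/m in the spirit of the theorem's step-size condition), gradient descent on an over-parameterized one-hidden-layer ReLU network rapidly decreases the logistic training loss on margin-separated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setting: n separated points in 2D, wide one-hidden-layer ReLU net
n, d, m = 20, 2, 256
X = rng.normal(size=(n, d))
X[:, 0] += np.sign(X[:, 0])                # enforce a constant margin
Y = np.sign(X[:, 0])
W = rng.normal(0, np.sqrt(2.0 / m), size=(d, m))
v = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

def loss_and_grad(W):
    pre = X @ W                            # (n, m) pre-activations
    f = np.maximum(pre, 0.0) @ v           # network outputs
    loss = np.mean(np.log1p(np.exp(-Y * f)))
    dldf = -Y / (1.0 + np.exp(Y * f))      # derivative of logistic loss
    grad = X.T @ ((dldf[:, None] * v) * (pre > 0)) / n
    return loss, grad

init_loss, _ = loss_and_grad(W)
for _ in range(500):
    _, grad = loss_and_grad(W)
    W -= (1.0 / m) * grad                  # step size scaled like 1/m
final_loss, _ = loss_and_grad(W)
print(init_loss, final_loss)
```

The 1/m step-size scaling compensates for the fact that gradient norms grow with the width, which is the same trade-off that appears in the theory.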
4.2 Stochastic Gradient Descent
Regarding stochastic gradient descent, we make the following additional assumption on the derivative of the loss function ℓ(·), which is necessary to control the optimization trajectory of SGD: there exist positive constants such that, for any argument, the derivative of the loss is bounded both below and above by quantities of the same order in the loss value. Clearly, this assumption is stronger than Assumption 3.2, since in addition to the lower bound on the derivative, we also require that the derivative be upper bounded by a function of the loss of the same order as the lower bound. In special cases the assumption reduces to simpler conditions; for instance, it can imply that ℓ(·) is Lipschitz continuous.
Suppose the weight matrices are generated by Gaussian initialization. Then under the assumptions in Section 3.2 and Assumption 4.2, if the step size, the number of hidden nodes per layer and the number of iterations are chosen appropriately, then with high probability, stochastic gradient descent can find a point whose training loss is at most the target accuracy.
Similar to gradient descent, the following corollary characterizes the number of hidden nodes per layer required for stochastic gradient descent to achieve zero training error. Under the same assumptions as in Theorem 4.2, if the target accuracy is sufficiently small, then stochastic gradient descent can find a point with zero training error provided the number of hidden nodes per layer is large enough. Theorem 4.2 suggests that, to find the global minimum, the required number of hidden nodes and the number of iterations for stochastic gradient descent are also polynomial in the number of training examples, the separation parameter and the number of hidden layers, which matches the result in Allen-Zhu et al. (2018b) for the regression problem. In addition, although it cannot be directly observed in Corollaries 4.1 and 4.2, we remark here that, compared with gradient descent, the numbers of hidden nodes and iterations required for stochastic gradient descent to achieve zero training error are worse by a factor that depends on the loss function. The detailed comparison can be found in the proofs of Theorems 4.1 and 4.2.
5 Proof of the Main Theory
In this section, we provide the proof of the main theory, including Theorems 4.1 and 4.2. Our proofs for these two theorems can be decomposed into the following steps:

We prove the basic properties of Gaussian random matrices in Theorem 5, which characterize the structure of the neural network after Gaussian random initialization.

We show that as long as the product of the iteration number and the step size is smaller than a certain quantity, the iterates of (stochastic) gradient descent remain in the perturbation region centered around the Gaussian initialization, which justifies the application of Theorem 5 to the iterates of (stochastic) gradient descent.

We finalize the proof by ensuring that (stochastic) gradient descent converges before the iterates leave the perturbation region, by requiring the number of hidden nodes in each layer to be large enough.
The following theorem summarizes some high-probability properties of neural networks with Gaussian random initialization, which are pivotal for establishing the subsequent theoretical analysis. Suppose that the weight matrices are generated by Gaussian initialization. Then under the assumptions in Section 3.2, there exist absolute constants such that, as long as
(3) 
holds, then with high probability all of the following results hold:


for all and .

for all and such that .

for all .

for all and .

for all and .

for all , and all with .

for all , and all with .

For any , there exist at least nodes satisfying
Theorem 5 summarizes all the properties of the Gaussian initialization that we need. In the sequel, we always assume that results 1-8 hold for the Gaussian initialization. The parameters in Theorem 5 are introduced to characterize the activation pattern of the ReLU activation functions in each layer. The values that directly enter the final convergence proof are derived during the proof of Theorem 5 in terms of the perturbation level. Therefore, the condition given by (3) is satisfied under the final assumptions given in Theorem 4.1 and Theorem 4.2.
We perform a perturbation analysis on the collection of random matrices with a given perturbation level, which defines a perturbation region centered at the initialization. Let W and W' be two collections of weight matrices, and for each layer consider the outputs of the corresponding hidden layers of the ReLU network with a common input and weight matrices W and W', respectively. We summarize their properties in the following theorem. Suppose that the weight matrices are generated via Gaussian initialization, and all results 1-8 in Theorem 5 hold. Let the perturbed weight matrices be within the prescribed distance of the initialization. Then under the assumptions in Section 3.2, there exist absolute constants such that, as long as the perturbation level is small enough, the following results hold:


for all .

for all and .

for all and .

.

for all .

for all satisfying , and any .

The squared Frobenius norm of the partial gradient with respect to the weight matrix in the last hidden layer has the following lower bound:
where .

The spectral norms of gradients and stochastic gradients at each layer have the following upper bounds:
where the second bound involves the mini-batch size used in SGD.
The gradient lower bound provided in result 7 implies that, within the perturbation region, the empirical loss function of the deep neural network enjoys good local curvature properties, which play an essential role in the convergence proof of (stochastic) gradient descent. The gradient upper bound in result 8 quantifies how much the weight matrices of the neural network can change during (stochastic) gradient descent, and is used to guarantee that the weight matrices do not escape from the perturbation region during the training process.
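The "iterates stay near initialization" phenomenon can also be observed numerically in a toy one-hidden-layer setting (all choices below, including the 1/m step-size scaling, are our own): the relative Frobenius distance of the weights from their Gaussian initialization remains small throughout training.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy one-hidden-layer setting: track distance from initialization
n, d, m = 10, 2, 1024
X = rng.normal(size=(n, d))
X[:, 0] += np.sign(X[:, 0])            # margin-separated labels
Y = np.sign(X[:, 0])
W0 = rng.normal(0, np.sqrt(2.0 / m), size=(d, m))
v = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

W = W0.copy()
for _ in range(100):
    pre = X @ W
    f = np.maximum(pre, 0.0) @ v
    dldf = -Y / (1.0 + np.exp(Y * f))
    grad = X.T @ ((dldf[:, None] * v) * (pre > 0)) / n
    W -= (1.0 / m) * grad              # step size scaled like 1/m

rel = np.linalg.norm(W - W0) / np.linalg.norm(W0)
print(rel)  # relative distance from the Gaussian initialization
```

The intuition matches the theory: as the width grows, gradient norms grow while the step size shrinks accordingly, so the total displacement of the weights stays small relative to the norm of the initialization.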
5.1 Proof of Theorem 4.1
We organize our proof in the following three steps: (1) we first assume that during gradient descent, each iterate stays in the preset perturbation region centered at the initialization, and use the results in Theorem 5 to establish the convergence guarantee; (2) we prove an upper bound on the number of iterations for which the distance between the iterates and the initial point does not exceed the perturbation radius; (3) we compute the minimum number of hidden nodes such that gradient descent achieves the target accuracy before exceeding the upper bound derived in step (2).
For step (1), the following lemma provides the convergence guarantee of gradient descent under the assumption that all iterates stay in the preset perturbation region.
Suppose that the weight matrices are generated via Gaussian initialization, and all results 1-8 in Theorem 5 hold. Under the assumptions in Section 3.2, if all iterates stay within the perturbation region at the stated perturbation level, and the step size and number of hidden nodes are chosen appropriately for each case of the loss function, then gradient descent is able to find a point whose training loss is at most the target accuracy.
The following lemma provides an upper bound on the number of iterations for which the distance between the iterates and the initial point does not exceed the perturbation radius.
Suppose that the weight matrices are generated by Gaussian initialization, and all results 1-8 in Theorem 5 hold. Then there exists a constant such that, for all iteration numbers and step sizes satisfying the stated condition, the iterates remain within the perturbation region.
Proof of Theorem 4.1.
The proof is straightforward. By Lemma 5.1, it suffices to show that the requirement on the number of hidden nodes derived in Lemma 5.1 is satisfied under the assumptions of the theorem. This yields the stated lower bound on the number of hidden nodes per layer in each of the cases covered by Lemma 5.1. Moreover, the required number of iterations can be derived directly by combining the results of Lemma 5.1 with the choice of the step size, so we omit the details here. ∎
5.2 Proof of Theorem 4.2
Similar to the proof for gradient descent, we first present the following lemma, which characterizes the convergence of stochastic gradient descent for training the ReLU network under the assumption that all iterates stay in the preset perturbation region.