Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data

08/03/2018 ∙ by Yuanzhi Li, et al. ∙ Stanford University ∙ University of Wisconsin-Madison

Neural networks have many successful applications, but much less theoretical understanding has been gained. Towards bridging this gap, we study the problem of learning a two-layer overparameterized ReLU neural network for multi-class classification via stochastic gradient descent (SGD) from random initialization. In the overparameterized setting, when the data comes from mixtures of well-separated distributions, we prove that SGD learns a network with a small generalization error, even though the network has enough capacity to fit arbitrary labels. Furthermore, the analysis provides interesting insights into several aspects of learning neural networks, which are verified by empirical studies on synthetic data and on the MNIST dataset.


1 Introduction

Neural networks have achieved great success in many applications, but despite a recent surge of theoretical studies, much remains to be explained. For example, it is empirically observed that learning with stochastic gradient descent (SGD) in the overparameterized setting (i.e., learning a large network with more parameters than training data points) does not lead to overfitting neyshabur2014search ; zhang2016understanding . Some recent studies use the low complexity of the learned solution to explain the generalization, but usually do not explain how SGD or its variants favors low complexity solutions (i.e., the inductive bias or implicit regularization) bartlett2017spectrally ; neyshabur2017pac . It is also observed that overparameterization and proper random initialization can help the optimization sutskever2013importance ; hardt2016identity ; soudry2016no ; livni2014computational , but it is not well understood why a particular initialization can improve learning. Moreover, most existing works trying to explain these phenomena rely on unrealistic assumptions about the data distribution, such as Gaussian-ness and/or linear separability zhong2017recovery ; soltanolkotabi2017theoretical ; ge2017learning ; li2017convergence ; brutzkus2017sgd .

This paper thus proposes to study the problem of learning a two-layer overparameterized neural network using SGD for classification, on data with a more realistic structure. In particular, the data in each class is a mixture of several components, and components from different classes are well separated in distance (but the components in each class can be close to each other). This is motivated by practical data. For example, on the dataset MNIST lecun1998gradient , each class corresponds to a digit and can have several components corresponding to different writing styles of the digit, and an image in it is a small perturbation of one of the components. On the other hand, images that belong to the same component are closer to each other than to an image of another digit. Analysis in this setting can then help understand how the structure of the practical data affects the optimization and generalization.

In this setting, we prove that when the network is sufficiently overparameterized, SGD provably learns a network close to the random initialization and with a small generalization error. This result shows that in the overparameterized setting and when the data is well structured, though in principle the network can overfit, SGD with random initialization introduces a strong inductive bias and leads to good generalization.

Our result also shows that the overparameterization requirement and the learning time depend on the parameters inherent to the structure of the data but not on the ambient dimension of the data. More importantly, the analysis behind the result provides some interesting theoretical insights into various aspects of learning neural networks. It reveals that the success of learning crucially relies on overparameterization and random initialization. These two combined lead to a tight coupling around the initialization between SGD and another learning process that has a benign optimization landscape. This coupling, together with the structure of the data, allows SGD to find a solution that has a low generalization error while still remaining in the aforementioned neighborhood of the initialization. Our work makes a step towards explaining how overparameterization and random initialization help optimization, and how the inductive bias and good generalization arise from the SGD dynamics on structured data. Some other more technical implications of our analysis will be discussed in later sections, such as the existence of a good solution close to the initialization, and the low-rankness of the learned weights. Complementary empirical studies on synthetic data and on the benchmark dataset MNIST provide positive support for the analysis and insights.

2 Related Work

Generalization of neural networks. Empirical studies show interesting phenomena about the generalization of neural networks: practical neural networks have the capacity to fit random labels of the training data, yet they still have good generalization when trained on practical data neyshabur2014search ; zhang2016understanding ; arpit2017closer . These networks are overparameterized in that they have more parameters than statistically necessary, and their good generalization cannot be explained by naïvely applying traditional theory. Several lines of work have proposed certain low complexity measures of the learned network and derived generalization bounds to better explain the phenomena. bartlett2017spectrally ; neyshabur2017pac ; moustapha2017parseval proved spectrally-normalized margin-based generalization bounds, dziugaite2017computing ; neyshabur2017pac derived bounds from a PAC-Bayes approach, and arora2018stronger ; zhou2018compressibility ; baykal2018data derived bounds from the compression point of view. They, in general, do not address why the low complexity arises. This paper takes a step towards this direction, though on two-layer networks and a simplified model of the data.

Overparameterization and implicit regularization. The training objectives of overparameterized networks in principle have many (approximate) global optima, and some generalize better than others keskar2016large ; dinh2017sharp ; arpit2017closer , while empirical observations imply that the optimization process in practice prefers those with better generalization. It is then an interesting question how this implicit regularization or inductive bias arises from the optimization and the structure of the data. Recent studies consider SGD for different tasks, such as logistic regression soudry2017implicit and matrix factorization gunasekar2017implicit ; ma2017implicit ; li2017algorithmic . More related to our work is brutzkus2017sgd , which studies the problem of learning a two-layer overparameterized network on linearly separable data and shows that SGD converges to a global optimum with good generalization. Our work studies the problem on data with a well-clustered (and potentially not linearly separable) structure that we believe is closer to practical scenarios and thus can advance this line of research.

Theoretical analysis of learning neural networks. There also exists a large body of work that analyzes the optimization landscape of learning neural networks kawaguchi2016deep ; soudry2016no ; xie2016diversity ; ge2017learning ; soltanolkotabi2017theoretical ; tian2017analytical ; brutzkus2017globally ; zhong2017recovery ; li2017convergence ; boob2017theoretical . These works in general rely on unrealistic assumptions about the data, such as Gaussian-ness, and/or strong assumptions about the network, such as using only linear activations. They also do not study the implicit regularization by the optimization algorithms.

3 Problem Setup

In this work, a two-layer neural network with ReLU activation for $k$-class classification is given by $f(x) = (f_1(x), \dots, f_k(x))$ such that for each $i \in [k]$:

$$f_i(x) = \sum_{r=1}^{m} a_{i,r}\, \mathrm{ReLU}(\langle w_r, x \rangle),$$

where $w_1, \dots, w_m \in \mathbb{R}^d$ are the weights for the $m$ neurons in the hidden layer, $a_{i,r} \in \mathbb{R}$ are the weights of the top layer, and $\mathrm{ReLU}(z) = \max\{z, 0\}$.
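To make this architecture concrete, here is a minimal NumPy sketch of the forward pass; the shapes, names, and sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_layer_relu_forward(x, W, A):
    """Forward pass of the two-layer ReLU network.

    x : (d,) input vector.
    W : (m, d) hidden-layer weights, one row per hidden unit w_r.
    A : (k, m) top-layer weights, A[i, r] = a_{i,r}.
    Returns the k scores f_i(x) = sum_r a_{i,r} * ReLU(<w_r, x>).
    """
    hidden = np.maximum(W @ x, 0.0)   # ReLU activations of the m hidden units
    return A @ hidden                 # one score per class

# Tiny usage example with made-up sizes.
rng = np.random.default_rng(0)
d, m, k = 8, 32, 3
W = rng.normal(size=(m, d))
A = rng.normal(size=(k, m))
x = rng.normal(size=d)
x /= np.linalg.norm(x)               # unit-norm input, anticipating assumption (A2) below
print(two_layer_relu_forward(x, W, A))
```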

Assumptions about the data. The data is generated from a distribution $\mathcal{D}$ as follows. There are $k \cdot l$ unknown distributions $\{\mathcal{D}_{i,j}\}_{i \in [k], j \in [l]}$ over $\mathbb{R}^d$ and probabilities $p_{i,j} \geq 0$ such that $\sum_{i \in [k], j \in [l]} p_{i,j} = 1$. Each data point $(x, y)$ is i.i.d. generated by: (1) sample a pair $(i, j)$ with probability $p_{i,j}$; (2) set the label $y = i$, and sample $x$ from $\mathcal{D}_{i,j}$. Assume we sample $N$ points $\{(x_s, y_s)\}_{s=1}^{N}$.
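A sketch of this sampling procedure, where the component distributions are stand-ins (small balls around given reference points) used only to illustrate steps (1) and (2):

```python
import numpy as np

def sample_dataset(centers, probs, radius, n, rng):
    """Sample n labeled points from the mixture described above.

    centers : (k, l, d) array; centers[i, j] is a reference point of component D_{i,j}.
    probs   : (k, l) array of mixture probabilities p_{i,j}, summing to 1.
    radius  : each point is drawn within `radius` of its center (a placeholder for D_{i,j}).
    Returns (X, y) with labels y in {0, ..., k-1}.
    """
    k, l, d = centers.shape
    flat_probs = probs.reshape(-1)
    X, y = np.zeros((n, d)), np.zeros(n, dtype=int)
    for s in range(n):
        idx = rng.choice(k * l, p=flat_probs)        # step (1): sample the pair (i, j)
        i, j = divmod(idx, l)
        noise = rng.normal(size=d)
        noise *= radius * rng.uniform() / np.linalg.norm(noise)
        x = centers[i, j] + noise                     # step (2): sample x from D_{i,j}
        X[s] = x / np.linalg.norm(x)                  # keep inputs unit norm, as in (A2)
        y[s] = i                                      # the label is the class index i
    return X, y
```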

Let us define the support of a distribution $\mathcal{D}'$ with density $p_{\mathcal{D}'}$ over $\mathbb{R}^d$ as $\mathrm{supp}(\mathcal{D}') = \{x : p_{\mathcal{D}'}(x) > 0\}$, the distance between two sets $S_1, S_2 \subseteq \mathbb{R}^d$ as $\mathrm{dist}(S_1, S_2) = \min_{x \in S_1, x' \in S_2} \|x - x'\|_2$, and the diameter of a set $S \subseteq \mathbb{R}^d$ as $\mathrm{diam}(S) = \max_{x, x' \in S} \|x - x'\|_2$. Then we are ready to make the assumptions about the data.

  1. (Separability) There exists $\delta > 0$ such that for every $i \neq i'$ and every $j, j' \in [l]$, $\mathrm{dist}\big(\mathrm{supp}(\mathcal{D}_{i,j}), \mathrm{supp}(\mathcal{D}_{i',j'})\big) \geq \delta$. Moreover, for every $i \in [k]$ and $j \in [l]$, $\mathrm{diam}\big(\mathrm{supp}(\mathcal{D}_{i,j})\big) \leq \lambda\delta$ for a sufficiently small $\lambda$. (The assumption on $\lambda$ can be relaxed by paying a larger polynomial in the relevant parameters in the sample complexity. We will not prove it in this paper because we would like to highlight the key factors.)

  2. (Normalization) Any $x$ from the distribution $\mathcal{D}$ has $\|x\|_2 = 1$.

A few remarks are in order. Instead of having one distribution per class, we allow $l$ arbitrary distributions in each class, which we believe is a better fit to real data. For example, in MNIST, a class can be the digit 1, and the distributions within it can correspond to different styles of writing the digit.

Assumption (A2) is for simplicity, while (A1) is our key assumption. With $l$ distributions inside each class, our assumption allows data that is not linearly separable, e.g., XOR-type data in $\mathbb{R}^2$ where there are two classes, one consisting of two balls of diameter $\lambda\delta$ with centers $e_1$ and $-e_1$, and the other consisting of two balls of the same diameter with centers $e_2$ and $-e_2$. See Figure 3 in Appendix C for an illustration. Moreover, essentially the only assumption we have here is that $\lambda$ is sufficiently small. When $l = 1$, a constant $\lambda$ suffices, which is the minimal requirement on the order of $\lambda$ for the distribution to be efficiently learnable. Our work allows larger $l$, so that the data can be more complicated inside each class. In this case, we require the separation to also be higher, i.e., $\lambda$ to be smaller. When we increase $l$ to refine the distributions inside each class, we should expect the diameters of the distributions to become smaller as well. As long as the diameters shrink at a rate faster than the growth of the total number of distributions, our assumption will hold.
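As a quick sanity check on this XOR-type example, the following sketch (with illustrative centers and radii that are assumptions, not values from the paper) samples the four components and verifies numerically that the cross-class distance is much larger than the component diameters, even though the two classes are not linearly separable.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
# XOR-type data: class 0 has components near +e1 and -e1, class 1 near +e2 and -e2.
centers = {(0, 0): np.array([1.0, 0.0]), (0, 1): np.array([-1.0, 0.0]),
           (1, 0): np.array([0.0, 1.0]), (1, 1): np.array([0.0, -1.0])}
spread = 0.05                                     # small component spread (illustrative)

samples = {key: c + spread * rng.normal(size=(200, d))
           for key, c in centers.items()}

def set_distance(A, B):
    """Minimum pairwise distance between two finite point sets."""
    return np.min(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1))

def diameter(A):
    """Maximum pairwise distance within a finite point set."""
    return np.max(np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1))

cross = min(set_distance(samples[(0, j)], samples[(1, j2)])
            for j in range(2) for j2 in range(2))
diam = max(diameter(S) for S in samples.values())
print(f"min cross-class distance ~ {cross:.2f}, max component diameter ~ {diam:.2f}")
# Cross-class distance is far larger than the diameters, yet no single hyperplane
# separates class 0 (near +/- e1) from class 1 (near +/- e2).
```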

Assumptions about the learning process. We will only learn the hidden-layer weights $w_1, \dots, w_m$ and keep the top-layer weights $a_{i,r}$ fixed, to simplify the analysis. Since the ReLU activation is positively homogeneous, the effect of overparameterization can still be studied, and a similar approach has been adopted in previous work brutzkus2017sgd . So the network is also written as $f_i(x, W) = \sum_{r=1}^{m} a_{i,r}\, \mathrm{ReLU}(\langle w_r, x \rangle)$ for $W = (w_1, \dots, w_m)$.

We assume the learning is from a random initialization:

  3. (Random initialization) $w_r^{(0)} \sim \mathcal{N}(0, \sigma^2 I)$ and $a_{i,r} \sim \mathcal{N}(0, 1)$ for all $r \in [m]$ and $i \in [k]$, with the scale $\sigma$ chosen to decrease appropriately with the number of hidden units $m$.

The learning process minimizes the cross-entropy loss over the softmax, defined as:

$$L(W) = -\frac{1}{N}\sum_{s=1}^{N} \log \frac{e^{f_{y_s}(x_s, W)}}{\sum_{i=1}^{k} e^{f_i(x_s, W)}}.$$

Let $L_{(x,y)}(W) = -\log \frac{e^{f_{y}(x, W)}}{\sum_{i=1}^{k} e^{f_i(x, W)}}$ denote the cross-entropy loss for a particular point $(x, y)$.

We consider a minibatch SGD of batch size $B$, number of iterations $T$, and learning rate $\eta$ as the following process: randomly divide the $N$ training examples into $T = N/B$ batches, each of size $B$. Let the indices of the examples in the $t$-th batch be $\mathcal{B}_t$. At each iteration, the update is

$$w_r^{(t+1)} = w_r^{(t)} - \eta \sum_{s \in \mathcal{B}_t} \frac{\partial L_{(x_s, y_s)}(W^{(t)})}{\partial w_r}, \qquad r \in [m]. \quad (1)$$

(Strictly speaking, $L_{(x,y)}$ does not have a gradient everywhere due to the non-smoothness of ReLU. One can view $\frac{\partial L_{(x,y)}(W)}{\partial w_r}$ as a convenient notation for the right hand side of (1).)
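A minimal sketch of this training procedure, with only the hidden-layer weights updated; the initialization scale and hyperparameters are illustrative assumptions rather than the values required by the analysis.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)          # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_minibatch_sgd(X, y, k, m, eta, batch_size, rng):
    """One pass of minibatch SGD on the hidden weights W; top weights A stay fixed."""
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(m), size=(m, d))   # illustrative init scale
    A = rng.normal(size=(k, m))                            # fixed top-layer weights
    order = rng.permutation(n)                             # random division into batches
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        H = Xb @ W.T                                       # (B, m) pre-activations
        gate = (H > 0).astype(float)                       # ReLU activation patterns
        out = (H * gate) @ A.T                             # (B, k) network outputs
        R = softmax(out)
        R[np.arange(len(idx)), yb] -= 1.0                  # d(cross-entropy)/d(output)
        grad_W = ((R @ A) * gate).T @ Xb                   # chain rule through the ReLU gate
        W -= eta * grad_W                                  # update (1)
    return W, A
```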

4 Main Result

For notational simplicity, for a target error $\epsilon$ (to be specified later), with high probability (or w.h.p.) means with probability $1 - 1/\mathrm{poly}(\cdot)$ for a sufficiently large polynomial in the relevant parameters ($k$, $l$, $1/\delta$, $1/\epsilon$), and $\tilde{O}(\cdot)$ hides polylogarithmic factors in these parameters.

Theorem 4.1.

Suppose the assumptions (A1)(A2)(A3) are satisfied. Then for every $\epsilon > 0$, there is $M = \mathrm{poly}(k, l, 1/\delta, 1/\epsilon)$ such that for every $m \geq M$, after doing a minibatch SGD with batch size $B = \mathrm{poly}(k, l, 1/\delta, 1/\epsilon)$ and learning rate $\eta = \tilde{O}\big(\tfrac{1}{m\,\mathrm{poly}(k, l, 1/\delta, 1/\epsilon)}\big)$ for $T = \mathrm{poly}(k, l, 1/\delta, 1/\epsilon)$ iterations, with high probability:

$$\Pr_{(x, y) \sim \mathcal{D}}\big[\forall j \neq y:\; f_y(x, W^{(T)}) > f_j(x, W^{(T)})\big] \geq 1 - \epsilon.$$

Our theorem implies that if the data satisfies our assumptions and we parameterize the network properly, then we only need polynomially many samples to achieve a good prediction error. This error is measured directly on the true distribution $\mathcal{D}$, not merely on the input data used to train the network. Our result is also dimension free: there is no dependency on the ambient dimension of the data; the complexity is fully captured by the parameters $k$, $l$, $\delta$, and $\epsilon$. Moreover, no matter how much the network is overparameterized, the total number of iterations increases only by logarithmic factors in $m$. So we can overparameterize by a sub-exponential amount without significantly increasing the complexity.

Furthermore, we can always treat each input example as an individual distribution, so the diameter of each component is zero. In this case, our theorem indicates that as long as $m$ is a sufficiently large polynomial in the number of training examples and $1/\delta$, where $\delta$ is now the minimal distance between examples with different labels, we can actually fit arbitrary labels of the input data. However, since the total number of iterations only depends on the number of components and the separation, when $m$ is this large but the input data is actually structured (with small $k$, $l$ and large $\delta$), SGD can still achieve a small generalization error, even when the network has enough capacity to fit arbitrary labels of the training examples (and this can also be done by SGD). Thus, we prove that SGD has a strong inductive bias on structured data: instead of finding a bad global optimum that can fit arbitrary labels, it actually finds one with good generalization guarantees. This gives a more thorough explanation of the empirical observations in neyshabur2014search ; zhang2016understanding .

5 Intuition and Proof Sketch for A Simplified Case

To train a neural network with ReLU activations, there are two questions that need to be addressed:

  1. Why can SGD optimize the training loss, or even find a critical point? Since the underlying objective is highly non-smooth, existing theorems do not give any finite convergence rate of SGD for training neural networks with ReLU activations.

  2. Why does the trained network generalize, even when the capacity is large enough to fit random labels of the input data? This is known as the inductive bias of SGD.

This work takes a step towards answering these two questions. We show that when the network is overparameterized, it becomes more “pseudo smooth”, which makes it easier for SGD to minimize the training loss, and furthermore, this does not hurt the generalization error. Our proof is based on the following important observation:

  • The more we overparameterize the network, the less likely the activation pattern for one neuron and one data point will change in a fixed number of iterations.

This observation allows us to couple the gradient of the true neural network with a “pseudo gradient” where the activation pattern for each data point and each neuron is fixed. That is, when computing the “pseudo gradient”, for a fixed neuron and a fixed data point, whether that hidden node is activated on that data point is taken to be the same across all iterations. (But for a fixed iteration, the pattern can differ across neurons and across data points.) We are able to prove that unless the generalization error is small, the “pseudo gradient” will always be large. Moreover, we show that the network is effectively smooth in this regime, and thus SGD can minimize the loss.

We then show that when the number of hidden neurons increases, with a properly decreasing learning rate, the total number of iterations it takes to minimize the loss stays roughly unchanged. However, the total number of iterations during which we can couple the true gradient with the pseudo one increases. Thus, there is a polynomially large number of hidden units $m$ such that we can couple these two gradients until the network reaches a small generalization error.

5.1 A Simplified Case: No Variance

Here we illustrate the proof sketch for a simplified case and Appendix A provides the proof. The proof for the general case is provided in Appendix B. In the simplified case, we further assume:

  1. (No variance) Each $\mathcal{D}_{i,j}$ is a single data point $x_{i,j}$, and we are doing full-batch gradient descent as opposed to minibatch SGD.

Then we overload the loss notation as $L(W) = \sum_{i \in [k], j \in [l]} p_{i,j}\, L_{(x_{i,j},\, i)}(W)$, and the gradient with respect to $w_r$ is

$$\frac{\partial L(W)}{\partial w_r} = \sum_{i, j} p_{i,j} \sum_{c=1}^{k} a_{c,r}\,\big(\mathrm{softmax}_c(x_{i,j}, W) - \mathbb{1}[c = i]\big)\, \mathbb{1}\big[\langle w_r, x_{i,j} \rangle \geq 0\big]\, x_{i,j},$$

where $\mathrm{softmax}_c(x, W) = e^{f_c(x, W)} / \sum_{i'} e^{f_{i'}(x, W)}$.

Following the intuition above, we define the pseudo gradient as

$$\frac{\widetilde{\partial} L(W)}{\partial w_r} = \sum_{i, j} p_{i,j} \sum_{c=1}^{k} a_{c,r}\,\big(\mathrm{softmax}_c(x_{i,j}, W) - \mathbb{1}[c = i]\big)\, \mathbb{1}\big[\langle w_r^{(0)}, x_{i,j} \rangle \geq 0\big]\, x_{i,j},$$

where it uses the indicator $\mathbb{1}[\langle w_r^{(0)}, x_{i,j} \rangle \geq 0]$ instead of $\mathbb{1}[\langle w_r, x_{i,j} \rangle \geq 0]$ as in the true gradient. That is, the activation pattern is set to be that at the initialization. Intuitively, the pseudo gradient is similar to the gradient of a pseudo network (but not exactly the same), defined by evaluating the network with each neuron's activation pattern frozen at its value at initialization. Coupling the gradients is then similar to coupling the true network and the pseudo network.
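The difference can be made concrete with the following sketch, which computes both objects with the same chain rule but gates each neuron either by its current activation pattern (true gradient) or by its pattern at initialization (pseudo gradient); names, shapes, and the exact form of the weighting are assumptions consistent with the setup above, not the paper's exact formulas.

```python
import numpy as np

def true_and_pseudo_grad(X, y, probs, W, W0, A):
    """Gradients of the weighted cross-entropy loss with respect to the hidden weights W.

    X : (n, d) rows are the points x_{i,j}; y : (n,) labels; probs : (n,) weights p_{i,j}.
    W : (m, d) current hidden weights; W0 : (m, d) weights at initialization; A : (k, m).
    Returns (true_grad, pseudo_grad), both of shape (m, d).
    """
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    gate_now = (X @ W.T > 0).astype(float)    # activation pattern at the current weights
    gate_init = (X @ W0.T > 0).astype(float)  # activation pattern frozen at initialization
    out = ((X @ W.T) * gate_now) @ A.T        # the network itself is always evaluated at W
    R = softmax(out)
    R[np.arange(len(y)), y] -= 1.0            # softmax residuals (softmax_c - 1[c = label])
    R *= probs[:, None]                       # weight each point by p_{i,j}
    true_grad = ((R @ A) * gate_now).T @ X    # gated by the current activation pattern
    pseudo_grad = ((R @ A) * gate_init).T @ X # gated by the initial activation pattern
    return true_grad, pseudo_grad
```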

For simplicity, we introduce shorthand for the softmax residual of each point on each label (defined precisely in Appendix A). Roughly, if the residual of a point on its correct label is small, then the output on the correct label is relatively larger compared to the outputs on the other labels, so the classification error is small.

We prove the following two main lemmas. The first says that at each iteration, the total number of hidden units whose gradient can be coupled with the pseudo one is quite large.

Lemma 5.1 (Coupling).

W.h.p. over the random initialization, for every , for every , we have that for at least fraction of :

The second lemma says that the pseudo gradient is large unless the error is small.

Lemma 5.2.

For , for every (that depends on , etc.) with , there exists at least fraction of such that

We now illustrate how to use these two lemmas to show the convergence for a small enough learning rate . For simplicity, let us assume that and . Thus, by Lemma 5.2 we know that unless , there are fraction of such that . Moreover, by Lemma 5.1 we know that we can pick so , which implies that there are fraction of such that as well. For small enough learning rate , doing one step of gradient descent will thus decrease by , so it converges in iterations. In the end, we just need to make sure that so we can always apply the coupling Lemma 5.1. By we know that this is true as long as . A small can be shown to lead to a small generalization error.
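For intuition, the per-step decrease invoked here is the standard descent calculation; a minimal LaTeX sketch, under the assumption that the loss behaves like a $\beta$-smooth function in the coupling region (this is the “pseudo smoothness” discussed above) and that a constant fraction of the hidden units have gradient norm at least $g$:

```latex
% One gradient step W' = W - \eta \nabla L(W) on a \beta-smooth loss:
\[
L(W') \;\le\; L(W) - \eta\,\|\nabla L(W)\|^2 + \tfrac{\beta\eta^2}{2}\,\|\nabla L(W)\|^2
      \;\le\; L(W) - \tfrac{\eta}{2}\,\|\nabla L(W)\|^2
      \qquad\text{for } \eta \le 1/\beta .
\]
% If a constant fraction of the m hidden units r satisfy \|\partial L / \partial w_r\| \ge g,
% then \|\nabla L(W)\|^2 = \Omega(m g^2), so each step decreases the loss by \Omega(\eta m g^2).
```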

6 Discussion of Insights from the Analysis

Our analysis, though for learning two-layer networks on well structured data, also sheds some light upon learning neural networks in more general settings.

Generalization. Several lines of recent work explain the generalization phenomenon of overparameterized networks by the low complexity of the learned networks, from the viewpoints of spectrally-normalized margins bartlett2017spectrally ; neyshabur2017pac ; moustapha2017parseval , compression arora2018stronger ; zhou2018compressibility ; baykal2018data , and PAC-Bayes dziugaite2017computing ; neyshabur2017pac .

Our analysis has partially explained how SGD (with proper random initialization) on structured data leads to low complexity from the compression and PAC-Bayes points of view. We have shown that in a neighborhood of the random initialization, w.h.p. the gradients are similar to those of another benign learning process, and thus SGD can reduce the error and reach a good solution while still in the neighborhood. The closeness to the initialization then means the weights (or more precisely the difference between the learned weights and the initialization) can be easily compressed. In fact, such empirical observations have been made and connected to generalization in nagarajan2017implicit ; arora2018stronger . Furthermore, arora2018stronger explicitly point out such a compression using a helper string (corresponding to the initialization in our setting). arora2018stronger also point out that the compression view can be regarded as a more explicit form of the PAC-Bayes view, and thus our intuition also applies to the latter.

The existence of a solution with a small generalization error near the initialization is itself not obvious. Intuitively, on structured data, the updates are structured signals spread out across the weights of the hidden neurons. Then for prediction, the randomly initialized part of the weights has strong cancellation, while the structured signal part of the weights collectively affects the output. Therefore, the latter can be much smaller than the former while the network can still give accurate predictions. In other words, with high probability there can be a solution not far from the initialization.

Some insight is also provided on the low rank of the weights. More precisely, when the data are well clustered around a few patterns, the accumulated updates (the difference between the learned weights and the initialization) should be approximately low rank, which can be seen from checking the SGD updates. However, when the difference is small compared to the initialization, the spectrum of the final weight matrix is dominated by that of the initialization and thus will tend to be closer to that of a random matrix. Again, such observations/intuitions have been made in the literature and connected to compression and generalization (e.g., arora2018stronger ).

Implicit regularization vs. structure of the data. Existing work has analyzed the implicit regularization of SGD on logistic regression soudry2017implicit , matrix factorization gunasekar2017implicit ; ma2017implicit ; li2017algorithmic , and learning two-layer networks on linearly separable data brutzkus2017sgd . Our setting and analysis techniques are novel compared to the existing work. One motivation to study structured data is to understand the role the structure of the data plays in the implicit regularization, i.e., the observation that the solution learned on less structured or even random data is further away from the initialization. Indeed, our analysis shows that when the network size is fixed (and sufficiently overparameterized), learning over poorly structured data (with more components and smaller separation) needs more iterations, and thus the solution can deviate more from the initialization and have higher complexity. An extreme and especially interesting case is when the network is overparameterized so that in principle it can fit the training data by viewing each point as a component, while actually the data come from structured distributions with a small number of components. In this case, we can show that it still learns a network with a small generalization error; see the more technical discussion in Section 4.

We also note that our analysis is under the assumption that the network is sufficiently overparameterized, i.e., the number of hidden units $m$ is a sufficiently large polynomial of $k$, $l$, and other related parameters measuring the structure of the data. There could be the case that $m$ is smaller than this polynomial but is more than sufficient to fit the data, i.e., the network is still overparameterized. Though in this case the analysis still provides useful insight, it does not fully apply; see our experiments with relatively small $m$. On the other hand, the empirical observations neyshabur2014search ; zhang2016understanding suggest that practical networks are highly overparameterized, so our intuition may still be helpful there.

Effect of random initialization. Our analysis also shows how proper random initialization helps the optimization and consequently the generalization. Essentially, it guarantees that w.h.p., for weights close to the initialization, many hidden ReLU units will have the same activation patterns (i.e., activated or not) as at the initialization, which means the gradients in the neighborhood look like those when the hidden units have fixed activation patterns. This allows SGD to make progress when the loss is large, and eventually learn a good solution. We also note that it is essential to carefully set the scale of the initialization, which is an extensively studied topic martens2010deep ; sutskever2013importance . Our initialization has a scale related to the number of hidden units, which is particularly useful when the network size is varying, and thus can be of interest in such practical settings.

7 Experiments

Figure 1: Results on the synthetic data.
Figure 2: Results on the MNIST data.

This section aims at verifying some key implications of the analysis: (1) the activation patterns of the hidden units couple with those at initialization; (2) the distance of the learned solution from the initialization is relatively small compared to the size of the initialization; (3) the accumulated updates (i.e., the difference between the learned weight matrix and the initialization) have approximately low rank. These are indeed supported by the results on the synthetic and the MNIST data. Additional experiments are presented in Appendix D.

Setup. The synthetic data are 1000-dimensional and consist of $k$ classes, each having $l$ components. Each component has equal probability $1/(kl)$ and is a Gaussian with isotropic covariance, and its mean is i.i.d. sampled from another isotropic Gaussian distribution, with the scales chosen so that the components are well separated. Training and test data points are then sampled from this mixture.

The network structure and the learning process follow those in Section 3; the number of hidden units varies in the experiments, and the weights are initialized as described there. On the synthetic data, the SGD is run for a fixed number of steps with a fixed batch size and learning rate; the same protocol (with its own step count, batch size, and learning rate) is used on MNIST.
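A sketch of such a synthetic data generator, where the number of classes, components, sample sizes, and noise scales are placeholder values (the section above specifies them only symbolically):

```python
import numpy as np

def make_synthetic(n, d=1000, k=4, l=2, center_scale=1.0, noise_scale=0.1, seed=0):
    """Gaussian mixture with k classes and l components per class, equal probabilities.

    Component means are i.i.d. Gaussian; each point is its component mean plus isotropic
    Gaussian noise.  The defaults here are illustrative, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    means = rng.normal(scale=center_scale / np.sqrt(d), size=(k, l, d))
    comp = rng.integers(0, k * l, size=n)          # each component has probability 1/(k*l)
    ci, cj = comp // l, comp % l
    X = means[ci, cj] + rng.normal(scale=noise_scale / np.sqrt(d), size=(n, d))
    return X, ci                                    # the label is the class index

X_train, y_train = make_synthetic(n=2000)
X_test, y_test = make_synthetic(n=500, seed=1)
```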

Besides the test accuracy, we report three quantities corresponding to the three observations/implications to be verified. First, for coupling, we compute the fraction of hidden units whose activation pattern changed compared to that at initialization; here, the activation pattern is defined as 1 if the input to the ReLU is positive and 0 otherwise. Second, for distance, we compute the relative ratio $\|W^{(t)} - W^{(0)}\|_F / \|W^{(0)}\|_F$, where $W^{(t)}$ is the weight matrix at time $t$. Finally, for the rank of the accumulated updates, we plot the singular values of $W^{(T)} - W^{(0)}$, where $T$ is the final step. All experiments are repeated 5 times, and the mean and standard deviation are reported.
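These three diagnostics can be computed from weight checkpoints as in the sketch below; the array names (W0, Wt, WT for the initial, current, and final hidden-layer weight matrices, and X for the evaluation data) are assumptions for illustration.

```python
import numpy as np

def activation_change_fraction(X, W0, Wt):
    """Fraction of (hidden unit, example) pairs whose ReLU activation pattern
    differs from the pattern at initialization."""
    return np.mean((X @ W0.T > 0) != (X @ Wt.T > 0))

def relative_distance(W0, Wt):
    """Relative Frobenius distance of the current weights from the initialization."""
    return np.linalg.norm(Wt - W0) / np.linalg.norm(W0)

def update_singular_values(W0, WT):
    """Singular values of the accumulated updates W_T - W_0 (to inspect approximate low rank)."""
    return np.linalg.svd(WT - W0, compute_uv=False)

# Example usage with random placeholders standing in for actual training checkpoints.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
W0 = rng.normal(size=(64, 50))
WT = W0 + 0.01 * rng.normal(size=(64, 50))
print(activation_change_fraction(X, W0, WT), relative_distance(W0, WT))
print(update_singular_values(W0, WT)[:5])
```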

Results. Figure 1 shows the results on the synthetic data. The test accuracy quickly converges to a high value, which is even more significant with a larger number of hidden units, showing that the overparameterization helps the optimization and generalization. Recall that our analysis shows that for a learning rate decreasing linearly with the number of hidden nodes, the number of iterations needed to achieve a desired accuracy should be roughly the same, which is also verified here. The activation pattern difference ratio is small, indicating a strong coupling. The relative distance is also small, so the final solution is indeed close to the initialization. Finally, the top 20 singular values of the accumulated updates are much larger than the rest, while the spectrum of the weight matrix itself does not have such structure, which is also consistent with our analysis.

Figure 2 shows the results on MNIST. The observations are in general similar to those on the synthetic data (though less significant), and the observed trends become more evident with more overparameterization. Some additional results (e.g., varying the variance of the synthetic data) are provided in the appendix that also support our theory.

8 Conclusion

This work studied the problem of learning a two-layer overparameterized ReLU neural network via stochastic gradient descent (SGD) from random initialization, on data with structure inspired by practical datasets. While our work makes a step towards a theoretical understanding of SGD for training neural networks, it is far from conclusive. In particular, real data could be separable with respect to a different metric than the Euclidean one, or even a non-convex distance given by some manifold. We view this as an important open direction.

Acknowledgements

We would like to thank the anonymous reviewers of NeurIPS’18 and Jason Lee for helpful comments. This work was supported in part by FA9550-18-1-0166, NSF grants CCF-1527371, DMS-1317308, Simons Investigator Award, Simons Collaboration Grant, and ONR-N00014-16-1-2329. Yingyu Liang would also like to acknowledge that support for this research was provided by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin Madison with funding from the Wisconsin Alumni Research Foundation.

References

Appendix A Proofs for the Simplified Case

In the simplified case, we make the following simplifying assumption:

  1. (No variance) Each $\mathcal{D}_{i,j}$ is a single data point $x_{i,j}$, and we are doing full-batch gradient descent as opposed to minibatch SGD.

Recall that the loss is then $L(W) = \sum_{i \in [k], j \in [l]} p_{i,j}\, L_{(x_{i,j},\, i)}(W)$. The gradient descent update on $w_r$ is given by

and the gradient is

where . The pseudo gradient is defined as

Let us call

When clear from the context, we write as . Then we can simplify the above expression as:

By definition, ’s satisfy:

  1. .

  2. .

Furthermore, indicates the “classification error”. The smaller is, the smaller the classification error is.

In the following subsections, we first show that the gradient is coupled with the pseudo gradient, then show that if the classification error is large then the pseudo gradient is large, and finally prove the convergence.

A.1 Coupling

We will show that is close to in the following sense:

Lemma A.1 (Coupling, Lemma 5.1 restated).

W.h.p. over the random initialization, for every , for every , we have that for at least fraction of :

Proof.

W.h.p. we know that every . Thus, for every and every , we have

which implies that .

Now, for every , we consider the set such that

For every and every , we know that for every :

which implies that

This implies that .

Now, we need to bound the size of . Since , by a standard property of Gaussians we directly have that for . ∎

A.2 Error Large $\Rightarrow$ Gradient Large

The pseudo gradient can be rewritten as the following summation:

where

We would like to show that if some of these quantities are large, a good fraction of the hidden units will have a large pseudo gradient. The first step is to show that for any fixed choice (that does not depend on the random initialization), with good probability (over the random initialization) the corresponding quantity is large; see Lemma A.2. Then we will take a union bound over an epsilon net to show that for every choice (that can depend on the initialization), at least a good fraction of the hidden units have a large pseudo gradient; see Lemma A.3.

Lemma A.2 (The geometry of ).

For any possible fixed such that , we have:

Clearly, without can be arbitrarily small if, say, and . However, would prevent the cancellation of those ’s.

Proof of Lemma A.2.

We will first prove that

is large with good probability.

Let us decompose into:

where . For every , consider the event defined as

  1. , and

  2. for all : .

By the definition of initialization , we know that:

and

By assumption we know that for every :

This implies that

Thus if we pick , taking a union bound we know that

Moreover, since and is independent of , we know that .

The following proof will condition on this event, and then treat as fixed and let

be the only random variable. In this way, we will have: for every

such that and for every , since and ,

which is a linear function of . With this information, we can rewrite as:

where and is some linear function in . Thus, we know that

is a convex function with . Then applying Lemma A.5 gives

Since for , conditional on the density , which implies that

Thus we have:

(2)

Now we can look at . By the random initialization of , and since by our assumption are not functions of , a standard tail bound of Gaussian random variables shows that for every fixed and every :

Taking and putting it together with inequality (2) completes the proof. ∎

Now, we can take an epsilon net and switch the order of the quantifiers in Lemma A.2 as shown in the following lemma.

Lemma A.3 (Lemma 5.2 restated).

For , for every (that depends on , etc.) with , there exists at least fraction of such that

This lemma implies that if the classification error is large, then many hidden units will have a large gradient.

Proof of Lemma A.3.

We first consider fixed . First of all, using the randomness of we know that with probability at least ,

Now, applying Lemma A.2, we know that

which implies that for fixed choices, the probability that fewer than the desired fraction of the hidden units satisfy the bound is no more than a value given by:

Moreover, for every , for two different such that for all : . Moreover, since w.h.p. we know that every , it shows:

which implies that we can take an -net over with . Thus, the probability that there exists , such that there are no more than fraction of with is no more than:

With