A Comparative Analysis of the Optimization and Generalization Property of Two-layer Neural Network and Random Feature Models Under Gradient Descent Dynamics

04/08/2019 · by Weinan E, et al. · Princeton University

A fairly comprehensive analysis is presented for the gradient descent dynamics used to train two-layer neural network models in the situation where the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space. Our analysis suggests strongly that, in terms of `implicit regularization', two-layer neural network models do not outperform the kernel method.


1 Introduction

Optimization and generalization are two central issues in the theoretical analysis of machine learning models. These issues are of special interest for modern neural network models, not only because of their practical success [18, 19], but also because these models are often heavily over-parametrized and traditional machine learning theory does not seem to apply directly [21, 30]. For this reason, there has been a lot of recent theoretical work centered on these issues [15, 16, 12, 11, 2, 8, 10, 31, 29, 28, 25, 27]. One issue of particular interest is whether the gradient descent (GD) algorithm can produce models that optimize the empirical risk and at the same time generalize well for the population risk. For over-parametrized two-layer neural network models, which will be the focus of this paper, it is generally understood that, as a result of the non-degeneracy of the associated Gram matrix [29, 12], optimization can be accomplished by the gradient descent algorithm regardless of the quality of the labels, even though the empirical risk function is non-convex. In this regard, one can say that over-parametrization facilitates optimization.

The situation with generalization is a different story. There has been a lot of interest in the so-called “implicit regularization” effect [21], i.e. the idea that by tuning the parameters of the optimization algorithm, one might be able to guide it towards network models that generalize well, without the need to add any explicit regularization terms (see below for a review of the existing literature). Despite these efforts, it is fair to say that the general picture has yet to emerge.

In this paper, we perform a rather thorough analysis of the gradient descent algorithm for training two-layer neural network models. We study the case in which the parameters in both the input and output layers are updated – the case found in practice. In the heavily over-parametrized regime, for general initializations, we prove that the results of [12] still hold, namely that the gradient descent dynamics converges to a global minimum exponentially fast, regardless of the quality of the labels. However, we also prove that the functions obtained are uniformly close to the ones found by an associated kernel method, with the kernel defined by the initialization. In the second part of the paper, we study the more general situation in which the assumption of over-parametrization is relaxed. We provide sharp estimates for both the empirical and population risks. In particular, we prove that for target functions in the appropriate reproducing kernel Hilbert space (RKHS) [3], the generalization error can be made small if a suitable early stopping strategy is adopted for the gradient descent algorithm.

Our results imply that, in the absence of explicit regularization, over-parametrized two-layer neural networks behave very much like kernel methods: they can always fit any set of random labels, but in order to generalize, the target function has to lie in the right RKHS. In light of the optimal generalization error bounds proved in [13] for regularized models, one is tempted to conclude that explicit regularization is necessary for two-layer neural network models to fully realize their potential in expressing complex functional relationships.

1.1 Related work

The seminal work of [30] presented both numerical and theoretical evidence that over-parametrized neural networks can fit random labels. Building upon earlier work on the non-degeneracy of some Gram matrices [29], Du et al. went a step further by proving that the GD algorithm can find global minima of the empirical risk for sufficiently over-parametrized two-layer neural networks [12]. This result was extended to multi-layer networks in [11, 2]. A related result for infinitely wide neural networks was obtained in [14], and a similar result for a general setting appears in [9].

The issue of generalization is less clear. [10] established generalization error bounds for solutions produced by the online stochastic gradient descent (SGD) algorithm with early stopping when the target function is in a certain RKHS. Similar results were proved in [20] for the classification problem, and in [8] for offline SGD algorithms. In [1], generalization results were proved for the GD algorithm for target functions that can be represented by the underlying neural network models. More recently, in [4], a generalization bound was derived for GD solutions using a data-dependent norm. This norm is bounded if the target function belongs to the appropriate RKHS. However, their error bounds are not strong enough to rule out the curse of dimensionality. Indeed, the results of the present paper suggest that the curse of dimensionality does occur in their setting (see Theorem 3.4).

2 Preliminaries

Throughout this paper, we use the notation $[n] = \{1, 2, \dots, n\}$ when $n$ is a positive integer. We use $\|\cdot\|_2$ and $\|\cdot\|_F$ to denote the $\ell_2$ (operator) and Frobenius norms for matrices, respectively. We let $S^{d-1} = \{x \in \mathbb{R}^d : \|x\|_2 = 1\}$, and use $\pi_0$ to denote the uniform distribution over $S^{d-1}$. We use $X \lesssim Y$ to indicate that there exists an absolute constant $C>0$ such that $X \le C Y$; $X \gtrsim Y$ is similarly defined. If $f$ is a function defined on $S^{d-1}$ and $\rho$ is a probability distribution on $S^{d-1}$, we let $\|f\|_\rho$ denote its $L^2(\rho)$ norm.

2.1 Problem setup

We focus on the regression problem with a training data set $\{(x_i, y_i)\}_{i=1}^n$ consisting of i.i.d. samples drawn from a distribution $\rho$, which is assumed fixed but known only through the samples. In this paper, we assume $x \in S^{d-1}$ and $|y| \le 1$. We are interested in fitting the data by a two-layer neural network:

(1)

where $\{(a_k, b_k)\}_{k=1}^m$ are the parameters of the $m$ neurons and $\theta$ denotes all the parameters collectively. Here $\sigma$ is the nonlinear activation function, taken to be the ReLU in the analysis below. We will omit the subscript $m$ in the notation $f_m(\cdot\,;\theta)$ if there is no danger of confusion. In formula (1), we omit the bias term for notational simplicity. The effect of the bias term can be incorporated if we think of $x$ as $(x^\top, 1)^\top$.
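For concreteness, one standard parametrization consistent with the description above is the following (the exact normalization in front of the sum, e.g. an additional $1/m$ or $1/\sqrt{m}$ factor, is an assumption for this sketch and may differ from the paper's display (1)):
\[
f_m(x;\theta) \;=\; \sum_{k=1}^{m} a_k\, \sigma\!\big(b_k^{\top} x\big),
\qquad \theta = \{(a_k, b_k)\}_{k=1}^{m},\quad a_k \in \mathbb{R},\ b_k \in \mathbb{R}^{d}.
\]
In this notation, the bias trick amounts to replacing $x$ by $\tilde x = (x^\top, 1)^\top$, so that a hypothetical per-neuron bias $c_k$ is absorbed via $b_k^\top x + c_k = \tilde b_k^\top \tilde x$ with $\tilde b_k = (b_k^\top, c_k)^\top$.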

The ultimate goal is to minimize the population risk, i.e. the expected loss over the data distribution $\rho$. But in practice, we can only work with the empirical risk computed from the training samples.
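For reference, here is a sketch of the standard least-squares definitions consistent with the surrounding text (the symbols $R$ and $\hat R_n$ and the factor $1/2$ are conventions chosen for this sketch; the paper's own displays may differ):
\[
R(\theta) \;=\; \frac{1}{2}\,\mathbb{E}_{(x,y)\sim \rho}\big[\big(f(x;\theta)-y\big)^2\big],
\qquad
\hat R_n(\theta) \;=\; \frac{1}{2n}\sum_{i=1}^{n}\big(f(x_i;\theta)-y_i\big)^2 .
\]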

Gradient Descent

We are interested in analyzing the properties of the gradient descent algorithm, which updates the parameters by taking steps of size $\eta$ (the learning rate) along the negative gradient of the empirical risk. For simplicity, we will focus on its continuous version, the gradient descent (GD) dynamics:

(2)
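Concretely, with the least-squares empirical risk assumed above, the update and its small-step-size limit read as follows (a sketch; the paper's display (2) is the continuous-time version):
\[
\theta^{(s+1)} \;=\; \theta^{(s)} - \eta\, \nabla_\theta \hat R_n\big(\theta^{(s)}\big),
\qquad\text{and, as } \eta \to 0,\qquad
\frac{d\theta(t)}{dt} \;=\; -\,\nabla_\theta \hat R_n\big(\theta(t)\big).
\]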
Initialization

We assume that $\{a_k(0)\}_{k=1}^m$ are i.i.d. random variables drawn from $\mathcal{N}(0, \beta^2)$, and that $\{b_k(0)\}_{k=1}^m$ are i.i.d. random variables drawn from $\pi_0$, the uniform distribution over $S^{d-1}$. Here $\beta$ controls the magnitude of the initialization, and it may depend on $m$. Other initialization schemes can also be considered (e.g. distributions other than the ones above, or other ways of initializing $b_k(0)$); the arguments needed do not change much from the ones for this special case.
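A minimal Python sketch of this initialization scheme (the function name and the particular choice of beta are illustrative assumptions, not the paper's code):

    import numpy as np

    def init_two_layer(m, d, beta, seed=None):
        """Sample a_k(0) ~ N(0, beta^2) and b_k(0) uniform on the unit sphere S^{d-1}."""
        rng = np.random.default_rng(seed)
        a = beta * rng.standard_normal(m)                 # output-layer weights, magnitude set by beta
        b = rng.standard_normal((m, d))
        b /= np.linalg.norm(b, axis=1, keepdims=True)     # normalized Gaussians are uniform on the sphere
        return a, b

    # example: m = 1000 neurons in dimension d = 10; beta = 1/sqrt(m) is just one possible choice
    a0, b0 = init_two_layer(m=1000, d=10, beta=1.0 / np.sqrt(1000), seed=0)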

2.2 Assumption on the input data

With the activation function $\sigma$ and the distribution $\pi_0$, we can define two positive definite (PD) functions. (We say that a continuous symmetric function $k(\cdot,\cdot)$ is positive definite if and only if, for any set of distinct points $x_1, \dots, x_n$, the kernel matrix $K$ with $K_{ij} = k(x_i, x_j)$ is positive definite.)

For a fixed training sample, the corresponding normalized kernel matrices are defined by

(3)
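The two PD functions are built from $\sigma$ and $\pi_0$. For the ReLU activation, the forms assumed in the sketch below are $\mathbb{E}_{b\sim\pi_0}[\sigma(b^\top x)\,\sigma(b^\top x')]$ and $\mathbb{E}_{b\sim\pi_0}[\sigma'(b^\top x)\,\sigma'(b^\top x')\, x^\top x']$; these expressions, and the division by $n$ used as the normalization, are assumptions made here for illustration. The following Python sketch estimates the corresponding normalized kernel matrices by Monte Carlo and checks the positivity of their smallest eigenvalues, which is what Assumption 1 below requires:

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def kernel_matrices(X, num_mc=20000, seed=0):
        """Monte Carlo estimates of the two normalized kernel matrices (assumed forms, see text)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        B = rng.standard_normal((num_mc, d))
        B /= np.linalg.norm(B, axis=1, keepdims=True)   # rows uniform on the sphere, i.e. samples from pi_0
        Z = B @ X.T                                     # (num_mc, n) pre-activations b . x_i
        Ka = relu(Z).T @ relu(Z) / num_mc / n           # E[sigma(b.x_i) sigma(b.x_j)] / n
        S = (Z > 0).astype(float)                       # sigma'(b.x_i) for the ReLU
        Kb = (S.T @ S / num_mc) * (X @ X.T) / n         # E[sigma' sigma'] * <x_i, x_j> / n
        return Ka, Kb

    # toy inputs on the sphere; Assumption 1 asks these smallest eigenvalues to be positive
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 10))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    Ka, Kb = kernel_matrices(X)
    print(np.linalg.eigvalsh(Ka).min(), np.linalg.eigvalsh(Kb).min())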

Throughout this paper, we make the following assumption on the training set.

Assumption 1.

For the given training set $\{(x_i, y_i)\}_{i=1}^n$, we assume that the smallest eigenvalues of the two kernel matrices defined above are both positive, i.e.

Let .

Remark 1.

In general, these smallest eigenvalues depend on the data set. For any PD function, the associated Hilbert-Schmidt integral operator is defined by

Let $\lambda_i$ denote its $i$-th largest eigenvalue. If the $x_i$ are independently drawn from $\pi_0$, it was proved in [6] that, with high probability, the smallest eigenvalues of the kernel matrices are bounded from below in terms of the eigenvalues of the corresponding integral operators. Using a similar idea, [29] provided lower bounds for these smallest eigenvalues based on a geometric discrepancy that quantifies the degree of uniformity of the $x_i$. In this paper, we simply take this as our basic assumption.
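For completeness, the operator referred to above can be stated in its usual form (this is the standard textbook definition, with $\pi_0$ taken as the base distribution, rather than a quotation of the paper's display): for a PD function $k$,
\[
(T_k f)(x) \;=\; \int k(x, x')\, f(x')\, d\pi_0(x'), \qquad f \in L^2(\pi_0),
\]
and $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$ in the remark are the eigenvalues of $T_k$.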

2.3 The random feature model

We introduce the following random feature model [22] as a reference for the two-layer neural network model

(4)

where the inner-layer weights are fixed at the corresponding initial values $\{b_k(0)\}_{k=1}^m$ of the neural network model and are not part of the parameters to be trained; only the output-layer coefficients are trained. The corresponding gradient descent dynamics is given by

(5)

This dynamics is relatively simple since it is linear.
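To see the linearity concretely, here is a sketch under the least-squares empirical risk assumed above, with the features frozen at the initial inner-layer weights. Write $\phi_k(x) = \sigma(b_k(0)^\top x)$ and $\Phi \in \mathbb{R}^{n\times m}$ with $\Phi_{ik} = \phi_k(x_i)$ (this matrix notation is ours). Then
\[
f(x; a) \;=\; \sum_{k=1}^m a_k\,\phi_k(x),
\qquad
\frac{da(t)}{dt} \;=\; -\,\nabla_a \hat R_n\big(a(t)\big) \;=\; -\,\frac{1}{n}\,\Phi^\top\big(\Phi\, a(t) - y\big),
\]
a linear ODE in $a$, which can be solved in closed form given $a(0)$.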

3 Analysis of the over-parametrized case

In this section, we consider the optimization and generalization properties of the GD dynamics in the over-parametrized regime. We introduce two Gram matrices, one associated with each of the two groups of parameters, defined by

Let $e(t)$ denote the vector of residuals on the training set, $e_i(t) = f(x_i; \theta(t)) - y_i$. It is then easy to see that

(6)
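The role of the two Gram matrices can be seen from the following computation (a sketch assuming the least-squares empirical risk above, the parametrization written after (1), and our own names $G^{(a)}$, $G^{(b)}$ for the two matrices; the paper's normalization may differ). With the residual vector $e(t)$ defined above, along the GD dynamics (2) one has
\[
\frac{de_i}{dt}
\;=\; \sum_{k=1}^m\Big[\sigma(b_k^\top x_i)\,\dot a_k + a_k\,\sigma'(b_k^\top x_i)\,x_i^\top \dot b_k\Big]
\;=\; -\sum_{j=1}^n\big(G^{(a)}_{ij}(t) + G^{(b)}_{ij}(t)\big)\,e_j(t),
\]
with
\[
G^{(a)}_{ij} = \frac{1}{n}\sum_{k=1}^m \sigma(b_k^\top x_i)\,\sigma(b_k^\top x_j),
\qquad
G^{(b)}_{ij} = \frac{1}{n}\sum_{k=1}^m a_k^2\,\sigma'(b_k^\top x_i)\,\sigma'(b_k^\top x_j)\, x_i^\top x_j .
\]
Both matrices are positive semi-definite, so $\frac{d}{dt}\hat R_n = -\frac{1}{n}\,e^\top(G^{(a)}+G^{(b)})\,e \le 0$: the empirical risk is non-increasing along the flow, and (6) presumably expresses this residual dynamics up to normalization conventions.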

Since both Gram matrices are positive semi-definite, we have

3.1 Properties of the initialization

Lemma 1.

For any fixed $\delta \in (0,1)$, with probability at least $1-\delta$ over the random initialization, we have

where .

The proof of this lemma can be found in Appendix C.

In addition, at the initialization, the Gram matrices satisfy

In fact, we have

Lemma 2.

For , if , we have, with probability at least over the random choice of

The proof of this lemma is deferred to Appendix D.

3.2 Gradient descent near the initialization

We define a neighborhood of the initialization by

(7)

Using the lemma above, we conclude that, for any fixed $\delta \in (0,1)$, with probability at least $1-\delta$ over the random initialization, we must have

for all .

For the GD dynamics, we define the exit time of this neighborhood by

(8)
Lemma 3.

For any fixed $\delta \in (0,1)$, assume that . Then, with probability at least $1-\delta$ over the random initialization, the following holds for any :

Proof.

We have

where the last inequality is due to the fact that . This completes the proof. ∎

We define two quantities:

(9)

The following is the most crucial characterization of the GD dynamics.

Proposition 3.1.

For any fixed $\delta \in (0,1)$, assume . Then, with probability at least $1-\delta$, the following holds for any :

Proof.

First, we have

To facilitate the analysis, we define the following two quantities,

Using Lemma 3, we have

(10)

Combining the two inequalities above, we get

Using Lemma 1 and the fact that , we have

(11)

Therefore,

Inserting the above estimates back into (10), we obtain

Since , we have

(12)

Therefore we have , which leads to

The following lemma makes precise how these two quantities depend on the relevant parameters.

Lemma 4.

For any , assume . Let . If , we have

(13)

If , we have

(14)

3.3 Global convergence for arbitrary labels

Proposition 3.1 and Lemma 4 tell us that no matter how large is, we have

This actually implies that the GD dynamics always stays in the neighborhood defined in (7), i.e. the exit time defined in (8) is infinite.

Theorem 3.2.

For any fixed $\delta \in (0,1)$, assume . Then with probability at least $1-\delta$ over the random initialization, we have

for any .

Proof.

According to Lemma 3, we only need to prove that the exit time is infinite. Assume, for the purpose of contradiction, that it is finite.

Let us first consider the Gram matrix . Since is Lipschitz and , we have

This leads to

(15)

Next we turn to the other Gram matrix. Define the event

Since $\sigma$ is ReLU, this event happens only if . By the fact that and that $b_k(0)$ is drawn from the uniform distribution over the sphere, we have . Therefore the entry-wise deviation of this Gram matrix satisfies,

where

Note that . In addition, by Proposition 3.1, we have

Hence using , we obtain

(16)

By the Markov inequality, with probability we have

Consequently, with probability we have

(17)

Combining (15) and (17), we get

where the last inequality comes from Lemma 4. Taking , we get

The above result contradicts the definition of the exit time. Therefore the exit time is infinite. ∎

Remark 2.

Compared with Proposition 3.1, the above theorem imposes a stronger assumption on the network width: . This is due to the lack of continuity of $\sigma'$ when handling the corresponding Gram matrix. If $\sigma'$ were continuous, we could remove this dependence. In addition, it is also possible to remove this assumption in the case when , since in this case the Gram matrix is dominated by .

Remark 3.

Theorem 3.2 is closely related to the result of Du et al. [12], where exponential convergence to global minima was first proved for over-parametrized two-layer neural networks. But it improves the result of [12] in two respects. First, as is done in practice, we allow the parameters in both layers to be updated, while [12] freezes the parameters in the output layer. Second, our analysis does not impose any specific requirement on the scale of the initialization, whereas the proof of [12] relies on a specific scaling.
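To complement the theory, here is a small numerical sketch of the "fit arbitrary labels" phenomenon; the width, learning rate, and initialization scale below are arbitrary illustrative choices, not the constants appearing in Theorem 3.2:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 20, 5, 2000                        # few samples, very wide network
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = rng.choice([-1.0, 1.0], size=n)          # completely random labels

    beta = 1.0 / np.sqrt(m)                      # one possible choice of the initialization scale
    a = beta * rng.standard_normal(m)
    B = rng.standard_normal((m, d))
    B /= np.linalg.norm(B, axis=1, keepdims=True)

    lr = 0.01
    for step in range(2001):
        Z = X @ B.T                              # (n, m) pre-activations
        H = np.maximum(Z, 0.0)                   # ReLU features
        e = H @ a - y                            # residuals f(x_i) - y_i
        if step % 400 == 0:
            print(step, 0.5 * np.mean(e ** 2))   # empirical risk; decays towards zero
        grad_a = H.T @ e / n                     # gradient w.r.t. output weights
        grad_B = ((Z > 0) * np.outer(e, a)).T @ X / n   # gradient w.r.t. inner weights
        a -= lr * grad_a
        B -= lr * grad_B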

3.4 Characterization of the whole GD trajectory

In the last subsection, we showed that very wide networks can fit arbitrary labels. In this subsection, we study the functions represented by such networks. We show that for highly over-parametrized two-layer neural networks, the solution of the GD dynamics is uniformly close to the solution for the random feature model starting from the same initial function.

Theorem 3.3.

Assume . Denote the solution of the GD dynamics for the random feature model by

where is the solution of the GD dynamics (5). For any fixed $\delta \in (0,1)$, assume that . Then with probability at least $1-\delta$ we have

(18)

where .

Remark 4.

Again, the extra factor in the condition on the network width can be removed if $\sigma$ is assumed to be smooth or if is assumed to be small (see Remark 2 following Theorem 3.2).

Remark 5.

If , the right-hand side of (18) goes to $0$ as $m \to \infty$. For example, if we take , we have

(19)

Hence this theorem says that the GD trajectory of a very wide network is uniformly close to the GD trajectory of the related kernel method (5).
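This closeness can be checked numerically along the lines of the following sketch (illustrative only: the discrete step size, the width, the initialization scale $\beta = 1/\sqrt{m}$, and the finite test set used to approximate the uniform norm are all choices made for this sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m, lr = 20, 5, 5000, 0.01
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = rng.standard_normal(n)
    Xtest = rng.standard_normal((200, d))
    Xtest /= np.linalg.norm(Xtest, axis=1, keepdims=True)

    # shared initialization for both models
    B0 = rng.standard_normal((m, d))
    B0 /= np.linalg.norm(B0, axis=1, keepdims=True)
    a0 = (1.0 / np.sqrt(m)) * rng.standard_normal(m)

    a_nn, B_nn = a0.copy(), B0.copy()            # two-layer network: both layers trained
    a_rf = a0.copy()                             # random feature model: inner weights frozen at B0

    def predict(Xq, a, B):
        return np.maximum(Xq @ B.T, 0.0) @ a

    for step in range(1001):
        if step % 200 == 0:
            gap = np.max(np.abs(predict(Xtest, a_nn, B_nn) - predict(Xtest, a_rf, B0)))
            print(step, gap)                     # sup-norm gap on the test points; stays small for large m
        # one GD step for the two-layer network
        Z = X @ B_nn.T
        H = np.maximum(Z, 0.0)
        e = H @ a_nn - y
        grad_a = H.T @ e / n
        grad_B = ((Z > 0) * np.outer(e, a_nn)).T @ X / n
        a_nn -= lr * grad_a
        B_nn -= lr * grad_B
        # one GD step for the random feature model (output weights only)
        H0 = np.maximum(X @ B0.T, 0.0)
        e0 = H0 @ a_rf - y
        a_rf -= lr * (H0.T @ e0 / n)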

Proof of Theorem 3.3

We define

(20)

Recalling the definition of in Section 3, we know that . For any , let be two -dimensional vectors defined by

(21)

For GD dynamics (2), define . Then we have,

(22)

For GD dynamics (5) of the random feature model, we define . Then, we have

(23)