Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors? Better understanding of this question of generalization may improve practical applications of deep networks. In this paper we show that with cross-entropy loss it is surprisingly simple to induce significantly different generalization performances for two networks that have the same architecture, the same meta parameters and the same training error: one can either pretrain the networks with different levels of "corrupted" data or simply initialize the networks with weights of different Gaussian standard deviations. A corollary of recent theoretical results on overfitting shows that these effects are due to an intrinsic problem of measuring test performance with a cross-entropy/exponential-type loss, which can be decomposed into two components both minimized by SGD -- one of which is not related to expected classification performance. However, if we factor out this component of the loss, a linear relationship emerges between training and test losses. Under this transformation, classical generalization bounds are surprisingly tight: the empirical/training loss is very close to the expected/test loss. Furthermore, the empirical relation between classification error and normalized cross-entropy loss seems to be approximately monotonic.
Generalization Puzzles in Deep Networks
Despite many successes of deep networks and a growing amount of research, several fundamental questions remain unanswered, among which is a key puzzle about the apparent lack of generalization, defined as convergence of the training performance to the test performance with increasing size of the training set. How can a network predict well without generalization? What is the relationship between training and test performances?
In this paper, we investigate the question of why two deep networks with the same training loss have different testing performances. This question is valuable in theory and in practice since training loss is an important clue for choosing among different deep learning architectures. It is therefore worth studying how much training loss can tell us about generalization. In the rest of the paper, we will use the term “error” to mean “classification error” and the term “loss” to mean “cross-entropy loss”, the latter being the objective minimized by stochastic gradient descent (SGD).
In addition to training error and loss, there are many factors (such as choices of network architecture) that can affect generalization performance and we cannot exhaustively study them in this paper. Therefore, we restrict our models to have the same architecture and training settings within each experiment. We tried different architectures in different experiments and observed consistent results.
First we start with a common observation: even when two networks have the same architecture, the same optimization meta parameters and the same training loss, they usually have different test performances (i.e. error and loss), presumably because the stochastic minimization process converges to different minima among the many that exist in the loss landscape [11, 13, 12].
With standard settings the differences are usually small (though significant, as shown later). We therefore propose two approaches to magnify the effect:
Initialize networks with different levels of “random pretraining”: the network is pretrained for a specified number of epochs on “corrupted” training data — the labels of a portion of the examples are swapped with each other in a random fashion.
Initialize the weights of the networks with different standard deviations of a diagonal Gaussian distribution. As it turns out, different standard deviations yield different test performance.
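Both manipulations can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code: the function names `corrupt_labels` and `gaussian_init`, the corruption scheme details, and all sizes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_labels(y, fraction, rng):
    """Swap the labels of a random `fraction` of the examples with each other:
    the chosen labels are permuted among themselves, so no new label values
    are introduced and the overall label histogram is unchanged."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = y[rng.permutation(idx)]
    return y

def gaussian_init(shape, std, rng):
    """Initialize a weight matrix from a zero-mean Gaussian with a chosen std."""
    return rng.normal(0.0, std, size=shape)

y = rng.integers(0, 10, size=1000)          # 1000 examples, 10 classes
y_half_corrupted = corrupt_labels(y, 0.5, rng)

W_small = gaussian_init((256, 256), 0.01, rng)  # small-std initialization
W_large = gaussian_init((256, 256), 1.0, rng)   # large-std initialization
```

In the first approach a network would be pretrained for some epochs on `y_half_corrupted` before training on the clean labels; in the second, `W_small` and `W_large` stand in for two different initializations of the same layer.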
In the previous section, we observed that it is possible to obtain networks that have the same training loss (and error) but very different test performances. This indicates that training loss is not a good proxy of test loss. In general, it is quite common to get zero training error (and a very small training loss) when using overparametrized networks. Therefore, comparing models based on training statistics becomes very difficult. In an extreme case of this situation, recent work [15] shows that an overparametrized deep network can fit a randomly labeled training set and of course fail to generalize at all. In some sense, deep nets seem to “fool” training loss: extremely low training loss can be achieved without any generalization.
Previous theoretical work [12] has provided an answer to the above puzzle. In this paper, we derive a corollary of the theory and motivate it in a simple way, showing why this "super-fitting" phenomenon happens and under which conditions generalization holds for deep networks.
Notation: We define (as in [12]) a deep network with K layers, with the usual elementwise scalar activation functions σ(z): ℝ → ℝ, as the set of functions f(W; x) = σ(W_K σ(W_{K−1} ⋯ σ(W_1 x))), where the input is x ∈ ℝ^d and the weights are given by the matrices W_k, one per layer, with matching dimensions. We use the symbol W as a shorthand for the set of K weight matrices W_1, …, W_K. For simplicity we consider here the case of binary classification in which f takes scalar values, implying that the last layer matrix W_K is a row vector. There are no biases apart from the input layer, where the bias is instantiated by one of the input dimensions being a constant. The activation function in this paper is the ReLU activation.
Consider different zero minima of the empirical risk obtained with the same network on the same training set. Can we predict their expected error from empirical properties only? A natural way to approach the problem of ranking two different minimizers of the empirical risk starts with the “positive homogeneity” property of ReLU networks (see [12]) :
f(W; x) = ρ_1 ρ_2 ⋯ ρ_K f(V; x),   (1)

where W_k = ρ_k V_k with ρ_k > 0 and ‖V_k‖ = 1.
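The positive homogeneity of ReLU networks can be checked numerically. A minimal sketch (the network shape and the scale factors are illustrative): scaling each layer by a positive factor ρ_k multiplies the output by the product of the factors and leaves its sign, hence the classification, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu_net(weights, x):
    """Forward pass of a bias-free ReLU network: f(W; x) = W_K σ(… σ(W_1 x))."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)   # elementwise ReLU
    return weights[-1] @ h           # last layer linear, scalar output

W = [rng.normal(size=(20, 10)), rng.normal(size=(20, 20)), rng.normal(size=(1, 20))]
x = rng.normal(size=10)

rhos = [2.0, 0.5, 3.0]               # positive per-layer scale factors
W_scaled = [r * w for r, w in zip(rhos, W)]

f = relu_net(W, x)
f_scaled = relu_net(W_scaled, x)
# Positive homogeneity: the output is multiplied by prod(rho_k), sign unchanged.
assert np.allclose(f_scaled, np.prod(rhos) * f)
```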
This property is valid for layerwise normalization under any norm. Note that f(W; x) and f(V; x) have the same classification performance on any given (test) set. It follows that different empirical minimizers should be compared in terms of their normalized form: the factors ρ_k affect an exponential-type loss – driving it to zero by increasing the ρ_k to infinity – but do not change the classification performance, which depends only on the sign of f. Consider the cross-entropy for two classes, given by

L(f) = Σ_{n=1}^{N} log(1 + e^{−y_n f(x_n)}).   (2)
What is the right norm to be used to normalize the capacity of deep networks? The "right" normalization should make different minimizers equivalent from the point of view of their intrinsic capacity – for instance, equivalent in terms of their Rademacher complexity. The positive homogeneity property implies that the correct norm for a multilayer network should be based on a product of layerwise norms. Since different norms are equivalent in finite-dimensional spaces, it is enough to use the Frobenius norm for each layer.
The argument can be seen more directly by looking at a typical generalization bound (out of many) of the form [4], where ℓ is now a bounded loss function: with probability at least 1 − δ,

L(f) ≤ L̂(f) + c_1 C_N(F) + c_2 √(ln(1/δ)/(2N)),   (3)
where L(f) is the expected loss, L̂(f) is the empirical loss, N is the size of the training set, C_N(F) is an empirical complexity of the class F of functions on the unit sphere (with respect to the weights of each layer) to which f belongs, and c_1 and c_2 are constants depending on properties of the loss function. The prototypical example of such a complexity measure is the empirical Rademacher average, typically used for classification in a bound of the form of Equation 3. Another example involves covering numbers. We call the bound "tight" if the complexity terms on the right-hand side – which we call the "offset" – are small. Notice that a very small offset, assuming that the bound holds, implies a linear relation with slope 1 between L̂(f) and L(f). Thus a very tight bound implies linearity with slope 1. Many learning algorithms have the closely related property of "generalization":
lim_{N→∞} |L(f) − L̂(f)| = 0

(a notable exception is the nearest neighbor classifier), since in general we hope that C_N(F) decreases as N increases. We expect layerwise normalization to yield the same complexity for the different minimizers. The case of linear functions and binary classification is an example. The Rademacher complexity of a set of linear functions f(x) = w⊤x can be bounded as (X W)/√N, where X is a bound on the norm ‖x‖_p of the data vectors and W is a bound on the dual norm ‖w‖_q of the weight vectors, with ‖·‖_p and ‖·‖_q dual norms, that is, 1/p + 1/q = 1. This includes p = q = 2 for both x and w, but also p = 1 and q = ∞. Since different norms are equivalent in finite dimensional spaces (for instance ‖x‖_2 ≤ ‖x‖_1 ≤ √d ‖x‖_2, with similar relations, with appropriate constants, holding for other pairs of norms), Equation 3 holds under normalization with different norms for w (in the case of linear networks). Notice that the other term in the bound (the last term on the right-hand side of Equation 3) is the same for all networks trained on the same dataset. The results of [14] – which, as mentioned earlier, were the original motivation behind the arguments above and the experiments below – imply that the weight matrices at each layer converge to the minimum Frobenius norm for each minimizer.
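The linear-case bound (X W)/√N can be checked by Monte Carlo estimation of the empirical Rademacher average. A sketch with illustrative sizes and the ℓ_2/ℓ_2 dual pair; for the ball ‖w‖_2 ≤ W the supremum is attained in closed form, which makes the estimate easy to compute.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 200, 20
X_bound, W_bound = 1.0, 1.0

# Data on the unit sphere, so ||x_i||_2 <= X_bound.
X = rng.normal(size=(N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Empirical Rademacher average of {x -> w.x : ||w||_2 <= W_bound}.
# For s = sum_i sigma_i x_i the sup over the ball equals W_bound * ||s||_2,
# so R_hat = (W_bound / N) * E_sigma ||s||_2.
trials = 2000
norms = [np.linalg.norm(rng.choice([-1.0, 1.0], size=N) @ X) for _ in range(trials)]
R_hat = W_bound * np.mean(norms) / N

bound = X_bound * W_bound / np.sqrt(N)
# Jensen: E||s|| <= sqrt(E||s||^2) = sqrt(N); allow a little Monte Carlo slack.
assert R_hat <= bound * 1.01
```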
Running SGD on the cross-entropy loss in deep networks, as usually done, is a typical approach in machine learning: minimize a convex and differentiable surrogate of the 0-1 loss function (in the case of binary classification). The approach is based on the fact that the logistic loss (for two classes, cross-entropy becomes the logistic loss) is an upper bound to the binary classification error: thus minimization of the loss implies minimization of the error. One way to formalize these upper bounds is to consider the excess classification risk R(f) − R*, where R(f) is the classification risk associated with f and R* is the Bayes risk [3]. Let us call R_ℓ(f) the cross-entropy risk and R_ℓ* the optimal cross-entropy risk. Then the following bound holds in terms of the so-called ψ-transform of the logistic loss:

ψ(R(f) − R*) ≤ R_ℓ(f) − R_ℓ*,   (4)
where the function ψ for the exponential loss – a good proxy for the logistic loss – can be computed analytically as ψ(x) = 1 − √(1 − x²). Interestingly, the shape of ψ is quite similar to the empirical dependence of classification error on normalized cross-entropy loss in figures such as the bottom right of Figure 4.
Unfortunately, lower bounds of this type are not available. As a consequence, we cannot prove that, of two networks, the one with the better normalized training cross-entropy loss (which implies a similar loss at test) is guaranteed to have the lower classification error at test. This difficulty is not specific to deep networks or to our main result: it is common to all optimization techniques that use a surrogate loss function.
In this section we discuss the experimental results after normalizing the capacity of the networks, as described in the previous section. What we observe is a linear relationship between the training loss and the test loss, implying that the expected loss is very close to the empirical loss.

We observe that the linear relationship between training loss and test loss holds in a robust way under several conditions:
Independence from initialization: The linear relationship is independent of whether the initialization is via pretraining on randomly labeled natural images or via larger initialization, as shown by Figures 4 and 5. In particular, notice that the left panels of Figures 14 and 15 clearly show a linear relation for minimizers obtained with the default initialization in PyTorch [10, 7]. This point is important because it shows that the linear relationship is not an artifact of pretraining on random labels.

Independence from network architecture: The linear relationship between test loss and training loss is independent of the network architectures we tried. Figures 3 and 16 show the linear relationship for a network without batch normalization, while Figures 4 and 5 show it for a network with batch normalization on CIFAR-10. Additional evidence can be found in the appendix in Figures 13, 14 and 17.
Norm independence: Figure 13 shows that the norm used for normalization does not matter – as expected.
Normalization is independent of training loss: Figure 3 shows that networks with different cross-entropy training losses (all sufficiently small to guarantee zero classification error on the training set), once normalized, show the same linear relationship between training loss and test loss.
Since the empirical complexity in Equation 3 does not depend on the labels of the training set, Equation 3 should also hold in predicting expected error when training is on randomly labeled data [16] (though different, stricter bounds may hold separately for each of the two cases). The experiments in Figures 17, 4 and 11 show exactly this result. The new datapoint (corresponding to the network trained on random labels) still obeys a tight bound: the empirical loss is very similar to the test loss. This is quite surprising: the corresponding classification error is zero at training and at chance level at test!
Our arguments so far imply that of two unnormalized minimizers of the exponential loss that achieve the same small loss, the minimizer with the higher product of layer norms has the higher capacity and thus the higher expected loss. Experiments support this claim; see Figure 18. Notice the linear relationship of test loss with increasing capacity in the top right panels of Figures 4, 5, 11 and 12.
Our results support the theoretical arguments that an appropriate measure of complexity for a deep network should be based on a product of layerwise norms [1], [2], [8]. The most relevant recent results are by [5], where the generalization bounds depend on the products of layerwise norms without the dependence on the number of layers present in previous bounds. It is also interesting to notice that a layerwise Frobenius norm equal to 1, as in our normalized networks, avoids the implicit dependence of the network complexity on the number of layers. The results of [12], which triggered this paper, imply products of layerwise Frobenius norms. As we discussed earlier, all norms are equivalent in our setup.
The linear relation we found is quite surprising since it implies that the classical generalization bound of Equation 3 is not only valid but tight: the offset on the right-hand side of the equation is quite small, as shown by Figures 6 and 16 (this is also supported by Figures 3, 13 and 15); the offset in Figure 6 is especially small. This of course implies linearity in our plots of training vs. testing normalized cross-entropy loss. It shows that there is indeed generalization – defined as the expected loss converging to the training loss for large data sets – in deep neural networks. This result contradicts the claims in the title of the recent and influential paper "Understanding deep learning requires rethinking generalization" [16], though those claims do apply to the unnormalized loss.
Though it is impossible to claim that a better expected cross-entropy loss implies a better test error (because the loss is only an upper bound to the classification error), our empirical results in several of the figures show an approximately monotonic relation between normalized test (or training) cross-entropy loss and test classification error, with roughly the shape of the ψ-transform of the logistic loss function [3]. Notice, in particular, that the normalized cross-entropy loss in training for the randomly labeled set is close to the cross-entropy loss for chance performance found at test.
Though more experiments are necessary, the linear relationship we found seems to hold in a robust way across different types of networks, different data sets and different initializations.
Our results, which are mostly relevant for theory, yield a recommendation for practitioners: it is better to monitor during training the empirical cross-entropy loss of the normalized network rather than the unnormalized cross-entropy loss. The former is the one that matters for stopping time and test performance (see Figures 13 and 14).
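The quantity to monitor amounts to the loss of the layerwise-normalized network. A numpy sketch (the toy network, its sizes and the function names are ours, not the paper's): each weight matrix is divided by its Frobenius norm before the forward pass, which makes the resulting loss invariant to the overall scale of the weights.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(weights, X):
    """Bias-free ReLU net applied to the rows of X; one scalar output per example."""
    h = X.T
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return (weights[-1] @ h).ravel()

def normalized_cross_entropy(weights, X, y):
    """Two-class cross-entropy (logistic) loss of the layerwise-normalized
    network: each W_k is divided by its Frobenius norm before the forward pass."""
    normed = [W / np.linalg.norm(W) for W in weights]
    margins = y * forward(normed, X)
    return np.mean(np.log1p(np.exp(-margins)))

W = [rng.normal(size=(16, 8)), rng.normal(size=(1, 16))]
X = rng.normal(size=(64, 8))
y = rng.choice([-1.0, 1.0], size=64)

# Scaling the unnormalized weights changes the raw loss but not the normalized one.
W_big = [10.0 * w for w in W]
assert np.isclose(normalized_cross_entropy(W, X, y),
                  normalized_cross_entropy(W_big, X, y))
```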
More significantly for the theory of Deep Learning, the observations of this paper clearly demand a critical discussion of several commonly held ideas related to generalization such as dropout, SGD and flat minima.
We thank Lorenzo Rosasco, Yuan Yao, Misha Belkin, Youssef Mroueh, Amnon Sashua and especially Sasha Rakhlin for illuminating discussions. Xavier Boix suggested the idea of random pretraining to obtain different test performances. NSF funding provided by CBMM.
This section verifies that the effectiveness of layerwise normalization does not depend on which norm is used. Figure 13 provides evidence for this. The constants underlying the equivalence of different norms depend on the dimensionality of the vector spaces.
This section provides additional experiments showing a linear relationship between the test loss and the training loss after layerwise normalization.
Figure 14 shows that the linear relationship holds if all models are stopped at approximately the same train loss.
Figure 18 shows that when the capacity of a network (as measured by the product of the layer norms) increases, so does the test error.
In this section we discuss very briefly some of the intriguing properties that we observe after layerwise normalization of the neural network. Why are all the cross-entropy loss values close to chance level (e.g. ln C for a C-class data set) in all the plots showing the linear relationship? This is of course because most of the (correct) outputs of the normalized neural networks are close to zero, as shown by Figure 19. The reason for this is that we would roughly expect the norm of the output to be bounded by that of the corresponding linear network without the ReLU activations, and the data is usually pre-processed to have mean 0 and standard deviation 1. In fact, for the MNIST experiments, the average value of the most likely class according to the normalized neural network is small, with a small standard deviation, so the significant differences directly reflecting the predicted class of each point are correspondingly small. This in turn implies that the exponentials in the cross-entropy loss are all very close to 1.
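The chance-level value of the normalized loss can be checked directly: when the logits are near zero, every exponential in the softmax is close to 1, so the softmax is nearly uniform and the cross-entropy is close to ln C. A sketch with illustrative numbers (the class count and the logit scale are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
C = 10                                           # hypothetical 10-class setup
logits = rng.normal(0.0, 0.02, size=(1000, C))   # near-zero normalized outputs

# Softmax with near-zero logits: exp(logit) ~ 1, so p ~ 1/C for every class.
p = np.exp(logits)
p /= p.sum(axis=1, keepdims=True)

labels = rng.integers(0, C, size=1000)
loss = -np.mean(np.log(p[np.arange(1000), labels]))
# Cross-entropy is within a hair of ln(C), the chance-level value.
assert abs(loss - np.log(C)) < 0.01
```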
As a control experiment, refer to Figure 20 for the histogram of the final-layer outputs for the most likely class of the unnormalized network. Note that not only are the differences between loss values far from machine precision, but the output values of the normalized neural networks are far from machine precision too, as shown by Figure 19. Notice that normalization does not affect the classification error. In summary, though the values of the points in the figures differ only slightly from each other in terms of loss, those differences are highly significant and reproducible.
The model is a convolutional ReLU network: the first two layers are convolutional, the final layer is fully connected, and only the first layer has biases. There is no pooling.

The network is overparametrized: it has more parameters than training examples.
The model is the same convolutional ReLU network as in Section G.1.1, except for the number of units.

The network was still overparametrized: it has more parameters than training examples.
The model is a 5-layer convolutional ReLU network with no pooling. Each of the five layers contains convolutional filters; the final layer is fully connected; batch normalization is used during training.

The network is overparametrized, with many more parameters than training examples.
Consider a shallow linear network f(x) = w⊤x, trained once with GD on a set of data S_1 and separately on a different set of data S_2. Assume that in both cases the problem is linearly separable and thus zero error is achieved on the training set. The question is which of the two solutions, w_1 or w_2, will generalize better. The natural approach is to compare the norms of the solutions:

‖w_1‖ vs. ‖w_2‖.   (5)
In both cases GD converges to zero loss with the minimum norm, maximum margin solution. Thus the solution with the larger margin (and the smaller norm) should have the lower expected error. In fact, generalization bounds ([6]) appropriate for this case depend on the product of the norm of the solution and of a bound on the norm of the data.
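The margin/norm correspondence in this linear case can be illustrated on a toy separable dataset (the data and the two candidate weight vectors are ours): the separator with the larger margin at unit norm is exactly the one with the smaller norm when rescaled to margin 1.

```python
import numpy as np

# Toy linearly separable 2-D data with labels in {-1, +1}.
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def margin_at_unit_norm(w):
    """Smallest signed distance-like margin min_i y_i w.x_i after rescaling w to unit norm."""
    w = w / np.linalg.norm(w)
    return np.min(y * (X @ w))

def norm_at_unit_margin(w):
    """Norm of w after rescaling it so that min_i y_i w.x_i = 1."""
    return np.linalg.norm(w) / np.min(y * (X @ w))

w1 = np.array([1.0, 0.0])   # separates the data with a large margin
w2 = np.array([1.0, 2.0])   # also separates it, but with a smaller margin

# Larger margin at unit norm <=> smaller norm at unit margin.
assert margin_at_unit_norm(w1) > margin_at_unit_norm(w2)
assert norm_at_unit_margin(w1) < norm_at_unit_margin(w2)
```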