A Surprising Linear Relationship Predicts Test Performance in Deep Networks

07/25/2018
by   Qianli Liao, et al.

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors? Better understanding of this question of generalization may improve practical applications of deep networks. In this paper we show that with cross-entropy loss it is surprisingly simple to induce significantly different generalization performances for two networks that have the same architecture, the same meta parameters and the same training error: one can either pretrain the networks with different levels of "corrupted" data or simply initialize the networks with weights of different Gaussian standard deviations. A corollary of recent theoretical results on overfitting shows that these effects are due to an intrinsic problem of measuring test performance with a cross-entropy/exponential-type loss, which can be decomposed into two components both minimized by SGD -- one of which is not related to expected classification performance. However, if we factor out this component of the loss, a linear relationship emerges between training and test losses. Under this transformation, classical generalization bounds are surprisingly tight: the empirical/training loss is very close to the expected/test loss. Furthermore, the empirical relation between classification error and normalized cross-entropy loss seems to be approximately monotonic.




Code repository: Generalization-Puzzles-in-Deep-Networks (Generalization Puzzles in Deep Networks)

1 Introduction

Despite many successes of deep networks and a growing amount of research, several fundamental questions remain unanswered, among which a key puzzle is the apparent lack of generalization, defined as convergence of the training performance to the test performance with increasing size of the training set. How can a network predict well without generalization? What is the relationship between training and test performances?

In this paper, we investigate the question of why two deep networks with the same training loss have different testing performances. This question is valuable in theory and in practice since training loss is an important clue for choosing among different deep learning architectures. It is therefore worth studying how much training loss can tell us about generalization. In the rest of the paper, we will use the term “error” to mean “classification error” and the term “loss” to mean “cross-entropy loss”, the latter being the objective minimized by stochastic gradient descent (SGD).

In addition to training error and loss, there are many factors (such as choices of network architecture) that can affect generalization performance and we cannot exhaustively study them in this paper. Therefore, we restrict our models to have the same architecture and training settings within each experiment. We tried different architectures in different experiments and observed consistent results.

2 Observation: Networks with the Same Training Performance Show Different Test Performance

First we start with a common observation: even when two networks have the same architecture, the same optimization meta parameters and the same training loss, they usually have different test performances (i.e. error and loss), presumably because the stochastic nature of the minimization process converges to different minima among the many that exist in the loss landscape [11, 13, 12].

With standard settings the differences are usually small (though significant, as shown later). We therefore propose two approaches to magnify the effect:

  • Initialize networks with different levels of “random pretraining”: the network is pretrained for a specified number of epochs on “corrupted” training data — the labels of a portion of the examples are swapped with each other in a random fashion.

  • Initialize the weights of the networks with different standard deviations of a diagonal Gaussian distribution. As it turns out, different standard deviations yield different test performance.
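The two interventions above can be sketched concretely. The following is a minimal numpy sketch of our own (the paper's exact corruption and initialization schemes may differ in detail): `corrupt_labels` swaps the labels of a random fraction of examples among themselves, and `gaussian_init` draws each layer's weights i.i.d. from a Gaussian with a chosen standard deviation.

```python
import numpy as np

def corrupt_labels(labels, fraction, seed=0):
    """Swap the labels of a random `fraction` of the examples among themselves,
    approximating the paper's "corrupted" pretraining data.
    (The exact swapping scheme is an assumption here.)"""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    labels[idx] = labels[rng.permutation(idx)]  # permute within the chosen subset
    return labels

def gaussian_init(shapes, std, seed=0):
    """Initialize each layer's weight matrix i.i.d. from N(0, std^2)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, std, size=s) for s in shapes]
```

Because the corruption only permutes labels within a subset, the overall label distribution is preserved; only the association between inputs and labels is destroyed.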

We show the results of “random pretraining” with networks on CIFAR-10 (Figure 1) and CIFAR-100 (Figure 9) and initialization with different standard deviations on CIFAR-10 (Figure 2) and CIFAR-100 (Figure 10).

Figure 1: Random Pretraining vs Generalization Performance on CIFAR-10: a 5-layer ConvNet (described in section G.2) is pretrained on training data with partially "corrupted" labels for 30 epochs. It is then trained on normal data for 80 epochs. Among the network snapshots saved from all the epochs we pick a network that is closest to an arbitrarily (but low enough) chosen reference training loss (0.006 here). The number on the x axis indicates the percentage of labels that are swapped randomly. As the pretraining data gets increasingly "corrupted", the generalization performance of the resultant model becomes increasingly worse, even though the models have similar training losses and the same zero classification error in training. Batch normalization (BN) is used. After training, the means and standard deviations of BN are "absorbed" into the network's weights and biases. No data augmentation is performed.

Figure 2: Standard Deviation in weight initialization vs generalization performance on CIFAR-10: the network is initialized with weights of different standard deviations. The other settings are the same as in Figure 1. As the norm of the initial weights becomes larger, the generalization performance of the resulting model is worse, even though all models have the same classification error in training.

3 Theory: Measuring Generalization Correctly

In the previous section, we observed that it is possible to obtain networks that have the same training loss (and error) but very different test performances. This indicates that training loss is not a good proxy of test loss. In general, it is quite common to get zero training error (and a very small training loss) when using overparametrized networks. Therefore, comparing models based on training statistics becomes very difficult. In an extreme case of this situation, recent work [15] shows that an overparametrized deep network can fit a randomly labeled training set and of course fail to generalize at all. In some sense, deep nets seem to “fool” training loss: extremely low training loss can be achieved without any generalization.

Previous theoretical work [12] has provided an answer to the above puzzle. In this paper, we show a corollary of the theory and motivate it in a simple way, showing why this "super-fitting" phenomenon happens and under which conditions generalization holds for deep networks.

3.1 The Normalized Cross-Entropy Loss

Notation: We define (as in [12]) a deep network with $K$ layers, with the usual elementwise scalar activation functions $\sigma(z): \mathbb{R} \to \mathbb{R}$, as the set of functions $f(W; x) = \sigma(W_K \sigma(W_{K-1} \cdots \sigma(W_1 x)))$, where the input is $x \in \mathbb{R}^d$ and the weights are given by the matrices $W_k$, one per layer, with matching dimensions. We use the symbol $W$ as a shorthand for the set of matrices $W_1, \ldots, W_K$. For simplicity we consider here the case of binary classification in which $f$ takes scalar values, implying that the last layer matrix $W_K$ is a row vector. There are no biases apart from the input layer, where the bias is instantiated by one of the input dimensions being a constant. The activation function in this paper is the ReLU activation.

Consider different zero minima of the empirical risk obtained with the same network on the same training set. Can we predict their expected error from empirical properties only? A natural way to approach the problem of ranking two different minimizers of the empirical risk starts with the "positive homogeneity" property of ReLU networks (see [12]):

(1)   $f(W_1, \ldots, W_K; x) = \rho_1 \cdots \rho_K \, \tilde{f}(\tilde{V}_1, \ldots, \tilde{V}_K; x)$

where $W_k = \rho_k \tilde{V}_k$, $\rho_k = \|W_k\|$ and $\|\tilde{V}_k\| = 1$.
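The homogeneity property is easy to verify numerically. The sketch below (a hypothetical bias-free ReLU net in numpy, not code from the paper) checks that factoring each layer's Frobenius norm out of the weights leaves the output unchanged up to the scalar product of the norms:

```python
import numpy as np

def relu_net(weights, x):
    """Bias-free network: ReLU after every layer except the last."""
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)
    return float(weights[-1] @ x)

rng = np.random.default_rng(0)
W = [rng.standard_normal((8, 5)), rng.standard_normal((8, 8)), rng.standard_normal((1, 8))]
x = rng.standard_normal(5)

rhos = [np.linalg.norm(Wk) for Wk in W]     # rho_k = ||W_k||_F
V = [Wk / rk for Wk, rk in zip(W, rhos)]    # normalized layers, ||V_k||_F = 1
# positive homogeneity: f(W; x) = (prod_k rho_k) * f(V; x)
assert np.isclose(relu_net(W, x), np.prod(rhos) * relu_net(V, x))
```

Because the scalar $\prod_k \rho_k$ is positive, the sign of the output, and hence the classification, is the same for the original and the normalized network.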

This property is valid for layerwise normalization under any norm. Note that $f$ and $\tilde{f}$ have the same classification performance on any given (test) set. It follows that different empirical minimizers should be compared in terms of their normalized form $\tilde{f}$: the factors $\rho_1 \cdots \rho_K$ affect an exponential type loss – driving it to zero by increasing the $\rho_k$ to infinity – but do not change the classification performance, which only depends on the sign of $f$. Consider the cross-entropy for two classes, given by

(2)   $L(f) = \frac{1}{N} \sum_{n=1}^{N} \ln\left(1 + e^{-y_n f(x_n)}\right)$

What is the right norm to be used to normalize the capacity of deep networks? The "right" normalization should make different minimizers equivalent from the point of view of their intrinsic capacity – for instance, equivalent in terms of their Rademacher complexity. The positive homogeneity property implies that the correct norm for a multilayer network should be based on a product of layerwise norms. Since different norms are equivalent in finite-dimensional spaces, it is enough to use the Frobenius norm.

The argument can be seen more directly by looking at a typical generalization bound (out of many) of the form [4], where now $\ell$ is a bounded loss function:

With probability $\geq 1 - \delta$,

(3)   $\left| L(f) - \hat{L}(f) \right| \leq c_1 \mathfrak{C}_N(\mathcal{F}) + c_2 \sqrt{\frac{\ln(1/\delta)}{2N}}$

where $L(f)$ is the expected loss, $\hat{L}(f)$ is the empirical loss, $N$ is the size of the training set, $\mathfrak{C}_N(\mathcal{F})$ is an empirical complexity of the class of functions $\mathcal{F}$ on the unit sphere (with respect to each weight matrix per layer) to which $f$ belongs, and $c_1$ and $c_2$ are constants depending on properties of the loss function. The prototypical example of such a complexity measure is the empirical Rademacher average, typically used for classification in a bound of the form of Equation 3. Another example involves covering numbers. We call the bound "tight" if the right-hand side – which we call the "offset" – is small. Notice that a very small offset, assuming that the bound holds, implies a linear relation with slope $1$ between $\hat{L}(f)$ and $L(f)$. Thus a very tight bound implies linearity with slope $1$. Many learning algorithms have the closely related property of "generalization": $L(f) - \hat{L}(f) \to 0$ as $N \to \infty$ (a notable exception is the nearest neighbor classifier), since in general we hope that $\mathfrak{C}_N(\mathcal{F})$ is such as to decrease as $N$ increases.

We expect layerwise normalization to yield the same complexity for the different minimizers. The case of linear functions and binary classification is an example. The Rademacher complexity of a set of linear functions $x \mapsto \langle w, x \rangle$ can be bounded as $\mathfrak{R}_N \leq c \, \frac{B R}{\sqrt{N}}$, where $R$ is an $L_p$ bound on the vectors $x$ and $B$ is an $L_q$ bound on the vectors $w$, with $L_p$ and $L_q$ being dual norms, that is $\frac{1}{p} + \frac{1}{q} = 1$. This includes $p = q = 2$ for both $x$ and $w$, but also $p = 1$ and $q = \infty$. Since different norms are equivalent in finite dimensional spaces (in fact $\|x\|_q \leq \|x\|_p$ when $p \leq q$, with opposite relations holding with appropriate constants, e.g. $\|x\|_p \leq d^{1/p - 1/q} \|x\|_q$ for $p < q$), Equation 3 holds under normalization with different norms (in the case of linear networks). Notice that the other term in the bound (the last term on the right-hand side of Equation 3) is the same for all networks trained on the same dataset.

The results of [14] – which, as mentioned earlier, were the original motivation behind the arguments above and the experiments below – imply that the weight matrices at each layer converge to the minimum Frobenius norm for each minimizer.

3.2 Bounds on the Cross-Entropy Loss imply Bounds on the Classification Error

Running SGD on the cross-entropy loss in deep networks, as usually done, is a typical approach in machine learning: minimize a convex and differentiable surrogate of the 0-1 loss function (in the case of binary classification). The approach is based on the fact that the logistic loss (for two classes, cross-entropy becomes the logistic loss) is an upper bound on the binary classification error: thus minimization of the loss implies minimization of the error. One way to formalize these upper bounds is to consider the excess classification risk $R(f) - R^*$, where $R(f)$ is the classification risk associated with $f$ and $R^*$ is the Bayes risk [3]. Let us call $R_\ell(f)$ the cross-entropy risk and $R_\ell^*$ the optimal cross-entropy risk. Then the following bound holds in terms of the so-called $\psi$-transform of the logistic loss $\ell$:

(4)   $\psi\left(R(f) - R^*\right) \leq R_\ell(f) - R_\ell^*$

where the function $\psi$ for the exponential loss – a good proxy for the logistic loss – can be computed analytically as $\psi(x) = 1 - \sqrt{1 - x^2}$. Interestingly, the shape of $\psi^{-1}$ is quite similar to the empirical dependence of classification error on normalized cross-entropy loss in figures such as the bottom right of Figure 4.
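Since $\psi$ for the exponential loss has a simple closed form, Equation 4 can be inverted to turn an excess surrogate risk into a bound on the excess classification risk. The following is a sketch of that arithmetic (our own, not the paper's code):

```python
import numpy as np

def psi(x):
    """psi-transform of the exponential loss: psi(x) = 1 - sqrt(1 - x^2), x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return 1.0 - np.sqrt(1.0 - x ** 2)

def psi_inv(r):
    """Invert Equation 4: excess classification risk <= psi_inv(excess surrogate risk)."""
    r = np.clip(np.asarray(r, dtype=float), 0.0, 1.0)
    return np.sqrt(1.0 - (1.0 - r) ** 2)

assert psi(0.0) == 0.0 and psi(1.0) == 1.0
grid = np.linspace(0.0, 1.0, 11)
assert np.allclose(psi_inv(psi(grid)), grid)    # psi_inv really inverts psi
# psi grows slowly near 0: a small excess surrogate risk only yields a
# much weaker, sqrt-like control of the excess classification risk.
assert psi_inv(0.01) > 0.1                      # sqrt(1 - 0.99^2) ~ 0.141
```

This square-root behavior near the origin is one reason a surrogate-loss bound cannot, by itself, distinguish small classification-error differences.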

Unfortunately lower bounds are not available. As a consequence, we cannot prove that among two networks the one with better normalized training cross-entropy (which implies a similar loss at test) guarantees a lower classification error at test. This difficulty is not specific to deep networks or to our main result. It is common to all optimization techniques using a surrogate loss function.

4 Experiments: Normalization Leads to Surprisingly Tight Generalization

In this section we discuss the experimental results obtained after normalizing the capacity of the networks, as discussed in the previous section. What we observe is a linear relationship between the train loss and the test loss, implying that the expected loss is very close to the empirical loss.

4.1 Tight Linearity

Our observation is that the linear relationship between train loss and test loss holds in a robust way under several conditions:

  • Independence from Initialization: The linear relationship is independent of whether the initialization is via pretraining on randomly labeled natural images or via larger initialization, as shown by Figures 4 and 5. In particular, notice that the left of Figure 14 and Figure 15 clearly show a linear relation for minimizers obtained with the default initialization in Pytorch [10, 7]. This point is important because it shows that the linear relationship is not an artifact of pretraining on random labels.

  • Independence from Network Architecture: The linear relationship between test loss and train loss is independent of the network architectures we tried. Figures 3 and 16 show the linear relationship for a 3 layer network without batch normalization, while Figures 4 and 5 show the linear relationship for a 5 layer network with batch normalization on CIFAR10. Additional evidence can be found in the appendix in Figures 13, 14 and 17.

  • Independence from Data Set: Figures 3, 4, 5, 16, 13, 14, 15, 17 show the linear relationship on CIFAR10 while Figures 11 and 12 show the linear relationship on CIFAR100.

  • Norm independence: Figure 13 shows that the norm used for normalization does not matter – as expected.

  • Normalization is independent of training loss: Figure 3 shows that networks with different cross-entropy training losses (which are sufficiently large to guarantee zero classification error), once normalized, show the same linear relationship between train loss and test loss.
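The normalization procedure underlying these plots is straightforward. A minimal numpy sketch of our own (with a hypothetical bias-free ReLU net, not the paper's code) shows that dividing each layer by its Frobenius norm changes the cross-entropy loss while leaving every classification decision intact:

```python
import numpy as np

def forward(weights, X):
    """Bias-free ReLU net applied to the rows of X; scalar (binary) output."""
    H = X
    for W in weights[:-1]:
        H = np.maximum(H @ W.T, 0.0)
    return (H @ weights[-1].T).ravel()

def two_class_cross_entropy(f, y):
    """Logistic loss with labels y in {-1, +1}, as in Equation 2."""
    return float(np.mean(np.log1p(np.exp(-y * f))))

def normalize_layerwise(weights):
    return [W / np.linalg.norm(W) for W in weights]  # Frobenius norm per layer

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 6))
y = np.where(rng.random(128) < 0.5, -1.0, 1.0)
W = [rng.standard_normal((10, 6)), rng.standard_normal((1, 10))]

f_raw = forward(W, X)
f_norm = forward(normalize_layerwise(W), X)
assert np.array_equal(np.sign(f_raw), np.sign(f_norm))  # same classification
assert not np.isclose(two_class_cross_entropy(f_raw, y),
                      two_class_cross_entropy(f_norm, y))  # but different loss
```

This is exactly the sense in which normalization "factors out" the component of the loss that SGD can drive down without improving classification.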

4.2 Randomly Labeled Training Data

Since the empirical complexity in Equation 3 does not depend on the labels of the training set, Equation 3 should also hold in predicting expected error when the training is on randomly labeled data [16] (though different stricter bounds may hold separately for each of the two cases). The experiments in Figures 17, 4 and 11 show exactly this result. The new datapoint (corresponding to the randomly trained network) is still a tight bound: the empirical loss is very similar to the test loss. This is quite surprising: the corresponding classification error is zero for training and at chance level for testing!

4.3 Higher Capacity leads to Higher Test Error

Our arguments so far imply that among two unnormalized minimizers of the exponential loss that achieve the same given small loss, the minimizer with the higher product of the norms has the higher capacity and thus the higher expected loss. Experiments support this claim, see Figure 18. Notice the linear relationship of test loss with increasing capacity in the top right panels of Figures 4, 5, 11, 12.

Figure 3: Left: test loss vs training loss with all networks normalized layerwise by the Frobenius norm. Right: test loss vs training loss with all unnormalized networks. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. In these experiments the networks converged (and had zero train error) but not to the same loss. All networks were trained for epochs. The losses range approximately from to . The numbers in the figure indicate the amount of corruption of random labels used during pretraining. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted R² values are both while the root mean square error (RMSE) is .
Figure 4: Random pretraining experiment with batch normalization on CIFAR-10 using a 5 layer neural network as described in section G.2. The red numbers in the figures indicate the percentages of "corrupted labels" used in pretraining. The green stars in the figures indicate the precise locations of the points. Top left: training and test losses of unnormalized networks: there is no apparent relationship. Top right: the product of L2 norms from all layers of the network. We observe a positive correlation between the norm of the weights and the testing loss. Bottom: under layerwise normalization of the weights (using the Frobenius norm), the classification error does not change (bottom right) while the cross-entropy loss changes (bottom left). There is a surprisingly good linear relationship between training and testing losses, implying tight generalization.
Figure 5: Same as Figure 4 but using different standard deviations for initialization of the weights instead of “random pretraining”. The red numbers in the figures indicate the standard deviations used in initializing weights. The “RL” point (initialized with standard deviation 0.05) refers to training and testing on completely random labels.

5 Discussion

Our results support the theoretical arguments that an appropriate measure of complexity for a deep network should be based on a product norm [1], [2], [8]. The most relevant recent results are by [5], where the generalization bounds depend on the products of layerwise norms without the dependence on the number of layers present in previous bounds. It is also interesting to notice that a layerwise Frobenius norm equal to 1, as in our normalized networks, avoids the implicit dependence of the network complexity on the number of layers. The results from [12], which triggered this paper, imply products of layerwise Frobenius norms. As we discussed earlier, all norms are equivalent in our setup.

The linear relation we found is quite surprising since it implies that the classical generalization bound of Equation 3 is not only valid but tight: the offset on the right-hand side of the Equation is quite small, as shown by Figures 6 and 16 (this is also supported by Figures 3, 13 and 15). For instance, the offset in Figure 6 is only . This implies of course linearity in our plots of training vs testing normalized cross-entropy loss. It shows that there is indeed generalization – defined as expected loss converging to training loss for large data sets – in deep neural networks. This result contradicts the claims in the title of the recent and influential paper "Understanding deep learning requires rethinking generalization" [16], though those claims do apply to the unnormalized loss.

Though it is impossible to claim that better expected cross-entropy loss implies better test error (because the loss is only an upper bound on the classification error), our empirical results in several of the figures show an approximately monotonic relation between normalized test (or training) cross-entropy loss and test classification error, with roughly the shape of the $\psi$-transform of the logistic loss function [3]. Notice, in particular, that the normalized cross-entropy loss in training for the randomly labeled set is close to the cross-entropy loss for chance performance found at test.

Though more experiments are necessary, the linear relationship we found seems to hold in a robust way across different types of networks, different data sets and different initializations.

Our results, which are mostly relevant for theory, yield a recommendation for practitioners: it is better to monitor during training the empirical cross-entropy loss of the normalized network instead of the unnormalized cross-entropy loss. The former is the one that matters in terms of stopping time and test performance (see Figures 13 and 14).

More significantly for the theory of Deep Learning, the observations of this paper clearly demand a critical discussion of several commonly held ideas related to generalization such as dropout, SGD and flat minima.

Figure 6: The left part of the figure shows the test cross-entropy loss plotted against the training loss for the normal, unnormalized network. Notice that all points have zero classification error at training. The RL points have zero classification error at training and chance test error. Once the trained networks are normalized layerwise, the training cross-entropy loss is a very good proxy for the test cross-entropy loss (right plot). The line – regressed just on the natural-label and RL points – with slope , offset , both ordinary and adjusted R² , and root mean square error (RMSE) , seems to mirror Equation 3 well.

Acknowledgments

We thank Lorenzo Rosasco, Yuan Yao, Misha Belkin, Youssef Mroueh, Amnon Shashua and especially Sasha Rakhlin for illuminating discussions. Xavier Boix suggested the idea of random pretraining to obtain different test performances. NSF funding provided by CBMM.

References

Appendix A Results on MNIST

This section shows figures replicating the main results on the MNIST data set. Figures 7 and 8 show that the linear relationship holds after normalization on the MNIST data set. Figure 8 shows the linear relationship holds after adding the point trained only on random labels.

Figure 7: The figure shows the cross-entropy loss on the test set vs the training loss for networks normalized layerwise in terms of the Frobenius norm. The model was a 3 layer neural network described in section G.1.2 and was trained with 50K examples on MNIST. All networks were trained for epochs. In these experiments the networks converged (and had zero train error) but not to the same loss. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted R² values are both while the root mean square error (RMSE) was . The markers indicate the size of the standard deviation of the normal distribution used for initialization.
Figure 8: The figure shows the cross-entropy loss on the test set vs the training loss for networks normalized layerwise in terms of the Frobenius norm. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. All networks were trained for epochs. In these experiments the networks converged (and had zero train error) but not to the same loss. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted R² values are both while the root mean square error (RMSE) was . The points labeled "RL" were trained on random labels; the training loss was estimated on the same randomly labeled data set. The points marked with values less than were only trained on natural labels, and those markers indicate the size of the standard deviation of the normal distribution used for initialization.

Appendix B Results on CIFAR-100

This section shows figures replicating the main results on CIFAR-100. Figure 9 shows how different test performance can be obtained with pretraining on random labels while Figure 10 shows that different increasing initializations are also effective.

More importantly, Figures 11 and 12 show that the linear relationship holds after normalization, regardless of whether the training was done with pretraining on random labels or with large initialization.

Figure 9: Same as Figure 1, but on CIFAR-100.
Figure 10: Same as Figure 2, but on CIFAR-100.
Figure 11: Same as Figure 4 but on CIFAR-100
Figure 12: Same as Figure 5 but on CIFAR-100.

Appendix C Norms for normalization

This section verifies that the effectiveness of layerwise normalization does not depend on which norm is used. Figure 13 provides evidence for this. The constants underlying the equivalence of different norms depend on the dimensionality of the vector spaces.

Figure 13: Test loss/error vs training loss with all networks normalized layerwise by the L1 norm (divided by to avoid numerical issues because the L1 norms are here very large). The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. The networks were normalized after training each up to epoch and thus different points on the plot correspond to different training losses. Figure 3 shows that the normalized network does not depend on the value of the training loss before normalization. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted values are both while the root mean square (RMSE) was . The numbers in the figure indicate the amount of corruption of random labels used during the pretraining.

Appendix D Additional Evidence for a Linear Relationship

This section provides additional experiments showing a linear relationship between the test loss and the training loss after layerwise normalization.

Figure 14 shows that the linear relationship holds if all models are stopped at approximately the same train loss.

Figure 15 is the same as Figure 6, except that the randomly pretrained networks were removed. The important thing to note is that the offset is , which implies that the bound of Equation 3 is surprisingly tight.

Figure 14: Test loss/error vs training loss with all networks normalized layerwise by the Frobenius norm of the weights. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. The models were obtained by pretraining on random labels and then by fine tuning on natural labels. SGD without batch normalization was run on all networks in this plot until each reached approximately cross-entropy loss on the training data. The numbers in the figure indicate the amount of corruption of the random labels used in pretraining.
Figure 15: Left: test loss vs training loss for networks normalized layerwise by the Frobenius norm. Right: test loss vs training loss for unnormalized networks. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. In these experiments the networks converged (and had zero train error) but not to the same loss. The networks were trained for epochs. The cross entropy training losses range approximately from to . The s in the figure indicate that there was no random pretraining. The slope and intercept of the line of best fit are and respectively. The ordinary R² is and the adjusted value is while the root mean square error (RMSE) was .
Figure 16: The figure shows the cross-entropy loss on the test set vs the training loss for networks normalized layerwise in terms of the Frobenius norm. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. All networks were trained for epochs. In these experiments the networks converged (and had zero train error) but not to the same loss. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted R² values are both while the root mean square error (RMSE) was . The points labeled were trained on random labels; the training loss was estimated on the same randomly labeled data set. The points marked with were only trained on natural labels.
Figure 17: The figure shows the cross-entropy loss on the test set vs the training loss for networks normalized layerwise in terms of the Frobenius norm. The model was a 3 layer neural network described in section G.1.1 and was trained with 50K examples on CIFAR10. All networks were trained for epochs. In these experiments the networks converged (and had zero train error) but not to the same loss. The slope and intercept of the line of best fit are and respectively. The ordinary and adjusted R² values are both while the root mean square error (RMSE) was . See Figure 16 for other details.

Appendix E Higher Capacity leads to higher Test Error

Figure 18 shows that when the capacity of a network (as measured by the product of the norms of the layers) increases, so does the test error.

Figure 18: Plot of test error vs the product of the Frobenius norms of the layers . The model was a 3 layer neural network described in section G.1.1 and trained with 50K examples on CIFAR10. The models were obtained by pretraining on random labels and then fine tuning on natural labels. SGD without batch normalization was run on all networks in this plot until each reached approximately cross-entropy loss on the training data.

Appendix F Numerical values of normalized loss

In this section we discuss very briefly some of the intriguing properties that we observe after layerwise normalization of the neural network. Why are all the cross-entropy loss values close to chance (e.g. ln 10 ≈ 2.3 for a 10-class data set) in all the plots showing the linear relationship? This is of course because most of the (correct) outputs of the normalized neural networks are close to zero, as shown by Figure 19. The reason for this is that we would roughly expect the output of the normalized network to be bounded by that of the corresponding network without the ReLU activations, and the data is usually pre-processed to have mean 0 and standard deviation 1. In fact, for the MNIST experiments, the average value of the most likely class according to the normalized neural network is with a standard deviation . This means that significant differences directly reflecting the predicted class of each point are between and . This in turn implies that the exponentials in the cross-entropy loss are all very close to 1.
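The arithmetic behind the near-chance values is easy to check: with 10 classes, near-zero logits give a softmax that is nearly uniform, so the cross-entropy is pinned near ln 10 ≈ 2.3026. A small self-contained check (ours, not the paper's code):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    z = logits - logits.max()              # stabilized softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.log(p[label]))

num_classes = 10
chance = np.log(num_classes)               # ~2.3026, the loss of a uniform prediction

# outputs of a layerwise-normalized network are on the order of 1e-2,
# so the exponentials in the softmax are all close to 1:
rng = np.random.default_rng(0)
small_logits = 0.01 * rng.standard_normal(num_classes)
loss = softmax_cross_entropy(small_logits, label=0)
assert abs(loss - chance) < 0.1
```

Small but reproducible differences in these near-chance losses are exactly the signal the linear-relationship plots rely on.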

As a control experiment, refer to Figure 20 for the histogram of the final layer for the most likely class for the unnormalized network. Note that not only are the differences between loss values far from machine precision, but the output values of the normalized neural networks are far from machine precision too, as shown by Figure 19. Notice that normalization does not affect the classification error. In summary, though the values of the points in the figures differ only slightly in terms of loss from each other, those differences are highly significant and reproducible.

Figure 19: Histogram of the values of for the most likely class of the layerwise normalized neural network over the 50K images of the MNIST training set. The average value of the most likely class according to the normalized neural network is with standard deviation .
Figure 20: Histogram of the values of for the most likely class of the unnormalized neural network over the 50K images of the MNIST training set. The average value of the most likely class according to the unnormalized neural network is with standard deviation .

Appendix G Deep Neural Network Architecture

G.1 Three layer network

G.1.1 Network with filters

The model is a 3-layer convolutional ReLU network with the first two layers containing filters of size by ; the final layer is fully connected; only the first layer has biases. There is no pooling.

The network is overparametrized: it has parameters (compared to training examples).

G.1.2 Network with filters

The model is the same 3-layer convolutional ReLU network as in section G.1.1 except it had units.

The network was still overparametrized: it has parameters (compared to training examples).

G.2 Five layer network

The model is a 5-layer convolutional ReLU network (with no pooling). It has in the five layers filters of size by ; the final layer is fully connected; batch-normalization is used during training.

The network is overparametrized with about parameters (compared to training examples).

Appendix H Intuition: Shallow Linear Network

Consider a shallow linear network, trained once with GD on a set of data and separately on a different set of data. Assume that in both cases the problem is linearly separable and thus zero classification error is achieved on the training set. The question is which of the two solutions will generalize better. The natural approach is to look at the norm of the solutions, obtained by minimizing the two-class cross-entropy of Equation 2 for the linear case $f(x) = w^\top x$:

(5)   $L(w) = \frac{1}{N} \sum_{n=1}^{N} \ln\left(1 + e^{-y_n w^\top x_n}\right)$

In both cases GD converges to zero loss with the minimum norm, maximum margin solution for the separable problem. Thus the solution with the larger margin (and the smaller norm) should have a lower expected error. In fact, generalization bounds ([6]) appropriate for this case depend on the product of the norm of $w$ and of a bound on the norm of the data.
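This convergence is easy to reproduce on toy data: running GD on the logistic loss for a separable problem drives the training error to zero while the direction $w/\|w\|$ stabilizes toward a large-margin separator. The following is an illustrative sketch with synthetic data (not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.standard_normal((300, 2))
s = X @ w_true
keep = np.abs(s) > 0.5                    # keep only points with a clear margin
X, y = X[keep], np.sign(s[keep])          # linearly separable labels

w = np.zeros(2)
for _ in range(5000):
    m = y * (X @ w)                       # per-example margins
    # gradient of (1/N) sum ln(1 + exp(-m)) with respect to w
    g = -((y / (1.0 + np.exp(m)))[:, None] * X).mean(axis=0)
    w -= 0.1 * g

assert np.all(y * (X @ w) > 0)            # zero training error
unit_margin = (y * (X @ (w / np.linalg.norm(w)))).min()
assert unit_margin > 0.05                 # the normalized solution has a real margin
```

Note that the loss itself keeps decreasing only because $\|w\|$ keeps growing; the normalized direction, which is what determines classification, converges, mirroring the paper's distinction between unnormalized and normalized loss.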