# Compressibility and Generalization in Large-Scale Deep Learning

Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that compressibility of models that tend to overfit is limited: we establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results showing that an increase in overfitting implies an increase in the number of bits required to describe a trained network.

09/25/2019


## 1 Introduction

A pivotal question in machine learning is why deep networks perform well despite overparameterization. These models often have many more parameters than the number of examples they are trained on, which enables them to drastically overfit to the training data (Zhang et al., 2017a). In common practice, however, such networks perform well on previously unseen data.

Explaining the generalization performance of neural networks is an active area of current research. Attempts have been made at adapting classical measures such as VC-dimension (Harvey et al., 2017) or margin/norm bounds (Neyshabur et al., 2018; Bartlett et al., 2017), but such approaches have yielded bounds that are vacuous by orders of magnitude. Other authors have explored modifications of the training procedure to obtain networks with provable generalization guarantees (Dziugaite & Roy, 2017, 2018). Such procedures often differ substantially from standard procedures used by practitioners, and empirical evidence suggests that they fail to improve performance in practice (Wilson et al., 2017).

We begin with an empirical observation: it is often possible to “compress” trained neural networks by finding essentially equivalent models that can be described in a much smaller number of bits; see Cheng et al. (2018) for a survey. Inspired by classical results relating small model size and generalization performance (often known as Occam’s razor), we establish a new generalization bound based on the effective compressed size of a trained neural network. Combining this bound with off-the-shelf compression schemes yields the first non-vacuous generalization bounds in practical problems. The main contribution of the present paper is the demonstration that, unlike many other measures, this measure is effective in the deep-learning regime.

Generalization bound arguments typically identify some notion of complexity of a learning problem, and bound generalization error in terms of that complexity. Conceptually, the notion of complexity we identify is:

 complexity = compressed size − remaining structure. (1)

The first term on the right-hand side represents the link between generalization and explicit compression. The second term corrects for superfluous structure that remains in the compressed representation. For instance, the predictions of trained neural networks are often robust to perturbations of the network weights. Thus, a representation of a neural network by its weights carries some irrelevant information. We show that accounting for this robustness can substantially reduce effective complexity.

Our results allow us to derive explicit generalization guarantees using off-the-shelf neural network compression schemes. In particular:

• The generalization bound can be evaluated by compressing a trained network, measuring the effective compressed size, and substituting this value into the bound.

• Using off-the-shelf neural network compression schemes with this recipe yields bounds that are state-of-the-art, including the first non-vacuous bounds for modern convolutional neural nets.

The above result takes a compression algorithm and outputs a generalization bound on nets compressed by that algorithm. We provide a complementary result by showing that if a model tends to overfit then there is an absolute limit on how much it can be compressed. We consider a classifier as a (measurable) function of a random training set, so the classifier is viewed as a random variable. We show that the entropy of this random variable is lower bounded by a function of the expected degree of overfitting. Additionally, we use the randomization tests of Zhang et al. (2017a) to show empirically that increased overfitting implies worse compressibility, for a fixed compression scheme.

The relationship between small model size and generalization is hardly new: the idea is a variant of Occam’s razor, and has been used explicitly in classical generalization theory (Rissanen, 1986; Blumer et al., 1987; MacKay, 1992; Hinton & van Camp, 1993; Rasmussen & Ghahramani, 2001). However, the use of highly overparameterized models in deep learning seems to directly contradict the Occam principle. Indeed, the study of generalization and the study of compression in deep learning have been largely disjoint; the latter has been primarily motivated by computational and storage limitations, such as those arising from applications on mobile devices (Cheng et al., 2018). Our results show that Occam-type arguments remain powerful in the deep learning regime. The link between compression and generalization is also used in work by Arora et al. (2018), who study compressibility arising from a form of noise stability. Our results are substantially different, and closer in spirit to the work of Dziugaite & Roy (2017); see Section 3 for a detailed discussion.

Zhang et al. (2017a) study the problem of generalization in deep learning empirically. They observe that standard deep net architectures—which generalize well on real-world data—are able to achieve perfect training accuracy on randomly labelled data. Of course, in this case the test error is no better than random guessing. Accordingly, any approach to controlling generalization error of deep nets must selectively and preferentially bound the generalization error of models that are actually plausible outputs of the training procedure applied to real-world data. Following Langford & Caruana (2002); Dziugaite & Roy (2017); Neyshabur et al. (2018), we make use of the PAC-Bayesian framework (McAllester, 1999; Catoni, 2007; McAllester, 2013). This framework allows us to encode prior beliefs about which learned models are plausible as a (prior) distribution over possible parameter settings. The main challenge in developing a bound in the PAC-Bayes framework is to articulate a distribution that encodes the relative plausibilities of possible outputs of the training procedure. The key insight is that, implicitly, any compression scheme is a statement about model plausibilities: good compression is achieved by assigning short codes to the most probable models, and so the probable models are those with short codes.

## 2 Generalization and the PAC-Bayesian Principle

In this section, we recall some background and notation from statistical learning theory. Our aim is to learn a classifier using data examples. Each example consists of some features x ∈ X and a label y ∈ Y. It is assumed that the data are drawn identically and independently from some data-generating distribution D. The goal of learning is to choose a hypothesis h: X → Y that predicts the label from the features. The quality of the prediction is measured by specifying some loss function L; the value L(h(x), y) is a measure of the failure of hypothesis h to explain example (x, y). The overall quality of a hypothesis h is measured by the risk under the data-generating distribution:

 L(h) = E_{(X,Y)∼D}[ L(h(X), Y) ].

Generally, the data-generating distribution is unknown. Instead, we assume access to training data S_n, a sample of n points drawn i.i.d. from the data-generating distribution. The true risk is estimated by the empirical risk:

 L̂(h) = (1/n) Σ_{(x,y)∈S_n} L(h(x), y).

The task of the learner is to use the training data to choose a hypothesis ĥ from among some pre-specified set of possible hypotheses H, the hypothesis class. The standard approach to learning is to choose a hypothesis ĥ that (approximately) minimizes the empirical risk. This induces a dependency between the choice of hypothesis and the estimate of the hypothesis’ quality. Because of this, it can happen that ĥ overfits to the training data: L̂(ĥ) ≪ L(ĥ). The generalization error L(ĥ) − L̂(ĥ) measures the degree of overfitting. In this paper, we consider an image classification problem, where x is an image and y the associated label for that image. The selected hypothesis is a deep neural network. We mostly consider the 0–1 loss, that is, L(h(x), y) = 0 if the prediction is correct and 1 otherwise.
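The quantities above are simple to compute in code. The following sketch (with illustrative names, not drawn from the paper's code) evaluates the empirical risk of a toy hypothesis under the 0–1 loss:

```python
# Minimal sketch of the risk quantities above: the empirical risk of a
# hypothesis h on a training sample under the 0-1 loss. Purely illustrative.

def zero_one_loss(prediction, label):
    """0-1 loss: 0 if the prediction is correct, 1 otherwise."""
    return 0.0 if prediction == label else 1.0

def empirical_risk(h, sample):
    """Empirical risk: average loss of h over the n training examples."""
    return sum(zero_one_loss(h(x), y) for x, y in sample) / len(sample)

# A toy hypothesis classifying integers by parity; the last label
# disagrees with it, so one of four examples is misclassified.
h = lambda x: x % 2
sample = [(1, 1), (2, 0), (3, 1), (4, 1)]
print(empirical_risk(h, sample))  # prints 0.25
```

The true risk L(h) replaces the average over the sample with an expectation over the data-generating distribution, which is unavailable in practice.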

We use the PAC-Bayesian framework to establish bounds on generalization error. In general, a PAC-Bayesian bound attempts to control the generalization error of a stochastic classifier by measuring the discrepancy between a pre-specified random classifier (often called the prior) and the classifier of interest. Conceptually, PAC-Bayes bounds have the form:

 generalization error of ρ ≤ O(√(KL(ρ, π)/n)), (2)

where n is the number of training examples, π denotes the prior, and ρ denotes the classifier of interest (often called the posterior).

More formally, we write L(ρ) = E_{h∼ρ}[L(h)] for the risk of the random estimator ρ. The fundamental bound in PAC-Bayesian theory is (Catoni, 2007, Thm. 1.2.6):

###### Theorem 2.1 (PAC-Bayes).

Let L be a {0, 1}-valued loss function, let π be some probability measure on the hypothesis class, and let α > 1, ε > 0. Then, with probability at least 1 − ε over the distribution of the sample:

 L(ρ) ≤ inf_{λ>1} Φ⁻¹_{λ/n} { L̂(ρ) + (α/λ) [ KL(ρ, π) − log ε + 2 log( log(α²λ) / log α ) ] }, (3)

where we define Φ⁻¹_γ as:

 Φ⁻¹_γ(x) = (1 − e^(−γx)) / (1 − e^(−γ)). (4)
###### Remark 2.2.

The above formulation of the PAC-Bayesian theorem is somewhat more opaque than other formulations (e.g., McAllester, 2003, 2013; Neyshabur et al., 2018). This form is significantly tighter when KL(ρ, π)/n is large. See Bégin et al. (2014); Laviolette (2017) for a unified treatment of PAC-Bayesian bounds.

The quality of a PAC-Bayes bound depends on the discrepancy between the PAC-Bayes prior π (encoding the learned models we think are plausible) and the posterior ρ, which is the actual output of the learning procedure. The main challenge is finding good choices for the PAC-Bayes prior π, for which the value of KL(ρ, π) is both small and computable.
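Theorem 2.1 can be evaluated numerically once L̂(ρ), KL(ρ, π), and n are known. The sketch below computes the right-hand side of a Catoni-style bound, assuming the penalty term scales as α/λ; the function names and the coarse grid over λ are illustrative choices, not the paper's code.

```python
import math

# Numerical sketch of evaluating a Catoni-style PAC-Bayes bound:
# invert Phi_gamma and optimize over lambda on a coarse grid.
# Assumes the penalty term is scaled by alpha/lambda.

def phi_inverse(gamma, x):
    """Phi_gamma^{-1}(x) = (1 - exp(-gamma * x)) / (1 - exp(-gamma))."""
    return (1.0 - math.exp(-gamma * x)) / (1.0 - math.exp(-gamma))

def catoni_bound(emp_risk, kl, n, alpha=1.1, eps=0.05):
    """Upper bound on the risk L(rho) at confidence 1 - eps, given the
    empirical risk of rho, KL(rho, pi) in nats, and the sample size n."""
    best = 1.0  # the 0-1 risk is trivially bounded by 1
    for lam in range(2, 20 * n, max(1, n // 100)):  # coarse grid, lambda > 1
        penalty = (alpha / lam) * (
            kl - math.log(eps)
            + 2 * math.log(math.log(alpha ** 2 * lam) / math.log(alpha))
        )
        best = min(best, phi_inverse(lam / n, emp_risk + penalty))
    return best

# E.g. a stochastic classifier with 5% empirical error and a KL budget of
# 1000 bits on n = 60000 examples yields a non-vacuous bound:
print(catoni_bound(0.05, 1000 * math.log(2), 60000))
```

In this framing, the rest of the paper is about constructing priors π for which the KL argument to such a computation is small for trained, compressed networks.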

## 3 Relationship to Previous Work

#### Generalization.

The question of which properties of real-world networks explain good generalization behavior has attracted considerable attention (Langford, 2002; Langford & Caruana, 2002; Hinton & van Camp, 1993; Hochreiter & Schmidhuber, 1997; Baldassi et al., 2015, 2016; Chaudhari et al., 2017; Keskar et al., 2017; Dziugaite & Roy, 2017; Schmidt-Hieber, 2017; Neyshabur et al., 2017, 2018; Arora et al., 2018); see Arora (2017) for a review of recent advances. Such results typically identify a property of real-world networks, formalize it as a mathematical definition, and then use this definition to prove a generalization bound. Generally, the bounds are very loose relative to the true generalization error, which can be estimated by evaluating performance on held-out data. Their purpose is not to quantify the actual generalization error, but rather to give qualitative evidence that the property underpinning the generalization bound is indeed relevant to generalization performance. The present paper can be seen in this tradition: we propose compressibility as a key signature of performant real-world deep nets, and we give qualitative evidence for this thesis in the form of a generalization bound.

The idea that compressibility leads to generalization has a long history in machine learning. Minimum description length (MDL) is an early formalization of the idea (Rissanen, 1986). Hinton & van Camp (1993) applied MDL to very small networks, already recognizing the importance of weight quantization and stochasticity. More recently, Arora et al. (2018) consider the connection between compression and generalization in large-scale deep learning. The main idea is to compute a measure of noise-stability of the network, and show that it implies the existence of a simpler network with nearly the same performance. A variant of a known compression bound (see (McAllester, 2013) for a PAC-Bayesian formulation) is then applied to bound the generalization error of this simpler network in terms of its code length. In contrast, the present paper develops a tool to leverage existing neural network compression algorithms to obtain strong generalization bounds. The two papers are complementary: we establish non-vacuous bounds, and hence establish a quantitative connection between generalization and compression. An important contribution of Arora et al. (2018) is obtaining a quantity measuring the compressibility of a neural network; in contrast, we apply a compression algorithm and witness its performance. We note that their compression scheme is very different from the sparsity-inducing compression schemes (Cheng et al., 2018) we use in our experiments. Which properties of deep networks allow them to be sparsely compressed remains an open question.

To strengthen a naïve Occam bound, we use the idea that deep networks are insensitive to mild perturbations of their weights, and that this insensitivity leads to good generalization behavior. This concept has also been widely studied (e.g., Langford, 2002; Langford & Caruana, 2002; Hinton & van Camp, 1993; Hochreiter & Schmidhuber, 1997; Baldassi et al., 2015, 2016; Chaudhari et al., 2017; Keskar et al., 2017; Dziugaite & Roy, 2017; Neyshabur et al., 2018). As we do, some of these papers use a PAC-Bayes approach (Langford & Caruana, 2002; Dziugaite & Roy, 2017; Neyshabur et al., 2018). Neyshabur et al. (2018) arrive at a bound for non-random classifiers by computing the tolerance of a given deep net to noise, and bounding the difference between that net and a stochastic net to which they apply a PAC-Bayes bound. Like the present paper, Langford & Caruana (2002); Dziugaite & Roy (2017) work with a random classifier given by considering a normal distribution over the weights centered at the output of the training procedure. We borrow the observation of Dziugaite & Roy (2017) that the stochastic network is a convenient formalization of perturbation robustness.

The approaches to generalization most closely related to ours are, in summary:

| Reference | Structure | Non-vacuous (MNIST) | Non-vacuous (ImageNet) |
| --- | --- | --- | --- |
| Dziugaite & Roy (2017) | Perturbation robustness | ✓ | ✗ |
| Neyshabur et al. (2018) | Perturbation robustness | ✗ | ✗ |
| Arora et al. (2018) | Compressibility (from perturbation robustness) | ✗ | ✗ |
| Present paper | Compressibility and perturbation robustness | ✓ | ✓ |

These represent the best known generalization guarantees for deep neural networks. Our bound provides the first non-vacuous generalization guarantee for the ImageNet classification task, the de facto standard problem for which deep learning dominates. It is also largely agnostic to model architecture: we apply the same argument to both fully connected and convolutional networks. This is in contrast to some existing approaches that require extra analysis to extend bounds for fully connected networks to bounds for convolutional networks (Neyshabur et al., 2018; Konstantinos et al., ; Arora et al., 2018).

#### Compression.

The effectiveness of our work relies on the existence of good neural network compression algorithms. Neural network compression has been the subject of extensive interest in the last few years, motivated by engineering requirements such as computational or power constraints. We apply a relatively simple strategy in this paper, along the lines of Han et al. (2016), but we note that our bound is compatible with most forms of compression. See Cheng et al. (2018) for a survey of recent results in this field.

## 4 Main Result

We first describe a simple Occam’s razor type bound that translates the quality of a compression into a generalization bound for the compressed model. The idea is to choose the PAC-Bayes prior such that greater probability mass is assigned to models with short code length. In fact, the bound stated in this section may be obtained as a simple weighted union bound, and a variation is reported in McAllester (2013). However, embedding this bound in the PAC-Bayesian framework allows us to combine this idea, reflecting the explicit compressible structure of trained networks, with other ideas reflecting different properties of trained networks.

We consider a non-random classifier by taking the PAC-Bayes posterior ρ to be a point mass at ĥ, the output of the training (plus compression) procedure. Recall that computing the PAC-Bayes bound effectively reduces to computing KL(ρ, π).

###### Theorem 4.1.

Let |h|_c denote the number of bits required to represent hypothesis h using some pre-specified coding c. Let ρ denote the point mass at the compressed model ĥ. Let m denote any probability measure on the positive integers. There exists a prior π_c such that:

 KL(ρ, π_c) ≤ |ĥ|_c log 2 − log m(|ĥ|_c). (5)

This result relies only on the quality of the chosen coding and is agnostic to whether a lossy compression is applied to the model ahead of time. In practice, the code is chosen to reflect some explicit structure—e.g., sparsity—that is imposed by a lossy compression.

###### Proof.

Let H_c denote the set of estimators that correspond to decoded points, and note that ĥ ∈ H_c by construction. Consider the measure π_c on H_c:

 π_c(h) = (1/Z) m(|h|_c) 2^(−|h|_c), where Z = Σ_{h′∈H_c} m(|h′|_c) 2^(−|h′|_c). (6)

As the coding is injective on H_c, we have that Z ≤ 1. We may thus directly compute the KL-divergence from the definition to obtain the claimed result. ∎

###### Remark 4.2.

To apply the bound in practice, we must make a choice of m. A pragmatic solution is to simply consider a bound on the size of the model to be selected (e.g., in many cases it is reasonable to assume that the encoded model is smaller than some fixed number of bytes), and then take m to be uniform over all code lengths up to that bound.
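With these choices, the prior term of Theorem 4.1 is easy to evaluate. A minimal sketch, assuming an illustrative cap of 2³³ bits (1 GiB) on the code length and m uniform over lengths up to that cap:

```python
import math

# Sketch of the prior term in Theorem 4.1 with the pragmatic choice of m
# from Remark 4.2: m uniform over all code lengths up to a cap. The cap
# of 2**33 bits (1 GiB) is an illustrative assumption, not the paper's.

MAX_BITS = 2 ** 33

def occam_kl(code_length_bits, max_bits=MAX_BITS):
    """KL(rho, pi_c) <= |h|_c * log 2 - log m(|h|_c), in nats."""
    assert 0 <= code_length_bits <= max_bits
    log_m = -math.log(max_bits)  # m assigns mass 1/max_bits to each length
    return code_length_bits * math.log(2) - log_m

# A model compressed to 50 KiB contributes roughly 2.8e5 nats of KL:
print(occam_kl(50 * 1024 * 8))
```

Note that the choice of m contributes only an additive log term, so even a very conservative cap barely changes the bound.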

### 4.1 Using Robustness to Weight Perturbations

The simple bound above applies to an estimator that is compressible in the sense that its encoded length with respect to some fixed code is short. However, such a strategy does not consider any structure on the hypothesis space H. In practice, compression schemes will often fail to exploit some structure, and generalization bounds can be (substantially) improved by accounting for this fact. We empirically observe that trained neural networks are often tolerant to low levels of discretization of the trained weights, and also tolerant to some low level of added noise in the trained weights. Additionally, quantization is an essential step in numerous compression strategies (Han et al., 2016). We construct a PAC-Bayes bound that reflects this structure.

This analysis requires a compression scheme specified in more detail. We assume that the output of the compression procedure is a triplet (S, Q, C), where S = {s_1, …, s_k} denotes the locations of the non-zero weights, C = {c_1, …, c_r} is a codebook, and Q = (q_1, …, q_k), with q_i ∈ {1, …, r}, denotes the quantized values. Most state-of-the-art compression schemes can be formalized in this manner (Han et al., 2016).

Given such a triplet, we define the corresponding weights as:

 w_i(S, Q, C) = c_{q_j} if i = s_j, and w_i(S, Q, C) = 0 otherwise. (7)

Following Langford & Caruana (2002); Dziugaite & Roy (2017), we bound the generalization error of a stochastic estimator given by applying independent random normal noise to the non-zero weights of the network. Formally, we consider the (degenerate) multivariate normal centered at w(S, Q, C), with covariance Σ being a diagonal matrix such that Σ_ii = σ² if i ∈ S and Σ_ii = 0 otherwise.
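The decoding map of Eq. (7) is straightforward to implement; the following sketch uses illustrative names:

```python
# Sketch of the decoding map in Eq. (7): a triplet of non-zero locations S,
# codebook indices Q, and codebook C determines a dense weight vector.

def decode_weights(locations, indices, codebook, dim):
    """w_i = C[Q_j] if i = S_j, and 0 otherwise (indices here are 0-based)."""
    w = [0.0] * dim
    for pos, q in zip(locations, indices):
        w[pos] = codebook[q]
    return w

codebook = [-0.5, 0.0, 0.25, 0.5]  # r = 4 entries, i.e. 2-bit codes
weights = decode_weights(locations=[1, 4], indices=[3, 0],
                         codebook=codebook, dim=6)
print(weights)  # [0.0, 0.5, 0.0, 0.0, -0.5, 0.0]
```

Storing k codebook indices of ⌈log₂ r⌉ bits each, plus encodings of S and C, gives exactly the bit count appearing in the first term of Theorem 4.3 below.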

###### Theorem 4.3.

Let (S, Q, C) be the output of a compression scheme, and let ρ_{S,C,Q} be the stochastic estimator given by the weights decoded from the triplet and variance σ². Let c denote some arbitrary (fixed) coding scheme and let m denote an arbitrary distribution on the positive integers. Then, for any τ > 0, there is some PAC-Bayes prior π such that:

 KL(ρ_{S,C,Q}, π) ≤ (k⌈log₂ r⌉ + |S|_c + |C|_c) log 2 − log m(k⌈log₂ r⌉ + |S|_c + |C|_c) + Σ_{i=1}^{k} KL( Normal(c_{q_i}, σ²), Σ_{j=1}^{r} Normal(c_j, τ²) ). (8)

Note that we have written the KL-divergence of a distribution with an unnormalized measure (the last term), and in particular this term may (and often will) be negative. We defer the construction of the prior and the proof of Theorem 4.3 to the supplementary material.
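The final term of Eq. (8) compares a Gaussian against the unnormalized mixture Σ_j Normal(c_j, τ²); since that mixture integrates to r rather than 1, the term can indeed be negative. It can be estimated by Monte Carlo, as in the following sketch under illustrative parameter choices:

```python
import math, random

# Monte Carlo sketch of one summand in the last term of Eq. (8): the "KL"
# between Normal(c_q, sigma^2) and the unnormalized sum of r Gaussian
# densities centered at the codebook entries. Purely illustrative.

def log_normal_pdf(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def kl_to_unnormalized_mixture(c_q, sigma2, centers, tau2,
                               n_samples=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(c_q, math.sqrt(sigma2))
        log_p = log_normal_pdf(x, c_q, sigma2)
        # log of the *unnormalized* mixture density at x
        log_q = math.log(sum(math.exp(log_normal_pdf(x, c, tau2))
                             for c in centers))
        total += log_p - log_q
    return total / n_samples

centers = [-0.5, 0.0, 0.5]  # a toy 3-entry codebook
print(kl_to_unnormalized_mixture(0.0, 0.01, centers, 0.01))  # slightly negative
```

When c_q is itself a codebook entry and σ² = τ², the unnormalized mixture dominates the posterior density pointwise, so each summand is non-positive; this is the "number of bits gained back" in Remark 4.4.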

###### Remark 4.4.

We may obtain the first term from the simple Occam’s bound described in Theorem 4.1 by choosing the coding of the quantized values as a simple array of integers of the correct bit length. The second term thus describes the adjustment (or number of bits we “gain back”) from considering neighbouring estimators.

## 5 Generalization bounds in practice

In this section we present examples combining our theoretical arguments with state-of-the-art neural network compression schemes. (Code to reproduce the experiments is available in the supplementary material.) Recall that almost all other approaches to bounding generalization error of deep neural networks yield vacuous bounds for realistic problems. The one exception is Dziugaite & Roy (2017), which succeeds by retraining the network in order to optimize the generalization bound. We give two examples applying our generalization bounds to the models output by modern neural net compression schemes. In contrast to earlier results, this leads immediately to non-vacuous bounds on realistic problems. The strength of the Occam bound provides evidence that the connection between compressibility and generalization has substantive explanatory power.

We report confidence bounds based on the measured effective compressed size of the networks. The bounds are obtained by combining the PAC-Bayes bound of Theorem 2.1 with Theorem 4.3, showing that KL(ρ, π) is bounded by the “effective compressed size”. We note a small technical modification: we choose the prior variance τ² layerwise by a grid search, which adds a negligible contribution to the effective size (see Section A.1 for the technical details of the bound).

#### LeNet-5 on MNIST.

Our first experiment is performed on the MNIST dataset, a dataset of 60k grayscale images of handwritten digits. We fit the LeNet-5 (LeCun et al., 1998) network, one of the first convolutional networks. LeNet-5 has two convolutional layers and two fully connected layers, for a total of 431k parameters.

We apply a pruning and quantization strategy similar to that described in Han et al. (2016). We prune the network using Dynamic Network Surgery (Guo et al., 2016), removing all but a small fraction of the network weights. We then quantize the non-zero weights using a codebook with 4 bits. The locations of the non-zero coordinates are stored in compressed sparse row format, with the index differences encoded using arithmetic compression.
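The prune-then-quantize pipeline described above can be sketched as follows. This is a deliberate simplification (one-shot magnitude pruning and a uniform codebook) rather than the Dynamic Network Surgery and learned codebook actually used; all names are illustrative.

```python
# Simplified sketch of a prune-then-quantize pipeline in the style of
# Han et al. (2016): magnitude pruning to a target sparsity, then a
# 4-bit uniform codebook over the surviving weights.

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping a (1 - sparsity) fraction."""
    k = int(len(weights) * (1 - sparsity))  # number of surviving weights
    threshold = sorted(abs(w) for w in weights)[-k] if k > 0 else float("inf")
    return [wi if abs(wi) >= threshold else 0.0 for wi in weights]

def quantize(weights, bits=4):
    """Map each non-zero weight to the nearest of 2**bits uniform codebook levels."""
    nonzero = [w for w in weights if w != 0.0]
    if not nonzero:
        return weights, []
    lo, hi = min(nonzero), max(nonzero)
    r = 2 ** bits
    codebook = [lo + (hi - lo) * i / (r - 1) for i in range(r)]
    quantized = [min(codebook, key=lambda c: abs(c - w)) if w != 0.0 else 0.0
                 for w in weights]
    return quantized, codebook

w = [0.9, -0.05, 0.4, 0.01, -0.8, 0.002]
pruned = prune_by_magnitude(w, sparsity=0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.8, 0.0]
quantized, codebook = quantize(pruned, bits=4)
```

The effective compressed size is then determined by the stored locations, the codebook, and the per-weight indices, exactly the quantities appearing in Theorem 4.3.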

We consider the stochastic classifier given by adding Gaussian noise to each non-zero coordinate before each forward pass; the standard deviation of the noise is set to a fixed fraction of the difference between the largest and smallest weight in the filter. This results in a negligible drop in classification performance. From the measured effective size of the compressed model, we obtain a non-vacuous confidence bound on the test error.

#### ImageNet.

The ImageNet dataset (Russakovsky et al., 2015) is a dataset of about 1.2 million natural images, categorized into 1000 different classes. ImageNet is substantially more complex than the MNIST dataset, and classical architectures are correspondingly more complicated. For example, AlexNet (Krizhevsky et al., 2012) and VGG-16 (Simonyan & Zisserman, 2014) contain 61 and 138 million parameters, respectively. Non-vacuous bounds for such models are still out of reach when applying our bound with current compression techniques. However, motivated by computational restrictions, there has been extensive interest in designing more parsimonious architectures that achieve comparable or better performance with significantly fewer parameters (Iandola et al., 2016; Howard et al., 2017; Zhang et al., 2017b). By combining neural net compression schemes with parsimonious models of this kind, we demonstrate non-vacuous bounds on models with better performance than AlexNet.

Our simple Occam bound requires only minimal assumptions, and can be directly applied to existing compressed networks. For example, Iandola et al. (2016) introduce the SqueezeNet architecture and explicitly study its compressibility; they obtain a model with better performance than AlexNet that can be written in under 0.5 MB. A direct application of our naïve Occam bound yields a non-vacuous confidence bound on the test error. To apply our stronger bound—taking into account the noise robustness—we train and compress a network from scratch. We consider MobileNet 0.5 (Howard et al., 2017), which in its uncompressed form has better performance and smaller size than SqueezeNet (Iandola et al., 2016).

Zhu & Gupta (2017) study pruning of MobileNet in the context of energy-efficient inference in resource-constrained environments. We use their pruning scheme with some small adjustments: in particular, we use Dynamic Network Surgery (Guo et al., 2016) as our pruning method, but follow a similar schedule. We prune a large fraction of the total parameters; the pruned model retains good validation accuracy. We quantize the weights using a codebook strategy (Han et al., 2016). We consider the stochastic classifier given by adding Gaussian noise to the non-zero weights, with the variance set in each layer so as not to degrade our prediction performance. For simplicity, we ignore biases and batch normalization parameters in our bound, as they represent a negligible fraction of the parameters. We consider top-1 accuracy (whether the most probable guess is correct) and top-5 accuracy (whether any of the 5 most probable guesses is correct).

The final “effective compressed size” is small relative to the naive parameter count, and the stochastic network retains high top-1 and top-5 accuracy on the training data. Together, the small effective compressed size and high training accuracy yield non-vacuous bounds for top-1 and top-5 test error. See Appendix B for the details of the experiment.

## 6 Limits on compressibility

We have shown that compression results directly imply generalization bounds, and that these may be applied effectively to obtain non-vacuous bounds on neural networks. In this section, we provide a complementary view: overfitting implies a limit on compressibility.

#### Theory.

We first prove that the entropy of estimators that tend to overfit is bounded below in terms of the expected degree of overfitting; this implies that such estimators fail to compress on average. As previously, consider a sample S_n drawn i.i.d. from some distribution D, and an estimator (or selection procedure) ĥ = ĥ(S_n), which we consider as a (random) function of the training data. The key observation is:

 P( L(ĥ(x), y) = 1 | (x, y) ∈ S_n ) = E[ L̂(ĥ) ],   P( L(ĥ(x), y) = 1 | (x, y) ∉ S_n ) = E[ L(ĥ) ].

That is, the probability of misclassifying an example in the training data is smaller than the probability of misclassifying a fresh example, and the expected size of this difference is determined by the expected degree of overfitting. By Bayes’ rule, we thus see that the more ĥ overfits, the better it is able to distinguish samples in the training set from fresh examples. Such an estimator must thus “remember” a significant portion of the training data set, and its entropy is thus lower bounded by the entropy of its “memory”.

###### Theorem 6.1.

Let D, S_n, and ĥ be as in the text immediately preceding the theorem. For simplicity, assume that both the sample space and the hypothesis set are discrete. Then,

 H(ĥ) ≥ n · g( E[L̂(ĥ)], E[L(ĥ)] ), (9)

where g denotes some non-negative function (given explicitly in the proof).

We defer the proof to the supplementary material.

#### Experiments.

We now study this effect empirically. The basic tool is the randomization test of Zhang et al. (2017a): we consider a fixed architecture and a number of datasets produced by randomly relabeling the categories of some fraction of examples from a real-world dataset.

If the model has sufficiently high capacity, it can be fit with approximately zero training loss on each dataset. In this case, the generalization error is given by the fraction of examples that have been randomly relabeled. We apply a standard neural net compression tool to each of the trained models, and we observe that the models with worse generalization require more bits to describe in practice.

For simplicity, we consider the CIFAR-10 dataset, a collection of 40000 images categorized into 10 classes. We fit the ResNet (He et al., 2016) architecture with 56 layers, with no pre-processing and no penalization, on CIFAR-10 datasets in which the labels are subjected to varying levels of randomization. As noted in Zhang et al. (2017a), the network is able to achieve perfect training accuracy no matter the level of randomization.

We then compress the networks fitted at each level of label randomization by pruning to a given target sparsity. Surprisingly, all networks can be pruned to substantial sparsity with essentially no loss of training accuracy, even on completely random labels. However, we observe that as the compression level increases further, the scenarios with more randomization exhibit a faster decay in training accuracy; see Fig. 1. This is consistent with the fact that network size controls generalization error.

## 7 Discussion

It has been a long standing observation by practitioners that despite the large capacity of models used in deep learning practice, empirical results demonstrate good generalization performance. We show that with no modifications, a standard engineering pipeline of training and compressing a network leads to demonstrable and non-vacuous generalization guarantees. These are the first such results on networks and problems at a practical scale, and mirror the experience of practitioners that best results are often achieved without heavy regularization or modifications to the optimizer (Wilson et al., 2017).

The connection between compression and generalization raises a number of important questions. Foremost, what are its limitations? The fact that our bounds are non-vacuous implies that the link between compression and generalization is non-trivial. However, the bounds are far from tight. If significantly better compression rates were achievable, the resulting bounds would even be of practical value. For example, if a network trained on ImageNet to high training and testing accuracy could be compressed to an effective size about one order of magnitude smaller than our current compression achieves, that would yield a sharp bound on the generalization error.

## 8 Acknowledgements

We acknowledge computing resources from Columbia University’s Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.

## References

• Arora et al. (2018) S. Arora, R. Ge, B. Neyshabur, and Y. Zhang. Stronger generalization bounds for deep nets via a compression approach, February 2018.
• Arora (2017) Sanjeev Arora. Generalization Theory and Deep Nets, An introduction.
• Baldassi et al. (2015) Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Phys. Rev. Lett., 115:128101, Sep 2015.
• Baldassi et al. (2016) Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences, 113(48):E7655–E7662, 2016.
• Bartlett et al. (2017) Peter Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks, 2017.
• Blumer et al. (1987) Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Occam’s razor. Information Processing Letters, 24(6):377 – 380, 1987.
• Bégin et al. (2014) Luc Bégin, Pascal Germain, François Laviolette, and Jean-Francis Roy. PAC-Bayesian Theory for Transductive Learning. In Samuel Kaski and Jukka Corander (eds.), Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, volume 33 of Proceedings of Machine Learning Research, pp. 105–113, Reykjavik, Iceland, 22–25 Apr 2014. PMLR.
• Catoni (2007) Olivier Catoni. Pac-Bayesian supervised classification: the thermodynamics of statistical learning, volume 56 of Institute of Mathematical Statistics Lecture Notes—Monograph Series. Institute of Mathematical Statistics, Beachwood, OH, 2007.
• Chaudhari et al. (2017) Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. In International Conference on Learning Representations, 2017.
• Cheng et al. (2018) Y. Cheng, D. Wang, P. Zhou, and T. Zhang. Model compression and acceleration for deep neural networks: The principles, progress, and challenges. IEEE Signal Processing Magazine, 35(1):126–136, Jan 2018.
• Dziugaite & Roy (2017) Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Conference on Uncertainty in Artificial Intelligence, 2017.
• Dziugaite & Roy (2018) Gintare Karolina Dziugaite and Daniel M. Roy. Entropy-sgd optimizes the prior of a pac-bayes bound: Generalization properties of entropy-sgd and data-dependent priors. In International Conference on Machine Learning, 2018.
• Guo et al. (2016) Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, 2016.
• Han et al. (2016) Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In International Conference on Learning Representations, 2016.
• Harvey et al. (2017) Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In Satyen Kale and Ohad Shamir (eds.), Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pp. 1064–1068, Amsterdam, Netherlands, 07–10 Jul 2017. PMLR.
• He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, June 2016.
• Hinton & van Camp (1993) Geoffrey E. Hinton and Drew van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13, 1993.
• Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, January 1997.
• Howard et al. (2017) Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications, 2017.
• Iandola et al. (2016) Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5mb model size. arXiv:1602.07360, 2016.
• Keskar et al. (2017) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
• Konstantinos et al. P. Konstantinos, M. Davies, and P. Vandergheynst. PAC-Bayesian Margin Bounds for Convolutional Neural Networks - Technical Report.
• Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems, pp. 1097–1105. 2012.
• Langford (2002) John Langford. Quantitatively tight sample complexity bounds. PhD thesis, Carnegie Mellon University, 2002.
• Langford & Caruana (2002) John Langford and Rich Caruana. (not) bounding the true error. In Advances in Neural Information Processing Systems, pp. 809–816, 2002.
• Laviolette (2017) François Laviolette. A tutorial on pac-bayesian theory. Talk at the NIPS 2017 Workshop: (Almost) 50 Shades of PAC-Bayesian Learning: PAC-Bayesian trends and insights, 2017.
• LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 11 1998.
• MacKay (1992) David MacKay. Bayesian model comparison and backprop nets. In Advances in Neural Information Processing Systems, pp. 839–846. 1992.
• McAllester (2003) David McAllester. Simplified pac-bayesian margin bounds. In Bernhard Schölkopf and Manfred K. Warmuth (eds.), Learning Theory and Kernel Machines, pp. 203–215, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-45167-9.
• McAllester (1999) David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, Dec 1999.
• McAllester (2013) David A. McAllester. A PAC-Bayesian tutorial with A dropout bound, 2013.
• Neyshabur et al. (2017) Behnam Neyshabur, Srinadh Bhojanapalli, David Mcallester, and Nati Srebro. Exploring generalization in deep learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, pp. 5947–5956. 2017.
• Neyshabur et al. (2018) Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks. In International Conference on Learning Representations, 2018.
• Rasmussen & Ghahramani (2001) Carl Edward Rasmussen and Zoubin Ghahramani. Occam’s razor. In Advances in Neural Information Processing Systems (NIPS), 2001.
• Rissanen (1986) Jorma Rissanen. Stochastic complexity and modeling. Ann. Statist., 14(3):1080–1100, 09 1986.
• Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 12 2015.
• Schmidt-Hieber (2017) J. Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function, August 2017.
• Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2014.
• Wilson et al. (2017) Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, pp. 4148–4158. 2017.
• Zhang et al. (2017a) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017a.
• Zhang et al. (2017b) Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices, 2017b.
• Zhu & Gupta (2017) Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of model pruning for model compression. In NIPS Workshop on Machine Learning on the Phone and other Consumer Devices, 12 2017.

## Appendix A Proof of Theorem 4.3

In this section we describe the construction of the prior and prove the bound on the KL-divergence claimed in Theorem 4.3. Intuitively, we would like to express our prior as a mixture over all possible decoded points of the compression algorithm. More precisely, we define the mixture component associated with a triplet (S, Q, C) as:

 \pi_{S,Q,C} = \mathrm{Normal}\big(w(S,Q,C),\, \tau^2\big). \qquad (10)

We then define our prior as a weighted mixture over all triplets, weighted by the code length of the triplet:

 \pi \propto \sum_{S,Q,C} m\big(|S|_c + |C|_c + k\lceil \log r \rceil\big)\, 2^{-|S|_c - |C|_c - k\lceil \log r \rceil}\, \pi_{S,Q,C}, \qquad (11)

where the sum is taken over all S and C which are representable by our code, and over all Q ∈ {1, …, r}^k. In practice, S takes values in all possible subsets of the weight indices, and C takes values in 𝒞^r, where 𝒞 is a chosen finite subset of representable real numbers (such as those that may be represented by IEEE-754 single precision numbers), and r is a chosen quantization level. We now give the proof of Theorem 4.3.
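For intuition, the code length appearing in the mixture weights can be computed for a toy compressed representation. The concrete codes sketched here, a subset-enumeration code for S and raw 32-bit floats for C, are illustrative assumptions rather than the exact codes used in our experiments:

```python
import math

def compressed_size_bits(p, k, r, float_bits=32):
    """Code length |S|_c + |C|_c + k * ceil(log2 r) in bits for a triplet:
    - S: positions of the k surviving weights among p, coded by
      enumerating subsets (ceil(log2 C(p, k)) bits),
    - C: a codebook of r centers stored as raw float_bits floats,
    - Q: k indices into the codebook, ceil(log2 r) bits each."""
    bits_S = math.ceil(math.log2(math.comb(p, k)))
    bits_C = r * float_bits
    bits_Q = k * math.ceil(math.log2(r))
    return bits_S + bits_C + bits_Q
```

The dominant term at high sparsity is typically the k⌈log r⌉ bits spent on the quantized values Q, which is why aggressive pruning and small codebooks directly tighten the bound.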

###### Proof.

We have that:

 \pi = \frac{1}{Z} \sum_{S,Q,C} m\big(|S|_c + |C|_c + k\lceil \log r \rceil\big)\, 2^{-|S|_c - |C|_c - k\lceil \log r \rceil}\, \pi_{S,Q,C}, \qquad (12)

where we must have Z ≤ 1 by the same argument as in the proof of Theorem 4.1.

Suppose that the output of our compression algorithm is a triplet (Ŝ, Q̂, Ĉ). We recall that our posterior ρ is given by a normal centered at w(Ŝ, Q̂, Ĉ) with variance σ², and we may thus compute the KL-divergence:

 \mathrm{KL}(\rho, \pi) \le \mathrm{KL}\Big(\rho,\, \sum_{S,Q,C} m\big(|S|_c + |C|_c + k\lceil \log r \rceil\big)\, 2^{-|S|_c - |C|_c - k\lceil \log r \rceil}\, \pi_{S,Q,C}\Big)
 \le \mathrm{KL}\Big(\rho,\, \sum_{Q} m\big(|\hat S|_c + |\hat C|_c + \hat k\lceil \log \hat r \rceil\big)\, 2^{-|\hat S|_c - |\hat C|_c - \hat k\lceil \log \hat r \rceil}\, \pi_{\hat S,Q,\hat C}\Big)
 \le \big(|\hat S|_c + |\hat C|_c + \hat k\lceil \log \hat r \rceil\big)\log 2 - \log m\big(|\hat S|_c + |\hat C|_c + \hat k\lceil \log \hat r \rceil\big) + \mathrm{KL}\Big(\rho,\, \sum_{Q} \pi_{\hat S,Q,\hat C}\Big). \qquad (13)

We are now left with the mixture term, which is a mixture of \hat{r}^{\hat k} components in dimension \hat k, and is thus computationally intractable. However, we note that we are in a special case where the mixture itself is independent across coordinates. Indeed, letting ϕ_τ denote the density of the univariate normal distribution with mean 0 and variance τ², we note that we may write the mixture as:

 \Big(\sum_{Q} \pi_{\hat S,Q,\hat C}\Big)(x) = \sum_{q_1,\dots,q_{\hat k}=1}^{\hat r}\, \prod_{i=1}^{\hat k} \phi_\tau\big(x_i - \hat c_{q_i}\big) = \prod_{i=1}^{\hat k}\, \sum_{q=1}^{\hat r} \phi_\tau\big(x_i - \hat c_{q}\big).

Additionally, as our chosen stochastic estimator is independent over the coordinates, the KL-divergence decomposes over the coordinates, and we obtain:

 \mathrm{KL}\Big(\rho,\, \sum_{Q} \pi_{\hat S,Q,\hat C}\Big) = \sum_{i=1}^{\hat k} \mathrm{KL}\Big(\rho_i,\, \sum_{q=1}^{\hat r} \phi_\tau\big(\cdot - \hat c_q\big)\Big), \qquad (14)

where ρ_i denotes the i-th coordinate marginal of ρ.

Plugging the above computation into (13) gives the desired result. ∎
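The per-coordinate KL terms in the decomposition above have no closed form, but they are one-dimensional and easy to estimate numerically. The following Monte Carlo sketch is illustrative, not the evaluation routine from our experiments, and it normalizes the codebook mixture for readability (the unnormalized sum over the r codebook entries differs by an additive log r):

```python
import numpy as np

def kl_normal_vs_codebook(w, sigma, centers, tau, n_samples=200_000, seed=0):
    """Monte Carlo estimate of KL( N(w, sigma^2) || (1/r) sum_j N(c_j, tau^2) ),
    a per-coordinate term of the decomposed KL-divergence."""
    centers = np.asarray(centers, dtype=float)
    rng = np.random.default_rng(seed)
    x = rng.normal(w, sigma, size=n_samples)
    # log-density of the posterior N(w, sigma^2) at the samples
    log_p = -0.5 * ((x - w) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    # log-density of each codebook component, then a stable log-sum-exp
    comps = (-0.5 * ((x[:, None] - centers[None, :]) / tau) ** 2
             - np.log(tau * np.sqrt(2 * np.pi)))
    m = comps.max(axis=1, keepdims=True)
    log_q = m[:, 0] + np.log(np.exp(comps - m).sum(axis=1)) - np.log(len(centers))
    return float(np.mean(log_p - log_q))
```

When the centers are well separated and the posterior sits on one of them, the estimate approaches log r, the cost of pointing to one codebook entry.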

### A.1 Details in Practical Uses of the Bound

Although Theorem 4.3 contains the main mathematical content of our bound, applying the bound in a fully correct fashion requires some bookkeeping, which we detail in this section. In particular, we are required to select a number of parameters (such as the prior variances). We extend the bound to account for such unrestricted (and possibly data-dependent) parameter selection. Typically, such adjustments have a negligible effect on the computed bounds.

###### Theorem A.1 (Union Bound for Discrete Parameters).

Let π_ξ, ξ ∈ Ξ, denote a family of priors parameterized by a discrete parameter ξ, which takes values in a finite set Ξ. There exists a prior π such that for any posterior ρ and any ξ ∈ Ξ:

 \mathrm{KL}(\rho, \pi) \le \mathrm{KL}(\rho, \pi_\xi) + \log |\Xi|. \qquad (15)
###### Proof.

We define π as a uniform mixture of the π_ξ:

 \pi = \frac{1}{|\Xi|} \sum_{\xi \in \Xi} \pi_\xi. \qquad (16)

We then have that:

 \mathrm{KL}(\rho, \pi) = \mathbb{E}_{X \sim \rho} \log \frac{d\rho}{d\pi}, \qquad (17)

but we can note that π ≥ |Ξ|^{-1} π_ξ for every ξ ∈ Ξ, from which we deduce that:

 \mathrm{KL}(\rho, \pi) \le \mathrm{KL}(\rho, \pi_\xi) + \log |\Xi|. \qquad (18)

∎

We make liberal use of this variant to control a number of discrete parameters which are chosen empirically (such as the quantization resolution at each layer). We also use this bound to control a number of continuous quantities (such as the prior variances) by discretizing them as IEEE-754 single-precision (32-bit) floating-point numbers.
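A minimal numerical sketch of how Theorem A.1 is used in practice: choose the prior standard deviation from a discrete grid and pay log|Ξ| nats for the data-dependent choice. The grid and the Gaussian setting here are illustrative assumptions:

```python
import numpy as np

def kl_normals(mu_q, s_q, mu_p, s_p):
    """KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) ) for scalars, in nats."""
    return np.log(s_p / s_q) + (s_q ** 2 + (mu_q - mu_p) ** 2) / (2 * s_p ** 2) - 0.5

def kl_with_discrete_prior_choice(mu_q, s_q, prior_stds):
    """Pick the prior standard deviation from a finite grid that minimizes
    the KL term, paying the additive log|Xi| penalty of Theorem A.1 for
    the data-dependent choice."""
    kls = [kl_normals(mu_q, s_q, 0.0, s) for s in prior_stds]
    return min(kls) + np.log(len(prior_stds))
```

Because the penalty is only logarithmic in the grid size, even a fine grid of candidate variances has a negligible effect on the final bound, which is why these adjustments are cheap.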

## Appendix B Experiment details

Code used to run these experiments is available on github: https://github.com/wendazhou/nnet-compression-generalization.

### B.1 LeNet-5

We train the baseline model for LeNet-5 using stochastic gradient descent with momentum and no data augmentation. The batch size is set to 1024, and the learning rate is decayed using an inverse time decay starting at 0.01 and decaying every 125 steps. We also apply a small penalty on the weights. We train for a total of 20000 steps.

We carry out the pruning using Dynamic Network Surgery (Guo et al., 2016). The threshold is selected per layer as the mean of the layer coefficients offset by a constant multiple of their standard deviation, where the multiple follows a piecewise-constant schedule starting at 0.0 and ending at 4.0. The pruning probability follows a piecewise-constant schedule starting at 1.0 and decaying during training. We train for 30000 steps using the ADAM optimizer.
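The thresholding rule can be sketched as follows. This is a simplified one-shot version: the actual procedure anneals the multiple over training and can revive pruned weights, which this sketch omits:

```python
import numpy as np

def dns_mask(weights, multiple):
    """Layer-wise pruning mask in the spirit of Dynamic Network Surgery:
    keep a weight when its magnitude exceeds mean(|w|) + multiple * std(|w|)
    computed over its layer."""
    mag = np.abs(weights)
    return mag > mag.mean() + multiple * mag.std()
```

Raising the multiple from 0.0 toward 4.0 monotonically shrinks the set of surviving weights, which is what the piecewise-constant schedule exploits.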

We quantize all the weights using a 4-bit codebook per layer (Han et al., 2016), initialized using k-means. A single cluster in each layer is pinned to exactly zero and contains the pruned weights. The remaining cluster centers are learned using the ADAM optimizer over 1000 steps.
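A sketch of the codebook quantization step, with plain Lloyd iterations standing in for the k-means fit and one codebook entry pinned at exactly zero; this is illustrative and not the training-integrated quantization we use:

```python
import numpy as np

def quantize_with_zero_cluster(weights, n_bits=4, n_iters=20):
    """Codebook quantization in the style of Han et al. (2016): the zero
    entry holds the pruned weights, and the remaining 2**n_bits - 1
    centers are fit to the surviving weights by Lloyd iterations."""
    nonzero = weights[weights != 0.0]
    n_centers = 2 ** n_bits - 1
    # initialize centers on quantiles of the surviving weights
    centers = np.quantile(nonzero, np.linspace(0.0, 1.0, n_centers))
    for _ in range(n_iters):
        assign = np.argmin(np.abs(nonzero[:, None] - centers[None, :]), axis=1)
        for j in range(n_centers):
            if np.any(assign == j):
                centers[j] = nonzero[assign == j].mean()
    # quantize: zeros map to the reserved zero entry, others to nearest center
    q = np.zeros_like(weights)
    nz = weights != 0.0
    nearest = np.argmin(np.abs(weights[nz][:, None] - centers[None, :]), axis=1)
    q[nz] = centers[nearest]
    return q, np.concatenate([[0.0], centers])
```

After this step each weight is described by a codebook index, so the per-weight cost drops to n_bits regardless of the float precision of the centers.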

### B.2 MobileNet

MobileNets are a class of networks that make use of depthwise separable convolutions. Each layer is composed of two convolutions, with one depthwise convolution and one pointwise convolution. We use the pre-trained MobileNet model provided by Google as our baseline model. We then prune the pointwise (and fully connected) layers only, using Dynamic Network Surgery. The threshold is set for each weight as a quantile of the absolute values of the coordinates, which is increased according to the schedule given in Zhu & Gupta (2017). As the lower layers are smaller and more sensitive, we scale the target sparsity for each layer according to its size: the target sparsity is interpolated linearly as a function of the number of elements in the layer relative to the largest layer (the final layer). We use stochastic gradient descent with momentum, decay the learning rate with an inverse time decay schedule, and tune the pruning schedule so that the target sparsity is reached well before the end of training.

We quantize the weights using a codebook for each layer, with fewer bits allotted to the last fully connected layer than to the other layers. The pointwise and fully connected codebooks have a reserved encoding for exact zero, whereas the non-pruned depthwise codebooks are fully learned. We initialize the cluster assignment using k-means and train the cluster centers with stochastic gradient descent with momentum. Note that we also adjust the batch normalization moving-average momentum in this step so that the moving averages adapt faster.

To witness noise robustness, we only add noise to the pointwise and fully connected layers. For the fully connected layer, we are able to add Gaussian noise with standard deviation equal to a small fraction of the difference in magnitude between the largest and smallest coordinate in the layer. For pointwise layers, we add noise of the same form, scaled linearly by the relative size of the layer compared to the fully connected layer. These quantities were chosen to minimally degrade the training performance while obtaining good improvements on the generalization bound: in our case, we observe that the top-1 training accuracy is only slightly reduced when the noise is applied.

## Appendix C Proof that overfitting implies high classifier entropy

As previously, consider a sample S sampled i.i.d. from some distribution D, and an estimator (or selection procedure) ĥ = ĥ(S). The statement that ĥ overfits may then be captured in terms of the training and testing error of ĥ, namely that E(L(ĥ)) > E(L̂(ĥ)), where L̂ and L denote the in-sample and out-of-sample loss. We note that this statement depends on the randomness of the sample through its impact on ĥ, and we will make the interpretation precise momentarily.

Such an estimator that overfits may be transformed into a procedure which discriminates between samples from the training and testing set. Indeed, let (x, y) be drawn from an independent equal-weight mixture of the uniform distribution on S and the data-generating distribution D, where by independent we mean that the mixture indicator is independent of S. Then, we have by Bayes' rule that:

 P\big((x,y)\in S \mid L(\hat h(x),y)=1\big) = \frac{P\big(L(\hat h(x),y)=1 \mid (x,y)\in S\big)}{P\big(L(\hat h(x),y)=1 \mid (x,y)\in S\big) + P\big(L(\hat h(x),y)=1 \mid (x,y)\notin S\big)}, \qquad (19)

where the probability is taken with respect to the distribution of (x, y). By the definition of the in-sample and out-of-sample loss, we have by independence that:

 P\big(L(\hat h(x),y)=1 \mid (x,y)\in S\big) = \mathbb{E}\big(\hat L(\hat h)\big), \qquad P\big(L(\hat h(x),y)=1 \mid (x,y)\notin S\big) = \mathbb{E}\big(L(\hat h)\big).

We may thus rewrite (19) (and its analogue conditioned on the event L(ĥ(x), y) = 0) to obtain:

 P\big((x,y)\in S \mid L(\hat h(x),y)=1\big) = \Big(1 + \frac{\mathbb{E}(L(\hat h))}{\mathbb{E}(\hat L(\hat h))}\Big)^{-1}, \qquad P\big((x,y)\in S \mid L(\hat h(x),y)=0\big) = \Big(1 + \frac{1-\mathbb{E}(L(\hat h))}{1-\mathbb{E}(\hat L(\hat h))}\Big)^{-1}.

We thus see that the more ĥ overfits, the better it is able to distinguish a sample from the training and testing set. Such an estimator must thus “remember” a significant portion of the training data set, and its entropy is thus lower bounded by the entropy of its “memory”. Quantitatively, the quality of ĥ as a discriminator between the training and testing set is captured by the quantities

 p_n = P\big((x,y)\in S \mid L(\hat h(x),y)=1\big), \qquad q_n = P\big((x,y)\in S \mid L(\hat h(x),y)=0\big), \qquad l_n = P\big(L(\hat h(x),y)=1\big).

We may interpret p_n as the average proportion of false positives and q_n as the average proportion of true negatives when viewing ĥ as a classifier of training-set membership. We prove that if those quantities are substantially different from those of a random classifier, then ĥ must have high entropy. We formalize this statement and provide a proof below.
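Given expected training and testing errors, the discriminator probabilities in the displays above are straightforward to evaluate. In this small sketch, `train_err` and `test_err` stand for E(L̂(ĥ)) and E(L(ĥ)) respectively:

```python
def discriminator_probs(train_err, test_err):
    """Membership probabilities from the two displays above: given that
    the classifier errs (L=1) or is correct (L=0) on a point drawn from
    the half/half mixture, the probability the point is a training point."""
    p = 1.0 / (1.0 + test_err / train_err)              # P((x,y) in S | L = 1)
    q = 1.0 / (1.0 + (1 - test_err) / (1 - train_err))  # P((x,y) in S | L = 0)
    return p, q
```

With no overfitting (equal errors) both probabilities are exactly 1/2; the more the test error exceeds the training error, the further they drift from 1/2 and the better the estimator discriminates.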

###### Theorem C.1.

Let S = ((x_1, y_1), …, (x_n, y_n)) be sampled i.i.d. from some distribution D, and let ĥ = ĥ(S) be a selection procedure which is only a function of the unordered set of sample points. Let us view ĥ as a random quantity through the distribution induced by the sample S. For simplicity, we assume that both the sample space and the hypothesis set are discrete. We have that:

 H(\hat h) \ge n\, g(p_n, q_n, l_n), \qquad (20)

where g denotes some non-negative function.

###### Proof.

Consider a sequence of pairs ((x_i^0, y_i^0), (x_i^1, y_i^1)), i = 1, …, n, where each point is sampled independently according to the data-generating distribution D. Let E denote the sequence of sample pairs. Additionally, let B_1, …, B_n denote i.i.d. Bernoulli(1/2) random variables, and let B denote the sequence (B_1, …, B_n). We may construct a sample S by selecting elements of E according to B:

 S = \big((x_i^{B_i}, y_i^{B_i})\big)_{i=1,\dots,n}, \qquad (21)

and we note that S is an i.i.d. sample of size n from the data-generating distribution D. Additionally, by independence of B and E, we have that H(B ∣ E) = n log 2. On the other hand, we have

 H(B \mid E, \hat h) \le \sum_{i=1}^{n} H(B_i \mid E, \hat h) \le \sum_{i=1}^{n} H\big(B_i \mid \hat h(x_i^0), \hat h(x_i^1), y_i^0, y_i^1\big) \le \sum_{i=1}^{n} H\big(B_i \mid L(\hat h(x_i^0), y_i^0)\big).

We now compute the conditional distribution of B_i given L_i^0 := L(ĥ(x_i^0), y_i^0). In particular, we claim that, conditional on the event L_i^0 = 1, the variable B_i is Bernoulli with parameter 1 − p_n. Indeed, note that (x_i^0, y_i^0), ĥ and B_i have the same joint distribution as if they were generated by the procedure described before (19): sample the pairs i.i.d. according to the data-generating distribution, let ĥ be the corresponding estimator and B_i an independent Bernoulli variable, and let the observed point be drawn uniformly from S if B_i = 0 and according to the data-generating distribution if B_i = 1. Note that this distribution does not depend on i due to the assumption that ĥ is measurable with respect to the unordered sample S. By (19), we thus deduce that:

 P\big(B_i = 0 \mid L_i^0 = 1\big) = p_n, \qquad (22)

which yields the claimed conditional distribution.

Similarly, we may compute the distribution of B_i conditional on the event L_i^0 = 0, as P(B_i = 0 ∣ L_i^0 = 0) = q_n. Writing l_n = P(L_i^0 = 1), we now have that:

 H\big(B_i \mid L_i^0\big) = l_n h_b(p_n) + (1 - l_n) h_b(q_n), \qquad (23)

where h_b denotes the binary entropy function. Finally, we apply the chain rule for entropy. We note that

 H(B \mid E, \hat h) = H(B, \hat h \mid E) - H(\hat h \mid E), \qquad (24)

and note that H(B, ĥ ∣ E) ≥ H(B ∣ E) = n log 2 and H(ĥ) ≥ H(ĥ ∣ E). In summary,

 H(\hat h) \ge H(\hat h \mid E) = H(B, \hat h \mid E) - H(B \mid E, \hat h) \ge n \log 2 - n\big[l_n h_b(p_n) + (1 - l_n) h_b(q_n)\big] = n\big[h_b(1/2) - l_n h_b(p_n) - (1 - l_n) h_b(q_n)\big],

which yields (20) with g(p, q, l) = h_b(1/2) − l h_b(p) − (1 − l) h_b(q). ∎
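The resulting bound is easy to evaluate numerically; here is a small sketch, with entropies in nats and h_b clipped away from 0 and 1 for numerical safety:

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy h_b in nats, with h_b(0) = h_b(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def entropy_lower_bound(n, p_n, q_n, l_n):
    """The bound (20): H(h_hat) >= n * [h_b(1/2) - l_n h_b(p_n)
    - (1 - l_n) h_b(q_n)], clipped at zero for numerical safety."""
    g = (binary_entropy(0.5) - l_n * binary_entropy(p_n)
         - (1 - l_n) * binary_entropy(q_n))
    return n * max(g, 0.0)
```

A chance-level discriminator (p_n = q_n = 1/2) yields a trivial bound of zero, while a perfect discriminator forces H(ĥ) ≥ n log 2, i.e. the estimator must carry about one bit of "memory" per training example.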