1 Introduction
Deep models have proven to work well in applications such as computer vision (Krizhevsky et al., 2012; He et al., 2014; Karpathy et al., 2014), speech recognition (Mohamed et al., 2012; Hinton et al., 2012), and natural language processing (Socher et al., 2013; Graves, 2013; McCann et al., 2018). Many deep models have millions of parameters, more than the number of training samples, yet the models still generalize well (Huang et al., 2017). On the other hand, classical learning theory suggests that the generalization capability of a model is closely related to the "complexity" of the hypothesis space. This seems to contradict the empirical observation that over-parameterized models generalize well on test data. Indeed, even if the hypothesis space is complex, the final solution learned from a given training set may still be simple. For example, suppose the hypothesis space is the union of the linear classifiers and some complex function spaces. As a union the hypothesis space is complex in the worst case, but for some training sets the best solution may be a linear classifier. This suggests that the generalization capability of a model is also related to the properties of the solution itself.
Keskar et al. (2016) and Chaudhari et al. (2016) empirically observe that the generalization ability of a model is related to the spectrum of the Hessian matrix evaluated at the solution, and that large eigenvalues of the Hessian often lead to poor model generalization. Keskar et al. (2016), Chaudhari et al. (2016), and Novak et al. (2018b) also introduce several different metrics to measure the "sharpness" of the solution, and empirically demonstrate the connection between the sharpness metric and generalization. Dinh et al. (2017) later point out that most of the Hessian-based sharpness measures are problematic and cannot be applied directly to explain generalization. In particular, they show that the geometry of the parameters in a RELU-MLP can be modified drastically by reparameterization.
Another line of work originates from the theorists. Langford and Caruana (2001), and more recently Harvey et al. (2017), Neyshabur et al. (2017a), and Neyshabur et al. (2017b), use PAC-Bayes bounds to analyze the generalization behavior of deep models. Since the PAC-Bayes bound holds uniformly for all "posteriors", it also holds for particular "posteriors", for example, the solution parameter perturbed with noise. This provides a natural way to incorporate local properties of the solution into the generalization analysis. In particular, Neyshabur et al. (2017a) suggest using the difference between the perturbed loss and the empirical loss as the sharpness metric. Dziugaite and Roy (2017) instead optimize the PAC-Bayes bound directly for better model generalization. Still, some fundamental questions remain unanswered. In particular we are interested in the following question:
How is model generalization related to local “smoothness” of a solution?
In this paper we try to answer this question from the PAC-Bayes perspective. Under mild assumptions on the Hessian of the loss function, we prove that the generalization error of the model is related to the Hessian at the solution, the Lipschitz constant of the Hessian, the scales of the parameters, and the number of training samples. The analysis also gives rise to a new metric for generalization. Based on this, we can approximately select an optimal perturbation level to aid generalization, which interestingly turns out to be related to the Hessian as well. Inspired by this observation, we propose a perturbation-based algorithm that uses an estimate of the Hessian to improve model generalization.
2 Sharp Minimum vs. Flat Minimum: A Toy Example
Let us start with a toy example to demonstrate the different behaviors of local optima. For training, we construct a small two-dimensional sample set from a mixture of Gaussians, and then binarize the labels by thresholding them at their median value. The sample distribution is shown in Figure 1(b). We then use a small MLP model with sigmoid activations and the cross-entropy loss for training and prediction. The variables from different layers are shared so that the model has only two free parameters. The model is trained on the samples. Fixing the samples, we plot the loss function with respect to the two model variables, as shown in Figure 1(a). Many local optima are observed even in this simple two-dimensional toy example: in particular a sharp one, marked by the vertical green line, and a flat one, marked by the vertical red line. The colors on the loss surface display the values of the generalization metric scores (pacGen), which we define in Section 7. A smaller metric value indicates better generalization power.
As displayed in the figure, the metric score around the global optimum, indicated by the vertical green bar, is high, suggesting possibly poor generalization capability compared to the local optimum indicated by the red bar. We also plot a plane at the bottom of the figure. The color projected on the bottom plane indicates an approximated generalization bound, which accounts for both the loss and the generalization metric.^1 The local optimum indicated by the red bar, though it has a slightly higher loss, has a similar overall bound compared to the "sharp" global optimum.

^1 The bound was approximated using inequality (13).
On the other hand, fixing the two parameters, we may also plot the labels predicted by the model given the samples. Here we plot the predictions from both the sharp minimum (Figure 1(c)) and the flat minimum (Figure 1(d)). The sharp minimum, even though it approximates the true labels better, has some complex structures in its predicted labels, while the flat minimum produces a simpler classification boundary.
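The toy setup described above can be sketched as follows. This is a hypothetical reconstruction, not the paper's exact construction: the mixture centers, the score used for binarization, and the particular way the two parameters are shared across layers are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: 2-D samples from a mixture of Gaussians, with labels
# binarized by thresholding a 1-D score at its median.
centers = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
X = np.vstack([c + 0.6 * rng.normal(size=(25, 2)) for c in centers])
score = X[:, 0] + X[:, 1]
y = (score > np.median(score)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(a, b):
    """Cross-entropy of a tiny MLP whose layers share the two parameters a, b."""
    h = sigmoid(a * X[:, 0] + b * X[:, 1])   # first layer (shared weights)
    p = sigmoid(a * h + b)                    # second layer reuses (a, b)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# Scan the two-dimensional loss surface; multiple local optima can appear
# even in this tiny example.
grid = np.linspace(-8.0, 8.0, 81)
surface = np.array([[loss(a, b) for b in grid] for a in grid])
print(surface.shape, surface.min())
```

Plotting `surface` as a 3-D surface over the grid reproduces the kind of landscape the figure describes, with both sharp and flat basins.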
While it is easy to make observations on toy examples, it is less straightforward to make quantitative statements when the model parameters and the number of training samples grow. In the following sections we try to connect the local smoothness of the solution with the model's generalization capability. Section 3 briefly introduces some preliminaries from learning theory. Section 4 presents the assumptions and the intuition for how model perturbation relates to generalization and to the Hessian at the solution. Section 5 dives into two specific types of perturbations: uniform and truncated Gaussian. Section 6 discusses the effect of reparameterization on the proposed bound. Empirical approximations and experiments are presented in Sections 7 and 8.
3 Model Generalization Theory
We consider the general machine learning scenario. Suppose we have a labeled data set $S = \{(x_i, y_i)\}_{i=1}^m$, where the $(x_i, y_i)$ are sampled i.i.d. from a distribution $\mathcal{D}$. We try to learn a function $f_w$, parameterized by $w$, such that the expected loss $L(w) = \mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(f_w(x), y)]$ is small, where $\ell$ is the loss function.
Since we do not know the distribution $\mathcal{D}$, the expected loss is hard to compute directly. Instead, the empirical loss $\hat{L}(w) = \frac{1}{m}\sum_{i=1}^m \ell(f_w(x_i), y_i)$ is usually evaluated during the training procedure.
3.1 Rademacher Complexity
Minimizing the empirical loss may lead to issues such as overfitting. In general, by the law of large numbers, for a fixed function $f$ the empirical loss converges almost surely to the expected loss. However, when $f$ is not fixed, i.e., it depends on the samples, and the number of samples is finite, classical learning theory suggests that the gap between the expected loss and the empirical loss is bounded by the sum of the Rademacher complexity and a concentration tail (Shalev-Shwartz and Ben-David, 2014). The Rademacher complexity is defined as

$\mathcal{R}_m(\mathcal{F}) = \mathbb{E}_{S,\sigma}\left[\sup_{f\in\mathcal{F}} \frac{1}{m}\sum_{i=1}^m \sigma_i f(x_i)\right],$

where the $\sigma_i$ are i.i.d. Rademacher random variables.
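The definition can be estimated by Monte Carlo for a finite hypothesis class, which makes the dependence on the richness of the class concrete. This is an illustrative sketch, not from the paper; the function names and the finite-class restriction are assumptions.

```python
import numpy as np

def empirical_rademacher(function_outputs, n_trials=1000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity.

    function_outputs: array of shape (n_functions, m), holding f(x_i) for each
    function f in a *finite* hypothesis class, evaluated on the m samples.
    """
    rng = np.random.default_rng(seed)
    n_functions, m = function_outputs.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)        # Rademacher variables
        # sup over the class of the correlation with the random signs
        total += np.max(function_outputs @ sigma) / m
    return total / n_trials

rng2 = np.random.default_rng(1)
rich = rng2.choice([-1.0, 1.0], size=(50, 40))         # 50 random sign functions
single = np.zeros((1, 40))                             # one constant function
r_rich = empirical_rademacher(rich, n_trials=300)
r_single = empirical_rademacher(single, n_trials=300)
print(r_rich, r_single)
```

The richer class correlates better with random labelings, so its estimated complexity is larger, which is exactly the mechanism behind the complexity-based generalization gap.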
Note that the Rademacher complexity depends only on the function space $\mathcal{F}$, the sample distribution, and the number of samples $m$. This suggests that when the function class is very complex, the gap between the empirical loss and the expected loss can be large. Though learning theory based on Rademacher complexity can explain the overfitting effect to some extent (for example, when the hypothesis space is overly complex, generalization tends to be worse), it cannot easily explain some well-known empirical observations in today's deep learning experiments, including:

Overparameterization. The hypothesis space of a deep network can easily become rich enough to represent any function on a finite sample set (Zhang et al., 2017). According to the bound based on the Rademacher complexity, the network should tend to overfit; empirically, however, these deep models generalize well.

Different generalization behaviors at different local optima. The generalization bound based on Rademacher complexity holds uniformly for all hypotheses in the function class, so it does not distinguish the generalization capabilities of different solutions. Obviously, there can be "simple" solutions even if the whole function space is complex.
In this draft we focus on the second empirical observation and give, to the best of our knowledge, a first explanation of the behaviors of different local optima.
3.2 PACBayes
Another line of theory discussing model generalization is PAC-Bayes (McAllester, 2003; McAllester, 1998; McAllester, 1999; Langford and Shawe-Taylor, 2002). The PAC-Bayes paradigm further assumes probability measures over the function class. In particular, it assumes a "posterior" distribution $Q$ as well as a "prior" distribution $\pi$ over the function class $\mathcal{F}$. The function $f$ is then assumed to be sampled from the "posterior" $Q$. As a consequence, the expected loss is taken over both the random draw of samples and the random draw of functions, $L(Q) = \mathbb{E}_{f\sim Q}[L(f)]$. Correspondingly, the empirical loss in the PAC-Bayes paradigm is the expected loss over the draw of functions from the posterior, $\hat{L}(Q) = \mathbb{E}_{f\sim Q}[\hat{L}(f)]$.
PAC-Bayes theory suggests the gap between the expected loss and the empirical loss is bounded by a term related to the KL divergence between $Q$ and $\pi$ (McAllester, 1999; Langford and Shawe-Taylor, 2002). In particular, suppose the function is parameterized as $f_w$ with parameters $w$. When $w$ is perturbed around any $w^*$, we have the following PAC-Bayes bound (Seldin et al., 2012; Seldin et al., 2011; Neyshabur et al., 2017a; Neyshabur et al., 2017b):
[PAC-Bayes-Hoeffding Perturbation] Suppose the loss $\ell \in [0,1]$, and let $\pi$ be any fixed distribution over the parameters. For any $\delta > 0$ and $\lambda > 0$, with probability at least $1-\delta$ over the draw of $m$ samples, for any $w^*$ and any random perturbation $u$,

$\mathbb{E}_u[L(w^* + u)] \le \mathbb{E}_u[\hat{L}(w^* + u)] + \frac{\mathrm{KL}(w^* + u \,\|\, \pi) + \log\frac{1}{\delta}}{\lambda} + \frac{\lambda}{8m}. \quad (1)$
One may further optimize $\lambda$ to get a bound that scales approximately as $O(\sqrt{\mathrm{KL}/m})$ (Seldin et al., 2011).^2 A nice property of the perturbation bound (1) is that it connects generalization with the local properties around the solution through the perturbation $u$ around $w^*$. In particular, suppose $w^*$ is a local optimum. When the perturbation level of $u$ is small, the perturbed empirical loss $\mathbb{E}_u[\hat{L}(w^* + u)]$ tends to be small, but the KL term may be large since the posterior is too "focused" on a small neighborhood around $w^*$, and vice versa. As a consequence, we may search for an "optimal" perturbation level for $u$ so that the bound is minimized.

^2 Since $\lambda$ cannot depend on the data, one has to build a grid and use the union bound.
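The tradeoff between the perturbed empirical loss and the KL term can be made concrete with a one-dimensional numerical sketch. This is an illustration of the mechanism, not the paper's bound: the quadratic loss, the Gaussian posterior/prior, and the scales are all assumed.

```python
import numpy as np

# Illustrative sketch (not equation (1)): for a 1-D quadratic empirical loss
# 0.5 * h * w^2 around w* = 0, a Gaussian perturbation u ~ N(0, s^2) gives
# E_u[loss(w* + u)] = 0.5 * h * s^2, while the KL of a N(0, s^2) posterior
# against a N(0, s0^2) prior grows as s shrinks.
def kl_gauss(s, s0):
    return np.log(s0 / s) + (s**2) / (2.0 * s0**2) - 0.5

def bound(s, h=10.0, s0=1.0, m=1000):
    perturbed_loss = 0.5 * h * s**2            # "sharpness" side of the tradeoff
    return perturbed_loss + kl_gauss(s, s0) / m  # "generalization" side

sigmas = np.linspace(1e-3, 1.0, 2000)
vals = np.array([bound(s) for s in sigmas])
best = sigmas[int(np.argmin(vals))]
print(best)   # an intermediate perturbation level minimizes the tradeoff
```

Neither extreme is optimal: a tiny `s` makes the KL term dominate, a large `s` makes the perturbed loss dominate, so the minimizer sits strictly inside the range.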
4 Local Smoothness Assumptions
Keskar et al. (2016) investigate the local structure of the converged points of deep networks, and find empirically that the "sharpness" of the minima is closely related to the generalization property of the classifier. The sharp minimizers, which led to lack of generalization ability, are characterized by a significant number of large positive eigenvalues in the Hessian $\nabla^2 \hat{L}(w)$. In particular, they propose a local sharpness metric:

[Sharpness Metric] (Keskar et al., 2016) Given $w \in \mathbb{R}^n$, $\epsilon > 0$, and $A \in \mathbb{R}^{n \times p}$, the sharpness of $\hat{L}$ at $w$ is defined as

$\phi_w(\epsilon, A) = \frac{\max_{z \in C_\epsilon} \hat{L}(w + Az) - \hat{L}(w)}{1 + \hat{L}(w)} \times 100, \quad (2)$

where $C_\epsilon = \{z \in \mathbb{R}^p : |z_i| \le \epsilon(|(A^+ w)_i| + 1)\}$, and $A^+$ is the pseudo-inverse of $A$. Other variants of model generalization metrics are proposed by Chaudhari et al. (2016) and Novak et al. (2018b).
Neyshabur et al. (2017a) suggest an "expected sharpness" based on the PAC-Bayes bound:

$\mathbb{E}_{u \sim \mathcal{N}(0, \sigma^2 I)}[\hat{L}(w + u)] - \hat{L}(w). \quad (3)$

They also point out that sharpness by itself may not be enough to determine the generalization capability; combined with the scales of the parameters, however, one may get control of the generalization. Similar connections are also found by Dziugaite and Roy (2017).
4.1 Smoothness Assumption over Hessian
While researchers have discovered empirically that the generalization ability of a model is related to the second-order information around the local optima, to the best of our knowledge there is no prior work connecting the Hessian matrix with model generalization rigorously. In this section we introduce our assumption about second-order smoothness, which is later used in our generalization bound.
[Hessian Lipschitz] A twice differentiable function $f$ is $\rho$-Hessian Lipschitz if

$\|\nabla^2 f(w_1) - \nabla^2 f(w_2)\| \le \rho \|w_1 - w_2\| \quad \forall\, w_1, w_2, \quad (4)$

where $\|\cdot\|$ is the operator norm.
The Hessian Lipschitz condition has been used in the numerical optimization community to model second-order smoothness (Nesterov and Polyak, 2006; Allen-Zhu and Orecchia, 2014). For deep models it could be unrealistic to assume the Hessian Lipschitz condition holds over the entire parameter space. Instead we make a local assumption:

[Local Hessian Lipschitz] The function $f$ is $\rho$-Hessian Lipschitz in a neighborhood around $w^*$ defined by two positive constants. To simplify notation, in the rest of the draft we write $\mathcal{N}(w^*)$ for this neighborhood.
4.2 Connecting Generalization and Hessian
Suppose the empirical loss function $\hat{L}$ satisfies the local Hessian Lipschitz condition. Then by a lemma of Nesterov and Polyak (2006), the perturbation of the function around a fixed point $w^*$ can be bounded by terms up to third order: for $w^* + u \in \mathcal{N}(w^*)$,

$\left|\hat{L}(w^* + u) - \hat{L}(w^*) - \nabla \hat{L}(w^*)^\top u - \tfrac{1}{2} u^\top \nabla^2 \hat{L}(w^*)\, u\right| \le \tfrac{\rho}{6}\|u\|^3. \quad (5)$
For perturbations with zero expectation, i.e., $\mathbb{E}[u] = 0$, the linear term in (5) vanishes in expectation: $\mathbb{E}[\nabla \hat{L}(w^*)^\top u] = 0$. If in addition the perturbations of different parameters are independent, the second-order term can also be simplified:

$\mathbb{E}[u^\top \nabla^2 \hat{L}(w^*)\, u] = \sum_i \mathbb{E}[u_i^2]\, \nabla^2 \hat{L}(w^*)_{i,i}, \quad (6)$

where $\nabla^2 \hat{L}(w^*)_{i,i}$ is simply the $i$-th diagonal element of the Hessian. The following lemma is straightforward given (1), (5), and (6).
Suppose the loss function $\ell \in [0,1]$. Let $\pi$ be any distribution on the parameters that is independent of the data. For any $\delta > 0$ and $\lambda > 0$, with probability at least $1-\delta$ over the draw of $m$ samples, for any $w^*$ such that $\hat{L}$ satisfies the local Hessian Lipschitz condition in $\mathcal{N}(w^*)$, and any random perturbation $u$ such that $\mathbb{E}[u] = 0$, $w^* + u \in \mathcal{N}(w^*)$ almost surely, and $u_i$ and $u_j$ are independent for any $i \ne j$, we have

$\mathbb{E}_u[L(w^* + u)] \le \hat{L}(w^*) + \frac{1}{2}\sum_i \mathbb{E}[u_i^2]\, \nabla^2 \hat{L}(w^*)_{i,i} + \frac{\rho}{6}\,\mathbb{E}[\|u\|^3] + \frac{\mathrm{KL}(w^* + u \,\|\, \pi) + \log\frac{1}{\delta}}{\lambda} + \frac{\lambda}{8m}, \quad (7)$

where $\nabla^2 \hat{L}(w^*)_{i,i}$ is the $i$-th diagonal element of $\nabla^2 \hat{L}(w^*)$.
Note that by the extrema of the Rayleigh quotient, the quadratic term on the right-hand side of inequality (5) is further bounded by

$\mathbb{E}[u^\top \nabla^2 \hat{L}(w^*)\, u] \le \lambda_{\max}\big(\nabla^2 \hat{L}(w^*)\big)\, \mathbb{E}[\|u\|^2], \quad (8)$

where $\lambda_{\max}$ denotes the largest eigenvalue. This is consistent with the empirical observations of Keskar et al. (2016) that the generalization ability of the model is related to the eigenvalues of the Hessian. Inequality (8) still holds even if the perturbations $u_i$ and $u_j$ are correlated; we add a lemma about correlated perturbations in Appendix (Lemma D).
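The simplification (6) can be checked numerically: for independent zero-mean perturbations, the expected perturbed loss is predicted by the Hessian diagonal alone, up to the third-order remainder. The test function below is an assumed quadratic-plus-quartic toy, not the paper's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed smooth test "empirical loss" with a known Hessian at w* = 0.
A = rng.normal(size=(3, 3)); A = A @ A.T + np.eye(3)     # positive definite
def loss(W):   # rows of W are parameter vectors
    return 0.5 * np.einsum('ni,ij,nj->n', W, A, W) + 0.1 * np.sum(W**4, axis=1)

sigma = np.array([0.05, 0.1, 0.02])                      # per-parameter scales
# u_i ~ U(-a_i, a_i) with a_i = sqrt(3) * sigma_i, so Var(u_i) = sigma_i^2.
U = rng.uniform(-1.0, 1.0, size=(200000, 3)) * (np.sqrt(3.0) * sigma)

mc = loss(U).mean()                                      # Monte-Carlo E[L(w* + u)]
# Second-order prediction from (6): cross terms vanish because the u_i are
# independent and zero-mean, leaving only the Hessian diagonal.
pred = 0.5 * np.sum(sigma**2 * np.diag(A))
print(mc, pred)    # the two agree up to the small third/fourth-order remainder
```

The agreement is what justifies dropping the off-diagonal Hessian entries in the bound when the perturbation coordinates are independent.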
4.3 Tradeoff between Sharpness Metric and Generalization Power
Looking at the right-hand side of inequality (7) and comparing it with (3) (Neyshabur et al., 2017a), we see that

$\frac{1}{2}\sum_i \mathbb{E}[u_i^2]\, \nabla^2 \hat{L}(w^*)_{i,i} + \frac{\rho}{6}\,\mathbb{E}[\|u\|^3] \quad (9)$

can be interpreted as a sharpness metric of the empirical loss. It is closely related to the Hessian $\nabla^2 \hat{L}(w^*)$, but it also depends on the perturbation distribution. Figure 2 shows how the Hessian affects this sharpness term when the perturbation is fixed.
The other term,

$\frac{\mathrm{KL}(w^* + u \,\|\, \pi) + \log\frac{1}{\delta}}{\lambda}, \quad (10)$

is related to the model generalization power in the original PAC-Bayes bound.
Ideally we would like both the sharpness term (9) and the KL term (10) to be small for better generalization. However, a perturbation distribution that leads to a small sharpness term generally has a large KL term for a given prior, and vice versa. As we will see in the following sections, in the end we have to trade off the two terms.
5 Bounded Perturbations
Adding noise to the model for better generalization has proven successful both empirically and theoretically (Zhu et al., 2018; Hoffer et al., 2017; Jastrzębski et al., 2017; Dziugaite and Roy, 2017; Novak et al., 2018a). Instead of only minimizing the empirical loss, Langford and Caruana (2001) and Dziugaite and Roy (2017) assume different perturbation levels on different parameters, and minimize the generalization bound given by PAC-Bayes for better model generalization. However, how the noise distribution connects to the local structure at the optimum, for example the Hessian, and how that relates to generalization power, has not been examined.
Since the assumptions in the lemma of Section 4.2 are local, the perturbation distributions of interest are necessarily bounded. In this section we investigate two special forms of perturbation, uniform and truncated Gaussian, and provide closed-form scale estimates for the perturbation levels.
5.1 Uniform Distribution
Suppose each perturbation coordinate is uniform, $u_i \sim U(-\sigma_i, \sigma_i)$. That is, the "posterior" distributions of the model parameters are uniform, and the supports may vary across parameters. We also assume the perturbed parameters are bounded.^3 If we choose the priors $\pi_i$ to be uniform with supports containing those of the posteriors, the KL divergence in (7) reduces to a sum of log-ratios of the support widths:

$\mathrm{KL}(w^* + u \,\|\, \pi) = \sum_i \log\frac{\tau_i}{\sigma_i}, \quad (11)$

where $2\tau_i$ is the width of the $i$-th prior support.

^3 One may also assume the same bound for all parameters for a simpler argument; the proof goes through in a similar way.
Note that $\mathbb{E}[u_i^2] = \sigma_i^2 / 3$ for $u_i \sim U(-\sigma_i, \sigma_i)$. We also simplify the third-order term in (7) by

$\mathbb{E}[\|u\|^3] \le \left(\sqrt{k}\, \max_i \sigma_i\right)^3,$

where we use the inequality $\|u\|_2 \le \sqrt{k}\, \|u\|_\infty$ and $k$ is the number of parameters. By the lemma of Section 4.2, we get

(12)
If we assume $\hat{L}$ is locally convex around $w^*$, so that $\nabla^2 \hat{L}(w^*)_{i,i} \ge 0$ for all $i$, we may solve for the $\sigma_i$ that minimize the right-hand side, which yields the following lemma: Suppose the loss function $\ell \in [0,1]$ and the model weights are bounded. For any $\delta > 0$ and $\lambda > 0$, with probability at least $1-\delta$ over the draw of $m$ samples, for any $w^*$ such that $\hat{L}$ is locally convex and satisfies the local Hessian Lipschitz condition in $\mathcal{N}(w^*)$,

(13)

where the $u_i$ are i.i.d. uniform perturbations as above, with the optimal levels given by

(14)
In our experiments we simply treat $\lambda$ as a hyperparameter. On the other hand, one may further build a weighted grid over $\lambda$ and optimize for the best $\lambda$ (Seldin et al., 2011). In this way we reach the following theorem: Under the conditions of Lemma 5.1, for any $\delta > 0$, with probability at least $1-\delta$ over the draw of $m$ samples, for any $w^*$ such that $\hat{L}$ is locally convex and satisfies the local Hessian Lipschitz condition in $\mathcal{N}(w^*)$,

where the $u_i$ are i.i.d. uniform perturbations, and

(15)
Please see the appendix for the details of the proof.
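The flavor of the optimal perturbation levels can be sketched numerically. This is an illustration only, not the paper's formula (14): we drop the third-order term in (7) and assume a simplified per-parameter objective, uniform-posterior sharpness plus the uniform-vs-uniform KL term.

```python
import numpy as np

# Assumed per-parameter objective (illustrative, not equation (14)):
#   sigma_i^2 * H_ii / 6  +  log(tau_i / sigma_i) / lam,
# i.e., E[u_i^2] * H_ii / 2 with E[u_i^2] = sigma_i^2 / 3, plus the KL of a
# uniform posterior against a uniform prior of half-width tau_i.
def optimal_sigma(h_diag, lam, tau, eps=1e-12):
    h = np.maximum(h_diag, eps)           # local convexity: H_ii >= 0
    sigma = np.sqrt(3.0 / (lam * h))      # stationary point of the objective
    return np.minimum(sigma, tau)         # stay inside the prior support

h_diag = np.array([50.0, 5.0, 0.5])       # sharp, medium, and flat directions
sigma = optimal_sigma(h_diag, lam=1000.0, tau=np.full(3, 1.0))
print(sigma)   # flatter directions tolerate larger perturbations
```

Even in this simplified form, the key qualitative property of (14) is visible: the optimal perturbation level of a parameter shrinks as the corresponding Hessian diagonal element grows.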
5.2 Truncated Gaussian
Because the Gaussian distribution is unbounded while the lemma of Section 4.2 requires bounded perturbation, we first truncate the distribution. The truncation procedure is similar to the proofs in (Neyshabur et al., 2017b) and (McAllester, 2003). Let $u \sim \mathcal{N}(0, \Sigma)$, where $\Sigma$ is a diagonal covariance matrix, and denote the truncated Gaussian by $\tilde{u}$. Then
(16) 
If the truncation thresholds are set via the inverse Gaussian error function, then by the union bound the probability that any coordinate of $u$ falls outside its truncation range is small. Here $\mathrm{erf}^{-1}$ is the inverse Gaussian error function, and $k$ is the number of parameters. Following a procedure similar to the proof of Lemma 1 in (Neyshabur et al., 2017b),
(17) 
Suppose the coefficients are bounded by a constant. Choosing the prior $\pi$ to be Gaussian, we have
(18) 
Notice that truncation only decreases the variance, so the bound (7) for the truncated Gaussian becomes

(19)

Again, when $\hat{L}$ is convex around $w^*$ so that the diagonal Hessian elements are non-negative, solving for the best $\Sigma$ gives the following lemma:
Suppose the loss function $\ell \in [0,1]$ and the model weights are bounded. For any $\delta > 0$ and $\lambda > 0$, with probability at least $1-\delta$ over the draw of $m$ samples, for any $w^*$ such that $\hat{L}$ is convex and satisfies the local Hessian Lipschitz condition in $\mathcal{N}(w^*)$,
(20) 
where the $u_i$ are distributed as a truncated Gaussian with the optimal variances

(21)

and $\nabla^2 \hat{L}(w^*)_{i,i}$ is the $i$-th diagonal element of $\nabla^2 \hat{L}(w^*)$.
Again we have an extra parameter $\lambda$, which may be further optimized over a grid to get a tighter bound. In our algorithm we treat $\lambda$ as a hyperparameter instead.
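Sampling the bounded perturbation described above can be sketched with per-coordinate rejection. This is a hedged illustration: the truncation radius `c` below is an assumed parameter, not the threshold derived in the text via the inverse error function.

```python
import numpy as np

# Sketch: per-parameter truncated Gaussian perturbations by rejection sampling.
# Each u_i ~ N(0, sigma_i^2) is resampled until |u_i| <= c * sigma_i, so the
# perturbation stays inside a bounded neighborhood, as the local analysis
# requires. The truncation radius c is an assumed illustration parameter.
def truncated_gaussian(sigma, c=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    u = rng.normal(0.0, sigma)
    out_of_range = np.abs(u) > c * sigma
    while np.any(out_of_range):              # resample offending coordinates
        u[out_of_range] = rng.normal(0.0, sigma[out_of_range])
        out_of_range = np.abs(u) > c * sigma
    return u

sigma = np.array([0.1, 0.01, 0.5])
u = truncated_gaussian(sigma, rng=np.random.default_rng(0))
print(np.abs(u) <= 2.0 * sigma)              # all coordinates within bounds
```

Consistent with the remark that truncation only decreases the variance, the sample variance of these draws stays below the corresponding untruncated $\sigma_i^2$.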
6 On the Reparameterization of RELU-MLP
Dinh et al. (2017) point out that the spectrum of the Hessian $\nabla^2 \hat{L}(w)$ by itself is not enough to determine the generalization power. One particular example is the multi-layer perceptron with ReLU activations (RELU-MLP). For a two-layer RELU-MLP, let $W_1$ and $W_2$ denote the linear coefficients of the first and second layers. Clearly, for any $\alpha > 0$,

$f_{W_1, W_2} = f_{\alpha W_1, \alpha^{-1} W_2}. \quad (22)$
If the cross entropy (negative log likelihood) is used as the loss function, then under certain regularization conditions, if $w^*$ is the "true" parameter of the sample distribution, the change of the Hessian under reparameterization can be calculated as the outer product of the gradients; in this case

(23)
In general, our bound does not assume the loss function to be the cross-entropy loss, nor do we assume the model is a RELU-MLP. As a result we would not expect our bound to stay exactly the same under reparameterization.
On the other hand, the optimal perturbation levels in our bound scale inversely with the scaling of the parameters, so the bound changes only by a logarithmic factor. According to the lemmas of Sections 5.1 and 5.2, if we use the optimal perturbation levels on the right-hand side of the bound, the perturbation-dependent quantities all sit inside logarithmic terms. As a consequence, for a RELU-MLP, if we apply the reparameterization trick of Dinh et al. (2017), the change in the bound is small.
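The invariance (22) that drives the reparameterization argument is easy to verify directly. The network below is a minimal sketch of a two-layer ReLU MLP with assumed random weights, not the models used in the experiments.

```python
import numpy as np

# Verify the reparameterization invariance of a two-layer ReLU MLP: scaling
# the first layer by alpha and the second by 1/alpha leaves the function
# unchanged, because ReLU is positively homogeneous.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=(2,))

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)     # two-layer ReLU MLP

alpha = 10.0
out_a = f(W1, W2, x)
out_b = f(alpha * W1, W2 / alpha, x)
print(np.allclose(out_a, out_b))            # same function, different parameters
```

The same transformation rescales the per-layer curvature drastically, which is why a purely Hessian-spectrum-based sharpness measure cannot be reparameterization-invariant, while perturbation levels that scale inversely with the parameters largely cancel the effect.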
Disclaimer: Sections 7 and 8 present heuristic-based experiments and approximations; they are not rigorous.
7 An Approximate Generalization Metric
Assume $\hat{L}$ is locally convex around $w^*$, so that $\nabla^2 \hat{L}(w^*)_{i,i} \ge 0$ for all $i$. Looking at Lemma 5.1, for fixed $m$ and $\delta$ the only terms relevant to the solution are the empirical loss and the sharpness term. Substituting the optimal perturbation levels, and approximating the Hessian diagonal, we arrive at a PAC-Bayes-based generalization metric, called pacGen:^4

(24)

^4 Even though we assume local convexity in our metric, in applications we may calculate the metric at any point; when a diagonal Hessian element is negative we simply treat it as zero.
[Figure 3: pacGen as a function of epochs on MNIST for different batch sizes. SGD is used as the optimizer, with the same learning rate for all configurations. As the batch size grows, the metric gets larger; the trend is consistent with the true gap of losses.]

To calculate the metric on real-world data we need to estimate the diagonal elements of the Hessian as well as the Lipschitz constant $\rho$ of the Hessian. For efficiency we follow Adam (Kingma and Ba, 2014) and approximate the Hessian diagonal using the squared gradients, with the same exponential smoothing technique as in (Kingma and Ba, 2014).
To estimate $\rho$, we first estimate the Hessian of a randomly perturbed model,^5 and then approximate $\rho$ by the normalized difference between the two Hessian estimates.

^5 In the experiment the gradients are taken with respect to the perturbed parameters instead of the original ones, and we ignore the difference between the two.
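The Adam-style approximation of the Hessian diagonal mentioned above can be sketched as follows. This is a hedged illustration: the class name is ours, and the smoothing constant `beta` is an assumed value, not necessarily the one used in the experiments.

```python
import numpy as np

# Sketch of the Hessian-diagonal approximation: following Adam (Kingma and Ba,
# 2014), maintain an exponential moving average of the squared gradients as a
# cheap stand-in for the Hessian diagonal.
class HessianDiagEstimator:
    def __init__(self, n_params, beta=0.9):
        self.beta = beta
        self.v = np.zeros(n_params)     # running average of squared gradients
        self.t = 0                      # step count for bias correction

    def update(self, grad):
        self.t += 1
        self.v = self.beta * self.v + (1.0 - self.beta) * grad**2
        return self.v / (1.0 - self.beta**self.t)   # bias-corrected estimate

est = HessianDiagEstimator(3, beta=0.9)
for _ in range(100):
    g = np.array([2.0, 0.5, 0.1])       # pretend per-parameter gradients
    h_hat = est.update(g)
print(h_hat)                            # converges to g**2 elementwise
```

With a constant gradient, the bias-corrected estimate equals the squared gradient exactly, which is the quantity the smoothing is designed to track.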
We used the same model, without dropout, from the PyTorch MNIST example.^6 We fix the learning rate and vary the batch size for training. The gap between the test loss and the training loss, together with the metric pacGen, are plotted in Figure 3. We make the same observation as (Keskar et al., 2016): as the batch size grows, the gap between the test loss and the training loss tends to get larger. Our proposed metric shows exactly the same trend. Note that we do not use the learning-rate annealing heuristics of (Goyal et al., 2017), which enable large-batch training. Similarly, we carry out an experiment fixing the training batch size and varying the learning rate. Figure 5 shows the generalization gap and pacGen as functions of epochs. We observe that as the learning rate decreases, the gap between the test loss and the training loss increases, and the proposed metric again shows a trend similar to the actual generalization gap.

^6 https://github.com/pytorch/examples/tree/master/mnist
We also ran the same model and experiments on CIFAR-10 (Krizhevsky et al.) to demonstrate the effectiveness of the metric, and observed similar trends, as shown in Figure 4 and Figure 6.
8 A Perturbed Optimization Algorithm
The right-hand side of (1) contains the perturbed empirical loss $\mathbb{E}_u[\hat{L}(w + u)]$. This suggests that, rather than minimizing the empirical loss $\hat{L}(w)$, we should optimize the perturbed empirical loss instead for better model generalization. Adding perturbation to the model is not a new trick. Most perturbation-based methods (Zhu et al., 2018; Hoffer et al., 2017; Jastrzębski et al., 2017; Novak et al., 2018a; Khan et al., 2018) are based on heuristics, and improvements in applications have been observed empirically. Dziugaite and Roy (2017) first propose to optimize for a better perturbation level from the PAC-Bayes bound, but their bound does not make use of second-order information, and their best perturbation is not closed-form.
In this section we introduce a systematic way to perturb the model weights based on the PAC-Bayes bound. Again we use the same exponential smoothing technique as in Adam (Kingma and Ba, 2014) to estimate the Hessian diagonal. To keep the algorithm efficient, we ignore the third-order part of the bound (7) so that we do not have to estimate the Lipschitz constant of the Hessian. The details of the algorithm are presented in Algorithm 1, where we treat $\lambda$ as a hyperparameter to be optimized on the validation set.
Even though in the theoretical analysis the perturbation has zero mean, in applications a sampled perturbation will not average to zero, especially when we only run a single trial of perturbation. On the other hand, if the gradient is close to zero, the first-order term can be ignored. As a consequence, in Algorithm 1 we only perturb the parameters whose gradients have absolute values below a threshold. For efficiency we use a per-parameter estimate capturing the variation of the diagonal elements of the Hessian. We also decrease the perturbation level by a logarithmic factor as the epoch increases.
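A single step of this idea can be sketched as follows. This is a hedged illustration of the mechanism, not the paper's exact Algorithm 1: `base_scale`, `grad_threshold`, and the logarithmic decay form are assumed parameters.

```python
import numpy as np

# Sketch: before each gradient step, add noise only to parameters whose
# gradients are small, with a per-parameter scale that shrinks where the
# (approximate) Hessian diagonal is large and decays logarithmically with
# the epoch.
def perturbed_sgd_step(w, grad_fn, h_diag, lr=0.1, base_scale=0.1,
                       grad_threshold=0.01, epoch=1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    g = grad_fn(w)
    sigma = base_scale / np.sqrt(h_diag + 1e-8)    # flatter direction, larger noise
    sigma = sigma / np.log(epoch + 2.0)            # decay the level over epochs
    noise = rng.normal(0.0, sigma)
    noise[np.abs(g) > grad_threshold] = 0.0        # skip large-gradient parameters
    return w - lr * grad_fn(w + noise)             # gradient of the perturbed loss

# Usage on a toy quadratic loss 0.5 * w^T H w with a diagonal Hessian:
H = np.diag([10.0, 1.0, 0.1])
grad_fn = lambda w: H @ w
w = np.array([1.0, 1.0, 1.0])
for epoch in range(50):
    w = perturbed_sgd_step(w, grad_fn, np.diag(H), epoch=epoch + 1,
                           rng=np.random.default_rng(epoch))
print(0.5 * w @ H @ w)    # the loss still decreases under the perturbed steps
```

Gating the noise on the gradient magnitude mirrors the motivation above: near a stationary point the first-order effect of the perturbation is negligible, so the noise mainly probes curvature.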
We compare the perturbed algorithm against the original optimization method on CIFAR-10, CIFAR-100 (Krizhevsky et al.), and Tiny ImageNet.^7 The results are shown in Figure 7. We use WideResNet (Zagoruyko and Komodakis, 2016) as the prediction model.^8 The depth of the chosen model is 58, the widen-factor is set to 3, and the dropout layers are turned off. For CIFAR-10 and CIFAR-100 we use Adam with a batch size of 128; for Tiny ImageNet we use SGD with a batch size of 156, and we use the validation set as the test set. In both cases the remaining perturbation hyperparameter is set to 1e-5. We observe that the effect of the perturbation appears similar to regularization: with the perturbation, the accuracy on the training set tends to decrease, but the accuracy on the test or validation set increases.

^7 https://tinyimagenet.herokuapp.com/
^8 https://github.com/meliketoy/wide-resnet.pytorch/blob/master/networks/wide_resnet.py

9 Conclusion
We connect the smoothness of the solution with model generalization in the PAC-Bayes framework. We prove that the generalization power of a model is related to the Hessian and the smoothness of the solution, the scales of the parameters, and the number of training samples. In particular, we prove that the best perturbation level scales roughly inversely with the scale of the parameters, which mostly cancels out the scaling effect of the reparameterization suggested by Dinh et al. (2017). To the best of our knowledge, this is the first work that rigorously integrates the Hessian into model generalization, and also the first rigorous explanation of the effect of reparameterization on generalization. Based on our generalization bound, we propose a new metric for testing model generalization and a new perturbation algorithm that adjusts the perturbation levels according to the Hessian. Finally, we empirically demonstrate that the effect of our algorithm is similar to that of a regularizer in its ability to attain better performance on unseen data.
10 Acknowledgement
The authors are grateful to Tengyu Ma, James Bradbury, Yingbo Zhou, and Bryan McCann for their helpful comments and suggestions on the manuscript.
References
 Allen-Zhu and Orecchia (2014) Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. pages 1–22, 2014. URL http://arxiv.org/abs/1407.1537.
 Chaudhari et al. (2016) Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. CoRR, abs/1611.01838, 2016. URL http://arxiv.org/abs/1611.01838.
 Dinh et al. (2017) Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. 2017. URL http://arxiv.org/abs/1703.04933.
 Dziugaite and Roy (2017) Gintare Karolina Dziugaite and Daniel M. Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. 2017. URL http://arxiv.org/abs/1703.11008.
 Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.
 Graves (2013) Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.
 Harvey et al. (2017) Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In Satyen Kale and Ohad Shamir, editors, Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pages 1064–1068, Amsterdam, Netherlands, 07–10 Jul 2017. PMLR. URL http://proceedings.mlr.press/v65/harvey17a.html.
 He et al. (2014) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. CoRR, abs/1406.4729, 2014. URL http://arxiv.org/abs/1406.4729.
 Hinton et al. (2012) Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.
 Hoffer et al. (2017) Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. 2017. ISSN 10495258. URL http://arxiv.org/abs/1705.08741.

 Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 Jastrzębski et al. (2017) Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. pages 1–21, 2017. URL http://arxiv.org/abs/1711.04623.

 Karpathy et al. (2014) Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. pages 1725–1732, 2014. doi: 10.1109/CVPR.2014.223. URL https://doi.org/10.1109/CVPR.2014.223.
 Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. CoRR, abs/1609.04836, 2016. URL http://arxiv.org/abs/1609.04836.
 Khan et al. (2018) Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. Fast and scalable bayesian deep learning by weightperturbation in adam. pages 2616–2625, 2018. URL http://proceedings.mlr.press/v80/khan18a.html.
 Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://dblp.uni-trier.de/db/journals/corr/corr1412.html#KingmaB14.
 Krizhevsky et al. Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. pages 1097–1105, 2012. URL http://dl.acm.org/citation.cfm?id=2999134.2999257.
 Langford and Caruana (2001) John Langford and Rich Caruana. (Not) bounding the true error. In Advances in Neural Information Processing Systems, 2001. URL http://machinelearning.wustl.edu/mlpapers/paper_files/nips02AA54.pdf.
 Langford and Shawe-Taylor (2002) John Langford and John Shawe-Taylor. PAC-Bayes & margins. In Proceedings of the 15th International Conference on Neural Information Processing Systems, NIPS'02, pages 439–446, Cambridge, MA, USA, 2002. MIT Press. URL http://dl.acm.org/citation.cfm?id=2968618.2968674.
 Mcallester (2003) David Mcallester. Simplified pacbayesian margin bounds. In In COLT, pages 203–215, 2003.

McAllester (1998) David A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT '98, pages 230–234, New York, NY, USA, 1998. ACM. ISBN 1-58113-057-0. doi: 10.1145/279943.279989. URL http://doi.acm.org/10.1145/279943.279989.
McAllester (1999) David A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, COLT '99, pages 164–170, New York, NY, USA, 1999. ACM. ISBN 1-58113-167-4. doi: 10.1145/307400.307435. URL http://doi.acm.org/10.1145/307400.307435.
McCann et al. (2018) Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. 2018. URL https://arxiv.org/abs/1806.08730.

Mohamed et al. (2012) A. Mohamed, G. E. Dahl, and G. Hinton. Acoustic modeling using deep belief networks. Trans. Audio, Speech and Lang. Proc., 20(1):14–22, January 2012. ISSN 1558-7916. doi: 10.1109/TASL.2011.2109382. URL https://doi.org/10.1109/TASL.2011.2109382.
Nesterov and Polyak (2006) Yurii Nesterov and B. T. Polyak. Cubic regularization of Newton method and its global performance. Math. Program., 108(1):177–205, August 2006. ISSN 0025-5610. doi: 10.1007/s10107-006-0706-8. URL https://doi.org/10.1007/s10107-006-0706-8.
Neyshabur et al. (2017a) Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring generalization in deep learning. NIPS, 2017a. ISSN 1049-5258. URL http://arxiv.org/abs/1706.08947.
Neyshabur et al. (2017b) Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. pages 1–9, 2017b. URL http://arxiv.org/abs/1707.09564.
 Novak et al. (2018a) Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, and Jascha SohlDickstein. Sensitivity and Generalization in Neural Networks: an Empirical Study. pages 1–21, 2018a. URL http://arxiv.org/abs/1802.08760.
 Novak et al. (2018b) Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, and Jascha SohlDickstein. Sensitivity and generalization in neural networks: an empirical study. In International Conference on Learning Representations, 2018b. URL https://openreview.net/forum?id=HJC2SzZCW.

Seldin et al. (2012) Y. Seldin, F. Laviolette, and J. Shawe-Taylor. PAC-Bayesian analysis of supervised, unsupervised, and reinforcement learning, 2012.
Seldin et al. (2011) Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. CoRR, abs/1110.6886, 2011. URL http://arxiv.org/abs/1110.6886.
Shalev-Shwartz and Ben-David (2014) Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014. ISBN 1107057132, 9781107057135.
Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. pages 1631–1642, October 2013. URL http://www.aclweb.org/anthology/D13-1170.
 Zagoruyko and Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016. URL http://arxiv.org/abs/1605.07146.
 Zhang et al. (2017) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. 2017. URL https://arxiv.org/abs/1611.03530.
Zhu et al. (2018) Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, and Jinwen Ma. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from minima and regularization effects. pages 1–15, 2018. URL http://arxiv.org/abs/1803.00195.
Appendix A Proof of Lemma 5.1
We restate inequality (12) below:
(25) 
The terms related to on the right-hand side of (25) are
(26) 
By assumption, holds for all . Solving for the value that minimizes the right-hand side of (25), we have
(27) 
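The minimization over the free parameter is a standard step in PAC-Bayes proofs. As a sketch under stated assumptions (the symbols a and b below are generic placeholders for the coefficients in (26), not the paper's actual notation), when the right-hand side has the form aλ + b/λ the minimizer follows from setting the derivative to zero:

```latex
% Generic sketch; a and b stand in for the lambda-dependent coefficients
% in (26) and are not the original paper's symbols.
f(\lambda) = a\lambda + \frac{b}{\lambda}, \qquad a, b > 0,
\qquad
f'(\lambda) = a - \frac{b}{\lambda^{2}} = 0
\;\Longrightarrow\;
\lambda^{*} = \sqrt{\frac{b}{a}},
\qquad
f(\lambda^{*}) = 2\sqrt{ab}.
```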
The term on the right-hand side of (12) is monotonically increasing w.r.t. , so
(28) 
Appendix B Proof of Theorem 5.1
The following proof is similar to that of Theorem 6 in Seldin et al. (2011). Note that the in Lemma 5.1 cannot depend on the data. In order to optimize it we need to build a grid of the form
for .
For a given value of , we pick such that
where is the largest integer smaller than . Set , take a weighted union bound over the grid values with weights , and with probability at least we have
Simplifying the right-hand side completes the proof.
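The weighted-union-bound step above can be sketched as follows, assuming illustrative weights 1/(j(j+1)) (the original proof's weights are not shown in this excerpt and may differ): if the bound at grid point λ_j fails with probability at most δ/(j(j+1)), then

```latex
% Illustrative weights; the paper's actual choice may differ.
\Pr\Bigl[\exists\, j \ge 1 : \text{bound for } \lambda_j \text{ fails}\Bigr]
\;\le\; \sum_{j \ge 1} \frac{\delta}{j(j+1)}
\;=\; \delta \sum_{j \ge 1} \Bigl(\frac{1}{j} - \frac{1}{j+1}\Bigr)
\;=\; \delta,
```

so all grid bounds hold simultaneously with probability at least 1 − δ.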
Appendix C Proof of Lemma 5.2
We first restate inequality (19) below: