Deep neural networks (DNNs) have demonstrated strong generalization ability and achieved state-of-the-art performance in many application domains. This is despite their being massively over-parameterized, and despite the fact that modern networks can reach near-zero error on the training set Zhang et al. (2016). The reason for their success at generalization remains an open question.
The standard way of training DNNs involves minimizing a loss function using stochastic gradient descent (SGD) or one of its variants Bottou (1998). Since the loss functions of DNNs are typically non-convex functions of the parameters, with complex structure and potentially multiple minima and saddle points, SGD generally converges to different regions of parameter space, with different geometries and generalization properties, depending on the optimization hyper-parameters and the initialization.
Recently, several works Arpit & et al. (2017); Advani & Saxe (2017); Shirish Keskar et al. (2016) have investigated how SGD impacts generalization in DNNs. It has been argued that wide minima tend to generalize better than sharp ones Hochreiter & Schmidhuber (1997); Shirish Keskar et al. (2016). Shirish Keskar et al. (2016) empirically showed that larger batch sizes correlate with sharper minima and worse generalization performance. On the other hand, Dinh et al. (2017) demonstrate the existence of sharp minima whose predictions behave similarly to those of wide minima. We argue that, even though sharp minima with similar performance exist, SGD does not target them. Instead, at higher noise levels in the gradients it tends to find wider minima, and such wide minima found by SGD correlate with better generalization.
In this paper we find that the critical control parameter for SGD is not the batch size alone, but the ratio of the learning rate (LR) to batch size (BS), i.e. LR/BS. SGD performs similarly for different batch sizes as long as LR/BS is held constant, while higher values of LR/BS result in convergence to wider minima, which indeed seem to result in better generalization.
Our main contributions are as follows:
We note that SGD processes with the same ratio LR/BS are discretizations of the same stochastic differential equation.
We derive a relation between LR/BS and the width of the minimum found by SGD.
We verify experimentally that the dynamics are similar under rescaling of the LR and BS by the same amount. In particular, we investigate changing batch size, instead of learning rate, during training.
We verify experimentally that a larger LR/BS correlates with a wider endpoint of SGD and better generalization.
Let us consider a model parameterized by $\theta$, where the components are $\theta_i$ for $i \in \{1, \dots, q\}$, and $q$ denotes the number of parameters. For training examples $x_n$, $n \in \{1, \dots, N\}$, we define the loss function $L(\theta) = \frac{1}{N} \sum_{n=1}^{N} \ell(\theta, x_n)$, and the corresponding gradient $g(\theta) = \nabla L(\theta)$, based on the average over the loss values for all training examples.
Stochastic gradients arise when we consider a minibatch of size $S$ of random indices $\{i_1, \dots, i_S\}$ drawn uniformly from $\{1, \dots, N\}$, and form an (unbiased) estimate of the gradient based on the corresponding subset of training examples, $g^{(S)}(\theta) = \frac{1}{S} \sum_{k=1}^{S} \nabla \ell(\theta, x_{i_k})$.
We consider stochastic gradient descent with learning rate $\eta$, as defined by the update rule
$$\theta_{k+1} = \theta_k - \eta\, g^{(S)}(\theta_k), \qquad (1)$$
where $k$ indexes the discrete update steps.
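As an illustrative sketch (not the paper's code), update rule (1) can be implemented on a toy least-squares problem; the problem setup, sizes and hyper-parameters below are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: per-example loss l(theta, x_n) = 0.5*(a_n . theta - b_n)^2.
N, q = 1000, 5
A = rng.normal(size=(N, q))
theta_true = rng.normal(size=q)
b = A @ theta_true                      # realizable: zero loss at theta_true

def minibatch_grad(theta, S):
    """Unbiased gradient estimate g^(S) from S indices drawn uniformly from {1..N}."""
    idx = rng.integers(0, N, size=S)
    residual = A[idx] @ theta - b[idx]
    return A[idx].T @ residual / S

def sgd(theta0, lr=0.05, S=32, steps=2000):
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta - lr * minibatch_grad(theta, S)   # update rule (1)
    return theta

theta_hat = sgd(np.zeros(q))
print(np.linalg.norm(theta_hat - theta_true))           # close to 0
```

Because this toy problem is realizable, the gradient noise vanishes at the minimum and SGD converges exactly; in the non-realizable case the noise term analyzed below keeps the iterates diffusing around the minimum.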
2.1 SGD dynamics are determined by learning rate to batch size ratio
In this section we consider SGD as a discretization of a stochastic differential equation (SDE); in this underlying SDE, the learning rate and batch size only appear as the ratio LR/BS. In contrast to previous work (see related work, in Section 4, e.g. Mandt et al. (2017); Li et al. (2017)), we draw attention to the fact that SGD processes with different learning rates and batch sizes but the same ratio of learning rate to batch size are different discretizations of the same underlying SDE, and hence their dynamics are the same, as long as the discretization approximation is justified.
Stochastic Gradient Descent: We focus on SGD in the context of large datasets. Consider the loss gradient at a randomly chosen data point, $g^{(1)}(\theta) = \nabla \ell(\theta, x_i)$, with the index $i$ drawn uniformly at random from $\{1, \dots, N\}$. Viewed as a random variable induced by the random sampling of the data items, $g^{(1)}$ is an unbiased estimator of the gradient $g(\theta)$. For typical loss functions this estimator has finite covariance, which we denote by $C(\theta)$. In the limit of a sufficiently large dataset, each item in a batch is an independent and identically distributed (IID) sample of this estimator. For a sufficiently large batch size $S$, the minibatch gradient $g^{(S)}(\theta)$ is a mean of $S$ such IID components. Hence, under the central limit theorem, $g^{(S)}(\theta)$ is approximately Gaussian with mean $g(\theta)$ and covariance $C(\theta)/S$.
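The $C(\theta)/S$ scaling can be checked with a quick simulation; the exponential "per-example gradients" below are a stand-in assumption, chosen deliberately non-Gaussian so that the central limit effect is visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-example gradients: a fixed 1-D population with mean g and variance C.
N = 100_000
per_example = rng.exponential(scale=2.0, size=N)   # non-Gaussian on purpose
g, C = per_example.mean(), per_example.var()

def minibatch_mean(S):
    idx = rng.integers(0, N, size=S)   # sampling with replacement ~ IID draws
    return per_example[idx].mean()

ratios = []
for S in (16, 64, 256):
    est = np.array([minibatch_mean(S) for _ in range(20_000)])
    ratios.append(est.var() * S / C)   # CLT: Var(g^(S)) ~ C / S
print(ratios)                          # each entry close to 1
```

The empirical variance of the minibatch mean, multiplied by $S$, recovers $C$ for every batch size, and the minibatch means are approximately Gaussian even though the per-example population is skewed.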
Stochastic gradient descent (1) can be written as
$$\theta_{k+1} = \theta_k - \eta\, g(\theta_k) + \eta \left( g(\theta_k) - g^{(S)}(\theta_k) \right), \qquad (3)$$
where we have established that $g(\theta_k) - g^{(S)}(\theta_k)$ is an additive zero-mean Gaussian random noise with covariance $C(\theta_k)/S$. Hence we can rewrite (3) as
$$\theta_{k+1} = \theta_k - \eta\, g(\theta_k) + \frac{\eta}{\sqrt{S}}\, \epsilon_k, \qquad (4)$$
where $\epsilon_k$ is a zero-mean Gaussian random variable with covariance $C(\theta_k)$.
Stochastic Differential Equation: Consider now a stochastic differential equation (SDE) (see Mandt et al. (2017) for a different SDE which also has a discretization equivalent to SGD) of the form
$$d\theta = -g(\theta)\, dt + \sqrt{\frac{\eta}{S}}\, B(\theta)\, dW(t), \qquad (5)$$
where $W(t)$ is a standard Wiener process. In particular we use $B(\theta) B(\theta)^\top = C(\theta)$, and the eigendecomposition of $C(\theta)$ is given by $C(\theta) = V \Lambda V^\top$, for $\Lambda$ the diagonal matrix of eigenvalues and $V$ the orthonormal matrix of eigenvectors of $C(\theta)$. This SDE can be discretized using the Euler–Maruyama (EuM) method (see e.g. Kloeden & Platen (1992)) with stepsize $\Delta t = \eta$ to obtain precisely the same equation as (4).
Hence we can say that SGD implements an EuM approximation to the SDE (5). (For a more formal analysis, not requiring central limit arguments, see the alternative approach of Li et al. (2017), which also considers SGD as a discretization of an SDE; note that the learning rate to batch size ratio is not present there.) Specifically, we note that in the underlying SDE the learning rate and batch size only appear in the ratio $n := \eta/S$, which we also refer to as the stochastic noise. This implies that these are not independent variables in SGD: it is only their ratio that affects the path properties of the optimization process. The only independent effect of the learning rate is to control the stepsize of the EuM approximation, affecting only the per-batch speed at which the discrete process follows the dynamics of the SDE. There are, however, more batches in an epoch for smaller batch sizes, so the per-data-point speed is the same.
Further, when plotted on an epoch time axis, the dynamics will approximately match if the learning rate to batch size ratio is the same. This can be seen as follows: rescale both learning rate and batch size by the same amount, $\eta' = \beta \eta$ and $S' = \beta S$, for some $\beta > 0$. Note that the different discretization stepsizes, $\Delta t' = \beta \Delta t$, give a relation between iteration numbers, $k' = k/\beta$. Note also that the epoch number $n_e$ is related to the iteration number $k$ through $n_e = k S / N$. This gives $n_e' = k' S' / N = (k/\beta)(\beta S)/N = n_e$, i.e. the dynamics match on an epoch time axis.
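A minimal numerical check of this matching, on an assumed 1-D quadratic loss with synthetic gradient noise (none of these values come from the paper): two runs with the same $\eta/S$ but different individual $\eta$ and $S$ should show the same stationary spread of the iterates.

```python
import numpy as np

rng = np.random.default_rng(2)

def run(lr, S, epochs, N=10_000, h=1.0, sigma=1.0):
    """SGD on the 1-D quadratic L = h*theta^2/2 with gradient noise of variance sigma^2/S.
    Returns the spread of theta over the second half of training."""
    steps = epochs * N // S              # smaller batches -> more steps per epoch
    theta, tail = 1.0, []
    for k in range(steps):
        noisy_grad = h * theta + rng.normal(0.0, sigma / np.sqrt(S))
        theta -= lr * noisy_grad
        if k > steps // 2:
            tail.append(theta)
    return np.var(tail)

# Same lr/S ratio -> discretizations of the same SDE -> matching statistics:
v1 = run(lr=0.01, S=10, epochs=100)
v2 = run(lr=0.05, S=50, epochs=100)
print(v1, v2)    # both close to lr*sigma^2/(2*S*h) = 5e-4
```

The run with the larger batch takes $\beta$ times fewer iterations per epoch, but measured per epoch the two processes agree, as the argument above predicts.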
Relaxing the Central Limit Theorem Assumption: We note here as an aside that above we provided an intuitive analysis in terms of a central limit theorem argument, but this can be relaxed by taking a more formal analysis through computing
$$\mathrm{Cov}\left( g^{(S)}(\theta) \right) = \left( \frac{1}{S} - \frac{1}{N} \right) K(\theta),$$
where $K(\theta)$ is the sample covariance matrix of the per-example gradients over the training set. This was shown in e.g. (Junchi Li & et al., 2017), but a similar result was also found earlier in (Hoffer & et al.). By taking the limit of a batch size small compared to the training set size, $S \ll N$, one achieves $\mathrm{Cov}(g^{(S)}) \approx K(\theta)/S$, which is the same as the central limit theorem result, with the sample covariance matrix $K(\theta)$ approximating $C(\theta)$, the covariance of $g^{(1)}$.
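The $(1/S - 1/N)$ factor can be verified numerically in one dimension by treating a fixed vector as the population of per-example gradients (a stand-in assumption) and sampling minibatches without replacement.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D stand-in for per-example gradients; K is their sample variance (ddof=1).
N = 500
grads = rng.normal(size=N)
K = grads.var(ddof=1)

def minibatch_grad(S):
    idx = rng.choice(N, size=S, replace=False)   # minibatch without replacement
    return grads[idx].mean()

S = 100
est = np.array([minibatch_grad(S) for _ in range(50_000)])
predicted = (1.0 / S - 1.0 / N) * K   # Var(g^(S)) = (1/S - 1/N) K
ratio = est.var() / predicted
print(ratio)                          # close to 1
```

With $S = N/5$ the finite-population correction is a 20% effect, so the plain $K/S$ approximation would visibly overestimate the variance here; in the $S \ll N$ regime the two coincide.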
2.2 Learning rate to batch size ratio and the trace of the Hessian
We argue in this paper that there is a theoretical relationship between the expected loss value, the level of stochastic noise in SGD ($n = \eta/S$) and the width of the minima explored at this final stage of training. We derive that relationship in this section.
In talking about the width of a minimum, we define it in terms of the trace of the Hessian at the minimum, $\mathrm{Tr}(H)$: the lower the value of $\mathrm{Tr}(H)$, the wider the minimum. In order to derive the required relationship, we make the following assumptions about the final phase of training:
- Assumption 1
As we expect the training to have arrived in a local minimum, the loss surface can be approximated by a quadratic bowl, with minimum at zero loss (reflecting the ability of networks to fully fit the training data). Given this, the training dynamics can be approximated by an Ornstein–Uhlenbeck process. This is a similar assumption to previous papers Mandt et al. (2017); Poggio & et al. (2018).
- Assumption 2
The covariance of the gradients and the Hessian of the loss approximation are approximately equal, i.e. we assume $C(\theta) \approx H(\theta)$. A closeness of the Hessian and the covariance of the gradients in the practical training of DNNs has been argued before Sagun et al. (2017); Zhu et al. (2018). In Appendix A we discuss conditions under which $C = H$.
The second assumption is inspired by the recently proposed explanation by Zhu et al. (2018); Zhang et al. (2018) for the mechanism behind escaping sharp minima, where it is proposed that SGD escapes sharp minima because the covariance of the gradients is anisotropic and aligned with the structure of the Hessian.
Based on Assumptions 1 and 2, the Hessian $H$ is positive definite and matches the covariance $C$. Hence its eigendecomposition is $H = V \Lambda V^\top$, with $\Lambda$ the diagonal matrix of positive eigenvalues and $V$ an orthonormal matrix. We can reparameterize the model in terms of a new variable $z$ defined by $z = V^\top (\theta - \theta^*)$, where $\theta^*$ are the parameters at the minimum.
Starting from the SDE (5), and making the quadratic approximation of the loss and the change of variables, results in an Ornstein–Uhlenbeck (OU) process for $z$:
$$dz = -\Lambda z\, dt + \sqrt{\frac{\eta}{S}}\, \Lambda^{1/2}\, dW(t). \qquad (8)$$
It is a standard result that the stationary distribution of an OU process of the form (8) is Gaussian with zero mean and covariance $\mathrm{Cov}(z) = \frac{\eta}{2S} I$.
Moreover, in terms of the new parameters $z$, the expected loss can be written as
$$\mathbb{E}[L] = \mathbb{E}\left[ \tfrac{1}{2} z^\top \Lambda z \right] = \tfrac{1}{2}\, \mathrm{Tr}\left( \Lambda\, \mathrm{Cov}(z) \right) = \frac{\eta}{4S}\, \mathrm{Tr}(H), \qquad (9)$$
where the expectation is over the stationary distribution of the OU process, and the final equality follows from the expression for the OU covariance.
We see from Eq. (9) that the learning rate to batch size ratio determines the trade-off between width and expected loss associated with SGD dynamics within a minimum centred at a point of zero loss, with $\mathbb{E}[L] = \frac{\eta}{4S} \mathrm{Tr}(H)$. In the experiments which follow, we compare geometrical properties of minima with similar loss values (but different generalization properties) to empirically analyze this relationship between $\eta/S$ and $\mathrm{Tr}(H)$.
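Relation (9) can be sanity-checked by simulating discrete SGD in a diagonal quadratic bowl with gradient-noise covariance equal to the Hessian (Assumption 2); the eigenvalues and hyper-parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Quadratic bowl L(theta) = 0.5 * theta^T H theta, taken diagonal w.l.o.g.
lam = np.array([0.5, 1.0, 2.0, 4.0])       # eigenvalues of H
sqrt_lam = np.sqrt(lam)                     # so the noise has covariance C = H
lr, S = 0.01, 10

theta = np.ones(4)
losses = []
steps = 200_000
for k in range(steps):
    eps = sqrt_lam * rng.normal(size=4)                   # eps ~ N(0, H)
    theta = theta - lr * (lam * theta + eps / np.sqrt(S)) # discrete version of (8)
    if k > steps // 2:
        losses.append(0.5 * float(lam @ (theta * theta)))

predicted = lr * lam.sum() / (4 * S)        # E[L] = (eta / 4S) Tr(H), Eq. (9)
ratio = np.mean(losses) / predicted
print(ratio)                                # close to 1
```

Doubling $\eta/S$ in this simulation doubles the stationary expected loss while the per-mode variance $\eta/(2S)$ grows correspondingly, which is the width/loss trade-off the equation expresses.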
2.3 Special Case of Isotropic Covariance
In this section, we look at a case in which the assumptions differ from those of the previous section. We do not make Assumptions 1 and 2. We instead take the limit of a vanishingly small learning rate to batch size ratio, assume an isotropic covariance matrix, and allow the process to continue for exponentially long times so as to reach equilibrium.
While these assumptions are too restrictive for the analysis to be directly transferable to the practical training of DNNs, the resulting investigations are mathematically interesting and provide further evidence that the learning rate to batch size ratio is theoretically important in SGD.
Let us now assume the individual gradient covariance to be isotropic, that is $C(\theta) = \sigma^2 I$, for a constant $\sigma$.
In this special case the SDE is well known to have an analytic equilibrium distribution (the equilibrium solution is the very late time stationary, i.e. time-independent, solution with detailed balance, which means that in the stationary solution each individual transition balances precisely with its time reverse, resulting in zero probability currents; see §5.3.5 of Gardiner), given by the Gibbs–Boltzmann distribution (see for example Section 11.4 of Van Kampen (1992); the Boltzmann equilibrium distribution of an SDE related to SGD was also investigated by Heskes & Kappen (1993), but only for the online setting, i.e. a batch size of one):
$$P(\theta) = \frac{1}{Z} \exp\left( -\frac{L(\theta)}{T} \right), \qquad (10)$$
where we have used the symbol $T = \frac{\eta \sigma^2}{2S}$ in analogy with the temperature in a thermodynamical setting, and $Z$ is the normalization constant (here we assume a weak regularity condition: either the weights belong to a compact space, or the loss grows unbounded as the weights tend to infinity, e.g. by including an L2 regularization in the loss). If we run SGD for long enough then it will begin to sample from this equilibrium distribution. By inspection of equation (10), we note that at higher values of $T$ the distribution becomes more spread out, increasing the covariance of $\theta$, in line with our general findings of Section 2.2.
We consider a setup with two minima, $\theta_A$ and $\theta_B$, and ask which region SGD will most likely end up in. Starting from the equilibrium distribution in equation (10), we now consider the ratio of the probabilities $p_A / p_B$ of ending up in minima $A$ and $B$, respectively. We characterize the two minima regions by their loss values, $L_A$ and $L_B$, and their Hessians, $H_A$ and $H_B$, respectively. We take a Laplace approximation around each minimum (the Laplace approximation is a common approach used to approximate integrals, used for example by MacKay (1992) and Kass & Raftery (1995); for a minimum $\theta^*$ of $L$ with Hessian $H$, it gives $\int e^{-L(\theta)/T}\, d\theta \approx e^{-L(\theta^*)/T} (2\pi T)^{q/2} \det(H)^{-1/2}$ as $T \to 0$) to evaluate the integral of the Gibbs–Boltzmann distribution around each minimum, giving the ratio of probabilities, in the limit of small temperature,
$$\frac{p_A}{p_B} = \sqrt{\frac{\det H_B}{\det H_A}}\, \exp\left( -\frac{L_A - L_B}{T} \right). \qquad (12)$$
The first term in (12) is the square root of the ratio of the Hessian determinants, and is set by the width (volume) of the two minima. The second term is an exponential of the difference in the loss values divided by the temperature. We see that a higher temperature decreases the influence of the exponent, so that the width of a minimum becomes more important relative to its depth, with wider minima favoured at higher temperature. For a given $\sigma$, the temperature is controlled by the ratio of learning rate to batch size.
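Equation (12) can be checked against direct numerical integration of the Gibbs–Boltzmann distribution on an assumed 1-D loss with two quadratic basins; the depths and curvatures below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Basin A at -1: deeper but sharp. Basin B at +1: shallower but much wider.
L_A, H_A = 0.00, 8.0
L_B, H_B = 0.05, 0.5

def loss(theta):
    return np.minimum(L_A + 0.5 * H_A * (theta + 1.0) ** 2,
                      L_B + 0.5 * H_B * (theta - 1.0) ** 2)

def basin_prob_ratio(T):
    """p_A / p_B under exp(-L/T), splitting the grid at the barrier top."""
    grid = np.linspace(-4.0, 6.0, 200_001)
    w = np.exp(-loss(grid) / T)
    mid = (grid > -1.0) & (grid < 1.0)
    ridge = grid[mid][np.argmax(loss(grid[mid]))]   # barrier between the minima
    return w[grid < ridge].sum() / w[grid >= ridge].sum()

for T in (0.005, 0.02, 0.1):
    laplace = np.sqrt(H_B / H_A) * np.exp(-(L_A - L_B) / T)   # Eq. (12)
    print(T, basin_prob_ratio(T), laplace)
# Low T heavily favours the deep sharp minimum A; high T favours the wide minimum B.
```

As $T$ grows the exponential factor weakens and the determinant (width) factor takes over, flipping the preferred basin from the sharp, deep minimum to the wide, shallow one.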
We now present an empirical analysis motivated by the theory discussed in the previous section.
3.1 Learning dynamics of SGD depend on LR/BS
In this section we look experimentally at the approximation of SGD as an SDE given in Eq. (5), investigating how the dynamics are affected by the learning rate to batch size ratio.
We first look at the results of four experiments involving the VGG11 architecture Simonyan & Zisserman (2014) (we adapted the final fully-connected layers of VGG11 to be FC-512, FC-512, FC-10 so that it is compatible with the CIFAR10 dataset) on the CIFAR10 dataset, shown in Fig. 1 (each experiment was repeated for 5 different random initializations). The left plot compares two experimental settings: a cyclic batch size (CBS) schedule (blue) oscillating between 128 and 640 at a fixed learning rate of $\eta = 0.005$, compared to a cyclic learning rate (CLR) schedule (red) oscillating between 0.001 and 0.005 at a fixed batch size of $S = 128$, so that both schedules sweep the same range of $\eta/S$. The right plot compares two other experimental settings with the same constant learning rate to batch size ratio but different individual values of learning rate and batch size (blue versus red). We emphasize the similarity of the curves for each pair of experiments, demonstrating that the learning dynamics are approximately invariant under changes in learning rate or batch size that keep the ratio constant.
We next ran experiments with other rescalings of the learning rate when going from a small batch size to a large one, to compare them against rescaling the learning rate exactly with the batch size. In Fig. 2
we show the results from two experiments, on ResNet56 and VGG11, both trained with SGD and batch normalization on CIFAR10. In both settings the blue line corresponds to training with a small batch size of 50 and a small starting learning rate (we used an adaptive learning rate schedule, with $\eta$ dropping by a factor of 10 at epochs 60, 100, 140 and 180 for ResNet56, and by a factor of 2 every 25 epochs for VGG11). The other lines correspond to models trained with different learning rates and a larger batch size. When $\eta$ is rescaled by the same factor as $S$ (brown curve for ResNet56, red for VGG11), the learning curve matches the blue curve fairly closely. Other rescaling strategies, such as keeping the ratio $\eta/\sqrt{S}$ constant as suggested by Hoffer & et al. (green curve for ResNet56, orange for VGG11), lead to larger differences in the learning curves.
(Figure caption) 4-layer ReLU MLP on Fashion-MNIST. Higher $\eta/S$ correlates with a lower Hessian maximum eigenvalue and a lower Hessian Frobenius norm, i.e. wider minima, and with better generalization. The validation accuracy is similar for different batch sizes and different learning rates, so long as the ratio $\eta/S$ is constant.
3.2 Geometry and generalization depend on LR/BS
In this section we investigate experimentally the impact of the learning rate to batch size ratio on the geometry of the region that SGD ends in. We trained a series of 4-layer batch-normalized ReLU MLPs on Fashion-MNIST Xiao et al. (2017) with different values of $\eta/S$ (each experiment was run for a fixed number of epochs, and models reaching approximately the same high accuracy on the training set were selected). To assess the loss curvature at the end of training, we computed the largest eigenvalue of the Hessian and approximated its Frobenius norm (higher values imply a sharper minimum) using the finite difference method Wu et al. (2017); these quantities are used in place of the trace of the Hessian, because calculating multiple eigenvalues to directly approximate the trace is too computationally intensive. Fig. 2(a) and Fig. 2(b) show the values of these quantities for minima obtained by SGD for different $\eta/S$, varying both $\eta$ and $S$. As $\eta/S$ grows, the norm of the Hessian at the minimum decreases, suggesting that higher values of $\eta/S$ push the optimization towards flatter regions. Figure 2(c) shows the results from exploring the impact of $\eta/S$ on the final validation performance, which confirms that better generalization correlates with higher values of $\eta/S$. Taken together, Figs. 2(a)-(c) imply that as $\eta/S$ increases, SGD finds wider regions which correlate well with better generalization (assuming the network has enough capacity).
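The largest-eigenvalue measurement can be done without materializing the Hessian, via power iteration on finite-difference Hessian-vector products. Below is a sketch on a synthetic quadratic with a known Hessian so the estimate can be checked; `grad`, the problem sizes and the step `eps` are assumptions, and in a real network `grad_fn` would be the backprop gradient.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic test loss L = 0.5 theta^T H theta with a known SPD Hessian.
q = 20
M = rng.normal(size=(q, q))
H = M @ M.T / q
grad = lambda theta: H @ theta

def hvp_fd(grad_fn, theta, v, eps=1e-4):
    """Hessian-vector product via central finite differences of the gradient."""
    return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)

def top_eigenvalue(grad_fn, theta, iters=200):
    v = rng.normal(size=theta.size)
    v /= np.linalg.norm(v)
    for _ in range(iters):                  # power iteration on H
        hv = hvp_fd(grad_fn, theta, v)
        v = hv / np.linalg.norm(hv)
    return v @ hvp_fd(grad_fn, theta, v)    # Rayleigh quotient at convergence

theta = np.zeros(q)
est = top_eigenvalue(grad, theta)
exact = np.linalg.eigvalsh(H).max()
print(est / exact)    # close to 1
```

Each iteration costs two gradient evaluations, which is what makes this estimator practical at DNN scale where the full spectrum is not.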
In Fig. 4 we qualitatively illustrate the behavior of SGD with different $\eta/S$. We follow Goodfellow et al. (2014) by investigating the loss on the line interpolating between the parameters of two models, with interpolation coefficient $\alpha$. In Fig. 4(a,b) we consider ResNet56 models on CIFAR10 for different $\eta/S$; we see sharper regions on the right of each plot, corresponding to the lower $\eta/S$. In Fig. 4(c,d) we consider VGG11 models on CIFAR10 for the same ratio $\eta/S$ but different individual values of $\eta$ and $S$; we see the same sharpness for the same ratio. Experiments were repeated several times with different random initializations, and qualitatively similar plots were obtained.
3.3 Cyclic schedules
It has been observed that a cyclic learning rate (CLR) schedule leads to better generalization Smith (2015). We have demonstrated that one can exchange a cyclic learning rate schedule for a cyclic batch size (CBS) schedule and approximately preserve the practical benefit of CLR. This exchangeability shows that the generalization benefit of CLR must come from the varying noise level $\eta/S$, rather than from cycling the learning rate specifically. To explore why this helps generalization, we ran VGG11 on CIFAR10 using the following training schedules: a discrete cyclic learning rate, a discrete cyclic batch size, a triangular cyclic learning rate, and a baseline constant learning rate. We track throughout training the Frobenius norm of the Hessian (divided by the number of parameters), the largest eigenvalue of the Hessian, and the training loss. For each schedule we optimize both the learning rate range and the cycle length on a validation set. In all cyclic schedules the maximum value (of $\eta$ or $S$) is larger than the minimum value. Sharpness is measured at the best validation score.
The results are shown in Table 1. First, we note that all cyclic schedules lead to wider bowls (both in terms of the Frobenius norm and the largest eigenvalue) and higher loss values than the baseline. The discrete schedules lead to much wider bowls for a similar value of the loss. We also note that the discrete schedules varying either $\eta$ or $S$ perform similarly to, or slightly better than, the triangular CLR schedule. The results suggest that, by being exposed to higher noise levels, cyclic schemes reach wider endpoints at higher loss than constant learning rate schemes with the same final noise level. We leave the exploration of the implications for cyclic schedules, and a more thorough comparison with other noise schedules, for future work.
Table 1: Loss, test accuracy, and validation accuracy for each training schedule.
3.4 Impact of SGD on memorization
To generalize well, a model must identify the underlying pattern in the data instead of simply memorizing each training example. An empirical approach to testing for memorization is to analyze how well a DNN can fit a training set in which the true labels are partly replaced by random labels Zhang et al. (2016); Arpit & et al. (2017). To better characterize the practical benefit of ending in a wide bowl, we look at memorization of the training set under varying levels of the learning rate to batch size ratio. The experiments described in this section highlight that SGD with a sufficient amount of noise improves generalization after memorizing the training set.
Experiments are performed on the MNIST dataset with an MLP similar to the one used by Arpit & et al. (2017). We train the MLP with different amounts of random labels in the training set. For each level of label noise, we evaluate the impact of $\eta/S$ on the generalization performance. Specifically, we run experiments with $\eta$ and $S$ taking values on a grid of batch sizes, learning rates and momentum values. Fig. 6 reports the MLPs' performance on both the noisy training set and the validation set after memorizing the training set (defined here as reaching approximately perfect accuracy on the random labels). The results show that larger noise in SGD (regardless of whether it is induced by a smaller batch size or a larger learning rate) leads to solutions which generalize better after having memorized the training set. Additionally, as in the previous sections, we observe a strong correlation of the Hessian norm with $\eta/S$. We highlight that SGD with low noise steers the endpoint of optimization towards a minimum with low generalization ability.
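The label-randomization protocol can be sketched as follows; `corrupt_labels` is a hypothetical helper for illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def corrupt_labels(y, frac, num_classes=10):
    """Replace a fraction `frac` of labels with uniformly random class labels."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y[idx] = rng.integers(0, num_classes, size=len(idx))
    return y

y = rng.integers(0, 10, size=10_000)
y_noisy = corrupt_labels(y, 0.5)
frac_changed = (y != y_noisy).mean()
print(frac_changed)   # close to 0.45 = 0.5 * (1 - 1/10)
```

Note that a replaced label coincides with the original one tenth of the time, so a corruption fraction $p$ actually flips about $0.9p$ of the labels; a model fitting the corrupted set perfectly must memorize those flipped examples.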
3.5 Breakdown of scaling
We expect discretization errors to become important when the learning rate gets large, and we expect the central limit theorem approximation to break down for batch sizes that are large relative to the dataset size.
We show this experimentally in Fig. 5, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor, up to a certain limit (experiments are repeated 5 times with different random seeds; the graphs denote the mean validation accuracies, the numbers in brackets denote the mean and standard deviation of the maximum validation accuracy across runs, and * denotes that at least one seed diverged). This is done for a smaller training set size in Fig. 5(a) than in (b). As expected from our approximations, the curves no longer match when the rescaling factor gets too large.
4 Related work
The analysis of SGD as an SDE is well established in the stochastic approximation literature, see e.g. Ljung et al. (1992) and Kushner & Yin. It was shown by Li et al. (2017) that SGD can be approximated by an SDE in an order-one weak approximation. However, batch size does not enter their analysis. In contrast, our analysis makes the role of batch size evident and shows that the dynamics are set by the ratio of learning rate to batch size. Junchi Li & et al. (2017) reproduce the SDE result of Li et al. (2017) and further show that the covariance matrix of the minibatch gradient scales inversely with the batch size (approximately, in the limit of a batch size small compared to the training set size) and proportionally to the sample covariance matrix over all examples in the training set. Mandt et al. (2017) approximate SGD by a different SDE and show that SGD can be used as an approximate Bayesian posterior inference algorithm. In contrast, we show that the ratio of learning rate to batch size influences the width of the minima found by SGD. We then explore each of these aspects experimentally, linking also to generalization.
Many works have used stochastic gradients to sample from a posterior, see e.g. Welling & Teh (2011), using a decreasing learning rate to correctly sample from the actual posterior. In contrast, we consider SGD with a fixed learning rate and our focus is not on applying SGD to sample from the actual posterior.
Our work is closely related to the ongoing discussion about how batch size affects sharpness and generalization; we extend this discussion by investigating the impact of both batch size and learning rate on sharpness and generalization. Shirish Keskar et al. (2016) showed empirically that SGD ends up in a sharp minimum when using a large batch size. Hoffer & et al. rescale the learning rate with the square root of the batch size, and train for more epochs, to reach the same generalization with a large batch size. The empirical analysis of Goyal & et al. (2017) demonstrated that rescaling the learning rate linearly with the batch size can result in the same generalization. Our work theoretically explains this empirical finding, and extends the experimental results on it.
Anisotropic noise in SGD was studied in Zhu et al. (2018). It was found that the gradient covariance matrix is approximately the same as the Hessian, late on in training. In the work of Sagun et al. (2017), the Hessian is also related to the gradient covariance matrix, and both are found to be highly anisotropic. In contrast, our focus is on the importance of the scale of the noise, set by the learning rate to batch size ratio.
Concurrent with this work, Smith & Le (2017) derive an analytical expression for the stochastic noise scale and – based on the trade-off between depth and width in the Bayesian evidence – find an optimal noise scale for optimizing the test accuracy. Chaudhari & Soatto (2017) explored the stationary non-equilibrium solution for the SDE for non-isotropic gradient noise.
In contrast to these concurrent works, our emphasis is on how the learning rate to batch size ratio relates to the width of the minima sampled by SGD. We show theoretically that different SGD processes with the same ratio are different discretizations of the same underlying SDE and hence follow the same dynamics. Further, their learning curves will match under simultaneous rescaling of the learning rate and batch size when plotted on an epoch time axis. We also show that at the end of training, the learning rate to batch size ratio affects the width of the regions that SGD ends in, and empirically verify that the width of the endpoint region correlates with the learning rate to batch size ratio in practice.
In this paper we investigated the relation between the learning rate, the batch size and the properties of the final minima. By approximating SGD as an SDE, we found that the learning rate to batch size ratio controls the SGD dynamics by scaling the stochastic noise. Furthermore, under the discussed assumption relating the covariance of the gradients to the Hessian, this ratio is a key determinant of the width of the minima found by SGD. The learning rate, the batch size and the covariance of gradients, through its link to the Hessian, are thus three factors influencing the final minima.
We experimentally explored this relation using a range of DNN models and datasets, finding approximate invariance under rescaling of learning rate and batch size, and finding that the ratio of learning rate to batch size correlates with width and generalization: a higher ratio leads to wider minima and better generalization. Finally, our experiments suggest that schedules with a changing batch size during training are a viable alternative to a changing learning rate.
Acknowledgements We thank Agnieszka Pocha, Jason Jo, Nicolas Le Roux, Mike Rabbat, Leon Bottou, and James Griffin for discussions. We thank NSERC, Canada Research Chairs, IVADO and CIFAR for funding. SJ was in part supported by Grant No. DI 2014/016644 from Ministry of Science and Higher Education, Poland and ETIUDA stipend No. 2017/24/T/ST6/00487 from National Science Centre, Poland. We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732204 (Bonseyes). This work is supported by the Swiss State Secretariat for Education‚ Research and Innovation (SERI) under contract number 16.0159. The opinions expressed and arguments employed herein do not necessarily reflect the official views of these funding bodies.
- Advani & Saxe (2017) M. S. Advani and A. M. Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
- Amari (1998) Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Comput., 10(2):251–276, February 1998. ISSN 0899-7667. doi: 10.1162/089976698300017746. URL http://dx.doi.org/10.1162/089976698300017746.
- Arpit & et al. (2017) D. Arpit and et al. A closer look at memorization in deep networks. In ICML, 2017.
- Bottou (1998) L. Bottou. Online learning and stochastic approximations. On-line learning in neural networks, 17(9):142, 1998.
- Chaudhari & Soatto (2017) P. Chaudhari and S. Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv:1710.11029, 2017.
- Dinh et al. (2017) L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp Minima Can Generalize For Deep Nets. ArXiv e-prints, 2017.
- (7) C. Gardiner. Stochastic Methods: A Handbook for the Natural and Social Sciences. Springer Series in Synergetics. ISBN 9783642089626.
- Goodfellow et al. (2014) I. J. Goodfellow, O. Vinyals, and A. M. Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.
- Goyal & et al. (2017) P. Goyal and et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. ArXiv e-prints, 2017.
- Heskes & Kappen (1993) T. M. Heskes and B. Kappen. On-line learning processes in artificial neural networks. volume 51 of North-Holland Mathematical Library, pp. 199 – 233. Elsevier, 1993. doi: https://doi.org/10.1016/S0924-6509(08)70038-2. URL http://www.sciencedirect.com/science/article/pii/S0924650908700382.
- Hochreiter & Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
- (12) E. Hoffer and et al. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. ArXiv e-prints, arxiv:1705.08741.
- Junchi Li & et al. (2017) C. Junchi Li and et al. Batch Size Matters: A Diffusion Approximation Framework on Nonconvex Stochastic Gradient Descent. ArXiv e-prints, 2017.
- Kass & Raftery (1995) R. E. Kass and A. E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773–795, 1995. doi: 10.1080/01621459.1995.10476572. URL http://amstat.tandfonline.com/doi/abs/10.1080/01621459.1995.10476572.
- Kloeden & Platen (1992) Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. Springer, 1992. ISBN 978-3-662-12616-5.
- Kushner & Yin H. Kushner and G. G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Stochastic Modelling and Applied Probability. Springer. ISBN 9781489926968.
- Li et al. (2017) Q. Li, C. Tai, and Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. In Proceedings of the 34th ICML, 2017.
- Ljung et al. (1992) L. Ljung, G. Pflug, and H. Walk. Stochastic Approximation and Optimization of Random Systems. Birkhäuser, 1992.
- MacKay (1992) D. J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.3.448.
- Mandt et al. (2017) S. Mandt, M. D. Hoffman, and D. M. Blei. Stochastic gradient descent as approximate Bayesian inference. Journal of Machine Learning Research, 18:134:1–134:35, 2017.
- Martens (2014) James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
- Poggio & et al. (2018) T. Poggio and et al. Theory of Deep Learning III: explaining the non-overfitting puzzle. ArXiv e-prints, arXiv:1801.00173, 2018.
- Sagun et al. (2017) L. Sagun, U. Evci, V. Ugur Guney, Y. Dauphin, and L. Bottou. Empirical Analysis of the Hessian of Over-Parametrized Neural Networks. ArXiv e-prints, 2017.
- Saxe et al. (2018) Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, and David Daniel Cox. On the information bottleneck theory of deep learning. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ry_WPG-A-.
- Shirish Keskar et al. (2016) N. Shirish Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. ArXiv e-prints, 2016.
- Shwartz-Ziv & Tishby (2017) R. Shwartz-Ziv and N. Tishby. Opening the Black Box of Deep Neural Networks via Information. ArXiv e-prints, March 2017.
- Simonyan & Zisserman (2014) K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Smith (2015) L. N. Smith. Cyclical Learning Rates for Training Neural Networks. ArXiv e-prints, 2015.
- Smith & Le (2017) S.L. Smith and Q.V. Le. Understanding generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
- Van Kampen (1992) N.G. Van Kampen. Stochastic Processes in Physics and Chemistry. North-Holland Personal Library. Elsevier Science, 1992. ISBN 9780080571386. URL https://books.google.co.uk/books?id=3e7XbMoJzmoC.
- Welling & Teh (2011) M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th ICML, pp. 681–688, 2011.
- Wu et al. (2017) Lei Wu, Zhanxing Zhu, et al. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239, 2017.
- Xiao et al. (2017) H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. ArXiv e-prints, 2017.
- Zhang et al. (2016) C. Zhang, S. Bengio, M. Hardt, B. Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
- Zhang et al. (2018) Yao Zhang, Andrew M. Saxe, Madhu S. Advani, and Alpha A. Lee. Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning. CoRR, abs/1803.01927, 2018. URL http://arxiv.org/abs/1803.01927.
- Zhu et al. (2018) Z. Zhu, J. Wu, B. Yu, L. Wu, and J. Ma. The Regularization Effects of Anisotropic Noise in Stochastic Gradient Descent. ArXiv e-prints, 2018.
Appendix A When Covariance is Approximately the Hessian
In this appendix we describe conditions under which the gradient covariance matrix $C$ can be approximately the same as the Hessian $H$.
The covariance matrix $C$ can be approximated by the sample covariance matrix $K(\theta)$, defined in (7). Define the mean gradient
$$\bar{g}(\theta) = \frac{1}{N}\sum_{n=1}^{N} \nabla_\theta L_n(\theta),$$
and the expectation of the squared norm of the gradient
$$\mathbb{E}\left[\|\nabla_\theta L_n(\theta)\|^2\right] = \frac{1}{N}\sum_{n=1}^{N} \|\nabla_\theta L_n(\theta)\|^2.$$
In (Saxe et al., 2018; Shwartz-Ziv & Tishby, 2017) (see also (Zhu et al., 2018), who confirm this), it is shown that the squared norm of the mean gradient is much smaller than the expected squared norm of the gradient:
$$\|\bar{g}(\theta)\|^2 \ll \mathbb{E}\left[\|\nabla_\theta L_n(\theta)\|^2\right].$$
From this we have that the outer product of the mean gradient is negligible,
$$\bar{g}(\theta)\,\bar{g}(\theta)^T \approx 0,$$
and our expression for the sample covariance matrix simplifies to
$$K(\theta) \approx \frac{1}{N}\sum_{n=1}^{N} \nabla_\theta L_n(\theta)\,\nabla_\theta L_n(\theta)^T. \tag{17}$$
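This simplification can be sanity-checked numerically. The sketch below is our own construction, not code from the paper: it draws synthetic per-sample "gradients" with unit-scale spread around a tiny mean, mimicking the regime where the squared norm of the mean gradient is much smaller than the expected squared gradient norm, and compares the centered sample covariance with the uncentered second moment.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 10_000, 5
# Synthetic per-sample "gradients": unit-scale spread around a tiny mean,
# so that ||mean gradient||^2 << E[||gradient||^2].
G = rng.normal(size=(N, d)) + 0.01

g_bar = G.mean(axis=0)                       # mean gradient
second_moment = G.T @ G / N                  # (1/N) sum_n g_n g_n^T
covariance = second_moment - np.outer(g_bar, g_bar)

# Ratio of squared mean-gradient norm to expected squared gradient norm.
norm_ratio = np.linalg.norm(g_bar) ** 2 / np.mean(np.sum(G ** 2, axis=1))
# Relative size of the mean-outer-product correction.
rel_err = np.linalg.norm(covariance - second_moment) / np.linalg.norm(covariance)
print(norm_ratio)  # much smaller than 1
print(rel_err)     # the correction term is negligible
```

Under these conditions the centered and uncentered estimates coincide to within a fraction of a percent, which is why the outer-product term can be dropped.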
We follow similar notation to (Martens, 2014). Let $f(x, \theta)$ be a function mapping the neural network's input $x$ to its output. Let $\ell(y, z)$ be the loss function of an individual sample comparing target $y$ to output $z$, so we take $L_n(\theta) = \ell(y_n, f(x_n, \theta))$ for each sample $n$. Let $P_{y|x}(\theta)$ be the model distribution, and let $R_{y|z}$ be the predictive distribution used at the network output, so that $P_{y|x}(\theta) = R_{y|f(x,\theta)}$. Let $r(y|z)$ be the associated probability density, and write $p(y|x, \theta) = r(y|f(x, \theta))$ for the model's conditional density. Many probabilistic models can be formulated by taking the loss function to be
$$\ell(y, z) = -\log r(y|z).$$
Substituting this into (17) gives
$$K(\theta) \approx \frac{1}{N}\sum_{n=1}^{N} \nabla_\theta \log p(y_n|x_n, \theta)\,\nabla_\theta \log p(y_n|x_n, \theta)^T. \tag{19}$$
Conversely, the Hessian for this probabilistic model can be written as
$$H(\theta) = \frac{1}{N}\sum_{n=1}^{N} \nabla^2_\theta L_n(\theta) = \frac{1}{N}\sum_{n=1}^{N}\left[\nabla_\theta \log p(y_n|x_n, \theta)\,\nabla_\theta \log p(y_n|x_n, \theta)^T - \frac{\nabla^2_\theta\, p(y_n|x_n, \theta)}{p(y_n|x_n, \theta)}\right].$$
The first term is the same as appears in the approximation to the sample covariance matrix (19). The second term is negligible in the case where the model is realizable, i.e. the model's conditional probability distribution coincides with the training data's conditional distribution. Mathematically, when the parameter $\theta$ is close to the optimum $\theta^*$, we have $p(y|x, \theta) \approx q(y|x)$, where $q(y|x)$ denotes the data's conditional distribution. Under these conditions the model has realized the data distribution and the second term is a sample estimator of the following zero quantity:
$$\mathbb{E}_{x}\,\mathbb{E}_{y \sim p(y|x,\theta)}\left[\frac{\nabla^2_\theta\, p(y|x, \theta)}{p(y|x, \theta)}\right] = \mathbb{E}_{x}\left[\nabla^2_\theta \int p(y|x, \theta)\, dy\right] = \mathbb{E}_{x}\left[\nabla^2_\theta\, 1\right] = 0,$$
with the estimator becoming more accurate with larger $N$. Thus we have that the covariance is approximately the Hessian, $K(\theta) \approx H(\theta)$. We also note that the first term is the same as the empirical Fisher matrix. The same argument can be used (Martens, 2014) to demonstrate that the empirical Fisher matrix approximates the Hessian, and that natural gradient (Amari, 1998) close to the optimum is similar to the Newton method.
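The conclusion that the covariance approximates the Hessian near the optimum of a realizable model can also be checked numerically. The following sketch is our own illustration, not code from the paper: it fits a logistic regression to labels generated by the model class itself (so the model is realizable), then compares the empirical Fisher $(1/N)\sum_n g_n g_n^T$ with the exact Hessian of the negative log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5000, 3
X = rng.normal(size=(N, d))
w_true = np.array([1.0, -2.0, 0.5])
p_star = 1.0 / (1.0 + np.exp(-X @ w_true))
y = (rng.random(N) < p_star).astype(float)  # labels drawn from the model class: realizable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by full-batch gradient descent to get close to the optimum.
w = np.zeros(d)
for _ in range(2000):
    g = X.T @ (sigmoid(X @ w) - y) / N          # mean gradient of the NLL
    w -= 1.0 * g

p = sigmoid(X @ w)
# Per-sample gradients of L_n = -log p(y_n | x_n, w), one row per sample.
G = X * (p - y)[:, None]
empirical_fisher = G.T @ G / N                   # (1/N) sum_n g_n g_n^T
hessian = X.T @ (X * (p * (1 - p))[:, None]) / N  # exact NLL Hessian

rel_err = np.linalg.norm(empirical_fisher - hessian) / np.linalg.norm(hessian)
print(rel_err)  # small near the optimum of a realizable model
```

The residual discrepancy is finite-sample noise of order $1/\sqrt{N}$; with non-realizable labels, or far from the optimum, the gap widens because the second term in the Hessian decomposition no longer estimates zero.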