Three Factors Influencing Minima in SGD

13 November 2017 · Stanisław Jastrzębski et al.

We study the properties of the endpoint of stochastic gradient descent (SGD). By approximating SGD as a stochastic differential equation (SDE), we consider the Boltzmann-Gibbs equilibrium distribution of that SDE under the assumption of isotropic variance in the loss gradients. Through this analysis we find that three factors - the learning rate, the batch size and the variance of the loss gradients - control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. We have direct control over the learning rate and batch size, while the variance is determined by the choice of model architecture, model parameterization and dataset. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that the distribution is invariant under a simultaneous rescaling of learning rate and batch size by the same factor. We then explore experimentally how the learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest that it is invariant under simultaneous rescaling of batch size and learning rate, and that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. The dynamics also appear experimentally to be invariant under the same rescaling, which we exploit by showing that the batch size can be exchanged for the learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the invariance under simultaneous rescaling of learning rate and batch size breaks down if the learning rate becomes too large or the batch size becomes too small.


1 Introduction

Deep neural networks (DNNs) have demonstrated good generalization ability and achieved state-of-the-art performance in many application domains. This is despite their being massively over-parameterized and capable of attaining near-zero error on the training dataset Zhang et al. (2016). The reason for their success at generalization remains an open question.

The standard way of training DNNs involves minimizing a loss function using stochastic gradient descent (SGD) or one of its variants Bottou (1998). Since the loss functions of DNNs are typically non-convex functions of the parameters, with complex structure and potentially multiple minima and saddle points, SGD generally converges to different regions of parameter space, with different geometries and generalization properties, depending on the optimization hyper-parameters and the initialization.

Recently, several works Arpit et al. (2017); Advani & Saxe (2017); Shirish Keskar et al. (2016) have investigated how SGD affects generalization in DNNs. It has been argued that wide minima tend to generalize better than sharp ones Hochreiter & Schmidhuber (1997); Shirish Keskar et al. (2016). Shirish Keskar et al. (2016) showed empirically that a larger batch size correlates with sharper minima and worse generalization performance. On the other hand, Dinh et al. (2017) discuss the existence of sharp minima whose predictions behave similarly to those of wide minima. We argue that, even though sharp minima with similar performance exist, SGD does not target them. Instead, at higher noise levels in the gradients it tends to find wider minima, and the wide minima found by SGD appear to correlate with better generalization.

In this paper we find that the critical control parameter for SGD is not the batch size alone, but the ratio of the learning rate (LR) to the batch size (BS), i.e. LR/BS. SGD performs similarly across different batch sizes provided LR/BS is held constant, while higher values of LR/BS lead to convergence to wider minima, which in turn seem to generalize better.

Our main contributions are as follows:

  • We note that all SGD processes with the same LR/BS are discretizations of the same stochastic differential equation (SDE).

  • We derive a relation between LR/BS and the width of the minimum found by SGD.

  • We verify experimentally that the dynamics are similar under rescaling of the LR and BS by the same amount. In particular, we investigate changing batch size, instead of learning rate, during training.

  • We verify experimentally that a larger LR/BS correlates with a wider endpoint of SGD and better generalization.

2 Theory

Let us consider a model parameterized by \theta, with components \theta_i for i = 1, \dots, d, where d denotes the number of parameters. For N training examples x_n, n = 1, \dots, N, we define the loss function L(\theta) and the corresponding gradient g(\theta) = \nabla_\theta L(\theta), based on the average of the loss values over all training examples,

L(\theta) = \frac{1}{N} \sum_{n=1}^{N} \ell(x_n, \theta).

Stochastic gradients arise when we consider a minibatch B of size S of random indices drawn uniformly from \{1, \dots, N\} and form an (unbiased) estimate of the gradient based on the corresponding subset of training examples,

g^{(S)}(\theta) = \frac{1}{S} \sum_{n \in B} \nabla_\theta \ell(x_n, \theta).

We consider stochastic gradient descent with learning rate \eta, as defined by the update rule

\theta_{k+1} = \theta_k - \eta\, g^{(S)}(\theta_k),    (1)

where k indexes the discrete update steps.
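To make the roles of the learning rate \eta and the batch size S concrete, here is a minimal NumPy sketch of the update rule (1). The function and variable names (sgd, grad_example, X) are illustrative assumptions, not code from the paper.

    import numpy as np

    def sgd(theta, grad_example, X, lr=0.1, batch_size=32, steps=1000, rng=None):
        # Minimal SGD loop implementing update (1): theta <- theta - lr * g_S(theta).
        rng = np.random.default_rng() if rng is None else rng
        N = len(X)
        for _ in range(steps):
            batch = rng.choice(N, size=batch_size, replace=False)                 # minibatch B of size S
            g_S = np.mean([grad_example(theta, X[i]) for i in batch], axis=0)     # unbiased estimate g^(S)
            theta = theta - lr * g_S
        return theta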

2.1 SGD dynamics are determined by learning rate to batch size ratio

In this section we consider SGD as a discretization of a stochastic differential equation (SDE); in this underlying SDE, the learning rate and batch size appear only through the ratio LR/BS. In contrast to previous work (see related work, in Section 4, e.g. Mandt et al. (2017); Li et al. (2017)), we draw attention to the fact that SGD processes with different learning rates and batch sizes but the same ratio of learning rate to batch size are different discretizations of the same underlying SDE, and hence their dynamics are the same, as long as the discretization approximation is justified.

Stochastic Gradient Descent:  We focus on SGD in the context of large datasets. Consider the loss gradient at a randomly chosen data point x_n,

g^{(1)}(\theta) = \nabla_\theta \ell(x_n, \theta).    (2)

Viewed as a random variable induced by the random sampling of the data items, g^{(1)}(\theta) is an unbiased estimator of the gradient g(\theta). For typical loss functions this estimator has finite covariance, which we denote by C(\theta). In the limit of a sufficiently large dataset, each item in a batch is an independent and identically distributed (IID) sample of this estimator.

For a sufficiently large batch size S, the minibatch gradient g^{(S)}(\theta) is a mean of S components of the form g^{(1)}(\theta), each IID. Hence, under the central limit theorem, g^{(S)}(\theta) is approximately Gaussian with mean g(\theta) and covariance \frac{1}{S} C(\theta).

Stochastic gradient descent (1) can therefore be written as

\theta_{k+1} = \theta_k - \eta\, g(\theta_k) + \eta\, \xi_k,    (3)

where we have established that \xi_k = g(\theta_k) - g^{(S)}(\theta_k) is an additive zero-mean Gaussian random noise with covariance \frac{1}{S} C(\theta_k). Hence we can rewrite (3) as

\theta_{k+1} = \theta_k - \eta\, g(\theta_k) + \frac{\eta}{\sqrt{S}}\, \epsilon_k,    (4)

where \epsilon_k is a zero-mean Gaussian random variable with covariance C(\theta_k).

Stochastic Differential Equation:  Consider now a stochastic differential equation (SDE) of the form (see Mandt et al. (2017) for a different SDE which also has a discretization equivalent to SGD)

d\theta = -g(\theta)\, dt + \sqrt{\frac{\eta}{S}}\, B(\theta)\, dW(t),    (5)

where W(t) is a standard Wiener process and B(\theta) B(\theta)^T = C(\theta). In particular we use B(\theta) = C(\theta)^{1/2}, and the eigendecomposition of C(\theta) is given by C(\theta) = V \Lambda V^T, for \Lambda the diagonal matrix of eigenvalues and V the orthonormal matrix of eigenvectors of C(\theta). This SDE can be discretized using the Euler-Maruyama (EuM) method (see e.g. Kloeden & Platen (1992)) with stepsize \Delta t = \eta to obtain precisely the same equation as (4).

Hence we can say that SGD implements an EuM approximation to the SDE (5). (For a more formal analysis, not requiring central limit arguments, see the alternative approach of Li et al. (2017), which also considers SGD as a discretization of an SDE; note that the learning rate to batch size ratio is not present there.) Specifically, we note that in the underlying SDE the learning rate and batch size appear only through the ratio \eta/S, which we also refer to as the stochastic noise. This implies that they are not independent variables in SGD: it is only their ratio that affects the path properties of the optimization process. The only independent effect of the learning rate \eta is to control the stepsize of the EuM approximation, affecting only the per-batch speed at which the discrete process follows the dynamics of the SDE. There are, however, more batches in an epoch for smaller batch sizes, so the per-data-point speed is the same.
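As an illustration of the discretization argument, the following sketch writes out one EuM step of the SDE (5) with stepsize equal to the learning rate; grad and C_sqrt (a matrix square root of C) are assumed to be supplied by the caller and are not part of the paper's code.

    import numpy as np

    def euler_maruyama_step(theta, grad, lr, batch_size, C_sqrt, rng):
        # One Euler-Maruyama step of the SDE (5) with stepsize dt = lr:
        #   d(theta) = -g(theta) dt + sqrt(lr / S) * B dW,   with B @ B.T = C.
        dt = lr
        dW = rng.normal(size=theta.shape) * np.sqrt(dt)        # Wiener increment ~ N(0, dt * I)
        return theta - grad(theta) * dt + np.sqrt(lr / batch_size) * (C_sqrt @ dW)

    # With dt = lr the noise term has covariance (lr**2 / S) * C, so the step is exactly the
    # Gaussian-noise SGD update (4): theta - lr * g(theta) + (lr / sqrt(S)) * eps,  eps ~ N(0, C).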

Further, when plotted on an epoch time axis, the dynamics will approximately match if the learning rate to batch size ratio is the same. This can be seen as follows: rescale both learning rate and batch size by the same amount, \eta' = \kappa \eta and S' = \kappa S, for some \kappa > 0. The different discretization stepsizes give a relation between iteration numbers, k' = k / \kappa, since both processes reach the same SDE time t = k \eta = k' \eta'. The epoch number e is related to the iteration number through e = k S / N, so e' = k' S' / N = (k / \kappa)(\kappa S) / N = k S / N = e, i.e. the dynamics match on an epoch time axis.
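A quick way to see this numerically is the toy sketch below, which runs two noisy-SGD chains with (\eta, S) and (\kappa\eta, \kappa S) on a quadratic loss with isotropic gradient noise; all constants here are illustrative assumptions, not the paper's experimental settings.

    import numpy as np

    def run(lr, batch_size, n_data=50_000, epochs=5, dim=10, sigma=1.0, seed=0):
        # Noisy quadratic toy: L(theta) = 0.5 * ||theta||^2, so g(theta) = theta,
        # with additive gradient noise of scale sigma / sqrt(S).
        rng = np.random.default_rng(seed)
        theta = np.ones(dim)
        losses = []
        for _ in range(epochs):
            for _ in range(n_data // batch_size):   # one epoch = n_data / batch_size updates
                noisy_grad = theta + rng.normal(scale=sigma / np.sqrt(batch_size), size=dim)
                theta = theta - lr * noisy_grad
                losses.append(0.5 * np.sum(theta ** 2))
        return losses

    base = run(lr=0.01, batch_size=100)    # eta / S = 1e-4, 500 updates per epoch
    scaled = run(lr=0.05, batch_size=500)  # same ratio,     100 updates per epoch
    # Sampling base every 5th step aligns the two runs on an epoch axis; the curves should then
    # track each other closely, illustrating the eta/S invariance.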

Relaxing the Central Limit Theorem Assumption:  We note here as an aside that above we provided an intuitive analysis in terms of a central limit theorem argument, but this can be relaxed through a more formal analysis by computing

\mathrm{Cov}\!\left[ g^{(S)}(\theta) \right] = \left( \frac{1}{S} - \frac{1}{N} \right) K(\theta),    (6)

where K(\theta) is the sample covariance matrix

K(\theta) = \frac{1}{N-1} \sum_{n=1}^{N} \left( \nabla_\theta \ell(x_n, \theta) - g(\theta) \right) \left( \nabla_\theta \ell(x_n, \theta) - g(\theta) \right)^T.    (7)

This was shown in e.g. Junchi Li et al. (2017), and a similar result was also found earlier by Hoffer et al. By taking the limit of a batch size small compared to the training set size, S \ll N, one obtains \mathrm{Cov}[g^{(S)}(\theta)] \approx \frac{1}{S} K(\theta), which is the same as the central limit theorem result, with the sample covariance matrix K(\theta) approximating C(\theta), the covariance of g^{(1)}(\theta).
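For completeness, (6) follows from the standard finite-population correction for the covariance of the mean of S indices drawn uniformly without replacement; writing \Sigma_{\mathrm{pop}} for the population covariance of the per-example gradients (normalized by 1/N), a short sketch of the computation is

\mathrm{Cov}\!\left[ g^{(S)}(\theta) \right] = \frac{1}{S}\,\frac{N-S}{N-1}\,\Sigma_{\mathrm{pop}},
\qquad
\Sigma_{\mathrm{pop}} = \frac{N-1}{N}\, K(\theta)
\quad\Longrightarrow\quad
\mathrm{Cov}\!\left[ g^{(S)}(\theta) \right] = \frac{N-S}{NS}\, K(\theta) = \left( \frac{1}{S} - \frac{1}{N} \right) K(\theta).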

2.2 Learning rate to batch size ratio and the trace of the Hessian

We argue in this paper that there is a theoretical relationship between the expected loss value, the level of stochastic noise in SGD (the ratio \eta/S) and the width of the minimum explored at this final stage of training. We derive that relationship in this section.

In talking about the width of a minimum, we define it in terms of the trace of the Hessian at the minimum, \mathrm{Tr}(H): the lower the value of \mathrm{Tr}(H), the wider the minimum. In order to derive the required relationship, we make the following assumptions about the final phase of training:

Assumption 1

As we expect the training to have arrived in a local minimum, the loss surface can be approximated by a quadratic bowl, with the minimum at zero loss (reflecting the ability of networks to fully fit the training data). Given this, the training can be approximated by an Ornstein-Uhlenbeck process. This is a similar assumption to previous papers Mandt et al. (2017); Poggio et al. (2018).

Assumption 2

The covariance of the gradients and the Hessian of the loss approximation are approximately equal, i.e. we can assume C(\theta) \approx H(\theta). A closeness of the Hessian and the covariance of the gradients in the practical training of DNNs has been argued before Sagun et al. (2017); Zhu et al. (2018). In Appendix A we discuss conditions under which C(\theta) \approx H(\theta).

The second assumption is inspired by the explanation recently proposed by Zhu et al. (2018); Zhang et al. (2018) for the mechanism behind escaping sharp minima, in which it is proposed that SGD escapes sharp minima because the covariance of the gradients is anisotropic and aligned with the structure of the Hessian.

Based on Assumptions 1 and 2, the Hessian H is positive definite and matches the covariance C. Hence its eigendecomposition is H = V \Lambda V^T, with \Lambda the diagonal matrix of positive eigenvalues and V an orthonormal matrix. We can reparameterize the model in terms of a new variable z defined by z = V^T (\theta - \theta^*), where \theta^* are the parameters at the minimum.

Starting from the SDE (5), making the quadratic approximation of the loss and the change of variables results in an Ornstein-Uhlenbeck (OU) process for z,

dz = -\Lambda z\, dt + \sqrt{\frac{\eta}{S}}\, \Lambda^{1/2}\, dW(t).    (8)

It is a standard result that the stationary distribution of an OU process of the form (8) is Gaussian with zero mean and covariance \Sigma = \frac{\eta}{2S} I.
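The stated covariance follows from the stationary Lyapunov equation for the OU process (8); with the notation above, a one-line check is

\Lambda \Sigma + \Sigma \Lambda = \frac{\eta}{S}\, \Lambda
\quad\Longrightarrow\quad
(\lambda_i + \lambda_j)\, \Sigma_{ij} = \frac{\eta}{S}\, \lambda_i\, \delta_{ij}
\quad\Longrightarrow\quad
\Sigma = \frac{\eta}{2S}\, I,

since all eigenvalues \lambda_i are positive.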

Moreover, in terms of the new parameters z, the expected loss can be written as

\mathbb{E}[L] = \frac{1}{2}\, \mathbb{E}\!\left[ z^T \Lambda z \right] = \frac{\eta}{4S}\, \mathrm{Tr}(H),    (9)

where the expectation is over the stationary distribution of the OU process, and the second equality follows from the expression for the OU covariance.

We see from Eq. (9) that the learning rate to batch size ratio determines the trade-off between width and expected loss associated with SGD dynamics within a minimum centred at a point of zero loss, with \mathbb{E}[L] = \frac{\eta}{4S}\, \mathrm{Tr}(H). In the experiments which follow, we compare geometrical properties of minima with similar loss values (but different generalization properties) to empirically analyze this relationship between \eta/S and \mathrm{Tr}(H).

2.3 Special Case of Isotropic Covariance

In this section, we look at a case in which the assumptions differ from those of the previous section. We do not make Assumptions 1 and 2. We instead take the limit of a vanishingly small learning rate to batch size ratio, assume an isotropic covariance matrix, and allow the process to continue for exponentially long times to reach equilibrium.

While these assumptions are too restrictive for the analysis to be directly transferable to the practical training of DNNs, the resulting investigations are mathematically interesting and provide further evidence that the learning rate to batch size ratio is theoretically important in SGD.

Let us now assume the individual gradient covariance to be isotropic, that is C(\theta) = \sigma^2 I for constant \sigma^2. In this special case the SDE is well known to have an analytic equilibrium distribution (the equilibrium solution is the very late time stationary, i.e. time-independent, solution with detailed balance, which means that in the stationary solution each individual transition balances precisely with its time reverse, resulting in zero probability currents; see §5.3.5 of Gardiner), given by the Gibbs-Boltzmann distribution (see for example Section 11.4 of Van Kampen (1992); the Boltzmann equilibrium distribution of an SDE related to SGD was also investigated by Heskes & Kappen (1993), but only for the online setting, i.e. a batch size of one):

P(\theta) = \frac{1}{Z} \exp\!\left( -\frac{L(\theta)}{T} \right),    (10)

T = \frac{\eta \sigma^2}{2S},    (11)

where we have used the symbol T in analogy with the temperature in a thermodynamical setting, and Z is the normalization constant (here we assume a weak regularity condition: either the weights belong to a compact space or the loss grows unbounded as the weights tend to infinity, e.g. by including an L2 regularization in the loss). If we run SGD for long enough then it will begin to sample from this equilibrium distribution. By inspection of equation (10), we note that at higher values of T the distribution becomes more spread out, increasing the covariance of \theta, in line with our general findings of Section 2.2.

We consider a setup with two minima, A and B, and ask which region SGD will most likely end up in. Starting from the equilibrium distribution in equation (10), we consider the ratio of the probabilities p_A and p_B of ending up in minima A and B, respectively. We characterize the two minima regions by their loss values, L_A and L_B, and their Hessians, H_A and H_B, respectively. We take a Laplace approximation (a common approach used to approximate integrals, used for example by MacKay (1992) and Kass & Raftery (1995): for a minimum of L at \theta_A with Hessian H_A, the Laplace approximation is \int_A \exp(-L(\theta)/T)\, d\theta \approx \exp(-L_A/T)\, (2\pi T)^{d/2}\, \det(H_A)^{-1/2} as T \to 0) around each minimum to evaluate the integral of the Gibbs-Boltzmann distribution around each minimum, giving the ratio of probabilities, in the limit of small temperature,

\frac{p_A}{p_B} = \sqrt{\frac{\det H_B}{\det H_A}}\, \exp\!\left( -\frac{L_A - L_B}{T} \right).    (12)

The first term in (12) is the square root of the ratio of the Hessian determinants, and is set by the width (volume) of the two minima. The second term is the exponential of the difference in the loss values divided by the temperature. We see that a higher temperature decreases the influence of the exponential term, so that the width of a minimum becomes more important relative to its depth, with wider minima favoured by a higher temperature. For a given gradient variance \sigma^2, the temperature is controlled by the ratio of learning rate to batch size.
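As a concrete, hypothetical illustration of (12), consider two one-dimensional minima: a sharp minimum A with lower loss and a wide minimum B with slightly higher loss. The numbers below are arbitrary and only meant to show how the preferred minimum flips with temperature.

    import numpy as np

    def prob_ratio(L_A, L_B, det_H_A, det_H_B, T):
        # Ratio p_A / p_B from the Laplace approximation (12).
        return np.sqrt(det_H_B / det_H_A) * np.exp(-(L_A - L_B) / T)

    # Sharp minimum A: loss 0.00, curvature 100.  Wide minimum B: loss 0.05, curvature 1.
    print(prob_ratio(0.00, 0.05, 100.0, 1.0, T=0.01))   # ~ 14.8 -> low temperature favours the deeper, sharper A
    print(prob_ratio(0.00, 0.05, 100.0, 1.0, T=1.00))   # ~ 0.11 -> high temperature favours the wider B

Since T = \eta\sigma^2/(2S) by (11), raising \eta/S raises the temperature and hence the relative weight of the wider minimum.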

3 Experiments

We now present an empirical analysis motivated by the theory discussed in the previous section.

3.1 Learning dynamics of SGD depend on LR/BS

In this section we look experimentally at the approximation of SGD as an SDE given in Eq. (5), investigating how the dynamics are affected by the learning rate to batch size ratio.

Figure 1: VGG11 on CIFAR10. Left: cyclic schedules (cyclic batch size vs. cyclic learning rate). Right: two runs with different learning rates and batch sizes but the same constant ratio \eta/S. The match between the red and blue curves in each plot indicates that the dynamics are set by the ratio of learning rate to batch size.
Figure 2: ResNet (top) and VGG11 (bottom) on CIFAR10. Rescaling the learning rate to reproduce a similar learning curve when going from a small batch size (blue) to a large one. In both experiments, rescaling the learning rate by the same amount as the batch size gives a closer match than rescaling it by the square root of the batch size.

We first look at the results of four experiments involving the VGG11 architecture Simonyan & Zisserman (2014) on the CIFAR10 dataset, shown in Fig. 1 (we adapted the final fully-connected layers of VGG11 to FC-512, FC-512, FC-10 so that it is compatible with CIFAR10; each experiment was repeated for 5 different random initializations). The left plot compares two experimental settings: a cyclic batch size (CBS) schedule (blue) oscillating between 128 and 640 at a fixed learning rate, and a cyclic learning rate (CLR) schedule (red) oscillating between 0.001 and 0.005 at a fixed batch size. The right plot compares two other experimental settings with different learning rates and batch sizes but the same constant learning rate to batch size ratio (blue and red). We emphasize the similarity of the curves within each pair of experiments, demonstrating that the learning dynamics are approximately invariant under changes in learning rate or batch size that keep the ratio constant.
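To make the CBS/CLR exchange concrete, here is a small sketch of two discrete cyclic schedules that trace out the same sequence of noise levels \eta/S, one by cycling the batch size at a fixed learning rate and one by cycling the learning rate at a fixed batch size. Only the cycling ranges (128-640 and 0.001-0.005) come from the experiment above; the fixed values and the cycle length are illustrative assumptions.

    def cyclic_batch_size(step, lr=0.005, bs_low=128, bs_high=640, half_cycle=500):
        # Discrete CBS schedule: fixed lr, batch size alternates between bs_low and bs_high.
        bs = bs_low if (step // half_cycle) % 2 == 0 else bs_high
        return lr, bs

    def cyclic_learning_rate(step, bs=128, lr_low=0.001, lr_high=0.005, half_cycle=500):
        # Discrete CLR schedule: fixed batch size, lr alternates between lr_low and lr_high.
        lr = lr_high if (step // half_cycle) % 2 == 0 else lr_low
        return lr, bs

    # Both schedules alternate the noise level eta/S between 0.005/128 and roughly
    # 0.005/640 ~ 0.001/128, so SGD sees approximately the same sequence of noise scales.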

We next ran experiments with other rescalings of the learning rate when going from a small batch size to a large one, to compare them against rescaling the learning rate exactly with the batch size. In Fig. 2 we show the results from two experiments on ResNet56 and VGG11, both trained with SGD and batch normalization on CIFAR10. In both settings the blue line corresponds to training with a small batch size of 50 and a small starting learning rate (we used an adaptive learning rate schedule, with the learning rate dropping by a factor of 10 on epochs 60, 100, 140 and 180 for ResNet56, and by a factor of 2 every 25 epochs for VGG11). The other lines correspond to models trained with different learning rates and a larger batch size. When the learning rate is rescaled by the same amount as the batch size (brown curve for ResNet, red for VGG11), the learning curve matches the blue curve fairly closely. Other rescaling strategies, such as keeping \eta/\sqrt{S} constant as suggested by Hoffer et al. (green curve for ResNet, orange for VGG), lead to larger differences in the learning curves.

(a) Largest eigenvalue.
(b) Frobenius norm.
(c) Validation accuracy.
Figure 3: Effect of the ratio of learning rate to batch size, \eta/S, over a grid of \eta and S values, for a 4-layer ReLU MLP on FashionMNIST. A higher \eta/S correlates with a lower Hessian maximum eigenvalue and a lower Hessian Frobenius norm, i.e. wider minima, and with better generalization. The validation accuracy is similar for different batch sizes and different learning rates, as long as the ratio \eta/S is constant.

3.2 Geometry and generalization depend on LR/BS

In this section we investigate experimentally the impact of the learning rate to batch size ratio on the geometry of the region in which SGD ends. We trained a series of 4-layer batch-normalized ReLU MLPs on Fashion-MNIST Xiao et al. (2017) with different values of \eta/S (each experiment was run for a fixed number of epochs, and models reaching approximately the same accuracy on the training set were selected). To assess the loss curvature at the end of training, we computed the largest eigenvalue of the Hessian and approximated its Frobenius norm using the finite difference method Wu et al. (2017); the largest eigenvalue and the Frobenius norm are used in place of the trace of the Hessian because calculating enough eigenvalues to approximate the trace directly is too computationally intensive, and higher values imply a sharper minimum. Fig. 3(a) and Fig. 3(b) show the values of these quantities for minima obtained by SGD for different \eta/S, over a grid of learning rates \eta and batch sizes S. As \eta/S grows, the norm of the Hessian at the minimum decreases, suggesting that higher values of \eta/S push the optimization towards flatter regions. Fig. 3(c) shows the results from exploring the impact of \eta/S on the final validation performance, which confirms that better generalization correlates with higher values of \eta/S. Taken together, Figs. 3(a)-(c) imply that as \eta/S increases, SGD finds wider regions, which correlate well with better generalization (assuming the network has enough capacity).
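For reference, a largest-eigenvalue estimate of this kind can be obtained by power iteration on finite-difference Hessian-vector products. The sketch below is a generic NumPy version (our own illustration, not the exact procedure of Wu et al. (2017)); theta is assumed to be a flattened parameter vector and grad_fn to return the flattened gradient.

    import numpy as np

    def top_hessian_eigenvalue(theta, grad_fn, n_iters=50, eps=1e-3, rng=None):
        # Power iteration using the finite-difference Hessian-vector product
        #   H v ~ (grad(theta + eps * v) - grad(theta)) / eps.
        rng = np.random.default_rng() if rng is None else rng
        v = rng.normal(size=theta.shape)
        v /= np.linalg.norm(v)
        g0 = grad_fn(theta)
        eig = 0.0
        for _ in range(n_iters):
            Hv = (grad_fn(theta + eps * v) - g0) / eps
            eig = float(v @ Hv)                      # Rayleigh quotient estimate of the top eigenvalue
            v = Hv / (np.linalg.norm(Hv) + 1e-12)
        return eig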

(a)
(b)
(c)
(d)
Figure 4: Interpolations between models, with interpolation coefficient \alpha. At \alpha = 0 there is one trained model (1st element of the subcaption), at \alpha = 1 there is another (2nd element of the subcaption). (a), (b): ResNet56 with different ratios \eta/S. (c), (d): VGG11 with the same ratio but different \eta and S. Higher ratios give wider minima (a, b), as seen by the greater width of the basin around the endpoint trained with the higher ratio, whilst the same ratio gives minima of the same width (c, d), despite the differences in batch size and learning rate.

In Fig. 4 we qualitatively illustrate the behavior of SGD with different \eta/S. We follow Goodfellow et al. (2014) in investigating the loss on the line interpolating between the parameters of two models, with interpolation coefficient \alpha. In Fig. 4(a, b) we consider ResNet56 models on CIFAR10 for different \eta/S; we see a sharper region on the right of each plot, corresponding to the lower ratio. In Fig. 4(c, d) we consider VGG11 models on CIFAR10 for the same ratio but different \eta and S. We see the same sharpness for the same ratio. Experiments were repeated several times with different random initializations, and qualitatively similar plots were obtained.
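The interpolation itself is straightforward; a minimal sketch (assuming flattened parameter vectors theta_a and theta_b and a loss function loss_fn, all hypothetical names) is:

    import numpy as np

    def interpolation_curve(theta_a, theta_b, loss_fn, n_points=25):
        # Loss along the straight line (1 - alpha) * theta_a + alpha * theta_b,
        # as in Goodfellow et al. (2014).
        alphas = np.linspace(0.0, 1.0, n_points)
        losses = [loss_fn((1.0 - a) * theta_a + a * theta_b) for a in alphas]
        return alphas, np.array(losses)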

(a) Training dataset size 12000.
(b) Training dataset size 45000.
Figure 5: Validation accuracy for different dataset sizes and different (\eta, S) pairs at a fixed ratio \eta/S. The curves diverging from the blue one show the approximation of SGD by the SDE breaking down when \eta and S become too large, an effect which is magnified for the smaller dataset size.

3.3 Cyclic schedules

It has been observed that a cyclic learning rate (CLR) schedule leads to better generalization Smith (2015). We have demonstrated that one can exchange a cyclic learning rate schedule for a cyclic batch size (CBS) schedule and approximately preserve the practical benefit of CLR. This exchangeability shows that the generalization benefit of CLR must come from the varying noise level \eta/S, rather than merely from cycling the learning rate. To explore why this helps generalization, we ran VGG11 on CIFAR10 with several training schedules: we compared a discrete cyclic learning rate, a discrete cyclic batch size, a triangular cyclic learning rate, and a baseline constant learning rate. Throughout training we track the Frobenius norm of the Hessian (divided by the number of parameters d), the largest eigenvalue of the Hessian, and the training loss. For each schedule we optimize both the learning rate range and the cycle length on a validation set. In all cyclical schedules the maximum value (of \eta or S) is larger than the minimum value. Sharpness is measured at the best validation score.

The results are shown in Table 1. First we note that all cyclic schedules lead to wider bowls (both in terms of the Frobenius norm and the largest eigenvalue) and higher loss values than the baseline. The discrete schedules lead to much wider bowls for a similar value of the loss. We also note that the discrete schedules, varying either \eta or S, perform similarly to, or slightly better than, the triangular CLR schedule. The results suggest that, by being exposed to higher noise levels, cyclical schemes reach wider endpoints at higher loss than constant learning rate schemes with the same final noise level. We leave the exploration of the implications for cyclic schedules, and a more thorough comparison with other noise schedules, for future work.

               Loss    Test acc.    Valid acc.
Discrete \eta
Discrete S
Triangle
Constant
Table 1: Comparison between different cyclical training schedules (cycle length and learning rate are optimized using a grid search).
Figure 6: Impact of \eta/S on memorization of MNIST when a fraction of the labels in the training set is replaced with random labels, using no momentum (on the right) or momentum (on the left). We observe that a high \eta/S leads to better generalization under full memorization of the training set.

3.4 Impact of SGD on memorization

To generalize well, a model must identify the underlying pattern in the data instead of simply memorizing each training example perfectly. An empirical approach to testing for memorization is to analyze how well a DNN can fit a training set when the true labels are partly replaced by random labels Zhang et al. (2016); Arpit et al. (2017). To better characterize the practical benefit of ending in a wide bowl, we look at memorization of the training set under varying levels of the learning rate to batch size ratio. The experiments described in this section highlight that SGD with a sufficient amount of noise improves generalization after memorizing the training set.

Experiments are performed on the MNIST dataset with an MLP similar to the one used by Arpit et al. (2017), but with a different number of hidden units. We train the MLP with different fractions of random labels in the training set. For each level of label noise, we evaluate the impact of \eta/S on the generalization performance. Specifically, we run experiments with \eta/S taking values over a grid of batch sizes, learning rates and momentum values, and train the models for a fixed number of epochs. Fig. 6 reports the MLPs' performance on both the noisy training set and the validation set after memorizing the training set (defined here as achieving full accuracy on the randomly labelled examples). The results show that larger noise in SGD (regardless of whether it is induced by a smaller batch size or a larger learning rate) leads to solutions which generalize better after having memorized the training set. Additionally, as in the previous sections, we observe a strong correlation of the Hessian norm with \eta/S. We highlight that SGD with low noise steers the endpoint of optimization towards a minimum with poor generalization ability.
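The label-corruption setup can be summarized by a short sketch (a generic illustration with hypothetical names, not the paper's exact code):

    import numpy as np

    def corrupt_labels(y, fraction, num_classes=10, rng=None):
        # Replace a given fraction of the integer labels with uniformly random labels.
        rng = np.random.default_rng() if rng is None else rng
        y = y.copy()
        n_corrupt = int(fraction * len(y))
        idx = rng.choice(len(y), size=n_corrupt, replace=False)
        y[idx] = rng.integers(0, num_classes, size=n_corrupt)
        return y, idx   # idx marks which examples carry random labels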

3.5 Breakdown of scaling

We expect discretization errors to become important when the learning rate gets too large, and we expect the central limit theorem approximation to break down when the batch size becomes large relative to the dataset size.

We show this experimentally in Fig. 5, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor \kappa, up to a certain limit (experiments are repeated 5 times with different random seeds; the graphs show the mean validation accuracies, the numbers in brackets give the mean and standard deviation of the maximum validation accuracy across runs, and the * denotes that at least one seed diverged). This is done for a smaller training set size in Fig. 5(a) than in (b). The curves do not match when \kappa gets too large, as expected from our approximations.

4 Related work

The analysis of SGD as an SDE is well established in the stochastic approximation literature, see e.g. Ljung et al. (1992) and Kushner & Yin. It was shown by Li et al. (2017) that SGD can be approximated by an SDE in an order-one weak approximation. However, batch size does not enter their analysis. In contrast, our analysis makes the role of the batch size evident and shows that the dynamics are set by the ratio of learning rate to batch size. Junchi Li et al. (2017) reproduce the SDE result of Li et al. (2017) and further show that the covariance matrix of the minibatch gradient scales inversely with the batch size (this holds approximately, in the limit of a batch size small compared to the training set size) and proportionally to the sample covariance matrix over all examples in the training set. Mandt et al. (2017) approximate SGD by a different SDE and show that SGD can be used as an approximate Bayesian posterior inference algorithm. In contrast, we show that the ratio of learning rate to batch size influences the width of the minima found by SGD. We then explore each of these aspects experimentally, linking them also to generalization.

Many works have used stochastic gradients to sample from a posterior, see e.g. Welling & Teh (2011), using a decreasing learning rate to correctly sample from the actual posterior. In contrast, we consider SGD with a fixed learning rate and our focus is not on applying SGD to sample from the actual posterior.

Our work is closely related to the ongoing discussion about how batch size affects sharpness and generalization; we extend this discussion by investigating the impact of both batch size and learning rate on sharpness and generalization. Shirish Keskar et al. (2016) showed empirically that SGD ends up in a sharp minimum when using a large batch size. Hoffer et al. rescale the learning rate with the square root of the batch size and train for more epochs to reach the same generalization with a large batch size. The empirical analysis of Goyal et al. (2017) demonstrated that rescaling the learning rate linearly with the batch size can result in the same generalization. Our work theoretically explains this empirical finding and extends the experimental results on it.

Anisotropic noise in SGD was studied in Zhu et al. (2018), where it was found that the gradient covariance matrix is approximately the same as the Hessian late in training. In the work of Sagun et al. (2017), the Hessian is also related to the gradient covariance matrix, and both are found to be highly anisotropic. In contrast, our focus is on the importance of the scale of the noise, set by the learning rate to batch size ratio.

Concurrent with this work, Smith & Le (2017) derive an analytical expression for the stochastic noise scale and – based on the trade-off between depth and width in the Bayesian evidence – find an optimal noise scale for optimizing the test accuracy. Chaudhari & Soatto (2017) explored the stationary non-equilibrium solution for the SDE for non-isotropic gradient noise.

In contrast to these concurrent works, our emphasis is on how the learning rate to batch size ratio relates to the width of the minima sampled by SGD. We show theoretically that different SGD processes with the same ratio are different discretizations of the same underlying SDE and hence follow the same dynamics. Further their learning curves will match under simultaneous rescaling of the learning rate and batch size when plotted on an epoch time axis. We also show that at the end of training, the learning rate to batch size ratio affects the width of the regions that SGD ends in, and empirically verify that the width of the endpoint region correlates with the learning rate to batch size ratio in practice.

5 Conclusion

In this paper we investigated the relation between the learning rate, the batch size and the properties of the final minima. By approximating SGD as an SDE, we found that the learning rate to batch size ratio controls the dynamics by scaling the stochastic noise. Furthermore, under the discussed assumption on the relation between the covariance of the gradients and the Hessian, this ratio is a key determinant of the width of the minima found by SGD. The learning rate, the batch size and the covariance of the gradients, through its link to the Hessian, are three factors influencing the final minima.

We experimentally explored this relation using a range of DNN models and datasets, finding approximate invariance under simultaneous rescaling of the learning rate and batch size, and finding that the ratio of learning rate to batch size correlates with width and generalization, with a higher ratio leading to wider minima and better generalization. Finally, our experiments suggest that schedules with a changing batch size during training are a viable alternative to schedules with a changing learning rate.

Acknowledgements

We thank Agnieszka Pocha, Jason Jo, Nicolas Le Roux, Mike Rabbat, Leon Bottou, and James Griffin for discussions. We thank NSERC, Canada Research Chairs, IVADO and CIFAR for funding. SJ was in part supported by Grant No. DI 2014/016644 from Ministry of Science and Higher Education, Poland and ETIUDA stipend No. 2017/24/T/ST6/00487 from National Science Centre, Poland. We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732204 (Bonseyes). This work is supported by the Swiss State Secretariat for Education‚ Research and Innovation (SERI) under contract number 16.0159. The opinions expressed and arguments employed herein do not necessarily reflect the official views of these funding bodies.

References

Appendix A When Covariance is Approximately the Hessian

In this appendix we describe conditions under which the gradient covariance C(\theta) can be approximately the same as the Hessian H(\theta).

The covariance matrix C(\theta) can be approximated by the sample covariance matrix K(\theta), defined in (7). Define the mean gradient

g(\theta) = \frac{1}{N} \sum_{n=1}^{N} \nabla_\theta \ell(x_n, \theta)    (13)

and the expectation of the squared gradient norm

\frac{1}{N} \sum_{n=1}^{N} \left\| \nabla_\theta \ell(x_n, \theta) \right\|^2.    (14)

In (Saxe et al., 2018; Shwartz-Ziv & Tishby, 2017) (see also (Zhu et al., 2018), who confirm this), it is shown that the squared norm of the mean gradient is much smaller than the expected squared norm of the per-example gradient,

\left\| g(\theta) \right\|^2 \ll \frac{1}{N} \sum_{n=1}^{N} \left\| \nabla_\theta \ell(x_n, \theta) \right\|^2.    (15)

From this we have that

g(\theta)\, g(\theta)^T \approx 0,    (16)

in the sense that it is negligible relative to the uncentred second moment \frac{1}{N} \sum_{n} \nabla_\theta \ell(x_n, \theta)\, \nabla_\theta \ell(x_n, \theta)^T.

We then have that our expression for the sample covariance matrix simplifies to

K(\theta) \approx \frac{1}{N} \sum_{n=1}^{N} \nabla_\theta \ell(x_n, \theta)\, \nabla_\theta \ell(x_n, \theta)^T,    (17)

where we have also used N \gg 1 so that 1/(N-1) \approx 1/N.
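As a sanity check of (15)-(17), one can compare the sample covariance with the uncentred second moment of the per-example gradients. A small sketch (with a hypothetical array per_example_grads of shape (N, d)) is:

    import numpy as np

    def covariance_vs_second_moment(per_example_grads):
        # Compare the sample covariance K (7) with the uncentred second moment in (17).
        N = per_example_grads.shape[0]
        g_mean = per_example_grads.mean(axis=0)                        # mean gradient (13)
        centred = per_example_grads - g_mean
        K = centred.T @ centred / (N - 1)                              # sample covariance (7)
        second_moment = per_example_grads.T @ per_example_grads / N    # right-hand side of (17)
        return K, second_moment   # nearly equal when ||g_mean||^2 is small, cf. (15)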

We follow notation similar to (Martens, 2014). Let f(x; \theta) be a function mapping the neural network's input x to its output. Let \ell(y, f(x; \theta)) be the loss function of an individual sample, comparing the target y to the output f(x; \theta), so that for each sample x_n with target y_n we take \ell_n(\theta) = \ell(y_n, f(x_n; \theta)). Let p(y \mid x; \theta) be the model's predictive distribution at the network output, with associated probability density. Many probabilistic models can be formulated by taking the loss function to be the negative log-likelihood,

\ell(y, f(x; \theta)) = -\log p(y \mid f(x; \theta)).    (18)

Substituting this into (17) gives

K(\theta) \approx \frac{1}{N} \sum_{n=1}^{N} \nabla_\theta \log p(y_n \mid x_n; \theta)\, \nabla_\theta \log p(y_n \mid x_n; \theta)^T.    (19)

Conversely, the Hessian for this probabilistic model can be written as

H(\theta) = \frac{1}{N} \sum_{n=1}^{N} \left[ \nabla_\theta \log p(y_n \mid x_n; \theta)\, \nabla_\theta \log p(y_n \mid x_n; \theta)^T - \frac{\nabla^2_\theta\, p(y_n \mid x_n; \theta)}{p(y_n \mid x_n; \theta)} \right].    (20)

The first term is the same as appears in the approximation (19) to the sample covariance matrix. The second term is negligible in the case where the model is realizable, i.e. the model's conditional probability distribution coincides with the training data's conditional distribution. Mathematically, when the parameter \theta is close to the optimum, p(y \mid x; \theta) \approx q(y \mid x), where q(y \mid x) denotes the data's conditional distribution. Under these conditions the model has realized the data distribution, and the second term is a sample estimator of the following zero quantity:

\mathbb{E}_{x \sim q(x),\, y \sim p(y \mid x; \theta)}\!\left[ \frac{\nabla^2_\theta\, p(y \mid x; \theta)}{p(y \mid x; \theta)} \right] = \mathbb{E}_{x \sim q(x)}\!\left[ \int \nabla^2_\theta\, p(y \mid x; \theta)\, dy \right]    (21)
= \mathbb{E}_{x \sim q(x)}\!\left[ \nabla^2_\theta \int p(y \mid x; \theta)\, dy \right]    (22)
= \nabla^2_\theta\, 1 = 0,    (23)

with the estimator becoming more accurate with larger N. Thus we have that the covariance is approximately the Hessian, C(\theta) \approx H(\theta). (We also note that the first term is the same as the Empirical Fisher. The same argument can be used (Martens, 2014) to demonstrate that the Empirical Fisher matrix approximates the Hessian, and that Natural Gradient (Amari, 1998) close to the optimum is similar to the Newton method.)