1 Introduction
Deep neural networks (DNNs) trained with algorithms based on stochastic gradient descent (SGD) are able to tune the parameters of massively overparametrized models to reach small training loss with good generalization, despite the existence of numerous bad minima. This is especially surprising given that DNNs are capable of overfitting random data with almost zero training loss Zhang et al. (2016). This behavior has been studied by Arpit et al. (2017); Advani and Saxe (2017), who suggest that deep networks generalize well because they tend to fit simple functions over the training data before overfitting noise. It has been further argued that model parameters lying in a region of flatter minima generalize better Hochreiter and Schmidhuber (1997); Keskar et al. (2016); Wu et al. (2017), and that SGD finds such minima when used with a small batch size and a large learning rate Keskar et al. (2016); Jastrzębski et al. (2017); Smith et al. (2017); Chaudhari and Soatto (2017). These recent papers frame SGD as a stochastic differential equation (SDE) under the assumption of small learning rates. A main result of these papers is that the SDE dynamics remain the same as long as the ratio of learning rate to batch size is unchanged. However, this view is limited by its assumption, and it ignores the importance of the structure of SGD noise (i.e., the gradient covariance) and the qualitative roles of learning rate and batch size, which remain relatively obscure.
On the other hand, various variants of SGD have been proposed for optimizing deep networks with the goal of addressing some of the common problems found in high dimensional nonconvex loss landscapes (e.g., saddle points, slow loss descent). Some of the popular algorithms used for training deep networks, apart from vanilla SGD, are SGD with momentum Polyak (1964); Sutskever et al. (2013), AdaDelta Zeiler (2012), RMSProp Tieleman and Hinton (2012), and Adam Kingma and Ba (2014). However, for any of these methods there is currently little theory proving that they help improve generalization in DNNs (which is itself not well understood), although there have been some notable efforts (e.g., Hardt et al. (2015); Kawaguchi et al. (2017)). This raises the question of whether optimization algorithms designed to solve the aforementioned high dimensional problems also help in finding minima that generalize well, or, put differently, what attributes allow optimization algorithms to find such good minima in the nonconvex setting.
We take a step towards answering these two questions in this paper for SGD (without momentum) through qualitative experiments. The main tool we use for studying the DNN loss surface along SGD’s path is to interpolate the loss surface between the parameters before and after each training update and track various metrics. Our findings about SGD’s trajectory can be summarized as follows:
1. We observe that the loss interpolation between parameters before and after each iteration’s update is roughly convex with a minimum (the valley floor) in between. Thus, we deduce that SGD bounces off the walls of a valley-like structure at a height above the floor.
2. The learning rate controls the height at which SGD bounces above the valley floor, while the batch size controls the gradient stochasticity that facilitates exploration (visible from the larger parameter distance from initialization for small batch sizes). In this way, learning rate and batch size play qualitatively different roles in SGD dynamics. (This implies that, except when using a reasonably small learning rate, for which the SDE approximation holds, the effect of a small batch size at a given learning rate cannot be achieved by using a large batch size with a proportionally large learning rate, as observed by Goyal et al. (2017).)
3. The valley floor along SGD’s path has many ups and downs (barriers) which may hinder exploration. Using a large learning rate helps avoid encountering barriers along SGD’s path by maintaining a large height above the valley floor, thus moving over the barriers instead of crossing them. (A barrier is said to be crossed when some point in parameter space interpolated between the points just before and after an update step has a higher loss than both of those points.)
Experiments are conducted on multiple datasets, architectures and hyperparameter settings, and the findings above hold for all of them. We further find that the stochasticity induced in SGD by minibatches is needed both for better optimization and for better generalization. Conversely, artificially added isotropic noise in the absence of minibatch induced stochasticity is bad for DNN optimization. We also discuss some striking similarities between our empirical findings about SGD’s trajectory in DNNs and classical optimization theory in the quadratic setting.
2 Background and Related Work
Various algorithms have been proposed for optimizing deep neural networks, designed from the viewpoint of tackling various high dimensional optimization problems: oscillations during training (SGD with momentum Polyak (1964)), oscillations around minima (Nesterov momentum Nesterov (1983); Sutskever et al. (2013)), saddle points Dauphin et al. (2014), automatic decay of the learning rate (AdaDelta Zeiler (2012), RMSProp Tieleman and Hinton (2012) and Adam Kingma and Ba (2014)), etc. However, there is currently insufficient theory to understand what kinds of minima generalize better, although empirically it has been observed that wider minima (which can be quantified by a low Hessian norm) seem to generalize better Keskar et al. (2016); Wu et al. (2017); Jastrzębski et al. (2017) due to their low complexity, and are more likely to be reached under random initialization given their larger volume Wu et al. (2017).
This argument raises the question of whether the intuitions behind the designs of the various optimization algorithms are really the reasons behind their success in deep learning, or whether other underlying mechanisms make them successful.
To understand this aspect better, a number of (mostly recent) papers study SGD as a stochastic differential process Kushner and Yin (2003); Mandt et al. (2017); Chaudhari and Soatto (2017); Smith and Le (2017); Jastrzębski et al. (2017); Li et al. (2015) under the assumption (among others) that the learning rate is reasonably small. Broadly, these papers show that the stochastic fluctuation in the stochastic differential equation simulated by SGD is governed by the ratio of learning rate to batch size. Hence, according to this theoretical framework, the training dynamics of SGD should remain roughly identical when changing learning rate and batch size by the same factor. However, given that DNNs (especially Resnet-like architectures He et al. (2016)) are often trained with quite large learning rates, the small learning rate assumption may be a pitfall of this framework. (For instance, Goyal et al. (2017) find that increasing the learning rate linearly with batch size helps to a certain extent but breaks down for very large learning rates.) The theory is nonetheless useful, since learning rates do attain small values during training due to annealing or adaptive scheduling, so this framework may indeed apply during parts of training. In this paper we attempt to go beyond these analyses and study the different qualitative roles of the noise induced by a large learning rate versus the noise induced by a small batch size.
Figure 1: Plots for VGG11, Epoch 1, trained using full batch Gradient Descent (GD) on CIFAR10. Top: training loss over the iterations of training. Between the training losses at every pair of consecutive iterations (vertical gray lines), we uniformly sample points between the parameters before and after the training update and compute the loss at these points; thus we take a slice of the loss surface between two iterations. These loss values are plotted between every pair of consecutive training loss values from the training updates. The dashed orange line connects the minima of the loss interpolations between consecutive iterations (each such minimum denotes the valley floor along the interpolation). Middle: cosine of the angle between gradients from two consecutive iterations. Bottom: parameter distance from initialization. Gist: the loss interpolations between consecutive iterations have a minimum at the iterations where the cosine is highly negative (close to −1 after the initial iterations, meaning consecutive gradients point in almost opposite directions), suggesting the optimization is oscillating between the walls of a valley-like structure. The valley floor decreases monotonically.
There has also been work that considers SGD as a diffusion process in which SGD performs a Brownian motion in the parameter space. Li et al. (2017) hypothesize this behavior of SGD and theoretically show that such a diffusion process would allow SGD to cross barriers and thus escape sharp local minima. The authors use this theoretical result to support the findings of Keskar et al. (2016), who find that SGD with small minibatches finds wider minima. Hoffer et al. (2017), on the other hand, make a similar hypothesis based on the evidence that the distance moved by SGD from initialization resembles a diffusion process, and make a similar claim about SGD crossing barriers during training. Contrary to these claims, we find that interpolating the loss surface traversed by SGD on a per-iteration basis suggests that SGD almost never crosses any significant barriers for most of training.
There is also a long list of work towards understanding the loss surface geometry of DNNs from a theoretical standpoint. Dotsenko (1995); Amit et al. (1985); Choromanska et al. (2015) show that under certain assumptions, the DNN loss landscape can be modeled by a spherical spin glass model which is well studied in terms of its critical points. Safran and Shamir (2016) show that under certain mild assumptions, the initialization is likely to be such that there exists a continuous monotonically decreasing path from the initial point to the global minimum. Freeman and Bruna (2016)
theoretically show that for DNNs with rectified linear units (ReLU), the level sets of the loss surface become more connected as network overparametrization increases. This has also been justified by
Sagun et al. (2017), who show that the Hessian of deep ReLU networks is degenerate when the network is overparametrized, and hence the loss surface is flat along such degenerate directions. Goodfellow et al. (2014) empirically show that the convex interpolation of the loss surface from the initialization to the final parameters found by optimization algorithms does not cross any significant barriers, and that the landscape of the loss surface near SGD’s trajectory has a valley-like 2D projection. Broadly, these studies analyze DNN loss surfaces (either theoretically or empirically) in isolation from the optimization dynamics.
In our work we do not study the loss surface in isolation, but rather analyze it through the lens of SGD. In other words, we study the DNN loss surface along the trajectory of SGD and track various metrics while doing so, from which we deduce both what the landscape relevant to SGD looks like and how the hyperparameters of SGD (learning rate and batch size) help SGD maneuver through it.
3 A Walk with SGD
We now begin our analysis of the loss surface of DNNs along the trajectory of the optimization updates. Specifically, consider that the parameters of a DNN are initialized to a value θ₀. When using an optimization method to update these parameters, the update step at iteration t takes the parameters from θ_t to θ_{t+1} using the estimated gradient g_t as
θ_{t+1} = θ_t − η g_t,   (1)
where η is the learning rate. Notice that an update step corresponds to an epoch only in the case of full batch gradient descent (GD; gradient computed using the whole dataset). In the case of stochastic gradient descent, one iteration is an update using the gradient computed from a minibatch. We then interpolate the DNN loss over the convex combinations of θ_t and θ_{t+1} by considering parameter vectors θ(α) = (1 − α) θ_t + α θ_{t+1}, where α ∈ [0, 1] is chosen such that we obtain samples uniformly placed between these two parameter points. Simultaneously, we also keep track of two metrics: the cosine of the angle between two consecutive gradients, cos(g_t, g_{t+1}) = ⟨g_t, g_{t+1}⟩ / (‖g_t‖ ‖g_{t+1}‖), and the distance of the current parameters from the initialization, ‖θ_t − θ₀‖₂. As will become apparent shortly, these two metrics, together with the interpolation curve, help us make deductions about how the optimization interacts with the loss surface along its trajectory.
We perform experiments on MNIST Lecun and Cortes, CIFAR10 Krizhevsky (2009)
and a subset of the tiny Imagenet dataset
Russakovsky et al. (2015) using multilayer perceptrons (MLP), VGG11
Simonyan and Zisserman (2014) and Resnet56 He et al. (2016) architectures with various batch sizes and learning rates. In the main text, due to space limitations, we mostly show results for CIFAR10 using the VGG11 architecture with a batch size of 100 and a fixed learning rate of 0.1. Experiments on all the other datasets, architectures and hyperparameter settings can be found in the appendix. All the claims are consistent across them.
3.1 Optimization Trajectory
We first experiment with full batch gradient descent (GD) to study its behavior before moving to the analysis of SGD, so as to isolate the confounding factor of minibatch induced stochasticity. The plot of the training loss interpolation between consecutive iterations (referred to in the figure as training loss), the cosine of the angle between consecutive gradients, and the parameter distance from initialization for CIFAR10 on the VGG11 architecture optimized using full batch gradient descent is shown in Figure 1 for the initial iterations of training. To be clear, the x-axis is calibrated by the number of iterations, and there are interpolated loss values between each pair of consecutive iterations (vertical gray lines) in the training loss plot, as described above (the cosine and parameter distance plots do not involve any interpolation). The figure shows that, after the initial iterations, the loss interpolated between consecutive parameters from the GD updates appears as a quadratic-like structure with a minimum in between.
Additionally, the cosine of the angle between consecutive gradients becomes negative after the initial iterations and is finally very close to −1, which means that consecutive gradients point in almost opposite directions. These two observations together suggest that the GD iterate is bouncing between the walls of a valley-like landscape. For the iterations where there is a minimum in the interpolation between two iterates, we refer to this minimum as the floor of the valley (these valley floors are connected by the dashed orange line in Figure 1 for clarity). We will see that GD's behavior shows a lack of exploration for better minima by comparing its parameter distance from initialization with that of SGD; take note of the distance GD reaches during these iterations.
Now we perform the same analysis for SGD. Notice that even though the updates are performed using minibatches for SGD, the training loss values used in the plot are computed using the full dataset in order to visualize the actual loss landscape. We show these plots for an early epoch (Figure 2) in the main text and for later epochs (Figures 16 and 17) in the appendix. We find that while the loss interpolation also shows a quadratic-like structure with a minimum in between (similar to GD), there are some qualitative differences compared with GD. The cosine of the angle between gradients from consecutive iterations is significantly less negative, suggesting that instead of oscillating in the same region, SGD is quickly moving away from its previous position. This can be verified by the parameter distance from initialization: after the same number of iterations, the distance moved by SGD is larger than the distance moved by GD (in general, after the same number of updates, GD traverses a smaller distance than SGD; see Hoffer et al. (2017)). Finally, and most interestingly, we see that the height of the valley floor has many ups and downs across consecutive iterations, in contrast with GD (emphasized by the dashed orange line in Figure 2), which means there is rough terrain, i.e., barriers, along the path of SGD that could hinder exploration if the optimization were traveling too close to the valley floor.
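The per-iteration measurement described above can be sketched in a few lines. The following is our own minimal illustration (not the paper's code), using a toy ill-conditioned quadratic in place of the DNN loss; all names and constants here are ours:

```python
import numpy as np

# Toy ill-conditioned quadratic "valley": L(w) = 0.5 * w^T H w.
# The steep direction plays the role of the valley walls.
H = np.diag([100.0, 1.0])
loss = lambda w: 0.5 * w @ H @ w
grad = lambda w: H @ w

def interpolate_losses(w_a, w_b, n=10):
    """Loss at n points uniformly placed between two consecutive iterates."""
    alphas = np.linspace(0.0, 1.0, n)
    return [loss((1 - a) * w_a + a * w_b) for a in alphas]

eta = 0.015                        # large enough that eta * 100 > 1: bouncing
w0 = np.array([1.0, 1.0])
w = w0.copy()
traj, cosines, dists = [w.copy()], [], []
for _ in range(5):
    g = grad(w)
    w = w - eta * g                # the update step of Eq. (1)
    g_next = grad(w)
    cosines.append(g @ g_next / (np.linalg.norm(g) * np.linalg.norm(g_next)))
    dists.append(np.linalg.norm(w - w0))
    traj.append(w.copy())

seg = interpolate_losses(traj[0], traj[1])
floor = min(seg)                   # "valley floor" along this interpolation
print(cosines[0] < 0, floor < seg[0] and floor < seg[-1])
```

With this learning rate the cosine between consecutive gradients is negative and the interpolated loss has an interior minimum below both endpoints, mirroring the bouncing behavior reported above.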
Table 1: Number of barriers crossed during one full epoch at different stages of training.
Arch \ Epochs  1  10  25  100
VGG11  0  0  5  13
Resnet56  0  0  2  23
MLP  0  3  5  –
A similar analysis for Resnet56 on CIFAR10, an MLP on MNIST, and VGG11 on tiny ImageNet trained using GD for the first epoch is shown in Figures 10, 18 and 21, respectively, in the appendix. The same analysis for SGD on different datasets and architectures under different hyperparameters is also shown in section 1 of the appendix. The observations described here are consistent across all these experiments.
In order to show that the claim about the optimization not crossing barriers extends to the whole of training instead of only the few iterations we have shown, we quantitatively measure, for an entire epoch at different phases of training, whether barriers are crossed. The result is shown in Table 1 for VGG11 and Resnet56 trained on CIFAR10 (trained for 100 epochs) and an MLP trained on MNIST (trained for 40 epochs). As we can see, no barriers are crossed for most of training. We further compute the number of barriers crossed during the first epochs for VGG11 on CIFAR10 (Figure 9 in the appendix): no barriers are crossed for most of the epochs, and even for the barriers that are crossed towards the end, their heights are substantially smaller than the loss value at the corresponding point of training, meaning they are not significant.
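The barrier-crossing criterion used for this count (an interpolated point with loss higher than both endpoints) amounts to a simple check; a sketch in our own notation:

```python
def crosses_barrier(interp_losses):
    """A barrier is crossed when some interpolated point between two
    consecutive iterates has a higher loss than BOTH endpoints."""
    interior = interp_losses[1:-1]
    return bool(interior) and max(interior) > max(interp_losses[0], interp_losses[-1])

# Roughly convex slice (the common case along SGD's path): no barrier.
print(crosses_barrier([2.0, 1.2, 0.9, 1.1, 1.8]))   # False
# A bump higher than both endpoints: a barrier was crossed.
print(crosses_barrier([1.0, 1.4, 2.1, 1.3, 1.6]))   # True
```

Applying this check to the interpolated losses of every update in an epoch yields the counts reported in Table 1.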
Finally, we track the spectral norm of the Hessian along with the validation accuracy while the model is being trained. (Note that we track the spectral norm in the train mode of batch normalization; we observe that in validation mode the values are significantly larger. Tracking the value in train mode is fair because this is what SGD experiences during training. Additionally, we track the spectral norm because it captures the largest eigenvalue of the Hessian, in contrast with the Frobenius norm, which can be misleading: the Hessian may have negative eigenvalues, and the Frobenius norm sums the squares of all eigenvalues.) This plot is shown in Figure 3 for VGG11 (and Figure 6 for Resnet56 in the appendix). We find that the spectral norm decreases as training progresses (hence SGD finds flatter regions) but starts increasing towards the end, which is mildly correlated with a drop in validation accuracy towards the end. Regarding this correlation, while Dinh et al. (2017) argue that sharper minima can perform as well as wider ones, it is empirically known that with SGD flatter minima generalize better than sharper ones (Keskar et al., 2016; Jastrzębski et al., 2017). This may be explained by Neyshabur et al. (2017); Achille and Soatto (2017), who discuss that minima that are both wide and have small norm may explain generalization in overparametrized deep networks.
3.2 Qualitative Roles of Learning Rate and Batch Size
We now focus in more detail on how the learning rate and batch size play qualitatively different roles during SGD optimization. As an extreme case, we already saw in the last section that, comparing GD and SGD, the cosine of the angle between gradients from two consecutive iterations is significantly closer to −1 (an angle of 180 degrees) for GD than for SGD.
Now we show, on a more granular scale, that gradually changing the batch size (keeping the learning rate fixed) changes this cosine, while gradually changing the learning rate (keeping the batch size fixed) does not. This is shown in Figure 4 for VGG11 trained on CIFAR10, and in Figures 31, 32 and 33 for Resnet56 on CIFAR10, an MLP on MNIST and VGG11 on tiny ImageNet, respectively, in the appendix. Notice that the cosine is significantly more negative for larger batch sizes, implying that for larger batch sizes the optimization bounces more within the same region instead of traveling farther along the valley, as it does for small batch sizes. This behavior is verified by the smaller distance of the parameters from initialization during training for larger batch sizes, which is also discussed by Hoffer et al. (2017). This suggests that the noise from a small minibatch facilitates exploration that may lead to better minima, and that this is hard to achieve by changing the learning rate.
On the other hand, we find that the learning rate controls the height above the valley floor at which the optimization oscillates along the valley walls, which is important for avoiding barriers along SGD’s path. Specifically, to quantify this height, we make the following computation. Suppose that at iterations t and t+1 of training the parameters are given by θ_t and θ_{t+1} respectively, and consider the points sampled uniformly between θ_t and θ_{t+1}, given by θ(α) = (1 − α) θ_t + α θ_{t+1} for different values of α ∈ [0, 1]. We define the floor ℓ_floor(t) = min_α L(θ(α)), where L(θ) denotes the DNN loss at parameters θ computed using the whole training set. Then we define the height of the iterate above the valley floor at iteration t as h_t = L(θ_t) − ℓ_floor(t). We then separately compute the average height over all iterations of epochs 1, 10 and 25.
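This height computation can be sketched as follows; the code is our toy illustration (a quadratic in place of the DNN loss, names ours), and it shows the qualitative effect reported below, namely that a larger learning rate overshoots further past the floor:

```python
import numpy as np

def height_above_floor(loss_fn, w_t, w_next, n=10):
    """h_t = L(w_t) - min_alpha L((1-alpha) w_t + alpha w_next):
    how far above the valley floor the iterate sits."""
    alphas = np.linspace(0.0, 1.0, n)
    interp = [loss_fn((1 - a) * w_t + a * w_next) for a in alphas]
    return loss_fn(w_t) - min(interp)

# Toy quadratic valley; compare one update with a large vs. a small step.
H = np.diag([100.0, 1.0])
loss = lambda w: 0.5 * w @ H @ w
w = np.array([1.0, 1.0])
h_large = height_above_floor(loss, w, w - 0.015 * (H @ w))
h_small = height_above_floor(loss, w, w - 0.005 * (H @ w))
print(h_large > h_small)
```

In this toy valley the larger step leaves the iterate bouncing at a greater height above the interpolated floor, qualitatively matching the trend in Table 2.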
Table 2: Average height above the valley floor (mean ± std) during different epochs of training.
VGG11  Epoch 1  Epoch 10  Epoch 25
LR 0.1  0.0625 ± 1.2e-3  0.0199 ± 4.8e-4  0.0104 ± 2.2e-5
LR 0.05  0.0102 ± 3.8e-5  0.0050 ± 2.8e-5  0.0035 ± 1.7e-5
Resnet56  Epoch 1  Epoch 10  Epoch 25
LR 0.3  0.0380 ± 5.9e-4  0.0131 ± 4.5e-4  0.0094 ± 1.3e-5
LR 0.15  0.0084 ± 5.2e-5  0.0034 ± 3.2e-5  0.0020 ± 7.2e-6
These values are shown in Table 2 for VGG11 and Resnet56 trained on CIFAR10, and in Table 3 in the appendix for VGG11 on tiny ImageNet. They show that for almost all epochs, a smaller learning rate leads to a smaller height above the valley floor. Since the floor has barriers, this increases the risk of hindering exploration for flatter minima. This is corroborated by the recent empirical observations that smaller learning rates lead to sharper minima and poorer generalization Smith et al. (2017); Jastrzębski et al. (2017). Based on our observations on the roles of learning rate and batch size, we empirically study learning rate schedules in appendix C.
4 Importance of SGD Noise Structure
The gradient from minibatch SGD at a parameter value θ can be expressed as g(θ) = ḡ(θ) + n(θ), where ḡ(θ) denotes the expected gradient using all training samples and the noise n(θ) has covariance C(θ)/B; here B is the minibatch size and C(θ) is the gradient covariance matrix at θ. In the previous section we discussed how minibatch induced stochasticity plays a crucial role in SGD based optimization. This stochasticity has historically been credited with helping the optimization escape local minima in DNNs. However, the importance of the structure of the gradient covariance matrix C(θ) is often neglected in these claims. To better understand its importance, we study the training dynamics of the full batch gradient with artificially added isotropic noise. Specifically, we treat isotropic noise as our null hypothesis, to confirm that the structure of the noise induced by minibatches in SGD is important.
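The 1/B scaling of the minibatch noise covariance can be checked numerically. Below is a toy sketch of our own construction (a linear least-squares model, not the paper's setup) that estimates the variance of minibatch gradients at a fixed parameter value for two batch sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 5
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.5 * rng.normal(size=N)
w = np.zeros(d)

# Per-sample gradients of the squared error 0.5 * (x.w - y)^2 at w.
residual = X @ w - y
per_sample_grads = residual[:, None] * X        # shape (N, d)

def minibatch_grad_var(B, trials=2000):
    """Empirical variance (trace of covariance) of minibatch gradients."""
    grads = np.array([per_sample_grads[rng.choice(N, B, replace=False)].mean(axis=0)
                      for _ in range(trials)])
    return grads.var(axis=0).sum()

v16, v64 = minibatch_grad_var(16), minibatch_grad_var(64)
print(v16 / v64)   # roughly 64/16 = 4: covariance scales as C(w)/B
```

The ratio is close to 4, as predicted by the C(θ)/B form of the minibatch noise covariance (up to Monte Carlo error and a finite-population correction).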
In this experiment, we first train our models with gradient descent (GD), meaning no noise is sampled from the gradient covariance of SGD. For gradient descent with isotropic noise, we add isotropic noise to the full batch gradient at every iteration. The noise is sampled from a normal distribution whose variance is the maximum gradient variance of the model at initialization multiplied by one of two constant factors. We train all models until their training losses saturate and monitor the training loss, validation accuracy, cosine of the angle between gradients from two consecutive iterations, and the parameter distance from initialization.
Figure 5 shows the results for VGG11, and Figures 34 and 35 in the appendix show the results for Resnet56 on CIFAR10 and an MLP on MNIST. From the training loss and validation accuracy curves, we can see that adding even a small amount of isotropic noise makes both convergence and generalization worse compared with the model trained with GD. (We additionally find that the model trained with isotropic noise gets stuck: neither reducing the learning rate nor switching to GD at this point reduces the training loss, whereas switching to SGD makes the loss go down.) The cosine of the angle between gradients of two consecutive iterations is close to −1 for GD, which means consecutive gradients point in almost opposite directions; this is further evidence that GD makes the optimization bounce between valley walls, as discussed in section 3. The parameter distance from initialization shows that models trained with isotropic noise travel farther from initialization compared with the model trained using noiseless GD. These distances are much larger even compared with models trained with SGD (not shown here) for the same number of updates.
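A toy version of this experiment (our construction, a 2-D quadratic in place of the DNN loss and a fixed noise scale of our choosing) already exhibits the noisy variant stalling at a higher loss than plain GD:

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.diag([100.0, 1.0])           # toy ill-conditioned quadratic valley
loss = lambda w: 0.5 * w @ H @ w
grad = lambda w: H @ w

def train(eta, sigma, steps=500):
    """Full-batch GD on the toy loss; sigma > 0 adds isotropic Gaussian
    noise to every update, mimicking the noisy-GD variant."""
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - eta * (grad(w) + sigma * rng.normal(size=2))
    return loss(w)

final_gd = train(eta=0.005, sigma=0.0)
final_iso = np.mean([train(eta=0.005, sigma=3.0) for _ in range(20)])
print(final_gd < final_iso)         # noiseless GD reaches a lower loss
```

In this sketch the isotropic noise injects variance uniformly in all directions, including those where the curvature is low, so the iterate hovers at a noise floor that plain GD descends below.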
To gain more intuition into this behavior, we also compute the norm of the final parameters found by GD, SGD and the two isotropic noise settings. The parameter norm found by SGD is smaller than that found by GD, while both isotropic noise settings yield much larger norms. These numbers corroborate the generally discussed notion that SGD finds solutions with small norm compared with GD Zhang et al. (2016), and the fact that the isotropic noise solutions have much larger norms and get stuck suggests that isotropic noise both hinders optimization and is bad for generalization.
Neelakantan et al. (2015) suggest adding isotropic noise to gradients and report performance improvements on a number of tasks. However, the crucial difference between our claim in this section and their setup is that they add isotropic noise on top of the noise due to minibatch induced stochasticity, while we add isotropic noise to the full dataset gradient (hence no noise is sampled from the gradient covariance matrix).
To gain insight into why the noise sampled from the gradient covariance matrix helps SGD, we note that there is a relationship between the covariance C(θ) and the Hessian H(θ) of the loss surface at parameters θ, revealed by the generalized Gauss-Newton decomposition (see Sagun et al. (2017)) when using the cross-entropy (negative log likelihood) loss. Let p_i(θ) denote the predicted probability output (of the correct class, in the classification setting) of a DNN parameterized by θ for the i-th data sample (N samples in total). Then the negative log likelihood loss for this sample is given by ℓ_i(θ) = −log p_i(θ). The relation between the Hessian and the gradient covariance for the negative log likelihood loss is
H(θ) = C(θ) + ∇L(θ) ∇L(θ)ᵀ − (1/N) Σᵢ ∇²pᵢ(θ) / pᵢ(θ).
The derivation can be found in section D of the appendix. Thus the Hessian and the covariance are related at any point, and are almost equal near minima, where the remaining terms tend to zero. This relationship implies that the minibatch induced noise is roughly aligned with the sharper directions of the loss landscape (empirically confirmed concurrently by Zhu et al. (2018)). This would prevent the optimization from converging along such directions unless a wider region is found, which could explain why SGD finds wider minima without relying on the stochastic differential equation framework (previous work), which assumes a reasonably small learning rate.
Figure 5: Plots for VGG11 trained with GD (without noise) and with GD plus artificial isotropic noise sampled from Gaussian distributions with different variances. Models trained using GD with added isotropic noise get stuck in terms of training loss and have worse validation performance compared with the model trained with plain GD.
5 Discussion
We presented qualitative results to understand how GD and SGD interact with the DNN loss surface, avoiding assumptions in order to rely instead on empirical evidence. We now draw similarities between the optimization trajectory we have empirically found in DNNs and that of quadratic loss optimization (see section 5 of LeCun et al. (1998)). Based on our empirical evidence, we deduce that both GD and SGD move through a valley-like landscape by bouncing off the valley walls. This is reminiscent of optimization in the quadratic loss setting with a non-isotropic positive semi-definite Hessian, where the learning rate η causes underdamping without divergence along eigenvectors of the Hessian whose eigenvalues λ satisfy 1 < ηλ < 2. On the other hand, in the case of DNNs trained with GD, we find that even though the training loss oscillates between valley walls during consecutive iterations, the valley floor decreases smoothly (see Figure 1). This is similar to quadratic loss optimization with overdamped convergence along the eigenvectors whose eigenvalues satisfy ηλ < 1.
On a different note, it is commonly conjectured that when training DNNs, SGD crosses barriers in order to escape local minima. Contrary to this commonly held intuition, we find that SGD almost never crosses any significant barriers along its path. More interestingly, when training with a large learning rate (see Figure 2), we find barriers at the floor of the valley, but SGD avoids them by traveling at a height above the floor (due to the large learning rate). Hence, with a small learning rate, SGD should encounter such barriers and would be expected to cross them; yet in our experiments this was not the case. This suggests that while in theory SGD is capable of crossing barriers (due to stochasticity), it does not do so, probably because in such regions there exist other directions along which SGD can continue to optimize without crossing barriers. Since small learning rates empirically correlate with poor generalization, this suggests that moving over such barriers instead of crossing them, by using a large learning rate, is a good mechanism for exploring good regions.
Finally, much of what we have discussed is based on the loss landscape of specific datasets and architectures, along with network parameterization choices like rectified linear units (ReLUs) and batch normalization Ioffe and Szegedy (2015). The conclusions may differ depending on these choices; in such cases, an analysis similar to ours can be performed to see whether similar dynamics hold. Studying these dynamics may provide more practical guidelines for setting optimization hyperparameters.
References
 Achille and Soatto (2017) Alessandro Achille and Stefano Soatto. On the emergence of invariance and disentangling in deep representations. arXiv preprint arXiv:1706.01350, 2017.
 Advani and Saxe (2017) Madhu S Advani and Andrew M Saxe. Highdimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
 Amit et al. (1985) Daniel J Amit, Hanoch Gutfreund, and Haim Sompolinsky. Spinglass models of neural networks. Physical Review A, 32(2):1007, 1985.

 Arpit et al. (2017) Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pages 233–242, 2017.
 Chaudhari and Soatto (2017) Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv preprint arXiv:1710.11029, 2017.
 Choromanska et al. (2015) Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pages 192–204, 2015.
 Dauphin et al. (2014) Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in highdimensional nonconvex optimization. In Advances in neural information processing systems, pages 2933–2941, 2014.
 Dinh et al. (2017) Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
 Dotsenko (1995) Viktor Dotsenko. An introduction to the theory of spin glasses and neural networks, volume 54. World Scientific, 1995.
 Freeman and Bruna (2016) C Daniel Freeman and Joan Bruna. Topology and geometry of half-rectified network optimization. arXiv preprint arXiv:1611.01540, 2016.
 Goodfellow et al. (2014) Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.
 Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
 Hardt et al. (2015) Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
 He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
 Hoffer et al. (2017) Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pages 1729–1739, 2017.
 Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448–456, 2015.
 Jastrzębski et al. (2017) Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2017.
 Kawaguchi et al. (2017) Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio. Generalization in deep learning. arXiv preprint arXiv:1710.05468, 2017.
 Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
 Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Krizhevsky (2009) Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, 2009.
 Kushner and Yin (2003) Harold Kushner and G George Yin. Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media, 2003.
 LeCun and Cortes Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/.
 LeCun et al. (1998) Yann LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–50. Springer, 1998.
 Li et al. (2017) Chris Junchi Li, Lei Li, Junyang Qian, and Jian-Guo Liu. Batch size matters: A diffusion approximation framework on nonconvex stochastic gradient descent. arXiv preprint arXiv:1705.07562, 2017.
 Li et al. (2015) Qianxiao Li, Cheng Tai, et al. Stochastic modified equations and adaptive stochastic gradient algorithms. arXiv preprint arXiv:1511.06251, 2015.
 Mandt et al. (2017) Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate bayesian inference. arXiv preprint arXiv:1704.04289, 2017.
 Neelakantan et al. (2015) Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
 Nesterov (1983) Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
 Neyshabur et al. (2017) Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5949–5958, 2017.
 Polyak (1964) Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
 Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
 Safran and Shamir (2016) Itay Safran and Ohad Shamir. On the quality of the initial basin in overspecified neural networks. In International Conference on Machine Learning, pages 774–782, 2016.
 Sagun et al. (2017) Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of overparametrized neural networks. arXiv preprint arXiv:1706.04454, 2017.
 Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 Smith (2017) Leslie N Smith. Cyclical learning rates for training neural networks. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pages 464–472. IEEE, 2017.
 Smith and Topin (2017) Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of residual networks using large learning rates. arXiv preprint arXiv:1708.07120, 2017.
 Smith and Le (2017) Samuel L Smith and Quoc V Le. Understanding generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
 Smith et al. (2017) Samuel L Smith, Pieter-Jan Kindermans, and Quoc V Le. Don’t decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489, 2017.
 Sutskever et al. (2013) Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147, 2013.
 Tieleman and Hinton (2012) Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26–31, 2012.
 Wu et al. (2017) Lei Wu, Zhanxing Zhu, et al. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239, 2017.
 Zeiler (2012) Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
 Zhang et al. (2016) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
 Zhu et al. (2018) Zhanxing Zhu, Jingfeng Wu, Lei Wu, Jinwen Ma, and Bing Yu. The regularization effects of anisotropic noise in stochastic gradient descent. arXiv preprint arXiv:1803.00195, 2018.
Appendix
Appendix A Optimization Trajectory
This is a continuation of the corresponding section in the main text. Here we show further experiments on other datasets, architectures, and hyperparameter settings. The analyses of GD training for Resnet56 on CIFAR10, MLP on MNIST, and VGG11 on tiny ImageNet are shown in figures 10, 18, and 21 respectively. Similarly, the analyses of SGD training for Resnet56 on CIFAR10 with batch size 100 and learning rate 0.1 for epochs 1, 2, 25, and 100 are shown in figures 11, 12, 13, and 14 respectively. The analyses of SGD training for VGG11 on CIFAR10 with batch size 100 and learning rate 0.1 for epochs 2, 25, and 100 are shown in figures 15, 16, and 17. The analyses of SGD training for MLP on MNIST for epochs 1 and 2 are shown in figures 19 and 20. The analysis of SGD training for VGG11 on tiny ImageNet for epoch 1 is shown in figure 22. We also conducted the same experiment and analysis with various batch sizes and learning rates for every architecture. Results for VGG11 can be found in figures 23, 24, 25, and 26. Results for Resnet56 can be found in figures 27, 28, 29, and 30. The observations and rules we discovered and described in the main text hold consistently across all these experiments. Specifically, for the interpolation of SGD for VGG11 on tiny ImageNet, the valley-like trajectory looks unusual, but even so, according to our quantitative evaluation there is no barrier between any two consecutive iterations.
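The interpolation analysis used throughout these experiments can be sketched as follows. This is a minimal illustration (the function and parameter names are ours, not from the paper): the loss is evaluated on the line segment between the parameters before and after an update, and a barrier is flagged if some interior point is higher than both endpoints.

```python
import numpy as np

def interpolate_loss(loss_fn, theta_before, theta_after, n_points=21):
    """Evaluate loss_fn at evenly spaced points on the segment between
    two parameter vectors (alpha=0 -> before update, alpha=1 -> after)."""
    alphas = np.linspace(0.0, 1.0, n_points)
    return [loss_fn((1 - a) * theta_before + a * theta_after) for a in alphas]

def has_barrier(losses):
    """A barrier exists if an interior point is higher than both endpoints."""
    return max(losses[1:-1]) > max(losses[0], losses[-1])

# Toy check on a convex quadratic: no barrier between any two points.
quad = lambda th: float(th @ th)
losses = interpolate_loss(quad, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

In the actual experiments `theta_before` and `theta_after` would be the flattened network parameters saved around a single SGD update.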
We track the spectral norm of the Hessian along with the validation accuracy while the model is being trained. This is shown in figure 6 for Resnet56 trained on CIFAR10.
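Tracking the spectral norm of the Hessian can be done with power iteration; the sketch below (helper names are ours) uses an explicit symmetric matrix for clarity, whereas in practice `H @ v` would be replaced by a Hessian-vector product computed with automatic differentiation.

```python
import numpy as np

def spectral_norm(H, n_iters=200, seed=0):
    """Estimate the largest absolute eigenvalue of a symmetric matrix H
    by power iteration on matrix-vector products."""
    v = np.random.default_rng(seed).normal(size=H.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        w = H @ v
        v = w / np.linalg.norm(w)
    # Rayleigh quotient at the converged direction.
    return float(abs(v @ H @ v))

H = np.diag([3.0, 1.0, -0.5])  # toy Hessian with spectral norm 3
```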
Appendix B Qualitative Roles of Learning Rate and Batch Size
This is a continuation of the corresponding section in the main text. In this section we show further experiments analyzing the different roles of learning rate and batch size during training on various architectures and datasets. Figures 31, 32, and 33 show the results for Resnet56 on CIFAR10, MLP on MNIST, and VGG11 on TinyImagenet. In all of the experiments, training the model with a smaller batch size makes the angle between the gradients of two consecutive iterations larger, which means that for smaller batch sizes, instead of oscillating within the same region, the optimization travels farther along the valley, as described in the main text. For all architectures, changing the learning rate does not change the angles.
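The angle between gradients of consecutive iterations can be tracked as below; a minimal sketch with our own helper name, operating on flattened gradient vectors:

```python
import numpy as np

def grad_angle_degrees(g_prev, g_next):
    """Angle (in degrees) between two flattened gradient vectors.
    Angles near 180 indicate oscillation back and forth across a valley;
    smaller angles indicate travel along the valley floor."""
    cos = np.dot(g_prev, g_next) / (np.linalg.norm(g_prev) * np.linalg.norm(g_next))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```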
VGG11 on CIFAR10 | Epoch 1 | Epoch 10 | Epoch 25 | Epoch 100
LR 0.1 | 0.0625 ± 1.2e-3 | 0.0199 ± 4.8e-4 | 0.0104 ± 2.2e-5 | 0.0025 ± 3.1e-6
LR 0.05 | 0.0102 ± 3.8e-5 | 0.0050 ± 2.8e-5 | 0.0035 ± 1.7e-5 | 0.0011 ± 1.0e-6
Resnet56 on CIFAR10 | Epoch 1 | Epoch 10 | Epoch 25 | Epoch 100
LR 0.3 | 0.0380 ± 5.9e-4 | 0.0131 ± 4.5e-4 | 0.0094 ± 1.3e-5 | 0.0017 ± 1.0e-5
LR 0.15 | 0.0084 ± 5.2e-5 | 0.0034 ± 3.2e-5 | 0.0020 ± 7.2e-6 | 0.0013 ± 6.7e-6
VGG11 on TinyImageNet | Epoch 1 | Epoch 10 | Epoch 25 | Epoch 100
LR 0.5 | 0.028 ± 1.0e-3 | 0.213 ± 1.5e-3 | 0.187 ± 1.9e-3 | 9.8e-5 ± 2.0e-9
LR 0.1 | 0.0039 ± 5.2e-5 | 0.163 ± 2.64e-5 | 0.116 ± 0.013 | 1.1e-5 ± 3.6e-11
Appendix C Learning Rate Schedule
We observe from table 2 that the optimization oscillates at a lower height as training progresses (likely because SGD finds flatter regions as training progresses; see Figure 3). As we discussed based on Figure 2, the floor of the DNN valley is highly nonlinear with many barriers. Based on these two observations, it seems advantageous for SGD to maintain a large height above the floor of the valley to facilitate further exploration without getting hindered by barriers, as this may allow the optimization to find flatter regions. Hence, this line of thought suggests that we should increase the learning rate as training progresses (of course, it eventually needs to be annealed for convergence to a minimum). Smith (2017) and Smith and Topin (2017) propose a cyclical learning rate (CLR) schedule which partially has this property: it involves linearly increasing the learning rate every iteration up to a certain number of iterations, then similarly linearly reducing it, and repeating this process cyclically. We now empirically show that multiple cycles of CLR are redundant, and that simply increasing the learning rate until a certain point and then annealing it leads to similar or better performance. Specifically, to rule out the need for cycles, as a null hypothesis, we increase the learning rate as in the first cycle of CLR, then keep it flat, then linearly anneal it (we call this the trapezoid schedule). For fairness, we also plot the widely used stepwise learning rate annealing schedule. In our experiments, we find that methods which increase the learning rate during training may be considered slightly better. The learning curves are shown in figure 8 in the main text and figure 7 in the appendix (with other details). We leave an extensive study of learning rate schedule design based on the proposed guideline as future work.
We run the same experiment as described above on Resnet56 with CIFAR10, comparing CLR, the trapezoid schedule, and SGD with stepwise annealing. Plots can be seen in figure 7. All schedules are tuned to their best performance with a hyperparameter grid search. For both Resnet56 and VGG11, we use batch size 100 for all models. The learning rate schedules are apparent from the figures themselves.
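A minimal sketch of the trapezoid schedule described above; the ramp fractions and learning rate values here are illustrative defaults of ours, not the tuned values used in the experiments.

```python
def trapezoid_lr(step, total_steps, base_lr, peak_lr,
                 warmup_frac=0.3, anneal_frac=0.3):
    """Linearly increase the learning rate to a peak, hold it flat,
    then linearly anneal it to zero for convergence."""
    warmup = int(warmup_frac * total_steps)
    anneal = int(anneal_frac * total_steps)
    if step < warmup:                    # linear ramp up
        return base_lr + (peak_lr - base_lr) * step / warmup
    if step > total_steps - anneal:      # linear anneal down to zero
        return peak_lr * (total_steps - step) / anneal
    return peak_lr                       # flat top
```

The function would be called once per iteration to set the optimizer's learning rate.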
Appendix D Importance of SGD Noise Structure
Here we derive in detail the relation between the Hessian and the gradient covariance for the negative log-likelihood loss $\ell(\theta; x, y) = -\log p(y \mid x; \theta)$. Note we use the facts that for this particular loss function, $\int p(y \mid x; \theta)\, dy = 1$, and hence $\int \nabla_\theta^2\, p(y \mid x; \theta)\, dy = \nabla_\theta^2 \int p(y \mid x; \theta)\, dy = 0$, which (taking the expectation over $y$ with respect to the model distribution $p(y \mid x; \theta)$, i.e., assuming the model fits the data distribution well) yields:
\begin{align}
\mathbf{H}(\theta) &= \mathbb{E}_{x,y}\!\left[\nabla_\theta^2 \left(-\log p(y \mid x; \theta)\right)\right] \tag{2} \\
&= \mathbb{E}_{x,y}\!\left[\frac{\nabla_\theta p \, \nabla_\theta p^\top}{p^2} - \frac{\nabla_\theta^2 p}{p}\right] \tag{3} \\
&= \mathbb{E}_{x,y}\!\left[\nabla_\theta \log p \, \nabla_\theta \log p^\top\right] - \mathbb{E}_x\!\left[\int \nabla_\theta^2\, p(y \mid x; \theta)\, dy\right] \tag{4} \\
&= \mathbb{E}_{x,y}\!\left[\nabla_\theta \log p \, \nabla_\theta \log p^\top\right] \tag{5} \\
&= \mathbb{E}_{x,y}\!\left[\nabla_\theta \ell \, \nabla_\theta \ell^\top\right] \tag{6} \\
&= \mathbf{C}(\theta) + \mathbb{E}\!\left[\nabla_\theta \ell\right] \mathbb{E}\!\left[\nabla_\theta \ell\right]^\top \tag{7}
\end{align}
where $\mathbf{C}(\theta) = \mathbb{E}[\nabla_\theta \ell \, \nabla_\theta \ell^\top] - \mathbb{E}[\nabla_\theta \ell]\, \mathbb{E}[\nabla_\theta \ell]^\top$ is the gradient covariance. Near a critical point, $\mathbb{E}[\nabla_\theta \ell] \approx 0$, so $\mathbf{H}(\theta) \approx \mathbf{C}(\theta)$.
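The identity between the expected gradient outer product and the Hessian can be checked numerically in a simple case. Below is a toy sketch of ours for one-dimensional logistic regression, where the expectation over $y$ under the model distribution can be computed in closed form and matches the Hessian of the negative log-likelihood exactly.

```python
import numpy as np

# Toy check of E_{y~p}[grad l * grad l] = Hessian for the NLL of logistic
# regression with p(y=1 | x; theta) = sigmoid(theta * x).
theta, x = 0.7, 1.3
s = 1.0 / (1.0 + np.exp(-theta * x))   # p(y=1 | x; theta)
# The gradient of -log p(y | x; theta) w.r.t. theta is (s - y) * x,
# so the expected squared gradient under y ~ p is:
fisher = s * ((s - 1.0) * x) ** 2 + (1.0 - s) * (s * x) ** 2
# The second derivative of the NLL w.r.t. theta (same for y = 0 and y = 1):
hessian = s * (1.0 - s) * x ** 2
```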
Appendix E Discussion
In the main text, we discuss convergence in the quadratic setting depending on the value of the learning rate relative to the largest eigenvalue of the Hessian. The convergence in this setting is visualized in figure 36.
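The condition can be illustrated on a one-dimensional quadratic $f(\theta) = \tfrac{1}{2}\lambda\theta^2$, where each gradient descent step multiplies $\theta$ by $(1 - \eta\lambda)$, so the iterates converge iff $\eta < 2/\lambda$. A toy sketch with our own illustrative values:

```python
def gd_final_iterate(theta0, lr, curvature, steps=50):
    """Run gradient descent on f(theta) = 0.5 * curvature * theta**2.
    Each step multiplies theta by (1 - lr * curvature), so the iterates
    converge iff |1 - lr * curvature| < 1, i.e. lr < 2 / curvature."""
    theta = theta0
    for _ in range(steps):
        theta -= lr * curvature * theta
    return theta

lam = 4.0  # curvature, playing the role of the Hessian's largest eigenvalue
```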