The optimization algorithm chosen by a deep learning practitioner determines the training speed and the final predictive performance of their model. To date, there is no theory that adequately explains how to make this choice. Instead, our community relies on empirical studies [Wilson et al., 2017] and benchmarking [Schneider et al., 2019]. Indeed, it is the de facto standard that papers introducing new optimizers report extensive comparisons across a large number of workloads. Therefore, to maximize scientific progress, we must have confidence in our ability to make empirical comparisons between optimization algorithms.
Although there is no theory guiding us when comparing optimizers, the popular first-order optimizers form a natural inclusion hierarchy. For example, Adam [Kingma and Ba, 2015] and RMSProp [Tieleman and Hinton, 2012] can approximately simulate Momentum [Polyak, 1964] if the ε term in the denominator of their parameter updates is allowed to grow very large. However, these relationships may not matter in practice. For example, the settings of Adam’s metaparameters that allow it to match the performance of Momentum may be too difficult to find (for instance, they may be infinite).
In this paper, we demonstrate two important and interrelated points about empirical comparisons of neural network optimizers. First, we show that inclusion relationships between optimizers actually matter in practice; in our experiments, more general optimizers never underperform special cases. Despite conventional wisdom [Wilson et al., 2017, Balles and Hennig, 2017], we find that when carefully tuned, Adam and other adaptive gradient methods never underperform Momentum or SGD. Second, we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. By comparing to previous experimental evaluations, we show how easy it is to change optimizer rankings on a given workload (model and dataset pair) by changing the metaparameter tuning protocol, with optimizer rankings stabilizing according to inclusion relationships as we spend more and more effort tuning. Our findings raise serious questions about the practical relevance of conclusions drawn from these sorts of empirical comparisons.
The remainder of this paper is structured as follows. In Section 2, we review related work, focusing on papers that make explicit claims about optimizer comparisons in deep learning and application papers that provide evidence about the tuning protocols of practitioners. We develop our definition of first-order optimizers in Section 3 along with a notion of inclusion relationships between optimizers. We present our experimental results in Section 4. Despite thorny methodological issues over how to avoid biases in comparisons due to search spaces that favor one optimizer over another, we believe that our experimental methodology is an acceptable compromise and has substantial practical relevance. Among other results, we show that the inclusion hierarchy of update rules is almost entirely predictive of optimizer comparisons. In particular, NAdam [Dozat, 2016]
achieves the best top-1 validation accuracy on ResNet-50 on ImageNet in our experiments. The 77.1% we obtain with NAdam, although not as good as the 77.6% obtained using learned data augmentation by Cubuk et al., is better than the best existing published results using any of the more standard pre-processing pipelines (76.5%, due to Goyal et al. using Momentum).
2 Background and Related Work
Our work was inspired by the recent studies of neural network optimizers by Wilson et al. and Schneider et al. Wilson et al. constructed a simple classification problem in which adaptive gradient methods (e.g. Adam) converge to provably worse solutions than standard gradient methods. However, crucially, their analysis ignored the ε parameter in the denominator of some adaptive gradient methods. Wilson et al. also presented experiments in which Adam produced worse validation accuracy than SGD across all deep learning workloads considered. However, they only tuned over the learning rate and learning rate decay scheme in their experiments, leaving all other metaparameters of Adam at fixed default values. Despite these findings, adaptive gradient methods have remained popular since the work of Wilson et al. Schneider et al. presented a benchmark suite (DeepOBS) for deep learning optimizers and reported that there was no single best optimizer across the workloads they considered. Yet Schneider et al. only tuned the learning rate of each optimizer and left all other metaparameters at fixed default values.
As we will discuss in Section 4.3, the choices of metaparameter tuning protocols in Wilson et al. and Schneider et al. may be the most important factor preventing their results from being relevant to practical choices about which optimizer to use. Metaparameter tuning is a crucial step of the deep learning pipeline [Bergstra and Bengio, 2012, Snoek et al., 2012, Sutskever et al., 2013, Smith, 2018], so it is critical for papers studying optimizers to match as closely as possible the tuning protocols of an ideal practitioner. Tuning protocols can vary widely and often differ between work studying neural network optimizers and work concerned with actually training neural networks to solve specific problems.
Recent papers that study or introduce optimization algorithms tend to compare to Adam and RMSProp without tuning ε, presumably to simplify their experiments. It is standard to leave ε at the common default value of 10⁻⁸ for Adam and 10⁻¹⁰ for RMSProp [Tieleman and Hinton, 2012, Kingma and Ba, 2015, Dozat, 2016, Balles and Hennig, 2017, Loshchilov and Hutter, 2017, Zou and Shen, 2018, Ma and Yarats, 2018, Bernstein et al., 2018, Chen et al., 2019, Zou et al., 2019]. Others do not even report the value of ε used [Balles and Hennig, 2017, Zhang and Mitliagkas, 2017, Keskar and Socher, 2017, Chen et al., 2018, Zhou et al., 2018, Aitchison, 2018, Reddi et al., 2019, Luo et al., 2019]. There are exceptions. Zaheer et al. and Liu et al. consider ε values orders of magnitude larger than the standard default. However, the experiments in both papers gave only limited consideration to ε, testing at most two values while tuning Adam. De et al. is the only work we found that considered a broad range of values for ε. Both Zaheer et al. and De et al. found that non-default values of ε outperformed the default.
While it is also extremely common in applications to use a default value of ε, some notable papers tuned ε and selected values up to eight orders of magnitude away from the common defaults. Szegedy et al. used ε = 1.0 for RMSProp; Liu et al. reported that their results were sensitive to ε and set a non-default value for Adam; Tan et al. and Tan and Le set ε = 10⁻³ for RMSProp, the latter achieving state-of-the-art ImageNet top-1 accuracy. In reinforcement learning, Hessel et al. set ε = 1.5 × 10⁻⁴. Although we focused this discussion on ε in Adam and RMSProp, we suspect these trends hold for other rarely tuned metaparameters as well.
3 What is an optimizer?
Optimization algorithms are typically controlled by metaparameters that determine their behavior (e.g. the learning rate). An optimization algorithm therefore represents a family of update rules until all metaparameters have been specified. Practitioners generally tune a subset of the metaparameters to maximize performance over a validation set, while often leaving some metaparameters at fixed default values. We define an optimizer to be an update rule together with a list of metaparameters to tune. In other words, someone using Adam and tuning ε is using a “different” optimizer than someone using Adam with the default value of ε. We focus on first-order optimizers within the following standard model of iterative methods for optimization [Nesterov, 2018].
Consider a differentiable loss function L: ℝᵈ → ℝ whose vector of first partial derivatives, or gradient, is given by ∇L(θ). In our context, L generally represents the loss function computed over an entire dataset by a neural network on a specific task, where θ ∈ ℝᵈ is a vector of model parameters. The optimization problem is to find a global minimum θ* such that L(θ*) ≤ L(θ) for all θ ∈ ℝᵈ, but in practice we content ourselves with points θ* that are locally optimal: L(θ*) ≤ L(θ) for all θ in a non-empty neighbourhood of θ*. First-order methods for optimization [Nesterov, 2018] use queries to L and ∇L locally at iterates θ to solve this problem. In most deep learning applications, the cost of evaluating ∇L(θ) scales linearly with the data set size, and it is usually more effective to use a stochastic estimator of ∇L(θ), whose cost is constant in the data set size [Bottou, 2010]. We assume that ∇L(θ) denotes a stochastic estimate of the true gradient for the remainder of this section.
The stochastic gradient descent algorithm [SGD; Robbins and Monro, 1951] is one of the simplest methods used for training neural networks. SGD is initialized with a point θ₀ and produces a sequence of iterates according to the rule θₜ₊₁ = θₜ − ηₜ∇L(θₜ), where ηₜ is an iteration-dependent “learning rate” or “step size”. Recently, there has been an explosion of new methods in deep learning based on SGD, all of which fall into the following first-order scheme:

    require: update rule U, initialization θ₀, metaparameters φ ∈ Φ
    t ← 0
    while stopping criteria on θₜ not met do
        θₜ₊₁ ← U(Hₜ; φ), where Hₜ = {θ₀, …, θₜ, ∇L(θ₀), …, ∇L(θₜ)}
        t ← t + 1
    end while
    return θₜ
This scheme is a slight modification of Nesterov’s (2018) and includes all of the modern first-order methods popular in deep learning. As an example, the metaparameter of SGD is a learning rate schedule {ηₜ} and its update rule is given by U(Hₜ; {ηₜ}) = θₜ − ηₜ∇L(θₜ). The Momentum method due to Polyak generalizes the gradient method by linearly combining the gradient direction with a constant multiple of the previous parameter update. Its metaparameters are a learning rate schedule {ηₜ} and a momentum parameter γ, and its update rule is θₜ₊₁ = θₜ − ηₜ∇L(θₜ) + γ(θₜ − θₜ₋₁).
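As a minimal illustration of this scheme (our own sketch, not the paper's code; the function names and the toy quadratic loss are hypothetical), the update rule sees the full history of iterates and gradients plus its metaparameters:

```python
def sgd_update(history, lr):
    # U_SGD: step against the most recent gradient.
    theta, grad = history[-1]
    return theta - lr * grad

def momentum_update(history, lr, gamma):
    # U_Momentum (heavy ball): gradient step plus a constant
    # multiple of the previous parameter update.
    theta, grad = history[-1]
    prev_theta = history[-2][0] if len(history) > 1 else theta
    return theta - lr * grad + gamma * (theta - prev_theta)

def optimize(update_rule, grad_fn, theta0, num_steps, **metaparams):
    # General first-order scheme: the update rule may inspect the
    # entire history of iterates and gradients, plus metaparameters.
    history = [(theta0, grad_fn(theta0))]
    for _ in range(num_steps):
        theta = update_rule(history, **metaparams)
        history.append((theta, grad_fn(theta)))
    return history[-1][0]

# Minimize the toy loss L(theta) = theta^2 with both update rules.
grad = lambda th: 2.0 * th
print(optimize(sgd_update, grad, 5.0, 100, lr=0.1))
print(optimize(momentum_update, grad, 5.0, 100, lr=0.1, gamma=0.5))
```

Both runs converge toward the minimizer at 0; the point of the sketch is only that SGD and Momentum differ solely in the update rule and its free metaparameters, not in the surrounding loop.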
The difference between optimizers is entirely captured by the choice of update rule U and metaparameters φ. Thus, in analogy to (overloaded) function declarations in C++, we identify optimizers by an update rule “signature”: the update rule name together with its free metaparameter arguments. Momentum(ηₜ) is not the same optimizer as Momentum(ηₜ, γ), because the latter has two free metaparameters while the former only has one. The two concerns of a practitioner are choosing U and tuning φ. We consider each in turn.
3.1 The practice of choosing metaparameters
In the theory of convex optimization, metaparameter choices are well-understood for the most common methods on many classes of convex functions [Rockafellar, 1970, Nesterov, 2018, Boyd and Vandenberghe, 2004].
For example, for smooth convex loss functions, the learning rate of gradient descent should be the inverse of the smoothness constant.
This stands in sharp contrast to non-convex neural network optimization, for which the interactions between metaparameters and loss function classes are not well understood.
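As a toy check of the 1/L prescription above (our illustration, not from the paper), gradient descent with step size 1/L on an L-smooth quadratic contracts every coordinate toward the minimizer:

```python
# L(theta) = 0.5 * sum_i h_i * theta_i^2 is L-smooth with L = max_i h_i.
curvatures = [0.5, 2.0, 10.0]   # eigenvalues of the (diagonal) Hessian
L = max(curvatures)             # smoothness constant
theta = [1.0, 1.0, 1.0]
for _ in range(200):
    grads = [h * t for h, t in zip(curvatures, theta)]
    theta = [t - (1.0 / L) * g for t, g in zip(theta, grads)]  # lr = 1/L
print(theta)  # all coordinates approach the minimizer at 0
```

Each coordinate shrinks by the factor (1 − hᵢ/L) per step, so convergence is guaranteed but slowest along the flattest direction; no comparable closed-form prescription exists for neural network losses.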
Many of the most popular neural network optimization methods have a panoply of metaparameters whose provenance is sometimes accidental and whose importance is disputed. Despite Adam’s ε metaparameter being introduced solely to prevent division by zero and often being ignored in practice,¹ some practitioners have nonetheless found it helpful to tune ε (see Section 2). If Adam is interpreted as an empirical, diagonal approximation to natural gradient descent [Kingma and Ba, 2015], then ε can be viewed as a multi-purpose damping term whose role is to improve the conditioning of the Fisher, in analogy to the approximate second-order method considered by Becker and Le Cun. We can also view ε as setting a trust region radius [Martens and Grosse, 2015, Adolphs et al., 2019] and as controlling an interpolation between momentum and diagonal natural gradient descent, by either diminishing or increasing the effect of the second-moment estimate on the update direction. Under either interpretation, the best value of ε will be problem-dependent and likely to benefit from tuning.

¹ The Keras documentation previously referred to ε as a “fuzz factor” and now doesn’t mention it at all (https://git.io/no-epsilon).
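To make the interpolation concrete, here is a numeric sketch (our illustration, with made-up first- and second-moment estimates m̂ and v̂) of an Adam-style update direction m̂/(√v̂ + ε): with tiny ε the per-coordinate steps are normalized by the second-moment estimate, while with huge ε the denominator is nearly constant and the direction becomes proportional to m̂, as in momentum:

```python
import math

# Hypothetical first/second-moment estimates for two coordinates.
m_hat = [0.01, 1.0]
v_hat = [1e-4, 1.0]

def adam_direction(eps):
    # Adam-style per-coordinate update direction: m_hat / (sqrt(v_hat) + eps).
    return [m / (math.sqrt(v) + eps) for m, v in zip(m_hat, v_hat)]

small = adam_direction(1e-8)  # per-coordinate normalization: both ~1.0
large = adam_direction(1e8)   # denominator ~eps: proportional to m_hat
print(small)
print(large[1] / large[0])    # ~100, matching m_hat[1] / m_hat[0]
```

So varying ε over many orders of magnitude genuinely changes the geometry of the update, which is one way to see why its best value should be problem-dependent.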
Since the roles of optimizer metaparameters on neural network loss functions are not well-understood, most practitioners treat metaparameters as nuisance parameters and optimize them away for each new workload via a tuning protocol. These protocols vary widely, but all contemporary protocols require a hand-designed search space as input, including partially automated procedures using Bayesian optimization [Snoek et al., 2012]. Good search spaces are hard-won treasures: they tend to be refined over many experiments and across many workloads, representing the sum total of a practitioner’s experience. Even given a search space, the best way to tune is still an open research question that depends on the computational budget of the user. Grid search is inefficient [Bergstra and Bengio, 2012] and random search and Bayesian optimization algorithms tend to use priors oblivious to the meanings of different metaparameters [Snoek et al., 2012]. For budgets that allow dozens or hundreds of trials and multiple rounds of experiments, the current state of the art for tuning metaparameters is to iteratively use human judgment to design a search space and use some black-box algorithm to tune within that space.
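As a concrete miniature of such a protocol (hypothetical names and search space, not the paper's tooling), here is random search over a hand-designed, log-uniform search space, where `run_trial` stands in for training and evaluating one configuration:

```python
import math
import random

def sample_config(space, rng):
    # Draw one trial from a hand-designed search space; each scalar is
    # sampled log-uniformly between its bounds.
    return {name: 10 ** rng.uniform(math.log10(lo), math.log10(hi))
            for name, (lo, hi) in space.items()}

def tune(space, run_trial, budget, seed=0):
    # Run `budget` trials and return the (config, validation error)
    # pair with the lowest validation error.
    rng = random.Random(seed)
    trials = []
    for _ in range(budget):
        cfg = sample_config(space, rng)
        trials.append((cfg, run_trial(cfg)))
    return min(trials, key=lambda t: t[1])

# Toy "workload": validation error is minimized at lr = 1e-2.
def run_trial(cfg):
    return (math.log10(cfg["lr"]) + 2.0) ** 2

space = {"lr": (1e-5, 1e0)}   # five orders of magnitude
best_cfg, best_err = tune(space, run_trial, budget=50)
print(best_cfg, best_err)
```

In the iterative protocol described above, a practitioner would inspect where the best trials land and then shrink or shift `space` before the next round, which is exactly the human-in-the-loop step that no fully automatic procedure currently replaces.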
3.2 The taxonomy of first-order methods and choosing the update rule
The basic observation of this section is that some optimizers can approximately simulate others (i.e., optimizer A might be able to approximately simulate the trajectory of optimizer B for any particular setting of B’s metaparameters). This is important knowledge because, as a metaparameter tuning protocol approaches optimality, a more expressive optimizer can never underperform any of its specializations. To capture these concepts more precisely, we define the following inclusion relationship between optimizers, which captures the idea that one optimizer can approximate another arbitrarily well.
Definition (Inclusion relationship).
Let U_A and U_B be update rules for use in a first-order optimization method, with metaparameter spaces Φ_A and Φ_B. U_A is a subset, or specialization, of U_B if for all φ_A ∈ Φ_A there exists a sequence φ_B⁽¹⁾, φ_B⁽²⁾, … in Φ_B such that U_B(H; φ_B⁽ⁱ⁾) → U_A(H; φ_A) as i → ∞, for all t and information sets H = Hₜ.
This is denoted U_A ⊆ U_B, with equality iff U_A ⊆ U_B and U_B ⊆ U_A.
Evidently SGD ⊆ Momentum, since Momentum with γ = 0 is exactly SGD. Many well-known optimizers fall naturally into this taxonomy. In particular, we consider RMSProp with momentum [Tieleman and Hinton, 2012], Adam [Kingma and Ba, 2015] and NAdam [Dozat, 2016] in the appendix and show the inclusions SGD ⊆ Momentum ⊆ RMSProp, Momentum ⊆ Adam, and SGD ⊆ Nesterov ⊆ NAdam.²

² The transformation that generalizes Momentum into RMSProp can also be applied to Nesterov. So, in the appendix we define RMSterov, a novel variant satisfying Nesterov ⊆ RMSterov.
If two optimizers have an inclusion relationship, the more general optimizer can never be worse with respect to any metric of interest, provided the metaparameters are sufficiently tuned to optimize that metric. Optimally-tuned Momentum cannot underperform optimally-tuned SGD, because setting γ = 0 in Momentum recovers SGD. However, optimizers with more metaparameters might be more expensive to tune, so we should have a theoretical or experimental reason for using (or creating) a more general optimizer. For example, Momentum improves local convergence rates over SGD on twice-differentiable functions that are smooth and strongly convex [Polyak, 1964], and Nesterov has globally optimal convergence rates within the class of smooth and strongly convex functions [Nesterov, 1983, 2018].
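To see concretely that the "Adam can simulate Momentum when the denominator term grows large" claim from Section 1 is not vacuous, the following numeric sketch (ours; bias correction omitted for a cleaner limiting argument) shows Adam's iterates approaching Momentum's when β₁ plays the role of γ, ε is huge, and the learning rate is rescaled by ε/(1 − β₁):

```python
def momentum_run(grads, lr, gamma):
    # Heavy-ball momentum applied to a fixed gradient sequence.
    theta, buf = 0.0, 0.0
    for g in grads:
        buf = gamma * buf + g
        theta -= lr * buf
    return theta

def adam_run(grads, lr, beta1, beta2, eps):
    # Adam without bias correction; m is (1 - beta1) times the
    # momentum buffer when beta1 == gamma.
    theta, m, v = 0.0, 0.0, 0.0
    for g in grads:
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        theta -= lr * m / (v ** 0.5 + eps)
    return theta

grads = [0.3, -1.2, 0.5, 2.0, -0.7]
lr, gamma, eps = 0.01, 0.9, 1e12

ref = momentum_run(grads, lr, gamma)
# As eps grows, sqrt(v) becomes negligible in the denominator and the
# Adam step reduces to lr * buf once lr is rescaled by eps / (1 - beta1).
approx = adam_run(grads, lr * eps / (1 - gamma),
                  beta1=gamma, beta2=0.999, eps=eps)
print(ref, approx)  # nearly identical iterates
```

The metaparameters realizing the inclusion live at an extreme corner of the space (enormous ε, enormous learning rate), which is exactly why a tuning protocol with a poorly chosen search space can fail to find them.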
At first glance, the taxonomy of optimizer inclusions appears to resolve many optimizer comparison questions. However, for a deep learning practitioner, there is no guarantee that the inclusion hierarchy is at all meaningful in practice. For example, the metaparameters that allow Adam to match or outperform Momentum might not be easily accessible. They might exist only in the limit of very large values, or be so difficult to find that only practitioners with huge computational budgets can hope to discover them. Indeed, empirical studies and conventional wisdom hold that the inclusion hierarchy does not predict optimizer performance for many practical workloads [Wilson et al., 2017, Balles and Hennig, 2017, Schneider et al., 2019]. Either these experimental investigations are too limited or the taxonomy of this section is of limited practical interest and provides no guidance about which optimizer to use on a real workload. In the following section we attempt to answer this question experimentally, and show that these inclusion relationships are meaningful in practice.
4 Experiments

An empirical comparison of optimizers should aim to inform a careful practitioner. Accordingly, we model our protocol on a practitioner that is allowed to vary all optimization metaparameters for each optimizer (e.g. the learning rate α, β₁, β₂, and ε for Adam) in addition to a parameterized learning rate decay schedule, in contrast to studies that fix a subset of the optimization metaparameters to their default values [e.g. Wilson et al., 2017, Schneider et al., 2019]. There is no standard method for selecting the values of these metaparameters, but most practitioners tune at least a subset of the optimization metaparameters by running a set of trials to maximize performance over the validation set. In our experiments, we run tens to hundreds of individual trials per workload. Given the variety of workloads we consider, this trial budget covers a wide range of computational budgets.
Selecting the metaparameter search space for each optimizer is a key methodological choice for any empirical comparison of optimizers. Prior studies have attempted to treat each optimizer fairly by using the same search space for all optimizers [e.g. Wilson et al., 2017, Schneider et al., 2019]. However, this requires the assumption that similarly-named metaparameters should take similar values between optimizers, which is not always true. For example, Momentum and Nesterov both have similar-looking momentum and learning rate metaparameters, but Nesterov tolerates larger values of its momentum metaparameter [Sutskever et al., 2013], so any fixed search space will likely be more favorable for one of the two. The situation worsens with less closely related optimizers, and designing a search space that is equally appropriate for optimizers with incommensurate metaparameters is almost impossible. Despite coming with its own set of challenges, it is most informative to compare optimizers assuming the practitioner is allowed to tune metaparameters for different optimizers independently by way of optimizer-specific search spaces.
In our experiments, we chose the search space for each optimizer by running an initial set of experiments over a relatively large search space. In a typical case, we ran a single set of initial trials per optimizer to select the final search space. However, in some cases we chose the initial search space poorly, so we ran another set of experiments to select the final search space. The effort required to choose each search space cannot simply be quantified by the number of initial trials; the provenance of each search space is difficult to trace exactly. In some cases, our search spaces were informed by published results or prior experience with particular models and optimizers. We validated our search spaces by checking that the optimal metaparameter values were away from the search space boundaries for all optimizers in all experiments (see Figure 5 in Appendix E). We provide our final search spaces for all experiments in Appendix D. The fact that our final error rates compare favorably to prior published results – including reaching state-of-the-art for our particular configuration of ResNet-50 on ImageNet (see Section 4.2) – supports our claim that our methodology is highly competitive with expert tuning procedures.
4.1 Overview of Workloads and Experimental Details
Table 1: Summary of workloads.

| Model | Dataset | Evaluation target | Batch size | Budget |
| --- | --- | --- | --- | --- |
| Simple CNN | Fashion MNIST | 6.6% | 256 | 10k steps |
| LSTM | War and Peace | – | 50 | 200 epochs |
| Transformer | LM1B | 3.45 (cross entropy) | 256 | 750k steps |
We investigated the relative performance of optimizers across a variety of image classification and language modeling tasks. For image classification, we trained a simple convolutional neural network (Simple CNN) on Fashion MNIST [Xiao et al., 2017]; ResNet-32 [He et al., 2016a] on CIFAR-10 [Krizhevsky, 2009]; a CNN on CIFAR-100; VGG-16 [Simonyan and Zisserman, 2014] on CIFAR-10; and ResNet-50 on ImageNet [Russakovsky et al., 2015]. For language modeling, we trained a 2-layer LSTM model [Hochreiter and Schmidhuber, 1997] on Tolstoy’s War and Peace; and a Transformer [Vaswani et al., 2017] on LM1B [Chelba et al., 2014]. We used a linear learning rate decay schedule parameterized the same way as Shallue et al. for all workloads. We used a fixed batch size and a fixed budget of training steps for each workload, independent of the optimizer. Table 1 summarizes these workloads and Appendix B provides the full details.
Given a hypercube-shaped search space, our tuning protocol sought to model a practitioner with a fixed budget of trials trying to achieve the best outcome using tens of feasible trials (the exact number depending on the workload).³ A feasible trial is any trial that achieves finite training loss. We used quasi-random uniform search [Bousquet et al., 2017], and continued the search until we obtained a fixed number of feasible trials. From those trials we considered two statistics. The first, in order to characterize the best outcome, is a metric of interest (e.g. test accuracy) corresponding to the trial achieving the optimum of some other metric (e.g. validation accuracy). The second, in order to characterize the speed of training, is the number of steps required to reach a fixed validation target, conditional on at least one trial in the search having reached that target. We chose the target for each workload based on initial experiments and known values from the literature (see Table 1). We estimated means and uncertainties using the bootstrap procedure described in Appendix C.

³ Although we used a budget of tens of independent tuning trials throughout this section, in retrospect the best validation error across tuning trials converged quite quickly for our final search spaces, producing good results with fewer than 20 trials in many cases. See Figures 6–8 in Appendix E.
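The first statistic ("report metric X at the trial that optimizes metric Y") can be computed as in this sketch; the trial records and field names are hypothetical:

```python
def report_at_best(trials, select_key="val_error", report_key="test_error"):
    # Keep only feasible trials (finite, non-NaN training loss), pick
    # the trial that optimizes the selection metric, and report the
    # other metric at that trial.
    feasible = [t for t in trials
                if t["train_loss"] == t["train_loss"]        # not NaN
                and t["train_loss"] != float("inf")]
    best = min(feasible, key=lambda t: t[select_key])
    return best[report_key]

trials = [
    {"train_loss": 0.5, "val_error": 0.21, "test_error": 0.22},
    {"train_loss": 0.4, "val_error": 0.18, "test_error": 0.20},
    {"train_loss": float("inf"), "val_error": 0.90, "test_error": 0.90},
]
print(report_at_best(trials))  # 0.2: test error of the best-validation trial
```

Selecting by validation error but reporting test error keeps the reported number honest about the protocol a practitioner would actually follow.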
4.2 Inclusion relationships matter in practice
Figure 1 shows the final predictive performance of six optimizers on four different workloads after tuning metaparameters to minimize validation error. Regardless of whether we compare final validation error or test error, the inclusion relationships hold in all cases – a more general optimizer never underperforms any of its specializations within the error bars. Similar results hold for training error (see Figure 9 in Appendix E). Training speed is also an important consideration, and Figure 2 demonstrates that the inclusion relationships also hold within error bars when we compare the number of steps required to reach a target validation error. Moreover, these results confirming the relevance of optimizer inclusion relationships do not depend on the exact step budgets or error targets we chose (see Figure 10 in Appendix E), although large changes to these values would require new experiments.
Of course, just because a more general optimizer is no worse than any of its specializations doesn’t mean the choice of optimizer makes a large difference on all workloads. For some workloads in Figures 1 and 2, all optimizers perform about the same, while other workloads have a clear ranking or even dramatic differences. For example, the choice of optimizer seems to make little difference for ResNet-32 on CIFAR-10; all optimizers achieve similar predictive performance and training speed. On the other hand, Transformer on LM1B exhibits a clear ranking in terms of predictive performance and training speed. For this workload, Adam needs roughly half the steps that Momentum requires to reach our target error, and, although not shown in Figure 2, roughly six times fewer steps to get the same result as SGD. These differences are clearly significant enough to matter to a practitioner, and highlight the practical importance of choosing the right optimizer for some workloads.
The most general optimizers we considered were RMSProp, Adam, and NAdam, which do not include each other as special cases, and whose relative performance is not predicted by inclusion relationships. Across the workloads we considered, none of these optimizers emerged as the clear winner, although Adam and NAdam generally seemed to have an edge over RMSProp. For all of these optimizers, we sometimes had to set the ε parameter orders of magnitude larger than its default value in order to get good results. In particular, we achieved a validation accuracy of 77.1% for ResNet-50 on ImageNet using NAdam with a non-default ε, a result that exceeds the 76.5% achieved by Goyal et al. using Momentum. Across just these 4 workloads, the range of the optimal values of ε spanned 10 orders of magnitude. Faced with this reality, a practitioner might reasonably doubt their ability to find a value near the optimum. However, we found that we could reasonably expect to find a suitable value with only tens of trials. When tuning ε for Adam or NAdam over a large range, we found it more efficient to search over log₁₀(ε) instead of ε; see Appendix D for more details.
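Searching over log₁₀ ε rather than ε simply means sampling ε log-uniformly, so that each order of magnitude in the range receives equal coverage; a sketch under assumed bounds (the ten-decade range below is illustrative, not the paper's exact search space):

```python
import math
import random

def sample_eps(lo, hi, rng):
    # Log-uniform sample: uniform in log10(eps) between the bounds.
    return 10 ** rng.uniform(math.log10(lo), math.log10(hi))

rng = random.Random(0)
# Assumed search range spanning ten orders of magnitude.
samples = [sample_eps(1e-10, 1e0, rng) for _ in range(10000)]

# Each decade receives roughly 10% of the samples, e.g. [1e-5, 1e-4):
frac = sum(1e-5 <= s < 1e-4 for s in samples) / len(samples)
print(round(frac, 2))  # close to 0.10
```

Sampling ε uniformly on its raw scale would instead concentrate almost all trials in the top decade, effectively never testing small values.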
4.3 Reconciling disagreements with previous work
In order to confirm that differences in metaparameter tuning protocols explain the differences between our conclusions and those of Wilson et al. and Schneider et al., we reproduced a representative subset of their results and then inverted, or at least collapsed, the ranking over optimizers just by expanding the metaparameter search space.
The left pane of Figure 3 shows our experiments on VGG on CIFAR-10 using code released by Wilson et al. When we match their protocol and perform their grid search over the initial learning rate with no other tuning, we reproduce their original result showing worse test error for RMSProp and Adam. However, when we tune the momentum parameter and ε with random search, all four optimizers reach nearly identical test error rates.⁴ With our learning rate schedule search space, merely tuning the learning rate schedule was enough to make all optimizers reach the same test error within error bars. When we additionally tuned the optimization metaparameters and weight decay in our setup, we again obtained similar results for all optimizers, removing any evidence that the inclusion relationships might be violated in practice.

⁴ Wilson et al. selected trials to minimize the training loss and then reported test set results. As Figure 3 shows, removing this somewhat non-standard choice by tuning on a validation set and reporting test set results does not change the conclusions.
Figure 4 shows our results with different tuning protocols for a CNN on CIFAR-100 and an LSTM language model trained on War and Peace, matching the experiments in Schneider et al. As reported by Schneider et al., if we only tune the learning rate without tuning the decay schedule or other optimizer metaparameters, Adam does worse than Momentum for the CNN, and SGD performs slightly better than Adam and Momentum on the War and Peace dataset, although Schneider et al. found a larger advantage for SGD. However, once we tune all the optimizer metaparameters, Adam does better than Momentum, which does better than SGD, as predicted by the inclusion relationships.
We conclude that the reason both Schneider et al. and Wilson et al. observed a ranking that, at first glance, contradicts the inclusion relationships is that they were not tuning enough of the metaparameters. If we recast their results in our terminology, where Adam with default ε is a different optimizer than Adam with tuned ε, then there is no contradiction with our results, and it becomes clear that they did not consider the most interesting form of Adam for practitioners.
Inspired by the recent efforts of Wilson et al. and Schneider et al., we set out to provide a detailed empirical characterization of the optimizer selection process in deep learning. Our central finding is that inclusion relationships between optimizers are meaningful in practice. When tuning all available metaparameters under a realistic protocol at scales common in deep learning, we find that more general optimizers never underperform their special cases. In particular, we found that RMSProp, Adam, and NAdam never underperformed SGD, Nesterov, or Momentum under our most exhaustive tuning protocol. We did not find consistent trends when comparing optimizers that could not approximate each other. We also found workloads for which there was no statistically significant separation in the optimizer ranking.
Our experiments have some important limitations and we should be careful not to overgeneralize from our results. The first major caveat is that we did not measure the effects of varying the batch size. Recent empirical work [Shallue et al., 2019, Zhang et al., 2019] has shown that increasing the batch size can increase the gaps between training times for different optimizers, with the gap from SGD to Momentum [Shallue et al., 2019] and from Momentum to Adam [Zhang et al., 2019] increasing with the batch size. Nevertheless, we strongly suspect that the inclusion relations would be predictive at any batch size under a tuning protocol similar to the one we used. The second important caveat of our results is that they inevitably depend on the tuning protocol and workloads that we considered. Although we made every attempt to conduct realistic experiments, we should only expect our detailed findings to hold for similar workloads under similar protocols, namely uniform quasi-random tuning for tens to hundreds of trials, over hypercube search spaces, and with our specific learning rate schedule parameterization. Nevertheless, these caveats reinforce our central point: all empirical comparisons of neural network optimizers depend heavily on the metaparameter tuning protocol, perhaps far more than we are used to with comparisons between model architectures.
If we were to extract “best practices” from our findings, then we suggest the following. If we can afford tens or more runs of our code, we should tune all of the metaparameters of the popular adaptive gradient methods. Just because two metaparameters have a similar role in two different update rules doesn’t mean they should take similar values; optimization metaparameters tend to be coupled, and the optimal value for one may depend on how the others are set. Our results also confirm that the optimal value of Adam’s ε is problem-dependent, so the onus is on empirical studies that fix ε to defend that choice. Finally, we should be skeptical of empirical comparisons of optimizers in papers, especially if an optimizer underperforms any of its specializations. When we do inevitably compare optimizers, we should report search spaces and highlight which metaparameters were tuned when interpreting results.
- Adolphs et al.  Leonard Adolphs, Jonas Kohler, and Aurelien Lucchi. Ellipsoidal trust region methods and the marginal value of Hessian information for neural network training. arXiv preprint arXiv:1905.09201, 2019.
- Aitchison  Laurence Aitchison. A unified theory of adaptive stochastic gradient descent as Bayesian filtering. arXiv preprint arXiv:1807.07540, 2018.
- Balles and Hennig  Lukas Balles and Philipp Hennig. Dissecting Adam: The sign, magnitude and variance of stochastic gradients. arXiv preprint arXiv:1705.07774, 2017.
- Becker and Le Cun  S Becker and Y Le Cun. Improving the convergence of the backpropagation learning with second order methods. Morgan Kaufmann, San Mateo, CA, 1988.
- Bergstra and Bengio  James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
- Bernstein et al.  Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Anima Anandkumar. signSGD: Compressed optimisation for non-convex problems. arXiv preprint arXiv:1802.04434, 2018.
- Bottou  Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
- Bousquet et al.  Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyper-parameters: no random, no cry. arXiv preprint arXiv:1706.03200, 2017.
- Boyd and Vandenberghe  Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
- Chelba et al.  Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. In Conference of the International Speech Communication Association, 2014.
- Chen et al.  Jing Chen, Liu Zhao, Xue Qiao, and Yang Fu. NAMSG: An efficient method for training neural networks. arXiv preprint arXiv:1905.01422, 2019.
- Chen et al.  Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. arXiv preprint arXiv:1808.02941, 2018.
- Cubuk et al.  Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation policies from data.
- De et al.  Soham De, Anirbit Mukherjee, and Enayat Ullah. Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration. arXiv preprint arXiv:1807.06766, 2018.
- Dozat  Timothy Dozat. Incorporating Nesterov momentum into Adam. In ICLR Workshops, 2016.
- Goyal et al.  Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
- He et al. [2016a] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, pages 770–778. IEEE, 2016a.
- He et al. [2016b] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016b.
- Hessel et al.  M Hessel, J Modayil, and H van Hasselt. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017.
- Hochreiter and Schmidhuber  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
- Hoffer et al.  Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pages 1731–1741, 2017.
- Ioffe and Szegedy  Sergey Ioffe and Christian Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
- Keskar and Socher  Nitish Shirish Keskar and Richard Socher. Improving generalization performance by switching from Adam to SGD. arXiv preprint arXiv:1712.07628, 2017.
- Kingma and Ba  Diederik P Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In ICLR, 2015.
- Krizhevsky  Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. URL http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
- Liu et al.  Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
- Liu et al.  Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
- Loshchilov and Hutter  Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101, 2017.
- Luo et al.  Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive gradient methods with dynamic bound of learning rate. arXiv preprint arXiv:1902.09843, 2019.
- Ma and Yarats  Jerry Ma and Denis Yarats. Quasi-hyperbolic momentum and Adam for deep learning. arXiv preprint arXiv:1810.06801, 2018.
- Martens and Grosse  James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. arXiv preprint arXiv:1503.05671, 2015.
- Nesterov  Yurii Nesterov. Lectures on convex optimization, volume 137. Springer, 2018.
- Nesterov  Yurii E Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). In Doklady Akademii Nauk SSSR, volume 269, pages 543–547, 1983.
- Polyak  Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
- Reddi et al.  Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In ICLR, 2019.
- Robbins and Monro  Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
- Rockafellar  R Tyrrell Rockafellar. Convex analysis, volume 28. Princeton university press, 1970.
- Russakovsky et al.  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
- Schneider et al.  Frank Schneider, Lukas Balles, and Philipp Hennig. DeepOBS: a deep learning optimizer benchmark suite. arXiv preprint arXiv:1903.05499, 2019.
- Shallue et al.  Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl. Measuring the effects of data parallelism on neural network training. Journal of Machine Learning Research, 20(112):1–49, 2019.
- Simonyan and Zisserman  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Smith  Leslie N. Smith. A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.
- Snoek et al.  Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, NIPS’12, pages 2951–2959, USA, 2012. Curran Associates Inc.
- Springenberg et al.  Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
- Sutskever et al.  Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
- Szegedy et al.  Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
- Tan and Le  Mingxing Tan and Quoc V Le. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
- Tan et al.  Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820–2828, 2019.
- Tieleman and Hinton  Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31, 2012.
- Vaswani et al.  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
- Wilson et al.  Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems 30, pages 4148–4158. Curran Associates, Inc., 2017.
- Xiao et al.  Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
- Zaheer et al.  Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive methods for nonconvex optimization. In Advances in Neural Information Processing Systems 31, pages 9793–9803. Curran Associates, Inc., 2018.
- Zhang et al.  Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, and Roger Grosse. Which algorithmic choices matter at which batch sizes? Insights from a noisy quadratic model. arXiv preprint arXiv:1907.04164, 2019.
- Zhang and Mitliagkas  Jian Zhang and Ioannis Mitliagkas. Yellowfin and the art of momentum tuning. arXiv preprint arXiv:1706.03471, 2017.
- Zhou et al.  Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Weinan Zhang, and Yong Yu. Adashift: Decorrelation and convergence of adaptive learning rate methods. arXiv preprint arXiv:1810.00143, 2018.
- Zou and Shen  Fangyu Zou and Li Shen. On the convergence of weighted AdaGrad with momentum for training deep neural networks. arXiv preprint arXiv:1808.03408, 2018.
- Zou et al.  Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of Adam and RMSProp. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11127–11135, 2019.
Appendix A Optimization schemes and their inclusions
A.1 Optimization schemes
This appendix summarizes the update rules for the optimizers we consider in this work. We assume the update rules as implemented in TensorFlow; in particular, our RMSProp includes momentum.
A.2 Optimization inclusions: Which optimizers can implement other optimizers?
Momentum can exactly implement SGD
Setting $\gamma = 0$ in the Momentum update $v_{t+1} = \gamma v_t + \nabla\ell(\theta_t)$, $\theta_{t+1} = \theta_t - \eta_t v_{t+1}$ gives $\theta_{t+1} = \theta_t - \eta_t \nabla\ell(\theta_t)$, which is exactly SGD. Thus $\text{Momentum}(\eta_t, \gamma = 0) = \text{SGD}(\eta_t)$, so $\text{SGD} \subseteq \text{Momentum}$.
Nesterov can exactly implement SGD
Setting $\gamma = 0$ in the Nesterov update $v_{t+1} = \gamma v_t + \nabla\ell(\theta_t)$, $\theta_{t+1} = \theta_t - \eta_t\left(\gamma v_{t+1} + \nabla\ell(\theta_t)\right)$ gives $\theta_{t+1} = \theta_t - \eta_t \nabla\ell(\theta_t)$, which is exactly SGD. Thus $\text{Nesterov}(\eta_t, \gamma = 0) = \text{SGD}(\eta_t)$, so $\text{SGD} \subseteq \text{Nesterov}$.
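The exact inclusions above are easy to verify numerically. The following toy sketch (a hand-rolled update on a one-dimensional quadratic, not the paper's code) checks that Momentum with γ = 0 takes exactly the same steps as SGD:

```python
def sgd_step(theta, grad, lr):
    # SGD: theta_{t+1} = theta_t - lr * grad
    return theta - lr * grad

def momentum_step(theta, v, grad, lr, gamma):
    # Momentum: v_{t+1} = gamma * v_t + grad ; theta_{t+1} = theta_t - lr * v_{t+1}
    v_new = gamma * v + grad
    return theta - lr * v_new, v_new

grad_fn = lambda th: 2.0 * th  # gradient of the toy loss f(theta) = theta^2
theta_sgd = theta_mom = 1.0
v = 0.0
for _ in range(10):
    theta_sgd = sgd_step(theta_sgd, grad_fn(theta_sgd), 0.1)
    theta_mom, v = momentum_step(theta_mom, v, grad_fn(theta_mom), 0.1, gamma=0.0)

# With gamma = 0 the two trajectories are bitwise identical.
assert theta_sgd == theta_mom
```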
RMSProp with momentum can exactly implement Momentum
Consider RMSProp with $\rho = 1$, $\varepsilon = 0$, and initial accumulator $\nu_0 = 1$, so that $\nu_{t+1} = \rho \nu_t + (1-\rho)\nabla\ell(\theta_t)^2 = 1$ for all $t$ and the update becomes
$$v_{t+1} = \gamma v_t + \eta\,\nabla\ell(\theta_t), \qquad \theta_{t+1} = \theta_t - v_{t+1}.$$
For a constant learning rate $\eta$, this is equivalent to Momentum, since $v_t = \eta\, m_t$, where $m_{t+1} = \gamma m_t + \nabla\ell(\theta_t)$ is the Momentum buffer, and the parameter update is $\theta_{t+1} = \theta_t - \eta\, m_{t+1}$. Thus $\text{RMSProp}(\eta, \rho = 1, \gamma, \varepsilon = 0, \nu_0 = 1) = \text{Momentum}(\eta, \gamma)$, so $\text{Momentum} \subseteq \text{RMSProp}$.
RMSterov can exactly implement Nesterov
Consider RMSterov with $\rho = 1$, $\varepsilon = 0$, and $\nu_0 = 1$, so that $\nu_t = 1$ for all $t$ and the accumulator leaves the gradient unscaled. For a constant learning rate $\eta$, the resulting update is equivalent to Nesterov with momentum $\gamma$ and learning rate $\eta$, by the same argument as for RMSProp and Momentum. Thus $\text{RMSterov}(\eta, \rho = 1, \gamma, \varepsilon = 0, \nu_0 = 1) = \text{Nesterov}(\eta, \gamma)$, so $\text{Nesterov} \subseteq \text{RMSterov}$.
Adam can approximate Momentum for large $\varepsilon$
Consider Adam's update with first-moment estimate $m_{t+1} = \beta_1 m_t + (1-\beta_1)\nabla\ell(\theta_t)$, second-moment estimate $v_{t+1} = \beta_2 v_t + (1-\beta_2)\nabla\ell(\theta_t)^2$, bias corrections $\hat{m}_{t+1} = m_{t+1}/(1-\beta_1^{t+1})$ and $\hat{v}_{t+1} = v_{t+1}/(1-\beta_2^{t+1})$, and parameter update $\theta_{t+1} = \theta_t - \eta_t\, \hat{m}_{t+1}/(\sqrt{\hat{v}_{t+1}} + \varepsilon)$. If $\varepsilon$ is large, so that $\sqrt{\hat{v}_{t+1}} + \varepsilon \approx \varepsilon$, then
$$\theta_{t+1} \approx \theta_t - \frac{\eta_t}{\varepsilon}\,\hat{m}_{t+1}.$$
This is equivalent to Momentum, since $u_{t+1} = m_{t+1}/(1-\beta_1)$ satisfies the Momentum recursion $u_{t+1} = \beta_1 u_t + \nabla\ell(\theta_t)$, making the update Momentum with momentum $\gamma = \beta_1$ and learning rate $\eta_t(1-\beta_1)/\big(\varepsilon(1-\beta_1^{t+1})\big)$. Thus, as $\varepsilon \to \infty$ with $\eta_t/\varepsilon$ held fixed, Adam approaches Momentum, so Momentum is approximately a special case of Adam.
NAdam can approximate Nesterov for large $\varepsilon$
The same argument applies to NAdam, which replaces Adam's bias-corrected first moment with a Nesterov-style estimate. If $\varepsilon$ is large, so that $\sqrt{\hat{v}_{t+1}} + \varepsilon \approx \varepsilon$, then $\theta_{t+1} \approx \theta_t - (\eta_t/\varepsilon)\,\hat{m}_{t+1}$. This is equivalent to Nesterov with momentum $\gamma = \beta_1$ and a learning rate proportional to $\eta_t/\varepsilon$, so Nesterov is approximately a special case of NAdam.
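The large-ε approximation can also be sanity-checked numerically. In the hypothetical sketch below (a toy gradient sequence, not the paper's code), Adam with a huge ε and a correspondingly rescaled learning rate tracks a Momentum run with γ = β₁ and a per-step learning rate of η(1−β₁)/(ε(1−β₁ᵗ)):

```python
import math

def run_adam(grads, lr, beta1, beta2, eps):
    """Standard bias-corrected Adam on a fixed sequence of gradients."""
    theta, m, v = 0.0, 0.0, 0.0
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta

def run_momentum(grads, lrs, gamma):
    """Heavy-ball Momentum with a per-step learning rate schedule."""
    theta, u = 0.0, 0.0
    for g, lr in zip(grads, lrs):
        u = gamma * u + g
        theta -= lr * u
    return theta

grads = [0.5, -1.2, 0.3, 0.8, -0.4]   # toy gradient sequence
beta1, eps, lr = 0.9, 1e12, 1e9        # huge eps; lr scaled so steps stay O(1e-3)
adam_theta = run_adam(grads, lr, beta1, 0.999, eps)

# Matching Momentum: gamma = beta1, lr_t = lr * (1 - beta1) / (eps * (1 - beta1**t)).
lrs = [lr * (1 - beta1) / (eps * (1 - beta1 ** t)) for t in range(1, len(grads) + 1)]
mom_theta = run_momentum(grads, lrs, beta1)

assert abs(adam_theta - mom_theta) < 1e-9
```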
Appendix B Workload details
This section details the datasets and models summarized in Table 1.
B.1 Dataset descriptions
For Fashion MNIST, CIFAR-10, ImageNet, and LM1B, our setup was identical to Shallue et al. except for the image pre-processing details described below. For War and Peace, our setup was identical to the “Tolstoi” dataset of Schneider et al.
We pre-processed images by subtracting the average value across all pixels and channels and dividing by the standard deviation (we used the TensorFlow op tf.image.per_image_standardization). For experiments with the ResNet-32 and CNN models, we followed the standard data augmentation scheme used in He et al. [2016a]: 4 pixels padded on each side, with a single random crop taken from the padded image or its horizontal reflection. We did not use random cropping for experiments with VGG, for consistency with Wilson et al.
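A minimal NumPy rendering of this preprocessing pipeline, assuming zero padding and TensorFlow's documented lower bound on the per-image standard deviation (the image size below is illustrative):

```python
import numpy as np

def per_image_standardization(image):
    """NumPy analogue of tf.image.per_image_standardization: subtract the
    mean over all pixels and channels and divide by the standard deviation.
    TensorFlow lower-bounds the std by 1/sqrt(num_elements) to avoid
    dividing by zero on constant images."""
    image = image.astype(np.float64)
    adjusted_std = max(image.std(), 1.0 / np.sqrt(image.size))
    return (image - image.mean()) / adjusted_std

def pad_and_random_crop(image, rng, pad=4):
    """CIFAR-style augmentation: zero-pad 4 pixels per side, take a random
    crop at the original size, and flip horizontally with probability 0.5."""
    h, w, _ = image.shape
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    crop = padded[top:top + h, left:left + w]
    return crop[:, ::-1] if rng.random() < 0.5 else crop

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))  # synthetic CIFAR-sized image
std_img = per_image_standardization(img)
aug_img = pad_and_random_crop(img, rng)
```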
ImageNet: We augmented images at training time by resizing each image, taking a random crop, randomly horizontally reflecting the cropped images, and randomly distorting the image colors. At evaluation time, we performed a single central crop. In both training and evaluation, we then subtracted the global mean RGB value from each pixel using the values computed by Simonyan and Zisserman (see https://gist.github.com/ksimonyan/211839e770f7b538e2d8#description for the mean RGB values used).
B.2 Model descriptions
Simple CNN is identical to the base model described in Shallue et al. It consists of 2 convolutional layers with max pooling followed by 1 fully connected layer. The convolutional layers use a window with stride 2. The convolutional layers have 32 and 64 filters, respectively, and the fully connected layer has 1024 units. It does not use batch normalization.
CNN is the “All-CNN-C” model from Springenberg et al., as used in Schneider et al. The model consists of 3 convolutional layer blocks with max pooling. The convolutional layers use filters with stride 1, “same” padding, and the ReLU activation function. Max pooling uses a window with stride 2. The convolutional layer blocks have 96, 192, and 192 filters, respectively. As in Schneider et al., we used ℓ2 regularization.
ResNet is described in He et al. [2016a]. We used the improved residual block described in He et al. [2016b]. We used batch normalization [Ioffe and Szegedy, 2015] with an exponential moving average (EMA) decay of 0.997 for ResNet-32, and ghost batch normalization [Hoffer et al., 2017] with a ghost batch size of 32 and its own EMA decay for ResNet-50.
VGG is based on “model C” from Simonyan and Zisserman. It consists of 13 convolutional layers followed by 3 fully connected hidden layers. We followed the modification used by Wilson et al. and added batch normalization layers.
Transformer is the “base” model described in Vaswani et al. [2017]. We used it as an autoregressive language model by applying the decoder directly to the sequence of word embeddings for each sentence. Unlike the default implementation, we removed dropout regularization and used separate weight matrices for the input embedding layer and the pre-softmax linear transformation, as we observed that these choices led to better-performing models.
Appendix C Estimating trial outcomes via bootstrap
Our tuning protocol corresponds to running trials with quasi-random metaparameter values sampled uniformly from the search space until $n$ feasible trials are obtained, with $n$ depending on the workload. We then select the best trial, based on our statistic of interest, over those $n$ trials.
We used the following bootstrap procedure to estimate the means and uncertainties of our tuning protocol. We ran $N \geq n$ trials, with $N$ depending on the workload. Then, for each bootstrap sample, we resampled the dataset of $N$ trials with replacement and computed our statistic on the first $n$ trials of the resampled dataset. We collected a fixed number of such bootstrap samples each time, and from those computed the means and the lower and upper percentiles of the bootstrap distribution. We used this procedure to generate the means and error bars for each plot.
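The bootstrap procedure can be sketched as follows. The synthetic trial errors, the number of bootstrap samples, and the percentile levels are placeholders, since the exact values vary by workload and are not specified above.

```python
import numpy as np

def bootstrap_best_trial(trial_errors, n, num_bootstrap, rng):
    """Estimate the distribution of 'best validation error after n tuning
    trials' by resampling a completed set of N >= n trials with replacement
    and taking the best of the first n trials in each resample."""
    trial_errors = np.asarray(trial_errors)
    stats = np.empty(num_bootstrap)
    for b in range(num_bootstrap):
        resample = rng.choice(trial_errors, size=trial_errors.size, replace=True)
        stats[b] = resample[:n].min()  # best (lowest-error) trial among the first n
    # Mean and a central interval of the bootstrap distribution
    # (percentile levels here are illustrative).
    return stats.mean(), np.percentile(stats, [2.5, 97.5])

rng = np.random.default_rng(0)
errors = rng.uniform(0.05, 0.30, size=100)  # synthetic per-trial validation errors
mean, (lo, hi) = bootstrap_best_trial(errors, n=50, num_bootstrap=500, rng=rng)
```

Because the statistic is "best of the first n resampled trials", the error bars reflect the luck of the tuning draw, not just the noise of a single training run.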
The values of $n$ and $N$ were workload-specific: Simple CNN on Fashion MNIST, ResNet-32 on CIFAR-10, ResNet-50 on ImageNet, Transformer on LM1B, and CNN on CIFAR-10 each used a single pair of values; VGG on CIFAR-10 with our code used one pair for tuning the learning rate schedule alone and another for tuning the learning rate schedule together with additional metaparameters and regularization; and LSTM on War and Peace used one pair for tuning just the learning rate and another for tuning the learning rate schedule and additional metaparameters.
The sole exceptions to this bootstrap procedure are the two left panels of Figure 3, for which we used a procedure similar to Wilson et al. to ensure comparability. For each optimizer, we selected the trial that minimized validation error in our final search space and ran the same metaparameter values 5 times, reporting the mean, minimum, and maximum test error over those 5 runs in Figure 3. This is slightly different from Wilson et al., who chose the trial that minimized training error and reported validation error. When tuning the learning rate and the additional metaparameter, we used 24 trials per optimizer in the initial search space (which we used to select the final search space) and 16 trials per optimizer in the final search space.
Appendix D Metaparameter Search Spaces
When tuning metaparameters over a large range, we found that our search could sometimes be made more efficient if we parametrized the search space in a way that decorrelated the axes of the space. For example, with Momentum and Nesterov we observed a clear relationship between the initial learning rate $\eta$ and the momentum parameter $\gamma$: smaller values of $\eta$ require larger values of $\gamma$ for good performance, and vice versa. Indeed, Shallue et al. suggested that these optimizers are governed by the “effective learning rate” $\eta_{\text{eff}} = \eta/(1-\gamma)$, and inspired by this, we found that searching over $(\eta_{\text{eff}}, \gamma)$ instead of $(\eta, \gamma)$ usually led to a more efficient metaparameter search. Similarly, with Adam and NAdam we observed a relationship between the initial learning rate $\eta$ and the $\varepsilon$ parameter: larger values of $\varepsilon$ require larger values of $\eta$ for good performance, and vice versa. This is not surprising given the analysis in Appendix A showing that, for large $\varepsilon$, $\eta/\varepsilon$ is analogous to the effective learning rate of Adam and NAdam. We found that searching over $(\eta/\varepsilon, \varepsilon)$ was usually more efficient than searching over $(\eta, \varepsilon)$. We used these techniques in a subset of our experiments.
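The decorrelated parameterization for Momentum can be sketched as follows: sample the effective learning rate and 1−γ on a log scale, then recover the raw learning rate. The ranges are illustrative assumptions, not the paper's search spaces.

```python
import math
import random

def sample_momentum_space(rng):
    """Sample in the decorrelated (effective learning rate, momentum) space
    and map back to the raw (learning rate, momentum) parameters, where
    eta_eff = eta / (1 - gamma). Ranges are illustrative placeholders."""
    def log_uniform(low, high):
        return math.exp(rng.uniform(math.log(low), math.log(high)))

    eff_lr = log_uniform(1e-4, 1e0)           # eta_eff, sampled directly
    one_minus_gamma = log_uniform(1e-3, 1e0)  # tune 1 - gamma on a log scale
    eta = eff_lr * one_minus_gamma            # recover the raw learning rate
    return {"learning_rate": eta, "momentum": 1.0 - one_minus_gamma}

rng = random.Random(0)
params = sample_momentum_space(rng)
```

Sampling the two decorrelated axes independently avoids wasting trials on (η, γ) pairs whose effective learning rate is far outside the useful range.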
Below we report the search spaces used for our experiments. We include both the initial search spaces, which we used to refine the final spaces, and the final spaces used to generate the plots. When only one search space was used, we denote the initial space as final. The learning rate, one minus the momentum parameters, $\varepsilon$, the regularization coefficients, and combinations thereof are always tuned on a log scale. The number of samples from each search space is specified in Appendix C.
D.1 CNN on Fashion MNIST
We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor. We did not use regularization or weight decay.
D.2 ResNet-32 on CIFAR-10
We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor within the ranges shown in the tables below. In the tables, $\lambda$ denotes the regularization coefficient.
D.3 ResNet-50 on ImageNet
We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor within the ranges shown in the tables below, which also list the weight decay and label smoothing coefficients.