On Empirical Comparisons of Optimizers for Deep Learning

by Dami Choi et al.

Selecting an optimizer is a central step in the contemporary deep learning pipeline. In this paper, we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. Our findings suggest that the metaparameter search space may be the single most important factor explaining the rankings obtained by recent empirical comparisons in the literature. In fact, we show that these results can be contradicted when metaparameter search spaces are changed. As tuning effort grows without bound, more general optimizers should never underperform the ones they can approximate (i.e., Adam should never perform worse than Momentum), but recent attempts to compare optimizers either assume these inclusion relationships are not practically relevant or restrict the metaparameters in ways that break the inclusions. In our experiments, we find that inclusion relationships between optimizers matter in practice and consistently predict optimizer comparisons. In particular, we find that the popular adaptive gradient methods never underperform Momentum or gradient descent. We also report practical tips for tuning often-ignored metaparameters of adaptive gradient methods and raise concerns about fairly benchmarking optimizers for neural network training.





1 Introduction

The optimization algorithm chosen by a deep learning practitioner determines the training speed and the final predictive performance of their model. To date, there is no theory that adequately explains how to make this choice. Instead, our community relies on empirical studies [Wilson et al., 2017] and benchmarking [Schneider et al., 2019]. Indeed, it is the de facto standard that papers introducing new optimizers report extensive comparisons across a large number of workloads. Therefore, to maximize scientific progress, we must have confidence in our ability to make empirical comparisons between optimization algorithms.

Although there is no theory guiding us when comparing optimizers, the popular first-order optimizers form a natural inclusion hierarchy. For example, Adam [Kingma and Ba, 2015] and RMSProp [Tieleman and Hinton, 2012] can approximately simulate Momentum [Polyak, 1964] if the ε term in the denominator of their parameter updates is allowed to grow very large. However, these relationships may not matter in practice. For example, the settings of Adam's metaparameters that allow it to match the performance of Momentum may be too difficult to find (for instance, they may be infinite).
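The approximate simulation above can be checked numerically. The following sketch (ours, not the paper's experiment) shows that an Adam-style step, −lr·m/(√v + ε), approaches a Momentum-style step −lr_eff·m as ε grows, provided the learning rate is rescaled by ε. The moment estimates and step sizes are illustrative assumptions.

```python
import numpy as np

def adam_step(m, v, lr, eps):
    # Adam-style parameter update: -lr * m / (sqrt(v) + eps)
    return -lr * m / (np.sqrt(v) + eps)

rng = np.random.default_rng(0)
m = rng.normal(size=5)               # first-moment (momentum-like) estimate
v = rng.uniform(0.1, 2.0, size=5)    # second-moment estimate

momentum_step = -0.1 * m             # the momentum-style step Adam should mimic
for eps in (1e-8, 1.0, 1e8):
    # Rescale lr by eps so the effective step size stays comparable.
    step = adam_step(m, v, lr=0.1 * eps, eps=eps)
    print(f"eps={eps:g}  max deviation from momentum step: "
          f"{np.max(np.abs(step - momentum_step)):.2e}")
```

As ε dominates √v, the per-coordinate scaling by the second-moment estimate washes out and the deviation shrinks toward zero, which is the sense in which Adam "includes" Momentum only in a limit of its metaparameters.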

In this paper, we demonstrate two important and interrelated points about empirical comparisons of neural network optimizers. First, we show that inclusion relationships between optimizers actually matter in practice; in our experiments, more general optimizers never underperform special cases. Despite conventional wisdom [Wilson et al., 2017, Balles and Hennig, 2017], we find that when carefully tuned, Adam and other adaptive gradient methods never underperform Momentum or SGD. Second, we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. By comparing to previous experimental evaluations, we show how easy it is to change optimizer rankings on a given workload (model and dataset pair) by changing the metaparameter tuning protocol, with optimizer rankings stabilizing according to inclusion relationships as we spend more and more effort tuning. Our findings raise serious questions about the practical relevance of conclusions drawn from these sorts of empirical comparisons.

The remainder of this paper is structured as follows. In Section 2, we review related work, focusing on papers that make explicit claims about optimizer comparisons in deep learning and application papers that provide evidence about the tuning protocols of practitioners. We develop our definition of first-order optimizers in Section 3 along with a notion of inclusion relationships between optimizers. We present our experimental results in Section 4. Despite thorny methodological issues over how to avoid biases in comparisons due to search spaces that favor one optimizer over another, we believe that our experimental methodology is an acceptable compromise and has substantial practical relevance. Among other results, we show that the inclusion hierarchy of update rules is almost entirely predictive of optimizer comparisons. In particular, NAdam [Dozat, 2016] achieves the best top-1 validation accuracy on ResNet-50 on ImageNet in our experiments. The 77.1% we obtain with NAdam, although not as good as the 77.6% obtained using learned data augmentation by Cubuk et al. [2018], is better than the best existing published results using any of the more standard pre-processing pipelines (76.5%, due to Goyal et al. [2017] using Momentum).

2 Background and Related Work

Our work was inspired by the recent studies of neural network optimizers by Wilson et al. [2017] and Schneider et al. [2019]. Wilson et al. [2017] constructed a simple classification problem in which adaptive gradient methods (e.g. Adam) converge to provably worse solutions than standard gradient methods. However, crucially, their analysis ignored the ε parameter in the denominator of some adaptive gradient methods. Wilson et al. [2017] also presented experiments in which Adam produced worse validation accuracy than SGD across all deep learning workloads considered. However, they only tuned over the learning rate and learning rate decay scheme in their experiments, leaving all other metaparameters of Adam at fixed default values. Despite these findings, adaptive gradient methods have remained popular since the work of Wilson et al. [2017]. Schneider et al. [2019] presented a benchmark suite (DeepOBS) for deep learning optimizers and reported that there was no single best optimizer across the workloads they considered. Yet Schneider et al. [2019] only tuned the learning rate of each optimizer and left all other metaparameters at fixed default values.

As we will discuss in Section 4.3, the choices of metaparameter tuning protocols in Wilson et al. [2017] and Schneider et al. [2019] may be the most important factor preventing their results from being relevant to practical choices about which optimizer to use. Metaparameter tuning is a crucial step of the deep learning pipeline [Bergstra and Bengio, 2012, Snoek et al., 2012, Sutskever et al., 2013, Smith, 2018], so it is critical for papers studying optimizers to match as closely as possible the tuning protocols of an ideal practitioner. Tuning protocols can vary widely and often differ between work studying neural network optimizers and work concerned with actually training neural networks to solve specific problems.

Recent papers that study or introduce optimization algorithms tend to compare to Adam and RMSProp without tuning ε, presumably to simplify their experiments. It is standard to leave ε at the common default value for Adam and for RMSProp [Tieleman and Hinton, 2012, Kingma and Ba, 2015, Dozat, 2016, Balles and Hennig, 2017, Loshchilov and Hutter, 2017, Zou and Shen, 2018, Ma and Yarats, 2018, Bernstein et al., 2018, Chen et al., 2019, Zou et al., 2019]. Others do not even report the value of ε used [Balles and Hennig, 2017, Zhang and Mitliagkas, 2017, Keskar and Socher, 2017, Chen et al., 2018, Zhou et al., 2018, Aitchison, 2018, Reddi et al., 2019, Luo et al., 2019]. There are exceptions. Zaheer et al. [2018] and Liu et al. [2019] consider ε values orders of magnitude larger than the standard default. However, the experiments in both papers gave only limited consideration to ε, testing at most two values while tuning Adam. De et al. [2018] is the only work we found that considered a broad range of values for ε. Both Zaheer et al. [2018] and De et al. [2018] found that non-default values of ε outperformed the default.

While it is also extremely common in applications to use a default value of ε, some notable papers tuned ε and selected values up to eight orders of magnitude away from the common defaults. Szegedy et al. [2016] used ε = 1.0 for RMSProp; Liu et al. [2019] reported that their results were sensitive to ε and set a non-default value for Adam; Tan et al. [2019] and Tan and Le [2019] set non-default values of ε for RMSProp, the latter achieving state-of-the-art ImageNet top-1 accuracy. In reinforcement learning, Hessel et al. [2017] likewise set a non-default ε. Although we focused this discussion on ε in Adam and RMSProp, we suspect these trends hold for other rarely tuned metaparameters as well.

3 What is an optimizer?

Optimization algorithms are typically controlled by metaparameters that determine their behavior (e.g. the learning rate). An optimization algorithm therefore represents a family of update rules until all metaparameters have been specified. Practitioners generally tune a subset of the metaparameters to maximize performance over a validation set, while often leaving some metaparameters at fixed default values. We define an optimizer to be an update rule together with a list of metaparameters to tune. In other words, someone using Adam and tuning ε is using a "different" optimizer than someone using Adam with the default ε. We focus on first-order optimizers within the following standard model of iterative methods for optimization [Nesterov, 2018].

Consider a differentiable loss function ℓ : ℝᵈ → ℝ whose vector of first partial derivatives, or gradient, is given by ∇ℓ(θ). In our context, ℓ generally represents the loss function computed over an entire dataset by a neural network on a specific task, where θ ∈ ℝᵈ is a vector of model parameters. The optimization problem is to find a global minimum θ* such that ℓ(θ*) ≤ ℓ(θ) for all θ ∈ ℝᵈ, but in practice we content ourselves with points θ that are locally optimal: ℓ(θ) ≤ ℓ(θ′) for all θ′ in a non-empty neighbourhood of θ. First-order methods for optimization [Nesterov, 2018] use queries to ℓ and ∇ℓ locally at points θ to solve this problem. In most deep learning applications, the cost of evaluating ∇ℓ scales linearly with the dataset size, and it is usually more effective to use a stochastic estimator of ∇ℓ whose cost is constant in the dataset size [Bottou, 2010]. We assume that ∇ℓ(θ) denotes a stochastic estimate of the true gradient for the remainder of this section.

The stochastic gradient descent algorithm [SGD; Robbins and Monro, 1951] is one of the simplest methods used for training neural networks. SGD is initialized with θ₀ and produces a sequence of iterates according to the rule θₜ₊₁ = θₜ − ηₜ∇ℓ(θₜ), where ηₜ is an iteration-dependent "learning rate" or "step size". Recently, there has been an explosion of new methods in deep learning based on SGD, all of which fall into the following first-order scheme.

Algorithm 1: First-order optimization method.
  Input: update rule M, initialization θ₀, metaparameters Φ
  t ← 0
  while stopping criteria on θₜ not met do
      θₜ₊₁ ← M(∇ℓ(θ₀), …, ∇ℓ(θₜ), θ₀, …, θₜ; Φ)
      t ← t + 1
  end while
  return θₜ

This scheme is a slight modification of Nesterov's (2018) and includes all of the modern first-order methods popular in deep learning. As an example, the metaparameter of SGD is a learning rate schedule {ηₜ} and its update rule is given by θₜ₊₁ = θₜ − ηₜ∇ℓ(θₜ). The Momentum method due to Polyak [1964] generalizes the gradient method by linearly combining the gradient direction with a constant multiple of the previous parameter update. Its metaparameters are a learning rate schedule {ηₜ} and a momentum parameter γ, with update rule

    θₜ₊₁ = θₜ − ηₜ∇ℓ(θₜ) + γ(θₜ − θₜ₋₁).

The difference between optimizers is entirely captured by the choice of update rule M and metaparameters Φ. Thus, in analogy to (overloaded) function declarations in C++, we identify optimizers by an update rule "signature": the update rule name together with the free metaparameter arguments. For example, Momentum(η), with the momentum parameter fixed to a default, is not the same optimizer as Momentum(η, γ), because the latter has two free metaparameters while the former has only one. The two concerns of a practitioner are choosing M and Φ. We consider each in turn.
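The scheme above can be sketched in a few lines. The following toy implementation (ours, for illustration; the quadratic loss and metaparameter values are assumptions) expresses SGD and Momentum as update rules consumed by a generic first-order loop in the style of Algorithm 1.

```python
import numpy as np

def grad(theta):
    # Gradient of the toy loss l(theta) = 0.5 * ||theta||^2.
    return theta

def make_sgd(phi):
    # SGD(eta): theta_{t+1} = theta_t - eta * grad(theta_t)
    def rule(grads, thetas):
        return thetas[-1] - phi["eta"] * grads[-1]
    return rule

def make_momentum(phi):
    # Momentum(eta, gamma): combine the gradient with a constant multiple
    # of the previous parameter update, theta_t - theta_{t-1}.
    def rule(grads, thetas):
        prev_update = thetas[-1] - thetas[-2] if len(thetas) > 1 else 0.0
        return thetas[-1] - phi["eta"] * grads[-1] + phi["gamma"] * prev_update
    return rule

def optimize(rule, theta0, steps=100):
    # Algorithm 1: feed the gradient and iterate histories to the update rule.
    thetas, grads = [theta0], []
    for _ in range(steps):
        grads.append(grad(thetas[-1]))
        thetas.append(rule(grads, thetas))
    return thetas[-1]

theta_sgd = optimize(make_sgd({"eta": 0.1}), np.ones(3))
theta_mom = optimize(make_momentum({"eta": 0.1, "gamma": 0.5}), np.ones(3))
print(np.linalg.norm(theta_sgd), np.linalg.norm(theta_mom))
```

Both rules drive the toy loss to its minimum; the point of the abstraction is that an "optimizer" is fully specified only once both the rule and its free metaparameters Φ are fixed.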

3.1 The practice of choosing metaparameters

In the theory of convex optimization, metaparameter choices are well-understood for the most common methods on many classes of convex functions [Rockafellar, 1970, Nesterov, 2018, Boyd and Vandenberghe, 2004]. For example, for smooth convex loss functions, the learning rate of gradient descent should be the inverse of the smoothness constant. This stands in sharp contrast to non-convex neural network optimization, for which the interactions between metaparameters and loss function classes are not well understood. Many of the most popular neural network optimization methods have a panoply of metaparameters whose provenance is sometimes accidental and whose importance is disputed. Despite Adam's ε metaparameter being introduced solely to prevent division by zero and often being ignored in practice (the Keras documentation previously referred to ε as a "fuzz factor" and now doesn't mention it at all, https://git.io/no-epsilon), some practitioners have nonetheless found it helpful to tune (see Section 2). If Adam is interpreted as an empirical, diagonal approximation to natural gradient descent [Kingma and Ba, 2015], ε can be viewed as a multi-purpose damping term whose role is to improve the conditioning of the Fisher, in analogy to the approximate second-order method considered by Becker and Le Cun [1988]. We can also view ε as setting a trust region radius [Martens and Grosse, 2015, Adolphs et al., 2019] and controlling an interpolation between Momentum and diagonal natural gradient descent, by either diminishing or increasing the effect of the second-moment estimate on the update direction. Under either interpretation, the best value for ε will be problem-dependent and likely benefit from tuning.

Since the roles of optimizer metaparameters on neural network loss functions are not well-understood, most practitioners treat metaparameters as nuisance parameters and optimize them away for each new workload via a tuning protocol. These protocols vary widely, but all contemporary protocols require a hand-designed search space as input, including partially automated procedures using Bayesian optimization [Snoek et al., 2012]. Good search spaces are hard-won treasures: they tend to be refined over many experiments and across many workloads, representing the sum total of a practitioner’s experience. Even given a search space, the best way to tune is still an open research question that depends on the computational budget of the user. Grid search is inefficient [Bergstra and Bengio, 2012] and random search and Bayesian optimization algorithms tend to use priors oblivious to the meanings of different metaparameters [Snoek et al., 2012]. For budgets that allow dozens or hundreds of trials and multiple rounds of experiments, the current state of the art for tuning metaparameters is to iteratively use human judgment to design a search space and use some black-box algorithm to tune within that space.
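The tuning protocol just described — a hand-designed search space plus a black-box search within it — can be sketched as follows. This is an illustrative random-search skeleton under assumed bounds and a stand-in objective, not the paper's actual search spaces or models.

```python
import math
import random

def sample_trial(rng):
    # Hand-designed search space: scale metaparameters sampled log-uniformly,
    # momentum sampled via 1 - gamma on a log scale. Bounds are assumptions.
    return {
        "eta":   10 ** rng.uniform(-5, 0),      # learning rate
        "gamma": 1 - 10 ** rng.uniform(-3, 0),  # momentum
        "eps":   10 ** rng.uniform(-10, 0),     # epsilon, ten orders of magnitude
    }

def validation_error(phi):
    # Stand-in for training a model and evaluating on the validation set:
    # a bowl in log learning rate with a fixed noise offset.
    return (math.log10(phi["eta"]) + 2.5) ** 2 + 0.1 * random.Random(0).random()

rng = random.Random(42)
trials = [sample_trial(rng) for _ in range(50)]
best = min(trials, key=validation_error)   # select on validation performance
print(best)
```

In practice, as the text notes, the search space itself is iteratively refined by human judgment between rounds of such searches.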

3.2 The taxonomy of first-order methods and choosing the update rule

The basic observation of this section is that some optimizers can approximately simulate others (i.e., optimizer A might be able to approximately simulate the trajectory of optimizer B for any particular setting of B’s metaparameters). This is important knowledge because, as a metaparameter tuning protocol approaches optimality, a more expressive optimizer can never underperform any of its specializations. To capture these concepts more precisely, we define the following inclusion relationship between optimizers, which captures the idea that one optimizer can approximate another arbitrarily well.

Definition (Inclusion relationship). Let M and N be update rules for use in a first-order optimization method. M is a subset, or specialization, of N if, for every setting Φ_M of M's metaparameters, there exists a sequence of settings Φ_N⁽¹⁾, Φ_N⁽²⁾, … of N's metaparameters such that, for all iterations t and all histories of gradients and iterates, N's updates under Φ_N⁽ᵏ⁾ converge to M's updates under Φ_M. This is denoted M ⊆ N, with equality iff M ⊆ N and N ⊆ M.

Evidently SGD ⊆ Momentum, since setting γ = 0 in Momentum recovers SGD. Many well-known optimizers fall naturally into this taxonomy. In particular, we consider RMSProp with momentum [Tieleman and Hinton, 2012], Adam [Kingma and Ba, 2015] and NAdam [Dozat, 2016] in the appendix and show the following inclusions.² (²The transformation that generalizes Momentum into RMSProp can also be applied to Nesterov. So, in the appendix we define RMSterov, a novel variant satisfying Nesterov ⊆ RMSterov.)

    SGD ⊆ Momentum ⊆ RMSProp,    SGD ⊆ Momentum ⊆ Adam,    SGD ⊆ Nesterov ⊆ NAdam.    (1)

If two optimizers have an inclusion relationship, the more general optimizer can never be worse with respect to any metric of interest, provided the metaparameters are sufficiently tuned to optimize that metric. Optimally-tuned Momentum cannot underperform optimally-tuned SGD, because setting γ = 0 in Momentum recovers SGD. However, optimizers with more metaparameters might be more expensive to tune, so we should have a theoretical or experimental reason for using (or creating) a more general optimizer. For example, Momentum improves local convergence rates over SGD on twice-differentiable functions that are smooth and strongly convex [Polyak, 1964], and Nesterov has globally optimal convergence rates within the class of smooth and strongly convex functions [Nesterov, 1983, 2018].
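The exact (not merely approximate) inclusion SGD ⊆ Momentum can be verified directly: with γ = 0, Momentum's iterates coincide with SGD's. A minimal numerical check on an assumed quadratic toy loss:

```python
import numpy as np

def run_sgd(theta, eta, steps):
    for _ in range(steps):
        theta = theta - eta * theta            # gradient of 0.5*||theta||^2 is theta
    return theta

def run_momentum(theta, eta, gamma, steps):
    # theta_{t+1} = theta_t - eta * grad + gamma * (theta_t - theta_{t-1})
    prev = theta.copy()
    for _ in range(steps):
        update = -eta * theta + gamma * (theta - prev)
        prev, theta = theta, theta + update
    return theta

start = np.array([1.0, -2.0])
same = np.array_equal(run_sgd(start, 0.1, 20),
                      run_momentum(start, 0.1, 0.0, 20))  # Momentum(gamma=0) = SGD
print(same)  # True
```

With γ = 0 the previous-update term vanishes identically, so the two trajectories are equal step by step, which is the strongest form of the inclusion relationship.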

At first glance, the taxonomy of optimizer inclusions appears to resolve many optimizer comparison questions. However, for a deep learning practitioner, there is no guarantee that the inclusion hierarchy is at all meaningful in practice. For example, the metaparameters that allow Adam to match or outperform Momentum might not be easily accessible. They might exist only in the limit of very large values, or be so difficult to find that only practitioners with huge computational budgets can hope to discover them. Indeed, empirical studies and conventional wisdom hold that the inclusion hierarchy does not predict optimizer performance for many practical workloads [Wilson et al., 2017, Balles and Hennig, 2017, Schneider et al., 2019]. Either these experimental investigations are too limited or the taxonomy of this section is of limited practical interest and provides no guidance about which optimizer to use on a real workload. In the following section we attempt to answer this question experimentally, and show that these inclusion relationships are meaningful in practice.

4 Experiments

An empirical comparison of optimizers should aim to inform a careful practitioner. Accordingly, we model our protocol on a practitioner that is allowed to vary all optimization metaparameters for each optimizer (e.g. the learning rate, β₁, β₂, and ε for Adam) in addition to a parameterized learning rate decay schedule, in contrast to studies that fix a subset of the optimization metaparameters to their default values [e.g. Wilson et al., 2017, Schneider et al., 2019]. There is no standard method for selecting the values of these metaparameters, but most practitioners tune at least a subset of the optimization metaparameters by running a set of trials to maximize performance over the validation set. In our experiments, we run tens to hundreds of individual trials per workload. Given the variety of workloads we consider, this trial budget covers a wide range of computational budgets.

Selecting the metaparameter search space for each optimizer is a key methodological choice for any empirical comparison of optimizers. Prior studies have attempted to treat each optimizer fairly by using the same search space for all optimizers [e.g. Wilson et al., 2017, Schneider et al., 2019]. However, this requires the assumption that similarly-named metaparameters should take similar values between optimizers, which is not always true. For example, Momentum and Nesterov both have similar-looking momentum and learning rate metaparameters, but Nesterov tolerates larger values of its momentum metaparameter [Sutskever et al., 2013], so any fixed search space will likely be more favorable for one of the two. The situation worsens with less closely related optimizers, and designing a search space that is equally appropriate for optimizers with incommensurate metaparameters is almost impossible. Despite coming with its own set of challenges, it is most informative to compare optimizers assuming the practitioner is allowed to tune metaparameters for different optimizers independently by way of optimizer-specific search spaces.

In our experiments, we chose the search space for each optimizer by running an initial set of experiments over a relatively large search space. In a typical case, we ran a single set of initial trials per optimizer to select the final search space. However, in some cases we chose the initial search space poorly, so we ran another set of experiments to select the final search space. The effort required to choose each search space cannot simply be quantified by the number of initial trials; the provenance of each search space is difficult to trace exactly. In some cases, our search spaces were informed by published results or prior experience with particular models and optimizers. We validated our search spaces by checking that the optimal metaparameter values were away from the search space boundaries for all optimizers in all experiments (see Figure 5 in Appendix E). We provide our final search spaces for all experiments in Appendix D. The fact that our final error rates compare favorably to prior published results – including reaching state-of-the-art for our particular configuration of ResNet-50 on ImageNet (see Section 4.2) – supports our claim that our methodology is highly competitive with expert tuning procedures.

4.1 Overview of Workloads and Experimental Details

Model         Dataset         Target               Batch size   Budget
Simple CNN    Fashion MNIST   6.6%                 256          10k steps
ResNet-32     CIFAR-10        7%                   256          50k steps
CNN           CIFAR-100       —                    256          350 epochs
VGG-16        CIFAR-10        —                    128          250 epochs
ResNet-50     ImageNet        24%                  1024         150k steps
LSTM          War and Peace   —                    50           200 epochs
Transformer   LM1B            3.45 cross entropy   256          750k steps

Table 1: Summary of workloads used in experiments. Targets are validation error rates, except for Transformer, whose target is validation cross entropy.

We investigated the relative performance of optimizers across a variety of image classification and language modeling tasks. For image classification, we trained a simple convolutional neural network (Simple CNN) on Fashion MNIST

[Xiao et al., 2017]; ResNet-32 [He et al., 2016a] on CIFAR-10 [Krizhevsky, 2009]; a CNN on CIFAR-100; VGG-16 [Simonyan and Zisserman, 2014] on CIFAR-10; and ResNet-50 on ImageNet [Russakovsky et al., 2015]. For language modeling, we trained a 2-layer LSTM model [Hochreiter and Schmidhuber, 1997] on Tolstoy’s War and Peace; and Transformer [Vaswani et al., 2017] on LM1B [Chelba et al., 2014]. We used a linear learning rate decay schedule parameterized the same way as Shallue et al. [2019] for all workloads. We used a fixed batch size and a fixed budget of training steps for each workload independent of the optimizer. Table 1 summarizes these workloads and Appendix B provides the full details.

Given a hypercube-shaped search space, our tuning protocol sought to model a practitioner with a fixed budget of trials trying to achieve the best outcome using tens of feasible trials, with the exact budget depending on the workload.³ (³Although we used a budget of tens of independent tuning trials throughout this section, in retrospect the best validation error across tuning trials converged quite quickly for our final search spaces, producing good results with fewer than 20 trials in many cases. See Figures 6–8 in Appendix E.) A feasible trial is any trial that achieves finite training loss. We used quasi-random uniform search [Bousquet et al., 2017], and continued the search until we obtained a fixed number of feasible trials. From those trials we considered two statistics. The first, in order to characterize the best outcome, is a metric of interest (e.g. test accuracy) corresponding to the trial achieving the optimum of some other metric (e.g. validation accuracy). The second, in order to characterize the speed of training, is the number of steps required to reach a fixed validation target, conditional on at least one trial in the search having reached that target. We chose the target for each workload based on initial experiments and known values from the literature (see Table 1). We estimated means and uncertainties using the bootstrap procedure described in Appendix C.
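The two statistics above can be made concrete with a small sketch. The trial records and target values below are illustrative assumptions, not the paper's data.

```python
# Each record summarizes one tuning trial; steps_to_target is None when the
# trial never reached the validation error target.
trials = [
    {"valid_err": 0.080, "test_err": 0.085, "steps_to_target": None},
    {"valid_err": 0.066, "test_err": 0.070, "steps_to_target": 8200},
    {"valid_err": 0.071, "test_err": 0.074, "steps_to_target": 9500},
]

# Statistic 1 (best outcome): the metric of interest (test error) at the trial
# that optimizes another metric (validation error).
best = min(trials, key=lambda t: t["valid_err"])
best_test_err = best["test_err"]

# Statistic 2 (training speed): fewest steps to reach the validation target,
# conditional on at least one trial having reached it.
reached = [t["steps_to_target"] for t in trials if t["steps_to_target"] is not None]
steps_to_target = min(reached) if reached else None

print(best_test_err, steps_to_target)
```

Separating "select on validation, report on test" from "steps to a fixed target" keeps predictive performance and training speed as distinct comparisons, which is how the figures below are organized.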

Figure 1: The relative performance of optimizers is consistent with the inclusion relationships, regardless of whether we compare final validation error (top) or test error (bottom). For all workloads, we tuned the metaparameters of each optimizer separately, and selected the trial that achieved the lowest final validation error.
Figure 2: The relative training speed of optimizers is consistent with the inclusion relationships. We measured (idealized) training speed as the number of training steps required to reach a target validation error (see Table 1 for the error targets).

4.2 Inclusion relationships matter in practice

Figure 1 shows the final predictive performance of six optimizers on four different workloads after tuning metaparameters to minimize validation error. Regardless of whether we compare final validation error or test error, the inclusion relationships hold in all cases – a more general optimizer never underperforms any of its specializations within the error bars. Similar results hold for training error (see Figure 9 in Appendix E). Training speed is also an important consideration, and Figure 2 demonstrates that the inclusion relationships also hold within error bars when we compare the number of steps required to reach a target validation error. Moreover, these results confirming the relevance of optimizer inclusion relationships do not depend on the exact step budgets or error targets we chose (see Figure 10 in Appendix E), although large changes to these values would require new experiments.

Of course, just because a more general optimizer is no worse than any of its specializations doesn’t mean the choice of optimizer makes a large difference on all workloads. For some workloads in Figures 1 and 2, all optimizers perform about the same, while other workloads have a clear ranking or even dramatic differences. For example, the choice of optimizer seems to make little difference for ResNet-32 on CIFAR-10; all optimizers achieve similar predictive performance and training speed. On the other hand, Transformer on LM1B exhibits a clear ranking in terms of predictive performance and training speed. For this workload, Adam needs roughly half the steps that Momentum requires to reach our target error, and, although not shown in Figure 2, roughly six times fewer steps to get the same result as SGD. These differences are clearly significant enough to matter to a practitioner, and highlight the practical importance of choosing the right optimizer for some workloads.

The most general optimizers we considered were RMSProp, Adam, and NAdam, which do not include each other as special cases, and whose relative performance is not predicted by inclusion relationships. Across the workloads we considered, none of these optimizers emerged as the clear winner, although Adam and NAdam generally seemed to have an edge over RMSProp. For all of these optimizers, we sometimes had to set the ε parameter orders of magnitude larger than the default value in order to get good results. In particular, we achieved a validation accuracy of 77.1% for ResNet-50 on ImageNet using NAdam with a non-default ε, a result that exceeds the 76.5% achieved by Goyal et al. [2017] using Momentum. Across just these 4 workloads, the range of the optimal values of ε spanned 10 orders of magnitude. Faced with this reality, a practitioner might reasonably doubt their ability to find a value of ε near the optimum. However, we found that we could reasonably expect to find a suitable value with only tens of trials. When tuning ε for Adam or NAdam over a large range, we found it more efficient to search over a transformed parameterization rather than ε directly; see Appendix D for more details.

4.3 Reconciling disagreements with previous work

In order to confirm that differences in metaparameter tuning protocols explain the differences between our conclusions and those of Wilson et al. [2017] and Schneider et al. [2019], we reproduced a representative subset of their results and then inverted, or at least collapsed, the ranking over optimizers just by expanding the metaparameter search space.

The left pane of Figure 3 shows our experiments on VGG on CIFAR-10 using code released by Wilson et al. [2017]. When we match their protocol and perform their grid search over the initial learning rate and no other tuning, we reproduce their original result showing worse test error for RMSProp and Adam. However, when we tune the momentum parameter and ε with random search, all four optimizers reach nearly identical test error rates.⁴ (⁴Wilson et al. [2017] selected trials to minimize the training loss and then reported test set results. As Figure 3 shows, removing this somewhat non-standard choice by tuning on a validation set and reporting test set results does not change anything.) With our learning rate schedule search space, merely tuning the learning rate schedule was enough to make all optimizers reach the same test error within error bars. When we additionally tuned the optimization metaparameters and weight decay in our setup, we again obtained similar results for all optimizers, removing any evidence that the inclusion relationships might be violated in practice.

Figure 4 shows our results with different tuning protocols for a CNN on CIFAR-100 and an LSTM language model trained on War and Peace to match the experiments in Schneider et al. [2019]. As reported by Schneider et al. [2019], if we only tune the learning rate without tuning the decay schedule or other optimizer metaparameters, Adam does worse than Momentum for the CNN, and SGD performs slightly better than Adam and Momentum on the War and Peace dataset, although Schneider et al. [2019] found a larger advantage for SGD. However, once we tune all the optimizer metaparameters, Adam does better than Momentum, which does better than SGD, as predicted by the inclusion relationships.

We conclude that the reason both Schneider et al. [2019] and Wilson et al. [2017] observed a ranking that, at first glance, contradicts the inclusion relationships is that they were not tuning enough of the metaparameters. If we recast their results in our terminology, where Adam with default ε is a different optimizer than Adam with tuned ε, then there is no contradiction with our results, and it becomes clear immediately that they do not consider the most interesting form of Adam for practitioners.

Figure 3: Tuning more metaparameters removes the differences in test error between optimizers observed by Wilson et al. [2017]. Tuning a subset of optimizer metaparameters and the initial learning rate is sufficient to equalize performance between all optimizers (left). More extensive metaparameter tuning in our setup, including the learning rate schedule, improves results for all optimizers and still does not produce any differences between optimizer performances (right).
Figure 4: Tuning more metaparameters changes optimizer rankings from Schneider et al. [2019] to rankings that are consistent with the inclusion relationships. The leftmost columns for each workload reproduce the rankings from Schneider et al. [2019], while the remaining columns tune over increasingly general search spaces. All columns use our random search tuning protocol.

5 Conclusions

Inspired by the recent efforts of Wilson et al. [2017] and Schneider et al. [2019], we set out to provide a detailed empirical characterization of the optimizer selection process in deep learning. Our central finding is that inclusion relationships between optimizers are meaningful in practice. When tuning all available metaparameters under a realistic protocol at scales common in deep learning, we find that more general optimizers never underperform their special cases. In particular, we found that RMSProp, Adam, and NAdam never underperformed SGD, Nesterov, or Momentum under our most exhaustive tuning protocol. We did not find consistent trends when comparing optimizers that could not approximate each other. We also found workloads for which there was not a statistically significant separation in the optimizer ranking.

Our experiments have some important limitations and we should be careful not to overgeneralize from our results. The first major caveat is that we did not measure the effects of varying the batch size. Recent empirical work [Shallue et al., 2019, Zhang et al., 2019] has shown that increasing the batch size can increase the gaps between training times for different optimizers, with the gap from SGD to Momentum [Shallue et al., 2019] and from Momentum to Adam [Zhang et al., 2019] increasing with the batch size. Nevertheless, we strongly suspect that the inclusion relations would be predictive at any batch size under a tuning protocol similar to the one we used. The second important caveat of our results is that they inevitably depend on the tuning protocol and workloads that we considered. Although we made every attempt to conduct realistic experiments, we should only expect our detailed findings to hold for similar workloads under similar protocols, namely uniform quasi-random tuning for tens to hundreds of trials, over hypercube search spaces, and with our specific learning rate schedule parameterization. Nevertheless, these caveats reinforce our central point: all empirical comparisons of neural network optimizers depend heavily on the metaparameter tuning protocol, perhaps far more than we are used to with comparisons between model architectures.

If we were to extract “best practices” from our findings, then we suggest the following. If we can afford tens or more runs of our code, we should tune all of the metaparameters of the popular adaptive gradient methods. Just because two metaparameters play a similar role in two different update rules does not mean they should take similar values: optimization metaparameters tend to be coupled, and the optimal value for one may depend on how the others are set. Our results also confirm that the optimal value of Adam’s ε is problem-dependent, so the onus is on empirical studies that fix ε to defend that choice. Finally, we should be skeptical of empirical comparisons of optimizers in papers, especially if an optimizer underperforms any of its specializations. When we do inevitably compare optimizers, we should report search spaces and highlight decisions about which metaparameters were tuned when interpreting results.


Appendix A Optimization schemes and their inclusions

A.1 Optimization schemes

Table 2 summarizes the update rules for the optimizers we consider in this work. We assume the update rules as implemented in TensorFlow. Note that the TensorFlow implementation of RMSProp includes a momentum term.

Table 2: Update rules for the optimizers considered in this work. For a vector $v$ and scalar $p$, $v^p$ denotes the component-wise power. SGD is due to Robbins and Monro [1951], Momentum to Polyak [1964], Nesterov to Nesterov [1983], RMSProp to Tieleman and Hinton [2012], RMSterov is our own, Adam to Kingma and Ba [2015], and NAdam to Dozat [2016].

A.2 Optimization Inclusions: Which optimizers can implement other optimizers?

Momentum can exactly implement SGD

Setting the momentum parameter $\gamma = 0$ makes the Momentum update identical to the SGD update with the same learning rate, so $\text{SGD} \subseteq \text{Momentum}$.

Nesterov can exactly implement SGD

Setting $\gamma = 0$ likewise reduces the Nesterov update to the SGD update, so $\text{SGD} \subseteq \text{Nesterov}$.

RMSProp with momentum can exactly implement Momentum

Consider $\rho = 1$, so that the accumulator $\nu_t$ stays at its initial value and the denominator $\sqrt{\nu_t + \epsilon}$ is a constant $c$. The update becomes
$$m_{t+1} = \gamma m_t + \frac{\eta_t}{c}\,\nabla \mathcal{L}(\theta_t), \qquad \theta_{t+1} = \theta_t - m_{t+1}.$$
This is equivalent to Momentum with learning rate $\eta_t / c$. Thus RMSProp with momentum can reproduce any Momentum trajectory, so $\text{Momentum} \subseteq \text{RMSProp}$.

RMSterov can exactly implement Nesterov

By the same argument, taking $\rho = 1$ makes the denominator a constant $c$, and the RMSterov update reduces to the Nesterov update with learning rate $\eta_t / c$. Thus $\text{Nesterov} \subseteq \text{RMSterov}$.

Adam can approximate Momentum for large $\epsilon$

Consider $\epsilon$ large enough that $\sqrt{\hat{\nu}_t} + \epsilon \approx \epsilon$, so that
$$\theta_{t+1} \approx \theta_t - \frac{\eta_t}{\epsilon}\,\hat{m}_{t+1}.$$
Since $m_{t+1} = \beta_1 m_t + (1 - \beta_1)\,\nabla \mathcal{L}(\theta_t)$, the first-moment estimate is $(1 - \beta_1)$ times a Momentum buffer with $\gamma = \beta_1$. This is therefore equivalent to Momentum with $\gamma = \beta_1$ and effective learning rate $\eta_t (1 - \beta_1) / \left(\epsilon\,(1 - \beta_1^{t+1})\right)$, where the $1 - \beta_1^{t+1}$ factor comes from the bias correction. Thus Adam can approximate Momentum, so $\text{Momentum} \subseteq_{\text{approx}} \text{Adam}$.

NAdam can approximate Nesterov for large $\epsilon$

By the same argument, when $\sqrt{\hat{\nu}_t} + \epsilon \approx \epsilon$ the NAdam update reduces to the Nesterov update with $\gamma = \beta_1$ and an analogously rescaled learning rate. Thus NAdam can approximate Nesterov, so $\text{Nesterov} \subseteq_{\text{approx}} \text{NAdam}$.
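As a numerical illustration of the Adam–Momentum approximation above, the following sketch (our own, not from the paper) implements TensorFlow-style Momentum and bias-corrected Adam updates. With a very large ε and the per-step learning rates matched as in the derivation, the two trajectories coincide on a toy quadratic:

```python
import numpy as np

def momentum_step(theta, grad, buf, lr, gamma=0.9):
    """TensorFlow-style Momentum: buf <- gamma * buf + grad."""
    buf = gamma * buf + grad
    return theta - lr * buf, buf

def adam_step(theta, grad, state, lr, b1=0.9, b2=0.999, eps=1e-8):
    """Adam with bias correction; epsilon is added outside the square root."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)
```

Minimizing f(θ) = θ²/2 (gradient θ) with ε = 10⁸ and Adam learning rate η_t = η_mom · ε · (1 − β₁ᵗ)/(1 − β₁), Adam with β₁ = γ tracks the Momentum trajectory to within floating-point-level error.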

Appendix B Workload details

This section details the datasets and models summarized in Table 1.

B.1 Dataset Descriptions

For Fashion MNIST, CIFAR-10, ImageNet, and LM1B, our setup was identical to Shallue et al. [2019] except for the image pre-processing details described below. For War and Peace, our setup was identical to the “Tolstoi” dataset of Schneider et al. [2019].


We pre-processed images by subtracting the average value across all pixels and channels and dividing by the standard deviation (we used the TensorFlow op tf.image.per_image_standardization). For experiments with the ResNet-32 and CNN models, we followed the standard data augmentation scheme used in He et al. [2016a]: 4 pixels padded on each side, with a single random crop taken from the padded image or its horizontal reflection. We did not use random cropping for experiments with VGG, for consistency with Wilson et al. [2017].
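The standardization step above can be sketched in NumPy (our own illustrative code; the 1/√N clamp on the standard deviation mirrors the documented behavior of tf.image.per_image_standardization):

```python
import numpy as np

def per_image_standardization(image):
    """Standardize one image: subtract the mean over all pixels and channels,
    then divide by the (adjusted) standard deviation."""
    image = np.asarray(image, dtype=np.float64)
    num_elements = image.size
    mean = image.mean()
    # Clamp the stddev to 1/sqrt(N), as TensorFlow does, to avoid dividing
    # by zero on uniform images.
    adjusted_std = max(image.std(), 1.0 / np.sqrt(num_elements))
    return (image - mean) / adjusted_std
```

The output has zero mean, and unit standard deviation whenever the clamp is inactive.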

ImageNet: We augmented images at training time by resizing each image, taking a random 224×224 pixel crop, randomly horizontally reflecting the cropped images, and randomly distorting the image colors. At evaluation time, we performed a single central 224×224 pixel crop. In both training and evaluation, we then subtracted the global mean RGB value from each pixel, using the values computed by Simonyan and Zisserman [2014] (see https://gist.github.com/ksimonyan/211839e770f7b538e2d8#description for the mean RGB values used).

B.2 Model Descriptions

Simple CNN is identical to the base model described in Shallue et al. [2019]. It consists of 2 convolutional layers with max pooling, followed by 1 fully connected layer. The convolutional layers use 5×5 filters with stride 1, “same” padding, and the ReLU activation function. Max pooling uses a 2×2 window with stride 2. The convolutional layers have 32 and 64 filters, respectively, and the fully connected layer has 1024 units. The model does not use batch normalization.

CNN is the “All-CNN-C” model from Springenberg et al. [2014], as used in Schneider et al. [2019]. The model consists of 3 convolutional layer blocks with max pooling. The convolutional layers use 3×3 filters with stride 1, “same” padding, and the ReLU activation function. Max pooling uses a 2×2 window with stride 2. The convolutional layer blocks have 96, 192, and 192 filters, respectively. As in Schneider et al. [2019], we used ℓ2 regularization.

ResNet is described in He et al. [2016a]. We used the improved residual block described in He et al. [2016b]. We used batch normalization [Ioffe and Szegedy, 2015] with an exponential moving average (EMA) decay of 0.997 for ResNet-32, and ghost batch normalization [Hoffer et al., 2017] with a ghost batch size of 32 for ResNet-50.

VGG is based on “model C” from Simonyan and Zisserman [2014]. It consists of 13 convolutional layers followed by 3 fully connected hidden layers. We followed the modification used by Wilson et al. [2017], which adds batch normalization layers.

LSTM is a two hidden-layer LSTM model [Hochreiter and Schmidhuber, 1997] identical to the model used in Schneider et al. [2019]. It uses 128 embedding dimensions and 128 hidden units.

Transformer is the “base” model described in Vaswani et al. [2017]. We used it as an autoregressive language model by applying the decoder directly to the sequence of word embeddings for each sentence. Unlike the default implementation, we removed dropout regularization and used separate weight matrices for the input embedding layer and the pre-softmax linear transformation, as we observed that these choices led to better-performing models.

Appendix C Estimating trial outcomes via bootstrap

Our tuning protocol corresponds to running trials with quasi-random metaparameter values sampled uniformly from the search space until k feasible trials are obtained, with k depending on the workload. We then select the best trial, based on our statistic of interest, over those k trials.
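The paper does not spell out its quasi-random generator at this point, but low-discrepancy sequences of the Halton/van der Corput family are a standard choice for uniform quasi-random search; a minimal sketch (ours, for illustration):

```python
def van_der_corput(index, base=2):
    """Return the index-th term (1-indexed) of the base-b van der Corput
    sequence in [0, 1), by reflecting the base-b digits of the index."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_point(index, bases=(2, 3)):
    """One quasi-random point in the unit hypercube: one van der Corput
    coordinate per (distinct prime) base."""
    return tuple(van_der_corput(index, b) for b in bases)
```

Scaling each coordinate of these points into a (possibly log-transformed) metaparameter range yields search points that cover the hypercube more evenly than independent uniform draws.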

We used the following bootstrap procedure to estimate the means and uncertainties of our tuning protocol. We ran n ≥ k trials, with n depending on the workload. Then, for each bootstrap sample, we resampled the dataset of n trials with replacement and computed our statistic on the first k trials of the resampled dataset. We collected a large number of such bootstrap samples each time, and from those computed the means and the lower and upper percentiles of the bootstrap distribution. We used this procedure to generate the means and error bars for each plot.
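A compact sketch of this bootstrap (function and variable names are ours; the percentile levels shown are illustrative, since the paper's exact levels are not reproduced here):

```python
import numpy as np

def bootstrap_best_trial(trial_errors, k, num_samples=1000, seed=0):
    """Bootstrap the statistic 'best error among the first k of n resampled
    trials'. Returns the bootstrap mean and an interval of percentiles."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(trial_errors, dtype=float)
    n = len(errors)
    stats = np.empty(num_samples)
    for i in range(num_samples):
        resampled = errors[rng.integers(0, n, size=n)]  # sample n with replacement
        stats[i] = resampled[:k].min()                  # best of the first k trials
    return stats.mean(), np.percentile(stats, [2.5, 97.5])
```

The returned mean and percentiles summarize how the best-of-k outcome varies across hypothetical reruns of the tuning protocol.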

Simple CNN on Fashion MNIST used ; ResNet-32 on CIFAR-100 used ; ResNet-50 on ImageNet used ; Transformer on LM1B used ; VGG on CIFAR-10 with our code used for tuning the learning rate schedule and for tuning the learning rate schedule, , and regularization; CNN on CIFAR-10 used ; LSTM on War and Peace used for tuning just the learning rate and for tuning the learning rate schedule and .

The sole exceptions to this bootstrap procedure are the two left panels of Figure 3, for which we used a similar procedure to Wilson et al. [2017] to ensure comparability. For each optimizer, we selected the trial that minimized validation error in our final search space and ran the same metaparameter values 5 times, reporting the mean, minimum, and maximum test error over those 5 runs in Figure 3. This is slightly different to Wilson et al. [2017], who chose the trial that minimized training error and reported validation error. When tuning the learning rate and , we used 24 trials per optimizer in the initial search space (which we used to select the final search space), and 16 trials per optimizer in the final search space.

Appendix D Metaparameter Search Spaces

When tuning metaparameters over a large range, we found that our search could sometimes be made more efficient if we parametrized the search space in a way that decorrelated the axes of the space. For example, with Momentum and Nesterov we observed a clear relationship between the initial learning rate η and the momentum parameter γ: smaller values of η require larger values of γ for good performance, and vice versa. Indeed, Shallue et al. [2019] suggested that these optimizers are governed by the “effective learning rate” η / (1 − γ), and inspired by this, we found that searching over (η / (1 − γ), γ) instead of (η, γ) usually led to a more efficient metaparameter search. Similarly, with Adam and NAdam we observed a relationship between the initial learning rate η and the parameter ε: larger values of η require larger values of ε for good performance, and vice versa. This is not surprising given the analysis in Appendix A, which showed that, for large ε, η / ε is analogous to the effective learning rate of Adam and NAdam. We found that searching over (η / ε, ε) was usually more efficient than searching over (η, ε). We used these techniques in a subset of our experiments.
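The decorrelated parametrization can be sketched as follows (ranges and names are ours, for illustration only): sample the effective learning rate and the momentum on log scales, then recover the raw learning rate.

```python
import numpy as np

def log_uniform(rng, low, high):
    """Sample log-uniformly from [low, high]."""
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

def sample_momentum_space(rng):
    """Sample (learning rate, momentum) via the 'effective learning rate'
    eta / (1 - gamma) instead of eta directly."""
    eff_lr = log_uniform(rng, 1e-4, 1e1)           # illustrative range
    one_minus_gamma = log_uniform(rng, 1e-3, 1.0)  # gamma in [0, 0.999]
    eta = eff_lr * one_minus_gamma                 # recover the raw learning rate
    return eta, 1.0 - one_minus_gamma
```

Sampling 1 − γ rather than γ keeps the momentum axis log-uniform near 1, where small changes matter most.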

Below we report the search spaces used for our experiments. We include both the initial search spaces, which we used to refine the final search spaces, and the final spaces used to generate the plots. When only one search space was used, we denote the initial space as final. Learning rates, momentum parameters, ε, regularization coefficients, and combinations thereof are always tuned on a log scale. The number of samples from each search space is specified in Appendix C.

D.1 CNN on Fashion MNIST

We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor within the ranges shown in the tables below. We did not use regularization or weight decay.
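A minimal sketch of a linear learning-rate decay schedule of this form (parameter names are ours): the rate decays linearly from its initial value to initial × decay_factor over the chosen number of decay steps, then stays constant.

```python
def linear_decay(step, eta0, decay_factor, decay_steps):
    """Linearly anneal from eta0 down to eta0 * decay_factor over
    decay_steps training steps, then hold constant."""
    frac = min(step / decay_steps, 1.0)
    return eta0 * (1.0 - (1.0 - decay_factor) * frac)
```

With this parameterization, the schedule is fully specified by the initial learning rate, the decay factor, and the number of decay steps, which is why all three appear in the search spaces below.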

Table 3: SGD
Table 4: Momentum
Table 5: Nesterov
Table 6: RMSProp
Table 7: Adam
Table 8: NAdam

D.2 ResNet-32 on CIFAR-10

We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor within the values shown in the tables below. λ denotes the ℓ2 regularization coefficient.

Table 9: SGD
Table 10: Momentum
Table 11: Nesterov
Table 12: RMSProp
Table 13: Adam
Table 14: NAdam

D.3 ResNet-50 on ImageNet

We used linear learning rate decay for all experiments. We tuned the number of decay steps, as a fraction of the number of training steps, and the learning rate decay factor within the values shown in the tables below. λ denotes the weight decay coefficient, and the label smoothing coefficient is also listed in the tables below.

Table 15: SGD
Table 16: Momentum