lookahead.pytorch
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate that Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings, on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
Despite their simplicity, SGD-like algorithms remain competitive for neural network training against advanced second-order optimization methods. Large-scale distributed optimization algorithms (Goyal et al., 2017; You et al., 2018) have shown impressive performance in combination with improved learning rate scheduling schemes (Vaswani et al., 2017; Radford et al., 2018), yet variants of SGD remain the core algorithm in the distributed systems. The recent improvements to SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad (Duchi et al., 2011) and Adam (Kingma and Ba, 2014), and (2) accelerated schemes, such as Polyak heavy-ball (Polyak, 1964) and Nesterov momentum (Nesterov, 1983). Both approaches make use of the accumulated past gradient information to achieve faster convergence. However, obtaining their improved performance in neural networks often requires costly hyperparameter tuning (Montavon et al., 2012).
In this work, we present Lookahead, a new optimization method that is orthogonal to these previous approaches. Lookahead first updates the "fast weights" (Hinton and Plaut, 1987) $k$ times using any standard optimizer in its inner loop before updating the "slow weights" once in the direction of the final fast weights. We show that this update reduces variance. We find that Lookahead is less sensitive to suboptimal hyperparameters and therefore lessens the need for extensive hyperparameter tuning. By using Lookahead with inner optimizers such as SGD or Adam, we achieve faster convergence across different deep learning tasks with minimal computational overhead.
Empirically, we evaluate Lookahead by training classifiers on the CIFAR-10/100 (Krizhevsky, 2009) and ImageNet datasets (Deng et al., 2009), observing faster convergence on the ResNet-50 and ResNet-152 architectures (He et al., 2016). We also trained LSTM language models on the Penn Treebank dataset (Marcus et al., 1993) and Transformer-based (Vaswani et al., 2017) neural machine translation models on the WMT 2014 English-to-German dataset. For all tasks, using Lookahead leads to improved convergence over the inner optimizer and often improved generalization performance while being robust to hyperparameter changes. Our experiments demonstrate that Lookahead is robust to changes in the inner loop optimizer, the number of fast weight updates, and the slow weights learning rate.

In this section, we describe the Lookahead algorithm and discuss its properties. Lookahead maintains a set of slow weights $\phi$ and fast weights $\theta$; the slow weights are synced with the fast weights every $k$ updates. The fast weights are updated by applying $A$, any standard optimization algorithm, to batches of training examples sampled from the dataset $\mathcal{D}$. After $k$ inner optimizer updates using $A$, the slow weights are updated towards the fast weights by linearly interpolating in weight space, $\phi \leftarrow \phi + \alpha(\theta - \phi)$. We denote the slow weights learning rate as $\alpha$. After each slow weights update, the fast weights are reset to the current slow weights value. Pseudocode is provided in Algorithm 1.

Standard optimization methods typically require carefully tuned learning rates to prevent oscillation and slow convergence. This is even more important in the stochastic setting (Martens, 2014; Wu et al., 2018). Lookahead, however, benefits from a larger learning rate in the inner loop. When oscillating in the high curvature directions, the fast weight updates make rapid progress along the low curvature directions. The slow weights help smooth out the oscillations through the parameter interpolation. The combination of fast weights and slow weights improves learning in high curvature directions, reduces variance, and enables Lookahead to converge rapidly in practice.
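To make the two-loop structure concrete, here is a minimal PyTorch sketch of the update just described. It is not the reference implementation from this repository; the class and attribute names are our own, and it simply maintains the inner optimizer's internal state (the maintain/interpolate/reset options are discussed later).

```python
import torch


class Lookahead:
    """Minimal sketch of the two-loop update: k fast steps, one slow step.

    Wraps any torch.optim optimizer as the inner optimizer A; names here
    (Lookahead, inner, counter, slow) are ours, not the repository's.
    """

    def __init__(self, inner, k=5, alpha=0.5):
        self.inner = inner      # e.g. torch.optim.SGD or Adam
        self.k = k              # number of fast-weight updates per slow step
        self.alpha = alpha      # slow weights learning rate
        self.counter = 0
        # A single extra copy of the learnable parameters: the slow weights.
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in inner.param_groups]

    def zero_grad(self):
        self.inner.zero_grad()

    @torch.no_grad()
    def step(self):
        self.inner.step()       # fast update: theta <- theta + A(L, theta, d)
        self.counter += 1
        if self.counter % self.k == 0:
            for group, slow_group in zip(self.inner.param_groups, self.slow):
                for fast, slow in zip(group["params"], slow_group):
                    slow.add_(fast - slow, alpha=self.alpha)  # phi += alpha*(theta - phi)
                    fast.copy_(slow)                          # reset fast weights
```

A fast step is just `optimizer.step()` on the wrapper; every $k$-th call also performs the interpolation and reset, so an existing training loop needs no other changes.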
Figure 1 shows the trajectory of both the fast weights and slow weights during the optimization of a ResNet-32 model on CIFAR-100. While the fast weights explore around the minima, the slow weight updates push Lookahead aggressively towards an area of improved test accuracy, a region which remains unexplored by SGD after 20 updates.
We can characterize the trajectory of the slow weights as an exponential moving average (EMA) of the final fast weights within each inner loop, regardless of the inner optimizer. After $k$ inner-loop steps we have:

$$\phi_{t+1} = \phi_t + \alpha(\theta_{t,k} - \phi_t) \tag{1}$$
$$\phantom{\phi_{t+1}} = \alpha\left[\theta_{t,k} + (1-\alpha)\theta_{t-1,k} + \dots + (1-\alpha)^{t}\theta_{0,k}\right] + (1-\alpha)^{t+1}\phi_0 \tag{2}$$
Intuitively, the slow weights heavily utilize recent proposals from the fast weight optimization but maintain some influence from previous fast weights. We show that this has the effect of reducing variance in Section 3.1. While a Polyak-style average has further theoretical guarantees, our results match the claim that "an exponentially-decayed moving average typically works much better in practice" (Martens, 2014).
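The EMA identity above is easy to verify numerically. The following sketch (with an arbitrary random sequence standing in for the final fast weights $\theta_{t,k}$) unrolls the recursion in Equation 1 and compares it against the closed form in Equation 2:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T = 0.5, 8
theta = rng.normal(size=T)          # stand-in for the final fast weights theta_{t,k}
phi0 = rng.normal()

# Recursive update: phi_{t+1} = (1 - alpha) * phi_t + alpha * theta_{t,k}
phi = phi0
for t in range(T):
    phi = (1 - alpha) * phi + alpha * theta[t]

# Closed form (Eq. 2): exponentially decayed average of the theta's
weights = alpha * (1 - alpha) ** np.arange(T - 1, -1, -1)
closed = weights @ theta + (1 - alpha) ** T * phi0

assert np.isclose(phi, closed)
```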
Within each inner loop, the trajectory of the fast weights depends on the choice of the underlying optimizer. Given an optimization algorithm $A$ that takes in an objective function $L$ and the current mini-batch training examples $d$, we have the update rule for the fast weights:

$$\theta_{t,i+1} = \theta_{t,i} + A(L, \theta_{t,i}, d), \quad d \sim \mathcal{D} \tag{3}$$
We have the choice of maintaining, interpolating, or resetting the internal state (e.g., momentum) of the inner optimizer. Every choice improves convergence over the inner optimizer alone. We describe this tradeoff on the CIFAR dataset in Appendix C.5 and maintain internal state for the other experiments.
Lookahead has a constant computational overhead, due to parameter copying and basic arithmetic operations, that is amortized across the $k$ inner-loop updates. The number of operations is $O(\frac{k+1}{k})$ times that of the inner optimizer. Lookahead maintains a single additional copy of the learnable parameters of the model.
The step size in the direction $(\theta - \phi)$ is controlled by $\alpha$. By taking a quadratic approximation of the loss, we present a principled way of selecting $\alpha$.
Proposition 1 (Optimal slow weights step size). For a quadratic loss function $L(x) = \frac{1}{2}x^T A x - b^T x$, the step size $\alpha^*$ that minimizes the loss for two points $\phi_t$ and $\theta_{t,k}$ is given by:

$$\alpha^* = \arg\min_\alpha L\big(\phi_t + \alpha(\theta_{t,k} - \phi_t)\big) = \frac{(\theta_{t,k} - \phi_t)^T A (x^* - \phi_t)}{(\theta_{t,k} - \phi_t)^T A (\theta_{t,k} - \phi_t)},$$

where $x^* = A^{-1}b$ minimizes the loss.
Proof is in the appendix. Using quadratic approximations for the curvature, which is typical in second-order optimization (Duchi et al., 2011; Kingma and Ba, 2014; Martens and Grosse, 2015), we can derive an estimate for the optimal $\alpha$ more generally. The full Hessian is typically intractable, so we instead use the aforementioned approximations, such as the diagonal approximation to the empirical Fisher used by the Adam optimizer (Kingma and Ba, 2014). This approximation works well in our numerical experiments if we clip the magnitude of the step size. At each slow weights update, we compute an estimate $\hat{\alpha}^*$ by substituting the curvature approximation into Proposition 1 and clipping the result. Setting a positive lower bound on $\hat{\alpha}^*$ improves the stability of our algorithm. We evaluate the performance of this adaptive scheme versus a fixed scheme and standard Adam on a ResNet-18 trained on CIFAR-10 with two different learning rates and show the results in Figure 2. Additional hyperparameter details are given in Appendix C. Both the fixed and adaptive Lookahead offer improved convergence.
In practice, a fixed choice of $\alpha$ offers similar convergence benefits and tends to generalize better. Fixing $\alpha$ avoids the need to maintain an estimate of the empirical Fisher, which incurs a memory and computational cost when the inner optimizer does not maintain such an estimate, e.g., SGD. We thus use a fixed $\alpha$ for the rest of our deep learning experiments.
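As an illustration only, a clipped adaptive step size along the lines described above might be computed as follows. The function and its arguments are hypothetical: we assume flat tensors for $\phi$, $\theta$, the gradient at $\theta$, and a strictly positive diagonal curvature estimate $\hat{A}$ (e.g., Adam's second-moment estimate), and we approximate the minimizer with a diagonal Newton step; the paper's exact estimator may differ.

```python
import torch

@torch.no_grad()
def adaptive_alpha(slow, fast, grad, diag_curv, alpha_low=0.1):
    """Hypothetical clipped estimate of the optimal slow weights step size.

    Args (all flat tensors; names are ours): slow = phi, fast = theta,
    grad = gradient at theta, diag_curv = positive diagonal curvature A_hat.
    Approximates the minimizer with a diagonal Newton step from theta, then
    applies the closed form from Proposition 1 with A replaced by A_hat.
    """
    x_star = fast - grad / diag_curv           # x* ~= theta - A_hat^{-1} grad
    d = fast - slow                            # theta - phi
    alpha = (d * diag_curv * (x_star - slow)).sum() / (d * diag_curv * d).sum()
    return float(alpha.clamp(alpha_low, 1.0))  # clipping stabilizes training
```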
We analyze Lookahead on the noisy quadratic model to shed light on its convergence guarantees. While simple, this model is a proxy for neural network optimization and effectively optimizing it remains a challenging open problem (Schaul et al., 2013; Martens and Grosse, 2015; Wu et al., 2018; Zhang et al., 2019).
We use the same model as in Schaul et al. (2013) and Wu et al. (2018):

$$\hat{L}(x) = \frac{1}{2}(x - c)^T A (x - c) \tag{4}$$

with $c \sim \mathcal{N}(x^*, \Sigma)$. We assume that both $A$ and $\Sigma$ are diagonal and that $x^* = 0$.^1 We use $a_i$ and $\sigma_i^2$ to denote the diagonal elements of $A$ and $\Sigma$, respectively. Taking the expectation over $c$, the expected loss of the iterates $\theta^{(t)}$ is

$$\mathbb{E}[\hat{L}(\theta^{(t)})] = \frac{1}{2}\sum_i a_i\left(\mathbb{E}[\theta_i^{(t)}]^2 + \mathbb{V}[\theta_i^{(t)}] + \sigma_i^2\right). \tag{5}$$

^1 Classical momentum's iterates are invariant to translations and rotations (see, e.g., Sutskever et al. (2013)) and Lookahead's linear interpolation is also invariant to such changes. The assumption on the noise is less trivial and we refer to Wu et al. (2018) and Zhang et al. (2019) for discussion.
Analyzing the expected dynamics of the SGD iterates and the slow weights gives the following result.
Proposition 2 (Noisy quadratic convergence). Let the learning rate $\gamma$ of SGD and Lookahead be chosen such that $0 < \gamma a_i < 2$ for all $i$. In the noisy quadratic model, the iterates of SGD and of Lookahead with SGD as its inner optimizer converge to $0$ in expectation, and the variances converge to the following fixed points:

$$V^*_{\mathrm{SGD}} = \gamma^2 A^2 \Sigma \left(I - (I - \gamma A)^2\right)^{-1} \tag{6}$$
$$V^*_{\mathrm{LA}} = \alpha^2\left(I - (I-\gamma A)^{2k}\right)\left[\alpha^2\left(I - (I-\gamma A)^{2k}\right) + 2\alpha(1-\alpha)\left(I - (I-\gamma A)^{k}\right)\right]^{-1} V^*_{\mathrm{SGD}} \tag{7}$$
For the Lookahead variance fixed point, the first factor is always smaller than 1 for $\alpha \in (0, 1)$, and thus Lookahead has a variance fixed point that is strictly smaller than that of the SGD inner-loop optimizer. Evidence of this phenomenon is present in Figure 10.
In Proposition 2, we use the same learning rate for both methods. In order to compare the convergence of the two methods, we should choose hyperparameters such that the variance fixed points are equal. In Figure 3 we show the expected loss after 1000 updates (computed analytically) for both Lookahead and SGD. At this stage in optimization (and onwards), Lookahead outperforms SGD across the broad spectrum of $\alpha$ values. Details and additional discussion are in Appendix B.
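The variance reduction in Proposition 2 can be observed directly by simulating the one-dimensional noisy quadratic. In this sketch (hyperparameter values are ours, chosen for illustration), we estimate the steady-state variance of SGD and of Lookahead-wrapped SGD at the same inner learning rate, and compare the SGD estimate against the fixed point in Equation 6:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma2 = 1.0, 1.0            # curvature and noise variance (x* = 0)
gamma, k, alpha = 0.5, 5, 0.5   # shared inner learning rate, k, slow step size

def steady_state_variance(lookahead, steps=200_000):
    x = slow = 5.0
    tail = []
    for t in range(1, steps + 1):
        c = rng.normal(0.0, np.sqrt(sigma2))
        x -= gamma * a * (x - c)            # SGD step on 0.5 * a * (x - c)^2
        if lookahead and t % k == 0:
            slow += alpha * (x - slow)      # phi <- phi + alpha * (theta - phi)
            x = slow                        # reset fast weight
        if t > steps // 2:                  # discard burn-in
            tail.append(slow if lookahead else x)
    return np.var(tail)

print("SGD variance:      ", steady_state_variance(False))
print("Lookahead variance:", steady_state_variance(True))
# Equation (6) fixed point: gamma^2 a^2 sigma^2 / (1 - (1 - gamma*a)^2)
print("SGD fixed point:   ", gamma**2 * a**2 * sigma2 / (1 - (1 - gamma * a)**2))
```

With these settings the SGD fixed point is $1/3$, and the Lookahead estimate comes out strictly smaller, as the proposition predicts.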
In the previous section we showed that, on the noisy quadratic model, Lookahead is able to reduce the variance of the SGD optimizer. Here we analyze the quadratic model without noise using gradient descent with classical momentum (CM) (Polyak, 1964; Goh, 2017) and show that when the system is underdamped, Lookahead is able to improve the convergence rate.
As before, we restrict our attention to diagonal quadratic functions with minima at the origin. Given an initial point, we wish to measure the rate of contraction towards the optimum. We follow the approach of O'Donoghue and Candes (2015) and model the optimization of this function as a linear dynamical system. Details are in Appendix B.
As in Lucas et al. (2018), to better understand the sensitivity of Lookahead to misspecified conditioning we fix the momentum coefficient of classical momentum and explore the convergence rate over varying condition number under the optimal learning rate. As expected, Lookahead has slightly worse convergence in the overdamped regime where momentum is set too low and CM is slowly, monotonically converging to the optimum. However, when the system is underdamped (and oscillations occur) Lookahead is able to significantly improve the convergence rate by skipping to a better parameter setting during oscillation.
Our work is inspired by recent advances in understanding the loss surface of deep neural networks. While the idea of following the trajectory of weights dates back to Ruppert (1988) and Polyak and Juditsky (1992), averaging weights in neural networks was not carefully studied until more recently. Garipov et al. (2018) observe that the final weights of two independently trained neural networks can be connected by a curve with low loss. Izmailov et al. (2018) propose Stochastic Weight Averaging (SWA), which averages the weights of different neural networks obtained during training. Parameter averaging schemes are used to create ensembles in natural language processing tasks (Jean et al., 2014; Merity et al., 2017) and in training Generative Adversarial Networks (Yazıcı et al., 2018). In contrast to previous approaches, which generally focus on generating a set of parameters at the end of training, Lookahead is an optimization algorithm which performs parameter averaging during the training procedure to achieve faster convergence.

The Reptile algorithm, proposed by Nichol et al. (2018), samples tasks in its outer loop and runs an optimization algorithm on each task within the inner loop. The initial weights are then updated in the direction of the new weights. While the functionality is similar, the application and setting are starkly different. Reptile samples different tasks and aims to find parameters which act as good initial values for new tasks sampled at test time. Lookahead does not sample new tasks for each outer loop and aims to take advantage of the geometry of loss surfaces to improve convergence.
Katyusha (Allen-Zhu, 2017), an accelerated form of SVRG (Johnson and Zhang, 2013), also uses an outer and inner loop during optimization. Katyusha checkpoints parameters during optimization, and within each inner-loop step the parameters are pulled back towards the latest checkpoint. Lookahead computes the pullback only at the end of the inner loop, and the gradient updates do not utilize the SVRG correction (though this would be possible). While Katyusha has theoretical guarantees in the convex optimization setting, the SVRG-based update does not work well for neural networks (Defazio and Bottou, 2018).
Anderson acceleration (Anderson, 1965) and other related extrapolation techniques (Brezinski and Zaglia, 2013) have a similar flavor to Lookahead. These methods keep track of all iterates within an inner loop and then compute a linear combination which extrapolates the iterates towards their fixed point. This presents additional challenges: the memory overhead grows with the number of inner-loop steps, and the best linear combination must be found. Scieur et al. (2018a, b) propose a method for finding a good linear combination, apply this approach to deep learning problems, and report both improved convergence and generalization. However, their method requires on the order of $k$ times more memory than Lookahead. Lookahead can be seen as a simple version of Anderson acceleration wherein only the first and last iterates are used.
We completed a thorough evaluation of the Lookahead optimizer on a range of deep learning tasks against well-calibrated baselines. We explored image classification on CIFAR-10/CIFAR-100 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009). We also trained LSTM language models on the Penn Treebank dataset (Marcus et al., 1993) and Transformer-based (Vaswani et al., 2017) neural machine translation models on the WMT 2014 English-to-German dataset. For all of our experiments, every algorithm consumed the same amount of training data.
The CIFAR-10 and CIFAR-100 datasets for classification consist of 32×32 color images, with 10 and 100 different classes respectively, split into a training set with 50,000 images and a test set with 10,000 images. We ran all our CIFAR experiments with 3 seeds and trained for 200 epochs on a ResNet-18 (He et al., 2016) with batches of 128 images, decaying the learning rate by a factor of 5 at the 60th, 120th, and 160th epochs. Additional details are given in Appendix C.
We summarize our results in Figure 5.^2 Note that Lookahead achieves significantly faster convergence throughout training, even though the learning rate schedule is optimized for the inner optimizer; future work can involve building a learning rate schedule for Lookahead.

^2 We refer to SGD with heavy-ball momentum (Polyak, 1964) as SGD.
Table: Test accuracy on CIFAR-10 and CIFAR-100 for the SGD, Polyak, Adam, and Lookahead optimizers.
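For reference, a CIFAR-10 setup in the spirit of the one above might be wired up as follows. This is a sketch rather than the exact experimental code: it reuses the `Lookahead` wrapper sketched in Section 2, picks one point from the hyperparameter grids in Appendix C, and uses torchvision's ResNet-18 as a stand-in for the CIFAR variant used in the paper.

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)
inner = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                        weight_decay=3e-4)   # one point from the Appendix C grid
optimizer = Lookahead(inner, k=5, alpha=0.5)
# The step-wise decay schedule acts on the inner optimizer's learning rate.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    inner, milestones=[60, 120, 160], gamma=0.2)

for epoch in range(200):
    # ... one pass over the training loader: forward, loss.backward(),
    # optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()
```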
The 1000-way ImageNet task (Deng et al., 2009) is a classification task that contains roughly 1.28 million training images and 50,000 validation images. We use the official PyTorch implementation^3 and the ResNet-50 and ResNet-152 (He et al., 2016) architectures. Our baseline algorithm is SGD with an initial learning rate of 0.1 and momentum value of 0.9. We train for 90 epochs and decay our learning rate by a factor of 10 at the 30th and 60th epochs. For Lookahead, we set $k = 5$ and slow weights step size $\alpha = 0.5$.

^3 Implementation available at https://github.com/pytorch/examples/tree/master/imagenet.

Motivated by the improved convergence we observed in our initial experiment, we tried a more aggressive learning rate decay schedule where we decay the learning rate by a factor of 10 at the 30th, 48th, and 58th epochs. Using such a schedule, we reach 75% single crop top-1 accuracy on ImageNet in just 50 epochs and reach 75.5% top-1 accuracy in 60 epochs. The results are shown in Figure 6.
Optimizer        LA     SGD
Epoch 50 Top-1   75.13  74.43
Epoch 50 Top-5   92.22  92.15
Epoch 60 Top-1   75.49  75.15
Epoch 60 Top-5   92.53  92.56
To test the scalability of our method, we ran Lookahead with the aggressive learning rate decay on ResNet-152. We match the single crop top-1 accuracy reported in He et al. (2016) in just 49 epochs and improve upon it by epoch 60. Other approaches for improving convergence on ImageNet can require hundreds of GPUs, or tricks such as ramping up the learning rate and adaptive batch sizes (Goyal et al., 2017; Jia et al., 2018). The fastest convergence we are aware of uses an approximate second-order method to train a ResNet-50 to 75% top-1 accuracy in 35 epochs with 1,024 GPUs (Osawa et al., 2018). In contrast, Lookahead requires changing a single line of code and easily scales to ResNet-152.
We trained LSTMs (Hochreiter and Schmidhuber, 1997) for language modeling on the Penn Treebank dataset. We followed the model setup of Merity et al. (2017) and made use of their publicly available code in our experiments. We did not include the fine-tuning stages. We searched over hyperparameters for both Adam and SGD (without momentum) to find the model which gave the best validation performance. We then performed an additional small grid search on each of these methods with Lookahead. Each model was trained for 750 epochs. We show training curves for each model in Figure 6(a).
Using Lookahead with Adam, we were able to achieve the fastest convergence and the best training, validation, and test perplexity. The models trained with SGD took much longer to converge (around 700 epochs) and were unable to match the final performance of Adam. Using Polyak weight averaging (Polyak and Juditsky, 1992) with SGD, as suggested by Merity et al. (2017) and referred to as ASGD, we were able to improve on the performance of Adam but were unable to match the performance of Lookahead. Full results are given in Table 3 and additional details are in Appendix C.
We trained Transformer-based models (Vaswani et al., 2017) on the WMT 2014 English-to-German translation task on a single Tensor Processing Unit (TPU) node. We took the base model from Vaswani et al. (2017) and trained it using the proposed warmup-then-decay learning rate scheduling scheme and, additionally, the same scheme wrapped with Lookahead. We found that Lookahead speeds up the early stages of training over Adam and the more recently proposed AdaFactor (Shazeer and Stern, 2018) optimizer. All methods converge to a similar training loss and BLEU score at the end; see Figure 6(b) and Table 4.

Our NMT experiments further confirm that Lookahead improves the robustness of the inner-loop optimizer. We found that Lookahead enables a wider range of learning rate choices ({0.02, 0.04, 0.06}) for the Transformer model that all converge to similar final losses. Full details are given in Appendix C.4.
We demonstrate empirically on the CIFAR dataset that Lookahead consistently delivers fast convergence across different hyperparameter settings. We fix the slow weights step size $\alpha$ and the number of inner steps $k$, and run Lookahead on inner SGD optimizers with different learning rates and momentum; results are shown in Figure 8. In general, we observe that Lookahead can train with higher learning rates on the base optimizer with little tuning of $k$ and $\alpha$. This agrees with our discussion of variance reduction in Section 3.1. We also evaluate robustness to the Lookahead hyperparameters by fixing the inner optimizer and evaluating runs with varying updates $k$ and step size $\alpha$; these results are shown in Figure 9.
Table: CIFAR test accuracy for Lookahead with inner-loop steps $k \in \{5, 10\}$ and step sizes $\alpha \in \{0.5, 0.8\}$.
To get a better understanding of the Lookahead update, we also plot the test accuracy for every update on epoch 65 in Figure 10. We find that within each inner loop the fast weights may lead to substantial degradation in task performance; this reflects our analysis of the higher variance of the inner-loop updates in Section 3.1. The slow weights step recovers the outer-loop variance and restores the test accuracy.
In this paper, we present Lookahead, an algorithm that can be combined with any standard optimization method. Our algorithm computes weight updates by looking ahead at the sequence of "fast weights" generated by another optimizer. We illustrate how Lookahead improves convergence by reducing variance and show strong empirical results on many deep learning benchmark datasets.
Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1200–1205. ACM, 2017.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

Here we present the details of the noisy quadratic analysis and the proof of Proposition 2.
From Wu et al. [2018], we can compute the dynamics of SGD with learning rate $\gamma$ as follows:

$$\mathbb{E}[\theta^{(t+1)}] = (I - \gamma A)\,\mathbb{E}[\theta^{(t)}] \tag{8}$$
$$\mathbb{V}[\theta^{(t+1)}] = (I - \gamma A)^2\,\mathbb{V}[\theta^{(t)}] + \gamma^2 A^2 \Sigma \tag{9}$$
We now compute the dynamics of the slow weights of Lookahead.

The Lookahead slow weights have the following trajectories:

$$\mathbb{E}[\phi_{t+1}] = \left(1 - \alpha + \alpha(I - \gamma A)^k\right)\mathbb{E}[\phi_t] \tag{10}$$
$$\mathbb{V}[\phi_{t+1}] = \left(1 - \alpha + \alpha(I - \gamma A)^k\right)^2\mathbb{V}[\phi_t] + \alpha^2 \sum_{i=0}^{k-1}(I - \gamma A)^{2i}\,\gamma^2 A^2 \Sigma \tag{11}$$
Proof. The expectation trajectory follows from the SGD dynamics:

$$\mathbb{E}[\phi_{t+1}] = (1-\alpha)\mathbb{E}[\phi_t] + \alpha\mathbb{E}[\theta_{t,k}] = \left(1 - \alpha + \alpha(I - \gamma A)^k\right)\mathbb{E}[\phi_t].$$

For the variance, we can write $\mathbb{V}[\phi_{t+1}] = (1-\alpha)^2\mathbb{V}[\phi_t] + \alpha^2\mathbb{V}[\theta_{t,k}] + 2\alpha(1-\alpha)\,\mathrm{cov}(\phi_t, \theta_{t,k})$. We proceed by computing the covariance term recursively. For simplicity, we work with a single element, $\phi$, of the vector (as $A$ is diagonal, each element evolves independently).

A similar derivation yields $\mathrm{cov}(\phi_t, \theta_{t,k}) = (1 - \gamma a)^k\,\mathbb{V}[\phi_t]$. After substituting the SGD variance formula and some rearranging, we recover Equation 11.
∎
We now proceed with the proof of Proposition 2.
First note that if the learning rate is chosen as specified, then each of the trajectories is a contraction map. By Banach’s fixed point theorem, they each have a unique fixed point. Clearly the expectation trajectories contract to zero in each case.
For the variance we can solve for the fixed points directly. For SGD,

$$V^*_{\mathrm{SGD}} = (I - \gamma A)^2 V^*_{\mathrm{SGD}} + \gamma^2 A^2 \Sigma \;\;\Longrightarrow\;\; V^*_{\mathrm{SGD}} = \gamma^2 A^2 \Sigma \left(I - (I - \gamma A)^2\right)^{-1}.$$

For Lookahead, we have

$$V^*_{\mathrm{LA}} = \left(1 - \alpha + \alpha(I - \gamma A)^k\right)^2 V^*_{\mathrm{LA}} + \alpha^2 \sum_{i=0}^{k-1}(I - \gamma A)^{2i}\,\gamma^2 A^2 \Sigma,$$

where for the final equality we use the identity $\sum_{i=0}^{k-1}(I - \gamma A)^{2i} = \left(I - (I - \gamma A)^{2k}\right)\left(I - (I - \gamma A)^2\right)^{-1}$. Some standard manipulations of the denominator in the first term lead to the final solution, Equation 7.
∎
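The geometric-series identity used in the final step is elementary and can be spot-checked per coordinate (the matrices here are diagonal, so a scalar check suffices):

```python
import math

# Scalar spot-check of the geometric-series identity, with x = 1 - gamma * a:
# sum_{i=0}^{k-1} x^(2i) = (1 - x^(2k)) / (1 - x^2)
gamma, a, k = 0.3, 1.7, 5
x = 1 - gamma * a
lhs = sum(x ** (2 * i) for i in range(k))
rhs = (1 - x ** (2 * k)) / (1 - x ** 2)
assert math.isclose(lhs, rhs)
```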
For the same learning rate, Lookahead will achieve a smaller loss, as the variance is reduced more. However, the convergence of the expectation term will be slower: we must compare $1 - \alpha + \alpha(1 - \gamma a_i)^k$ to $(1 - \gamma a_i)^k$, and the latter is always smaller for $\alpha \in (0, 1)$. In our experiments, we observe that Lookahead typically converges much faster than its inner optimizer. We speculate that the learning rate for the inner optimizer is set sufficiently high that the variance reduction term is more important; this is the more common regime for neural networks that attain high validation accuracy, as higher initial learning rates are used to overcome the short-horizon bias [Wu et al., 2018].
In Figure 3 we compared the convergence rates of SGD and Lookahead. We specified the eigenvalues of $A$ according to the worst-case model from Li [2005] (also used by Wu et al. [2018]). We computed the expected loss (Equation 5) for a range of learning rates for SGD, and for Lookahead with a range of $\alpha$ values and fixed $k$, at time $t = 1000$ (by unrolling the above dynamics). We computed the variance fixed point for each learning rate under each optimizer and used this value to compute the optimal loss. Finally, we plot the difference between the expected loss at $t = 1000$ and the final loss, as a function of the final loss. This allows us to compare the convergence performance between SGD and Lookahead optimization settings which converge to the same solution.

In Figure 11 we present additional plots comparing the convergence performance of SGD and Lookahead. In (a) we show the convergence of Lookahead for a single choice of $\alpha$, where our method is able to outperform SGD even for this fixed value. In (b) we show the convergence after only a few updates. Here SGD outperforms Lookahead for some smaller choices of $\gamma$; this is because SGD makes progress on the expectation more rapidly and reduces this part of the loss quickly, which is related to the short-horizon bias phenomenon [Wu et al., 2018]. However, even with only a few updates there are choices of $\alpha$ which outperform SGD.
Here we present additional details on the quadratic convergence analysis.
As in the main text, we will assume that the optimum lies at $x^* = 0$ for simplicity, but the argument easily generalizes. Here we consider the more general case of a quadratic function with non-diagonal curvature. We use $\gamma$ to denote the CM learning rate and $\beta$ for its momentum coefficient.
First, we can stack together a full set of $k$ fast weights and write one outer loop of optimization as a product of matrices acting on this stacked state. Here, one matrix represents the Lookahead interpolation, another represents the update corresponding to classical momentum in the inner loop, and a third is a transition matrix which realigns the fast weight iterates.
Each of these matrices consists of four blocks. The bottom-left block is always an identity matrix that shifts the iterates along one index. The bottom-right column is all zeros, with the top-right column being non-zero only for the matrix which applies the Lookahead interpolation. The top-left row is used to apply the Lookahead/CM updates in each matrix.

After computing the appropriate product of these matrices, we can use standard solvers to compute the eigenvalues which bound the convergence of the linear dynamical system (see, e.g., Lessard et al. [2016] for a recent exposition). Finally, note that because this linear dynamical system corresponds to $k$ updates (or one slow-weight update), we must compute the $k$-th root of the eigenvalue magnitudes to recover the correct convergence bound.
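As a concrete scalar instance of this construction, the following sketch builds the augmented transition matrix for one slow-weight update, with the momentum buffer maintained across slow updates, and computes the per-step convergence rate as the $k$-th root of the spectral radius. The state layout and parameter values are our own choices for illustration:

```python
import numpy as np

def rate(gamma, a, beta, k=5, alpha=0.5, lookahead=True):
    """Per-step convergence rate on the scalar quadratic 0.5 * a * x^2.

    State: [x, v] for classical momentum (CM); [x, v, phi] once the
    Lookahead slow weight phi is appended. The momentum buffer v is
    maintained across slow-weight updates.
    """
    # One CM step: v <- beta*v - gamma*a*x; x <- x + v.
    M2 = np.array([[1 - gamma * a, beta],
                   [   -gamma * a, beta]])
    if not lookahead:
        return max(abs(np.linalg.eigvals(M2)))
    M = np.eye(3)
    M[:2, :2] = M2
    # Slow-weight update: phi <- (1 - alpha)*phi + alpha*x, then x <- phi.
    L = np.array([[alpha, 0.0, 1 - alpha],
                  [  0.0, 1.0,       0.0],
                  [alpha, 0.0, 1 - alpha]])
    T = L @ np.linalg.matrix_power(M, k)   # one outer loop = k inner steps
    return max(abs(np.linalg.eigvals(T))) ** (1.0 / k)

# Underdamped example: CM oscillates with rate sqrt(beta) ~= 0.949.
print("CM:       ", rate(0.1, 1.0, 0.9, lookahead=False))
print("Lookahead:", rate(0.1, 1.0, 0.9))
```

In this underdamped example (complex CM eigenvalues), the Lookahead rate comes out noticeably below CM's $\sqrt{\beta} \approx 0.949$, consistent with the discussion above.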
We present the proof of Proposition 1 for the optimal slow weights step size $\alpha^*$.
We compute the derivative of the loss along the interpolation direction with respect to $\alpha$:

$$\frac{d}{d\alpha} L\big(\phi + \alpha(\theta - \phi)\big) = (\theta - \phi)^T \nabla L\big(\phi + \alpha(\theta - \phi)\big).$$

Setting the derivative to 0 and using $\nabla L(x) = Ax - b = A(x - x^*)$:

$$(\theta - \phi)^T A\big(\phi + \alpha(\theta - \phi) - x^*\big) = 0 \tag{12}$$
$$\Longrightarrow\;\; \alpha^* = \frac{(\theta - \phi)^T A (x^* - \phi)}{(\theta - \phi)^T A (\theta - \phi)} \tag{13}$$
∎
Here we present additional details on the experiments appearing in the main paper.
We run every experiment with three random seeds. Our plots show the mean value with error bars of one standard deviation. We use a standard training procedure that is the same as that of Zagoruyko and Komodakis [2016]. That is, images are zero-padded with 4 pixels on each side, and then a random 32×32 crop is extracted and mirrored horizontally 50% of the time. Inputs are normalized with per-channel means and standard deviations. For computing the loss curves, we iterate through the entire training dataset at the end of every epoch. Lookahead is evaluated on the slow weights of its inner optimizer. To make this evaluation consistent, we evaluate the training loss at the end of each epoch by iterating through the training set again, without performing any gradient updates.

For SGD, we set the momentum to 0.9 and sweep over the learning rates {0.03, 0.05, 0.1, 0.2, 0.3} and weight decay values {0.0003, 0.001, 0.003}. We found AdamW [Loshchilov and Hutter, 2017] to perform better than Adam and refer to it as Adam throughout our CIFAR experiments section. For Adam, we do a grid search over learning rates {3e-4, 1e-3, 3e-3} and weight decay values {0.1, 0.3, 1, 3}. For Polyak averaging, we compute the moving average of SGD using the best weight decay from SGD and sweep over the learning rates {0.05, 0.1, 0.2, 0.3, 0.5}.

For Lookahead, we set the inner optimizer SGD learning rate to {0.1, 0.2} and do a grid search over $k \in \{5, 10\}$ and $\alpha \in \{0.5, 0.8\}$. We report the version of Lookahead that resets momentum in our CIFAR experiments.
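The augmentation pipeline described at the start of this section might look like the following torchvision sketch; the normalization statistics here are the commonly used CIFAR-10 values, which are our assumption rather than a detail given in the paper.

```python
import torchvision.transforms as T

# Zagoruyko & Komodakis style CIFAR augmentation, as described above.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # zero-pad 4 pixels, extract a random 32x32 crop
    T.RandomHorizontalFlip(),      # mirror horizontally 50% of the time
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465),   # assumed per-channel statistics
                std=(0.2470, 0.2435, 0.2616)),
])
```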
We directly wrapped Lookahead around the settings provided in the official PyTorch repository with $k = 5$ and $\alpha = 0.5$. Observing the improved convergence of our algorithm, we tested Lookahead with the aggressive learning rate decay schedule (decaying at the 30th, 48th, and 58th epochs). We run our experiments on 4 Nvidia P100 GPUs with a batch size of 256 and weight decay of 1e-4.
For the language modeling task we used the model and code provided by Merity et al. [2017]. We used the default settings suggested in this codebase at the time of usage, which we report here. The LSTM we trained had 3 layers, each containing 1150 hidden units. We used word embeddings of dimension 400. Within each hidden layer we apply dropout with probability 0.3, and the input embedding layers use dropout with probability 0.65. We applied dropout to the embedding layer itself with probability 0.1. We used the weight drop method proposed in Merity et al. [2017] with probability 0.5. We adopt the regularization proposed in Section 4.6 of Merity et al. [2017]: RNN activations have L2 regularization applied to them with a scaling of 2.0, and temporal activation regularization is applied with scaling 1.0. Finally, all weights receive a weight decay of 1.2e-6.

We trained the model using variable sequence lengths and batches of 80. We apply gradient clipping of 0.25 to all optimizers. During training, if the validation loss has not decreased for 15 epochs, we reduce the learning rate by half. Before applying Lookahead, we completed a grid search over the Adam and SGD optimizers to find competitive baseline models. For SGD we did not apply momentum and searched over a range of learning rates. For Adam we kept the default momentum values and searched over a range of learning rates. We chose the best model by picking the model which achieved the best validation performance at any point during training.

After picking the best SGD/Adam hyperparameters, we trained the models again using Lookahead with the best baseline optimizers in the inner loop. We tried a small grid of inner-loop update counts $k$ and interpolation coefficients $\alpha$. Once again, we report Lookahead's final performance by choosing the parameters which gave the best validation performance during training.
For this task, $k = 5$ or $10$ and $\alpha = 0.5$ or $0.8$ worked best. As in our other experiments, we found that Lookahead was largely robust to different choices of $k$ and $\alpha$. We expect that we could achieve even better results with Lookahead if we jointly optimized the hyperparameters of Lookahead and the underlying optimizer.
For this task, we trained on a single TPU node with 8 workers, each with a minibatch size of 2048. We use the default hyperparameters for Adam [Vaswani et al., 2017] and AdaFactor [Shazeer and Stern, 2018] in our experiments. For Lookahead, we did a minor grid search over the learning rate and $\alpha$ while fixing $k$. We found that a learning rate of 0.04 worked best. After training those models for 250k steps, they all reach around 27 BLEU on newstest2014.
Throughout our paper, we maintain the state of the inner optimizer for simplicity. For SGD with heavy-ball momentum, this corresponds to preserving the momentum buffer. Here, we present a sensitivity study comparing the convergence of Lookahead when maintaining, interpolating, and resetting the momentum. All three improve convergence versus SGD.
Table: CIFAR-10 test accuracy when maintaining, interpolating, or resetting the inner optimizer momentum.