Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
Deep neural networks (DNNs) are currently the best-performing method for many classification problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014) or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where DNNs perform particularly well) is the main computational bottleneck: it often requires several days, even on high-performance GPUs, and any speedups would be of substantial value.
The training of a DNN with $n$ free parameters can be formulated as the problem of minimizing a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$. The commonly used procedure to optimize $f$ is to iteratively adjust $\mathbf{x}_t \in \mathbb{R}^n$ (the parameter vector at time step $t$) using gradient information $\nabla f_t(\mathbf{x}_t)$ obtained on a relatively small $t$-th batch of datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of Gradient Descent (GD) to stochastic optimization of $f$ as follows:
$$\mathbf{x}_{t+1} = \mathbf{x}_t - \eta_t \nabla f_t(\mathbf{x}_t), \quad (1)$$
where $\eta_t$ is a learning rate. One would like to consider second-order information
$$\mathbf{x}_{t+1} = \mathbf{x}_t - \eta_t \mathbf{H}_t^{-1} \nabla f_t(\mathbf{x}_t), \quad (2)$$
but this is often infeasible since the computation and storage of the inverse Hessian $\mathbf{H}_t^{-1}$ is intractable for large $n$. The usual way to deal with this problem, namely limited-memory quasi-Newton methods such as L-BFGS (Liu & Nocedal, 1989),
is not currently in favor in deep learning, not least due to (i) the stochasticity of $\nabla f_t(\mathbf{x}_t)$, (ii) the ill-conditioning of $f$ and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & Amari, 2000). Despite some recent progress in understanding and addressing the latter problems (Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014) are notable examples of such methods.
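For concreteness, here is a minimal sketch of the plain SGD update of eq. (1) in Python/NumPy; the gradient function `grad_fn` is a placeholder assumed to be supplied by the user and is not part of the original text.

```python
import numpy as np

def sgd_step(x, grad_fn, batch, lr):
    """One plain SGD update, x_{t+1} = x_t - eta_t * grad f_t(x_t)  (eq. 1).

    x       -- current parameter vector (array-like)
    grad_fn -- placeholder: callable returning the mini-batch gradient at x
    batch   -- the t-th mini-batch of datapoints
    lr      -- learning rate eta_t
    """
    g = np.asarray(grad_fn(x, batch))  # stochastic gradient on the current batch
    return np.asarray(x) - lr * g      # gradient descent step (eq. 1)
```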
Intriguingly enough, the current state-of-the-art results on CIFAR-10, CIFAR-100, SVHN, ImageNet, PASCAL VOC and MS COCO
datasets were obtained by Residual Neural Networks
(He et al., 2015; Huang et al., 2016c; He et al., 2016; Zagoruyko & Komodakis, 2016) trained without the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum (more specifically, they employ Nesterov's momentum (Nesterov, 1983; 2013)):
$$\mathbf{v}_{t+1} = \mu_t \mathbf{v}_t - \eta_t \nabla f_t(\mathbf{x}_t), \quad (3)$$
$$\mathbf{x}_{t+1} = \mathbf{x}_t + \mathbf{v}_{t+1}, \quad (4)$$
where $\mathbf{v}_t$ is a velocity vector initially set to $\mathbf{0}$, $\eta_t$ is a decreasing learning rate and $\mu_t$ is a momentum rate which defines the trade-off between the current and past observations of $\nabla f_t(\mathbf{x}_t)$. The main difficulty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a fixed constant in (approximately) regular intervals. The blue line in Figure 1 shows an example of such a schedule, as used by Zagoruyko & Komodakis (2016) to obtain the state-of-the-art results on the CIFAR-10, CIFAR-100 and SVHN datasets.
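The sketch below illustrates the momentum update of eqs. (3)-(4) together with a step-wise "divide by a fixed constant at regular intervals" schedule; the decay factor, interval length and initial rate are illustrative placeholders, not the exact values used in any particular paper.

```python
import numpy as np

def step_decay_lr(epoch, lr0=0.1, drop=0.2, every=60):
    """Piecewise-constant schedule: multiply the initial rate by `drop`
    every `every` epochs.  The specific numbers are placeholders."""
    return lr0 * drop ** (epoch // every)

def momentum_step(x, v, grad_fn, batch, lr, mu=0.9):
    """One SGD-with-momentum update following eqs. (3)-(4)."""
    v = mu * v - lr * np.asarray(grad_fn(x, batch))  # velocity update (eq. 3)
    x = np.asarray(x) + v                            # parameter update (eq. 4)
    return x, v
```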
In this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the learning rate is initialized to some value and is scheduled to decrease. Four different instantiations of this new learning rate schedule are visualized in Figure 1. Our empirical results suggest that SGD with warm restarts requires 2 to 4 times fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks obtained right before the restarts in an ensemble, following the approach proposed by Huang et al. (2016a), improves our results further, to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.
When optimizing multimodal functions one may want to find all global and local optima. The tractability of this task depends on the landscape of the function at hand and the budget of function evaluations. Gradient-free optimization approaches based on niching methods (Preuss, 2015) usually can deal with this task by covering the search space with dynamically allocated niches of local optimizers. However, these methods usually work only for relatively small search spaces and do not scale up due to the curse of dimensionality (Preuss, 2010). Instead, the current state-of-the-art gradient-free optimizers employ various restart mechanisms (Hansen, 2009; Loshchilov et al., 2012). One way to deal with multimodal functions is to iteratively sample a large number $\lambda$ of candidate solutions, make a step towards better solutions and slowly shape the sampling distribution to maximize the likelihood of successful steps to appear again (Hansen & Kern, 2004). The larger $\lambda$ is, the more global the search, requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small $\lambda$ and increase it (e.g., by doubling) after each restart. This approach works best on multimodal functions with a global funnel structure and also improves the results on ill-conditioned problems where numerical issues might lead to premature convergence when $\lambda$ is small (Hansen, 2009).
Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with multimodal functions (Ros, 2009). In large-scale settings, where the number of variables $n$ is large, the availability of gradient information provides a speedup of a factor of $n$ w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is sufficient to approach any local optimum to a given precision, and in many cases the problem at hand is unimodal. Fletcher & Reeves (1964) proposed to flush the history of the conjugate gradient method every $n$ or $(n+1)$ iterations. Powell (1977) proposed to check whether enough orthogonality between $\nabla f(\mathbf{x}_{t-1})$ and $\nabla f(\mathbf{x}_t)$ has been lost to warrant another warm restart. Recently, O'Donoghue & Candes (2012) noted that the iterates of accelerated gradient schemes proposed by Nesterov (1983; 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. The authors showed that fixed warm restarts of the algorithm with a period proportional to the condition number achieve the optimal linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (O'Donoghue & Candes, 2012):
The function scheme restarts whenever the objective function increases.
The gradient scheme restarts whenever the angle between the momentum term and the negative gradient is obtuse, i.e., when the momentum seems to be taking us in a bad direction, as measured by the negative gradient at that point. This scheme resembles the one of Powell (1977) for the conjugate gradient method.
O’Donoghue & Candes (2012) showed (and it was confirmed in a set of follow-up works) that these simple schemes provide an acceleration on smooth functions and can be adjusted to accelerate state-of-the-art methods such as FISTA on nonsmooth functions.
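As a rough illustration (not taken verbatim from O'Donoghue & Candes (2012)), the two adaptive criteria can be written as simple predicates; here `f_new` and `f_old` are successive objective values, and `momentum` and `grad` are the current momentum term and gradient.

```python
import numpy as np

def function_scheme_restart(f_new, f_old):
    """Function scheme: restart if the objective value increased."""
    return f_new > f_old

def gradient_scheme_restart(momentum, grad):
    """Gradient scheme: restart if the angle between the momentum term and
    the negative gradient is obtuse, i.e. <momentum, -grad> < 0."""
    return np.dot(momentum, -grad) < 0
```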
Yang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch.
The existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account. Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.
In this work, we consider one of the simplest warm restart approaches. We simulate a new warm-started run / restart of SGD once $T_i$ epochs are performed, where $i$ is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing the learning rate $\eta_t$ while the old value of $\mathbf{x}_t$ is used as an initial solution. The amount of this increase controls to which extent the previously acquired information (e.g., momentum) is used.
Within the $i$-th run, we decay the learning rate with a cosine annealing for each batch as follows:
$$\eta_t = \eta^i_{min} + \frac{1}{2}\left(\eta^i_{max} - \eta^i_{min}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_i}\pi\right)\right), \quad (5)$$
where $\eta^i_{min}$ and $\eta^i_{max}$ are ranges for the learning rate, and $T_{cur}$ accounts for how many epochs have been performed since the last restart. Since $T_{cur}$ is updated at each batch iteration $t$, it can take discretized values such as 0.1, 0.2, etc. Thus, $\eta_t = \eta^i_{max}$ when $t = 0$ and $T_{cur} = 0$. Once $T_{cur} = T_i$, the $\cos$ function will output $-1$ and thus $\eta_t = \eta^i_{min}$. The decrease of the learning rate is shown in Figure 1 for fixed $T_i = 50$, $T_i = 100$ and $T_i = 200$; note that the logarithmic axis obfuscates the typical shape of the cosine function.
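A direct sketch of eq. (5) in Python is given below; the argument names mirror the symbols in the text, and the default range values are only examples.

```python
import math

def sgdr_learning_rate(t_cur, t_i, eta_min=0.0, eta_max=0.05):
    """Cosine-annealed learning rate of eq. (5).

    t_cur   -- epochs (possibly fractional, updated per batch) since the last restart
    t_i     -- length of the current run i, in epochs
    eta_min -- lower end of the learning rate range (example value)
    eta_max -- upper end of the learning rate range (example value)
    """
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```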
In order to improve anytime performance, we suggest an option to start with an initially small $T_i$ and increase it by a factor of $T_{mult}$ at every restart (see, e.g., Figure 1 for $T_0 = 1, T_{mult} = 2$ and $T_0 = 10, T_{mult} = 2$). It might be of great interest to decrease $\eta^i_{max}$ and $\eta^i_{min}$ at every new restart. However, for the sake of simplicity, here, we keep $\eta^i_{max}$ and $\eta^i_{min}$ the same for every $i$ to reduce the number of hyperparameters involved.
Since our simulated warm restarts (the increase of the learning rate) often temporarily worsen performance, we do not always use the last $\mathbf{x}_t$ as our recommendation for the best solution (also called the incumbent solution). While our recommendation during the first run (before the first restart) is indeed the last $\mathbf{x}_t$, our recommendation after this is the solution obtained at the end of the last completed run, when $\eta_t = \eta^i_{min}$. We emphasize that with the help of this strategy, our method does not require a separate validation data set to determine a recommendation.
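Putting the pieces together, the following sketch (reusing the `sgdr_learning_rate` helper above) simulates the whole schedule: it tracks $T_{cur}$, triggers a warm restart once $T_{cur}$ reaches $T_i$, multiplies the run length by $T_{mult}$, and records the incumbent at the end of each completed run. The callables `train_one_batch` and `snapshot` are hypothetical placeholders supplied by the user, and the code is a sketch rather than the authors' exact implementation.

```python
def sgdr_training(train_one_batch, snapshot, batches_per_epoch, total_epochs,
                  t_0=10, t_mult=2, eta_min=0.0, eta_max=0.05):
    """Minimal SGDR driver (a sketch under the stated assumptions).

    train_one_batch(lr) -- user-supplied: performs one SGD(-momentum) step with learning rate lr
    snapshot()          -- user-supplied: returns a copy of the current model weights
    """
    t_i, t_cur = float(t_0), 0.0
    incumbent = None                          # recommended (incumbent) solution so far
    for epoch in range(total_epochs):
        for _ in range(batches_per_epoch):
            lr = sgdr_learning_rate(t_cur, t_i, eta_min, eta_max)
            train_one_batch(lr)
            t_cur += 1.0 / batches_per_epoch  # T_cur advances per batch
        if t_cur >= t_i:                      # end of run i: warm restart
            incumbent = snapshot()            # recommend the end-of-run solution
            t_cur = 0.0
            t_i *= t_mult                     # lengthen the next run by T_mult
    return incumbent if incumbent is not None else snapshot()
```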
Table 1: Test errors (%) on CIFAR-10 and CIFAR-100.

|  | depth-$k$ | # params | # runs | CIFAR-10 | CIFAR-100 |
| --- | --- | --- | --- | --- | --- |
| original-ResNet (He et al., 2015) | 110 | 1.7M | mean of 5 | 6.43 | 25.16 |
|  | 1202 | 10.2M | mean of 5 | 7.93 | 27.82 |
| stoc-depth (Huang et al., 2016c) | 110 | 1.7M | 1 run | 5.23 | 24.58 |
| pre-act-ResNet (He et al., 2016) | 110 | 1.7M | med. of 5 | 6.37 | n/a |
|  | 164 | 1.7M | med. of 5 | 5.46 | 24.33 |
|  | 1001 | 10.2M | med. of 5 | 4.62 | 22.71 |
| WRN (Zagoruyko & Komodakis, 2016) | 16-8 | 11.0M | 1 run | 4.81 | 22.07 |
| with dropout | 28-10 | 36.5M | 1 run | n/a | 20.04 |
| default with $\eta_0 = 0.1$ | 28-10 | 36.5M | med. of 5 | 4.24 | 20.33 |
| default with $\eta_0 = 0.05$ | 28-10 | 36.5M | med. of 5 | 4.13 | 20.21 |
| SGDR, $T_0 = 50$, $T_{mult} = 1$ | 28-10 | 36.5M | med. of 5 | 4.17 | 19.99 |
| SGDR, $T_0 = 100$, $T_{mult} = 1$ | 28-10 | 36.5M | med. of 5 | 4.07 | 19.87 |
| SGDR, $T_0 = 200$, $T_{mult} = 1$ | 28-10 | 36.5M | med. of 5 | 3.86 | 19.98 |
| SGDR, $T_0 = 1$, $T_{mult} = 2$ | 28-10 | 36.5M | med. of 5 | 4.09 | 19.74 |
| SGDR, $T_0 = 10$, $T_{mult} = 2$ | 28-10 | 36.5M | med. of 5 | 4.03 | 19.58 |
| default with $\eta_0 = 0.1$ | 28-20 | 145.8M | med. of 2 | 4.08 | 19.53 |
| default with $\eta_0 = 0.05$ | 28-20 | 145.8M | med. of 2 | 3.96 | 19.67 |
| SGDR, $T_0 = 50$, $T_{mult} = 1$ | 28-20 | 145.8M | med. of 2 | 4.01 | 19.28 |
| SGDR, $T_0 = 100$, $T_{mult} = 1$ | 28-20 | 145.8M | med. of 2 | 3.77 | 19.24 |
| SGDR, $T_0 = 200$, $T_{mult} = 1$ | 28-20 | 145.8M | med. of 2 | 3.66 | 19.69 |
| SGDR, $T_0 = 1$, $T_{mult} = 2$ | 28-20 | 145.8M | med. of 2 | 3.91 | 18.90 |
| SGDR, $T_0 = 10$, $T_{mult} = 2$ | 28-20 | 145.8M | med. of 2 | 3.74 | 18.70 |
We consider the problem of training Wide Residual Neural Networks (WRNs; see Zagoruyko & Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We will use the abbreviation WRN-$d$-$k$ to denote a WRN with depth $d$ and width $k$. Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with $d = 28$ layers and $k = 10$ times more filters per layer than used in the original Residual Neural Networks (He et al., 2015; 2016).
The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 32×32 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessing, Zagoruyko & Komodakis (2016) performed global contrast normalization and ZCA whitening. For data augmentation, they performed horizontal flips and random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image.
For training, Zagoruyko & Komodakis (2016) used SGD with Nesterov's momentum, with the initial learning rate set to 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs, with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis (2016) with the same settings, except that i) we subtract per-pixel mean only and do not use ZCA whitening; ii) we use SGD with momentum as described by eqs. (3)-(4) and not Nesterov's momentum.
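For readers who want to reproduce this baseline setup, a hedged sketch of the corresponding optimizer and step-decay schedule in PyTorch is given below; the `torch.nn.Linear` model is only a stand-in for a Wide ResNet implementation, which is assumed to be defined elsewhere.

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for a WRN-28-10 model defined elsewhere
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.1,            # initial learning rate of the default schedule
                            momentum=0.9,
                            dampening=0,
                            weight_decay=5e-4,
                            nesterov=True)
# Drop the learning rate by a factor of 0.2 at epochs 60, 120 and 160 (200 epochs total).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160],
                                                 gamma=0.2)
```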
Table 1 shows that our experiments reproduce the results given by Zagoruyko & Komodakis (2016) for WRN-28-10 both on CIFAR-10 and CIFAR-100. These "default" experiments with $\eta_0 = 0.1$ and $\eta_0 = 0.05$ correspond to the blue and red lines in Figure 2. The results for $\eta_0 = 0.05$ show better performance, and therefore we use $\eta_0 = 0.05$ in our later experiments.
SGDR with $T_0 = 50$, $T_0 = 100$ and $T_0 = 200$ for $T_{mult} = 1$ performs warm restarts every 50, 100 and 200 epochs, respectively. A single run of SGD with the schedule given by eq. (5) for $T_0 = 200$ shows the best results, suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with $T_0 = 200$ leads to the worst anytime performance except for the very last epochs.
SGDR with $T_0 = 1, T_{mult} = 2$ and with $T_0 = 10, T_{mult} = 2$ performs its first restart after 1 and 10 epochs, respectively. It then doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach a good test error as soon as possible, i.e., to achieve good anytime performance. Figure 2 shows that this is achieved: test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used by Zagoruyko & Komodakis (2016).
Since SGDR achieves good performance faster, it may allow us to train larger networks. We therefore investigated whether the results on CIFAR-10 and CIFAR-100 can be further improved by making WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows that the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100. While the WRN-28-20 architecture requires roughly three to four times more computation than WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve a better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN-28-10. Specifically, Figure 2 (right middle and right bottom) shows that after only 50 epochs, SGDR (even without restarts) achieved an error rate below 19%, whereas none of the other learning methods performed better than 19.5% on WRN-28-10. We therefore have hope that – by enabling researchers to test new architectures faster – SGDR's good anytime performance may also lead to improvements of the state of the art.
In a final experiment for SGDR by itself, Figure 7 in the appendix compares SGDR and the default schedule with respect to training and test performance. As the figure shows, SGDR optimizes training loss faster than the standard default schedule until about epoch 120. After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure 7). In contrast, we only witnessed very mild overfitting for SGDR.
Table 2: Test errors (%) of ensembles built from SGDR snapshots on CIFAR-10 and CIFAR-100.

|  | CIFAR-10 | CIFAR-100 |
| --- | --- | --- |
| $N = 1$ run of WRN-28-10 with $M = 1$ snapshot (median of 16 runs) | 4.03 | 19.57 |
| $N = 1$ run of WRN-28-10 with $M = 3$ snapshots per run | 3.51 | 17.75 |
| $N = 3$ runs of WRN-28-10 with $M = 3$ snapshots per run | 3.25 | 16.64 |
| $N = 16$ runs of WRN-28-10 with $M = 3$ snapshots per run | 3.14 | 16.21 |
Our initial arXiv report on SGDR (Loshchilov & Hutter, 2016) inspired a follow-up study by Huang et al. (2016a) in which the authors suggest taking snapshots of the models obtained by SGDR (in their paper referred to as a cyclical learning rate schedule and cosine annealing cycles) right before the restarts and using those to build an ensemble, thereby obtaining ensembles "for free" (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-the-art results on the CIFAR datasets by making ensembles of DenseNet models (Huang et al., 2016b). Here, we investigate whether their conclusions hold for the WRNs used in our study. We used WRN-28-10 trained by SGDR with $T_0 = 10$, $T_{mult} = 2$ as our baseline model.
Figure 3 and Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when $M = 3$ snapshots are taken at epochs 30, 70 and 150, i.e., when the learning rate of SGDR with $T_0 = 10$, $T_{mult} = 2$ is scheduled to achieve 0 (see Figure 1), and the models are combined with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate models obtained at epoch 150 of three independent runs (see $N = 3$, $M = 1$ in Figure 3). Thus, the aggregation from snapshots provides a roughly 3-fold speedup in these settings, because the additional snapshots from a single SGDR run are computationally free. Interestingly, aggregation of models from independent runs (when $N > 1$ and $M = 1$) does not scale up as well as aggregation of snapshots when the same number of models is considered: the case of $N = 3$ and $M = 3$ provides better performance than the cases of $M = 1$ with larger $N$. Not only the number of snapshots per run but also their origin is crucial: naively building ensembles from models obtained at the last epochs only (i.e., snapshots at epochs 148, 149, 150) did not improve over the baseline of a single snapshot at epoch 150, thereby confirming the conclusion of Huang et al. (2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles.
Three runs ($N = 3$) of SGDR with $M = 3$ snapshots per run are sufficient to greatly improve the results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100, outperforming the results of Huang et al. (2016a). By increasing $N$ to 16, one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20).
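The ensembling step assumed here (uniform averaging of predicted class probabilities over the snapshot models) can be sketched as follows; `predict_proba` is a hypothetical user-supplied helper returning softmax outputs for a batch of inputs.

```python
import numpy as np

def ensemble_predict(snapshot_models, inputs, predict_proba):
    """Uniform-weight ensemble over snapshot models.

    predict_proba(model, inputs) -- user-supplied: returns softmax class
                                    probabilities of shape (n_inputs, n_classes)
    """
    probs = [predict_proba(m, inputs) for m in snapshot_models]
    mean_probs = np.mean(probs, axis=0)   # average probabilities with uniform weights
    return np.argmax(mean_probs, axis=1)  # final class prediction per input
```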
To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects, with roughly 1000 trials per subject (Schirrmeister et al., 2017). The best classification results obtained with the original pipeline based on convolutional neural networks designed by Schirrmeister et al. (2017) were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs used and a similar (budget-proportional) schedule, better results can be achieved. Alternatively, one can use SGDR and obtain a similar final performance while having better anytime performance, without defining the total budget of epochs beforehand.
Similarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that snapshots are useful: the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered. The latter corresponds to $N = 2$ in Section 4.3.
In order to additionally validate SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets, in preparation]. In contrast to earlier attempts (Pouransari & Ghili, 2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet, but resized with box downsampling to 32×32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is now two orders of magnitude smaller. The new dataset is also more difficult than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny part of the image rather than most of it, as in the CIFAR datasets.
We benchmarked SGD with momentum using the default learning rate schedule, SGDR with $T_0 = 1, T_{mult} = 2$ and SGDR with $T_0 = 10, T_{mult} = 2$ on WRN-28-10, all trained with 4 settings of the initial learning rate $\eta^i_{max}$: 0.050, 0.025, 0.01 and 0.005. We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure 5 shows that SGDR demonstrates better anytime performance. SGDR achieves a top-1 error of 39.24% and a top-5 error of 17.17%, matching the original results of AlexNet (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images containing ca. 50 times more pixels per image (Krizhevsky et al., 2012b). Interestingly, when the dataset is permuted only within 10 subgroups each formed from 100 classes, SGDR also demonstrates better results (see Figure 8 in the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.
Clearly, longer runs (more than the 40 epochs considered in this preliminary experiment) and hyperparameter tuning of learning rates, regularization and other hyperparameters should further improve the results.
Our results suggest that, even without any restarts, the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. the default schedule when training WRNs on the CIFAR-10 (e.g., for $T_0 = 200$, $T_{mult} = 1$) and CIFAR-100 datasets. In practice, the proposed schedule requires only two hyperparameters to be defined: the initial learning rate and the total number of epochs.
We found that the anytime performance of SGDR remains similar when shorter epochs are considered (see Section 8.1 in the Supplementary Material).
One should not suppose that the parameter values used in this study and in many other works with (Residual) Neural Networks were selected to demonstrate the fastest decrease of the training error; instead, the best validation and/or test errors are the focus. Notably, the validation error is rarely used when training Residual Neural Networks because the recommendation is defined by the final solution (in our approach, the final solution of each run). One could use the validation error to determine the optimal initial learning rate and then run on the whole dataset; this could further improve results.
The main purpose of our proposed warm restart scheme for SGD is to improve its anytime performance. While we mentioned that restarts can be useful to deal with multi-modal functions, we do not claim that we observe any effect related to multi-modality.
As we noted earlier, one could decrease $\eta^i_{max}$ and $\eta^i_{min}$ at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. the validation error, then one might also consider going back to the last best solution and performing a new restart with adjusted hyperparameters.
Our results reproduce the finding by Huang et al. (2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered.
In this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster. We also achieved new state-of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from SGDR’s trajectory. Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks showed the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNet dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014).
Alternative network structures should also be considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter, 2016), Zhang et al. (2016), Huang et al. (2016b) and Han et al. (2016) reported that WRN models can be replaced by more memory-efficient ones. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al., 2015) can be used to reduce the time and memory costs of DNNs and their ensembles.
This work was supported by the German Research Foundation (DFG), under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086). We thank Gao Huang, Kilian Quirin Weinberger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions. We thank Robin Tibor Schirrmeister for providing his pipeline for the EEG experiments and for helping to integrate SGDR.
Bordes, A., Bottou, L., and Gallinari, P. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737–1754, 2009.
Fukumizu, K. and Amari, S. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317–327, 2000.
Hansen, N. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2389–2396. ACM, 2009.
Preuss, M. Multimodal Optimization by Means of Evolutionary Algorithms, pp. 115–137. Springer, 2015.
Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets, where flipped images are added to the training set. This doubles the number of training examples per epoch and thus might impact the results, because hyperparameter values defined as a function of the epoch index have a different meaning. While our experimental results given in Table 1 reproduced the results obtained by Zagoruyko & Komodakis (2016), here we test whether SGDR still makes sense for WRN-28-1 (i.e., a ResNet with 28 layers) where one epoch corresponds to 50k training examples. We investigate different learning rate values for the default learning rate schedule (4 values out of [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given in the main paper, Figure 6 suggests that SGDR is competitive in terms of anytime performance.