AdaScale SGD: A User-Friendly Algorithm for Distributed Training

07/09/2020
by Tyler B. Johnson, et al.

When using large-batch training to speed up stochastic gradient descent, learning rates must adapt to new batch sizes in order to maximize speed-ups and preserve model quality. Re-tuning learning rates is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, an algorithm that reliably adapts learning rates to large-batch training. By continually adapting to the gradient's variance, AdaScale automatically achieves speed-ups for a wide range of batch sizes. We formally describe this quality with AdaScale's convergence bound, which maintains final objective values, even as batch sizes grow large and the number of iterations decreases. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular "linear learning rate scaling" rules. This includes large-batch training with no model degradation for machine translation, image classification, object detection, and speech recognition tasks. AdaScale's qualitative behavior is similar to that of "warm-up" heuristics, but unlike warm-up, this behavior emerges naturally from a principled mechanism. The algorithm introduces negligible computational overhead and no new hyperparameters, making AdaScale an attractive choice for large-scale training in practice.
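The abstract describes the mechanism only at a high level: the learning rate is scaled by how much the larger batch reduces gradient variance, and progress is tracked in "scale-invariant" iterations that advance by the same factor. The following is a minimal NumPy sketch of that gain-ratio idea, assuming per-worker gradients are available; the function names are illustrative, the estimator clipping is a simplification, and the paper's running-average smoothing of the variance estimates is omitted, so this should not be read as the authors' implementation.

```python
import numpy as np

def adascale_gain(per_worker_grads):
    """Estimate a gain ratio r in [1, S] from S per-worker gradients.

    per_worker_grads: list of S flattened gradient vectors (np.ndarray),
    each computed on one worker's local mini-batch.
    Simplified sketch: the paper smooths these estimates with running
    averages, which is omitted here.
    """
    S = len(per_worker_grads)
    if S < 2:
        return 1.0  # no variance estimate possible with a single worker

    grads = np.stack(per_worker_grads)           # shape (S, d)
    g_bar = grads.mean(axis=0)                   # aggregated large-batch gradient

    mean_local_sq = np.mean(np.sum(grads ** 2, axis=1))  # average ||g_i||^2
    agg_sq = np.sum(g_bar ** 2)                          # ||g_bar||^2

    # Unbiased estimates of the gradient variance (sigma^2) and of the
    # squared norm of the expected gradient (mu^2), clipped at zero.
    var_est = (S / (S - 1)) * max(mean_local_sq - agg_sq, 0.0)
    sqmean_est = max(agg_sq - var_est / S, 0.0)

    eps = 1e-8
    gain = (var_est + sqmean_est + eps) / (var_est / S + sqmean_est + eps)
    return float(np.clip(gain, 1.0, S))


def adascale_step(params, per_worker_grads, base_lr_schedule, scale_invariant_iter):
    """One sketch update: scale the base learning rate by the gain and
    advance the scale-invariant iteration counter by the same amount."""
    gain = adascale_gain(per_worker_grads)
    lr = gain * base_lr_schedule(scale_invariant_iter)
    g_bar = np.mean(np.stack(per_worker_grads), axis=0)
    params -= lr * g_bar
    return params, scale_invariant_iter + gain
```

In this sketch, `base_lr_schedule` is whatever single-batch learning-rate schedule would have been used at scale 1; because the counter advances by the gain rather than by 1, training at scale S finishes the schedule in correspondingly fewer steps.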


