Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model

07/09/2019
by Guodong Zhang, et al.

Increasing the batch size is a popular way to speed up neural network training, but beyond some critical batch size, larger batch sizes yield diminishing returns. In this work, we study how the critical batch size changes based on properties of the optimization algorithm, including acceleration and preconditioning, through two different lenses: large-scale experiments and analysis of a simple noisy quadratic model (NQM). We experimentally demonstrate that optimization algorithms that employ preconditioning, specifically Adam and K-FAC, result in much larger critical batch sizes than stochastic gradient descent with momentum. We also demonstrate that the NQM captures many of the essential features of real neural network training, despite being drastically simpler to work with. The NQM predicts our results with preconditioned optimizers, previous results with accelerated gradient descent, and other results on optimal learning rates and large-batch training, making it a useful tool for generating testable predictions about neural network optimization.
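To make the NQM concrete, here is a minimal sketch of the diminishing-returns phenomenon. It is an illustration, not the paper's exact configuration: the diagonal Hessian with toy spectrum h_i = 1/i, gradient noise with per-coordinate variance h_i / B, the fixed initialization, the loss target, and the small learning-rate grid are all illustrative assumptions.

```python
import numpy as np

# Minimal noisy quadratic model (NQM) sketch. Illustrative assumptions, not
# the paper's exact setup: diagonal Hessian with toy spectrum h_i = 1/i,
# gradient noise with per-coordinate variance h_i / B, fixed init theta = 1.
rng = np.random.default_rng(0)
d = 100
h = 1.0 / np.arange(1, d + 1)  # eigenvalues of the diagonal Hessian

def steps_to_target(batch_size, lr, momentum=0.9, target=0.1, max_steps=50_000):
    """Steps of SGD with heavy-ball momentum until loss 0.5*sum(h*theta**2) < target."""
    theta = np.ones(d)
    v = np.zeros(d)
    for t in range(1, max_steps + 1):
        noise = rng.normal(0.0, np.sqrt(h / batch_size))  # larger batches => less noisy gradients
        grad = h * theta + noise  # stochastic gradient of 0.5 * sum(h * theta**2)
        v = momentum * v + grad
        theta = theta - lr * v
        if 0.5 * np.sum(h * theta ** 2) < target:
            return t
    return max_steps

# For each batch size, take the best step count over a small learning-rate grid
# (mirroring the practice of tuning the learning rate per batch size).
for B in [1, 4, 16, 64, 256, 1024]:
    best = min(steps_to_target(B, lr) for lr in [0.01, 0.03, 0.1, 0.3, 1.0])
    print(f"batch size {B:4d}: {best:6d} steps")
```

With per-batch-size learning-rate tuning, the printed step counts drop roughly in proportion to the batch size at first and then flatten, which is the critical-batch-size behavior the paper studies; swapping in a preconditioned update rule would shift where the curve flattens.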


Related research

10/21/2019 · Non-Gaussianity of Stochastic Gradient Noise
What enables Stochastic Gradient Descent (SGD) to achieve better general...

02/12/2021 · A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes
Recently the LARS and LAMB optimizers have been proposed for training ne...

11/08/2018 · Measuring the Effects of Data Parallelism on Neural Network Training
Recent hardware developments have made unprecedented amounts of data par...

10/01/2021 · Batch size-invariance for policy optimization
We say an algorithm is batch size-invariant if changes to the batch size...

10/03/2019 · Training Multiscale-CNN for Large Microscopy Image Classification in One Hour
Existing approaches to train neural networks that use large images requi...

11/30/2018 · On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent
Increasing the mini-batch size for stochastic gradient descent offers si...

07/25/2023 · How to Scale Your EMA
Preserving training dynamics across batch sizes is an important tool for...
