An Empirical Model of Large-Batch Training

12/14/2018
by Sam McCandlish, et al.

In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However, the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge, there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
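
In its simplest form the gradient noise scale is B_simple = tr(Σ) / |G|^2: the trace of the per-example gradient covariance divided by the squared norm of the true gradient. The paper's appendix describes estimating this quantity from the mean squared norms of minibatch gradients measured at two different batch sizes. The snippet below is a minimal Python sketch of that estimator under our reading of those formulas; the function name, arguments, and the example measurements are illustrative, not part of any released implementation.

```python
def simple_noise_scale(grad_sq_small, grad_sq_big, b_small, b_big):
    """Estimate the simple gradient noise scale B_simple = tr(Sigma) / |G|^2
    from the mean squared norms of minibatch gradients measured at two
    batch sizes b_small < b_big."""
    # Unbiased estimate of the squared norm of the true (full-batch) gradient.
    g_sq = (b_big * grad_sq_big - b_small * grad_sq_small) / (b_big - b_small)
    # Unbiased estimate of the trace of the per-example gradient covariance.
    trace_sigma = (grad_sq_small - grad_sq_big) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g_sq


if __name__ == "__main__":
    # Hypothetical measurements: squared gradient norms averaged over many
    # minibatches at batch sizes 64 and 1024.
    print(simple_noise_scale(grad_sq_small=2.5, grad_sq_big=0.4,
                             b_small=64, b_big=1024))
```

In practice the two squared norms should be averaged over many minibatches, since any single pair of measurements gives a noisy estimate; a larger noise scale then suggests that larger batches can be used without wasting data.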

Related research

12/06/2017 - AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks
Training deep neural networks with Stochastic Gradient Descent, or its v...

10/24/2021 - Micro Batch Streaming: Allowing the Training of DNN models Using a large batch size on Small Memory Systems
The size of the deep learning models has greatly increased over the past...

10/21/2022 - A New Perspective for Understanding Generalization Gap of Deep Neural Networks Trained with Large Batch Sizes
Deep neural networks (DNNs) are typically optimized using various forms ...

05/24/2017 - Train longer, generalize better: closing the generalization gap in large batch training of neural networks
Background: Deep learning models are typically trained using stochastic ...

10/28/2021 - Cross-Batch Negative Sampling for Training Two-Tower Recommenders
The two-tower architecture has been widely applied for learning item and...

06/01/2021 - Concurrent Adversarial Learning for Large-Batch Training
Large-batch training has become a commonly used technique when training ...

12/14/2018 - Scaling shared model governance via model splitting
Currently the only techniques for sharing governance of a deep learning ...
