Concurrent Adversarial Learning for Large-Batch Training

06/01/2021
by Yong Liu, et al.

Large-batch training has become a commonly used technique when training neural networks on a large number of GPU/TPU processors. As the batch size increases, stochastic optimizers tend to converge to sharp local minima, leading to degraded test performance. Current methods usually rely on extensive data augmentation to increase the batch size, but we find that the performance gain from data augmentation diminishes as the batch size grows, and data augmentation becomes insufficient beyond a certain point. In this paper, we propose to use adversarial learning to increase the batch size in large-batch training. Despite being a natural choice for smoothing the decision surface and biasing towards a flat region, adversarial learning has not been successfully applied to large-batch training because it requires at least two sequential gradient computations at each step, which at least doubles the running time compared with vanilla training even with a large number of processors. To overcome this issue, we propose a novel Concurrent Adversarial Learning (ConAdv) method that decouples the sequential gradient computations in adversarial learning by utilizing stale parameters. Experimental results demonstrate that ConAdv can successfully increase the batch size for both ResNet-50 and EfficientNet training on ImageNet while maintaining high accuracy. In particular, we show that ConAdv alone achieves 75.3% top-1 accuracy for ImageNet ResNet-50 training with a 96K batch size, and the accuracy can be further improved to 76.2% when combining ConAdv with data augmentation. This is the first work to successfully scale the ResNet-50 training batch size to 96K.
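To make the described mechanism concrete, below is a minimal PyTorch-style sketch of one ConAdv-like training step. This is an illustration under assumptions, not the authors' implementation: the function name `conadv_step`, the FGSM-style sign perturbation, the step size `eps`, and the combined clean-plus-adversarial loss are all hypothetical choices; the key idea it reflects is that the perturbation is generated from a stale parameter copy, so the two gradient computations no longer have to run back-to-back. The actual concurrent execution across processors is omitted and only simulated here by keeping a separate stale model.

```python
import copy
import torch
import torch.nn.functional as F

def conadv_step(model, stale_model, x, y, optimizer, eps=1e-2):
    """One training step of a ConAdv-like scheme (illustrative sketch).

    The adversarial perturbation is computed with a *stale* copy of the
    parameters (from the previous step), so in a distributed setup this
    computation could overlap with the main forward/backward pass instead
    of doubling the critical path. Here the two phases run sequentially
    for clarity.
    """
    # 1) Generate the perturbation using the stale parameters.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_stale = F.cross_entropy(stale_model(x_adv), y)
    (grad_x,) = torch.autograd.grad(loss_stale, x_adv)
    x_adv = (x + eps * grad_x.sign()).detach()  # FGSM-style single step (assumed)

    # 2) Update the current model on clean and adversarial examples.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()

    # 3) Refresh the stale copy for the next step.
    stale_model.load_state_dict(model.state_dict())
    return loss.item()

# Usage sketch: the stale model starts as a copy of the current model.
# model = torchvision.models.resnet50(); stale_model = copy.deepcopy(model)
```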


