Nonlinear Conjugate Gradients For Scaling Synchronous Distributed DNN Training

12/07/2018
by   Saurabh Adya, et al.

Nonlinear conjugate gradient (NLCG) based optimizers have shown superior loss convergence properties compared to gradient descent based optimizers on traditional optimization problems. However, in Deep Neural Network (DNN) training, the dominant algorithm of choice is still Stochastic Gradient Descent (SGD) and its variants. In this work, we propose and evaluate a stochastic preconditioned nonlinear conjugate gradient algorithm for large-scale DNN training tasks. We show that a nonlinear conjugate gradient algorithm improves the convergence speed of DNN training, especially in the large mini-batch regime, which is essential for scaling synchronous distributed DNN training to a large number of workers. We also show how to efficiently use second-order information in the NLCG preconditioner to improve DNN training convergence. For the ImageNet classification task, at extremely large mini-batch sizes (greater than 65k), the NLCG optimizer improves top-1 accuracy by more than 10 percentage points for standard 90-epoch training of the Resnet-50 model. For the CIFAR-100 classification task, at extremely large mini-batch sizes (greater than 16k), the NLCG optimizer improves top-1 accuracy by more than 15 percentage points for standard 200-epoch training of the Resnet-32 model.
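The abstract does not spell out the paper's exact preconditioner, conjugacy formula, or line-search rule, so the sketch below is only a minimal illustration of a stochastic preconditioned NLCG update. It assumes a diagonal second-moment (RMSProp-style) preconditioner as the "second-order information", a Polak-Ribière conjugacy coefficient clipped to [0, 1], periodic restarts, and a fixed learning rate in place of a line search; the function name nlcg_step and all hyperparameters are illustrative, not the authors' implementation.

```python
# Minimal sketch of a stochastic preconditioned NLCG update (NumPy).
# Assumptions (not from the paper): diagonal second-moment preconditioner,
# Polak-Ribiere beta clipped to [0, 1], periodic restarts, fixed step size.
import numpy as np

def nlcg_step(w, grad, state, lr=0.05, decay=0.9, eps=1e-8, restart_every=None):
    """Apply one preconditioned nonlinear conjugate gradient step to w."""
    restart_every = restart_every or w.size

    # Diagonal preconditioner from a running estimate of squared gradients
    # (a cheap stand-in for curvature / second-order information).
    state["v"] = decay * state.get("v", np.zeros_like(w)) + (1 - decay) * grad ** 2
    p_grad = grad / (np.sqrt(state["v"]) + eps)

    step = state.get("t", 0)
    if step % restart_every == 0 or "d" not in state:
        d = -p_grad                                 # restart: steepest-descent direction
    else:
        # Polak-Ribiere conjugacy coefficient, clipped to [0, 1] as a simple
        # safeguard for noisy (stochastic) gradients.
        y = p_grad - state["pg"]
        beta = float(p_grad @ y) / (float(state["pg"] @ state["pg"]) + eps)
        beta = min(1.0, max(0.0, beta))
        d = -p_grad + beta * state["d"]             # conjugate search direction

    state.update(pg=p_grad, d=d, t=step + 1)
    return w + lr * d                               # fixed step in lieu of a line search


# Toy usage: least-squares loss 0.5 * ||A w - b||^2 with full-batch gradients;
# in DNN training, grad would instead come from a (large) mini-batch.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
w, state = np.zeros(5), {}
for _ in range(500):
    grad = A.T @ (A @ w - b)
    w = nlcg_step(w, grad, state)
print("final loss:", 0.5 * np.linalg.norm(A @ w - b) ** 2)
```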

