Gradient Noise Convolution (GNC): Smoothing Loss Function for Distributed Large-Batch SGD

06/26/2019
by Kosuke Haruki, et al.

Large-batch stochastic gradient descent (SGD) is widely used for training in distributed deep learning because of its training-time efficiency. However, extremely large batch sizes lead to poor generalization and convergence to sharp minima, which prevents naive large-scale data-parallel SGD (DP-SGD) from reaching good minima. To overcome this difficulty, we propose gradient noise convolution (GNC), which effectively smooths the sharper minima of the loss function. For DP-SGD, GNC exploits so-called gradient noise, which arises from the variation of stochastic gradients across workers and is convolved with the loss function to produce a smoothing effect. GNC can be computed simply by evaluating the stochastic gradient on each parallel worker and merging the results, so it is extremely easy to implement. Because gradient noise tends to spread along the sharper directions of the loss function, convolving with it smooths sharp minima effectively and yields better generalization, whereas isotropic random noise does not. We demonstrate this effect empirically by comparing GNC with isotropic random noise, and show that GNC achieves state-of-the-art generalization performance for large-scale deep neural network optimization.
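The abstract only outlines the computation, so the following is a rough, illustrative Python sketch of the idea it describes, not the authors' implementation: smoothing the loss by convolving it with gradient noise is approximated by evaluating gradients at points perturbed by each worker's deviation from the mean gradient, then merging the results. The names loss_grad, minibatches, and scale are assumptions introduced for this sketch.

    import numpy as np

    # Illustrative sketch only (not the paper's exact algorithm).
    # loss_grad(w, batch) is assumed to return the stochastic gradient of the
    # loss at parameters w on the given minibatch; each minibatch stands in
    # for one parallel worker's data.
    def gnc_style_step(w, loss_grad, minibatches, lr=0.1, scale=0.01):
        # 1) Each worker computes a stochastic gradient on its own minibatch.
        local_grads = [loss_grad(w, b) for b in minibatches]
        mean_grad = np.mean(local_grads, axis=0)

        # 2) "Gradient noise": deviation of each local gradient from the mean.
        noises = [g - mean_grad for g in local_grads]

        # 3) Evaluate gradients at noise-perturbed points and merge them; this
        #    Monte Carlo average approximates the gradient of the loss
        #    convolved with the gradient-noise distribution.
        smoothed = np.mean(
            [loss_grad(w + scale * n, b) for n, b in zip(noises, minibatches)],
            axis=0,
        )
        return w - lr * smoothed

Because the perturbations are built from the workers' own gradient deviations rather than drawn isotropically, they are larger along directions where the stochastic gradients disagree, which is the anisotropic smoothing effect the abstract contrasts with isotropic random noise.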


