SmoothOut: Smoothing Out Sharp Minima for Generalization in Large-Batch Deep Learning

05/21/2018
by Wei Wen, et al.

In distributed deep learning, a large batch size in Stochastic Gradient Descent (SGD) is required to fully exploit the computing power of distributed systems. However, a generalization gap (accuracy loss) has been observed because large-batch training converges to sharp minima, which generalize poorly [1][2]. This contradiction hinders the scalability of distributed deep learning. We propose SmoothOut to smooth out sharp minima in Deep Neural Networks (DNNs) and thereby close the generalization gap. SmoothOut perturbs multiple copies of a DNN in the parameter space and averages these copies. We prove that SmoothOut can eliminate sharp minima. Because perturbing and training multiple DNN copies is inefficient, we propose a stochastic version of SmoothOut that only adds the overhead of noise injection and denoising per iteration. We prove that this Stochastic SmoothOut is an unbiased approximation of the original SmoothOut. In experiments on a variety of DNNs and datasets, SmoothOut consistently closes the generalization gap in large-batch training within the same number of epochs. Moreover, SmoothOut can guide small-batch training to flatter minima and improve generalization. Our source code is available at https://github.com/wenwei202/smoothout
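The sketch below illustrates the per-iteration noise-injection/denoising idea described above: parameters are perturbed with uniform noise before the forward/backward pass, the same noise is removed before the optimizer step, and the gradient computed at the perturbed point is then applied. This is a minimal PyTorch-style illustration under assumed conventions (the helper name smoothout_step, the noise radius a, and the loop structure are not taken from the authors' code; see the repository above for the actual implementation).

import torch

def smoothout_step(model, optimizer, loss_fn, inputs, targets, a=0.01):
    # 1) Noise injection: add uniform noise in [-a, a] to every parameter.
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.empty_like(p).uniform_(-a, a)
            p.add_(noise)
            noises.append(noise)

    # 2) Forward/backward pass on the perturbed parameters.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 3) Denoising: subtract the same noise so the update is applied
    #    to the original (un-perturbed) parameters.
    with torch.no_grad():
        for p, noise in zip(model.parameters(), noises):
            p.sub_(noise)

    # 4) Take the optimizer step using the gradient from the perturbed point.
    optimizer.step()
    return loss.item()

Calling smoothout_step once per mini-batch in an ordinary training loop adds only the cost of generating, adding, and subtracting the noise, which matches the "overhead of noise injection and denoising per iteration" mentioned in the abstract.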


