Parle: parallelizing stochastic gradient descent

07/03/2017
by Pratik Chaudhari, et al.

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters. We exploit the phenomenon of flat minima that has been shown to lead to improved generalization error for deep networks. Parle requires very infrequent communication with the parameter server and instead performs more computation on each client, which makes it well-suited to both single-machine, multi-GPU settings and distributed implementations.
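
The abstract does not spell out the update rule, but the communication pattern it describes (many local steps per client, infrequent synchronization with a parameter server) can be illustrated with a short sketch. The following is a hypothetical, minimal Python/PyTorch example assuming an elastic-averaging-style coupling between several replicas and a shared master copy; the replica count, the coupling strength rho, the sync interval sync_every, the synthetic data, and the plain-SGD local updates are all illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of a Parle-like communication pattern (not the
    # authors' code). Assumption: replicas do many local SGD steps and only
    # occasionally exchange parameters with a master copy via an
    # elastic-averaging-style pull of strength rho.
    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic regression data standing in for a real benchmark (assumption).
    X, y = torch.randn(512, 10), torch.randn(512, 1)

    def make_model():
        return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

    n_replicas, sync_every, rho, lr = 4, 25, 0.1, 0.05
    master = make_model()
    replicas = [copy.deepcopy(master) for _ in range(n_replicas)]
    optims = [torch.optim.SGD(r.parameters(), lr=lr) for r in replicas]
    loss_fn = nn.MSELoss()

    for step in range(1, 201):
        # Each replica performs local SGD on its own mini-batch; in a real
        # setup these would run in parallel on separate GPUs or machines.
        for r, opt in zip(replicas, optims):
            idx = torch.randint(0, X.shape[0], (64,))
            opt.zero_grad()
            loss_fn(r(X[idx]), y[idx]).backward()
            opt.step()

        # Infrequent communication: pull each replica toward the master and
        # move the master toward the replica average.
        if step % sync_every == 0:
            with torch.no_grad():
                for m_p, *r_ps in zip(master.parameters(),
                                      *[r.parameters() for r in replicas]):
                    avg = torch.stack(list(r_ps)).mean(dim=0)
                    for p in r_ps:
                        p.add_(rho * (m_p - p))   # elastic pull toward master
                    m_p.add_(rho * (avg - m_p))   # master drifts toward replicas

    print("master loss:", loss_fn(master(X), y).item())

Because synchronization happens only every sync_every steps, the per-step communication cost is amortized over many local updates, which is the property the abstract highlights for multi-GPU and distributed settings.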
