A Hitchhiker's Guide On Distributed Training of Deep Neural Networks

10/28/2018
by Karanbir Chahal, et al.

Deep learning has led to tremendous advances in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training on a benchmark dataset like ImageNet on a single machine with a modern GPU can take up to a week; distributing training across multiple machines has been observed to bring this time down drastically. Recent work has reduced ImageNet training time to as little as 4 minutes by using a cluster of 2,048 GPUs. This paper surveys the various algorithms and techniques used to distribute training and presents the current state of the art for a modern distributed training framework. More specifically, we explore the synchronous and asynchronous variants of distributed Stochastic Gradient Descent, various All Reduce gradient aggregation strategies, and best practices for obtaining higher throughput and lower latency over a cluster, such as mixed precision training, large batch training, and gradient compression.
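To make the central idea concrete, below is a minimal sketch of one synchronous data-parallel SGD step using PyTorch's torch.distributed all-reduce, which the abstract identifies as the core aggregation primitive. It assumes a process group has already been initialized (e.g. via torch.distributed.init_process_group); the model, loss_fn, batch, and lr names are placeholders for illustration, not details from the paper.

```python
import torch
import torch.distributed as dist

def synchronous_sgd_step(model, batch, loss_fn, lr=0.1):
    """One synchronous data-parallel SGD step on a single worker."""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()  # each worker computes gradients on its local data shard

    world_size = dist.get_world_size()
    for p in model.parameters():
        # All-reduce sums each gradient tensor across all workers in place;
        # dividing by the world size turns the sum into an average.
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size

    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad  # identical update applied on every worker
    return loss.item()
```

In practice, wrappers such as PyTorch's DistributedDataParallel perform this aggregation more efficiently than a per-parameter loop, bucketing gradients and overlapping the all-reduce communication with the backward pass.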


