Democratizing Production-Scale Distributed Deep Learning

10/31/2018
by Minghuang Ma, et al.

The interest in and demand for training deep neural networks have been growing rapidly, spanning a wide range of applications in both academia and industry. However, training these networks in a distributed fashion and at scale remains difficult due to the complex ecosystem of tools and hardware involved. One consequence is that the responsibility of orchestrating these complex components is often left to one-off scripts and glue code customized for specific problems. To address these challenges, we introduce Alchemist, an internal service built at Apple from the ground up for easy, fast, and scalable distributed training. We discuss its design and implementation, and give examples of running different flavors of distributed training. We also present case studies of its internal adoption in the development of autonomous systems, where training times have been reduced by 10x to keep up with ever-growing data collection.
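Alchemist is internal to Apple and the abstract exposes no API, so the following is only an illustrative sketch of one "flavor" of distributed training the abstract alludes to: synchronous data-parallel training, written here with PyTorch's DistributedDataParallel. A service like the one described would provision the cluster and launch one such worker process per node; the model, dataset, and hyperparameters below are placeholders, not taken from the paper.

```python
# Minimal sketch of a per-worker script for synchronous data-parallel training
# (illustrative only; not the paper's code). Rank and world size are expected
# to be provided by the launcher, e.g. torchrun.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # Join the process group; env:// rendezvous reads MASTER_ADDR/PORT, RANK, WORLD_SIZE.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    # Toy dataset and model stand in for a real training workload.
    data = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    sampler = DistributedSampler(data)          # each worker sees a distinct shard
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    model = DDP(torch.nn.Linear(16, 1))         # gradients are all-reduced across workers
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                # keep shuffling consistent across workers
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()     # backward triggers the gradient all-reduce
            opt.step()
        if rank == 0:
            print(f"epoch {epoch} finished")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Run locally with something like `torchrun --nproc_per_node=2 train.py`; the same script scales out across machines once a scheduler supplies the rendezvous environment variables, which is the kind of orchestration a training service takes off the user's hands.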

Related research

TRIM: A Design Space Exploration Model for Deep Neural Networks Inference and Training Accelerators (05/18/2021)
On Scale-out Deep Learning Training for Cloud and HPC (01/24/2018)
OneFlow: Redesign the Distributed Deep Learning Framework from Scratch (10/28/2021)
Benchmarking network fabrics for data distributed training of deep neural networks (08/18/2020)
Distributed Optimization using Heterogeneous Compute Systems (10/03/2021)
Building Graphs at a Large Scale: Union Find Shuffle (12/10/2020)
Stable and Consistent Membership at Scale with Rapid (03/09/2018)
