Democratizing Production-Scale Distributed Deep Learning

10/31/2018
by Minghuang Ma, et al.

Interest in and demand for training deep neural networks have grown rapidly, spanning a wide range of applications in both academia and industry. However, training these networks in a distributed fashion and at scale remains difficult due to the complex ecosystem of tools and hardware involved. One consequence is that the responsibility for orchestrating these complex components is often left to one-off scripts and glue code customized for specific problems. To address these restrictions, we introduce Alchemist, an internal service built at Apple from the ground up for easy, fast, and scalable distributed training. We discuss its design and implementation, and give examples of running different flavors of distributed training. We also present case studies of its internal adoption in the development of autonomous systems, where training times have been reduced by 10x to keep up with ever-growing data collection.
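To give a sense of the per-worker boilerplate that a service like Alchemist is meant to automate, here is a minimal sketch of a distributed data-parallel training loop. This uses PyTorch's `torch.distributed` API as a stand-in illustration (the abstract does not specify Alchemist's framework or interface), and the model, data, and hyperparameters are placeholders:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Each worker reads its rank and world size from environment
    # variables, typically set by a launcher such as torchrun.
    # Orchestrating this launch across machines is the kind of
    # glue code a training service can own instead of the user.
    dist.init_process_group(backend="gloo")

    model = torch.nn.Linear(10, 1)        # placeholder model
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for step in range(100):
        optimizer.zero_grad()
        inputs = torch.randn(32, 10)       # placeholder batch
        loss = ddp_model(inputs).sum()
        loss.backward()                    # gradients all-reduced across workers
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```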
