Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability

09/22/2016
by   Janis Keuper, et al.

This paper presents a theoretical analysis and practical evaluation of the main bottlenecks on the way to a scalable distributed solution for the training of Deep Neural Networks (DNNs). The presented results show that the current state-of-the-art approach, using data-parallelized Stochastic Gradient Descent (SGD), is quickly becoming a severely communication-bound problem. In addition, we present simple but fixed theoretical constraints that prevent effective scaling of DNN training beyond only a few dozen nodes. This leads to poor scalability of DNN training in most practical scenarios.
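To make the communication bottleneck concrete, the sketch below (not the authors' code; all names and sizes are illustrative assumptions) simulates data-parallel SGD on a toy least-squares model: each worker computes a gradient on its data shard, and the per-step averaging of those gradients stands in for the all-reduce whose traffic grows with model size and node count.

```python
# Minimal sketch of data-parallel SGD, simulated in-process instead of
# over a network. Names, sizes, and hyperparameters are illustrative.
import numpy as np

def local_gradient(w, X, y):
    # Gradient of 0.5 * ||X w - y||^2 on this worker's data shard.
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
n_workers, n_samples, n_features = 4, 1000, 50
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_samples)

# Data parallelism: split the training data across workers.
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

w = np.zeros(n_features)
lr = 0.1
for step in range(100):
    # Each worker computes a gradient on its own shard (parallel part).
    grads = [local_gradient(w, Xi, yi) for Xi, yi in shards]
    # Communication step: one gradient of full model size per worker must
    # be averaged across all nodes every iteration. In a real cluster this
    # all-reduce is the traffic that turns training communication-bound.
    g = np.mean(grads, axis=0)
    w -= lr * g
```

In a real distributed setting the `np.mean` over per-worker gradients would be replaced by a collective operation (e.g. an MPI all-reduce), so its cost per step scales with the number of model parameters and, depending on the algorithm, with the number of nodes.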
