Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs

11/16/2017
by   Shaohuai Shi, et al.

Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. Training deep neural networks (DNNs) involves many standard processes and algorithms, such as convolution and stochastic gradient descent (SGD), yet different frameworks can deliver different running performance even when training the same deep model on the same GPU hardware. In this paper, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet and TensorFlow) in single-GPU, multi-GPU and multi-node environments. We first build performance models of the standard processes in training DNNs with SGD, then benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogLeNet and ResNet-50), and finally analyze which factors cause the performance gap among the four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads that could be further optimized. The main contribution is two-fold. First, the testing results provide a reference for end users to choose the proper framework for their own scenarios. Second, the proposed performance models and the detailed analysis suggest further optimization directions in both algorithmic design and system configuration.
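The abstract describes modeling the per-iteration cost of synchronous SGD across GPUs. As an illustration of what such a performance model can look like, the sketch below combines compute time with a latency-bandwidth cost for ring all-reduce gradient exchange. The model form, constants, and function names are assumptions for illustration; they are not taken from the paper's actual models.

```python
# Illustrative latency-bandwidth performance model for one iteration of
# synchronous SGD with ring all-reduce gradient aggregation.
# All parameter values below are hypothetical, for illustration only.

def allreduce_time(msg_bytes, n_workers, latency_s, bandwidth_Bps):
    """Ring all-reduce cost: 2(n-1) steps, each moving msg_bytes/n."""
    if n_workers == 1:
        return 0.0  # no communication on a single GPU
    steps = 2 * (n_workers - 1)
    return steps * (latency_s + (msg_bytes / n_workers) / bandwidth_Bps)

def iteration_time(t_forward, t_backward, grad_bytes, n_workers,
                   latency_s=5e-6, bandwidth_Bps=10e9, overlap=0.0):
    """Per-iteration time: compute plus (partially overlapped) communication.

    overlap in [0, 1] models hiding communication behind backpropagation.
    """
    t_comm = allreduce_time(grad_bytes, n_workers, latency_s, bandwidth_Bps)
    return t_forward + t_backward + (1.0 - overlap) * t_comm

# Example: AlexNet-scale gradients (~240 MB) on 4 GPUs over 10 GB/s links
t = iteration_time(t_forward=0.020, t_backward=0.040,
                   grad_bytes=240e6, n_workers=4)
```

A model of this shape makes the scaling bottleneck visible: as `n_workers` grows, compute time per iteration stays fixed while the all-reduce term grows, which is the kind of communication overhead the paper's analysis isolates.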

Related research:

- Modeling and Evaluation of Synchronous Stochastic Gradient Descent in Distributed Deep Learning on Multiple GPUs (05/10/2018)
- swCaffe: a Parallel Framework for Accelerating Deep Learning Applications on Sunway TaihuLight (03/16/2019)
- Performance Evaluation of Deep Learning Tools in Docker Containers (11/09/2017)
- TBD: Benchmarking and Analyzing Deep Neural Network Training (03/16/2018)
- Performance and Power Evaluation of AI Accelerators for Training Deep Learning Models (09/15/2019)
- Analyzing Machine Learning Workloads Using a Detailed GPU Simulator (11/18/2018)
- A Generic Performance Model for Deep Learning in a Distributed Environment (05/19/2023)
