Modeling and Evaluation of Synchronous Stochastic Gradient Descent in Distributed Deep Learning on Multiple GPUs

05/10/2018 ∙ by Shaohuai Shi, et al. ∙ Hong Kong Baptist University ∙ The Hong Kong University of Science and Technology

With huge amounts of training data, deep learning has made great breakthroughs in many artificial intelligence (AI) applications. However, such large-scale data sets present computational challenges, requiring training to be distributed on a cluster equipped with accelerators like GPUs. With the fast increase of GPU computing power, the data communications among GPUs have become a potential bottleneck on the overall training performance. In this paper, we first propose a general directed acyclic graph (DAG) model to describe the distributed synchronous stochastic gradient descent (S-SGD) algorithm, which has been widely used in distributed deep learning frameworks. To understand the practical impact of data communications on training performance, we conduct extensive empirical studies on four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet and TensorFlow) over multi-GPU and multi-node environments with different data communication techniques, including PCIe, NVLink, 10GbE, and InfiniBand. Through both analytical and experimental studies, we identify the potential bottlenecks and overheads that could be further optimized. At last, we make the data set of our experimental traces publicly available, which could be used to support simulation-based studies.


I Introduction

Recently, deep learning (DL) techniques have achieved great success in many AI applications [1], such as image classification, speech recognition and generation, and natural language processing. Deep model training aims to learn a set of model parameters by iteratively minimizing a loss function [2][3]. With increases in the training data size and the complexity of deep models, it becomes necessary to scale out the time-consuming training jobs to GPU clusters through either model parallelization [4] or data parallelization [5][6][7][8][9]. This pushes the technology giants to deploy their cloud-based AI services with highly scalable deep learning tools. For example, Amazon adopts MXNet [10] as the main DL framework for its cloud service AWS; Google develops TensorFlow [11] for Google Cloud; and Microsoft develops the Microsoft Cognitive Toolkit (CNTK) [12] for Microsoft Azure.

When training a deep model on distributed systems, the computing tasks are typically carried out by a set of worker GPUs over many iterations. Each iteration includes two major steps: feed-forward and back propagation. Within each iteration, the GPUs need to exchange a huge amount of information about the model parameters or gradients. Most of the existing work focuses on optimizing the computing primitives in deep learning, such as dense and sparse matrix multiplication, convolution, FFT, etc. With the advance of GPU computing power and newly designed parallel algorithms (e.g., cuDNN [13]), the computing time of feed-forward and back propagation has been significantly reduced, making the data communication overhead a potential system bottleneck. Existing empirical studies have shown that current deep learning frameworks do not scale well on GPU clusters due to the overhead of data communications [14][15][16][17], but an in-depth study of the impact of data communications on overall training performance is still lacking.

In this paper, we investigate the impact of data communications on distributed training with the synchronous stochastic gradient descent (S-SGD) algorithm, which is widely used by mainstream deep learning frameworks. We first propose a general directed acyclic graph (DAG) model to describe the S-SGD algorithm. Different from the traditional DAG model, whose nodes represent only computing tasks, our DAG model includes two types of nodes: computing and communication nodes. The edges describe the precedence constraints between tasks. We use the DAG model to explore possible strategies for reducing the training time, and compare four distributed DL frameworks (i.e., Caffe-MPI [18], CNTK, MXNet and TensorFlow). We then conduct empirical studies on these DL frameworks, aiming at understanding how different data communication techniques (e.g., PCIe, NVLink, 10GbE and InfiniBand) and optimization strategies affect the training performance. Our major findings are summarized as follows:

  1. The different implementations of the S-SGD algorithm on all studied frameworks can be well described by our general DAG model. The speedup depends on three major factors: I/O performance, computing performance, and communication performance.

  2. Through the DAG model, we show two optimization opportunities: overlapping I/O with computing, and overlapping gradient aggregation with computing. In S-SGD with multiple GPUs, CNTK does not hide the overhead of gradient communication, while Caffe-MPI, MXNet and TensorFlow parallelize the gradient aggregation with the gradient computation. By hiding the overhead of gradient communication, the scaling performance could be improved.

  3. None of the frameworks scales well on the most advanced GPUs (Nvidia Tesla V100) when Tensor Cores are utilized. The current implementations of inter-node gradient communication via 100Gbps InfiniBand are still not efficient enough to match the computing power of the V100.

  4. On certain deep neural networks, we observe very low utilization of network bandwidth on InfiniBand due to the layer-wise pattern of data communications during back propagation.

The rest of the paper is organized as follows. Section II presents some related work. Section III introduces the SGD and S-SGD algorithms. We propose our general DAG model and discuss different performance optimization strategies in Section IV. Our experiments and analysis are presented in Section V, followed by the introduction to the published layer-wise trace data set in Section VI. We conclude the paper in Section VII.

II Related Work

Data-parallel synchronous SGD (S-SGD) is widely used in the training of deep neural networks to scale to multiple GPUs or machines without affecting convergence [19][9][3][6][7][20], and has hence become a built-in component of mainstream DL frameworks such as Caffe-MPI, CNTK, MXNet and TensorFlow. However, because of the vendors' different software design philosophies, these frameworks implement S-SGD differently, so their scaling performance varies. Bahrampour et al. [14] and Shi et al. [15] have evaluated the performance of some DL frameworks on a single GPU. In the distributed environment, Shams et al. [16] have studied the performance of Nvidia's NVLink and Intel's Knights Landing on different CPU and GPU technologies. However, some other popular DL frameworks (e.g., Caffe-MPI, CNTK and MXNet) are not evaluated in [16]. In addition, an in-depth analysis of the scalability of the S-SGD algorithm on distributed clusters is still missing. Shi et al. [21] evaluated the same distributed frameworks with high-speed networks, but they did not provide a comparison between slow- and high-speed connections. The main differences between this work and [21] are that we build a directed acyclic graph model to generalize the performance of S-SGD and compare the impact of high- and slow-speed networks on training DNNs.

S-SGD requires the set of computing units (e.g., GPUs) to exchange data iteratively, which can be implemented by either parameter server (PS) based methods [22][23] or decentralized methods. In PS-based methods, one or more parameter servers store the global model. The PS aggregates the updates at each iteration, updates the model, and then pushes the updated model to each computing unit. Performance models have been built by S. Zou et al. [24] to generalize the performance of PS-based methods, providing guidelines for better system scalability.

Decentralized methods implement the gradient aggregation using reduction trees (RT) or ring-based all-reduce [25][26][27]. The gradients are exchanged via MPI-like collectives (e.g., all-reduce). Very recently, new collective communication libraries like Gloo (https://github.com/facebookincubator/gloo) and NCCL2 (https://developer.nvidia.com/nccl) have been developed to support efficient communications among a set of GPUs. A. Awan et al. [27][28] propose a high-performance CUDA-aware MPI to reduce the overhead of data communications across a GPU cluster. He et al. have shown that an optimized all-reduce implementation and the pipelining of all-reduce operations with gradient computation can lead to very good scalability [29].
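To make the ring-based all-reduce concrete, the sketch below simulates its two phases (reduce-scatter, then all-gather) in plain NumPy. This is an illustrative simulation of the standard scheme, not the NCCL or Gloo implementation; the function name and chunk indexing are ours.

```python
import numpy as np

def ring_allreduce(grads):
    """Simulate a ring all-reduce that averages one gradient tensor
    across n workers.  `grads` is a list of equal-shaped 1-D arrays.

    Phase 1 (reduce-scatter): after n-1 steps, each worker holds the
    full sum of one chunk.  Phase 2 (all-gather): the completed chunks
    are circulated so every worker holds the full averaged tensor.
    """
    n = len(grads)
    # Split each worker's tensor into n chunks (on copies, so the
    # caller's arrays are not modified).
    chunks = [np.array_split(g.astype(float).copy(), n) for g in grads]

    # Reduce-scatter: in step s, worker i sends chunk (i - s) mod n to
    # its right neighbour, which accumulates it into its own copy.
    for s in range(n - 1):
        sent = [chunks[i][(i - s) % n].copy() for i in range(n)]
        for i in range(n):
            left = (i - 1) % n
            chunks[i][(left - s) % n] += sent[left]

    # All-gather: worker i now owns the full sum of chunk (i + 1) mod n;
    # circulate these completed chunks around the ring.
    for s in range(n - 1):
        sent = [chunks[i][(i + 1 - s) % n].copy() for i in range(n)]
        for i in range(n):
            left = (i - 1) % n
            chunks[i][(left + 1 - s) % n] = sent[left]

    return [np.concatenate(c) / n for c in chunks]
```

Each of the 2(n-1) steps moves only 1/n of the tensor per worker, which is why ring all-reduce uses bandwidth efficiently regardless of the number of workers.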

Different from the above studies, in this paper we first propose a DAG model to generalize the workflow of distributed training with S-SGD, and then study several state-of-the-art deep learning frameworks under multi-GPU and multi-node environments through theoretical analysis and real-world experiments.

III SGD and S-SGD

In this section, the algorithms of SGD and S-SGD are introduced. For easy reference, some mathematical notations are summarized in Table I.

Name Description
N # of machines in the cluster
g # of GPUs on each node
N_g # of total GPUs, N_g = N × g
M # of training samples per GPU in a mini-batch
L The number of layers in a deep neural network
t_iter Time of an iteration
t_io Time of I/O in each iteration
t_h2d Data transfer time from CPU memory to GPU memory in each iteration
t_f Time of the forward phase in each iteration
t_f^(l) Time of the forward phase of layer l in each iteration
t_b Time of the backward phase in each iteration
t_b^(l) Time of the backward phase of layer l in each iteration
t_u Time of the model update in each iteration
t_c Time of the gradients aggregation in each iteration
t_c^(l) Gradients aggregation time of layer l in each iteration
t_comm^no Time of non-overlapped gradient communications
TABLE I: Summary of notations

III-A Mini-batch SGD

Assume that an L-layer model is trained with mini-batch SGD on a GPU; the layer-wise parameters of the model are updated iteratively. Each iteration generally contains five steps: 1) Fetch data: load a mini-batch of training data from the disk or the cache; 2) Data transfer through PCIe: transfer the training data from CPU memory to GPU memory; 3) Feed-forward: use the GPU to perform feed-forward calculations from layer 1 to layer L; 4) Back propagation: use the GPU to calculate gradients from layer L back to layer 1; 5) Update: the model is updated with the gradients calculated in the previous step. The time of one iteration can be represented by

t_iter = t_io + t_h2d + t_f + t_b + t_u    (1)
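The decomposition in Eq. (1) is what one measures in practice to decide whether a job is I/O-, compute-, or communication-bound. A small hypothetical helper (the step names and example timings are ours, for illustration only):

```python
def iteration_breakdown(times):
    """Eq. (1): t_iter = t_io + t_h2d + t_f + t_b + t_u.

    `times` maps step names (e.g., "io", "h2d", "f", "b", "u") to
    measured seconds for one iteration.  Returns the total iteration
    time and each step's fraction of it.
    """
    t_iter = sum(times.values())
    shares = {step: t / t_iter for step, t in times.items()}
    return t_iter, shares

# Hypothetical per-step timings (in seconds) for one iteration.
t_iter, shares = iteration_breakdown(
    {"io": 0.010, "h2d": 0.005, "f": 0.040, "b": 0.080, "u": 0.015})
```

With these example numbers the backward phase dominates (just over half of the iteration), which is exactly the situation in which overlapping communication with back propagation pays off.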

III-B S-SGD on multiple GPUs

The pseudo-code of S-SGD is shown in Algorithm 1. S-SGD makes each GPU perform feed-forward and backward propagation in parallel with different training data on the same model. Compared to SGD, S-SGD contains six steps, and the first four steps are the same as in SGD (i.e., fetch data, data transfer through PCIe, feed-forward and back propagation). The fifth step is an extra operation (i.e., gradient aggregation) before updating the model in the sixth step. The iteration time of the naive S-SGD implementation can be represented as:

t_iter = t_io + t_h2d + t_f + t_b + t_c + t_u    (2)

In the single-GPU environment, t_c = 0, so Eq. (2) reduces to Eq. (1).

1: procedure S-SGD(parameters, data, N_g)
2:     for g = 1 → N_g in parallel do
3:         o_g ← FeedForward(parameters, d_g)    ▷ d_g: GPU g's local mini-batch
4:         ∇_g ← BackPropagation(o_g)
5:     end for
6:     Synchronous()    ▷ wait until all GPUs finish
7:     ∇ ← (1/N_g) Σ_{g=1..N_g} ∇_g    ▷ gradient aggregation
8:     for g = 1 → N_g in parallel do
9:         UpdateModel(parameters, ∇)
10:     end for
11: end procedure
Algorithm 1 S-SGD
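The defining property of Algorithm 1 is that averaging the per-GPU gradients reproduces the gradient over the combined N_g × M mini-batch, so S-SGD changes throughput but not the optimization trajectory. A toy NumPy simulation (not framework code; the linear least-squares model and all names are ours):

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of the mean-squared-error loss 0.5*mean((Xw - y)^2)."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
N_g, M, dim = 4, 8, 3          # 4 simulated GPUs, 8 samples per GPU
w = rng.normal(size=dim)
X = rng.normal(size=(N_g * M, dim))
y = rng.normal(size=N_g * M)

# Lines 2-5 of Algorithm 1: each "GPU" computes a gradient on its shard.
local_grads = [grad_mse(w, X[i * M:(i + 1) * M], y[i * M:(i + 1) * M])
               for i in range(N_g)]

# Lines 6-7: synchronization and gradient aggregation (averaging).
agg = np.mean(local_grads, axis=0)

# Lines 8-10: every replica applies the same update.
lr = 0.1
w_new = w - lr * agg
```

Because all shards have equal size M, the average of the shard gradients equals the gradient over the full N_g × M batch, which is what the `assert` in the test verifies.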

IV A DAG Model of S-SGD

In this section, we first define two types of tasks in distributed training of deep neural networks, and then propose a general directed acyclic graph (DAG) model for S-SGD based training. After that, we apply the DAG model to discuss different optimization strategies in Caffe-MPI, CNTK, MXNet and TensorFlow.

IV-A Definitions

We define the following two types of tasks in a training job:

  • Computing task, whose resource requirement is mainly on the computational units (e.g., CPUs and GPUs).

  • Communication task, whose resource requirement is the disk I/O or the interconnect (e.g., PCIe, NVLink, Ethernet, and InfiniBand) between computing units.

Considering S-SGD, in the first step, each GPU needs to fetch M data samples separately, and the data is read from the disk, so fetching samples can be regarded as a communication task. In the second step, the fetched data is transferred to the GPUs via PCIe, so we regard each transfer of M samples between CPU memory and GPU memory as a communication task. In the third and fourth steps, the data is fed forward through the network layer by layer and the gradients are then back propagated layer by layer, both of which require the GPU to carry out calculations; so each layer's feed-forward is regarded as a computing task, and so is each layer's back propagation. In the fifth step, the gradients of each layer are aggregated across all GPUs via intra-node (e.g., PCIe and NVLink) and/or inter-node (e.g., Ethernet and InfiniBand) communications, which is regarded as a communication task.

IV-B A DAG model of S-SGD

A directed acyclic graph (DAG) is a popular approach to modeling complex computing jobs, in which the nodes represent computing tasks and the edges represent precedence constraints between tasks. In this paper, we focus on distributed training jobs that involve extensive data communications. To this end, we introduce communication nodes into the DAG to represent the communication tasks throughout the training job. To be more specific, a job is represented by a DAG G = (V_p, V_c, E), where V_p, V_c and E are the sets of computing nodes (or tasks), communication nodes (or tasks), and directed edges, respectively. A directed edge from node u to node v represents the precedence constraint that task v can only begin after task u is finished.

A DAG example of a distributed training job using S-SGD is shown in Fig. 1. The training job is to train a 3-layer model using 4 GPUs. The yellow circle nodes represent the computing tasks; the orange square nodes represent the communication tasks; and the directed edges represent the precedence constraints between tasks. Tasks r_1-r_4 read the training data from the disk or the network file system, and are classified as communication tasks. Tasks m_1-m_4 transfer the data from CPU memory to GPU memory, and are also classified as communication tasks. Tasks f_{1,1}-f_{1,4} represent the feed-forward computing tasks of layer 1 on the four GPUs, followed by tasks f_{2,1}-f_{2,4} for layer 2 and tasks f_{3,1}-f_{3,4} for layer 3. Tasks b_{3,1}-b_{3,4} represent the back propagation computing tasks of layer 3, followed by tasks b_{2,1}-b_{2,4} for layer 2 and tasks b_{1,1}-b_{1,4} for layer 1. Tasks c_3, c_2 and c_1 aggregate the gradients of layers 3, 2 and 1, respectively, which can be implemented by all-reduce communication tasks. Task u updates the model; it is a computing task that depends on c_1, c_2 and c_3. Tasks r'_1-r'_4 represent the loading of another mini-batch of data for the next iteration.

Fig. 1: A DAG model of training a 3-layer neural network with 4 GPUs.
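A DAG like the one in Fig. 1 can be evaluated numerically: given a duration for every computing and communication node, the earliest-finish time of the final node is the iteration time. The sketch below computes this critical-path length; the task names and durations are made-up, and unlimited parallelism is assumed (real communications also contend for bandwidth), so this is an optimistic estimate rather than a full scheduler.

```python
from collections import defaultdict, deque

def makespan(durations, edges):
    """Earliest completion time of a DAG of tasks.

    durations: {task: time}.  edges: (u, v) pairs meaning task v may
    start only after task u finishes.  With unlimited parallelism the
    result equals the critical-path length.
    """
    succ = defaultdict(list)
    indeg = {t: 0 for t in durations}
    est = {t: 0.0 for t in durations}      # earliest start times
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in durations if indeg[t] == 0)
    finish = 0.0
    while ready:                            # Kahn's topological order
        u = ready.popleft()
        done = est[u] + durations[u]
        finish = max(finish, done)
        for v in succ[u]:
            est[v] = max(est[v], done)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return finish

# A single-GPU slice of a 2-layer iteration: read -> h2d -> forward
# (f1, f2) -> backward (b2, b1); communications c2, c1 may overlap
# with the remaining backward computation; update u waits for both.
tasks = {"r": 2, "m": 1, "f1": 1, "f2": 1, "b2": 2, "b1": 2,
         "c2": 3, "c1": 3, "u": 1}
edges = [("r", "m"), ("m", "f1"), ("f1", "f2"), ("f2", "b2"),
         ("b2", "b1"), ("b2", "c2"), ("b1", "c1"),
         ("c2", "u"), ("c1", "u")]
```

With these numbers the iteration takes 13 time units, versus 16 if the communications were serialized after all backward computation, illustrating the benefit of overlap that the next subsection discusses.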

IV-C Optimization opportunities

According to the proposed DAG model for S-SGD, the computing tasks of a new iteration cannot begin before the model update task of the previous iteration finishes. We can observe two possible optimization opportunities with pipelining techniques. The first one is to parallelize the next iteration's data-reading tasks (tasks r'_1-r'_4) with the computing tasks of the current iteration, which could hide the time cost of disk I/O. The second one is to parallelize the gradient communication tasks (tasks c_3-c_1) with the back propagation computing tasks (tasks b_{2,1}-b_{1,4}).

Overlapping I/O with computing. The data-fetching tasks of an iteration have no edge connections with the computing tasks of the previous iteration, so one can parallelize these two types of tasks such that the I/O time overlaps with the computing time. Taking the example of Fig. 1, the next iteration's reading tasks r'_1-r'_4 can begin immediately after the transfer tasks m_1-m_4 have finished, and can then be followed by the transfer tasks m'_1-m'_4. In such a way, the computing tasks f'_{1,1}-f'_{1,4} of the next iteration can begin immediately after the model update task u is finished. The average iteration time when we overlap I/O with computing can be estimated by

t_iter = max{t_io + t_h2d, t_f + t_b + t_c + t_u}    (3)

Notice that if the tasks of data transfer from CPU memory to GPU memory begin immediately after the data fetching is finished, the system requires extra GPU memory to hold the new training data.

Overlapping gradient communication with computing. The gradient communication tasks can also be parallelized with the back propagation computing tasks. For instance, the layer-3 aggregation task c_3 can be parallelized with the backward computing tasks b_{2,1}-b_{2,4} and b_{1,1}-b_{1,4}, and the layer-2 aggregation task c_2 can be parallelized with b_{1,1}-b_{1,4}. This strategy is also known as the wait-free back-propagation (WFBP) algorithm [30][27]. Let τ_l and ε_l denote the start and end times of the gradient communication of layer l during one iteration, measured from the beginning of back propagation. Then the iteration time is

t_iter = t_io + t_h2d + t_f + ε_1 + t_u    (4)

where ε_1 ≥ ρ_1 + t_b^(1) + t_c^(1), and t_b^(1) is the gradient computation time of the last layer processed in back propagation. Since the gradient communication task of layer l depends on the gradient computing task of layer l, we have ε_l ≥ τ_l + t_c^(l) and τ_l ≥ ρ_l + t_b^(l), where ρ_l is the start time of the gradient computing task of layer l. We use t_comm^no = ε_1 - t_b to denote the non-overlapped time cost of the gradient communication tasks. Thus, we have

t_iter = t_io + t_h2d + t_f + t_b + t_comm^no + t_u    (5)
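The constraints above turn into a simple recurrence when the communications are serialized on the interconnect. A sketch (the function name is ours; it assumes back propagation proceeds without stalls, i.e., each layer's gradient computation starts as soon as the previous one finishes) that computes ε_1 and the non-overlapped communication time of Eq. (5):

```python
def wfbp_times(t_b, t_c):
    """Wait-free back-propagation timing.

    t_b, t_c: per-layer backward-compute and gradient-communication
    times, listed in back-propagation order (layer L first, layer 1
    last).  Communications are serialized: each starts once its own
    gradient and the previous communication are both done.  Returns
    (eps_1, t_comm_no): the finish time of the last communication,
    measured from the start of back propagation, and the
    non-overlapped communication time eps_1 - sum(t_b).
    """
    rho = 0.0   # start of the current layer's gradient computation
    eps = 0.0   # finish time of the previous communication
    for tb, tc in zip(t_b, t_c):
        grad_done = rho + tb                # this layer's gradient ready
        eps = max(eps, grad_done) + tc      # comm waits for both events
        rho = grad_done                     # next layer computes at once
    return eps, eps - sum(t_b)

# Three layers with 2 units of compute and 3 of communication each:
eps_1, t_comm_no = wfbp_times([2, 2, 2], [3, 3, 3])
```

Here ε_1 = 11 and only 5 of the 9 units of communication remain exposed; a schedule with no overlap would expose the full t_c = 9.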

Let t_iter(N_g) and t_io(g) denote the iteration time with a total of N_g GPUs and the I/O time with g GPUs per machine, respectively. The speedup of using N_g GPUs over a single GPU can be formulated as

S = N_g · t_iter(1) / t_iter(N_g)    (6)

Therefore, in order to achieve good scalability, one should reduce the overheads of I/O and gradient communications.

Regarding the I/O overhead, all DL frameworks exploit multi-threading to read and buffer data for GPU computing. However, except for Caffe-MPI, the other three frameworks do not use GPU buffers to parallelize the tasks of transferring data from CPU memory to GPU memory. In other words, Caffe-MPI starts the CPU-to-GPU transfer tasks for the next iteration as soon as the corresponding data-reading tasks are finished, while CNTK, MXNet and TensorFlow wait until the model update task u is finished. Regarding the gradient communication overhead, Caffe-MPI, MXNet and TensorFlow overlap the gradient communication tasks with the back propagation computing tasks, while CNTK does not. For example, the layer-3 aggregation task c_3 begins as soon as the layer-3 backward tasks b_{3,1}-b_{3,4} are finished in Caffe-MPI, MXNet and TensorFlow, while it is executed only after the layer-1 backward tasks b_{1,1}-b_{1,4} are finished in CNTK. So we have t_comm^no < t_c in Caffe-MPI, MXNet and TensorFlow, but t_comm^no = t_c in CNTK. This model is also verified against the experimental results in Section V.

V Experiments and Analysis

In this section, we first introduce the experimental environment, and then we present the experiment design and methods with the purpose of identifying how communication tasks impact the scalability of S-SGD.

V-a Experimental environment

We use two different 4-node GPU clusters for the experiments. Cluster 1 uses the slow intra/inter connections (i.e., PCIe and 10GbE) between Nvidia Tesla K80 GPUs, and Cluster 2 uses the fast intra/inter connections (i.e., NVLink and 100Gbps InfiniBand) between Nvidia Tesla V100 GPUs. The autoboost feature is disabled on all GPUs. Both K80 and V100 GPUs run at their default frequencies (i.e., 562 MHz and 1370 MHz, respectively). Table II shows the hardware setting of the two GPU clusters.

Hardware Cluster 1 Cluster 2
GPU (Nvidia) Tesla K80 GPU x4 Tesla V100 GPU x4
Connection PCIe (15GB/s) NVLink (95GB/s)
CPU (Intel) Xeon E5-2650v4 Dual Xeon Gold 6126 CPU Dual
Network 10Gbps Ethernet 100Gbps InfiniBand
Memory 256 GB (3.5GB/s) 256 GB (3.5GB/s)
Storage system NFS (1.1GB/s) SSD (367.30MB/s)
  • Note: The speed of the GPU interconnects is measured by p2pBandwidthLatencyTest from the CUDA SDK samples. The memory speed and storage performance are both measured with a benchmarking utility.

TABLE II: The experimental hardware setting.

The operating system of the K80 GPU cluster is CentOS 7.2 with CUDA-8.0 installed, while the newer V100 GPU cluster runs CentOS 7.3 with CUDA-9.0. We choose four state-of-the-art distributed DL frameworks at their latest release versions (at the time of our experiments) for evaluation. The versions of the tested frameworks installed on the two clusters are shown in Table III. The versions of the CUDA-related libraries are cuDNN v7.0 and NCCL v2.1.5.

Software Cluster 1 Cluster 2
Caffe-MPI 2.0 2.0
CNTK 2.3 2.4
MXNet 1.1.0 1.1.0
TensorFlow 1.7 1.7
TABLE III: The software versions used in the experiments.

V-B Methodology

Three popular CNNs (i.e., AlexNet [31], GoogleNet [32] and ResNet-50 [29]), which have been successfully applied on the ILSVRC-2012 ImageNet data set [33], are used for the performance comparison under different software and hardware configurations. The details of the tested CNNs are shown in Table IV. Notice that the machines in Cluster 1 share the data set via NFS, while each machine in Cluster 2 has a complete copy of the data set. The data formats for different frameworks are not the same, and we use the methods proposed in [21] to fetch data when running the experiments.

Network Number of Layers Number of Parameters Batch size
AlexNet 8 ~60 million 1024
GoogleNet 22 ~53 million 64
ResNet-50 50 ~24 million 32
  • Note: The local response normalization (LRN) operation in AlexNet is excluded because it is not supported by CNTK by default. The batch size specified here is for a single GPU; a total of N_g × (batch size) samples are processed per iteration on N_g GPUs.

TABLE IV: The tested deep neural networks.

For each experiment, we run more than 100 iterations to calculate the average training time of one mini-batch, and the performance of the training system is reported as the average number of samples processed per second.

V-C Experimental Results and Analysis

We first present the results of multiple GPUs on a single node to illustrate the impact of intra-node communications (i.e., PCIe and NVLink), and then present the results of GPU clusters to show the impact of inter-node communications (i.e., 10GbE and 100Gbps InfiniBand). In this paper, we adopt weak scaling, which means the effective mini-batch size scales with the number of GPUs while each GPU keeps the same number of samples [7]. The performance is measured by the throughput of the system (i.e., the number of training samples that can be processed per second). Ideally, the system performance should be proportional to the number of GPUs.
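The throughput and weak-scaling efficiency used in the analysis below can be computed as follows (a hypothetical helper; the function and parameter names are ours):

```python
def throughput(batch_per_gpu, n_gpus, iter_times):
    """Average samples/second under weak scaling: each GPU keeps
    `batch_per_gpu` samples, so one iteration processes a total of
    batch_per_gpu * n_gpus samples."""
    avg_iter = sum(iter_times) / len(iter_times)
    return batch_per_gpu * n_gpus / avg_iter

def scaling_efficiency(tput_n, tput_1, n_gpus):
    """Measured throughput on n GPUs relative to ideal linear scaling
    of the single-GPU throughput (1.0 means perfectly linear)."""
    return tput_n / (n_gpus * tput_1)
```

For example, 4 GPUs with 32 samples each and a 0.5s average iteration give 256 samples/s; if one GPU alone delivers 70 samples/s, the scaling efficiency is 256 / (4 × 70) ≈ 0.91.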

(a) The server with K80 GPUs and PCIe
(b) The server with V100 GPUs and NVLink
Fig. 2: Scaling performance on a single node.

V-C1 Multiple GPUs on a Single Machine

Fig. 2 shows the scaling performance of four DL frameworks running on a machine with one, two, and four GPUs. The baseline is the performance of a single GPU.

On the K80 server (Fig. 2a), all frameworks achieve good scaling efficiencies (up to 95%), except that CNTK and TensorFlow do not perform well on AlexNet with 4 GPUs. This is mainly because we use a much larger batch size for AlexNet and the cost of data preprocessing is proportional to the mini-batch size. The data sets for Caffe-MPI and MXNet are pre-converted into binary formats and need no further decoding during training, while CNTK and TensorFlow need to decode the JPEG files on CPUs before the data can be transferred to the GPUs. Since a large number of samples (4096 images per iteration for 4 GPUs) need to be decoded, the decoding on CPUs takes a relatively long time compared to the GPU computing tasks, which results in the poor scaling efficiency of CNTK and TensorFlow.

On the V100 server (Fig. 2b), the speedup of every framework is worse than that achieved on the K80 server, even though the high-speed NVLink is used. From Eq. (6), we can see that the speedup depends on three key factors: the I/O speed, the GPU computing performance, and the gradient communication performance. Faster computing requires faster I/O and communication to maintain a good scaling efficiency. Notice that the K80 GPU has 4.37 TFlops of peak computation capability, while the V100 GPU has 125 TFlops of peak computation capability with Tensor Cores. Our experiments show that the V100 is about 10x faster than the K80 on the computing tasks. However, the storage system of the V100 server is about 3x slower than that of the K80 server, so on I/O-bound neural networks like AlexNet, the scaling efficiency on the V100 server is much worse than on the K80 server. Regarding GoogleNet and ResNet-50, which require a small number of samples per iteration, the I/O time is negligible; but the gradient communication turns out to limit the scalability, because NVLink is only about 6x faster than PCIe while the computation is about 10x faster.

(a) The K80 cluster with 10GbE
(b) The V100 cluster with 100Gbps InfiniBand
Fig. 3: Scaling performance with multiple machines (each machine has 4 GPUs).

V-C2 Multiple GPUs on Multiple Machines

We show the speedups of multiple machines with up to 16 GPUs in Fig. 3. The baseline is the performance of a single server with 4 GPUs.

In general, we observe that all frameworks scale better on the slow K80 cluster than on the fast V100 cluster. On the K80 cluster, the 10Gbps Ethernet is good enough to serve the gradient communications such that they can be fully overlapped with the computing tasks. For example, Caffe-MPI and MXNet achieve nearly linear speedup on GoogleNet and ResNet-50 due to their careful design of gradient aggregation. On AlexNet, however, due to its large mini-batch size and huge number of model parameters (about 60 million), none of the frameworks scales well on the 4-node cluster. On ResNet-50, TensorFlow performs the worst, mainly because it uses gRPC (https://grpc.io/) for gradient communications, which results in relatively high latencies compared to NCCL2, which is used by Caffe-MPI and CNTK.

On the V100 cluster, it can be seen that except for Caffe-MPI, the other three frameworks scale poorly across multiple machines. With V100 GPUs, the time cost of the computing tasks is significantly reduced, while the overhead of inter-node gradient communication becomes much larger than in the intra-node case (12.5GB/s for InfiniBand vs. 95GB/s for NVLink). As we have already seen in Fig. 2, even the fast intra-node communication cannot be totally hidden, so the slower communication through the network results in worse scaling efficiencies. Taking the training of ResNet-50 with Caffe-MPI as an example, the back propagation time on a K80 GPU is about 0.243s, while the overhead of gradient communication is about 0.23s, so the communication can be hidden to achieve a nearly linear scaling efficiency. On the V100 GPUs, however, the back propagation time is reduced to 0.0625s while the cost of gradient communication is about 0.0797s, so the system becomes communication-bound. We also notice that the communication efficiency on 100Gbps InfiniBand with NCCL2 is low when training ResNet-50, which indicates a large room for further optimization.

To summarize, we identify three main factors for improving the scaling efficiency: overlapping data pre-fetching with computing, overlapping communications with computing, and using efficient data exchange algorithms. CNTK, MXNet and TensorFlow implement only some of these optimization strategies. Caffe-MPI considers all three factors and achieves the best scaling performance, but there is still room for improvement because even NCCL2 achieves only limited communication efficiency on the 100Gbps InfiniBand network when training ResNet-50.

Name Description
B_d The bandwidth of the hard disk, measured with a benchmarking command
B_p The bandwidth of PCIe, measured via the CUDA SDK samples
D The size of the input data, which is related to the mini-batch size
t_io The time of I/O, calculated as D / B_d
t_h2d The time of data transfer from the CPU side to the GPU side, calculated as D / B_p
t_f^(l) The layer-wise feed-forward time, measured from Caffe-MPI, which invokes the cuDNN library
t_b^(l) The layer-wise backward propagation time, measured from Caffe-MPI, which invokes the cuDNN library
t_c^(l) The layer-wise gradient communication time, measured from Caffe-MPI, which invokes the NCCL2 library
TABLE V: The measurements used for the DAG-based prediction.

V-D Accuracy of the DAG model

Fig. 4: Comparison of DAG-based prediction and measurement results on the K80 cluster and the V100 cluster.

In this section, we demonstrate the accuracy of the DAG model by comparing the predicted speedup with the experimental results of Caffe-MPI.

To predict the time performance with the DAG model, we need to measure the duration of each task of the evaluated CNNs on the specific platforms. The measurement details for the two clusters are shown in Table V.

Using the known time of each operation in Fig. 1 on the two platforms (i.e., the V100 GPU cluster connected with 100Gbps InfiniBand and the K80 GPU cluster connected with 10Gbps Ethernet), we can predict the average iteration time when training AlexNet, GoogleNet and ResNet-50. The comparison between the DAG-based predictions and the measurements of Caffe-MPI is shown in Fig. 4. The average prediction errors are 9.4%, 4.7%, and 4.6% on AlexNet, GoogleNet and ResNet-50, respectively. On AlexNet, it is clear that the speedup over multiple fast V100 GPUs can hardly be linear: even with optimal DAG scheduling, the communication time of the gradients cannot be hidden by the computation time.

The DAG model can serve as a fundamental tool for performance optimization and task scheduling.

VI Layer-wise trace data set

Id Name Forward Backward Comm. Size
0 data 1.20e+06 0 0 0
1 conv1 3.27e+06 288202 123.424 139776
2 relu1 17234.5 27650.9 0 0
3 pool1 32175.7 60732.6 0 0
4 conv2 3.14e+06 1.03216e+06 292.032 1229824
5 relu2 11507.5 18422.5 0 0
6 pool2 19831.2 32459 0 0
7 conv3 3.886e+06 791825 288214 3540480
8 relu3 4770.3 10996.3 0 0
9 conv4 1.87e+06 510405 1.03218e+06 2655744
10 relu4 4760.26 7872.45 0 0
11 conv5 1.13e+06 306129 275772 1770496
12 relu5 3201.22 4939.42 0 0
13 pool5 5812 18666.2 0 0
14 fc6 44689.7 73935 311170 151011328
15 relu6 295.168 1092.83 0 0
16 drop6 359.744 131247 0 0
17 fc7 19787.8 34423.8 610376 67125248
18 relu7 295.04 451.904 0 0
19 drop7 358.048 317.312 0 0
20 fc8 8033.12 9922.72 130964 16388000
21 loss 1723.49 293.024 0 0
TABLE VI: An example of the trace data with one iteration of AlexNet on the K80 GPU.

We make the trace data set from Caffe-MPI publicly available (download address: http://dlbench.comp.hkbu.edu.hk/s/data/traces.zip), which could be used for further simulation studies (e.g., task scheduling and communication optimizations) by those who do not have access to the expensive GPUs. The trace data set includes the layer-wise time costs of the three evaluated CNNs (i.e., AlexNet, GoogleNet and ResNet-50) on the V100 GPU cluster and the K80 GPU cluster. Each record contains the time of feed-forward, back propagation, intra/inter-node communication, and the size of gradients in an iteration.

Each trace file contains 100 iterations of the layer-wise time costs; one can average over iterations for more accurate measurements. An example with one iteration of AlexNet on two K80 GPUs is shown in Table VI. There are 22 layers, including the data layer and some non-learnable layers like activations. Each line records the time performance of one layer of the CNN. The meaning of each column in the trace file is as follows:

  • The first column indicates the layer id;

  • The second column indicates the pre-defined name of that layer;

  • The third column is the feed-forward time of that layer in microseconds;

  • The fourth column is the backward propagation time in microseconds;

  • The fifth column is the gradient communication time in microseconds; a zero value indicates that the layer is not learnable (i.e., it has no gradients to exchange);

  • The sixth column is the size, in bytes, of the gradients that need to be exchanged among GPUs, which equals the size of the model parameters of that layer.
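A trace file in this layout can be consumed with a few lines of Python. The parser below is a hypothetical sketch for the whitespace-separated format of Table VI; the function and field names are ours, not part of the released data set.

```python
def parse_trace(lines):
    """Parse layer-wise trace records laid out as in Table VI:
    id, name, forward (us), backward (us), communication (us),
    gradient size (bytes)."""
    records = []
    for line in lines:
        parts = line.split()
        if len(parts) != 6 or not parts[0].isdigit():
            continue                 # skip the header or malformed lines
        records.append({
            "id": int(parts[0]),
            "name": parts[1],
            "forward": float(parts[2]),
            "backward": float(parts[3]),
            "comm": float(parts[4]),
            "size": int(parts[5]),
        })
    return records

# Two sample lines taken from Table VI:
sample = [
    "Id Name Forward Backward Comm. Size",
    "1 conv1 3.27e+06 288202 123.424 139776",
    "2 relu1 17234.5 27650.9 0 0",
]
layers = parse_trace(sample)
total_comm = sum(r["comm"] for r in layers)   # microseconds
```

From the parsed records one can, for example, sum `comm` over the learnable layers (those with `size > 0`) to estimate the per-iteration communication volume used in a simulation.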

VII Conclusion and Future Work

This work aims to understand the impact of data communication techniques on the distributed training performance of deep neural networks. We first propose a DAG model to describe the workflow of synchronous SGD (S-SGD) in deep learning. Through the DAG model, we identify that the communication tasks (including the I/O tasks) can limit the scaling efficiency of the system. We then conduct extensive empirical studies on the performance of four state-of-the-art DL frameworks (Caffe-MPI, CNTK, MXNet and TensorFlow) by training three DNNs (AlexNet, GoogleNet and ResNet-50) across multiple GPUs and multiple machines. Our experimental results and analysis reveal performance gaps among the four distributed DL frameworks caused by their different optimization strategies. We also show that even the most advanced NVLink and InfiniBand interconnects cannot keep up with the fast growth of GPU computing power. This calls for more research effort from the data communication and networking community to address the communication issues in deep learning.

In future work, we will further optimize the pipelining between gradient exchange and backward propagation operations to achieve better effective bandwidth, since current implementations do not utilize the network resources well.


References