Asynchronous Decentralized Parallel Stochastic Gradient Descent

10/18/2017, by Xiangru Lian et al.

Recent work shows that decentralized parallel stochastic gradient descent (D-PSGD) can outperform its centralized counterpart both theoretically and practically. While asynchronous parallelism is a powerful technique for improving the efficiency of parallelism in distributed machine learning platforms and has been widely used in popular machine learning software and solvers based on centralized parallel protocols, such as TensorFlow, it remains unclear how to apply asynchronous parallelism to improve the efficiency of decentralized parallel algorithms. This paper proposes an asynchronous decentralized parallel stochastic gradient descent algorithm (AD-PSGD) that brings asynchronous parallelism to decentralized algorithms. Our theoretical analysis provides the convergence rate, or equivalently the computational complexity, which is consistent with known results for many special cases and shows that linear speedup is achievable as the number of nodes or the batch size increases. Extensive experiments in deep learning validate the proposed algorithm.


1 Introduction

It often takes hours to train large deep learning models on tasks such as ImageNet, even with hundreds of GPUs (Goyal et al., 2017). At this scale, how workers communicate becomes a crucial design choice. Most existing systems, such as TensorFlow (Abadi et al., 2016), MXNet (Chen et al., 2015), and CNTK (Seide and Agarwal, 2016), support two communication modes: (1) synchronous communication via parameter servers or AllReduce, or (2) asynchronous communication via parameter servers. When there are stragglers (i.e., slower workers) in the system, which is common especially at the scale of hundreds of devices, asynchronous approaches are more robust. However, most asynchronous implementations have a centralized design, as illustrated in Figure 1(a): a central server holds the shared model for all other workers, and each worker computes its own gradients and updates the shared model asynchronously. The parameter server may become a communication bottleneck and slow down convergence. We focus on the question: Can we remove the central server bottleneck in asynchronous distributed learning systems while maintaining the best possible convergence rate?

Figure 1: Centralized network and decentralized network.
Algorithm | Communication complexity (n.t./n.h.) | Idle time
S-PSGD (Ghadimi et al., 2016) | Long ($O(n)$/$O(n)$) | Long
A-PSGD (Lian et al., 2015) | Long ($O(n)$/$O(n)$) | Short
AllReduce-SGD (Luehr, 2016) | Medium ($O(1)$/$O(n)$) | Long
D-PSGD (Lian et al., 2017) | Short ($O(\deg(G))$/$O(\deg(G))$) | Long
AD-PSGD (this paper) | Short ($O(\deg(G))$/$O(\deg(G))$) | Short
Table 1: Comparison of different distributed machine learning algorithms on a network graph $G$ with $n$ workers. Here n.t. is the number of gradients/models transferred at the busiest worker per (minibatch of) stochastic gradients updated, and n.h. is the number of handshakes at the busiest worker per (minibatch of) stochastic gradients updated. Long idle time means that in each iteration the whole system needs to wait for the slowest worker; short idle time means the corresponding algorithm breaks this per-iteration synchronization. Note that if $G$ is a ring network, as required in AllReduce-SGD, then $\deg(G) = 2$.

Recent work (Lian et al., 2017) shows that synchronous decentralized parallel stochastic gradient descent (D-PSGD) can achieve a convergence rate comparable to that of its centralized counterparts without any central bottleneck. Figure 1(b) illustrates one communication topology of D-PSGD, in which each worker only talks to its neighbors. However, the synchronous nature of D-PSGD makes it vulnerable to stragglers because of the synchronization barrier among all workers at each iteration. Is it possible to get the best of both worlds of asynchronous SGD and decentralized SGD?

In this paper, we propose the asynchronous decentralized parallel stochastic gradient descent algorithm (AD-PSGD), which is theoretically justified to keep the advantages of both asynchronous SGD and decentralized SGD. In AD-PSGD, workers do not wait for all others and communicate only in a decentralized fashion. AD-PSGD achieves linear speedup with respect to the number of workers and admits a convergence rate of $O(1/\sqrt{K})$, where $K$ is the number of updates. This rate is consistent with D-PSGD and centralized parallel SGD. By design, AD-PSGD enables wait-free computation and communication, which ensures that AD-PSGD always converges better (w.r.t. epochs or wall time) than D-PSGD, as the former allows much more frequent information exchange.

In practice, we find that AD-PSGD is particularly useful in heterogeneous computing environments such as cloud computing, where the speed of computing and communication devices often varies. We implement AD-PSGD in Torch and MPI and evaluate it on an IBM S822LC cluster with up to 128 P100 GPUs. We show that, on real-world datasets such as ImageNet, AD-PSGD has the same empirical convergence rate as its centralized and/or synchronous counterparts. In heterogeneous environments, AD-PSGD can be faster than its synchronous counterparts by orders of magnitude. On an HPC cluster with homogeneous computing devices but a shared network, AD-PSGD still outperforms its synchronous counterparts by 4X-8X.

Both the theoretical analysis and system implementations of AD-PSGD are non-trivial, and they form the two technical contributions of this work.

2 Related work

We review related work in this section. In the following, $K$ and $n$ refer to the number of iterations and the number of workers, respectively. A comparison of the algorithms can be found in Table 1.

Stochastic Gradient Descent (SGD) (Nemirovski et al., 2009; Moulines and Bach, 2011; Ghadimi and Lan, 2013) is a powerful approach to solving large-scale machine learning problems, with an $O(1/\sqrt{K})$ convergence rate on nonconvex problems.

In Synchronous Parallel Stochastic Gradient Descent (S-PSGD), every worker fetches the model stored on a parameter server and computes a minibatch of stochastic gradients. The workers then push their stochastic gradients to the parameter server. The parameter server synchronizes all the stochastic gradients and applies their average to the model it stores, which completes one iteration. The convergence rate is proved to be $O(1/\sqrt{nK})$ on nonconvex problems (Ghadimi et al., 2016); results on convex objectives can be found in Dekel et al. (2012). Due to the synchronization step, all other workers have to stay idle waiting for the slowest one. In each iteration the parameter server needs to synchronize with all $n$ workers, which causes a high communication cost at the parameter server, especially when $n$ is large.

Asynchronous Parallel Stochastic Gradient Descent (A-PSGD) (Recht et al., 2011; Agarwal and Duchi, 2011; Feyzmahdavian et al., 2016; Paine et al., 2013) breaks the synchronization in S-PSGD by allowing workers to use stale weights to compute gradients. Asynchronous algorithms significantly reduce the communication overhead by never idling any worker, and they can still work well when some of the workers are down. On nonconvex problems, when the staleness of the weights used is upper bounded, A-PSGD is proved to admit the same convergence rate as S-PSGD (Lian et al., 2015, 2016).

In the AllReduce implementation of Stochastic Gradient Descent (AllReduce-SGD) (Luehr, 2016; Patarasuk and Yuan, 2009; MPI contributors, 2015), the update rule per iteration is exactly the same as in S-PSGD, so the two share the same convergence rate. However, there is no parameter server in AllReduce-SGD: the workers are connected in a ring network and each worker keeps the same local copy of the model. In each iteration, each worker calculates a minibatch of stochastic gradients; then all workers use AllReduce to synchronize the stochastic gradients, after which each worker obtains the average of all stochastic gradients. In this procedure, only an $O(1)$ amount of gradient data is sent/received per worker, but $O(n)$ handshakes are needed on each worker, which makes AllReduce slow on high-latency networks. At the end of the iteration the averaged gradient is applied to the local model of each worker. Since there is still synchronization in each iteration, the idle time remains as high as in S-PSGD.

In Decentralized Parallel Stochastic Gradient Descent (D-PSGD) (Lian et al., 2017), all workers are connected by a network that forms a connected graph $G$. Every worker has its local copy of the model. In each iteration, all workers compute stochastic gradients locally and, at the same time, average their local models with their neighbors'. Finally, the locally computed stochastic gradients are applied to the local models. In this procedure, the busiest worker only sends/receives $O(\deg(G))$ models and performs $O(\deg(G))$ handshakes per iteration. Note that in D-PSGD the computation and communication can be done in parallel, which means that when the communication time is smaller than the computation time, the communication can be completely hidden. The idle time is still high in D-PSGD because all workers need to finish updating before stepping into the next iteration. Before Lian et al. (2017), there were also studies on decentralized stochastic algorithms (both synchronous and asynchronous versions), though none of them was proved to have speedup when the number of workers increases. For example, Lan et al. (2017) proposed a decentralized stochastic primal-dual type algorithm with a computational complexity of $O(n/\epsilon^2)$ for general convex objectives and $O(n/\epsilon)$ for strongly convex objectives. Sirb and Ye (2016) proposed an asynchronous decentralized stochastic algorithm with a similar complexity bound for convex objectives. These bounds do not imply any speedup for decentralized algorithms. Bianchi et al. (2013) proposed a similar decentralized stochastic algorithm. The authors provided a convergence rate for the consensus of the local models when the local models are bounded; the convergence to a solution was established using a central limit theorem, but no convergence rate to the solution was provided. A very recent paper (Tang et al., 2018) extended D-PSGD so that it works better on data with high variance.

Ram et al. (2010) proposed an asynchronous subgradient variation of decentralized stochastic optimization for convex problems. The asynchrony was modeled by viewing each update event as a Poisson process, and convergence to the solution was shown; Srivastava and Nedic (2011) and Sundhar Ram et al. (2010) are similar. The main differences from this work are: 1) we take into consideration the situation where a worker computes gradients based on a stale model, which is inherent to the asynchronous setting; 2) we prove that our algorithm achieves linear speedup as the number of workers increases, which is important if we want to use the algorithm to accelerate training; and 3) our implementation guarantees deadlock-free, wait-free computation and communication. Nair and Gupta (2017) proposed another distributed stochastic algorithm, but it requires a centralized arbitrator to decide which two workers exchange weights, and it lacks a convergence analysis. Tsianos and Rabbat (2016) proposed a gossip-based dual averaging algorithm that achieves linear speedup in computational complexity, but each of its iterations requires multiple rounds of communication to keep the difference between all workers within a small constant.

We next briefly review decentralized algorithms. Decentralized algorithms were initially studied by the control community for solving the consensus problem, where the goal is to compute the mean of all the data distributed over multiple nodes (Boyd et al., 2005; Carli et al., 2010; Aysal et al., 2009; Fagnani and Zampieri, 2008; Olfati-Saber et al., 2007; Schenato and Gamba, 2007). Among decentralized algorithms for optimization, Lu et al. (2010) proposed two non-gradient-based algorithms for one-dimensional unconstrained convex problems where the objective on each node is strictly convex; they work by calculating the inverse function of the derivative of the local objectives and transmitting the gradients or local objectives to neighbors, and they can be used over networks with time-varying topologies. A convergence rate was not shown, but the authors did prove that the algorithms eventually converge to the solution. Mokhtari and Ribeiro (2016) proposed a fast decentralized variance-reduced algorithm for strongly convex optimization problems; it is proved to have a linear convergence rate and admits a nice stochastic saddle point method interpretation, but its speedup property is unclear and a table of stochastic gradients needs to be stored. Yuan et al. (2016) studied decentralized gradient descent on convex and strongly convex objectives; in each iteration the algorithm averages each node's model with its neighbors' and then takes a full-gradient step on the local objective. The subgradient version was considered in Nedic and Ozdaglar (2009) and Ram et al. (2009). The algorithm is intuitive and easy to understand, but its limitation is that it does not converge to the exact solution, because the exact solution is not a fixed point of the algorithm's update rule. This issue was later fixed by Shi et al. (2015a) and Wu et al. (2016) by using the gradients of the last two iterates instead of only the last one in each iteration, and was further improved in Shi et al. (2015b) and Li et al. (2017) by considering proximal gradients. Decentralized ADMM algorithms were analyzed in Zhang and Kwok (2014), Shi et al., and Aybat et al. (2015). Wang et al. (2016) developed a decentralized algorithm for recursive least-squares problems.

3 Algorithm

We introduce the AD-PSGD algorithm in this section.

Definitions and notations

Throughout this paper, we use the following notation and definitions:

  • $\|\cdot\|$ denotes the vector $\ell_2$ norm or the matrix spectral norm, depending on the argument.

  • $\|\cdot\|_F$ denotes the matrix Frobenius norm.

  • $\nabla f(\cdot)$ denotes the gradient of a function $f$.

  • $\mathbf{1}_n$ denotes the column vector in $\mathbb{R}^n$ with 1 in every entry.

  • $f^*$ denotes the optimal value of (1).

  • $\lambda_i(\cdot)$ denotes the $i$-th largest eigenvalue of a matrix.

  • $e_i$ denotes the $i$-th element of the standard basis of $\mathbb{R}^n$.

3.1 Problem definition

The decentralized communication topology is represented as an undirected graph $G = (V, E)$, where $V = \{1, 2, \ldots, n\}$ denotes the set of $n$ workers and $E \subseteq V \times V$ is the set of edges in the graph. Each worker represents a machine/GPU owning its local data (or a sensor collecting local data online), so that each worker $i$ is associated with a local loss function

$f_i(x) := \mathbb{E}_{\xi \sim \mathcal{D}_i} F_i(x; \xi),$

where $\mathcal{D}_i$ is a distribution associated with the local data at worker $i$ and $\xi$ is a data point sampled via $\mathcal{D}_i$. An edge $(i, j) \in E$ means that the two connected workers can exchange information. The overall optimization problem solved by the AD-PSGD algorithm is

$\min_{x \in \mathbb{R}^N} \; f(x) = \sum_{i=1}^{n} p_i f_i(x), \qquad (1)$

where the $p_i$'s define a distribution, that is, $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$, and $p_i$ indicates the updating frequency of worker $i$, i.e., the fraction of updates performed by worker $i$. The faster a worker, the larger the corresponding $p_i$. The intuition is that if a worker is faster than another, it will run more epochs in the same amount of time, and consequently it has a larger impact on the objective.
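As a concrete illustration of the weighting (the numbers here are hypothetical, chosen only for this example): with three workers where worker 1 is twice as fast as workers 2 and 3, worker 1 performs roughly half of all updates, so

$p_1 = \tfrac{1}{2}, \qquad p_2 = p_3 = \tfrac{1}{4}, \qquad \text{and (1) becomes} \qquad \min_x \; \tfrac{1}{2} f_1(x) + \tfrac{1}{4} f_2(x) + \tfrac{1}{4} f_3(x).$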

Remark 1.

To solve the common form of objectives in machine learning,

$\min_{x} \; \mathbb{E}_{\xi \sim \mathcal{D}} F(x; \xi),$

using AD-PSGD, we can appropriately distribute the data such that Eq. (1) solves the target objective above. Two strategies are:

Strategy-1: Let $\mathcal{D}_i = \mathcal{D}$ and $F_i = F$ for all $i$, that is, all workers can access all data; consequently all the $f_i$'s are the same.

Strategy-2: Split the data across the workers such that the fraction of data on worker $i$ is $p_i$, and define $\mathcal{D}_i$ to be the uniform distribution over the data samples assigned to worker $i$.
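A minimal sketch of Strategy-2 in Python, assuming the update frequencies $p_i$ can be estimated ahead of time; the function name and the shuffling scheme below are illustrative, not part of the paper's implementation:

import numpy as np

def split_by_frequency(num_samples, p, seed=0):
    """Partition sample indices so that worker i holds a fraction ~p[i] of the data.

    Strategy-2 sketch: D_i is then the uniform distribution over shard i.
    """
    assert abs(sum(p) - 1.0) < 1e-8 and all(q >= 0 for q in p)
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_samples)                 # one global shuffle
    bounds = np.floor(np.cumsum(p) * num_samples).astype(int)
    bounds[-1] = num_samples                            # make sure nothing is dropped
    shards, start = [], 0
    for end in bounds:
        shards.append(perm[start:end])
        start = end
    return shards                                       # shards[i] -> indices for worker i

# Example: a worker doing half of the updates receives half of the data.
shards = split_by_frequency(10_000, p=[0.5, 0.25, 0.25])
print([len(s) for s in shards])                         # [5000, 2500, 2500]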

3.2 AD-PSGD algorithm

The AD-PSGD algorithm can be described as follows: each worker maintains a local model $x$ in its local memory and (using worker $i$ as an example) repeats the following steps:

  • Sample data: Sample a mini-batch of training data denoted by $\{\xi^i_m\}_{m=1}^{M}$, where $M$ is the batch size.

  • Compute gradients: Use the sampled data to compute the stochastic gradient $\sum_{m=1}^{M} \nabla F_i(\hat{x}^i; \xi^i_m)$, where $\hat{x}^i$ is read from the model in the local memory.

  • Gradient update: Update the model in the local memory by $x^i \leftarrow x^i - \gamma \sum_{m=1}^{M} \nabla F_i(\hat{x}^i; \xi^i_m)$. Note that $x^i$ may not be the same as $\hat{x}^i$, as it may have been modified by other workers in the averaging step.

  • Averaging: Randomly select a neighbor (e.g., worker $i'$) and average the local model with worker $i'$'s model (both models on both workers are updated to the averaged model). More specifically, $x^i, x^{i'} \leftarrow \frac{x^i + x^{i'}}{2}$.

Note that each worker runs the procedure above on its own without any global synchronization. This reduces the idle time of each worker and the training process will still be fast even if part of the network or workers slow down.
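To make the four steps above concrete, here is a minimal single-worker sketch in Python; the callbacks for sampling, gradients, neighbor selection, and model exchange are stand-ins (in the actual implementation the averaging is done over MPI; see Section 5 and Appendix A):

import numpy as np

def worker_loop(x, sample_batch, grad, pick_neighbor, exchange, gamma=0.1, steps=100):
    """One AD-PSGD worker, following the four steps of Section 3.2 (sketch).

    x             : local model (numpy array), also touched by the averaging step
    sample_batch  : () -> a mini-batch from this worker's local data
    grad          : (model, batch) -> stochastic gradient summed over the batch
    pick_neighbor : () -> id of a randomly selected neighbor
    exchange      : (neighbor_id, local_model) -> the neighbor's current model;
                    both sides are assumed to end up with the average afterwards
    """
    for _ in range(steps):
        batch = sample_batch()             # 1. sample data
        x_hat = x.copy()                   #    snapshot used for the gradient
        g = grad(x_hat, batch)             # 2. compute gradients on the snapshot
        x -= gamma * g                     # 3. gradient update (x may have been averaged meanwhile)
        neighbor = pick_neighbor()         # 4. averaging with a random neighbor
        x_neighbor = exchange(neighbor, x)
        x[:] = 0.5 * (x + x_neighbor)      #    both workers now hold the average
    return x

# Toy usage with stand-in callbacks (a single worker talking to a copy of itself).
x0 = np.zeros(4)
worker_loop(
    x0,
    sample_batch=lambda: None,
    grad=lambda model, batch: model - 1.0,     # gradient of 0.5*||model - 1||^2
    pick_neighbor=lambda: 0,
    exchange=lambda j, xm: xm.copy(),
    steps=50,
)
print(np.round(x0, 3))                         # approaches the minimizer [1, 1, 1, 1]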

The averaging step can be generalized into the following update over all workers' models:

$[x^1, x^2, \ldots, x^n] \leftarrow [x^1, x^2, \ldots, x^n]\, W,$

where $W$ can be an arbitrary doubly stochastic matrix. This generalization gives us plenty of flexibility in the implementation without hurting the analysis.
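For example, with $e_i$ as defined in the notation list, the pairwise average between workers $i$ and $i'$ used above is the special case

$W = I - \frac{(e_i - e_{i'})(e_i - e_{i'})^\top}{2}, \qquad W e_i = W e_{i'} = \frac{e_i + e_{i'}}{2}, \qquad W e_j = e_j \ \ (j \neq i, i').$

One can check that this $W$ is symmetric and doubly stochastic, and that $[x^1, \ldots, x^n]\,W$ replaces columns $i$ and $i'$ with their average while leaving the other columns untouched.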

All workers run the procedure above simultaneously, as shown in Algorithm 1. We use a virtual counter $k$ to denote the iteration counter: every single gradient update, no matter on which worker it happens, increases $k$ by one. $i_k$ denotes the worker performing the $k$-th update.
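For intuition, the virtual counter simply orders all gradient-update events across workers by the time at which they happen; a toy illustration with a hypothetical event trace:

# Sketch: the "virtual counter" serializes all workers' update events by wall-clock time.
events = [                      # (wall-clock time, worker id) -- hypothetical trace
    (0.10, 0), (0.12, 2), (0.19, 1), (0.21, 0), (0.25, 2),
]
for k, (_, worker) in enumerate(sorted(events)):
    print(f"iteration k={k} performed by worker i_k={worker}")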

3.3 Implementation details

We briefly describe two interesting aspects of system designs and leave more discussions to Appendix A.

3.3.1 Deadlock avoidance

A naive implementation of the algorithm above may cause deadlock: the averaging step needs to be atomic and involves updating two workers (the selected worker and one of its neighbors). As an example, given three fully connected workers $A$, $B$, and $C$: $A$ sends its local model to $B$ and waits for $B$'s model; $B$ has already sent its model to $C$ and waits for $C$'s response; and $C$ has sent its model to $A$ and waits for $A$'s model. None of them can proceed.

We prevent the deadlock in the following way: the communication network is designed to be a bipartite graph, that is, the worker set can be split into two disjoint sets, $\mathcal{A}$ (the active set) and $\mathcal{P}$ (the passive set), such that any edge in the graph connects one worker in $\mathcal{A}$ and one worker in $\mathcal{P}$. By the property of the bipartite graph, the neighbors of any active worker can only be passive workers and the neighbors of any passive worker can only be active workers. This implementation avoids deadlock while still fitting the general algorithm (Algorithm 1) that we analyze. We leave more discussion and a detailed implementation of wait-free training to Appendix A.
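A minimal sketch of such a bipartition on a ring, assuming an even number of workers (an odd ring is an odd cycle and has no bipartition); the helper below is illustrative and not part of the paper's implementation:

def bipartite_ring(n):
    """Split ring workers 0..n-1 into active/passive sets by rank parity (n must be even).

    Every ring edge (r, (r + 1) % n) then joins an active worker to a passive one:
    active workers only initiate averaging and passive workers only respond,
    so the circular wait described above cannot form.
    """
    assert n % 2 == 0, "an odd ring cannot be 2-colored"
    active = [r for r in range(n) if r % 2 == 0]
    passive = [r for r in range(n) if r % 2 == 1]
    return active, passive

active, passive = bipartite_ring(8)
print(active, passive)   # [0, 2, 4, 6] [1, 3, 5, 7]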

3.3.2 Communication topology

The simplest realization of the AD-PSGD algorithm is a ring-based topology. To accelerate information exchange, we also implement a communication topology in which each sender communicates with a receiver that is $2^i$ hops away in the ring, where $i$ is an integer from $0$ to $\log_2(n-1)$ ($n$ is the number of learners). It is easy to see that it then takes at most $O(\log_2 n)$ steps for any pair of workers to exchange information, instead of $O(n)$ in the simple ring-based topology. In this way, $\rho$ (as defined in Section 4) becomes smaller and the scalability of AD-PSGD improves. This implementation also provides robustness against slow or failed network links, because there are multiple routes for a worker to disseminate its information.
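The hop pattern can be sketched as follows, under the assumption that a sender with rank $r$ may talk to receivers $2^i$ hops ahead on the ring (the exact schedule in our implementation may interleave these choices differently):

import math

def hop_neighbors(rank, n):
    """Candidate receivers for `rank` on a ring of n workers: 2**i hops ahead.

    With these extra links, information from any worker can reach any other worker
    in O(log2(n)) averaging steps instead of O(n) on the plain ring.
    """
    hops = [2 ** i for i in range(max(1, int(math.log2(n - 1)) + 1)) if 2 ** i < n]
    return [(rank + h) % n for h in hops]

print(hop_neighbors(0, 16))   # [1, 2, 4, 8]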

1: Initialize local models $\{x^i_0\}_{i=1}^{n}$ with the same initialization, learning rate $\gamma$, batch size $M$, and total number of iterations $K$.
2: for $k = 0, 1, \ldots, K-1$ do
3:     Randomly sample a worker $i_k$ of the graph $G$ and randomly sample an averaging matrix $W_k$, which can be dependent on $i_k$.
4:     Randomly sample a batch $\xi^{i_k}_{k} := (\xi^{i_k}_{k,1}, \xi^{i_k}_{k,2}, \ldots, \xi^{i_k}_{k,M})$ from the local data of the $i_k$-th worker.
5:     Compute the stochastic gradient locally: $g_k := \sum_{m=1}^{M} \nabla F_{i_k}(\hat{x}^{i_k}_k; \xi^{i_k}_{k,m})$.
6:     Average local models by $[x^1_k, x^2_k, \ldots, x^n_k] \leftarrow [x^1_k, x^2_k, \ldots, x^n_k]\, W_k$. (Note that Lines 5 and 6 can run in parallel.)
7:     Update the local model of worker $i_k$: $x^{i_k}_{k+1} \leftarrow x^{i_k}_k - \gamma g_k$, and set $x^{j}_{k+1} \leftarrow x^{j}_k$ for $j \neq i_k$.
8: end for
9: Output the average of the models on all workers for inference.
Algorithm 1 AD-PSGD (logical view)

4 Theoretical analysis

In this section we provide theoretical analysis for the AD-PSGD algorithm. We will show that the convergence rate of AD-PSGD is consistent with SGD and D-PSGD.

Note that, by counting each update of stochastic gradients as one iteration, the update of each iteration in Algorithm 1 can be viewed as

$X_{k+1} = X_k W_k - \gamma\, \partial g(\hat{X}_k; \xi^{i_k}_k, i_k),$

where $k$ is the iteration number, $x^i_k$ is the local model of the $i$-th worker at the $k$-th iteration, $X_k := [x^1_k, x^2_k, \ldots, x^n_k]$, and

$\partial g(\hat{X}_k; \xi^{i_k}_k, i_k) := \Big[\,0, \ldots, 0, \textstyle\sum_{m=1}^{M} \nabla F_{i_k}(\hat{x}^{i_k}_k; \xi^{i_k}_{k,m}), 0, \ldots, 0\,\Big]$ (nonzero only in the $i_k$-th column),

and $\hat{X}_k = X_{k-\tau_k}$ for some nonnegative integer $\tau_k$.

Assumption 1.

Throughout this paper, we make the following commonly used assumptions:

  1. Lipschitzian gradient: All functions $f_i$'s have $L$-Lipschitzian gradients.

  2. Doubly stochastic averaging: $W_k$ is doubly stochastic for all $k$.

  3. Spectral gap: There exists a $\rho \in [0, 1)$ such that

     $\max\big\{\big|\lambda_2\big(\mathbb{E}[W_k^\top W_k]\big)\big|,\ \big|\lambda_n\big(\mathbb{E}[W_k^\top W_k]\big)\big|\big\} \leq \rho. \qquad (2)$

     (A smaller $\rho$ means faster information spreading in the network, leading to faster convergence; a numerical illustration is given in the sketch after this list.)

  4. Unbiased estimation:

     $\mathbb{E}_{\xi \sim \mathcal{D}_i} \nabla F_i(x; \xi) = \nabla f_i(x), \qquad (3)$
     $\mathbb{E}_{i} \nabla f_i(x) = \nabla f(x). \qquad (4)$

     (This is easily satisfied when all workers can access all data, so that all $\mathcal{D}_i$'s are the same. When each worker can only access part of the data, we can also meet these assumptions by appropriately distributing the data.)

  5. Bounded variance: Assume the variance of the stochastic gradient

     $\mathbb{E}_{i}\,\mathbb{E}_{\xi \sim \mathcal{D}_i} \|\nabla F_i(x; \xi) - \nabla f(x)\|^2$

     is bounded for any $x$, with $i$ sampled from the distribution $\{p_i\}_{i=1}^{n}$ and $\xi$ from the distribution $\mathcal{D}_i$. This implies there exist constants $\sigma$ and $\varsigma$ such that

     $\mathbb{E}_{\xi \sim \mathcal{D}_i} \|\nabla F_i(x; \xi) - \nabla f_i(x)\|^2 \leq \sigma^2, \qquad (5)$
     $\mathbb{E}_{i} \|\nabla f_i(x) - \nabla f(x)\|^2 \leq \varsigma^2. \qquad (6)$

     Note that if all workers can access all data, then $\varsigma = 0$.

  6. Dependence of random variables: $\xi_0, \xi_1, \xi_2, \ldots$ are independent random variables. $W_k$ is a random variable dependent on $i_k$.

  7. Bounded staleness: $\tau_k \geq 0$, and there exists a constant $T$ such that $\tau_k \leq T$ for all $k$.
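To make the spectral-gap condition (2) concrete, the sketch below estimates $\rho$ numerically for an illustrative averaging scheme in which each step averages one uniformly chosen edge of a ring (this particular scheme and the helper names are assumptions made for this example only):

import numpy as np

def expected_wtw_ring(n):
    """E[W^T W] when each step averages one uniformly chosen ring edge (sketch)."""
    acc = np.zeros((n, n))
    for r in range(n):                                   # ring edge (r, r+1 mod n)
        i, j = r, (r + 1) % n
        W = np.eye(n)
        W[i, i] = W[j, j] = W[i, j] = W[j, i] = 0.5      # pairwise-averaging matrix
        acc += W.T @ W
    return acc / n                                       # uniform choice over the n edges

def spectral_rho(n):
    """max(|lambda_2|, |lambda_n|) of E[W^T W], eigenvalues sorted in descending order."""
    lam = np.sort(np.linalg.eigvalsh(expected_wtw_ring(n)))[::-1]
    return max(abs(lam[1]), abs(lam[-1]))

print(round(spectral_rho(8), 3))   # approx. 0.963 < 1, so the assumption holds for this scheme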

Throughout this paper, we also use the following shorthand notation:

Under Assumption 1 we have the following results:

Theorem 1 (Main theorem).

When the learning rate $\gamma$, the staleness bound $T$, and the total number of iterations $K$ satisfy the conditions required by our analysis, we have:

Noting that $X_k \frac{\mathbf{1}_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x^i_k$ is the average of all local models, this theorem characterizes the convergence of that average. By appropriately choosing the learning rate, we obtain the following corollary.

Corollary 2.

Choosing the learning rate $\gamma$ appropriately, we have the following convergence rate

(7)

if the total number of iterations $K$ is sufficiently large, in particular,

(8)

This corollary indicates that if the iteration number $K$ is large enough, AD-PSGD's convergence rate is $O(1/\sqrt{K})$. We compare the convergence rate of AD-PSGD with existing results for SGD and D-PSGD to show the tightness of the proved convergence rate. We will also show the efficiency and the linear speedup property of AD-PSGD w.r.t. the batch size, the number of workers, and the staleness, respectively. Further discussion of the communication topology and intuition is provided at the end of this section.

Remark 2 (Consistency with SGD).

Note that if $n = 1$ and $T = 0$, the proposed AD-PSGD reduces to the vanilla SGD algorithm (Nemirovski et al., 2009; Moulines and Bach, 2011; Ghadimi and Lan, 2013). Since $n = 1$, there is no variance among workers, that is, $\varsigma = 0$, and the convergence rate becomes $O(1/\sqrt{MK})$, which is consistent with the convergence rate of SGD.

Remark 3 (Linear speedup w.r.t. batch size).

When $K$ is large enough, the second term on the RHS of (7) dominates the first term. Note that the second term converges at a rate $O(1/\sqrt{MK})$ if $\varsigma = 0$, which means the convergence gets boosted at a linear rate as we increase the mini-batch size. This observation indicates linear speedup w.r.t. the batch size and matches the results of mini-batch SGD. (Note that when $\varsigma \neq 0$, AD-PSGD does not admit this linear speedup w.r.t. batch size. This is unavoidable, because increasing the mini-batch size only decreases the variance of the stochastic gradients within each worker, while $\varsigma$ characterizes the variance of the stochastic gradients among different workers, which is independent of the batch size.)

Remark 4 (Linear speedup w.r.t. number of workers).

Note that every single stochastic gradient update counts as one iteration in our analysis, and our convergence rate in Corollary 2 is consistent with SGD / mini-batch SGD. This means the number of stochastic gradient updates required to achieve a certain precision is consistent with SGD / mini-batch SGD, as long as the total number of iterations is large enough. It further indicates linear speedup with respect to the number of workers ($n$ workers make the iteration counter advance $n$ times faster in terms of wall-clock time, which means we converge $n$ times faster). To the best of our knowledge, the linear speedup property w.r.t. the number of workers for decentralized algorithms was not recognized until the recent analysis of D-PSGD by Lian et al. (2017). Our analysis reveals that by breaking the synchronization, AD-PSGD can maintain linear speedup, reduce the idle time, and improve robustness in heterogeneous computing environments.

Remark 5 (Linear speedup w.r.t. the staleness).

From (8) we can also see that, as long as the staleness $T$ grows no faster than the bound implied by (8) (treating the other parameters as constants), linear speedup is achievable.

5 Experiments

We describe our experimental methodologies in Section 5.1 and we evaluate the AD-PSGD algorithm in the following sections:

  • Section 5.2: Compare AD-PSGD’s convergence rate (w.r.t epochs) with other algorithms.

  • Section 5.3: Compare AD-PSGD’s convergence rate (w.r.t runtime) and its speedup with other algorithms.

  • Section 5.4: Compare AD-PSGD’s robustness to other algorithms in heterogeneous computing and heterogeneous communication environments.

  • Appendix B: Evaluate AD-PSGD on an IBM proprietary natural language processing dataset and model.

5.1 Experimental methodology

5.1.1 Dataset, model, and software

We use CIFAR10 and ImageNet-1K as the evaluation datasets, Torch-7 as our deep learning framework, and MPI to implement the communication scheme. For CIFAR10, we evaluate both the VGG (Simonyan and Zisserman, 2015) and ResNet-20 (He et al., 2016) models. VGG, whose size is about 60MB, represents a communication-intensive workload, and ResNet-20, whose size is about 1MB, represents a computation-intensive workload. For the ImageNet-1K dataset, we use the ResNet-50 model, whose size is about 100MB.

Additionally, we experimented with IBM proprietary natural language processing datasets and models (Zhang et al., 2017) in Appendix B.

5.1.2 Hardware

We evaluate AD-PSGD in two different environments:

  • IBM S822LC HPC cluster: each node has 4 Nvidia P100 GPUs, 160 Power8 cores (8-way SMT), and 500GB of memory; nodes are connected by a 100Gbit/s Mellanox EDR InfiniBand network. We use 32 such nodes.

  • x86-based cluster: This cluster is a cloud-like environment with 10Gbit/s ethernet connection. Each node has 4 Nvidia P100 GPUs, 56 Xeon E5-2680 cores (2-way SMT), and 1TB DRAM. We use 4 such nodes.

5.1.3 Compared algorithms

We compare the proposed AD-PSGD algorithm to AllReduce-SGD, D-PSGD (Lian et al., 2017), and a state-of-the-art asynchronous SGD implementation, EAMSGD (Zhang et al., 2015). (In this paper, we use ASGD and EAMSGD interchangeably.) In EAMSGD, each worker can communicate with the parameter server less frequently by increasing the "communication period" parameter.

5.2 Convergence w.r.t. epochs

AllReduce D-PSGD EAMSGD AD-PSGD
VGG 87.04% 86.48% 85.75% 88.58%
ResNet-20 90.72% 90.81% 89.82% 91.49%
Table 2: Testing accuracy comparison for VGG and ResNet-20 model on CIFAR10. 16 workers in total.
Cifar10

Figure 4 plots the training loss w.r.t. epochs for each algorithm, evaluated with the VGG and ResNet-20 models on the CIFAR10 dataset using 16 workers. Table 2 reports the test accuracy of all algorithms.

For EAMSGD, we did extensive hyper-parameter tuning to obtain the best possible model. We set the momentum moving average to the value recommended in Zhang et al. (2015), which depends on the number of workers $n$.

For other algorithms, we use the following hyper-parameter setup as prescribed in Zagoruyko (2015) and FAIR (2017):

  • Batch size: 128 per worker for VGG, 32 for ResNet-20.

  • Learning rate: For VGG start from 1 and reduce by half every 25 epochs. For ResNet-20 start from 0.1 and decay by a factor of 10 at the 81st epoch and the 122nd epoch.

  • Momentum: 0.9.

  • Weight decay: .

Figure 4 shows that, w.r.t. epochs, AllReduce-SGD, D-PSGD, and AD-PSGD converge similarly, while EAMSGD converges worse. Table 2 shows that AD-PSGD does not sacrifice test accuracy.

(a) VGG loss
(b) ResNet-20 loss
Figure 4: Training loss comparison for VGG and ResNet-20 model on CIFAR10. AllReduce-SGD, D-PSGD and AD-PSGD converge alike, EAMSGD converges the worst. 16 workers in total.
AllReduce D-PSGD AD-PSGD
16 Workers 74.86% 74.74% 75.28%
32 Workers 74.78% 73.66% 74.66%
64 Workers 74.90% 71.18% 74.20%
128 Workers 74.78% 70.90% 74.23%
Table 3: Testing accuracy comparison for ResNet-50 model on ImageNet dataset for AllReduce, D-PSGD, and AD-PSGD. The ResNet-50 model is trained for 90 epochs. AD-PSGD and AllReduce-SGD achieve similar model accuracy.
ImageNet

We further evaluate the AD-PSGD’s convergence rate w.r.t. epochs using ImageNet-1K and ResNet-50 model. We compare AD-PSGD with AllReduce-SGD and D-PSGD as they tend to converge better than A-PSGD.

Figure 11 and Table 3 demonstrate that, w.r.t. epochs, AD-PSGD converges similarly to AllReduce-SGD and better than D-PSGD when running with 16, 32, 64, and 128 workers. How to maintain convergence while increasing the aggregate batch size $Mn$ ($M$ is the mini-batch size per worker and $n$ is the number of workers) is an active, ongoing research area (Zhang et al., 2016; Goyal et al., 2017) and is orthogonal to the topic of this paper. For 64 and 128 workers, we adopted a learning rate tuning scheme similar to the one proposed in Goyal et al. (2017) (i.e., learning rate warm-up and linear scaling); in AD-PSGD, we decay the learning rate every 25 epochs instead of every 30 epochs as in AllReduce. It is worth noting that we could further increase the scalability of AD-PSGD by combining the learners on the same computing node into a super-learner (via Nvidia NCCL AllReduce collectives). In this way, a 128-worker system can easily scale up to 512 GPUs or more, depending on the GPU count per node.

The above results show that AD-PSGD converges similarly to AllReduce-SGD w.r.t. epochs and better than D-PSGD. Techniques used for tuning the learning rate for AllReduce-SGD can be applied to AD-PSGD when the batch size is large.

5.3 Speedup and convergence w.r.t runtime

Figure 5: Runtime comparison for VGG (communication intensive) and ResNet-20 (computation intensive) models on CIFAR10. Experiments run on IBM HPC w/ 100Gbit/s network links and on x86 system w/ 10Gbit/s network links. AD-PSGD consistently converges the fastest. 16 workers in total.
(a) 16 workers
(b) 32 workers
(c) 64 workers
(d) 128 workers
(e) training time variation (64 workers)
Figure 11: Training loss and training time per epoch comparison for ResNet-50 model on ImageNet dataset, evaluated up to 128 workers. AD-PSGD and AllReduce-SGD converge alike, better than D-PSGD. For 64 workers AD-PSGD finishes each epoch in 264 seconds, whereas AllReduce-SGD and D-PSGD can take over 1000 sec/epoch.
Figure 12: Speedup comparison for VGG (communication intensive) and ResNet-20 (computation intensive) models on CIFAR10. Experiments run on IBM HPC w/ 100Gbit/s network links and on x86 system w/ 10Gbit/s network links. AD-PSGD consistently achieves the best speedup.

On CIFAR10, Figure 5 shows the runtime convergence results on both IBM HPC and x86 system. The EAMSGD implementation deploys parameter server sharding to mitigate the network bottleneck at the parameter servers. However, the central parameter server quickly becomes a bottleneck on a slow network with a large model as shown in Figure 5-(b).

Figure 12 shows the speedup for different algorithms w.r.t. number of workers. The speedup for ResNet-20 is better than VGG because ResNet-20 is a computation intensive workload.

The above results show that, regardless of the workload type (computation or communication intensive) and the communication network (fast or slow), AD-PSGD consistently converges the fastest w.r.t. runtime and achieves the best speedup.

5.4 Robustness in a heterogeneous environment

In a heterogeneous environment, the speeds of computation and communication devices often vary, owing to architectural features (e.g., over/under-clocking, caching, paging), resource sharing (e.g., cloud computing), and hardware malfunctions. Synchronous algorithms such as AllReduce-SGD and D-PSGD perform poorly when the workers' computation and/or communication speeds vary. Centralized asynchronous algorithms, such as A-PSGD, do poorly when the parameter server's network links slow down. In contrast, AD-PSGD localizes the impact of slow workers or network links.

On ImageNet, Figure 11(e) shows the per-epoch training time of AD-PSGD, D-PSGD, and AllReduce run over 64 GPUs (16 nodes) during a reserved 10-hour window in which the job shares network links with other jobs on the IBM HPC cluster. AD-PSGD finishes each epoch in 264 seconds, whereas AllReduce-SGD and D-PSGD can take over 1000 sec/epoch.

We then evaluate AD-PSGD's robustness under different situations by randomly slowing down one of the 16 workers and its incoming/outgoing network links. Due to space limits, we discuss the results for the ResNet-20 model on the CIFAR10 dataset, as the VGG results are similar.

Robustness against slow computation
(a) AD-PSGD converges steadily w.r.t. epochs despite a slower computing device.
(b) AD-PSGD converges much faster than AllReduce-SGD and D-PSGD in the presence of a slower computing device.
Figure 15: Training loss for ResNet-20 model on CIFAR10 w.r.t (a) epochs and (b) runtime, when a computation device slows down by 2X-100X. 16 workers in total.
Figure 16: Training loss for ResNet-20 on CIFAR10 w.r.t. runtime, when a network link slows down by 2X-100X.
Slowdown of one node | AD-PSGD Time/epoch (sec) | AD-PSGD Speedup | AllReduce/D-PSGD Time/epoch (sec) | AllReduce/D-PSGD Speedup
no slowdown | 1.22 | 14.78 | 1.47/1.45 | 12.27/12.44
2X | 1.28 | 14.09 | 2.6/2.36 | 6.93/7.64
10X | 1.33 | 13.56 | 11.51/11.24 | 1.56/1.60
100X | 1.33 | 13.56 | 100.4/100.4 | 0.18/0.18
Table 4: Runtime efficiency comparison for the ResNet-20 model on the CIFAR10 dataset when a computation device slows down by 2X-100X. AD-PSGD converges faster than AllReduce-SGD and D-PSGD by orders of magnitude. 16 workers in total.

Figure 15 and Table 4 show that AD-PSGD's convergence is robust against slower workers. AD-PSGD can converge faster than AllReduce-SGD and D-PSGD by orders of magnitude when there is a very slow worker.

Robustness against slow communication

Figure 16 shows that AD-PSGD is robust when one worker is connected to slower network links. In contrast, centralized asynchronous algorithm EAMSGD uses a larger communication period to overcome slower links, which significantly slows down the convergence.

These results show that only AD-PSGD is robust against both heterogeneous computation and heterogeneous communication.

6 Conclusion

This paper proposes an asynchronous decentralized stochastic gradient descent algorithm (AD-PSGD). By combining decentralization and asynchrony, the algorithm is robust in heterogeneous environments; it is also theoretically shown to have the same convergence rate as its synchronous and/or centralized counterparts and to achieve linear speedup w.r.t. the number of workers. Extensive experiments validate the proposed algorithm.

Acknowledgment

This project is supported in part by NSF CCF1718513, NEC fellowship, IBM faculty award, Swiss NSF NRP 75 407540_167266, IBM Zurich, Mercedes-Benz Research & Development North America, Oracle Labs, Swisscom, Zurich Insurance, and Chinese Scholarship Council. We thank David Grove, Hillery Hunter, and Ravi Nair for providing valuable feedback. We thank Anthony Giordano and Paul Crumley for well-maintaining the computing infrastructure that enables the experiments conducted in this paper.

References

Appendix A Wait-free (continuous) training and communication

The theoretical guarantee of AD-PSGD relies on the doubly stochastic property of the averaging matrix $W$. The implication is that the averaging of the weights between two workers must be atomic. This brings a special challenge for current distributed deep learning frameworks, where the computation (gradient calculation and weight update) runs on GPU devices and the communication runs on CPUs (or their peripherals, such as InfiniBand or RDMA adapters): while averaging is happening on a worker, the GPU is not allowed to apply gradients to the weights. This can be solved by using the CPU to update the weights while the GPUs only calculate gradients. Every worker (including active and passive workers) runs two threads in parallel with a shared gradient buffer, one thread for computation and the other for communication. Algorithm 2, Algorithm 3, and Algorithm 4 illustrate the task of each thread. The communication thread is run by CPUs, while the computation thread is run by GPUs. In this way, GPUs can continuously calculate new gradients, placing the results in the CPU-side buffer, regardless of whether averaging is happening. Recall that in D-PSGD, communication occurs only once in each iteration. In contrast, with this implementation AD-PSGD can exchange weights at any time.
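A minimal Python sketch of this two-thread split with a single shared gradient buffer (the class, the lock-based hand-off, and the callback names are illustrative; the actual implementation uses Torch and MPI as described in Section 5). Algorithms 2-4 below give the precise per-thread procedures.

import threading
import numpy as np

class WorkerSketch:
    """Wait-free split: gradient computation vs. model averaging (illustrative)."""

    def __init__(self, dim, gamma=0.1):
        self.x = np.zeros(dim)        # local model, owned by the communication thread
        self.grad_buf = None          # shared buffer holding at most one pending gradient
        self.lock = threading.Lock()
        self.gamma = gamma

    def computation_step(self, compute_grad):
        """Run on the GPU side: compute a gradient on a snapshot and hand it over."""
        with self.lock:
            x_hat = self.x.copy()     # possibly slightly stale; tolerated by the analysis
        g = compute_grad(x_hat)
        while True:                   # wait until the previous gradient has been consumed
            with self.lock:
                if self.grad_buf is None:
                    self.grad_buf = g
                    return

    def communication_step(self, average_with_neighbor):
        """Run on the CPU side: apply any pending gradient, then average with a neighbor."""
        with self.lock:
            if self.grad_buf is not None:
                self.x -= self.gamma * self.grad_buf
                self.grad_buf = None
            neighbor_model = average_with_neighbor(self.x)   # send/receive, e.g. over MPI
            self.x = 0.5 * (self.x + neighbor_model)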

1: Batch size $M$.
2: while not terminated do
3:     Pull the model $x$ from the communication thread.
4:     Update $x$ locally in this thread: $x \leftarrow x - \gamma g^i$. (At this point the communication thread may not yet have applied $g^i$ to its copy of the model, so the computation thread would otherwise be working on an old model. We compensate for this by doing the local update in the computation thread; we observe this helps the scaling.)
5:     Randomly sample a batch $\xi^i$ from the local data of the $i$-th worker and compute the stochastic gradient $\tilde g^i := \sum_{m=1}^{M} \nabla F_i(x; \xi^i_m)$ locally.
6:     wait until the shared buffer $g^i$ has been consumed by the communication thread, then
7:         Local buffer $g^i \leftarrow \tilde g^i$. (We could also keep a queue of gradients here to avoid the waiting; note that doing so would make the effective batch size different from $M$.)
8:     end wait
9: end while
Algorithm 2 Computation thread on an active or passive worker (worker index is $i$)
1: Initialize the local model $x$ and the learning rate $\gamma$.
2: while not terminated do
3:     if the local buffer $g^i$ contains a new gradient then
4:         $x \leftarrow x - \gamma g^i$, and mark $g^i$ as consumed.
5:     end if
6:     Randomly select a neighbor (namely worker $j$). Send $x$ to worker $j$ and fetch $x^j$ from it.
7:     $x \leftarrow \frac{x + x^j}{2}$.
8: end while
Algorithm 3 Communication thread on an active worker (worker index is $i$)
1: Initialize the local model $x$ and the learning rate $\gamma$.
2: while not terminated do
3:     if the local buffer $g^i$ contains a new gradient then
4:         $x \leftarrow x - \gamma g^i$, and mark $g^i$ as consumed.
5:     end if
6:     if a request to read the local model is received (say, from worker $j$, together with worker $j$'s model $x^j$) then
7:         Send $x$ to worker $j$.
8:         $x \leftarrow \frac{x + x^j}{2}$.
9:     end if
10: end while
Algorithm 4 Communication thread on a passive worker (worker index is $i$)

Appendix B NLC experiments

In this section, we use an IBM proprietary natural language processing dataset and model to evaluate AD-PSGD against other algorithms.

(a) NLC-Joule
(b) NLC-Yelp
Figure 19: Training loss comparison for IBM NLC model on Joule and Yelp datasets. AllReduce-SGD, D-PSGD and AD-PSGD converge alike w.r.t. epochs.

The IBM NLC task is to classify input sentences into a target category in a predefined label set. The NLC model is a CNN model that has a word-embedding lookup table layer, a convolutional layer and a fully connected layer with a softmax output layer. We use two datasets in our evaluation. The first dataset Joule is an in-house customer dataset that has 2.5K training samples, 1K test samples, and 311 different classes. The second dataset Yelp, which is a public dataset, has 500K training samples, 2K test samples and 5 different classes.

Figure 19 shows that AD-PSGD converges (w.r.t epochs) similarly to AllReduce-SGD and D-PSGD on NLC tasks.

The above results show that AD-PSGD converges similarly (w.r.t. epochs) to AllReduce-SGD and D-PSGD for the IBM NLC workload, which is an example of a proprietary workload.

Appendix C Appendix: proofs

In the following analysis we define

(9)

and

(10)

We also define

Proof of Theorem 1.

We start from

Using the upper bound of in Lemma 5:

(11)

For we have

(12)

From (11) and (12) we obtain