1 Introduction
Decentralized machine learning methods are becoming core components of many important applications, both for scalability to larger datasets and systems, and from the perspective of data locality, ownership and privacy. In this work we address the general data-parallel setting where the data is distributed across different compute devices, and consider decentralized optimization methods that do not rely on a central coordinator (e.g. a parameter server) but instead only require on-device computation and local communication with neighboring devices. This covers for instance the classic setting of training machine learning models in large datacenters, but also emerging applications where the computations are executed directly on consumer devices, which keep their part of the data private at all times.^{1}

^{1}Note that the optimization process itself (for instance through the computed result) might leak information about the data of other nodes. We do not focus on quantifying notions of privacy in this work.
Formally, we consider optimization problems distributed across $n$ devices or nodes, of the form
(1)  $f^\star := \min_{x \in \mathbb{R}^d} \Big[ f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x) \Big],$
where $f_i \colon \mathbb{R}^d \to \mathbb{R}$ for $i \in [n]$ are the objectives defined by the local data available on each node. We also allow each local objective $f_i$
to have stochastic optimization (or sum) structure, covering the important case of empirical risk minimization in distributed machine learning and deep learning applications.
Decentralized Communication.
We model the network topology as a graph $G = ([n], E)$ with edges $\{i,j\} \in E$ if and only if nodes $i$ and $j$ are connected by a communication link, meaning that these nodes can directly exchange messages (for instance computed model updates). The decentralized setting is motivated by the fact that centralized topologies (corresponding to a star graph) are often not feasible, and otherwise frequently pose a significant bottleneck on the central node in terms of communication latency, bandwidth and fault tolerance. Decentralized topologies avoid these bottlenecks and thereby offer hugely improved potential for scalability. For example, while the master node in the centralized setting receives (and sends) messages from all $n$ workers in each round, $\mathcal{O}(n)$ in total,^{2} in decentralized topologies the maximal degree of the network is often constant (e.g. ring or torus) or a slowly growing function of $n$ (e.g. scale-free networks).

^{2}For better connected topologies, sometimes more efficient all-reduce and broadcast implementations are available.
Decentralized Optimization.
For the case of deterministic (full-gradient) optimization, recent seminal theoretical advances show that the network topology only affects higher-order terms of the convergence rate of decentralized optimization algorithms on convex problems (Scaman et al., 2017, 2018). We prove the first analogous result for the important case of decentralized stochastic gradient descent (SGD), showing convergence at rate $\mathcal{O}\big(1/(nT)\big)$ (ignoring for now higher-order terms) on strongly convex functions, where $T$ denotes the number of iterations and $n$ the number of nodes.
This result is significant since stochastic methods are highly preferred over deterministic gradient methods for their efficiency in machine learning applications. Our algorithm, ChocoSGD, is as efficient in terms of iterations as centralized mini-batch SGD (and consequently also achieves a speedup of factor $n$ compared to the serial setting on a single node) but avoids the communication bottleneck that centralized algorithms suffer from.
Communication Compression.
In distributed training, model updates (or gradient vectors) have to be exchanged between the worker nodes. To reduce the amount of data that has to be sent, gradient compression has become a popular strategy, for instance by quantization (Alistarh et al., 2017; Wen et al., 2017; Lin et al., 2018) or sparsification (Wangni et al., 2018; Stich et al., 2018). These ideas have recently been introduced to the decentralized setting as well by Tang et al. (2018a). However, their analysis only covers unbiased compression operators with very (unreasonably) high accuracy constraints. Here we propose the first method that supports arbitrarily low-accuracy and even biased compression operators, such as in (Alistarh et al., 2018; Lin et al., 2018; Stich et al., 2018).
Contributions.
Our contributions can be summarized as follows:

We show that the proposed ChocoSGD converges at rate $\mathcal{O}\big(1/(nT)\big)$ up to higher-order terms, where $T$ denotes the number of iterations, $n$ the number of workers, and where the higher-order terms depend on $\delta$, the eigengap of the gossip (connectivity) matrix, and $\omega$, the compression quality factor ($\omega = 1$ meaning no compression). We show that the decentralized method achieves the same speedup as centralized mini-batch SGD when the number of workers grows. The network topology and the compression only mildly affect the convergence rate. This is verified experimentally on the ring topology and by reducing the communication by a factor of 100.

We present the first provably-converging gossip algorithm with communication compression for the distributed average consensus problem. Our algorithm, ChocoGossip, converges linearly, reaching accuracy $\epsilon > 0$ in $\mathcal{O}\big(\frac{1}{\delta^2\omega}\log\frac{1}{\epsilon}\big)$ iterations, and allows arbitrary communication compression operators (including biased and unbiased ones). In contrast, previous work required very high-precision quantization and could only show convergence towards a neighborhood of the optimal solution.

ChocoSGD significantly outperforms state-of-the-art methods for decentralized optimization with gradient compression, such as ECDSGD and DCDSGD introduced in (Tang et al., 2018a), in all our experiments.
2 Related Work
Stochastic gradient descent (SGD) (Robbins & Monro, 1951; Bottou, 2010) and variants thereof are the standard algorithms for machine learning problems of the form (1), though it is an inherently serial algorithm that does not take the distributed setting into account. Mini-batch SGD (Dekel et al., 2012) is the natural parallelization of SGD for (1) in the centralized setting, i.e. when a master node collects the updates from all worker nodes, and serves as a baseline here.
Decentralized Optimization.
The study of decentralized optimization algorithms can be traced back at least to the 1980s (Tsitsiklis, 1984).
Decentralized algorithms are sometimes referred to as gossip algorithms (Kempe et al., 2003; Xiao & Boyd, 2004; Boyd et al., 2006)
as the information is not broadcast by a central entity, but spreads—similar to gossip—along the edges specified by the communication graph.
The most popular algorithms are based on
(sub)gradient descent
(Nedić & Ozdaglar, 2009; Johansson et al., 2010),
alternating direction method of multipliers (ADMM) (Wei & Ozdaglar, 2012; Iutzeler et al., 2013) or dual averaging (Duchi et al., 2012; Nedić et al., 2015).
He et al. (2018) address the more specific problem class of generalized linear models.
For the deterministic (nonstochastic) convex version of (1) a recent line of work developed optimal algorithms based on acceleration (Jakovetić et al., 2014; Scaman et al., 2017, 2018; Uribe et al., 2018). Rates for the stochastic setting are derived in (Shamir & Srebro, 2014; Rabbat, 2015), under the assumption that the distributions on all nodes are equal. This is a strong restriction which prohibits most distributed machine learning applications. Our algorithm ChocoSGD avoids any such assumption.
Moreover, the scheme of Rabbat (2015) requires multiple communication rounds per stochastic gradient computation and is thus not suited for sparse communication, as the required number of communication rounds would increase proportionally to the sparsity.
Lan et al. (2018) applied gradient sliding techniques allowing to skip some of the communication rounds.
Lian et al. (2017); Tang et al. (2018b, a); Assran et al. (2018) consider the non-convex setting, with Tang et al. (2018a) also applying gradient quantization techniques to reduce the communication cost. However, their algorithms require very high-precision quantization, a constraint we overcome here.
Gradient Compression.
Instead of transmitting a full-dimensional (gradient) vector $g \in \mathbb{R}^d$, methods with gradient compression transmit a compressed vector $Q(g)$ instead, where $Q \colon \mathbb{R}^d \to \mathbb{R}^d$ is a (random) operator chosen such that $Q(g)$ can be represented more efficiently, for instance by using a limited bit representation (quantization) or by enforcing sparsity. A very common class of quantization operators is based on random dithering (Goodall, 1951; Roberts, 1962), which is in addition unbiased, $\mathbb{E}_Q Q(x) = x$, $\forall x \in \mathbb{R}^d$; see (Alistarh et al., 2017; Wen et al., 2017; Zhang et al., 2017). Much sparser vectors can be obtained by random sparsification techniques that randomly mask the input vectors and only preserve a constant number of coordinates (Wangni et al., 2018; Konecny & Richtárik, 2018; Stich et al., 2018). Techniques that do not directly quantize gradients, but instead maintain additional states, are known to perform better in theory and practice (Seide et al., 2014; Lin et al., 2018; Stich et al., 2018), an approach that we pick up here. Our analysis also covers deterministic and biased compression operators, such as in (Alistarh et al., 2018; Stich et al., 2018). We will not further distinguish between sparsification and quantization approaches, and refer to both of them as compression operators in the following.
Distributed Average Consensus.
In the decentralized setting, the average consensus problem consists in finding the average vector $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i$ of $n$ local vectors $x_1, \dots, x_n \in \mathbb{R}^d$ (see (2) below for a formal definition).
The problem is an important subroutine of many decentralized algorithms.
It is well known that gossiptype algorithms converge linearly for average consensus (Kempe et al., 2003; Xiao & Boyd, 2004; OlfatiSaber & Murray, 2004; Boyd et al., 2006).
However, for consensus algorithms with compressed communication it has been remarked that the standard gossip algorithm does not converge to the correct solution (Xiao et al., 2005).
The schemes proposed in (Carli et al., 2007; Nedić et al., 2008; Aysal et al., 2008; Carli et al., 2010b; Yuan et al., 2012) only converge to a neighborhood of the solution, whose size depends on the compression accuracy.
In order to converge, adaptive schemes (with varying compression accuracy) have been proposed (Carli et al., 2010a; Fang & Li, 2010; Li et al., 2011; Thanou et al., 2013).
However, these approaches fall back to full (uncompressed) communication to reach high accuracy. In contrast, our method converges linearly to the true solution, even for arbitrary compressed communication, without requiring adaptive accuracy. We are not aware of a method in the literature with similar guarantees.
3 Average Consensus with Communication Compression
In this section we present ChocoGossip, a novel gossip algorithm for distributed average consensus with compressed communication. As mentioned, the average consensus problem is an important special case of type (1), and formalized as
(2)  $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i,$
for vectors $x_i \in \mathbb{R}^d$ distributed over the $n$ nodes (consider $f_i(x) = \frac{1}{2}\|x - x_i\|^2$ in (1)). Our proposed algorithm will later serve as a crucial primitive in our optimization algorithm for the general optimization problem (1), but is of independent interest for any average consensus problem with communication constraints.
In Sections 3.1–3.3 below we first review existing schemes that we later consider as baselines for the numerical comparison. The novel algorithm follows in Section 3.4.
3.1 Gossip algorithms
The classic decentralized algorithms for the average consensus problem are gossip type algorithms (see e.g. (Xiao & Boyd, 2004)) that generate sequences $\{x_i^{(t)}\}_{t \ge 0}$ on every node $i$ by iterations of the form
(3)  $x_i^{(t+1)} := x_i^{(t)} + \gamma \sum_{j=1}^{n} w_{ij}\big(y_j^{(t)} - x_i^{(t)}\big).$
Here $\gamma > 0$ denotes a stepsize parameter, $w_{ij} \ge 0$ averaging weights, and $y_j^{(t)} \in \mathbb{R}^d$ denotes a vector that is sent from node $j$ to node $i$ in iteration $t$. Note that no communication is required if $w_{ij} = 0$. If we assume symmetry, $w_{ij} = w_{ji}$, the weights naturally define the communication graph with edges $\{i,j\} \in E$ if $w_{ij} > 0$ and self-loops for $w_{ii} > 0$. The convergence rate of scheme (3) crucially depends on the connectivity matrix $W = (w_{ij})_{i,j=1}^{n}$ of the network, also called the interaction or gossip matrix.
Definition 1 (Gossip matrix).
We assume that $W \in \mathbb{R}^{n \times n}$ is a symmetric ($W = W^\top$) doubly stochastic ($W\mathbf{1} = \mathbf{1}$, $\mathbf{1}^\top W = \mathbf{1}^\top$) matrix with eigenvalues $1 = |\lambda_1(W)| > |\lambda_2(W)| \ge \dots \ge |\lambda_n(W)|$
and spectral gap
(4)  $\delta := 1 - |\lambda_2(W)| \in (0, 1].$
It will also be convenient to define
(5)  $\beta := \|I - W\|_2 \in [0, 2].$
Table 1 gives a few values of the spectral gap for commonly used network topologies (with uniform averaging between the nodes). It is well known that simple matrices $W$ with $\delta > 0$ exist for every connected graph.
graph/topology | node degree | spectral gap $\delta$
ring | $2$ | $\mathcal{O}(1/n^2)$
2d-torus | $4$ | $\mathcal{O}(1/n)$
fully connected | $n-1$ | $1$
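The scalings in Table 1 can be verified numerically. The sketch below (our own illustration, not part of the paper) builds uniformly averaging gossip matrices for the ring and the fully connected graph and evaluates the spectral gap $\delta = 1 - |\lambda_2(W)|$:

```python
# Numerically computing the spectral gap delta = 1 - |lambda_2(W)| for
# uniform-averaging gossip matrices on two of the topologies of Table 1.
import numpy as np

def ring_matrix(n):
    """Uniform averaging over self and the two ring neighbours (degree 2)."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def fully_connected_matrix(n):
    """Uniform averaging over all nodes: W = (1/n) * ones."""
    return np.full((n, n), 1.0 / n)

def spectral_gap(W):
    """delta = 1 - second largest absolute eigenvalue (W is symmetric)."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 - eig[1]

for n in (8, 16, 32):
    # the ring's gap shrinks roughly like 1/n^2; fully connected stays at 1
    print(n, spectral_gap(ring_matrix(n)), spectral_gap(fully_connected_matrix(n)))
```

The observed decay of the ring's gap with $n$ is what later makes the topology term in the convergence rates noticeable on large rings.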
3.2 Gossip with Exact Communication
For a fixed gossip matrix $W$, the classical algorithm analyzed in (Xiao & Boyd, 2004) corresponds to the choice
(EG)  $y_j^{(t)} := x_j^{(t)}$
in (3), with (EG) standing for exact gossip. This scheme can also conveniently be written in matrix notation as
(6)  $X^{(t+1)} = X^{(t)}\big((1-\gamma) I_n + \gamma W\big)$
for iterates $X^{(t)} := \big[x_1^{(t)}, \dots, x_n^{(t)}\big] \in \mathbb{R}^{d \times n}$.
Theorem 1.
Let $\gamma \in (0, 1]$ and let $\delta$ be the spectral gap of $W$. Then the iterates of (EG) converge linearly to the average $\bar{x}$, with the error decreasing by a factor $(1 - \gamma\delta)$ in every iteration.
For $\gamma = 1$ this corresponds to the classic result in e.g. (Xiao & Boyd, 2004); here we slightly extend the analysis to arbitrary stepsizes. The short proof shows the elegance of the matrix notation (which we will later also adopt for the proofs that follow).
Proof for $\gamma = 1$. Let $\bar{X} := [\bar{x}, \dots, \bar{x}]$. By (6), $X^{(t+1)} - \bar{X} = \big(X^{(t)} - \bar{X}\big)W$, since $\bar{X}W = \bar{X}$. The rows of $X^{(t)} - \bar{X}$ are orthogonal to $\mathbf{1}$, the eigenvector of $W$ for $\lambda_1 = 1$, and all remaining eigenvalues are at most $1 - \delta$ in absolute value, hence $\big\|X^{(t+1)} - \bar{X}\big\|_F \le (1-\delta)\big\|X^{(t)} - \bar{X}\big\|_F$. ∎
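To illustrate Theorem 1 concretely, the following sketch (our own, assuming the uniformly averaging ring matrix and $\gamma = 1$) runs the matrix iteration (6) and checks that the iterates approach the average:

```python
# Exact gossip (EG) in matrix notation (6): X^{t+1} = X^t ((1-gamma) I + gamma W).
# With gamma = 1 the consensus error contracts by (1 - delta) per round.
import numpy as np

def ring_matrix(n):
    """Uniform averaging over self and the two ring neighbours."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def exact_gossip(X, W, gamma=1.0, iters=200):
    """Run scheme (6); columns of X are the per-node vectors x_i."""
    M = (1.0 - gamma) * np.eye(W.shape[0]) + gamma * W
    for _ in range(iters):
        X = X @ M
    return X

rng = np.random.default_rng(0)
n, d = 16, 5
X0 = rng.normal(size=(d, n))            # one d-dimensional vector per node
Xbar = X0.mean(axis=1, keepdims=True)   # consensus target: the average
XT = exact_gossip(X0, ring_matrix(n))
print(np.linalg.norm(XT - Xbar))        # shrinks roughly like (1 - delta)^T
```

Note that every round preserves the node average exactly, since $W$ is doubly stochastic.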
3.3 Gossip with Quantized Communication
In every round of scheme (EG) a full-dimensional vector is exchanged over every link of the communication graph (node $j$ sends $x_j^{(t)}$ to all its neighbors $i$ with $\{i,j\} \in E$). A natural way to reduce the communication is to compress $x_j^{(t)}$ before sending it, denoted as $Q\big(x_j^{(t)}\big)$, for a (potentially random) compression operator $Q \colon \mathbb{R}^d \to \mathbb{R}^d$. Informally, we can think of $Q$ either as a sparsification operator (that enforces sparsity of $Q(x)$) or as a quantization operator that reduces the number of bits required to represent $Q(x)$, for instance by random rounding to less precise floating point numbers or to integers.
Aysal et al. (2008) propose the quantized gossip scheme (Q1G), given by the choice
(Q1G)  $y_j^{(t)} := Q\big(x_j^{(t)}\big)$
in scheme (3), i.e. to apply the compression operator directly to the message that is sent out from node $j$ to node $i$. However, this algorithm does not preserve the average of the iterates over the iterations, i.e. in general $\frac{1}{n}\sum_{i=1}^{n} x_i^{(t+1)} \ne \frac{1}{n}\sum_{i=1}^{n} x_i^{(t)}$, and as a consequence it does not converge to the optimal solution of (2) (though in practice often to a close neighborhood).
An alternative proposal by Carli et al. (2007) alleviates this drawback. The scheme
(Q2G)  $x_i^{(t+1)} := x_i^{(t)} + \gamma \sum_{j=1}^{n} w_{ij}\Big(Q\big(x_j^{(t)}\big) - Q\big(x_i^{(t)}\big)\Big)$
preserves the average of the iterates over the iterations. However, the scheme also fails to converge for arbitrary precision: if $Q\big(x_i^{(t)}\big) \ne x_i^{(t)}$, the noise introduced by the compression, $Q\big(x_i^{(t)}\big) - x_i^{(t)}$, does not vanish for $t \to \infty$. As a consequence, the iterates oscillate around $\bar{x}$ once the compression error becomes larger than the suboptimality $\big\|x_i^{(t)} - \bar{x}\big\|$.
Both these schemes have been theoretically studied in (Carli et al., 2010b) under the assumption of unbiasedness, i.e. assuming $\mathbb{E}_Q Q(x) = x$ for all $x \in \mathbb{R}^d$ (and we will later also adopt this theoretically understood setting in our experiments).
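The two failure modes are easy to reproduce. The toy sketch below (our own illustration, using a coarse deterministic rounding quantizer rather than the dithered quantizers of the cited works) shows that a (Q1G)-style update does not preserve the node average, while a (Q2G)-style update preserves it exactly yet only reaches a neighborhood of $\bar{x}$ whose size is set by the quantization error:

```python
# Toy comparison of (Q1G)- and (Q2G)-style quantized gossip on scalar values.
import numpy as np

def ring_matrix(n):
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def Q(x, step=0.5):
    """Coarse deterministic rounding quantizer (illustrative only)."""
    return step * np.round(x / step)

def gossip_round(x, W, gamma, quantize_own):
    """One round of (3) with quantized messages y_j = Q(x_j).

    quantize_own=False: (Q1G)-style update x + gamma (W Q(x) - x);
    quantize_own=True:  (Q2G)-style update x + gamma (W Q(x) - Q(x)),
    which preserves the average because W is doubly stochastic.
    """
    y = Q(x)
    base = Q(x) if quantize_own else x
    return x + gamma * (W @ y - base)   # uses sum_j w_ij = 1

rng = np.random.default_rng(1)
n = 8
W = ring_matrix(n)
x = rng.uniform(0.0, 2.0, size=n)
x1, x2 = x.copy(), x.copy()
for _ in range(50):
    x1 = gossip_round(x1, W, 0.5, quantize_own=False)  # (Q1G): average drifts
    x2 = gossip_round(x2, W, 0.5, quantize_own=True)   # (Q2G): average preserved
print(abs(x1.mean() - x.mean()), abs(x2.mean() - x.mean()))
```

Even though (Q2G) keeps the average exactly, the iterates stall once the per-node quantization error dominates the remaining disagreement.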
3.4 Proposed Method for Compressed Communication
We propose the novel compressed gossip scheme ChocoGossip that supports a much larger class of compression operators, beyond the unbiased quantization of the schemes above. The algorithm can be summarized as
(ChocoG)  $\hat{x}_j^{(t+1)} := \hat{x}_j^{(t)} + Q\big(x_j^{(t)} - \hat{x}_j^{(t)}\big), \qquad x_i^{(t+1)} := x_i^{(t)} + \gamma \sum_{j=1}^{n} w_{ij}\big(\hat{x}_j^{(t+1)} - \hat{x}_i^{(t+1)}\big),$
for a stepsize $\gamma$ depending on the specific compression operator (this will be detailed below). Here the $\hat{x}_j^{(t)}$ denote additional variables that are stored^{3} by all neighbors of node $j$, $\{i,j\} \in E$, as well as on node $j$ itself.

^{3}A closer look reveals that actually only 2 additional vectors have to be stored per node (refer to Appendix E).

We will show in Theorem 2 below that this scheme (i) preserves the average of the iterates, $\frac{1}{n}\sum_{i=1}^{n} x_i^{(t)} = \bar{x}$, over the iterations $t$. Moreover, (ii) the noise introduced by the compression operator vanishes as $t \to \infty$. Precisely, we will show that $\hat{x}_i^{(t)} \to x_i^{(t)}$ for $t \to \infty$, for every $i$. Consequently, the argument $x_i^{(t)} - \hat{x}_i^{(t)}$ of $Q$ in (ChocoG) goes to zero, and the noise introduced by $Q$ can be controlled.
The proposed scheme is summarized in Algorithm 1. Every worker $i$ stores and updates its own local variable $x_i^{(t)}$ as well as the variables $\hat{x}_j^{(t)}$ for all its neighbors $j$ (including itself).
Algorithm 1 thus seems to require each machine to store one vector per neighbor in addition to its own variables. This is not necessary, and the algorithm can be rewritten in such a way that every node stores only three vectors: $x_i^{(t)}$, $\hat{x}_i^{(t)}$, and an aggregate of the neighbors' estimates. For simplicity, we omit this technical modification here and refer to Appendix E for the exact form of the memory-efficient algorithm.
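A compact sketch of Algorithm 1 in matrix form is given below (our own reading of the (ChocoG) update; the variable names, the $\mathrm{rand}_k$ operator and the stepsize are illustrative). Each column of `X` is a node's iterate and each column of `Xhat` the corresponding publicly known estimate:

```python
# Sketch of ChocoGossip with a rand_k compression operator.
import numpy as np

def ring_matrix(n):
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def rand_k(x, k, rng):
    """Keep k uniformly chosen coordinates, zero out the rest."""
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx]
    return out

def choco_gossip(X, W, gamma, k, iters, seed=0):
    rng = np.random.default_rng(seed)
    Xhat = np.zeros_like(X)  # public estimates, known to all neighbours
    for _ in range(iters):
        # each node i compresses the difference x_i - xhat_i and broadcasts it
        Qd = np.column_stack([rand_k(X[:, i] - Xhat[:, i], k, rng)
                              for i in range(W.shape[0])])
        Xhat = Xhat + Qd                      # everyone updates the estimates
        X = X + gamma * (Xhat @ W.T - Xhat)   # gossip step on the estimates
    return X

rng = np.random.default_rng(0)
n, d, k = 8, 10, 3
X0 = rng.normal(size=(d, n))
XT = choco_gossip(X0, ring_matrix(n), gamma=0.2, k=k, iters=500)
print(np.linalg.norm(XT - X0.mean(axis=1, keepdims=True)))
```

Note that, unlike (Q1G), the node average is preserved in every round because $W$ is doubly stochastic, and with $k = d$ (no compression) the scheme reduces to exact gossip with stepsize $\gamma$.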
3.5 Convergence Analysis for ChocoGossip
We analyze Algorithm 1 under the following general quality notion for the compression operator $Q$.
Assumption 1 (Compression operator).
We assume that the compression operator $Q \colon \mathbb{R}^d \to \mathbb{R}^d$ satisfies
(7)  $\mathbb{E}_Q \big\|Q(x) - x\big\|^2 \le (1 - \omega)\|x\|^2, \qquad \forall x \in \mathbb{R}^d,$
for a parameter $\omega > 0$. Here $\mathbb{E}_Q$ denotes the expectation over the internal randomness of the operator $Q$.
Example operators that satisfy (7) include

sparsification: Randomly selecting $k$ out of $d$ coordinates ($\mathrm{rand}_k$), or selecting the $k$ coordinates with highest magnitude values ($\mathrm{top}_k$), gives $\omega = k/d$ (Stich et al., 2018, Lemma A.1).

random quantization: For precision (levels) $s \in \mathbb{N}_+$, and $\tau := 1 + \min\big(d/s^2, \sqrt{d}/s\big)$, the quantization operator
$\mathrm{qsgd}_s(x) := \frac{\mathrm{sign}(x) \cdot \|x\|}{s\tau} \cdot \Big\lfloor s \frac{|x|}{\|x\|} + \xi \Big\rfloor$
for a random variable $\xi \sim_{\text{u.a.r.}} [0,1]^d$
satisfies (7) with $\omega = 1/\tau$ (Alistarh et al., 2017, Lemma 3.1).
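The sparsification operators can be checked empirically against Assumption 1. The sketch below (our own illustration) implements $\mathrm{top}_k$ and $\mathrm{rand}_k$ and verifies $\mathbb{E}\|Q(x) - x\|^2 \le (1-\omega)\|x\|^2$ with $\omega = k/d$:

```python
# Empirical check of Assumption 1 for top_k (deterministic, biased) and
# rand_k (random selection): both satisfy (7) with omega = k/d.
import numpy as np

def top_k(x, k):
    """Keep the k coordinates of largest magnitude (deterministic, biased)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Keep k uniformly chosen coordinates, zero out the rest."""
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
d, k = 50, 10
omega = k / d
x = rng.normal(size=d)

# top_k satisfies (7) deterministically: it drops the d-k smallest squares
assert np.sum((top_k(x, k) - x) ** 2) <= (1 - omega) * np.sum(x ** 2)

# rand_k satisfies (7) in expectation; estimate E||Q(x)-x||^2 by averaging
err = np.mean([np.sum((rand_k(x, k, rng) - x) ** 2) for _ in range(5000)])
print(err, (1 - omega) * np.sum(x ** 2))  # the two values nearly coincide
```

For $\mathrm{rand}_k$ the bound (7) actually holds with equality in expectation, which is why the two printed values nearly coincide.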
Theorem 2.
ChocoGossip (Algorithm 1), with a compression operator satisfying (7) with parameter $\omega$ and an appropriately chosen stepsize $\gamma = \mathcal{O}(\delta^2\omega)$, converges linearly to the average $\bar{x}$: the consensus error decreases by a constant factor of the form $\big(1 - c\,\delta^2\omega\big)$ per iteration, for a universal constant $c > 0$.

For the proof we refer to the appendix, where we use matrix notation to simplify the derivations. For the exact communication case ($\omega = 1$) we recover the rate of Theorem 1 up to constant factors (which seems to be a small artifact of our proof technique). The theorem shows convergence for arbitrary $\omega > 0$, establishing the superiority of scheme (ChocoG) over (Q1G) and (Q2G).
4 Decentralized Stochastic Optimization
In this section we leverage our proposed average consensus Algorithm 1 to achieve consensus among the compute nodes in a decentralized optimization setting with communication restrictions.
In the decentralized optimization setting (1), not only does every node have a different local objective $f_i$, but we also allow each $f_i$ to have stochastic optimization (or sum) structure, that is
(8)  $f_i(x) := \mathbb{E}_{\xi_i \sim D_i}\, F_i(x, \xi_i),$
for a loss function $F_i \colon \mathbb{R}^d \times \Omega \to \mathbb{R}$ and distributions $D_1, \dots, D_n$ which can be different on every node. Our framework therefore covers both stochastic optimization (e.g. when all $D_i$ are identical) and empirical risk minimization (as in machine learning and deep learning applications) when the $D_i$'s are discrete measures with disjoint support.

4.1 Proposed Scheme for Decentralized Optimization
Our proposed method ChocoSGD—Communication-Compressed Decentralized SGD—is stated in Algorithm 2.
The algorithm consists of four parts: the stochastic gradient step in line 3, the application of the compression operator in line 4, the (ChocoG) local communication in lines 5–8, followed by the final iterate update in line 9.
Remark 3.
As a special case, without any communication compression and with consensus stepsize $\gamma = 1$ as in exact gossip (EG), ChocoSGD (Algorithm 2) recovers the following standard variant of decentralized SGD with gossip (similar e.g. to (Sirb & Ye, 2016; Lian et al., 2017)), stated for illustration in Algorithm 3.
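Following our reading of Algorithm 2, one round consists of a local stochastic gradient step, a compressed exchange of $x_i - \hat{x}_i$, and the (ChocoG) gossip correction. The sketch below runs this on a toy quadratic $f_i(x) = \frac{1}{2}\|x - c_i\|^2$; the problem, the $\mathrm{rand}_k$ operator, and the stepsizes are our illustrative choices, not the paper's experiments:

```python
# Sketch of ChocoSGD rounds on a toy quadratic; the global optimum of
# (1/n) sum_i 0.5 ||x - c_i||^2 is the average of the c_i.
import numpy as np

def rand_k(x, k, rng):
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx]
    return out

def choco_sgd(C, W, eta, gamma, k, iters, noise=0.1, seed=0):
    """Columns of C are the local minimizers c_i; returns the final iterates."""
    rng = np.random.default_rng(seed)
    d, n = C.shape
    X = np.zeros((d, n))      # local models x_i
    Xhat = np.zeros((d, n))   # public estimates xhat_i
    for _ in range(iters):
        grads = (X - C) + noise * rng.normal(size=(d, n))  # stochastic gradients
        X = X - eta * grads                                # local SGD step
        Qd = np.column_stack([rand_k(X[:, i] - Xhat[:, i], k, rng)
                              for i in range(n)])          # compress the difference
        Xhat = Xhat + Qd                                   # compressed exchange
        X = X + gamma * (Xhat @ W.T - Xhat)                # gossip correction
    return X

rng = np.random.default_rng(0)
d, n, k = 10, 8, 3
C = rng.normal(size=(d, n))
W = np.full((n, n), 1.0 / n)   # fully connected graph for this toy run
XT = choco_sgd(C, W, eta=0.05, gamma=0.2, k=k, iters=2000)
print(np.linalg.norm(XT - C.mean(axis=1, keepdims=True)))  # distance to the optimum
```

With the constant stepsizes used here the iterates only reach a noise-dominated neighborhood of the optimum; the decaying stepsizes of the analysis below remove this floor.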
4.2 Convergence Analysis for ChocoSGD
Assumption 2.
We assume that each function $f_i \colon \mathbb{R}^d \to \mathbb{R}$ for $i \in [n]$ is $L$-smooth and $\mu$-strongly convex, and that the variance of the stochastic gradients on each worker is bounded,
$\mathbb{E}_{\xi_i}\big\|\nabla F_i(x, \xi_i) - \nabla f_i(x)\big\|^2 \le \sigma_i^2, \qquad \forall x \in \mathbb{R}^d,$
where $\mathbb{E}_{\xi_i}$ denotes the expectation over $\xi_i \sim D_i$. It will also be convenient to denote $\bar{\sigma}^2 := \frac{1}{n}\sum_{i=1}^{n}\sigma_i^2$.
For the (standard) definitions of smoothness and strong convexity we refer to Appendix A.1. These assumptions could be relaxed to only hold on the set of iterates of Algorithm 2.
Theorem 4.
Under Assumptions 1 and 2, ChocoSGD (Algorithm 2) with appropriately decaying stepsizes converges at rate $\mathcal{O}\big(\bar{\sigma}^2/(\mu n T)\big)$, up to higher-order terms in $1/T^2$ that depend on the spectral gap $\delta$ and the compression quality $\omega$.

For the proof we refer to the appendix. When $n$ and $T$ are sufficiently large, the higher-order terms become negligible compared to the leading term, and we recover the convergence rate $\mathcal{O}\big(1/(nT)\big)$ of mini-batch SGD in the centralized setting with exact communication. This is because the topology (parameter $\delta$) and the compression (parameter $\omega$) only affect the higher-order terms of the rate. We also see that we obtain in this setting a speedup of factor $n$ compared to the serial implementation of SGD on only one worker.
5 Experiments
In this section we first compare ChocoGossip to the gossip baselines from Section 3 (Section 5.2), and then compare ChocoSGD to state-of-the-art decentralized stochastic optimization schemes that also support compressed communication (Section 5.3).
5.1 Shared Experimental Setup
For our experiments we always report the number of iterations of the respective scheme, as well as the number of transmitted bits. These quantities are independent of systems architectures and network bandwidth.
Datasets.
Compression operators.
We use the ($\mathrm{top}_k$), ($\mathrm{rand}_k$) and ($\mathrm{qsgd}$) compression operators as described in Section 3.5, where for the sparsification operators we choose $k$ to be 1% of all coordinates, and for quantization we use low-precision representations with as few as 8 bits per coordinate.
5.2 Average Consensus
Table 4: Stepsizes $\gamma$ used for each algorithm and compression scheme, individually tuned (e.g. plain gossip: $\gamma = 0.1$; ChocoGossip with quantization: $\gamma = 0.34$).
We compare the performance of the gossip schemes (EG) (exact communication), (Q1G), (Q2G) (both with unbiased compression), and our scheme (ChocoG), in Figure 3, both for the quantization compression scheme and for the random sparsification ($\mathrm{rand}_k$) compression scheme. In addition, we also depict the performance of ChocoGossip with biased ($\mathrm{top}_k$) compression. We use the ring topology with the uniformly averaging mixing matrix $W$ as in Figure 1, left. The stepsizes used for ChocoGossip are listed in Table 4. We consider the consensus problem (2), where the data $x_i$ on the $i$th machine was chosen to be the $i$th vector in the dataset. We depict the consensus error $\frac{1}{n}\sum_{i=1}^{n}\big\|x_i^{(t)} - \bar{x}\big\|^2$.
The proposed scheme (ChocoG) with 8-bit quantization converges at the same rate as (EG) with exact communication (Fig. 3, left), while requiring much less data to be transmitted (Fig. 3, right). The schemes (Q1G) and (Q2G) do not converge and stall at moderate accuracies. The scheme (Q1G) even starts to diverge, because the quantization error becomes larger than the optimization error.
With sparsified communication ($\mathrm{rand}_k$), i.e. transmitting only 1% of all coordinates, the scheme (Q1G) quickly zeros out all coordinates, and (Q2G) diverges because the quantization error is too large already from the first step (Fig. 3). ChocoGossip proves to be more robust and converges. The observed rate matches the theoretical findings, as we expect the scheme with a factor-100 compression to be correspondingly slower than (EG) without compression in terms of iterations. In terms of total data transmitted, both schemes therefore converge at the same speed (Fig. 3, right). We also see that ($\mathrm{top}_k$) sparsification can give additional gains and comes out as the most data-efficient method in these experiments.
5.3 Decentralized SGD
We assess the performance of ChocoSGD on logistic regression, defined as $f(x) := \frac{1}{m}\sum_{j=1}^{m}\log\big(1 + \exp(-b_j a_j^\top x)\big)$, where $a_j \in \mathbb{R}^d$ and $b_j \in \{-1, +1\}$ are the data samples and $m$ denotes the number of samples in the dataset. We distribute the data samples evenly among the workers and consider two settings: (i) the randomly shuffled setting, where data points are randomly assigned to workers, and the more difficult (ii) sorted setting, where each worker only gets data samples from one class (with the possible exception of one worker that gets two labels assigned). Moreover, we try to make the setting as difficult as possible, meaning that e.g. on the ring topology the machines with the same label form two connected clusters. We repeat each experiment three times and depict the mean curve and the area corresponding to one standard deviation. We plot the suboptimality $f\big(x^{(t)}\big) - f^\star$ (with $f^\star$ obtained by the optimizer from scikit-learn (Pedregosa et al., 2011)) versus the number of iterations and the number of transmitted bits between workers, the latter being proportional to the actual running time if communication is the bottleneck.

Algorithms.
As baselines we consider Algorithm 3 with exact communication (denoted as 'plain') and the communication-efficient state-of-the-art optimization schemes DCDSGD and ECDSGD recently proposed in (Tang et al., 2018a) (for unbiased quantization operators), and compare them to ChocoSGD. We use decaying stepsizes whose parameters are individually tuned for each algorithm and compression scheme, with values given in Table 4.
Impact of Topology.
In Figure 4 we depict the performance of the baseline Algorithm 3 with exact communication on different topologies (ring, torus and fully connected; Fig. 1) with the uniformly averaging mixing matrix $W$. Note that Algorithm 3 on the fully connected graph corresponds to mini-batch SGD. Increasing the number of workers $n$ shows the mild effect of the network topology on the convergence. We observe that the sorted setting is more difficult than the randomly shuffled setting (see Fig. 7 in Appendix G), in which the convergence behavior remains almost unaffected. In the following we focus on the hardest case, i.e. the ring topology.
Comparison to Baselines.
Figures 5 and 6 depict the performance of these algorithms on the ring topology for sorted data of the two datasets. ChocoSGD performs almost as well as the exact Algorithm 3 in all situations, while using substantially less communication with ($\mathrm{top}_k$) sparsification (Fig. 5, right) and with quantization. The ($\mathrm{top}_k$) variant performs slightly better than ($\mathrm{rand}_k$) sparsification.
ChocoSGD consistently outperforms DCDSGD in all settings. We also observed that DCDSGD starts to perform better for a larger number of levels in the quantization operator (increasing the communication cost). This is consistent with the reporting in (Tang et al., 2018a), which assumed high-precision quantization. To our surprise, ECDSGD, which was proposed in (Tang et al., 2018a) as the preferred alternative to DCDSGD for less precise quantization operators, always performs worse than DCDSGD, and often diverges.
Conclusion.
The experiments verify our theoretical findings: ChocoGossip is the first linearly convergent gossip algorithm with quantized communication, and ChocoSGD consistently outperforms the baselines for decentralized optimization, reaching almost the same performance as the exact algorithm without communication restrictions while significantly reducing the communication cost. In view of the striking popularity of SGD, as opposed to full-gradient methods, for deep learning, the application of ChocoSGD to decentralized deep learning—an instance of problem (1)—is a promising direction.
Acknowledgments.
We acknowledge funding from SNSF grant 200021_175796, as well as a Google Focused Research Award.
References

Aldous & Fill (2002)
Aldous, D. and Fill, J. A.
Reversible Markov chains and random walks on graphs, 2002.
Unfinished monograph, recompiled 2014, available at http://www.stat.berkeley.edu/~aldous/RWG/book.html.
 Alistarh et al. (2017) Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic, M. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), NIPS - Advances in Neural Information Processing Systems 30, pp. 1709–1720. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6768qsgdcommunicationefficientsgdviagradientquantizationandencoding.pdf.
 Alistarh et al. (2018) Alistarh, D., Hoefler, T., Johansson, M., Konstantinov, N., Khirirat, S., and Renggli, C. The convergence of sparsified gradient methods. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., CesaBianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 5977–5987. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7837theconvergenceofsparsifiedgradientmethods.pdf.
 Assran et al. (2018) Assran, M., Loizou, N., Ballas, N., and Rabbat, M. Stochastic Gradient Push for Distributed Deep Learning. arXiv, November 2018.
 Aysal et al. (2008) Aysal, T. C., Coates, M. J., and Rabbat, M. G. Distributed average consensus with dithered quantization. IEEE Transactions on Signal Processing, 56(10):4905–4918, Oct 2008. ISSN 1053587X. doi: 10.1109/TSP.2008.927071.
 Bottou (2010) Bottou, L. Largescale machine learning with stochastic gradient descent. In Lechevallier, Y. and Saporta, G. (eds.), Proceedings of COMPSTAT’2010, pp. 177–186, Heidelberg, 2010. PhysicaVerlag HD. ISBN 9783790826043.
 Boyd et al. (2006) Boyd, S., Ghosh, A., Prabhakar, B., and Shah, D. Randomized gossip algorithms. IEEE/ACM Trans. Netw., 14(SI):2508–2530, June 2006. ISSN 10636692. doi: 10.1109/TIT.2006.874516. URL https://doi.org/10.1109/TIT.2006.874516.
 Carli et al. (2007) Carli, R., Fagnani, F., Frasca, P., Taylor, T., and Zampieri, S. Average consensus on networks with transmission noise or quantization. In 2007 European Control Conference (ECC), pp. 1852–1857, July 2007. doi: 10.23919/ECC.2007.7068829.
 Carli et al. (2010a) Carli, R., Bullo, F., and Zampieri, S. Quantized average consensus via dynamic coding/decoding schemes. International Journal of Robust and Nonlinear Control, 20:156–175, 2010a. ISSN 10498923.
 Carli et al. (2010b) Carli, R., Frasca, P., Fagnani, F., and Zampieri, S. Gossip consensus algorithms via quantized communication. Automatica, 46:70–80, 2010b. ISSN 00051098.
 Dekel et al. (2012) Dekel, O., GiladBachrach, R., Shamir, O., and Xiao, L. Optimal distributed online prediction using minibatches. J. Mach. Learn. Res., 13(1):165–202, January 2012. ISSN 15324435. URL http://dl.acm.org/citation.cfm?id=2503308.2188391.
 Duchi et al. (2012) Duchi, J. C., Agarwal, A., and Wainwright, M. J. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592–606, March 2012. ISSN 00189286. doi: 10.1109/TAC.2011.2161027.
 Fang & Li (2010) Fang, J. and Li, H. Distributed estimation of gauss  markov random fields with onebit quantized data. IEEE Signal Processing Letters, 17(5):449–452, May 2010. ISSN 10709908. doi: 10.1109/LSP.2010.2043157.
 Goodall (1951) Goodall, W. M. Television by pulse code modulation. The Bell System Technical Journal, 30(1):33–49, Jan 1951. ISSN 00058580. doi: 10.1002/j.15387305.1951.tb01365.x.
 He et al. (2018) He, L., Bian, A., and Jaggi, M. Cola: Decentralized linear learning. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., CesaBianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 4541–4551. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7705coladecentralizedlinearlearning.pdf.
 Iutzeler et al. (2013) Iutzeler, F., Bianchi, P., Ciblat, P., and Hachem, W. Asynchronous distributed optimization using a randomized alternating direction method of multipliers. In Proceedings of the 52nd IEEE Conference on Decision and Control, CDC 2013, December 1013, 2013, Firenze, Italy, pp. 3671–3676. IEEE, 2013. doi: 10.1109/CDC.2013.6760448. URL https://doi.org/10.1109/CDC.2013.6760448.
 Jakovetić et al. (2014) Jakovetić, D., Xavier, J., and Moura, J. M. F. Fast distributed gradient methods. IEEE Transactions on Automatic Control, 59(5):1131–1146, May 2014. ISSN 00189286. doi: 10.1109/TAC.2014.2298712.
 Johansson et al. (2010) Johansson, B., Rabi, M., and Johansson, M. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2010. doi: 10.1137/08073038X. URL https://doi.org/10.1137/08073038X.
 Kempe et al. (2003) Kempe, D., Dobra, A., and Gehrke, J. Gossipbased computation of aggregate information. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’03, pp. 482–, Washington, DC, USA, 2003. IEEE Computer Society. ISBN 0769520405. URL http://dl.acm.org/citation.cfm?id=946243.946317.
 Konecny & Richtárik (2018) Konecny, J. and Richtárik, P. Randomized Distributed Mean Estimation: Accuracy vs. Communication. Frontiers in Applied Mathematics and Statistics, 4:1502, December 2018.
 Lan et al. (2018) Lan, G., Lee, S., and Zhou, Y. Communicationefficient algorithms for decentralized and stochastic optimization. Mathematical Programming, Dec 2018. ISSN 14364646. doi: 10.1007/s1010701813554. URL https://doi.org/10.1007/s1010701813554.
 Lewis et al. (2004) Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. Rcv1: A new benchmark collection for text categorization research. J. Mach. Learn. Res., 5:361–397, December 2004. ISSN 15324435. URL http://dl.acm.org/citation.cfm?id=1005332.1005345.
 Li et al. (2011) Li, T., Fu, M., Xie, L., and Zhang, J. Distributed consensus with limited communication data rate. IEEE Transactions on Automatic Control, 56(2):279–292, Feb 2011. ISSN 00189286. doi: 10.1109/TAC.2010.2052384.
 Lian et al. (2017) Lian, X., Zhang, C., Zhang, H., Hsieh, C.J., Zhang, W., and Liu, J. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30, pp. 5330–5340. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7117candecentralizedalgorithmsoutperformcentralizedalgorithmsacasestudyfordecentralizedparallelstochasticgradientdescent.pdf.
 Lin et al. (2018) Lin, Y., Han, S., Mao, H., Wang, Y., and Dally, B. Deep gradient compression: Reducing the communication bandwidth for distributed training. In ICLR 2018 - International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SkhQHMW0W.
 Nedić & Ozdaglar (2009) Nedić, A. and Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48–61, Jan 2009. ISSN 0018-9286. doi: 10.1109/TAC.2008.2009515.
 Nedić et al. (2008) Nedić, A., Olshevsky, A., Ozdaglar, A., and Tsitsiklis, J. N. Distributed subgradient methods and quantization effects. In Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008, pp. 4177–4184, 2008. ISBN 9781424431243. doi: 10.1109/CDC.2008.4738860.
 Nedić et al. (2015) Nedić, A., Lee, S., and Raginsky, M. Decentralized online optimization with global objectives and local communication. In 2015 American Control Conference (ACC), pp. 4497–4503, July 2015. doi: 10.1109/ACC.2015.7172037.
 Olfati-Saber & Murray (2004) Olfati-Saber, R. and Murray, R. M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520–1533, Sep. 2004. ISSN 0018-9286. doi: 10.1109/TAC.2004.834113.
 Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
 Rabbat (2015) Rabbat, M. Multi-agent mirror descent for decentralized stochastic optimization. In 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 517–520, Dec 2015. doi: 10.1109/CAMSAP.2015.7383850.
 Rakhlin et al. (2012) Rakhlin, A., Shamir, O., and Sridharan, K. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning, ICML’12, pp. 1571–1578, USA, 2012. Omnipress. ISBN 9781450312851. URL http://dl.acm.org/citation.cfm?id=3042573.3042774.
 Robbins & Monro (1951) Robbins, H. and Monro, S. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, September 1951.
 Roberts (1962) Roberts, L. Picture coding using pseudo-random noise. IRE Transactions on Information Theory, 8(2):145–154, February 1962. ISSN 0096-1000. doi: 10.1109/TIT.1962.1057702.
 Scaman et al. (2017) Scaman, K., Bach, F., Bubeck, S., Lee, Y. T., and Massoulié, L. Optimal algorithms for smooth and strongly convex distributed optimization in networks. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3027–3036, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/scaman17a.html.
 Scaman et al. (2018) Scaman, K., Bach, F., Bubeck, S., Massoulié, L., and Lee, Y. T. Optimal algorithms for non-smooth distributed optimization in networks. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 2745–2754. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7539-optimal-algorithms-for-non-smooth-distributed-optimization-in-networks.pdf.
 Seide et al. (2014) Seide, F., Fu, H., Droppo, J., Li, G., and Yu, D. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Li, H., Meng, H. M., Ma, B., Chng, E., and Xie, L. (eds.), INTERSPEECH, pp. 1058–1062. ISCA, 2014. URL http://dblp.uni-trier.de/db/conf/interspeech/interspeech2014.html#SeideFDLY14.
 Shamir & Srebro (2014) Shamir, O. and Srebro, N. Distributed stochastic optimization and learning. 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 850–857, 2014.
 Sirb & Ye (2016) Sirb, B. and Ye, X. Consensus optimization with delayed and stochastic gradients on decentralized networks. In 2016 IEEE International Conference on Big Data (Big Data), pp. 76–85, Dec 2016. doi: 10.1109/BigData.2016.7840591.
 Sonnenburg et al. (2008) Sonnenburg, S., Franc, V., Yom-Tov, E., and Sebag, M. Pascal large scale learning challenge. 25th International Conference on Machine Learning (ICML 2008) Workshop. J. Mach. Learn. Res., 10:1937–1953, 2008.
 Stich (2018) Stich, S. U. Local SGD Converges Fast and Communicates Little. arXiv e-prints, art. arXiv:1805.09767, May 2018.
 Stich et al. (2018) Stich, S. U., Cordonnier, J.-B., and Jaggi, M. Sparsified SGD with memory. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 4452–4463. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7697-sparsified-sgd-with-memory.pdf.
 Tang et al. (2018a) Tang, H., Gan, S., Zhang, C., Zhang, T., and Liu, J. Communication compression for decentralized training. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 7663–7673. Curran Associates, Inc., 2018a. URL http://papers.nips.cc/paper/7992-communication-compression-for-decentralized-training.pdf.
 Tang et al. (2018b) Tang, H., Lian, X., Yan, M., Zhang, C., and Liu, J. D²: Decentralized training over decentralized data. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4848–4856, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018b. PMLR. URL http://proceedings.mlr.press/v80/tang18a.html.
 Thanou et al. (2013) Thanou, D., Kokiopoulou, E., Pu, Y., and Frossard, P. Distributed average consensus with quantization refinement. IEEE Transactions on Signal Processing, 61(1):194–205, Jan 2013. ISSN 1053-587X. doi: 10.1109/TSP.2012.2223692.
 Tsitsiklis (1984) Tsitsiklis, J. N. Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology, 1984.
 Uribe et al. (2018) Uribe, C. A., Lee, S., and Gasnikov, A. A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks. arXiv, September 2018.
 Wangni et al. (2018) Wangni, J., Wang, J., Liu, J., and Zhang, T. Gradient sparsification for communication-efficient distributed optimization. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 1306–1316. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7405-gradient-sparsification-for-communication-efficient-distributed-optimization.pdf.
 Wei & Ozdaglar (2012) Wei, E. and Ozdaglar, A. Distributed alternating direction method of multipliers. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pp. 5445–5450, Dec 2012. doi: 10.1109/CDC.2012.6425904.
 Wen et al. (2017) Wen, W., Xu, C., Yan, F., Wu, C., Wang, Y., Chen, Y., and Li, H. TernGrad: Ternary gradients to reduce communication in distributed deep learning. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), NIPS - Advances in Neural Information Processing Systems 30, pp. 1509–1519. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf.
 Xiao & Boyd (2004) Xiao, L. and Boyd, S. Fast linear iterations for distributed averaging. Systems & Control Letters, 53(1):65–78, 2004. ISSN 0167-6911. doi: 10.1016/j.sysconle.2004.02.022. URL http://www.sciencedirect.com/science/article/pii/S0167691104000398.
 Xiao et al. (2005) Xiao, L., Boyd, S., and Lall, S. A scheme for robust distributed sensor fusion based on average consensus. In IPSN 2005. Fourth International Symposium on Information Processing in Sensor Networks, 2005., pp. 63–70, April 2005. doi: 10.1109/IPSN.2005.1440896.
 Yuan et al. (2012) Yuan, D., Xu, S., Zhao, H., and Rong, L. Distributed dual averaging method for multi-agent optimization with quantized communication. Systems & Control Letters, 61(11):1053–1061, 2012. ISSN 0167-6911. doi: 10.1016/j.sysconle.2012.06.004. URL http://www.sciencedirect.com/science/article/pii/S0167691112001193.
 Zhang et al. (2017) Zhang, H., Li, J., Kara, K., Alistarh, D., Liu, J., and Zhang, C. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 4035–4043, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/zhang17e.html.
Appendix A Basic Identities and Inequalities
A.1 Smooth and Strongly Convex Functions
Definition 2.
A differentiable function $f \colon \mathbb{R}^d \to \mathbb{R}$ is strongly convex for parameter $\mu > 0$ if
(9) $f(y) \geq f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu}{2} \|y - x\|^2\,, \qquad \forall\, x, y \in \mathbb{R}^d\,.$
Definition 3.
A differentiable function $f \colon \mathbb{R}^d \to \mathbb{R}$ is smooth for parameter $L > 0$ if
(10) $f(y) \leq f(x) + \langle \nabla f(x), y - x \rangle + \frac{L}{2} \|y - x\|^2\,, \qquad \forall\, x, y \in \mathbb{R}^d\,.$
Remark 5.
If $f$ is $L$-smooth with minimizer $x^\star$, s.t. $\nabla f(x^\star) = 0$, then
(11) $\|\nabla f(x)\|^2 \leq 2L \left( f(x) - f(x^\star) \right)\,, \qquad \forall\, x \in \mathbb{R}^d\,.$
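As an illustrative sanity check (not part of the paper), the bound in (11) can be verified numerically on a simple smooth function; here $f(x) = \frac{L}{2} x^2$, which is $L$-smooth with minimizer $x^\star = 0$, so the inequality in fact holds with equality:

```python
# Illustrative check (not from the paper): for an L-smooth f with
# minimizer x*, it holds that ||grad f(x)||^2 <= 2L (f(x) - f(x*)).
# Here f(x) = (L/2) x^2 is L-smooth with x* = 0 and f(x*) = 0.
L = 3.0

def f(x):
    return 0.5 * L * x * x

def grad(x):
    return L * x

for x in [-2.0, -0.5, 0.0, 1.0, 4.0]:
    lhs = grad(x) ** 2          # ||grad f(x)||^2 = L^2 x^2
    rhs = 2 * L * (f(x) - 0.0)  # 2L (f(x) - f(x*)) = L^2 x^2
    assert lhs <= rhs + 1e-12
```

For this quadratic the two sides coincide, which shows the factor $2L$ in (11) cannot be improved in general.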
A.2 Vector and Matrix Inequalities
Remark 6.
For $a, b \in \mathbb{R}^d$,
(12) $\|a + b\|^2 \leq 2\|a\|^2 + 2\|b\|^2\,.$
Remark 7.
For an arbitrary set of $n$ vectors $\{a_i\}_{i=1}^n$, $a_i \in \mathbb{R}^d$,
(13) $\Big\| \sum_{i=1}^n a_i \Big\|^2 \leq n \sum_{i=1}^n \|a_i\|^2\,.$
Remark 8.
For two given vectors $a, b \in \mathbb{R}^d$,
(14) $2 \langle a, b \rangle \leq \gamma \|a\|^2 + \gamma^{-1} \|b\|^2\,, \qquad \forall\, \gamma > 0\,.$
Remark 9.
For two given vectors $a, b \in \mathbb{R}^d$,
(15) $\|a + b\|^2 \leq (1 + \gamma) \|a\|^2 + (1 + \gamma^{-1}) \|b\|^2\,, \qquad \forall\, \gamma > 0\,.$
This inequality also holds for the sum of two matrices in the Frobenius norm.
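A quick numerical spot-check (illustrative only) of the inequality $\|a+b\|^2 \leq (1+\gamma)\|a\|^2 + (1+\gamma^{-1})\|b\|^2$ on random vectors and random $\gamma > 0$:

```python
# Spot-check (illustrative) of ||a+b||^2 <= (1+g)||a||^2 + (1+1/g)||b||^2,
# which follows from expanding ||a+b||^2 and bounding the cross term
# 2<a,b> by g||a||^2 + (1/g)||b||^2 (Young's inequality).
import random

random.seed(0)

def sq_norm(v):
    return sum(x * x for x in v)

for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(5)]
    b = [random.gauss(0, 1) for _ in range(5)]
    g = random.uniform(0.1, 10.0)
    lhs = sq_norm([x + y for x, y in zip(a, b)])
    rhs = (1 + g) * sq_norm(a) + (1 + 1 / g) * sq_norm(b)
    assert lhs <= rhs + 1e-9
```

The same check applied entrywise to flattened matrices covers the Frobenius-norm variant, since the Frobenius norm of a matrix is the Euclidean norm of its vectorization.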
A.3 Implications of the Bounded Gradient and Bounded Variance Assumptions
Remark 10.
If $f_1, \dots, f_n \colon \mathbb{R}^d \to \mathbb{R}$ are convex functions with $\|\nabla f_i(x)\|^2 \leq G^2$ for all $x \in \mathbb{R}^d$, then the average $f = \frac{1}{n} \sum_{i=1}^n f_i$ satisfies $\|\nabla f(x)\|^2 = \big\| \frac{1}{n} \sum_{i=1}^n \nabla f_i(x) \big\|^2 \leq \frac{1}{n} \sum_{i=1}^n \|\nabla f_i(x)\|^2 \leq G^2$, by convexity of the squared norm.
Remark 11 (Mini-batch variance).
Let $\xi_1, \dots, \xi_b$ be independent random vectors with $\mathbb{E}\, \xi_i = 0$ and $\mathbb{E} \|\xi_i\|^2 \leq \sigma^2$. Then $\mathbb{E} \big\| \frac{1}{b} \sum_{i=1}^b \xi_i \big\|^2 \leq \frac{\sigma^2}{b}$.
Proof.
This follows from
$\mathbb{E} \Big\| \frac{1}{b} \sum_{i=1}^b \xi_i \Big\|^2 = \frac{1}{b^2} \sum_{i=1}^b \mathbb{E} \|\xi_i\|^2 + \frac{1}{b^2} \sum_{i \neq j} \mathbb{E} \langle \xi_i, \xi_j \rangle \leq \frac{\sigma^2}{b}\,,$
where each cross term $\mathbb{E} \langle \xi_i, \xi_j \rangle$ with $i \neq j$ vanishes because $\xi_i$ is independent of $\xi_j$ and $\mathbb{E}\, \xi_j = 0$. ∎
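The $\sigma^2 / b$ scaling can be illustrated with a small Monte Carlo estimate (not from the paper): with $b = 8$ i.i.d. standard normal samples per mini-batch ($\sigma^2 = 1$), the empirical second moment of the mini-batch mean should be close to $1/8$:

```python
# Monte Carlo illustration of the mini-batch variance bound: for
# i.i.d. zero-mean xi_i with E[xi_i^2] = sigma^2, the mini-batch mean
# (1/b) sum_i xi_i has second moment sigma^2 / b.
import random

random.seed(1)
sigma2, b, trials = 1.0, 8, 20000

est = 0.0
for _ in range(trials):
    batch_mean = sum(random.gauss(0, 1) for _ in range(b)) / b
    est += batch_mean ** 2
est /= trials

# est should be close to sigma2 / b = 0.125 up to Monte Carlo error
assert abs(est - sigma2 / b) < 0.01
```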
Appendix B Consensus in Matrix Notation
In the proofs in the next section we will use matrix notation, as already introduced in the main text. We define
(16) $X := [x_1, \dots, x_n] \in \mathbb{R}^{d \times n}\,, \qquad \overline{X} := \frac{1}{n} X \mathbf{1}\mathbf{1}^\top = [\bar{x}, \dots, \bar{x}] \in \mathbb{R}^{d \times n}\,.$
Then using matrix notation we can rewrite Algorithm 1 as
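To illustrate the matrix notation, the sketch below runs a plain gossip-averaging recursion $X_{t+1} = X_t W$ with a doubly stochastic mixing matrix $W$ on a ring (a standard simplification for illustration, not necessarily the exact update of Algorithm 1), and checks that all columns of $X$ converge to the average $\bar{x}$:

```python
# Illustration (simplified; not necessarily the exact update of
# Algorithm 1): plain gossip averaging X_{t+1} = X_t W, where column i
# of X in R^{d x n} holds node i's iterate and W is a symmetric doubly
# stochastic mixing matrix for a ring of n nodes.
n = 4
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    W[i][i] = 0.5             # self-weight
    W[i][(i - 1) % n] = 0.25  # left neighbour on the ring
    W[i][(i + 1) % n] = 0.25  # right neighbour on the ring

X = [[1.0, 5.0, 3.0, 7.0]]    # d = 1: one coordinate per node

def gossip_step(X, W):
    """One multiplication X W (columns mix along the ring)."""
    d, cols = len(X), len(X[0])
    return [[sum(X[r][k] * W[k][c] for k in range(cols))
             for c in range(cols)] for r in range(d)]

for _ in range(200):
    X = gossip_step(X, W)

# every node converges to the average (1 + 5 + 3 + 7) / 4 = 4
assert all(abs(x - 4.0) < 1e-9 for x in X[0])
```

Because $W$ is doubly stochastic, the average $\bar{x}$ (and hence $\overline{X}$) is preserved by every step, and the second-largest eigenvalue modulus of $W$ governs how fast the columns contract to it.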