Sparsified SGD with Memory

by Sebastian U. Stich, et al.

Huge scale machine learning problems are nowadays tackled by distributed optimization algorithms, i.e. algorithms that leverage the compute power of many devices for training. The communication overhead is a key bottleneck that hinders perfect scalability. Various recent works proposed to use quantization or sparsification techniques to reduce the amount of data that needs to be communicated, for instance by only sending the most significant entries of the stochastic gradient (top-k sparsification). Whilst such schemes showed very promising performance in practice, they have eluded theoretical analysis so far. In this work we analyze Stochastic Gradient Descent (SGD) with k-sparsification or compression (for instance top-k or random-k) and show that this scheme converges at the same rate as vanilla SGD when equipped with error compensation (keeping track of accumulated errors in memory). That is, communication can be reduced by a factor of the dimension of the problem (sometimes even more) whilst still converging at the same rate. We present numerical experiments to illustrate the theoretical findings and the better scalability for distributed applications.




1 Introduction

Stochastic Gradient Descent (SGD) [28] and variants thereof (e.g. [9, 15]) are among the most popular optimization algorithms in machine learning and deep learning [5]. SGD consists of iterations of the form

    $x_{t+1} = x_t - \gamma_t g_t(x_t)$,   (1)

for iterates $x_t \in \mathbb{R}^d$, stepsize (or learning rate) $\gamma_t > 0$, and stochastic gradient $g_t(x_t)$ with the property $\mathbb{E}[g_t(x_t)] = \nabla f(x_t)$, for a loss function $f \colon \mathbb{R}^d \to \mathbb{R}$. SGD addresses the computational bottleneck of full gradient descent, as the stochastic gradients $g_t(x_t)$ can in general be computed much more efficiently than a full gradient $\nabla f(x_t)$. However, note that in general both $g_t(x_t)$ and $\nabla f(x_t)$ are dense vectors¹ of size $d$, i.e. SGD does not address the communication bottleneck of gradient descent, which occurs as a roadblock both in distributed as well as parallel training. In the setting of distributed training, communicating the stochastic gradients to the other workers is a major limiting factor for many large-scale (deep) learning applications, see e.g. [32, 3, 43, 20]. The same bottleneck can also appear for parallel training, e.g. in the increasingly common setting of a single multi-core machine or device, where locking and bandwidth of memory write operations for the common shared parameter often form the main bottleneck, see e.g. [24, 13, 17].

¹Note that the stochastic gradients are dense vectors in the setting of training neural networks. They can themselves be sparse for generalized linear models under the additional assumption that the data is sparse.

A remedy to address these issues is to apply smaller and more efficient updates $\mathrm{comp}(\gamma_t g_t(x_t))$ instead of $\gamma_t g_t(x_t)$, where $\mathrm{comp} \colon \mathbb{R}^d \to \mathbb{R}^d$ generates a compression of the gradient, such as by lossy quantization or sparsification. We discuss different schemes below. However, too aggressive compression can hurt the performance, unless it is implemented in a clever way: 1Bit-SGD [32, 36] combines gradient quantization with an error compensation technique, i.e. a memory or feedback mechanism. In this work we leverage this key mechanism, but apply it in the more general setting of SGD with arbitrary compression operators. We now sketch how the algorithm uses feedback to correct for errors accumulated in previous iterations. Roughly speaking, the method keeps track of a memory vector $m_t$ which contains the sum of the information that has been suppressed thus far, and injects this information back in the next iteration, by transmitting $\mathrm{comp}(m_t + \gamma_t g_t(x_t))$ instead of only $\mathrm{comp}(\gamma_t g_t(x_t))$. Note that updates of this kind are not unbiased (even if $\mathrm{comp}$ would be) and there is also no control over the delay after which the single coordinates are applied. These are some (technical) reasons why no theoretical analysis of this scheme existed up to now.
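This feedback mechanism can be made concrete in a few lines of numpy (our own hypothetical sketch, not the paper's published code): the memory vector absorbs exactly the mass that the compression suppresses, so no gradient information is ever lost, only delayed.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_k(x, k):
    """Keep k uniformly chosen coordinates of x, zero out the rest."""
    idx = rng.choice(x.size, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

d, k, gamma = 10, 2, 0.1
x, m = np.zeros(d), np.zeros(d)
total = np.zeros(d)                   # sum of all stepsize-scaled gradients
for _ in range(100):
    g = rng.standard_normal(d)        # stand-in for a stochastic gradient
    total += gamma * g
    v = m + gamma * g                 # feedback: add back what was suppressed
    u = rand_k(v, k)                  # only this sparse part is transmitted
    x -= u
    m = v - u                         # remember the rest instead of dropping it

# invariant: applied updates plus current memory account for every gradient
assert np.allclose(-x + m, total)
```

The closing assertion is the "no information lost" property: the applied updates ($-x$ here, since $x$ started at zero) plus the memory equal the full sum of scaled gradients, whereas a scheme without memory would simply have discarded the suppressed coordinates.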

In this paper we give a concise convergence rate analysis for SGD with memory and $k$-contraction operators², such as (but not limited to) top-$k$ sparsification. Our analysis also supports ultra-sparsification operators for which $k < 1$, i.e. where less than one coordinate of the stochastic gradient is applied on average in (1). We not only provide the first convergence result for this method, but the result also shows that the method converges at the same rate as vanilla SGD.

²See Definition 2.1.

1.1 Related Work

There are several ways to reduce the communication in SGD: for instance, by simply increasing the amount of computation before communication, i.e. by using large mini-batches (see e.g. [42, 11]), or by designing communication-efficient schemes [44]. These approaches are largely orthogonal to the methods we consider in this paper, which focus on quantization or sparsification of the gradient.

Several papers consider approaches that limit the number of bits used to represent floating point numbers [12, 23, 30]. Recent work proposes adaptive tuning of the compression ratio [7]. Unbiased quantization operators not only limit the number of bits, but quantize the stochastic gradients in such a way that they remain unbiased estimators of the gradient [3, 40]. The ZipML framework also applies this technique to the data [43]. Sparsification methods reduce the number of non-zero entries in the stochastic gradient [3, 39].

A very aggressive sparsification method is to keep only very few coordinates of the stochastic gradient, namely the coordinates with the largest magnitudes [8, 1]. In contrast to the unbiased schemes, it is clear that such methods can only work by using some kind of error accumulation or feedback procedure, similar to the one we have already discussed [32, 36], as otherwise certain coordinates could simply never be updated. However, in certain applications no feedback mechanism is needed [37]. More elaborate sparsification schemes have also been introduced [20].

Asynchronous updates provide an alternative way to hide the communication overhead to a certain extent [18]. However, those methods usually rely on a sparsity assumption on the updates [24, 30], which is not realistic e.g. in deep learning. We would like to advocate that combining gradient sparsification with those asynchronous schemes is a promising approach, as it combines the best of both worlds. Other scenarios that could profit from sparsification are heterogeneous systems or specialized hardware, e.g. accelerators [10, 43].

Convergence proofs for SGD [28] typically rely on averaging the iterates [29, 26, 22], though convergence of the last iterate can also be proven [33]. For our convergence proof we rely on averaging techniques that give more weight to more recent iterates [16, 33, 27], as well as the perturbed iterate framework from Mania et al. [21] and techniques from [17, 35].

Simultaneously to our work, [4, 38] at NIPS 2018 propose related schemes. We will update this discussion once their manuscripts become available. Another simultaneous analysis [41] at ICML 2018 is restricted to unbiased gradient compression. That scheme also critically relies on an error compensation technique, but in contrast to our work the analysis is restricted to quadratic functions and the scheme introduces two additional hyperparameters that control the feedback mechanism.

1.2 Contributions

We consider finite-sum convex optimization problems of the form

    $f(x) := \frac{1}{n} \sum_{i=1}^n f_i(x)$,   (2)

where each $f_i$ is $L$-smooth³ and $f$ is $\mu$-strongly convex⁴. We consider a sequential sparsified SGD algorithm with an error accumulation technique and prove convergence for $k$-contraction operators (for instance the sparsification operators top-$k$ or random-$k$). For appropriately chosen stepsizes and an averaged iterate $\bar{x}_T$ after $T$ steps we show convergence

    $\mathbb{E}\, f(\bar{x}_T) - f^\star = O\!\left(\frac{G^2}{\mu T}\right) + O\!\left(\frac{\kappa\, G^2}{\mu\, \delta^2 T^2}\right)$,   (3)

for $\delta = \frac{k}{d}$, $\kappa = \frac{L}{\mu}$, and $G^2$ an upper bound on the second moment of the stochastic gradients. Not only is this, to the best of our knowledge, the first convergence result for sparsified SGD with memory, but the result also shows that for $T = \Omega\big(\frac{\kappa}{\delta^2}\big)$ the first term is dominating and the convergence rate is the same as for vanilla SGD.

³That is, $\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|$, $\forall x, y \in \mathbb{R}^d$.
⁴That is, $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu}{2}\|y - x\|^2$, $\forall x, y \in \mathbb{R}^d$.

We introduce the method formally in Section 2 and sketch the convergence proof in Section 3. In Section 4 we include a few numerical experiments for illustrative purposes. The experiments highlight that top-$k$ sparsification yields a very effective compression method and does not hurt convergence. Our multi-core simulations demonstrate that SGD with memory scales better than asynchronous SGD thanks to the enforced sparsity of the updates. It also drastically decreases the communication cost without sacrificing the rate of convergence. We would like to stress that the effectiveness of SGD variants with sparsification techniques has already been demonstrated in practice [8, 1, 20, 36, 32].

Although we do not yet provide convergence guarantees for parallel and asynchronous variants of the scheme, these are the main applications of this method. For instance, we would like to highlight that asynchronous SGD schemes [24, 2] could profit from gradient sparsification. To demonstrate this use-case, we include in Section 4 a set of experiments for a multi-core implementation.

2 SGD with Memory

In this section we present the sparse SGD algorithm with memory. First we introduce the sparsification operators that we use to drastically reduce the communication cost in comparison with vanilla SGD.

2.1 Compression and Sparsification Operators

We consider compression operators that satisfy the following contraction property:

Definition 2.1 ($k$-contraction).

For a parameter $0 < k \le d$, a $k$-contraction operator is a (possibly randomized) operator $\mathrm{comp} \colon \mathbb{R}^d \to \mathbb{R}^d$ that satisfies the contraction property

    $\mathbb{E}\,\|x - \mathrm{comp}(x)\|^2 \le \left(1 - \frac{k}{d}\right) \|x\|^2, \qquad \forall x \in \mathbb{R}^d$.   (4)
The contraction property is sufficient to obtain all mathematical results derived in this paper. However, note that (4) does not imply that $\mathrm{comp}(x)$ is necessarily a sparse vector; dense vectors can satisfy (4) as well. One of the main goals of this work is to derive communication-efficient schemes, thus we are particularly interested in operators that also ensure that $\mathrm{comp}(x)$ can be encoded much more efficiently than the original $x$.

The following two operators are examples of $k$-sparsification operators, that is, $k$-contraction operators whose outputs are additionally $k$-sparse vectors:

Definition 2.2.

For a parameter $1 \le k \le d$, the operators $\mathrm{top}_k \colon \mathbb{R}^d \to \mathbb{R}^d$ and $\mathrm{rand}_k \colon \mathbb{R}^d \times \Omega_k \to \mathbb{R}^d$, where $\Omega_k$ denotes the set of all $k$-element subsets of $[d]$, are defined for $x \in \mathbb{R}^d$ and $\omega \in \Omega_k$ as

    $(\mathrm{top}_k(x))_{\pi(i)} := \begin{cases} x_{\pi(i)} & \text{if } i \le k, \\ 0 & \text{otherwise,} \end{cases} \qquad (\mathrm{rand}_k(x, \omega))_i := \begin{cases} x_i & \text{if } i \in \omega, \\ 0 & \text{otherwise,} \end{cases}$   (5)

where $\pi$ is a permutation of $[d]$ such that $|x_{\pi(i)}| \ge |x_{\pi(i+1)}|$ for $1 \le i < d$. We abbreviate $\mathrm{rand}_k(x)$ whenever the second argument is chosen uniformly at random, $\omega \sim \Omega_k$.

It is easy to see that both operators satisfy Definition 2.1 of being a $k$-contraction. For completeness the proof is included in Appendix A.1.
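As an illustration (our own sketch, not the paper's reference implementation), both operators are a few lines of numpy, and the $k$-contraction property (4) is easy to check numerically for $\mathrm{top}_k$, where it even holds without the expectation:

```python
import numpy as np

def top_k(x, k):
    """Keep the k entries of largest magnitude, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng=None):
    """Keep a uniformly random k-element subset of the entries."""
    rng = rng or np.random.default_rng()
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx]
    return out

x = np.array([3.0, -1.0, 0.5, 2.0])
d, k = x.size, 2
# contraction property (4): ||x - top_k(x)||^2 <= (1 - k/d) * ||x||^2
lhs = np.linalg.norm(x - top_k(x, k)) ** 2       # 1.0 + 0.25 = 1.25
rhs = (1 - k / d) * np.linalg.norm(x) ** 2       # 0.5 * 14.25 = 7.125
assert lhs <= rhs
```

For $\mathrm{rand}_k$ the property only holds in expectation over the random subset, which is why (4) is stated with $\mathbb{E}$.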

We note that our setting is more general than simply measuring sparsity in terms of the cardinality, i.e. the number of non-zero elements of vectors in $\mathbb{R}^d$. Instead, Definition 2.1 can also be considered for quantization or, e.g., floating point representations of each entry of the vector. In such a setting we would for instance measure sparsity in terms of the number of bits that are needed to encode the vector.

Remark 2.3 (Ultra-sparsification).

We would like to highlight that many other operators satisfy Definition 2.1, not only the two examples given in Definition 2.2. A notable variant is to pick a single random coordinate of a vector with probability $p$: for this operator, property (4) holds with $k = p$, even for $p < 1$. That is, it suffices to transmit on average less than one coordinate per iteration (this would then correspond to a mini-batch update).
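Such an ultra-sparsification operator can be sketched as follows (a hypothetical illustration of the remark, with $p$ a transmission probability of our choosing): with probability $p$ one uniformly chosen coordinate is kept, otherwise nothing is sent, so on average $p < 1$ coordinates are transmitted, and a direct computation gives $\mathbb{E}\,\|x - \mathrm{comp}(x)\|^2 = (1 - \frac{p}{d})\|x\|^2$, i.e. (4) holds with $k = p$.

```python
import numpy as np

def ultra_sparsify(x, p, rng):
    """With probability p transmit one uniformly chosen coordinate, else nothing."""
    out = np.zeros_like(x)
    if rng.random() < p:
        i = rng.integers(x.size)
        out[i] = x[i]
    return out

rng = np.random.default_rng(0)
x = np.ones(5)
sent = sum(np.count_nonzero(ultra_sparsify(x, 0.4, rng)) for _ in range(10000))
print(sent / 10000)   # approximately 0.4 coordinates transmitted per call
```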

2.2 Variance Blow-up for Unbiased Updates

Before introducing SGD with memory we first discuss a motivating example. Consider the following variant of SGD, where random coordinates of the stochastic gradient are dropped:

    $x_{t+1} = x_t - \gamma_t \cdot \frac{d}{k}\,\mathrm{rand}_k(g_t(x_t))$,   (6)

where $1 \le k \le d$. It is important to note that the update is unbiased, i.e. $\mathbb{E}\big[\frac{d}{k}\,\mathrm{rand}_k(g_t(x_t))\big] = \nabla f(x_t)$. For carefully chosen stepsizes this algorithm converges on strongly convex and smooth functions $f$ at a rate proportional to $\frac{1}{\mu T}$ times an upper bound on the second moment of the gradient estimator, see for instance [45]. We have

    $\mathbb{E}\,\Big\|\frac{d}{k}\,\mathrm{rand}_k(g_t(x))\Big\|^2 = \frac{d}{k}\,\mathbb{E}\,\|g_t(x)\|^2 \le \frac{d}{k}\,G^2$,

where we used the variance decomposition and the standard assumption $\mathbb{E}\,\|g_t(x)\|^2 \le G^2$. Hence, when $k$ is small this algorithm requires up to $\frac{d}{k}$ times more iterations to achieve the same error guarantee as vanilla SGD, which corresponds to $k = d$.

It is well known that the variance of the gradient estimator can be reduced by using mini-batches. If we consider in (6) the estimator $\frac{d}{k}\,\mathrm{rand}_k(\bar{g}_t(x_t))$ for a mini-batch gradient $\bar{g}_t(x_t) = \frac{k}{d}\sum_{i=1}^{d/k} g_t^i(x_t)$ of size $\frac{d}{k}$ instead, we have

    $\mathbb{E}\,\Big\|\frac{d}{k}\,\mathrm{rand}_k(\bar{g}_t(x_t))\Big\|^2 = \frac{d}{k}\,\mathbb{E}\,\|\bar{g}_t(x_t)\|^2 = \frac{d}{k}\,\|\nabla f(x_t)\|^2 + \sigma^2$,   (7)

where $\sigma^2$ denotes the variance of a single stochastic gradient. This shows that, when using mini-batches of appropriate size, the sparsification of the gradient does not increase the variance term and hence does not hurt convergence. However, by increasing the mini-batch size, we increase the computation by a factor of $\frac{d}{k}$.

These two observations seem to indicate that the factor $\frac{d}{k}$ is inevitably lost, either through an increased number of iterations or through increased computation. However, this is no longer true when the information in (6) is not dropped, but kept in memory. To illustrate this, assume a constant stepsize $\gamma$ and that index $i$ has not been selected by the $\mathrm{rand}_k$ operator in iterations $t, \dots, t + \tau - 1$, but is selected in iteration $t + \tau$. Then the memory contains the accumulated past information $\gamma \sum_{j=t}^{t+\tau-1} \big(g_j(x_j)\big)_i$ for this coordinate. Intuitively, we would expect that the variance of this estimator is reduced by a factor of $\tau \approx \frac{d}{k}$ compared to the naïve estimator in (6), similar to the mini-batch update in (7). Indeed, SGD with memory converges at the same rate as vanilla SGD, as we will demonstrate below.
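A quick Monte Carlo check (our own sketch, not from the paper) makes the variance blow-up of the unbiased estimator in (6) concrete: rescaled random sparsification keeps the mean but inflates the second moment by exactly $\frac{d}{k}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, trials = 100, 10, 20000
g = rng.standard_normal(d)            # a fixed "gradient" vector

est = np.zeros((trials, d))
for t in range(trials):
    idx = rng.choice(d, size=k, replace=False)
    est[t, idx] = (d / k) * g[idx]    # unbiased estimator (d/k) * rand_k(g)

mean_err = np.abs(est.mean(axis=0) - g).max()    # small: estimator is unbiased
blowup = (est ** 2).sum(axis=1).mean() / (g ** 2).sum()
print(mean_err, blowup)               # second moment inflated by about d/k = 10
```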

2.3 SGD with Memory: Algorithm and Convergence Results

We consider the following algorithm for a parameter $k > 0$, using a compression operator $\mathrm{comp}$ which is a $k$-contraction (Definition 2.1):

    $x_{t+1} = x_t - \mathrm{comp}\big(m_t + \gamma_t g_t(x_t)\big), \qquad m_{t+1} = m_t + \gamma_t g_t(x_t) - \mathrm{comp}\big(m_t + \gamma_t g_t(x_t)\big)$,   (8)

where $m_0 = 0$, $x_0 \in \mathbb{R}^d$, and $\{\gamma_t\}_{t \ge 0}$ denotes a sequence of stepsizes. The pseudocode is given in Algorithm 1. Note that the gradients get multiplied with the stepsize at the timestep when they are put into memory, and not when they are (partially) retrieved from the memory.

We state the precise convergence result for Algorithm 1 in Theorem 2.4 below. In Remark 2.6 we give a simplified statement in big-$O$ notation for a specific choice of the stepsizes $\gamma_t$.

1: Initialize variables $x_0 \in \mathbb{R}^d$ and $m_0 := 0$
2: for $t$ in $0 \dots T-1$ do
3:     Sample $i_t$ uniformly in $[n]$
4:     $g_t := \nabla f_{i_t}(x_t)$
5:     $u_t := \mathrm{comp}(m_t + \gamma_t g_t)$
6:     $x_{t+1} := x_t - u_t$
7:     $m_{t+1} := m_t + \gamma_t g_t - u_t$
8: end for
Algorithm 1 Mem-SGD
1: Initialize shared variable $x \in \mathbb{R}^d$
2: and local memories $m_0^w := 0$ for each worker $w \in [W]$
3: parallel for $w$ in $1 \dots W$ do
4:     for $t$ in $0 \dots T-1$ do
5:         Sample $i_t^w$ uniformly in $[n]$
6:         $g_t^w := \nabla f_{i_t^w}(x)$
7:         $u_t^w := \mathrm{comp}(m_t^w + \gamma_t g_t^w)$
8:         $x \leftarrow x - u_t^w$    (shared memory update)
9:         $m_{t+1}^w := m_t^w + \gamma_t g_t^w - u_t^w$
10:     end for
11: end parallel for
Algorithm 2 Parallel-Mem-SGD
Figure 1: Left: The Mem-SGD algorithm. Right: Implementation for multi-core experiments.
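As a runnable illustration of Algorithm 1 (a minimal sketch on a toy problem, assuming a top-$k$ compressor and our own choice of stepsizes, not the paper's experimental setup):

```python
import numpy as np

def top_k(x, k):
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def mem_sgd(grad, x0, T, k, gamma, seed=0):
    """Sequential Mem-SGD: each step applies only a k-sparse update;
    the suppressed mass stays in the memory vector m (cf. (8))."""
    rng = np.random.default_rng(seed)
    x, m = x0.copy(), np.zeros_like(x0)
    for t in range(T):
        g = grad(x, rng)               # stochastic gradient at x_t
        v = m + gamma(t) * g           # stepsize applied on entry to memory
        u = top_k(v, k)                # sparse part actually applied
        x = x - u
        m = v - u                      # error feedback
    return x

# toy strongly convex objective f(x) = 0.5 * ||x - b||^2 with noisy gradients
b = np.linspace(-1.0, 1.0, 20)
grad = lambda x, rng: (x - b) + 0.01 * rng.standard_normal(x.size)
x = mem_sgd(grad, np.zeros(20), T=2000, k=2, gamma=lambda t: 1.0 / (t + 10))
print(np.linalg.norm(x - b))   # close to 0 despite 90% of coordinates dropped per step
```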
Theorem 2.4.

Let $f_i$ be $L$-smooth and $f$ be $\mu$-strongly convex, with $\mathbb{E}\,\|g_t(x_t)\|^2 \le G^2$ for $0 \le t \le T$, where the iterates $\{x_t\}_{t \ge 0}$ are generated according to (8) for stepsizes $\gamma_t = \frac{2}{\mu(a+t)}$ with shift parameter $a > 1$. Then for the weighted average $\bar{x}_T = \frac{1}{S_T}\sum_{t=0}^{T} w_t x_t$ with weights $w_t = (a+t)^2$ and $S_T = \sum_{t=0}^{T} w_t$, it holds

    $\mathbb{E}\, f(\bar{x}_T) - f^\star = O\!\left(\frac{G^2}{\mu T}\right) + O\!\left(\frac{\mu\, a^3 \|x_0 - x^\star\|^2}{T^3}\right) + O\!\left(\frac{\kappa\, G^2}{\mu\, \delta^2 T^2}\right)$,   (9)

where $\kappa = \frac{L}{\mu}$, $\delta = \frac{k}{d}$ for the $k$-contraction operator $\mathrm{comp}$, and $x^\star$ denotes the minimizer of $f$.

Remark 2.5 (Choice of the shift ).

Theorem 2.4 says that for any shift $a > 1$ there are stepsizes $\gamma_t$ such that (9) holds. However, for the choice $a = O(1)$ the initial stepsizes exceed $\frac{1}{L}$ and the last term in (9) deteriorates, requiring more steps to yield convergence. For $a \ge 2\kappa$ we have $\gamma_t \le \frac{1}{L}$ for all $t \ge 0$ and the last term is only of order $\frac{1}{T^2}$ instead; however, this typically requires a large shift. Observe $\gamma_t \le \frac{1}{L} \Leftrightarrow a + t \ge 2\kappa$, i.e. setting $a = 2\kappa$ is enough. We would like to stress that in general it is not advisable to set $a \gg 2\kappa$, as the first two terms in (9) depend on $a$. In practice, it often suffices to set the shift much smaller, as we will discuss in Section 4.

Remark 2.6.

As discussed in Remark 2.5 above, setting $a = 2\kappa$ and $\gamma_t = \frac{2}{\mu(a+t)}$ is feasible. With this choice, equation (9) simplifies to

    $\mathbb{E}\, f(\bar{x}_T) - f^\star = O\!\left(\frac{G^2}{\mu T}\right) + O\!\left(\frac{\kappa^3 G^2}{\mu T^3}\right) + O\!\left(\frac{\kappa\, G^2}{\mu\, \delta^2 T^2}\right)$,   (10)

for $\delta = \frac{k}{d}$. To estimate the second term in (9) we used the property $\|x_0 - x^\star\|^2 \le \frac{4G^2}{\mu^2}$ for $\mu$-strongly convex $f$, as derived in [27, Lemma 2]. We observe that for $T = \Omega\big(\max\big\{\frac{\kappa}{\delta^2}, \kappa^{3/2}\big\}\big)$ the first term is dominating, and Algorithm 1 converges at rate $O\big(\frac{G^2}{\mu T}\big)$, the same rate as vanilla SGD [16].

3 Proof Outline

We now give an outline of the proof. The proofs of the lemmas are given in Appendix A.2.

Perturbed iterate analysis.

Inspired by the perturbed iterate framework of [21] and [17] we first define a virtual sequence $\{\tilde{x}_t\}_{t \ge 0}$ in the following way:

    $\tilde{x}_0 := x_0, \qquad \tilde{x}_{t+1} := \tilde{x}_t - \gamma_t g_t(x_t)$,   (11)

where the sequences $\{x_t\}$ and $\{g_t(x_t)\}$ are the same as in (8). Notice that

    $\tilde{x}_t = x_t - m_t, \qquad \forall t \ge 0$,   (12)

i.e. the memory measures exactly the deviation of the iterate $x_t$ from the virtual iterate $\tilde{x}_t$.

Lemma 3.1.

Let $\{x_t\}$ and $\{\tilde{x}_t\}$ be defined as in (8) and (11), and let $f_i$ be $L$-smooth and $f$ be $\mu$-strongly convex with $\mathbb{E}\,\|g_t(x_t)\|^2 \le G^2$. Then

    $\mathbb{E}\,\|\tilde{x}_{t+1} - x^\star\|^2 \le \Big(1 - \frac{\mu \gamma_t}{2}\Big)\,\mathbb{E}\,\|\tilde{x}_t - x^\star\|^2 - \gamma_t e_t + \gamma_t^2 G^2 + \gamma_t (\mu + 2L)\,\mathbb{E}\,\|x_t - \tilde{x}_t\|^2$,   (13)

where $e_t := \mathbb{E}\, f(x_t) - f^\star$.

Bounding the memory.

From equation (13) it becomes clear that we should derive an upper bound on $\mathbb{E}\,\|x_t - \tilde{x}_t\|^2 = \mathbb{E}\,\|m_t\|^2$. For this we will use the contraction property (4) of the compression operators.

Lemma 3.2.

Let $m_t$ be defined as in (8) for a $k$-contraction operator with $\delta = \frac{k}{d}$, and stepsizes $\gamma_t = \frac{2}{\mu(a+t)}$, $a > 1$, as in Theorem 2.4. Then

    $\mathbb{E}\,\|m_t\|^2 = O\!\left(\frac{\gamma_t^2 G^2}{\delta^2}\right)$.   (14)
Optimal averaging.

Similarly to the discussion in [16, 33, 27] we have to define a suitable averaging scheme for the iterates to get the optimal convergence rate. In contrast to [16], who use linearly increasing weights, we use quadratically increasing weights, as for instance in [33, 35].
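The quadratically weighted average can be maintained online without storing past iterates (a small sketch of ours; we use illustrative weights $w_t = (t+1)^2$, matching the quadratic growth discussed above):

```python
import numpy as np

def weighted_average(iterates):
    """Online weighted mean with quadratically increasing weights w_t = (t+1)^2."""
    avg, wsum = None, 0.0
    for t, x in enumerate(iterates):
        w = (t + 1) ** 2
        wsum += w
        # standard online update: avg <- avg + (w / wsum) * (x - avg)
        avg = x.copy() if avg is None else avg + (w / wsum) * (x - avg)
    return avg

xs = [np.array([0.0]), np.array([3.0])]       # weights 1 and 4
print(weighted_average(xs))                   # (1*0 + 4*3) / 5 = [2.4]
```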

Lemma 3.3.

Let $\{a_t\}_{t \ge 0}$, $a_t \ge 0$, and $\{e_t\}_{t \ge 0}$, $e_t \ge 0$, be sequences satisfying

    $a_{t+1} \le \Big(1 - \frac{\mu \gamma_t}{2}\Big)\, a_t - \gamma_t A\, e_t + \gamma_t^2 B + \gamma_t^3 C$,   (15)

for stepsizes $\gamma_t = \frac{2}{\mu(a+t)}$ and constants $A > 0$, $B, C \ge 0$. Then

    $\frac{A}{S_T} \sum_{t=0}^{T} w_t e_t = O\!\left(\frac{\mu\, a^3 a_0}{T^3} + \frac{B}{\mu T} + \frac{C}{\mu^2 T^2}\right)$,   (16)

for weights $w_t = (a+t)^2$ and $S_T = \sum_{t=0}^{T} w_t \ge \frac{T^3}{3}$.

Proof of Theorem 2.4.

The proof of the theorem immediately follows from the three lemmas presented in this section and convexity of $f$: combining Lemma 3.1 with Lemma 3.2 yields a recurrence of the form (15) with $a_t = \mathbb{E}\,\|\tilde{x}_t - x^\star\|^2$, $A = 1$, and constants $B = G^2$ and $C = O\big(\frac{L G^2}{\delta^2}\big)$, and by convexity we have $\mathbb{E}\, f(\bar{x}_T) - f^\star \le \frac{1}{S_T} \sum_{t=0}^{T} w_t e_t$ in (16).

4 Experiments

We present numerical experiments to illustrate the excellent convergence properties and communication efficiency of Mem-SGD. As the usefulness of SGD with sparsification techniques has already been shown in practical applications [8, 1, 20, 36, 32], we focus here on a few particular aspects. First, we verify the impact of the initial learning rate that came up in the statement of Theorem 2.4. We then compare our method with QSGD [3], which decreases the communication cost of SGD by using random quantization operators, but without memory. Finally, we show the performance of the parallel SGD depicted in Algorithm 2 in a multi-core setting with shared memory and compare the speed-up to asynchronous Hogwild! [24].

4.1 Experimental Setup


The experiments focus on the performance of Mem-SGD applied to logistic regression. The associated objective function is $f(x) = \frac{1}{n} \sum_{i=1}^{n} \log\big(1 + \exp(-b_i\, a_i^\top x)\big) + \frac{\lambda}{2}\|x\|^2$, where $a_i \in \mathbb{R}^d$ and $b_i \in \{-1, +1\}$ are the data samples, and we employ a standard $L_2$-regularizer. The regularization parameter is set to $\lambda = \frac{1}{n}$ for both datasets, following [31].


We consider a dense dataset, epsilon [34], as well as a sparse dataset, RCV1 [19] where we train on the larger test set. Statistics on the datasets are listed in Table 1 below:

dataset     datapoints n    features d    density
epsilon     400’000         2’000         100%
RCV1-test   677’399         47’236        0.15%
Table 1: Dataset statistics.

parameter   value
epsilon     2
RCV1-test   2
Table 2: Learning rate parameters.


We use Python3 and the numpy library [14]. Our code is open-source and publicly available at: We emphasize that our high-level implementation is not optimized for speed per iteration but for readability and simplicity. We only report convergence per iteration and relative speedups, but not wall-clock time, because unequal efforts have been made to speed up the different implementations. Plots additionally show a baseline computed with the standard optimizer LogisticSGD of scikit-learn [25]. Experiments were run on an Ubuntu 16.04 machine with a 24-core Intel® Xeon® CPU E5-2680 v3 @ 2.50GHz.

4.2 Verifying the Theory

We study the convergence of the method using the stepsizes $\gamma_t$ and hyperparameters set as in Table 2. We compute the final estimate as a weighted average of all iterates, with weights as indicated by Theorem 2.4. The results are depicted in Figure 2. We use different sparsity levels $k$ for epsilon and for RCV1, accounting for the large difference in the number of features. The top-$k$ variant consistently outperforms $\mathrm{rand}_k$ and sometimes even outperforms vanilla SGD, which is surprising and might be explained by feature characteristics of the datasets. We also evaluate the impact of the delay in the learning rate: setting it to a small constant instead of the order suggested by the theory dramatically inflates the memory, and the method requires time to recover from the high initial learning rate (labeled "without delay" in Figure 2).

Figure 2: Convergence of Mem-SGD using different sparsification operators compared to full SGD with theoretical learning rates (parameters in Table 2).

We experimentally verified the convergence properties of Mem-SGD for different sparsification operators and stepsizes, but we want to further evaluate its fundamental benefits in terms of sparsity enforcement and reduction of the communication bottleneck. The gain in communication cost of SGD with memory is very high for dense datasets: using the top-$k$ strategy on the epsilon dataset drastically reduces the amount of communication compared to SGD. For the sparse dataset, SGD can readily exploit the given sparsity of the gradients. Nevertheless, the improvement on RCV1 is still of approximately an order of magnitude.

4.3 Comparison with QSGD

Now we compare Mem-SGD with the QSGD compression scheme [3], which reduces communication cost by random quantization. The accuracy (and the compression ratio) of QSGD is controlled by a parameter $s$, corresponding to the number of quantization levels. Ideally, we would like to set the quantization precision in QSGD such that the numbers of bits transmitted by QSGD and Mem-SGD are identical, and compare their convergence properties. However, even for the lowest precision, QSGD needs to send the sign and index of the non-zero coordinates. It is therefore not possible to reach the compression level of sparsification operators such as top-$k$ or random-$k$, which only transmit a constant number of coordinates per iteration (up to logarithmic factors).⁵ Hence, we did not enforce this condition and resorted to picking reasonable levels of quantization in QSGD (4 and 8 bits); note that "$b$-bits" stands for the number of bits used to encode $s = 2^b$ levels, and the actual number of bits transmitted in QSGD can be reduced further using Elias coding. For a fair comparison in practice, we chose a standard learning rate [6] and tuned the hyperparameters on a subset of each dataset (see Appendix B). Figure 3 shows that Mem-SGD on epsilon and RCV1 converges as fast as QSGD in terms of iterations for 8- and 4-bit quantization. As shown in the bottom of Figure 3, we are transmitting two orders of magnitude fewer bits with the sparsifier, suggesting that sparsification offers a much more aggressive and better-performing strategy than quantization.

⁵Encoding the indices of the top-$k$ or random-$k$ elements can be done with $O(k \log d)$ additional bits, which is negligible for both our examples.

Figure 3: Mem-SGD and QSGD convergence comparison. Top row: convergence in number of iterations. Bottom row: cumulated size of the communicated gradients during training. We display 10 points per epoch and remove the point at 0MB for clarity.

4.4 Multicore experiment

We implement a parallelized version of Mem-SGD, as depicted in Algorithm 2. The enforced sparsity allows us to perform the update in shared memory using a lock-free mechanism as in [24]. For this experiment we evaluate the final iterate instead of the weighted average used above, and we also investigate constant learning rates, which turn out to work well in practice for epsilon. We use a constant learning rate for epsilon and reuse the parameters from Table 2 for RCV1.

Figure 4: Multicore CPU time speed up comparison between Mem-SGD and lock-free SGD.

Figure 4 shows the speed-up obtained when increasing the number of cores. We see that the speed-up is almost linear up to 10 cores. This is especially remarkable for the dense epsilon dataset, as the dense gradient updates would usually imply many update conflicts for classic SGD and Hogwild! [24] (i.e. hurting convergence because of gradients computed on stale iterates) or require the use of locking (i.e. hurting speed). We did not use atomic updates of the parameter in the shared memory, allowing some workers to overwrite the progress of others which might contribute to the slowdown for a higher number of workers. The experiment is run on a single machine, hence no inter-node communication is used. The colored area depicts the best and worst results of 3 independent runs for each dataset.

Our method consistently scales better than vanilla lock-free parallel SGD (this scheme can be seen as a naïve implementation of Hogwild!) on a non-sparse problem (i.e. logistic regression on the dense dataset). In this asynchronous setup, SGD with memory computes gradients on stale iterates that differ only in a few coordinates. It encounters fewer inconsistent read/write operations than lock-free asynchronous SGD and exhibits better scaling properties. The $\mathrm{top}_k$ operator performs better than $\mathrm{rand}_k$ in the sequential setup, but this is not the case in the parallel setup. A reason for this could be that, due to the deterministic nature of the $\mathrm{top}_k$ operator, the cores are more prone to update the same set of coordinates, so that more collisions appear.

5 Conclusion

We provide the first concise convergence analysis of sparsified SGD [36, 32, 8, 1]. This extremely communication-efficient variant of SGD enforces sparsity of the applied updates by only updating a constant number of coordinates in every iteration. This way, the method overcomes the communication bottleneck of SGD, while still enjoying the same convergence rate in terms of stochastic gradient computations.

Our experiments verify the drastic reduction in communication cost by demonstrating that Mem-SGD requires one to two orders of magnitude fewer bits to be communicated than QSGD [3] while converging to the same accuracy. The experiments show an advantage of top-$k$ sparsification over random sparsification in the serial setting, but not in the multi-core shared memory implementation. There, both schemes are on par, and show much better scaling than a simple shared memory implementation that just writes the dense updates in a lock-free asynchronous fashion (like Hogwild! [24]).

Despite the fact that our analysis does not yet cover the parallel and distributed setting, we feel that those are the domains where sparsified SGD might have the largest impact. It has already been shown in practice that gradient sparsification can be efficiently applied to bandwidth- and memory-limited systems such as multi-GPU training for neural networks [8, 1, 20, 36, 32]. By enforcing sparsity, the scheme is not only communication efficient, it also becomes better suited for asynchronous implementations, as limiting sparsity assumptions can be revoked.


We would like to thank Dan Alistarh for insightful discussions in the early stages of this project and Frederik Künstner for his useful comments on the various drafts of this manuscript.


  • [1] Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 440–445. Association for Computational Linguistics, 2017.
  • [2] Dan Alistarh, Christopher De Sa, and Nikola Konstantinov. The convergence of stochastic gradient descent in asynchronous shared memory. arXiv, March 2018.
  • [3] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, NIPS - Advances in Neural Information Processing Systems 30, pages 1709–1720. Curran Associates, Inc., 2017.
  • [4] Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cedric Renggli. The convergence of sparsified gradient methods. In NIPS 2018, to appear, 2018.
  • [5] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Yves Lechevallier and Gilbert Saporta, editors, Proceedings of COMPSTAT’2010, pages 177–186, Heidelberg, 2010. Physica-Verlag HD.
  • [6] Leon Bottou. Stochastic Gradient Descent Tricks, volume 7700, page 430–445. Springer, January 2012.
  • [7] Chia-Yu Chen, Jungwook Choi, Daniel Brand, Ankur Agrawal, Wei Zhang, and Kailash Gopalakrishnan. AdaComp: Adaptive residual gradient compression for data-parallel distributed training. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018. AAAI Press, 2018.
  • [8] Nikoli Dryden, Sam Ade Jacobs, Tim Moon, and Brian Van Essen. Communication quantization for data-parallel training of deep neural networks. In Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, MLHPC ’16, pages 1–8, Piscataway, NJ, USA, 2016. IEEE Press.
  • [9] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159, August 2011.
  • [10] Celestine Dünner, Thomas Parnell, and Martin Jaggi. Efficient use of limited-memory accelerators for linear learning on heterogeneous systems. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, NIPS - Advances in Neural Information Processing Systems 30, pages 4258–4267. Curran Associates, Inc., 2017.
  • [11] Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ImageNet in 1 hour. CoRR, abs/1706.02677, 2017.
  • [12] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning, ICML'15, pages 1737–1746, 2015.
  • [13] Cho-Jui Hsieh, Hsiang-Fu Yu, and Inderjit Dhillon. Passcode: Parallel asynchronous stochastic dual co-ordinate descent. In International Conference on Machine Learning, pages 2370–2379, 2015.
  • [14] Eric Jones, Travis Oliphant, Pearu Peterson, et al. SciPy: Open source scientific tools for Python, 2001–.
  • [15] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [16] Simon Lacoste-Julien, Mark W. Schmidt, and Francis R. Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. CoRR, abs/1212.2002, 2012.
  • [17] Rémi Leblond, Fabian Pedregosa, and Simon Lacoste-Julien. ASAGA: Asynchronous parallel SAGA. In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 46–54, Fort Lauderdale, FL, USA, 20–22 Apr 2017. PMLR.
  • [18] Rémi Leblond, Fabian Pedregosa, and Simon Lacoste-Julien. Improved asynchronous parallel optimization analysis for stochastic incremental methods, January 2018.
  • [19] David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
  • [20] Yujun Lin, Song Han, Huizi Mao, Yu Wang, and Bill Dally. Deep gradient compression: Reducing the communication bandwidth for distributed training. In ICLR 2018 - International Conference on Learning Representations, 2018.
  • [21] Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran, and Michael I. Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. SIAM Journal on Optimization, 27(4):2202–2229, 2017.
  • [22] Eric Moulines and Francis R. Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, NIPS - Advances in Neural Information Processing Systems 24, pages 451–459. Curran Associates, Inc., 2011.
  • [23] Taesik Na, Jong Hwan Ko, Jaeha Kung, and Saibal Mukhopadhyay. On-chip training of recurrent neural networks with limited numerical precision. 2017 International Joint Conference on Neural Networks (IJCNN), pages 3716–3723, 2017.
  • [24] Feng Niu, Benjamin Recht, Christopher Re, and Stephen J. Wright. HOGWILD!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS - Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS’11, pages 693–701, USA, 2011. Curran Associates Inc.
  • [25] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825–2830, 2011.
  • [26] Boris T. Polyak and Anatoli B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
  • [27] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning, ICML’12, pages 1571–1578, USA, 2012. Omnipress.
  • [28] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, September 1951.
  • [29] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
  • [30] Christopher De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré. Taming the wild: A unified analysis of HOGWILD!-style algorithms. In NIPS - Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, pages 2674–2682, Cambridge, MA, USA, 2015. MIT Press.
  • [31] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Math. Program., 162(1-2):83–112, March 2017.
  • [32] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Haizhou Li, Helen M. Meng, Bin Ma, Engsiong Chng, and Lei Xie, editors, INTERSPEECH, pages 1058–1062. ISCA, 2014.
  • [33] Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 71–79, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR.
  • [34] Sören Sonnenburg, Vojtěch Franc, Elad Yom-Tov, and Michèle Sebag. Pascal large scale learning challenge. 10:1937–1953, January 2008.
  • [35] Sebastian U. Stich. Local SGD Converges Fast and Communicates Little. CoRR, abs/1805.09767, May 2018.
  • [36] Nikko Strom. Scalable distributed DNN training using commodity GPU cloud computing. In INTERSPEECH, pages 1488–1492. ISCA, 2015.
  • [37] Xu Sun, Xuancheng Ren, Shuming Ma, and Houfeng Wang. meProp: Sparsified back propagation for accelerated deep learning with reduced overfitting. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3299–3308, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • [38] Hanlin Tang, Shaoduo Gan, Ce Zhang, and Ji Liu. Communication compression for decentralized training. In NIPS 2018, and arXiv:1803.06443, 2018.
  • [39] Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. Gradient sparsification for communication-efficient distributed optimization. In NIPS 2018, and arXiv:1710.09854, 2018.
  • [40] Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Terngrad: Ternary gradients to reduce communication in distributed deep learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, NIPS - Advances in Neural Information Processing Systems 30, pages 1509–1519. Curran Associates, Inc., 2017.
  • [41] Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang. Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization. In ICML 2018 - Proceedings of the 35th International Conference on Machine Learning, pages 5321–5329, July 2018.
  • [42] Yang You, Igor Gitman, and Boris Ginsburg. Scaling SGD batch size to 32k for ImageNet training. CoRR, abs/1708.03888, 2017.
  • [43] Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 4035–4043, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • [44] Yuchen Zhang, Martin J Wainwright, and John C Duchi. Communication-efficient algorithms for statistical optimization. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, NIPS - Advances in Neural Information Processing Systems 25, pages 1502–1510. Curran Associates, Inc., 2012.
  • [45] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1–9, Lille, France, 07–09 Jul 2015. PMLR.

Appendix A Proofs

A.1 Useful facts

Lemma A.1.

For $x \in \mathbb{R}^d$, $k \in [d]$, and operator $\mathrm{comp}_k \in \{\mathrm{top}_k, \mathrm{rand}_k\}$ it holds

$$\mathbb{E}_\omega \left\| x - \mathrm{comp}_k(x) \right\|^2 \le \left(1 - \frac{k}{d}\right) \|x\|^2 \,.$$

From the definition of the operators, for all $x \in \mathbb{R}^d$ we have

$$\|x - \mathrm{top}_k(x)\|^2 \le \frac{d-k}{d}\,\|x\|^2 \,, \qquad \|x - \mathrm{rand}_k(x;\omega)\|^2 = \sum_{i \in [d] \setminus \omega} x_i^2 \,,$$

and we apply the expectation

$$\mathbb{E}_\omega \|x - \mathrm{rand}_k(x;\omega)\|^2 = \frac{d-k}{d}\,\|x\|^2 \,,$$

which concludes the proof. ∎
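The contraction property of Lemma A.1 is easy to check numerically. The sketch below is illustrative code (not from the paper's implementation; the helper names `top_k` and `rand_k` simply mirror the operators above): it verifies that the top-k residual always satisfies the bound $(1 - k/d)\|x\|^2$, and that the random-k residual matches it in expectation over the random index set $\omega$.

```python
import numpy as np

def top_k(x, k):
    """Keep the k entries of largest magnitude, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Keep k uniformly chosen entries (without replacement), zero out the rest."""
    out = np.zeros_like(x)
    idx = rng.choice(len(x), size=k, replace=False)
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
d, k, trials = 100, 10, 2000
x = rng.standard_normal(d)
bound = (1 - k / d) * np.dot(x, x)

# top_k is deterministic: its residual consists of the d-k smallest
# squared entries, so it satisfies the bound for every x.
res_top = np.sum((x - top_k(x, k)) ** 2)

# rand_k keeps each coordinate with probability k/d, so its expected
# residual equals the bound exactly; estimate it by Monte Carlo.
res_rand = np.mean([np.sum((x - rand_k(x, k, rng)) ** 2) for _ in range(trials)])

print(res_top, res_rand, bound)
```

For top-k the inequality holds deterministically (the average of the $d-k$ smallest squares is at most the overall average), whereas for random-k only the expectation is controlled: a single draw can exceed the bound.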

Lemma A.2.

Let , for . Then .




where the inequality follows from


A.2 Proof of the Main Theorem

Proof of Lemma 3.1.

Using the update equation (11) we have


and by applying the expectation


To upper bound the third term, we use the same estimates as in [17, Appendix C.3]: By strong convexity, for , hence


and with we further have


Putting these two estimates together, we can bound (23) as follows:


where . We now estimate the last term. As each $f_i$ is $L$-smooth, $f$ is also $L$-smooth, i.e. it satisfies . Together with we have


Combining with (26) we have


and the claim follows with (12). ∎

Proof of Lemma 3.2.

First, observe that by Lemma A.1 and for we have


On the other hand, from we also have


Now the claim follows from Lemma A.3 just below with . ∎
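For context, the scheme analyzed in Lemmas 3.1 and 3.2 — SGD where only a k-sparsified update is applied and the suppressed coordinates are accumulated in memory — can be sketched on a toy problem. This is an illustrative single-worker sketch; the noiseless least-squares objective, the constant step size, and the variable names (`mem`, `gamma`) are our assumptions, not the paper's experimental setup.

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(1)
n, d, k = 200, 10, 2                   # only k of d coordinates are sent per step
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                         # noiseless targets, so the optimum is x_star

x = np.zeros(d)                        # iterate
mem = np.zeros(d)                      # accumulated (untransmitted) error
gamma = 0.03                           # small constant step for this toy problem
for _ in range(5000):
    i = rng.integers(n)
    g = A[i] * (A[i] @ x - b[i])       # stochastic gradient of (1/2)(a_i^T x - b_i)^2
    v = top_k(mem + gamma * g, k)      # only these k coordinates are communicated
    x = x - v                          # apply the sparsified update
    mem = mem + gamma * g - v          # error compensation: remember the rest

print(np.linalg.norm(x - x_star))      # distance to the solution
```

Despite transmitting only 20% of the coordinates per step, the memory ensures that no gradient mass is ever discarded: every suppressed coordinate is eventually applied, which is the mechanism behind the vanilla-SGD rate proved above.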

Lemma A.3.

Let , , , be a sequence satisfying


for a sequence with , for , . Then


for .


The claim holds for .

Large .

Let , i.e. . (Note that for any it holds .) Suppose the claim holds for . Observe,


for . This follows from Lemma A.2 with . By induction,


where we used (and the observation just above) for the last inequality.

Small .

Assume , otherwise the claim follows from the part above. We have


where we used


for . For we have