1 Introduction
Recently, there has been great interest in stochastic optimization and learning algorithms that leverage parallelism, including e.g. delayed updates arising from pipelining and asynchronous concurrent processing, synchronous single-instruction-multiple-data parallelism, and parallelism across distant devices. With the abundance of parallelization settings and associated algorithms, it is important to precisely formulate the problem, which allows us to ask questions such as “is there a better method for this problem than what we have?” and “what is the best we could possibly expect?”
Oracle models have long been a useful framework for formalizing stochastic optimization and learning problems. In an oracle model, we place limits on the algorithm’s access to the optimization objective, but not what it may do with the information it receives. This allows us to obtain sharp lower bounds, which can be used to argue that an algorithm is optimal and to identify gaps between current algorithms and what might be possible. Finding such gaps can be very useful—for example, the gap between the first order optimization lower bound of Nemirovski et al. [21] and the best known algorithms at the time inspired Nesterov’s accelerated gradient descent algorithm [22].
We propose an oracle framework for formalizing different parallel optimization problems. We specify the structure of parallel computation using an “oracle graph” which indicates how an algorithm accesses the oracle. Each node in the graph corresponds to a single stochastic oracle query, and that query (e.g. the point at which a gradient is calculated) must be computed using only oracle accesses in ancestors of the node. We generally think of each stochastic oracle access as being based on a single data sample, thus involving one or maybe a small number of vector operations.
In Section 3 we devise generic lower bounds for parallel optimization problems in terms of simple properties of the associated oracle graph, namely the length of the longest dependency chain and the total number of nodes. In Section 4 we study specific parallel optimization settings in which many algorithms have been proposed, formulate them as graph-based oracle parallel optimization problems, instantiate our lower bounds, and compare them with the performance guarantees of specific algorithms. We highlight gaps between the lower bound and the best known upper bound, and also situations where we can devise an optimal algorithm that matches the lower bound, but where this is not the “natural” and typical algorithm used in these settings. The latter indicates either a gap in our understanding of the “natural” algorithm or a need to depart from it.
Previously suggested models
Previous work studied communication lower bounds for parallel convex optimization where multiple machines each hold a local function (e.g. a collection of samples from a distribution). Each machine can perform computation on its own function, and then periodically every machine is allowed to transmit information to the others. In order to prove meaningful lower bounds based on the number of rounds of communication, it is necessary to prevent the machines from simply transmitting their local function to a central machine, or else any objective could be optimized in one round. There are two established ways of doing this. First, one can allow arbitrary computation on the local machines, but restrict the number of bits that can be transmitted in each round. There is work focusing on specific statistical estimation problems that establishes communication lower bounds via information-theoretic arguments [29, 12, 7]. Alternatively, one can allow the machines to communicate real-valued vectors, but restrict the types of computation they are allowed to perform. For instance, Arjevani and Shamir [3] present communication complexity lower bounds for algorithms which can only compute vectors that lie in a certain subspace, which includes e.g. linear combinations of gradients of their local function. Lee et al. [16] assume a similar restriction, but allow the data defining the local functions to be allocated to the different machines in a strategic manner. Our framework applies to general stochastic optimization problems and does not impose any restrictions on what computation the algorithm may perform, and is thus a more direct generalization of the oracle model of optimization.
2 The graph-based oracle model
We consider the following stochastic optimization problem:
(1) $\min_{x \in \mathcal{X}} F(x) := \mathbb{E}_{z \sim \mathcal{D}}\left[f(x; z)\right]$
The problem (1) captures many important tasks, such as supervised learning, in which case $f(x; z)$ is the loss of a model parametrized by $x$ on the data instance $z$, and the goal is to minimize the population risk $F(x)$. We assume that $f(\cdot; z)$ is convex, $L$-Lipschitz, and $H$-smooth for all $z$. We also allow $f$ to be non-smooth, which corresponds to $H = \infty$. A function is $L$-Lipschitz when $|f(x) - f(y)| \le L\|x - y\|$ for all $x, y$, and it is $H$-smooth when it is differentiable and its gradient is $H$-Lipschitz. We consider optimization algorithms that use either a stochastic gradient or a stochastic prox oracle ($\mathcal{O}_{\nabla f}$ and $\mathcal{O}_{\mathrm{prox}_f}$ respectively):
(2) $\mathcal{O}_{\nabla f}(x, z) = \big(f(x; z),\ \nabla f(x; z)\big)$
(3) $\mathcal{O}_{\mathrm{prox}_f}(x, \beta, z) = \big(f(x; z),\ \nabla f(x; z),\ \mathrm{prox}_{f(\cdot; z)}(x, \beta)\big)$
(4) $\mathrm{prox}_{f}(x, \beta) = \arg\min_{u}\ f(u) + \frac{\beta}{2}\|u - x\|^2$
The prox oracle is quite powerful and provides global, rather than local, information about $f(\cdot; z)$. In particular, querying the prox oracle with $\beta = 0$ fully optimizes $f(\cdot; z)$.
As stated, $z$ is an argument to the oracle; however, there are two distinct cases. In the “fully stochastic” oracle setting, the algorithm receives an oracle answer corresponding to a freshly drawn random $z \sim \mathcal{D}$. We also consider a setting in which the algorithm is allowed to “actively query” the oracle. In this case, the algorithm may either sample a fresh $z \sim \mathcal{D}$ or choose a previously drawn $z$, and receive an oracle answer for that $z$. Our lower bounds hold for either type of oracle. Most optimization algorithms only use the fully stochastic oracle, but some require the more powerful active queries.
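To make the distinction concrete, here is a minimal sketch (our own illustration, not from the paper) of a stochastic gradient oracle in both modes. The stand-in loss $f(x; z) = \frac{1}{2}(\langle x, a\rangle - b)^2$ for $z = (a, b)$ and the helper names `grad_oracle` and `sample_z` are our assumptions:

```python
import random

def grad_oracle(x, z):
    # Stochastic gradient oracle O_grad: (x, z) -> (f(x; z), grad f(x; z)).
    # Illustrative stand-in loss (not from the paper):
    #   f(x; z) = 0.5 * (<x, a> - b)^2   for z = (a, b).
    a, b = z
    r = sum(xi * ai for xi, ai in zip(x, a)) - b
    return 0.5 * r * r, [r * ai for ai in a]

def sample_z(rng, d):
    # z ~ D: a synthetic linear-regression sample (a, b).
    a = [rng.gauss(0.0, 1.0) for _ in range(d)]
    b = sum(a) + 0.1 * rng.gauss(0.0, 1.0)
    return a, b

rng = random.Random(0)
d = 5
x = [0.0] * d

# Fully stochastic query: a fresh z is drawn for every oracle call.
f_val, g = grad_oracle(x, sample_z(rng, d))

# Active query: draw z once, then re-query the oracle at a different point
# on the *same* sample (as SVRG-style methods require).
z = sample_z(rng, d)
f0, g0 = grad_oracle(x, z)
f1, g1 = grad_oracle([xi - 0.1 * gi for xi, gi in zip(x, g0)], z)
```

The only difference between the two modes is whether the algorithm controls which $z$ the oracle is evaluated on.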
We capture the structure of a parallel optimization algorithm with a directed acyclic oracle graph $\mathcal{G}$. Its depth, $D$, is the length of the longest directed path in $\mathcal{G}$, and its size, $N$, is the total number of nodes. Each node in the graph represents a single stochastic oracle access, and the edges indicate where the results of that oracle access may be used: only the oracle accesses from ancestors of a node are available when issuing the query at that node. These limitations might arise e.g. due to parallel computation delays or the expense of communicating between disparate machines.
Let $\mathcal{Q}$ be the set of possible oracle queries, with the exact form of a query (e.g. $x$ vs. $(x, \beta)$) depending on the oracle. Formally, a randomized optimization algorithm that accesses the stochastic oracle $\mathcal{O}$ as prescribed by the graph $\mathcal{G}$ is specified by associating with each node $v$ a query rule $\mathcal{Q}_v$, plus a single output rule $\mathcal{X}_{\mathrm{out}}$. We grant all of the nodes access to a source of shared randomness (e.g. an infinite stream of random bits). The rule $\mathcal{Q}_v$ selects the query to make at node $v$ using the set of queries and oracle responses in ancestors of $v$, namely
(5) $q_v = \mathcal{Q}_v\big(\{(q_u, \mathcal{O}(q_u)) : u \in \mathrm{Ancestors}(v)\}\big)$
Similarly, the output rule maps from all of the queries and oracle responses to the algorithm’s output, $\hat{x} = \mathcal{X}_{\mathrm{out}}\big(\{(q_v, \mathcal{O}(q_v)) : v \in \mathcal{G}\}\big)$. The essential question is: for a class of optimization problems specified by a dependency graph $\mathcal{G}$, a stochastic oracle $\mathcal{O}$, and a function class $\mathcal{F}$, what is the best possible guarantee on the expected suboptimality of an algorithm’s output, i.e.
(6) $\min_{\text{algorithms}}\ \sup_{F \in \mathcal{F}}\ \mathbb{E}\big[F(\hat{x})\big] - \min_{x \in \mathcal{X}} F(x)$
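The formalism above can be made concrete with a small sketch (our own illustration): an algorithm runs over the oracle graph in topological order, and each node's query is formed only from the queries and answers at its ancestors. The function names `run_on_graph`, `query_rule`, and `output_rule` are our assumptions, and the toy oracle returns the exact gradient of a one-dimensional quadratic:

```python
# A minimal sketch of executing an algorithm under an oracle graph: each
# node v issues one query, formed only from the queries/answers at v's
# ancestors, mirroring eq. (5).

def run_on_graph(parents, query_rule, output_rule, oracle):
    # parents: dict node -> list of direct parents; nodes are 0..N-1,
    # numbered in topological order.  Ancestors accumulate transitively.
    ancestors = {}
    results = {}  # node -> (query, oracle answer)
    for v in sorted(parents):
        anc = set()
        for u in parents[v]:
            anc.add(u)
            anc |= ancestors[u]
        ancestors[v] = anc
        info = {u: results[u] for u in anc}  # only ancestor information
        q = query_rule(v, info)
        results[v] = (q, oracle(q))
    return output_rule(results)

# Toy instance: path graph 0 -> 1 -> 2; the oracle returns the gradient of
# f(x) = (x - 3)^2 at the query point, and the query rule takes a gradient
# step from the most recent ancestor's query.
oracle = lambda x: 2.0 * (x - 3.0)

def query_rule(v, info):
    if not info:
        return 0.0
    x, g = info[max(info)]   # most recent ancestor's (query, answer)
    return x - 0.25 * g      # gradient step

output_rule = lambda results: results[max(results)][0]

x_hat = run_on_graph({0: [], 1: [0], 2: [1]}, query_rule, output_rule, oracle)
```

On the path graph this reduces to plain sequential gradient descent; richer graphs only change which past information each query rule may see.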
In this paper, we consider optimization problems where $\mathcal{F}$ is the class of convex, $L$-Lipschitz, and $H$-smooth functions on a domain $\mathcal{X}$ of radius $B$, parametrized by $L$, $H$, and $B$. We consider this function class to contain Lipschitz but non-smooth functions too, which corresponds to $H = \infty$. Our function class does not bound the dimension of the problem, as we seek to understand the best possible guarantees in terms of the Lipschitz and smoothness constants that hold in any dimension. Indeed, there are (typically impractical) algorithms, such as center-of-mass methods, which might use the dimension in order to significantly reduce the oracle complexity, but at a potentially huge computational cost. Nemirovski [20] studied non-smooth optimization in the case that the dimension is bounded, proving lower bounds in this setting that scale with a power of the dimension but have only logarithmic dependence on the suboptimality. We do not analyze strongly convex functions, but the situation is similar and lower bounds can be established via reduction [28].
3 Lower bounds
We now provide lower bounds for the optimization problems $(\mathcal{G}, \mathcal{O}_{\nabla f}, \mathcal{F})$ and $(\mathcal{G}, \mathcal{O}_{\mathrm{prox}_f}, \mathcal{F})$ in terms of $L$, $H$, $B$, and the depth $D$ and size $N$ of $\mathcal{G}$.
Theorem 1.
Let $L, H, B > 0$, let $\mathcal{G}$ be any oracle graph of depth $D$ and size $N$, and consider the optimization problem $(\mathcal{G}, \mathcal{O}_{\nabla f}, \mathcal{F})$. For any randomized algorithm $A$, there exists a distribution $\mathcal{D}$ and a convex, $L$-Lipschitz, and $H$-smooth function $F$ on a bounded domain of radius $B$ in $\mathbb{R}^d$, for sufficiently large $d$, such that
$\mathbb{E}\big[F(\hat{x}_A)\big] - \min_{x \in \mathcal{X}} F(x) \ \ge\ c \cdot \left(\min\left\{\frac{H B^2}{D^2},\ \frac{L B}{\sqrt{D}}\right\} + \frac{L B}{\sqrt{N}}\right)$
for a universal constant $c > 0$.
Theorem 2.
Let $L, H, B > 0$, let $\mathcal{G}$ be any oracle graph of depth $D$ and size $N$, and consider the optimization problem $(\mathcal{G}, \mathcal{O}_{\mathrm{prox}_f}, \mathcal{F})$. For any randomized algorithm $A$, there exists a distribution $\mathcal{D}$ and a convex, $L$-Lipschitz, and $H$-smooth function $F$ on a bounded domain of radius $B$ in $\mathbb{R}^d$, for sufficiently large $d$, such that
$\mathbb{E}\big[F(\hat{x}_A)\big] - \min_{x \in \mathcal{X}} F(x) \ \ge\ c \cdot \left(\min\left\{\frac{H B^2}{D^2},\ \frac{L B}{D}\right\} + \frac{L B}{\sqrt{N}}\right)$
for a universal constant $c > 0$.
These are the tightest possible lower bounds in terms of just the depth and size of $\mathcal{G}$, in the sense that for every $D$ and $N$ there are graphs and associated algorithms which match the lower bound. Of course, for specific, mostly degenerate graphs they might not be tight. For instance, our lower bound for the graph consisting of a short sequential chain plus a very large number of disconnected nodes might be quite loose due to the artificial inflation of $N$. Nevertheless, for many interesting graphs they are tight, as we shall see in Section 4.
Each lower bound has two components: an “optimization” term and a “statistical” term. The statistical term is well known, although we include a brief proof of this portion of the bound in Appendix D for completeness. The optimization term depends on the depth $D$ and indicates, intuitively, the best suboptimality guarantee that can be achieved by an algorithm using unlimited parallelism but only $D$ rounds of communication. Arjevani and Shamir [3] also obtain lower bounds in terms of rounds of communication, which are similar to how our lower bounds depend on depth. However, they restricted the algorithm to a specific class of operations, while we only limit the number of oracle queries and the dependency structure between them, and allow the queries to be formed in any arbitrary way.
Similarly to Arjevani and Shamir [3], to establish the optimization term in the lower bounds, we construct functions that require multiple rounds of sequential oracle accesses to optimize. In the gradient oracle case, we use a single, deterministic function which resembles a standard construction for first order optimization lower bounds. For the prox case, we construct two functions inspired by previous lower bounds for round-based and finite-sum optimization [3, 28]. In order to account for randomized algorithms that might leave the span of gradients or prox computations returned by the oracle, we use a technique that was proposed by Woodworth and Srebro [27, 28] and refined by Carmon et al. [8]. For our specific setting, we must slightly modify the existing analysis, as detailed in Appendix A.
A useful feature of our lower bounds is that they apply when both the Lipschitz constant and the smoothness are bounded concurrently. Consequently, “non-smooth” in the subsequent discussion can be read as simply identifying the case where the Lipschitz term achieves the minimum in the bound, as opposed to the smoothness term (even if $H < \infty$). This is particularly important when studying stochastic parallel optimization, since obtaining non-trivial guarantees in a purely stochastic setting requires some sort of control on the magnitude of the gradients (smoothness by itself is not sufficient), while obtaining parallelization speedups often requires smoothness, and so we would like to ask what is the best that can be done when both the Lipschitz constant and the smoothness are controlled. Interestingly, the dependence on each of $L$ and $H$ in our bounds is tight even when the other is constrained, which shows that the optimization term cannot be substantially reduced by using both conditions together.
In the case of the gradient oracle, we “smooth out” a standard non-smooth lower bound construction [21, 27]; previous work has used a similar approach in slightly different settings [2, 13]. For appropriate parameters and orthonormal vectors drawn uniformly at random, we define a Lipschitz but non-smooth function, together with its Lipschitz, smooth “Moreau envelope” [5]:
(7) 
This defines a distribution over functions $f$ based on the randomness in the draw of the orthonormal vectors, and we apply Yao’s minimax principle. In Appendix B, we prove Theorem 1 using this construction.
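For reference, we record the general form of the Moreau envelope used in this smoothing (this restatement is our addition; it follows the standard definition, cf. [5], and is consistent with the prox definition in (4)):

```latex
% Moreau envelope of g with smoothing parameter beta (cf. [5]):
M_{\beta} g(x) \;=\; \min_{u} \Big( g(u) + \tfrac{\beta}{2}\,\lVert u - x \rVert^{2} \Big)
\;=\; g\big(\mathrm{prox}_{g}(x,\beta)\big) + \tfrac{\beta}{2}\,\lVert \mathrm{prox}_{g}(x,\beta) - x \rVert^{2}.
% M_beta g is beta-smooth; if g is L-Lipschitz then
% 0 <= g(x) - M_beta g(x) <= L^2 / (2*beta), and
\nabla M_{\beta} g(x) \;=\; \beta\,\big(x - \mathrm{prox}_{g}(x,\beta)\big).
```

The trade-off between the smoothness $\beta$ of the envelope and its approximation error $L^2/(2\beta)$ is exactly what the construction exploits.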
In the case of the prox oracle, we “straighten out” the smooth construction of Woodworth and Srebro [28]. For fixed constants, we define the following Lipschitz and smooth scalar function:
(8) 
For orthonormal vectors drawn uniformly at random, we define
(9)  
(10) 
Again, this defines a distribution over functions $f$ based on the randomness in the draw of the orthonormal vectors, and we apply Yao’s minimax principle. In Appendix C, we prove Theorem 2 using this construction.
Relation to previous bounds
As mentioned above, Duchi et al. [10] recently showed a lower bound for first- and zero-order stochastic optimization in the “simple parallelism” graph consisting of layers of parallel nodes (the layer graph of Section 4.2). Their bound [10, Thm 2] applies only in the regime where the dimension is held constant. Our lower bound requires non-constant dimension, but applies for any number and width of layers. Furthermore, their proof techniques do not obviously extend to prox oracles.
4 Specific dependency graphs
We now use our framework to study four specific parallelization structures. The main results (tight complexities and gaps between lower and upper bounds) are summarized in Table 1. For simplicity and without loss of generality, we set $B = 1$, i.e. we normalize the optimization domain to be the unit ball. All stated upper and lower bounds are for the expected suboptimality of the algorithm’s output.
Table 1 (columns: graph example; with gradient oracle; with gradient and prox oracle).
4.1 Sequential computation: the path graph
We begin with the simplest model, that of sequential computation, captured by the path graph of length $T$ depicted above. The ancestors of each vertex are all of the preceding vertices. The sequential model is of course well studied and understood. To see how it fits into our framework: a path graph of length $T$ has depth $D = T$ and size $N = T$, thus with either gradient or prox oracles the statistical term is dominant in Theorems 1 and 2. These lower bounds are matched by sequential stochastic gradient descent, yielding a tight complexity of $LB/\sqrt{T}$ (up to constant factors) and the familiar conclusion that SGD is (worst-case) optimal in this setting.
4.2 Simple parallelism: the layer graph
We now turn to a model in which oracle queries can be made in parallel, and the results are broadcast for use in making the next batch of queries. This corresponds to synchronized parallelism and fast communication between processors. The model is captured by a layer graph of width $M$, depicted above. The graph consists of $D$ layers, each with $M$ nodes, and the ancestors of each node are all of the nodes in all preceding layers. The graph has depth $D$ and size $N = MD$. With a stochastic gradient oracle, Theorem 1 yields a lower bound of:
(11) 
which is matched by accelerated minibatch SGD (AMB-SGD) [15, 9], establishing the optimality of AMB-SGD in this setting. For sufficiently smooth objectives, the same algorithm is also optimal even if prox access is allowed, since Theorem 2 implies a lower bound of:
(12) 
That is, for smooth objectives, having access to a prox oracle does not improve the optimal complexity over just using gradient access. However, for non-smooth or insufficiently smooth objectives, there is a gap between (11) and (12). An optimal algorithm, smoothed AMB-SGD, uses the prox oracle in order to calculate gradients of the Moreau envelope of the objective (cf. Proposition 12.29 of [5]), and then performs AMB-SGD on the smoothed objective. This yields a suboptimality guarantee that precisely matches (12), establishing that the lower bound from Theorem 2 is tight for the layer graph, and that smoothed AMB-SGD is optimal. An analysis of the smoothed AMB-SGD algorithm is provided in Appendix E.1.
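The smoothing step can be sketched concretely (our own illustration): given a prox oracle, the gradient of the Moreau envelope is recovered from a single prox call via $\nabla M_\beta f(x) = \beta\,(x - \mathrm{prox}_f(x, \beta))$ (cf. Proposition 12.29 of [5]). The toy non-smooth loss $f(u) = |u|$, whose prox is soft-thresholding, and the helper names are our assumptions:

```python
def prox_abs(x, beta):
    # Closed-form prox of the toy non-smooth loss f(u) = |u|:
    #   argmin_u |u| + (beta / 2) * (u - x)^2   (soft-thresholding).
    t = 1.0 / beta
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def moreau_grad(prox, x, beta):
    # Gradient of the beta-smooth Moreau envelope: one prox-oracle call.
    return beta * (x - prox(x, beta))

beta = 2.0
g_inside = moreau_grad(prox_abs, 0.25, beta)  # |x| <= 1/beta: grad = beta * x
g_outside = moreau_grad(prox_abs, 3.0, beta)  # |x| >  1/beta: grad = sign(x)
```

For $|x|$ this recovers the familiar Huber-style gradient: linear near the kink, equal to the subgradient $\mathrm{sign}(x)$ away from it. Any gradient method (here, AMB-SGD) can then be run on the $\beta$-smooth surrogate.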
4.3 Delayed updates
We now turn to a delayed computation model that is typical of many asynchronous parallelization and pipelined computation settings, e.g. when multiple processors or machines work asynchronously, reading iterates, taking some time to perform the oracle accesses and computation, and then communicating the results back (or updating the iterate accordingly) [6, 19, 1, 17, 25]. This is captured by a “delay graph” with $N$ nodes, where it takes $\tau_t$ steps for the response to the oracle query performed at node $t$ to become available. Hence, the ancestors of node $t$ are the nodes $\{t' : t' + \tau_{t'} \le t\}$. Analysis is typically based on the delays being bounded, i.e. $\tau_t \le \tau$ for all $t$. The depiction above corresponds to a constant delay; the case $\tau = 1$ corresponds to the path graph. With constant delays $\tau_t = \tau$, the delay graph has depth $D = \lfloor N/\tau \rfloor$ and size $N$, so Theorem 1 gives the following lower bound when using a gradient oracle:
(13) 
Delayed SGD, with updates $x_{t+1} = x_t - \eta \nabla f(x_{t-\tau_t}; z_t)$, is a natural algorithm in this setting. Under the bounded delay assumption, the best guarantee we are aware of for delayed update SGD is (see [11], improving over [1])
(14) 
This result is significantly worse than the lower bound (13), and quite disappointing. It does not provide for an accelerated optimization rate, and even worse, compared to non-accelerated SGD it suffers a slowdown quadratic in the delay, rather than the linear slowdown we would expect. In particular, the guarantee (14) attains the optimal statistical rate only for a much smaller maximum delay than the lower bound (13) suggests should be tolerable.
This raises the question of whether a different algorithm can match the lower bound (13). The answer is affirmative, but it requires using an “unnatural” algorithm, which simulates a minibatch approach in what seems an unnecessarily wasteful way. We refer to this as a “wait-and-collect” approach: it works in stages, each stage consisting of $2\tau$ iterations (i.e. nodes, or oracle accesses). In each stage, the first $\tau$ iterations are used to obtain $\tau$ stochastic gradient estimates at the same point. For the remaining $\tau$ iterations, we wait for all the preceding oracle computations to become available and do not even use our allowed oracle access. We can then finally update the iterate using the minibatch of $\tau$ gradient estimates. This approach is also specified formally as Algorithm 2 in Appendix E.2. Using it, we can perform AMB-SGD updates with a minibatch size of $\tau$, yielding a suboptimality guarantee that precisely matches the lower bound (13).
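The staged schedule can be sketched as follows (our own simplified illustration of the wait-and-collect idea, with a plain minibatch SGD update standing in for the accelerated AMB-SGD update; the function names and the toy objective are our assumptions):

```python
import random

def wait_and_collect_sgd(grad, sample, n_nodes, tau, lr, x0, rng):
    # Each stage uses 2*tau oracle slots: the first tau query at the
    # current point, the last tau sit idle -- by the end of the stage all
    # tau answers have arrived (delay = tau), so one minibatch step is made.
    x = x0
    stage_len = 2 * tau
    for _ in range(n_nodes // stage_len):
        batch = [grad(x, sample(rng)) for _ in range(tau)]  # query slots
        # ... tau idle slots: wait for the delayed answers ...
        x = x - lr * sum(batch) / tau                       # minibatch update
    return x

# Toy problem: F(x) = E[(x - z)^2] / 2 with z ~ N(1, 0.1), minimized at 1.
grad = lambda x, z: x - z
sample = lambda rng: rng.gauss(1.0, 0.1)
rng = random.Random(0)
x_hat = wait_and_collect_sgd(grad, sample, n_nodes=400, tau=4, lr=0.5,
                             x0=0.0, rng=rng)
```

Half of the oracle slots are deliberately wasted, yet the schedule performs one clean minibatch-of-$\tau$ update per $2\tau$ nodes, which is exactly what the matching upper bound needs.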
Thus (13) indeed represents the tight complexity of the delay graph with a stochastic gradient oracle, and the wait-and-collect approach is optimal. However, this answer is somewhat disappointing and leaves an intriguing open question: can a more natural, and seemingly more efficient (no wasted oracle accesses), delayed update SGD algorithm also match the lower bound? An answer to this question has two parts: first, does delayed update SGD truly suffer the quadratic slowdown indicated by (14), or does it achieve linear degradation and a speculative guarantee of
(15) 
Second, can delayed update SGD be accelerated to achieve the optimal rate (13)? We note that, concurrently with our work, there has been progress toward closing this gap: Arjevani et al. [4] showed an improved bound matching the non-accelerated (15) for delayed updates (with a fixed delay) on quadratic objectives. It still remains to generalize the result to smooth non-quadratic objectives, handle non-constant bounded delays, and accelerate the procedure so as to improve the rate all the way to (13).
4.4 Intermittent communication
We now turn to a parallel computation model which is especially relevant when parallelizing across disparate machines: in each of $T$ rounds, there are $M$ machines that, instead of making just a single oracle access, each perform $K$ sequential oracle accesses before broadcasting to all other machines synchronously. This communication pattern is relevant in the realistic scenario where local computation is plentiful relative to communication costs (i.e. $K$ is large). This may be the case with fast processors distributed across different machines, or in the setting of federated learning, where mobile devices collaborate to train a shared model while keeping their respective training datasets local [18].
This is captured by a graph consisting of $M$ parallel chains of length $KT$, with cross connections between the chains every $K$ nodes. Indexing the nodes as $v^m_{t,k}$ for machine $m \in [M]$, round $t \in [T]$, and local step $k \in [K]$, the nodes $v^m_{1,1}, \dots, v^m_{T,K}$ form a chain, and $v^m_{t,K}$ is connected to $v^{m'}_{t+1,1}$ for all $m, m'$. This graph generalizes the layer graph by allowing $K$ sequential oracle queries between each complete synchronization; $K = 1$ recovers the layer graph. We refer to the computation between each synchronization step as a (communication) round.
The depth of this graph is $D = KT$ and the size is $N = MKT$. Focusing on the stochastic gradient oracle (the situation is similar for the prox oracle, except for the possibility of smoothing a non-smooth objective, as discussed in Section 4.2), Theorem 1 yields the lower bound:
(16) 
A natural algorithm for this graph is parallel SGD, where we run an SGD chain on each machine and average iterates during communication rounds, e.g. [18]. The updates are then given by:
(17) $x^m_{t,k} = x^m_{t,k-1} - \eta \nabla f(x^m_{t,k-1}; z^m_{t,k})$, with $x^m_{t,0} = \frac{1}{M}\sum_{m'=1}^{M} x^{m'}_{t-1,K}$
(note that $x^m_{t,0}$ does not correspond to any node in the graph, and is included for convenience of presentation). Unfortunately, we are not aware of any satisfying analysis of such a parallel SGD approach. Instead, we consider two other algorithms in an attempt to match the lower bound (16). First, we can combine all oracle accesses between communication rounds in order to form a single minibatch, giving up on the possibility of sequential computation along the “local” node subpaths. Using all $MK$ nodes in a round to obtain stochastic gradient estimates at the same point, we can perform $T$ iterations of AMB-SGD with a minibatch size of $MK$, yielding an upper bound of
(18) 
This is a reasonable and common approach, and it is optimal (up to constant factors) in the regime where the statistical term is limiting. However, comparing (18) to the lower bound (16), we see a gap of a factor of $K^2$ in the optimization term, indicating the possibility of significant gains when $K$ is large (i.e. when we can process a large number of examples on each machine in each round). Improving the optimization term by this factor would allow statistical optimality in a much wider regime, which is a very significant difference. In many scenarios we would expect a modest number of machines, but the amount of data on each machine could easily be much more than the number of communication rounds, especially if communication is across a wide area network.
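For concreteness, the parallel (local) SGD scheme of (17), with averaging at every synchronization, can be sketched as follows (our own illustration; the machines are simulated sequentially, and the toy objective and function names are our assumptions):

```python
import random

def local_sgd(grad, sample, M, K, T, lr, x0, rng):
    # Each of M machines runs K local SGD steps per round; all machines
    # average their final iterates at each of the T synchronizations.
    x_avg = x0
    for _ in range(T):                       # T communication rounds
        finals = []
        for _ in range(M):                   # M machines (simulated in turn)
            x = x_avg                        # start from the averaged iterate
            for _ in range(K):               # K local oracle queries
                x = x - lr * grad(x, sample(rng))
            finals.append(x)
        x_avg = sum(finals) / M              # synchronization / averaging
    return x_avg

# Toy problem: F(x) = E[(x - z)^2] / 2 with z ~ N(1, 0.2), minimizer x* = 1.
grad = lambda x, z: x - z
sample = lambda rng: rng.gauss(1.0, 0.2)
rng = random.Random(0)
x_hat = local_sgd(grad, sample, M=4, K=10, T=20, lr=0.1, x0=0.0, rng=rng)
```

Setting $K = 1$ recovers plain minibatch SGD on the layer graph; the open question in the text is precisely how much the $K$ sequential local steps between averagings help in the worst case.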
In fact, when $K$ is large, a different approach is preferable: we can ignore all but a single chain and simply execute $KT$ iterations of sequential SGD, giving an upper bound of
(19) 
Although this approach seems extremely wasteful, it actually yields a better guarantee than (18) when $K$ is sufficiently large relative to $M$ and $T$. This is a realistic regime, e.g. in federated learning, where computation is distributed across devices, communication is limited and sporadic so that only a relatively small number of rounds are possible, but each device already possesses a large amount of data. Furthermore, for non-smooth functions, (19) matches the lower bound (16).
Our upper bound on the complexity is therefore obtained by selecting either AMB-SGD or single-machine sequential SGD, yielding the combined upper bound
(20) 
For smooth functions, there is still a significant gap between this upper bound and the lower bound (16). Furthermore, this upper bound is not achieved by a single algorithm, but rather by a combination of two separate algorithms covering two different regimes. This raises the question of whether there is a single, natural algorithm, perhaps an accelerated variant of the parallel SGD updates (17), that at the very least matches (20), and preferably also improves over it in the intermediate regime or even matches the lower bound (16).
Active querying and SVRG
All methods discussed so far used fully stochastic oracles, requesting a gradient (or prox computation) with respect to an independently and randomly drawn $z \sim \mathcal{D}$. We now turn to methods that also make active queries, i.e. draw samples $z$ from $\mathcal{D}$ and then repeatedly query the oracle at different points, but on the same samples. Recall that all of our lower bounds are valid also in this setting.
With an active query gradient oracle, we can implement SVRG [14, 16] on an intermittent communication graph. More specifically, for appropriate choices of the sample size and the regularization parameter, we apply SVRG to a regularized empirical objective over a fixed sample. To do so, we first draw the sample (without actually querying the oracle). As indicated by Algorithm 1, we then alternate between computing full gradients on the sample in parallel, and sequential variance-reduced stochastic gradient updates in between. The full gradient is computed using active queries to the gradient oracle; since all of these oracle accesses are made at the same point, they can be fully parallelized across the $M$ parallel chains. The sequential variance-reduced stochastic gradient updates cannot be parallelized in this way, and must be performed using queries to the gradient oracle in just one of the available parallel chains. Consequently, each outer iteration of SVRG requires a bounded number of communication rounds. Using the analysis of Johnson and Zhang [14], SVRG with an appropriate stepsize converges linearly to the minimizer of the empirical objective; the suboptimality on the empirical objective also generalizes to the population (see [23]). With our choice of parameters, this implies the following upper bound (see Appendix E.3):
(21)
These guarantees improve over sequential SGD (19) in a very wide regime: we require only a moderate number of machines, and the required smoothness condition will typically hold for a smooth loss. Intuitively, SVRG performs roughly the same number (up to a factor of two) of sequential updates as the sequential SGD approach, but it uses better, variance-reduced updates. The price we pay is a smaller total sample size, since we keep calling the oracle on the same samples. Nevertheless, since SVRG only needs to calculate the “batch” gradient a logarithmic number of times, this incurs only an additional logarithmic factor.
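The active-query pattern at the heart of this method can be sketched as follows (our own simplified, single-chain illustration of SVRG [14]: the sample is drawn once, and the oracle is re-queried on previously drawn samples; the toy empirical objective and function names are our assumptions):

```python
import random

def svrg(grad, zs, epochs, inner, lr, x0, rng):
    # SVRG with active queries: the full ("batch") gradient on the fixed
    # sample zs is recomputed at each anchor point; the inner loop forms
    # variance-reduced gradients by re-querying past samples.
    n = len(zs)
    x_anchor = x0
    for _ in range(epochs):
        # n active queries, one per sample (parallelizable across chains).
        full = sum(grad(x_anchor, z) for z in zs) / n
        x = x_anchor
        for _ in range(inner):
            z = zs[rng.randrange(n)]                    # re-query a past sample
            g = grad(x, z) - grad(x_anchor, z) + full   # variance reduction
            x = x - lr * g
        x_anchor = x
    return x_anchor

# Toy empirical objective: (1/n) * sum_i (x - z_i)^2 / 2, minimized at mean(zs).
grad = lambda x, z: x - z
rng = random.Random(0)
zs = [rng.gauss(1.0, 0.5) for _ in range(50)]
x_hat = svrg(grad, zs, epochs=5, inner=100, lr=0.1, x0=0.0, rng=rng)
target = sum(zs) / len(zs)
```

Note that the inner updates are impossible with a fully stochastic oracle: they require evaluating the gradient at a new point on a sample that was drawn earlier, which is exactly the active-query capability.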
Comparing (18) and (21), we see that SVRG also improves over AMB-SGD as soon as the number of points processed on each machine in each round is slightly more than the total number of rounds, which is also a realistic scenario.
To summarize, the best known upper bound for optimizing with intermittent communication using a purely stochastic oracle is (20), which combines two different algorithms. However, with active oracle accesses, SVRG also becomes possible and the upper bound improves to:
(22) 
5 Summary
Our main contributions in this paper are: (1) presenting a precise formal oracle framework for studying parallel stochastic optimization; (2) establishing tight oracle lower bounds in this framework that can then be easily applied to particular instances of parallel optimization; and (3) using the framework to study specific settings, obtaining optimality guarantees, understanding where additional assumptions would be needed to break barriers, and, perhaps most importantly, identifying gaps in our understanding that highlight possibilities for algorithmic improvement. Specifically,

For nonsmooth objectives and a stochastic prox oracle, smoothing and acceleration can improve performance in the layer graph setting. It is not clear if there is a more direct algorithm with the same optimal performance, e.g. averaging the answers from the prox oracle.

In the delay graph setting, delayed update SGD’s guarantee is not optimal. We suggest an alternative optimal algorithm, but it would be interesting and beneficial to understand the true behavior of delayed update SGD and to improve it as necessary to attain optimality.

With intermittent communication, we show how different methods are better in different regimes, but even combining these methods does not match our lower bound. This raises the question of whether our lower bound is achievable. Are current methods optimal? Is the true optimal complexity somewhere in between? Even finding a single method that matches the current best performance in all regimes would be a significant advance here.

With intermittent communication, active queries allow us to obtain better performance in a certain regime. Can we match this performance using pure stochastic queries or is there a real gap between active and pure stochastic queries?
The investigation into optimizing over our function class in this framework indicates that there is no advantage to the prox oracle for optimizing (sufficiently) smooth functions. This raises the question of what additional assumptions might allow us to leverage the prox oracle, which is intuitively much stronger, as it allows global access to $f(\cdot; z)$. One option is to assume a bound on the variance of the stochastic oracle, i.e. $\mathbb{E}_z \|\nabla f(x; z) - \nabla F(x)\|^2 \le \sigma^2$, which captures the notion that the functions $f(\cdot; z)$ are somehow related and not arbitrarily different. In particular, if each stochastic oracle access, at each node, is based on a sample of $b$ data points (thus, a prox operation optimizes a subproblem of size $b$), we have that $\sigma^2 = O(L^2/b)$. Initial investigation into the complexity of optimizing over the restricted class where we also require the above variance bound reveals a significant theoretical advantage for the prox oracle over the gradient oracle, even for smooth functions. This is an example of how formalizing the optimization problem gives insight into additional assumptions, in this case low variance, that are necessary for realizing the benefits of a stronger oracle.
Acknowledgements
We would like to thank Ohad Shamir for helpful discussions. This work was partially funded by NSF-BSF award 1718970 (“Convex and Non-Convex Distributed Learning”) and a Google Research Award. BW is supported by the NSF Graduate Research Fellowship under award 1754881. AS was supported by NSF awards IIS-1447700 and AF-1763786, as well as a Sloan Foundation research award.
References
 Agarwal and Duchi [2011] Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. In Advances in Neural Information Processing Systems, pages 873–881, 2011.
 Agarwal and Hazan [2017] Naman Agarwal and Elad Hazan. Lower bounds for higher-order convex optimization. arXiv preprint arXiv:1710.10329, 2017.
 Arjevani and Shamir [2015] Yossi Arjevani and Ohad Shamir. Communication complexity of distributed convex learning and optimization. In Advances in Neural Information Processing Systems, pages 1756–1764, 2015.
 Arjevani et al. [2018] Yossi Arjevani, Ohad Shamir, and Nathan Srebro. A tight convergence analysis for stochastic gradient descent with delayed updates. 2018.
 Bauschke and Combettes [2017] Heinz H Bauschke and Patrick L Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2017.
 Bertsekas [1989] Dimitri P Bertsekas. Parallel and Distributed Computation: Numerical Methods, volume 23. Prentice Hall, Englewood Cliffs, NJ, 1989.
 Braverman et al. [2016] Mark Braverman, Ankit Garg, Tengyu Ma, Huy L Nguyen, and David P Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, pages 1011–1020. ACM, 2016.
 Carmon et al. [2017] Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points I. arXiv preprint arXiv:1710.11606, 2017.
 Cotter et al. [2011] Andrew Cotter, Ohad Shamir, Nati Srebro, and Karthik Sridharan. Better minibatch algorithms via accelerated gradient methods. In Advances in neural information processing systems, pages 1647–1655, 2011.
 Duchi et al. [2018] John Duchi, Feng Ruan, and Chulhee Yun. Minimax bounds on stochastic batched convex optimization. In Proceedings of the 31st Conference On Learning Theory, pages 3065–3162, 2018.
 Feyzmahdavian et al. [2016] Hamid Reza Feyzmahdavian, Arda Aytekin, and Mikael Johansson. An asynchronous mini-batch algorithm for regularized stochastic optimization. IEEE Transactions on Automatic Control, 61(12):3740–3754, 2016.
 Garg et al. [2014] Ankit Garg, Tengyu Ma, and Huy Nguyen. On communication cost of distributed statistical estimation and dimensionality. In Advances in Neural Information Processing Systems, pages 2726–2734, 2014.
 Guzmán and Nemirovski [2015] Cristóbal Guzmán and Arkadi Nemirovski. On lower complexity bounds for large-scale smooth convex optimization. Journal of Complexity, 31(1):1–14, 2015.
 Johnson and Zhang [2013] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, 2013.
 Lan [2012] Guanghui Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365–397, 2012.

 Lee et al. [2017] Jason D Lee, Qihang Lin, Tengyu Ma, and Tianbao Yang. Distributed stochastic variance reduced gradient methods by sampling extra data with replacement. The Journal of Machine Learning Research, 18(1):4404–4446, 2017.
 McMahan and Streeter [2014] Brendan McMahan and Matthew Streeter. Delay-tolerant algorithms for asynchronous distributed online learning. In Advances in Neural Information Processing Systems, pages 2915–2923, 2014.
 McMahan et al. [2017] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, 2017.
 Nedić et al. [2001] A Nedić, Dimitri P Bertsekas, and Vivek S Borkar. Distributed asynchronous incremental subgradient methods. Studies in Computational Mathematics, 8(C):381–407, 2001.
 Nemirovski [1994] Arkadi Nemirovski. On parallel complexity of nonsmooth convex optimization. Journal of Complexity, 10(4):451–463, 1994.
 Nemirovski et al. [1983] Arkadii Nemirovski, David Borisovich Yudin, and Edgar Ronald Dawson. Problem complexity and method efficiency in optimization. 1983.
 Nesterov [1983] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). 1983.
 Shalev-Shwartz and Srebro [2008] Shai Shalev-Shwartz and Nathan Srebro. SVM optimization: inverse dependence on training set size. In International Conference on Machine Learning, pages 928–935, 2008.

 Slud [1977] Eric V Slud. Distribution inequalities for the binomial law. The Annals of Probability, 5(3):404–412, 1977.
 Sra et al. [2016] Suvrit Sra, Adams Wei Yu, Mu Li, and Alex Smola. AdaDelay: Delay adaptive distributed stochastic optimization. In Artificial Intelligence and Statistics, pages 957–965, 2016.
 Wang et al. [2017] Jialei Wang, Weiran Wang, and Nathan Srebro. Memory and communication efficient distributed stochastic optimization with minibatch prox. In Conference on Learning Theory, 2017.
 Woodworth and Srebro [2017] Blake Woodworth and Nathan Srebro. Lower bound for randomized first order convex optimization. arXiv preprint arXiv:1709.03594, 2017.
 Woodworth and Srebro [2016] Blake Woodworth and Nati Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems, pages 3639–3647, 2016.
 Zhang et al. [2013] Yuchen Zhang, John Duchi, Michael I Jordan, and Martin J Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems, pages 2328–2336, 2013.
Appendix A Main lower bound lemma
This analysis closely follows previous work, specifically the proof of Theorem 1 in [27] and the proof of Lemma 4 in [8]. Since the problem setup here differs slightly from that of previous papers, we include the following analysis for completeness and so that all of our results can be verified. We do not claim any significant technical novelty in this section.
Let be a uniformly random orthonormal set of vectors in . All of the probabilities referred to in Appendix A will be over the randomness in the selection of . Let be a set of vectors in where for all . Let these vectors be organized into disjoint subsets . Furthermore, suppose that for each , the set is a deterministic function , so it can also be expressed as .
Let , let be the projection operator onto the span of and let be the projection onto the orthogonal complement of the span of . As in [27, 8], define
(23) 
Finally, suppose that for each , is of the form:
(24) 
i.e. conditioned on the event , it is a deterministic function of only (and not ). We say that , so is always independent of .
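As an aside, the basic objects in this setup—a random orthonormal set together with the projections onto its span and onto the orthogonal complement—can be sketched numerically. The following snippet is purely illustrative (the dimension `d` and the number of vectors `k` are arbitrary choices, not quantities from the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 5  # ambient dimension and number of orthonormal vectors (arbitrary)

# A random orthonormal set: orthonormalize a Gaussian matrix via QR.
# (Up to column signs, this gives a uniformly random orthonormal set.)
V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # columns are orthonormal

# Projection onto the span of the columns, and onto its orthogonal complement.
P = V @ V.T
P_perp = np.eye(d) - P

x = rng.standard_normal(d)
# The two components are orthogonal and sum back to x.
assert np.allclose(P @ x + P_perp @ x, x)
assert abs((P @ x) @ (P_perp @ x)) < 1e-8
```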
First, we connect the events to a more immediately useful condition:
The proof of Lemma 1 involves straightforward linear algebra, and we defer it to Appendix A.1. By Lemma 1, , therefore the property (24) is implied by
(25) 
Now, we state the main result which allows us to prove our lower bounds:
Lemma 2.
Lemma 3.
Proof of Lemma 2.
This closely follows the proofs of Lemma 4 in [8] and Lemma 4 in [27], with small modifications to account for the different setting.
Set . Then by Lemma 1, since satisfy the property (25)
(26) 
Focus on a single term in this product,
(27) 
For any particular ,
(28)  
(29)  
(30) 
Conditioned on and , the set is fixed, as is the set and therefore , so the first term in the inner product is a fixed unit vector. By Lemma 3, the conditional density of is spherically symmetric within the span onto which projects. Therefore, is distributed uniformly on the unit sphere in , which has dimension at least .
The probability that a fixed vector and a uniformly random vector on the unit sphere have inner product more than is proportional to the surface area of the “end caps” of the sphere lying above and below circles of radius , which is strictly smaller than the surface area of a full sphere of radius . Therefore, for a given
(31)  
(32)  
(33) 
where we used that . Finally, this holds for each , , and , so
(34)  
(35)  
(36)  
(37) 
where we used that for (36). For (37), recall that we chose so . ∎
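The spherical cap estimate used in the proof—that a uniformly random unit vector in high dimension is unlikely to have a large inner product with any fixed direction—can be sanity-checked by simulation. This sketch is not part of the argument; the dimension, threshold, and the sub-Gaussian-style bound 2·exp(−d·c²/2) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c, trials = 200, 0.4, 20000  # arbitrary dimension, threshold, sample count

u = np.zeros(d)
u[0] = 1.0  # a fixed unit vector

# Uniform vectors on the unit sphere: normalized standard Gaussians.
g = rng.standard_normal((trials, d))
v = g / np.linalg.norm(g, axis=1, keepdims=True)

# Empirical probability of a large inner product vs. a standard cap bound.
emp = np.mean(np.abs(v @ u) > c)
bound = 2 * np.exp(-d * c**2 / 2)
assert emp <= bound  # large inner products essentially never occur
```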
A.1 Proof of Lemmas 1 and 3
Lemma 1 (restated).
Proof.
This closely follows the proof of Lemma 9 in [8], with slight modifications to account for the different problem setup.
For assume . For any and
(38)  
(39)  
(40)  
(41) 
First, we decomposed into its and components and applied the triangle inequality. Next, we used that and that the orthogonal projection operator is self-adjoint. Finally, we used that the projection operator is non-expansive and the definition of .
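The two standard facts about orthogonal projections invoked in this chain—self-adjointness (⟨Px, y⟩ = ⟨x, Py⟩) and non-expansiveness (‖Px‖ ≤ ‖x‖)—are easy to verify numerically. A small illustrative sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 30, 4  # arbitrary ambient dimension and subspace dimension

V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal basis of a subspace
P = V @ V.T                                       # orthogonal projection onto span(V)

x, y = rng.standard_normal(d), rng.standard_normal(d)

# Self-adjointness: <Px, y> = <x, Py>.
assert np.isclose((P @ x) @ y, x @ (P @ y))
# Non-expansiveness: ||Px|| <= ||x||.
assert np.linalg.norm(P @ x) <= np.linalg.norm(x)
# Idempotence: projecting twice changes nothing.
assert np.allclose(P @ (P @ x), P @ x)
```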
Next, we prove by induction on that for all and , the event implies that . As a base case (), observe that, trivially, . For the inductive step, fix any and