We consider the problem of distributed convex learning and optimization, where $m$ machines, each with access to a different local convex function $F_i$ and a convex domain $\mathcal{W} \subseteq \mathbb{R}^d$, attempt to solve the optimization problem
$$\min_{w \in \mathcal{W}} F(w) := \frac{1}{m}\sum_{i=1}^{m} F_i(w). \tag{1}$$
A prominent application is empirical risk minimization, where the goal is to minimize the average loss over some dataset, with each machine having access to a different subset of the data. Letting $\{z_1,\ldots,z_N\}$ be the dataset, composed of
$N$ examples, and assuming the loss function $\ell(w,z)$ is convex in $w$, the empirical risk minimization problem can be written as in Eq. (1), where $F_i$ is the average loss over machine $i$'s examples.
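To make the decomposition in Eq. (1) concrete, the following minimal sketch (in Python, with synthetic data and the squared loss as purely illustrative assumptions) shows how the global objective is simply the average of per-machine average losses:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 4, 100, 5                  # machines, examples per machine, dimension
X = rng.normal(size=(m, n, d))       # machine i holds the data (X[i], y[i])
y = rng.normal(size=(m, n))

def local_objective(w, i):
    """F_i(w): average squared loss over machine i's examples."""
    residuals = X[i] @ w - y[i]
    return 0.5 * np.mean(residuals ** 2)

def global_objective(w):
    """F(w) = (1/m) * sum_i F_i(w), the objective in Eq. (1)."""
    return np.mean([local_objective(w, i) for i in range(m)])

print(global_objective(np.zeros(d)))
```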
The main challenge in solving such problems is that communication between the different machines is usually slow and constrained, at least compared to the speed of local processing. On the other hand, the datasets involved in distributed learning are usually large and high-dimensional. Therefore, machines cannot simply communicate their entire data to each other, and the question is how well we can solve problems such as Eq. (1) using as little communication as possible.
As datasets continue to increase in size, and parallel computing platforms become more and more common (from multiple cores on a single CPU to large-scale and geographically distributed computing grids), distributed learning and optimization methods have been the focus of much research in recent years, with just a few examples including [25, 4, 2, 27, 1, 5, 13, 23, 16, 17, 8, 7, 9, 11, 20, 19, 3, 26]. Most of this work studies algorithms for the problem, which provide upper bounds on the required time and communication complexity.
In this paper, we take the opposite direction, and study the fundamental performance limitations in solving Eq. (1), under several different sets of assumptions. We identify cases where existing algorithms are already optimal (at least in the worst case), as well as cases where there is still room for further improvement.
Since a major constraint in distributed learning is communication, we focus on studying the amount of communication required to optimize Eq. (1) up to some desired accuracy $\epsilon$. More precisely, we consider the number of communication rounds that are required, where in each communication round the machines can generally broadcast to each other information linear in the problem's dimension $d$ (e.g. a point in
$\mathbb{R}^d$ or a gradient). This applies to virtually all algorithms for large-scale learning we are aware of, where sending vectors and gradients is feasible, but computing and sending larger objects, such as Hessians ($d \times d$ matrices), is not.
Our results pertain to several possible settings (see Sec. 2 for precise definitions). First, we distinguish between the local functions being merely convex or strongly convex, and whether they are smooth or not. These distinctions are standard in studying optimization algorithms for learning, and capture important properties such as the regularization and the type of loss function used. Second, we distinguish between a setting where the local functions are related – e.g., because they reflect statistical similarities in the data residing at different machines – and a setting where no relationship is assumed. For example, in the extreme case where the data was split uniformly at random between machines, one can show that quantities such as the values, gradients and Hessians of the local functions differ only by $\mathcal{O}(1/\sqrt{n})$, where $n$ is the sample size per machine, due to concentration of measure effects. Such similarities can be used to speed up the optimization/learning process, as was done in e.g. [20, 26]. Both the $\delta$-related and the unrelated setting can be considered in a unified way, by letting $\delta$ be a similarity parameter and studying the attainable lower bounds as a function of $\delta$. Our results can be summarized as follows:
First, we define a mild structural assumption on the algorithm (which is satisfied by the reasonable approaches we are aware of), which allows us to provide the lower bounds described below on the number of communication rounds required to reach a given suboptimality $\epsilon$.
When the local functions can be unrelated, we prove a lower bound of $\Omega\big(\sqrt{1/\lambda}\,\log(1/\epsilon)\big)$ for smooth and $\lambda$-strongly convex functions, and $\Omega\big(\sqrt{1/\epsilon}\big)$ for smooth convex functions. These lower bounds are matched by a straightforward distributed implementation of accelerated gradient descent. In particular, the results imply that many communication rounds may be required to get a high-accuracy solution, and moreover, that no algorithm satisfying our structural assumption can do better, even if we endow the local machines with unbounded computational power. For non-smooth functions, we show a lower bound of $\Omega\big(\sqrt{1/(\lambda\epsilon)}\big)$ for $\lambda$-strongly convex functions, and $\Omega(1/\epsilon)$ for general convex functions. Although we leave a full derivation to future work, it seems these lower bounds can be matched in our framework by an algorithm combining acceleration and Moreau proximal smoothing of the local functions.
When the local functions are related (as quantified by the parameter $\delta$), we prove a communication round lower bound of $\Omega\big(\sqrt{\delta/\lambda}\,\log(1/\epsilon)\big)$ for smooth and $\lambda$-strongly convex functions. For quadratics, this bound is matched (up to constants and logarithmic factors) by the recently-proposed DISCO algorithm [26]. However, getting an optimal algorithm for general strongly convex and smooth functions in the $\delta$-related setting, let alone for non-smooth or non-strongly convex functions, remains open.
We also study the attainable performance without posing any structural assumptions on the algorithm, but in the more restricted case where only a single round of communication is allowed. We prove that in a broad regime, the performance of any distributed algorithm may be no better than that of a 'trivial' algorithm which returns the minimizer of one of the local functions, as long as the number of bits communicated is less than $\mathcal{O}(d^2)$. Therefore, in our setting, no communication-efficient one-round distributed algorithm can provide non-trivial performance in the worst case.
There have been several previous works which considered lower bounds in the context of distributed learning and optimization, but to the best of our knowledge, none of them provide a similar type of results. Perhaps the most closely-related paper is [22], which studied the communication complexity of distributed optimization, and showed that $\Omega(d\log(1/\epsilon))$ bits of communication are necessary between the machines for $d$-dimensional convex problems. However, in our setting this does not lead to any non-trivial lower bound on the number of communication rounds (indeed, just specifying a $d$-dimensional vector up to accuracy $\epsilon$ requires $\mathcal{O}(d\log(1/\epsilon))$ bits). More recently, [2] considered lower bounds for certain types of distributed learning problems, but not convex ones in an agnostic, distribution-free framework. In the context of lower bounds for one-round algorithms, the results of [6] imply that
$\Omega(d^2)$ bits of communication are required to solve linear regression in one round of communication. However, that paper assumes a different model than ours, where the function to be optimized is not split among the machines as in Eq. (1), where each $F_i$ is convex. Moreover, issues such as strong convexity and smoothness are not considered. [20] proves an impossibility result for a one-round distributed learning scheme, even when the local functions are not merely related, but actually result from splitting data uniformly at random between machines. On the flip side, that result is for a particular algorithm, and does not apply to arbitrary methods.
Finally, we emphasize that distributed learning and optimization can be studied under many settings, including ones different from those studied here. For example, one can consider distributed learning on a stream of i.i.d. data [19, 7, 10, 8], or settings where the computing architecture is different, e.g. where the machines have shared memory, or where the function to be optimized is not split as in Eq. (1). Studying lower bounds in such settings is an interesting topic for future work.
2 Notation and Framework
The only vector and matrix norms used in this paper are the Euclidean norm and the spectral norm, respectively. $e_i$ denotes the $i$-th standard unit vector. We let $\nabla f(w)$ and $\nabla^2 f(w)$ denote the gradient and Hessian of a function $f$ at $w$, if they exist. $f$ is smooth (with parameter $L$) if it is differentiable and its gradient is $L$-Lipschitz. In particular, if $f$ is twice-differentiable, then $\nabla^2 f(w) \preceq L I$ for all $w$. $f$ is strongly convex (with parameter $\lambda$) if $f(w') \ge f(w) + \langle g, w'-w\rangle + \frac{\lambda}{2}\|w'-w\|^2$ for any $w, w'$, where $g$ is a subgradient of $f$ at $w$. In particular, if $f$ is twice-differentiable, then $\nabla^2 f(w) \succeq \lambda I$ for all $w$. Any convex function is also strongly convex with $\lambda = 0$. A special case of smooth convex functions are quadratics, where $f(w) = \frac{1}{2}w^\top A w + b^\top w + c$ for some positive semidefinite matrix $A$, vector $b$ and scalar $c$. In this case, $\lambda$ and $L$
correspond to the smallest and largest eigenvalues of $A$, respectively.
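As a quick sanity check of the last point (a minimal sketch; the random quadratic below is purely illustrative), the strong convexity and smoothness parameters of a quadratic $f(w) = \frac{1}{2}w^\top A w + b^\top w + c$ can be read off the extreme eigenvalues of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
M = rng.normal(size=(d, d))
A = M @ M.T + 0.1 * np.eye(d)        # positive definite Hessian of the quadratic

eigs = np.linalg.eigvalsh(A)
strong_convexity = eigs[0]           # lambda: smallest eigenvalue of A
smoothness = eigs[-1]                # L: largest eigenvalue of A (its spectral norm)
print(strong_convexity, smoothness)
```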
We model the distributed learning algorithm as an iterative process, where in each round the machines may perform some local computations, followed by a communication round where each machine broadcasts a message to all other machines. We make no assumptions on the computational complexity of the local computations. After all communication rounds are completed, a designated machine provides the algorithm’s output (possibly after additional local computation).
Clearly, without any assumptions on the number of bits communicated, the problem can be trivially solved in one round of communication (e.g. each machine communicates its function to the designated machine, which then solves Eq. (1)). However, in practical large-scale scenarios this is infeasible, and the size of each message (measured by the number of bits) is typically on the order of $\tilde{\mathcal{O}}(d)$ – enough to send a $d$-dimensional real-valued vector, such as a point in the optimization domain or a gradient, but not larger objects such as Hessians. (Here, the $\tilde{\mathcal{O}}$ notation hides constants and factors logarithmic in the required accuracy of the solution; the idea is that we can represent real numbers up to some arbitrarily high machine precision, enough so that finite-precision issues are not a problem.)
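For a rough sense of scale (a back-of-the-envelope sketch, assuming 64-bit floating-point entries), the gap between sending a vector and sending a Hessian is already dramatic at moderate dimensions:

```python
d = 10_000
bits_per_float = 64
vector_bits = bits_per_float * d        # a point or gradient: O(d) bits per message
hessian_bits = bits_per_float * d * d   # a full Hessian: O(d^2) bits, typically infeasible
print(f"gradient message: {vector_bits:,} bits; Hessian message: {hessian_bits:,} bits")
```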
In this model, our main question is the following: How many rounds of communication are necessary in order to solve problems such as Eq. (1) to some given accuracy $\epsilon$?
As discussed in the introduction, we first need to distinguish between different assumptions on the possible relation between the local functions. One natural situation is when no significant relationship can be assumed, for instance when the data is arbitrarily split or is gathered by each machine from statistically dissimilar sources. We denote this as the unrelated setting. However, this assumption is often unnecessarily pessimistic: frequently, the data allocation process is more random, or we can assume that the different data sources for each machine have statistical similarities (to give a simple example, consider learning from users' activity across a geographically distributed computing grid, where each machine services its own local population). We will capture such similarities, in the context of quadratic functions, using the following definition:
We say that a set of quadratic functions $F_1,\ldots,F_m$ over $\mathbb{R}^d$, where $F_i(w) = \frac{1}{2}w^\top A_i w + b_i^\top w + c_i$,
are $\delta$-related, if for any $i,j \in \{1,\ldots,m\}$ it holds that
$$\|A_i - A_j\| \le \delta,\qquad \|b_i - b_j\| \le \delta,\qquad |c_i - c_j| \le \delta.$$
For example, in the context of linear regression with the squared loss over a bounded subset of $\mathbb{R}^d$, and assuming data points with bounded norm are randomly and equally split among the machines, it can be shown that the conditions above hold with $\delta = \tilde{\mathcal{O}}(1/\sqrt{n})$, where $n$ is the number of examples per machine. The choice of $\delta$ provides us with a spectrum of learning problems ranked by difficulty: When $\delta$ is a constant, this generally corresponds to the unrelated setting discussed earlier. When $\delta = \tilde{\mathcal{O}}(1/\sqrt{n})$, we get the situation typical of randomly partitioned data. When $\delta = 0$, all the local functions have essentially the same minimizers, in which case Eq. (1) can be trivially solved with zero communication, just by letting one machine optimize its own local function. We note that although Definition 1 can be generalized to non-quadratic functions, we do not need such a generalization for the results presented here.
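The following minimal experiment (an illustration under assumed synthetic Gaussian data, not the formal argument) shows the effect behind the $\delta = \tilde{\mathcal{O}}(1/\sqrt{n})$ claim: for randomly split least-squares data, the spectral gaps between the local Hessians $\frac{1}{n}X_i^\top X_i$ shrink as the per-machine sample size $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 20, 4
for n in (100, 1000, 10000):                      # examples per machine
    X = rng.normal(size=(m, n, d)) / np.sqrt(d)   # data points with bounded norm on average
    hessians = [(Xi.T @ Xi) / n for Xi in X]      # Hessian of F_i(w) = (1/2n)||X_i w - y_i||^2
    delta = max(np.linalg.norm(Hi - Hj, 2)        # largest spectral gap between local Hessians
                for i, Hi in enumerate(hessians)
                for Hj in hessians[i + 1:])
    print(f"n = {n:6d}: max ||H_i - H_j|| ~ {delta:.3f}")
```

The printed gaps shrink roughly by a factor of $\sqrt{10}$ with each tenfold increase in $n$, consistent with the $1/\sqrt{n}$ scaling.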
We end this section with an important remark. In this paper, we prove lower bounds for the $\delta$-related setting, which includes as a special case the commonly-studied setting of randomly partitioned data (in which case $\delta = \tilde{\mathcal{O}}(1/\sqrt{n})$). However, our bounds do not apply to random partitioning, since they use $\delta$-related constructions which do not correspond to randomly partitioned data. In fact, very recent work [12] has cleverly shown that for randomly partitioned data, and for certain reasonable regimes of strong convexity and smoothness, it is actually possible to get better performance than what is indicated by our lower bounds. However, this encouraging result crucially relies on the random partition property, and on parameter regimes which limit how much each data point needs to be 'touched', hence preserving key statistical independence properties. We suspect that it may be difficult to improve on our lower bounds under substantially weaker assumptions.
3 Lower Bounds Using a Structural Assumption
In this section, we present lower bounds on the number of communication rounds, where we impose a certain mild structural assumption on the operations performed by the algorithm. Roughly speaking, our lower bounds pertain to a very large class of algorithms, which are based on linear operations involving points, gradients, and vector products with local Hessians and their inverses, as well as solving local optimization problems involving such quantities. At each communication round, the machines can share any of the vectors they have computed so far. Formally, we consider algorithms which satisfy the assumption stated below. For convenience, we state it for smooth functions (which are differentiable) and discuss the case of non-smooth functions in Sec. 3.2.
For each machine $j$, define a set $W_j$, initially $W_j = \{0\}$. Between communication rounds, each machine $j$ iteratively computes and adds to $W_j$ some finite number of points $w$, each satisfying
$$\gamma w + \nu \nabla F_j(w) \;\in\; \mathrm{span}\Big\{\, w',\; \nabla F_j(w'),\; \big(\nabla^2 F_j(w') + D\big)w'',\; \big(\nabla^2 F_j(w') + D\big)^{-1}w'' \;:\; w', w'' \in W_j,\ D \text{ diagonal (whenever the relevant matrices are invertible)}\Big\}$$
for some $\gamma, \nu$ such that $\gamma\nu \ge 0$ and $(\gamma,\nu) \ne (0,0)$. After every communication round, let $W_j := \bigcup_{k=1}^{m} W_k$ for all $j$. The algorithm's final output (provided by the designated machine $j$) is a point in the span of $W_j$.
This assumption requires several remarks:
Note that $W_j$ is not an explicit part of the algorithm: it simply includes all points computed by machine $j$ so far, or communicated to it by other machines, and is used to define the set of new points which the machine is allowed to compute.
The assumption bears some resemblance to – but is far weaker than – standard assumptions used to provide lower bounds for iterative optimization algorithms. For example, a common assumption (see [14]) is that each computed point must lie in the span of the previous gradients. This corresponds to a special case of Assumption 1, where $\gamma = 1$, $\nu = 0$, and the span is only over gradients of previously computed points. Moreover, the assumption also allows (for instance) exact optimization of each local function, which is a subroutine in some distributed algorithms (e.g. [27, 25]), by setting $\gamma = 0$, $\nu = 1$ and computing a point satisfying $\nabla F_j(w) = 0$. By allowing the span to include previous gradients, we also incorporate algorithms which perform optimization of the local function plus terms involving previous gradients and points, such as [20], as well as algorithms which rely on local Hessian information and preconditioning, such as [26]. In summary, the assumption is satisfied by most techniques for black-box convex optimization that we are aware of. Finally, we emphasize that we do not restrict the number or computational complexity of the operations performed between communication rounds.
The requirement that $\gamma\nu \ge 0$ is to exclude algorithms which solve non-convex local optimization problems of the form $\min_w F_j(w) - \frac{1}{2c}\|w - u\|^2$ with $c > 0$, which are unreasonable in practice and can sometimes break our lower bounds.
The assumption that $W_j$ is initially $\{0\}$ (namely, that the algorithm starts from the origin) is purely for convenience, and our results can be easily adapted to any other starting point by shifting all functions accordingly.
The techniques we employ in this section are inspired by lower bounds on the iteration complexity of first-order methods for standard (non-distributed) optimization (see for example [14]). These are based on the construction of 'hard' functions, where each gradient (or subgradient) computation can only provide a small improvement in the objective value. In our setting, the dynamics are roughly similar, but the need for many gradient computations is replaced by the need for many communication rounds. This is achieved by constructing suitable local functions, where at any time point no individual machine can 'progress' on its own, without information from the other machines.
3.1 Smooth Local Functions
We begin by presenting a lower bound when the local functions are strongly-convex and smooth:
For any even number $m$ of machines, any distributed algorithm which satisfies Assumption 1, and for any $\lambda \in (0,1)$ and $\delta \ge 0$, there exist local quadratic functions over $\mathbb{R}^d$ (where $d$ is sufficiently large) which are $1$-smooth, $\lambda$-strongly convex, and $\delta$-related, such that if $w^* = \arg\min_w F(w)$, then the number of communication rounds required to obtain $\hat{w}$ satisfying $F(\hat{w}) - F(w^*) \le \epsilon$ (for any $\epsilon > 0$) is at least
$\Omega\big(\sqrt{\delta/\lambda}\,\log(1/\epsilon)\big)$ if $\delta \ge \lambda$, and at least $\Omega\big(\log(1/\epsilon)\big)$ if $0 < \delta < \lambda$ (where the $\Omega$ notation hides numerical constants and a mild dependence on $\|w^*\|$).
The assumption of $m$ being even is purely for technical convenience, and can be discarded at the cost of making the proof slightly more complex. Also, note that $m$ does not appear explicitly in the bound, but may appear implicitly via $\delta$ (for example, in a statistical setting $\delta$ may depend on the number of data points per machine, and may be larger if the same dataset is divided among more machines).
Let us contrast our lower bound with some existing algorithms and guarantees in the literature. First, regardless of whether the local functions are similar or not, we can always simulate any gradient-based method designed for a single machine, by iteratively computing gradients of the local functions and performing a communication round to compute their average. Clearly, this average is a gradient of the objective function $F$, which can be fed into any gradient-based method such as gradient descent or accelerated gradient descent [14]. The resulting number of required communication rounds is then equal to the number of iterations. In particular, using accelerated gradient descent for smooth and $\lambda$-strongly convex functions yields a round complexity of $\mathcal{O}\big(\sqrt{1/\lambda}\,\log(1/\epsilon)\big)$, and $\mathcal{O}\big(\sqrt{1/\epsilon}\big)$ for smooth convex functions. This matches our lower bound (up to constants and log factors) when the local functions are unrelated ($\delta$ constant).
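The following sketch (assuming synthetic ridge-regression data; not the exact setup of any cited work) illustrates this straightforward distributed implementation of accelerated gradient descent, where each iteration costs exactly one communication round in which the local gradients are averaged:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, d = 4, 200, 10
X = rng.normal(size=(m, n, d))
y = rng.normal(size=(m, n))
lam = 0.1                                     # strong-convexity (regularization) parameter

def local_grad(w, i):
    """Gradient of F_i(w) = (1/2n)||X_i w - y_i||^2 + (lam/2)||w||^2."""
    return X[i].T @ (X[i] @ w - y[i]) / n + lam * w

# Smoothness of F: largest eigenvalue of its Hessian (1/mn) sum_i X_i^T X_i + lam*I.
H = sum(X[i].T @ X[i] for i in range(m)) / (m * n) + lam * np.eye(d)
L = np.linalg.eigvalsh(H)[-1]
kappa = L / lam                               # (an upper bound on) the condition number
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum for strongly convex AGD

w = w_prev = np.zeros(d)
for t in range(200):                          # each iteration = one communication round
    v = w + beta * (w - w_prev)               # extrapolation step (purely local)
    g = np.mean([local_grad(v, i) for i in range(m)], axis=0)  # round: average local gradients
    w_prev, w = w, v - g / L                  # gradient step using the averaged gradient
```

With this scheme the round complexity equals the iteration complexity of accelerated gradient descent, i.e. $\mathcal{O}\big(\sqrt{1/\lambda}\,\log(1/\epsilon)\big)$ rounds in the strongly convex case above.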
When the functions are related, however, the upper bounds above are highly sub-optimal: even if the local functions are completely identical, so that $\delta = 0$, the number of communication rounds remains the same as in the unrelated setting. To utilize function similarity while guaranteeing an arbitrarily small $\epsilon$, the two most relevant algorithms are DANE [20] and the more recent DISCO [26]. For smooth and $\lambda$-strongly convex functions, which are either quadratic or satisfy a certain self-concordance condition, DISCO achieves a round complexity of $\tilde{\mathcal{O}}\big((1+\sqrt{\delta/\lambda})\log(1/\epsilon)\big)$ in the quadratic case ([26, Thm. 2]), which matches our lower bound in terms of the dependence on $\delta$, $\lambda$ and $\epsilon$. However, for non-quadratic losses the round complexity bounds are somewhat worse, and there are no guarantees for strongly convex and smooth functions which are not self-concordant. Thus, the question of the optimal round complexity for such functions remains open.
The full proof of Thm. 1 appears in the supplementary material, and is based on the following idea: For simplicity, suppose we have two machines, with local functions defined as follows,
It is easy to verify that for an appropriate choice of the parameters, both $F_1$ and $F_2$ are $1$-smooth and $\lambda$-strongly convex, as well as $\delta$-related. Moreover, the optimum of their average $F$ is a point with non-zero entries in all coordinates. However, since each local function has a block-diagonal quadratic term, it can be shown that for any algorithm satisfying Assumption 1, after $t$ communication rounds, the points computed by the two machines can have only the first $t+1$ coordinates non-zero. No machine will be able to further 'progress' on its own, and cause additional coordinates to become non-zero, without another communication round. This leads to a lower bound on the optimization error which depends on $t$, resulting in the theorem statement after a few computations.
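To get a feel for this mechanism, here is a minimal simulation. The two local functions below are an illustrative chain-structured variant in the spirit of the construction (machine 1 owns the 'odd' links of a coordinate chain plus the linear term, machine 2 the 'even' links); they are stand-ins, not the exact functions from the proof. Even with many local gradient steps per round, the highest non-zero coordinate advances by roughly one per communication round:

```python
import numpy as np

d, lam = 12, 0.1

def grad(machine, w):
    """Gradient of an illustrative block-structured local function (plus lam/2 * ||w||^2)."""
    g = lam * w.copy()
    links = range(0, d - 1, 2) if machine == 1 else range(1, d - 1, 2)
    for i in links:                        # quadratic term (1/2)(w_i - w_{i+1})^2
        g[i] += w[i] - w[i + 1]
        g[i + 1] += w[i + 1] - w[i]
    if machine == 1:
        g[0] -= 1.0                        # linear term -w_1, held by machine 1 only
    return g

def highest_nonzero(w):
    nz = np.flatnonzero(np.abs(w) > 1e-12)
    return 0 if nz.size == 0 else int(nz[-1]) + 1

w1 = w2 = np.zeros(d)
for rnd in range(1, 6):
    for _ in range(100):                   # unlimited local computation...
        w1 = w1 - 0.5 * grad(1, w1)        # ...but each machine only sees its own function
        w2 = w2 - 0.5 * grad(2, w2)
    w1 = w2 = (w1 + w2) / 2                # one communication round (here: averaging)
    print(f"after round {rnd}: highest non-zero coordinate = {highest_nonzero(w1)}")
```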
3.2 Non-smooth Local Functions
Remaining in the framework of algorithms satisfying Assumption 1, we now turn to discuss the situation where the local functions are not necessarily smooth or differentiable. For simplicity, our formal results here will be in the unrelated setting, and we only informally discuss their extension to a $\delta$-related setting (in a sense relevant to non-smooth functions). Formally defining $\delta$-related non-smooth functions is possible but not altogether trivial, and is therefore left to future work.
The lower bound for this setting is stated in the following theorem.
For any even number $m$ of machines, any distributed optimization algorithm which satisfies Assumption 1, and for any $\lambda \ge 0$, there exist $\lambda$-strongly convex, $1$-Lipschitz continuous convex local functions $F_1$ and $F_2$ over the unit Euclidean ball in $\mathbb{R}^d$ (where $d$ is sufficiently large), such that if $\hat w$ is the point returned by the algorithm, the number of communication rounds required to ensure $F(\hat w) - \min_{w:\|w\|\le 1} F(w) \le \epsilon$ (for any sufficiently small $\epsilon > 0$) is $\Omega\big(\sqrt{1/(\lambda\epsilon)}\big)$ for $\lambda > 0$, and $\Omega(1/\epsilon)$ for $\lambda = 0$.
As in Thm. 1, we note that the assumption of an even $m$ is for technical convenience.
This theorem, together with Thm. 1, implies that both strong convexity and smoothness are necessary for the number of communication rounds to scale logarithmically with the required accuracy $\epsilon$. We emphasize that this is true even if we allow the machines unbounded computational power to perform arbitrarily many operations satisfying Assumption 1. Moreover, a preliminary analysis indicates that performing accelerated gradient descent on smoothed versions of the local functions (using Moreau proximal smoothing, e.g. [15, 24]) can match these lower bounds up to log factors. (Roughly speaking, for any $\mu > 0$, this smoothing creates a $(1/\mu)$-smooth function which is $\mathcal{O}(\mu)$-close to the original function. Plugging these into the guarantees of accelerated gradient descent and tuning $\mu$ yields our lower bounds. Note that, in order to execute this algorithm, each machine must be sufficiently powerful to obtain the gradient of the Moreau envelope of its local function, which is indeed the case in our framework.) We leave a full formal derivation (which has some subtleties) to future work.
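As an illustration of the smoothing step (a minimal sketch using the $\ell_1$ norm as a stand-in non-smooth function; recall that the proximal-smoothing algorithm is only conjectured above to match the lower bounds), the gradient of the Moreau envelope can be obtained from the proximal operator via $\nabla M_\mu(w) = (w - \mathrm{prox}_{\mu f}(w))/\mu$:

```python
import numpy as np

def prox_l1(w, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def moreau_value(w, mu):
    """Moreau envelope M_mu(w) = min_u ||u||_1 + ||w - u||^2 / (2*mu)."""
    u = prox_l1(w, mu)
    return np.abs(u).sum() + np.sum((w - u) ** 2) / (2 * mu)

def moreau_grad(w, mu):
    """Gradient of the (1/mu)-smooth Moreau envelope."""
    return (w - prox_l1(w, mu)) / mu

w = np.array([1.5, -0.2, 0.0])
for mu in (1.0, 0.1, 0.01):          # smaller mu: closer to ||w||_1, but less smooth
    print(mu, moreau_value(w, mu), moreau_grad(w, mu))
print("original value:", np.abs(w).sum())
```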
The full proof of Thm. 2 appears in the supplementary material. The proof idea relies on the following construction. Assume that we fix the number of communication rounds to be $t$, and (for simplicity) that $t$ is even and the number of machines is $2$. Then we use local functions of the form
where the scale parameter is suitably chosen. It is easy to verify that both local functions are $\lambda$-strongly convex and $1$-Lipschitz continuous over the unit Euclidean ball. Similar to the smooth case, we argue that after $k$ communication rounds, the points computed by machine $1$ can be non-zero only on the first $k+1$ coordinates, and the points computed by machine $2$ can be non-zero only on the first $k$ coordinates. As in the smooth case, these functions allow us to 'control' the progress of any algorithm which satisfies Assumption 1.
Finally, although the result is stated in the unrelated setting, it is straightforward to obtain a similar construction in a '$\delta$-related' setting, by multiplying $F_1$ and $F_2$ by $\delta$. The resulting two functions have their gradients and subgradients at most $\mathcal{O}(\delta)$-different from each other, and the construction above leads to a lower bound of $\Omega(\delta/\epsilon)$ for convex Lipschitz functions, and $\Omega\big(\sqrt{\delta/(\lambda\epsilon)}\big)$ for $\lambda$-strongly convex Lipschitz functions. In terms of upper bounds, we are actually unaware of any relevant algorithm in the literature adapted to such a setting, and the question of attainable performance here remains wide open.
4 One Round of Communication
In this section, we study what lower bounds are attainable without any kind of structural assumption (such as Assumption 1). This is a more challenging setting, and the result we present will be limited to algorithms using a single round of communication. We note that this still captures a realistic non-interactive distributed computing scenario, where we want each machine to broadcast a single message, and a designated machine is then required to produce an output. In the context of distributed optimization, a natural example is a one-shot averaging algorithm, where each machine optimizes its own local data, and the resulting points are averaged (e.g. [27, 25]).
Intuitively, with only a single round of communication, getting an arbitrarily small error may be infeasible. The following theorem establishes a lower bound on the attainable error, depending on the strong convexity parameter $\lambda$ and the similarity measure $\delta$ between the local functions, and compares it with a 'trivial' zero-communication algorithm, which just returns the optimum of a single local function:
For any even number $m$ of machines, any dimension $d$ larger than some numerical constant, any $\lambda, \delta > 0$, and any (possibly randomized) algorithm which communicates at most $b$ bits in a single round of communication, there exist quadratic functions $F_1,\ldots,F_m$ over $\mathbb{R}^d$, which are $\delta$-related, $\lambda$-strongly convex and smooth (with parameter $\mathcal{O}(\lambda+\delta)$), for which the following hold for some positive numerical constants $c_1, c_2, c_3$:
If $b \le c_1 d^2$, then the point $\hat w$ returned by the algorithm satisfies
$$F(\hat w) - \min_{w \in \mathbb{R}^d} F(w) \;\ge\; c_2\,\frac{\delta^2}{\lambda}$$
in expectation over the algorithm's randomness.
For any machine $j$, if $\hat w_j = \arg\min_{w} F_j(w)$, then $F(\hat w_j) - \min_{w \in \mathbb{R}^d} F(w) \le c_3\,\frac{\delta^2}{\lambda}$.
The theorem shows that unless the communication budget is extremely large (quadratic in the dimension $d$), there are functions which cannot be optimized to non-trivial accuracy in one round of communication, in the sense that the same accuracy (up to a universal constant) can be obtained with a 'trivial' solution where we just return the optimum of a single local function. This complements an earlier result in [20], which showed that a particular one-round algorithm is no better than returning the optimum of a local function, under the stronger assumption that the local functions are not merely $\delta$-related, but are actually the average loss over randomly partitioned data.
The full proof appears in the supplementary material, but we sketch the main ideas below. As before, focusing on the case of two machines, and assuming machine $1$ is responsible for providing the output, we use local functions of the form
where the quadratic term of machine $2$'s function is built from a randomly chosen $\{-1,+1\}$-valued symmetric matrix with suitably bounded spectral norm, and the remaining parameters are suitable constants. These functions can be shown to be $\delta$-related as well as $\lambda$-strongly convex. Moreover, the optimum of their average equals
Thus, we see that the optimal point depends on one particular column of the random matrix. Intuitively, the machines need to approximate this column, and this is the source of hardness in this setting: machine $2$ knows the matrix but not which of its columns is needed, yet it needs to communicate to machine $1$ enough information to construct that column. However, given a communication budget much smaller than the size of the matrix (which is $d^2$ bits), it is difficult to convey enough information about the relevant column without knowing which one it is. Carefully formalizing this intuition, and using some information-theoretic tools, allows us to prove the first part of Thm. 3. Proving the second part of Thm. 3 is straightforward, using a few computations.
5 Summary and Open Questions
In this paper, we studied lower bounds on the number of communication rounds needed to solve distributed convex learning and optimization problems, under several different settings. Our results indicate that when the local functions are unrelated, then regardless of the local machines' computational power, many communication rounds may be necessary (scaling polynomially with $1/\lambda$ or $1/\epsilon$), and that the worst-case optimal algorithm (at least for smooth functions) is just a straightforward distributed implementation of accelerated gradient descent. When the functions are related, we show that the optimal performance is achieved by the algorithm of [26] for quadratic and strongly convex functions, but designing optimal algorithms for more general functions remains open. Besides these results, which required a certain mild structural assumption on the algorithm employed, we also provided an assumption-free lower bound for one-round algorithms, which implies that even for strongly convex quadratic functions, such algorithms can sometimes only provide trivial performance.
Besides the question of designing optimal algorithms for the remaining settings, several additional questions remain open. First, it would be interesting to get assumption-free lower bounds for algorithms with multiple rounds of communication. Second, our work focused on communication complexity, but in practice the computational complexity of the local computations is no less important. Thus, it would be interesting to understand what is the attainable performance with simple, runtime-efficient algorithms. Finally, it would be interesting to study lower bounds for other distributed learning and optimization scenarios.
This research is supported in part by an FP7 Marie Curie CIG grant, the Intel ICRI-CI Institute, and Israel Science Foundation grant 425/13. We thank Nati Srebro for several helpful discussions and insights.
-  [1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. CoRR, abs/1110.4198, 2011.
-  [2] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT, 2012.
-  [3] M.-F. Balcan, V. Kanchanapally, Y. Liang, and D. Woodruff. Improved distributed principal component analysis. In NIPS, 2014.
-  [4] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.
-  [5] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via ADMM. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
-  [6] K. Clarkson and D. Woodruff. Numerical linear algebra in the streaming model. In STOC, 2009.
-  [7] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In NIPS, 2011.
-  [8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
-  [9] J. Duchi, A. Agarwal, and M. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592–606, 2012.
-  [10] R. Frostig, R. Ge, S. Kakade, and A. Sidford. Competing with the empirical risk minimizer in a single pass. arXiv preprint arXiv:1412.6606, 2014.
-  [11] M. Jaggi, V. Smith, M. Takác, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan. Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
-  [12] J. Lee, T. Ma, and Q. Lin. Distributed stochastic variance reduced gradient methods. CoRR, abs/1507.07595, 2015.
-  [13] D. Mahajan, S. Keerthy, S. Sundararajan, and L. Bottou. A parallel SGD method with strong convergence. CoRR, abs/1311.0636, 2013.
-  [14] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
-  [15] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
-  [16] B. Recht, C. Ré, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
-  [17] P. Richtárik and M. Takác. Distributed coordinate descent method for learning with big data. CoRR, abs/1310.2059, 2013.
-  [18] O. Shamir. Fundamental limits of online and distributed algorithms for statistical learning and estimation. In NIPS, 2014.
-  [19] O. Shamir and N. Srebro. On distributed stochastic optimization and learning. In Allerton Conference on Communication, Control, and Computing, 2014.
-  [20] O. Shamir, N. Srebro, and T. Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In ICML, 2014.
-  [21] T. Tao. Topics in random matrix theory, volume 132. American Mathematical Soc., 2012.
-  [22] J. Tsitsiklis and Z.-Q. Luo. Communication complexity of convex optimization. J. Complexity, 3(3):231–243, 1987.
-  [23] T. Yang. Trading computation for communication: Distributed SDCA. In NIPS, 2013.
-  [24] Y.-L. Yu. Better approximation and faster algorithm using proximal average. In NIPS, 2013.
-  [25] Y. Zhang, J. Duchi, and M. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321–3363, 2013.
-  [26] Y. Zhang and L. Xiao. Communication-efficient distributed optimization of self-concordant empirical loss. In ICML, 2015.
-  [27] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010.
Appendix A Proofs
A.1 Proof of Thm. 1
The proof of the theorem is based on splitting the machines into two sub-groups of the same size, each of which is assigned a finite-dimensional restriction of one of the two local functions $F_1, F_2$ (see Eq. (3)), and tracing the maximal number of non-zero coordinates of the vectors in the sets of feasible points $W_j$.
Recall that $F_1$ and $F_2$ are defined as follows:
Formally speaking, we consider the matrices defining $F_1$ and $F_2$ as infinite in size, so that each function is defined over $\ell_2$, the space of square-summable sequences. To derive lower bounds in $\mathbb{R}^d$, we consider the following restrictions of $F_1$ and $F_2$:
Note that these restrictions produce the same values as $F_1$ and $F_2$ do for vectors whose coordinates vanish beyond the first $d$. Similarly, we define the corresponding $d \times d$ leading principal submatrices of the quadratic terms.
We assign half of the machines the restriction of $F_1$, and the other half the restriction of $F_2$. To prove the theorem, we need the following lemma, which formalizes the intuition described in the main paper. Let $E_0 = \{0\}$ and, for $k \ge 1$, let
$$E_k = \mathrm{span}\{e_1,\ldots,e_k\},$$
where $e_1, e_2, \ldots$ denote the standard unit vectors. Then, the following holds:
Suppose all the sets of feasible points satisfy $W_j \subseteq E_k$ for some $k \ge 0$. Then, under Assumption 1, right after the next communication round we have $W_j \subseteq E_{k+1}$ for all $j$.
Recall that by Assumption 1, each machine can compute new points $w$ that satisfy the following, for some $\gamma, \nu$ such that $\gamma\nu \ge 0$:
We now analyze the state of the sets of feasible points prior to the next communication round. Assume that $k$
is an odd number, i.e., assume $k = 2l + 1$ for some integer $l \ge 0$. The proof for the case where $k$ is even follows similar lines. Note that for any $w \in E_k$, we have
for any viable diagonal matrix $D$. Therefore, since $W_j \subseteq E_k$, we have that the first point generated by the machines which hold $F_1$ must satisfy
for $\gamma, \nu$ as stated in the assumption. That is,
Since the quadratic term is positive semidefinite and $\gamma\nu \ge 0$, the matrix above is invertible. Also, it and its inverse admit the same partition into blocks on the diagonal, thus multiplication by the inverse maps $E_k$ into $E_k$, yielding $w \in E_k$. Inductively extending the latter argument shows that, in the absence of any communication rounds, all the machines whose local function is $F_1$ are 'stuck' in $E_k$.
As for the machines which hold $F_2$, we have that for all $w \in E_k$,
for any viable diagonal matrix $D$. Therefore, the first point generated by these machines must satisfy
for appropriate $\gamma, \nu$. Hence,
Similarly to the previous case, this implies that the new point lies in $E_{k+1}$. It is now left to show that these machines cannot make further progress beyond $E_{k+1}$ without communicating. To see this, note that for all $w \in E_{k+1}$ we have
This means that all the points which are generated subsequently also lie in $E_{k+1}$, i.e., without communicating, the machines whose local function is $F_2$ are stuck in $E_{k+1}$. Finally, executing a communication round updates all the sets of feasible points to be contained in $E_{k+1}$. ∎
The following is a direct consequence of a recursive application of Lemma 1.
Under Assumption 1, after $t$ communication rounds, all the sets of feasible points (and hence the algorithm's output) are contained in $E_{t+1}$.
With this corollary in hand, we now turn to prove the main result. First, we compute the minimizer of the average function over $\ell_2$, whose form for an even number of machines is simply:
By the first-order optimality condition for smooth convex functions, we have
whose coordinate form is as follows
The optimal solution can now be realized as a geometric sequence $(q^i)_{i \ge 1}$, for some $q \in (0,1)$, as follows:
By Eq. (5), we must have
with the smallest root being
Therefore, this choice of $q$ satisfies Eq. (5), and it is straightforward to verify that it also satisfies Eq. (4); hence it indeed yields the minimizer. It will be convenient to denote a continuous range of coordinates $a,\ldots,b$ of a vector by the subscript $[a{:}b]$, where $a \le b$. Also, using the following inequality, which holds for $q \in (0,1)$,
together with Eq. (A.1) yields
We now use this computation (with respect to $\ell_2$) to find the minimizer of the average of the finite-dimensional restrictions actually handed to the machines. Fix the dimension $d$, and denote the corresponding minimizer by
Let $\hat w$ be some point obtained after $t$ communication rounds. To bound the sub-optimality of $\hat w$ from below, observe that
where the last equality follows from Corollary 1, according to which all the coordinates of $\hat w$, except possibly for the first $t+1$, must vanish. To bound the first term, note that
The fact that the average function is $\lambda$-strongly convex implies
Inequality (7) yields
To bound the second term from below, note that since the average function is $1$-smooth, we have
Combining the lower bounds for both terms, we get, for any
Picking $d$ sufficiently large, and considering how large the number of communication rounds must be to make this lower bound less than $\epsilon$, we get
It is worth mentioning that by computing the exact minimizers of the restricted average functions, one may derive a lower bound in which the choice of $d$ does not depend on the parameters of the problem, except for the number of communication rounds. Nevertheless, such an analysis requires more involved reasoning, which we find unnecessary for stating our results.
A.2 Proof of Thm. 2
We construct two types of local functions, and provide one of them to half of the machines, and the other to the remaining half, in some arbitrary order. In this case, the average function is simply the average of the two types of local functions.
We will first prove the theorem statement in the strongly convex case, where $\lambda > 0$ is given, and then explain how to extract from it the result in the non-strongly-convex case.
Fix a natural number and some