Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks

12/10/2018
by Xin Zhang, et al.

Network consensus optimization has received increasing attention in recent years and has found important applications in many scientific and engineering fields. To solve network consensus optimization problems, one of the most well-known approaches is the distributed gradient descent method (DGD). However, in networks with slow communication rates, DGD's performance is unsatisfactory for solving high-dimensional network consensus problems due to the communication bottleneck. This motivates us to design a communication-efficient DGD-type algorithm based on compressed information exchanges. Our contributions in this paper are three-fold: i) We develop a communication-efficient algorithm called amplified-differential compression DGD (ADC-DGD) and show that it converges under any unbiased compression operator; ii) We rigorously prove the convergence performances of ADC-DGD and show that they match with those of DGD without compression; iii) We reveal an interesting phase transition phenomenon in the convergence speed of ADC-DGD. Collectively, our findings advance the state-of-the-art of network consensus optimization theory.



I Introduction

In recent years, network consensus optimization has received increasing attention thanks to its generality and wide applicability. To date, network consensus optimization has found important applications in many scientific and engineering fields, e.g., distributed sensing in wireless sensor networks [1, 2, 3, 4], decentralized machine learning [5, 6], multi-agent robotic systems [7, 8, 9], and smart grids [10, 11], to name just a few. Simply speaking, in a network consensus optimization problem, each node only has access to some component of the global objective function. That is, the global objective function is only partially known at each node. Through communications with local neighbors, all nodes in the network collaborate with each other and try to reach a consensus on an optimal solution, which minimizes the global objective function.

Among various algorithms for solving network consensus optimization problems, one of the most effective methods is the distributed gradient descent (DGD) algorithm, a first-order iterative method developed by Nedic and Ozdaglar approximately a decade ago [12]. The enduring popularity of DGD is primarily due to its implementation simplicity and elegant networking interpretation: in each iteration of DGD, each node performs an update by using a linear combination of a gradient step with respect to its local objective function and a weighted average of the iterates from its local neighbors (also termed a consensus step). It has been shown that DGD enjoys the same convergence speed as the classical gradient descent method, measured in the number of iterations $k$ [12]. The simplicity and salient features of DGD have further inspired a large number of extensions to various network settings (see Section II for more in-depth discussions).

However, despite its theoretical and engineering appeal, the performance of DGD may not always be satisfactory in practice. This is particularly true when solving a high-dimensional consensus problem over a network with low communication speed. In this case, due to the large amount of data sharing and the communication bottleneck, exchanging full high-dimensional information between neighboring nodes is time-consuming (or even infeasible), which significantly hinders the overall convergence of DGD. To improve the convergence speed, several second-order approaches using Hessian approximation (with respect to the local objective function) have been proposed (see, e.g., [13, 14]). Although these second-order methods converge in fewer iterations (hence less information exchange), they require a matrix inversion in each iteration, implying an $O(d^3)$ per-iteration complexity for a $d$-dimensional problem. Hence, for high-dimensional consensus problems (i.e., large $d$), low-complexity first-order methods remain preferable in practice.

To address DGD’s limitations in high-dimensional network consensus over low-speed networks, a naturally emerging idea is to compress the information exchanged between nodes. Specifically, by compressing the information in a high-dimensional state space to a smaller set of quantized states, each node can use a codebook to represent the quantized states with a small number of bits. Then, rather than directly transmitting full information, each node can just transmit the small-size codewords, which significantly reduces the communication burden. Moreover, from a cybersecurity standpoint, transmitting compressed information is also very helpful because each node can encrypt its codebook and avoid revealing full information to potential eavesdroppers in the network.

However, with compressed information being adopted in DGD, several fundamental questions immediately arise: i) Will DGD with compressed information exchanges still converge? ii) If the answer to i) is no, could we modify DGD to make it work with compressed information? iii) If the answer to ii) is yes, how fast does this modified DGD method converge? Indeed, answering all these questions is highly non-trivial, and they constitute the main subjects of this paper. The main contribution of this paper is that we provide concrete answers to all three fundamental questions. Our key results and their significance are summarized as follows:

  • First, we show that DGD with straightforward compressed information exchange fails to converge because of a non-vanishing accumulated noise term resulting from compression over iterations. This motivates us to develop a noise variance reduction method. To this end, we propose a new idea called "amplified-differential compression DGD" (ADC-DGD), where, instead of directly exchanging compressed estimates of the global optimization variable in DGD, we exchange an amplified version of the state differential between consecutive iterations, hence the name. We show that ADC-DGD effectively diminishes the accumulated noise from compression and induces convergence.

  • We show that, under any unbiased compression operator, our ADC-DGD method converges at rate $O(1/k)$ to an $O(\alpha)$-neighborhood of an optimal solution with a constant step-size $\alpha$. Under diminishing step-sizes, ADC-DGD converges asymptotically at rate $o(1/\sqrt{k})$ to an optimal solution. We note that these convergence rates are the best possible in the sense that they match those of the original DGD without compression. This result is surprising since the information loss due to compression could be large. We also note that the convergence rate of ADC-DGD outperforms other existing distributed first-order methods with compression (see Section II for detailed discussions).

  • Based on the above convergence results of ADC-DGD, we further investigate the impact of ADC-DGD's amplifying factor on convergence speed and communication load. Interestingly, we reveal a phase transition phenomenon of the convergence speed with respect to the amplification exponent $\gamma$ in ADC-DGD. Specifically, when $\gamma < 1$ (sublinear growth of amplification), the convergence speed approaches that of DGD as $\gamma$ increases. However, as soon as $\gamma \ge 1$, there is no further convergence speed improvement, but the network communication load continues to grow. This shows that $\gamma = 1$ is a critical point, below which we can trade communication overhead for convergence speed.

Collectively, our results contribute to a growing theoretical foundation of network consensus optimization. The rest of the paper is organized as follows. In Section II, we review related work. In Section III, we introduce the network consensus optimization problem and show that DGD with compressed information exchange fails to converge. In Section IV, we present our ADC-DGD algorithm and its convergence performance analysis. Numerical results are provided in Section V and Section VI concludes this paper.

II Related Work

In this section, we first provide a quick overview of the historical development of DGD-type algorithms. We then focus on recent advances in communication-conscious network consensus optimization, including related work that utilizes compression.

1) DGD-Based Algorithms for Network Consensus: Network consensus optimization can trace its roots to the seminal work by Tsitsiklis [15], where the system model and the analysis framework were first developed. As mentioned earlier, a well-known method for solving network consensus optimization is the distributed (sub)gradient descent (DGD) method, which was proposed by Nedic and Ozdaglar in [12]. DGD was recently reexamined in [16] by Yuan et al. using a new Lyapunov technique, which offers further mathematical understanding of its convergence performance. In their follow-up work [17], the convergence behavior of DGD was further analyzed for non-convex problems. Recently, several DGD variants have been proposed to enhance the convergence performance (e.g., achieving the same convergence rate with constant step-size [18] or even under time-varying network graphs [19]).

2) Communication-Conscious Distributed Optimization: As mentioned earlier, studies have shown that the communication cost of DGD could be a major concern in practice. To this end, Chow et al. [20] studied the tradeoff between communication requirements and prescribed accuracy. In [21], Berahas et al. developed an adaptive DGD framework, DGD$^t$, to balance the costs between communication and computation. Here, the parameter $t$ represents the number of consensus steps performed per gradient descent step ($t = 1$ corresponding to the original DGD). The larger the $t$-value, the cheaper the communication cost, and vice versa. The most related work to ours is by Tang et al. [22], which, to our knowledge, is also the only work in the literature that considers adopting compression in DGD. However, our algorithm differs from [22] in the following key aspects: i) The compression in [22] uses a quantized extrapolation between two successive iterates, which can be viewed as a diminishing step-size strategy. In contrast, our ADC-DGD algorithm uses an amplified differential of two successive iterates. As will be shown later, our algorithm can be interpreted as a variance reduction method; ii) Our convergence rate outperforms that of [22]: the fastest convergence rate of the algorithms in [22] is $O(1/\sqrt{k})$, while the convergence rate of our ADC-DGD algorithm is $o(1/\sqrt{k})$; iii) To reach the best convergence rate in [22], the extrapolation compression algorithm needs to solve a complex equation to obtain an optimal step-size. In contrast, our ADC-DGD algorithm uses standard sublinearly diminishing step-sizes, which is of much lower complexity and can be easily implemented in practice.

III Network Consensus Optimization and Distributed Gradient Descent

In Section III-A, we first introduce the network consensus optimization problem, which is followed by the basic version of the DGD method. Then in Section III-B, we will illustrate an example where DGD with directly compressed information fails to converge, which motivates our subsequent ADC-DGD approach in Section IV.

III-A Consensus Optimization over Networks: A Primer

Consider an undirected connected graph $\mathcal{G} = (\mathcal{N}, \mathcal{L})$, where $\mathcal{N}$ and $\mathcal{L}$ are the sets of nodes and links, respectively, with $|\mathcal{N}| = N$ and $|\mathcal{L}| = L$. Let $x \in \mathbb{R}^d$ be a global decision variable to be optimized. Each node $i$ has a local objective function $f_i(x)$ (only available to node $i$). The global objective function is the sum of all local objectives, i.e., $f(x) \triangleq \sum_{i=1}^{N} f_i(x)$. Our goal is to solve the following network-wide optimization problem in a distributed fashion:

$$\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{N} f_i(x). \qquad (1)$$

Problem (1) has a wide range of applications in practice. For example, consider a wireless sensor network, where each sensor node distributively collects some local monitored temporal data and collaborates to detect the change-point in the global temporal data. This problem can be formulated as $\min_x \sum_{i=1}^{N} f_i(x)$, where each $f_i(\cdot)$ is a local CUSUM (cumulative sum control chart) statistic. Note that Problem (1) can be equivalently written in the following consensus form:

$$\text{Minimize} \;\; \sum_{i=1}^{N} f_i(x_i) \qquad (2)$$
$$\text{subject to} \;\; x_i = x_j, \;\; \forall (i,j) \in \mathcal{L},$$

where $x_i$ is the local copy of $x$ at node $i$. In Problem (2), the constraints enforce that the local copy at each node is equal to those of its neighbors, hence the name consensus. It is well-known [12] that Problem (2) can be reformulated as:

$$\text{Minimize} \;\; \sum_{i=1}^{N} f_i(x_i) \qquad (3)$$
$$\text{subject to} \;\; (\mathbf{W} \otimes \mathbf{I}_d)\,\mathbf{x} = \mathbf{x},$$

where $\mathbf{x} \triangleq [x_1^{\top}, \ldots, x_N^{\top}]^{\top}$, $\mathbf{I}_d$ denotes the $d$-dimensional identity matrix, and the operator $\otimes$ denotes the Kronecker product. In (3), $\mathbf{W} \in \mathbb{R}^{N \times N}$ is referred to as the consensus matrix and satisfies the following properties:

  1. $\mathbf{W}$ is doubly stochastic: $\mathbf{W}\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^{\top}\mathbf{W} = \mathbf{1}^{\top}$.

  2. The sparsity pattern of $\mathbf{W}$ follows the network topology: $W_{ij} > 0$ if $(i,j) \in \mathcal{L}$ or $i = j$, and $W_{ij} = 0$ otherwise.

  3. $\mathbf{W}$ is symmetric and hence has real eigenvalues.

The doubly stochastic property in 1) ensures that all eigenvalues of $\mathbf{W}$ are in $(-1, 1]$ and exactly one eigenvalue is equal to 1. Hence, it follows from Property 3) that one can sort the eigenvalues as $1 = \lambda_1(\mathbf{W}) > \lambda_2(\mathbf{W}) \ge \cdots \ge \lambda_N(\mathbf{W}) > -1$. Let $\beta \triangleq \max\{|\lambda_2(\mathbf{W})|, |\lambda_N(\mathbf{W})|\}$. Clearly, we have $\beta < 1$. It is shown in [12] that $(\mathbf{W} \otimes \mathbf{I}_d)\mathbf{x} = \mathbf{x}$ if and only if $x_i = x_j$, $\forall i, j \in \mathcal{N}$. Therefore, Problems (2) and (3) are equivalent.
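As a concrete illustration (ours, not from the paper), a consensus matrix satisfying Properties 1)-3) can be built for any connected topology with the well-known Metropolis-Hastings weighting rule. The sketch below constructs one for a small ring network and checks the spectral property discussed above; the topology and all names are our illustrative choices:

```python
import numpy as np

def metropolis_weights(adj):
    """Build a consensus matrix W from a 0/1 adjacency matrix via the
    Metropolis-Hastings rule: symmetric, doubly stochastic, and with a
    sparsity pattern that follows the network topology."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # put the leftover mass on the diagonal
    return W

# 5-node ring: each node talks to its two neighbors.
adj = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
W = metropolis_weights(adj)
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
print(np.sort(np.linalg.eigvalsh(W)))  # all eigenvalues in (-1, 1], exactly one equal to 1
```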

The equivalent network consensus formulation in Problem (3) motivates the design of the decentralized gradient descent (DGD) method as stated in Algorithm 1:

  Algorithm 1: Decentralized Gradient Descent (DGD) [12].

  Initialization:

  1. Let $k = 1$. Choose initial values $x_i^{(1)}$, $\forall i$, and step-size $\alpha_1$.

  Main Loop:

  2. In the $k$-th iteration, each node $i$ sends its local copy $x_i^{(k)}$ to its neighbors. Also, upon reception of all local copies from its neighbors, each node updates its local copy as follows:

     $$x_i^{(k+1)} = \sum_{j \in \mathcal{N}_i \cup \{i\}} W_{ij}\, x_j^{(k)} - \alpha_k \nabla f_i\big(x_i^{(k)}\big), \qquad (4)$$

     where $W_{ij}$ is the entry in the $i$-th row and $j$-th column of $\mathbf{W}$, $x_i^{(k)}$ and $\alpha_k$ represent node $i$'s value and the step-size in the $k$-th iteration, respectively, and $\mathcal{N}_i$ denotes the set of neighbors of node $i$.

  3. Stop if a desired convergence criterion is met; otherwise, let $k \leftarrow k + 1$ and go to Step 2.

We can see that the DGD update in (4) consists of a consensus step and a local gradient step, which can be easily implemented in a network. Also, DGD achieves the same convergence rate as the classical gradient descent method. However, as mentioned in Section I, DGD may not work well for high-dimensional consensus problems in low-speed networks. Hence, we are interested in developing a DGD-type algorithm with compressed information exchanges in this paper. In what follows, we first show that DGD fails to converge if compressed information is directly adopted in the consensus step.
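The following is a minimal NumPy sketch of the DGD update (4). The ring consensus matrix, the quadratic local objectives, the step-size, and the iteration count are our own illustrative choices rather than the paper's:

```python
import numpy as np

def dgd(W, grads, x0, alpha=0.05, iters=500):
    """Minimal sketch of DGD (Algorithm 1): each iteration mixes the local
    copies with the consensus matrix W, then takes a local gradient step.
    `grads` is a list of callables, one local gradient oracle per node."""
    x = x0.copy()                          # row i is node i's local copy x_i
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        x = W @ x - alpha * g              # consensus step + gradient step, Eq. (4)
    return x

# Illustrative example (ours): f_i(x) = (x - b_i)^2, so the consensus optimum
# is the average of the b_i's.
I = np.eye(5)
W = (I + np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1)) / 3.0   # 5-node ring
b = np.array([-1.0, 0.5, 2.0, 3.5, -2.0])
grads = [lambda z, bi=bi: 2.0 * (z - bi) for bi in b]
x = dgd(W, grads, x0=np.zeros((5, 1)))
print(x.ravel(), "target:", b.mean())     # all copies end up near mean(b), up to an O(alpha) error
```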

III-B DGD with Directly Compressed Information Exchange Does Not Converge: A Motivating Example

We first introduce the notion of unbiased stochastic compression operator, which has been widely used to represent compressions in the literature (see, e.g., [20, 23, 21, 24, 25, 26]).

Definition 1 (Unbiased Stochastic Compression Operator).

A stochastic compression operator $C(\cdot)$ is unbiased if it satisfies $\mathbb{E}[C(x)] = x$ and $\mathbb{E}\|C(x) - x\|^2 \le B^2$, $\forall x \in \mathbb{R}^d$, for some finite constant $B$.

Definition 1 guarantees that the noise caused by the compression has no effect on the mean of the parameter and that its variance is bounded. Many compression operators satisfy the above definition. The following is an example:

Example 1 (The Quantized Compression Operator [24]).

For $x \in \mathbb{R}^d$, the $i$-th element of $C(x)$ is:

$$[C(x)]_i = \lfloor x_i \rfloor + b_i,$$

where $\lfloor x_i \rfloor$ denotes the largest integer smaller than or equal to $x_i$, and $b_i \in \{0, 1\}$ is a Bernoulli random variable with success probability $\Pr\{b_i = 1\} = x_i - \lfloor x_i \rfloor$.
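As a sanity check on Example 1, here is a hedged NumPy sketch of this stochastic-rounding quantizer; the function name and test vector are ours:

```python
import numpy as np

def quantize(x, rng=np.random.default_rng()):
    """Unbiased stochastic rounding (cf. Example 1): round each coordinate
    down with probability 1 - p and up with probability p = x - floor(x),
    so that E[quantize(x)] = x and the per-entry variance p(1-p) <= 1/4."""
    lo = np.floor(x)
    p = x - lo                               # fractional part in [0, 1)
    return lo + (rng.random(x.shape) < p)    # Bernoulli(p) round-up

x = np.array([0.3, -1.7, 2.5])
samples = np.stack([quantize(x) for _ in range(10000)])
print(samples.mean(axis=0))   # approximately recovers x, confirming unbiasedness
```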

Now, we consider the convergence of DGD with unbiased stochastic compression. If local copies are compressed and then directly used in the consensus step of the DGD algorithm, then Eq. (4) in Algorithm 1 becomes:

$$x_i^{(k+1)} = \sum_{j \in \mathcal{N}_i \cup \{i\}} W_{ij}\, C\big(x_j^{(k)}\big) - \alpha_k \nabla f_i\big(x_i^{(k)}\big) = \sum_{j \in \mathcal{N}_i \cup \{i\}} W_{ij}\, x_j^{(k)} - \alpha_k \nabla f_i\big(x_i^{(k)}\big) + \sum_{j \in \mathcal{N}_i \cup \{i\}} W_{ij}\, \epsilon_j^{(k)}, \qquad (5)$$

where $\epsilon_j^{(k)} \triangleq C(x_j^{(k)}) - x_j^{(k)}$ is the compression noise. Eq. (5) shows that there is a non-vanishing noise term accumulated over iterations, which prevents the DGD algorithm from converging. For example, consider a simple two-node network with quadratic local objectives, where the quantized compression operator [25] is adopted in DGD. The simulation results are illustrated in Fig. 1, where we can see that DGD fails to converge after 1000 iterations even for such a small-size network consensus problem. This motivates us to pursue a new algorithmic design in Section IV.

Fig. 1: Simulation results for DGD with the quantized compression operator on a two-node network; DGD fails to converge after 1000 iterations.

IV Amplified-Differential Compression Distributed Gradient Descent Method (ADC-DGD)

In this section, we first introduce our ADC-DGD algorithm in Section IV-A. Then, we present the main theoretical results and their intuitions in Section IV-B. The proofs of the main results are provided in Section IV-C.

IV-A The ADC-DGD Algorithm

Our ADC-DGD algorithm is stated in Algorithm 2:

  Algorithm 2: Amplified-Differential Compression DGD (ADC-DGD).

  Initialization:

  1. Let $k = 1$. Let $\hat{x}_i^{(0)} = \mathbf{0}$ and $d_i^{(1)} = x_i^{(1)}$, $\forall i$. Choose initial values $x_i^{(1)}$, step-size $\alpha_1$, and the amplification exponent $\gamma \ge 1/2$.

  Main Loop:

  2. In the $k$-th iteration, each node $i$ sends the compressed amplified-differential $C\big(k^{\gamma} d_i^{(k)}\big)$ to its neighbors. Also, upon collecting all neighbors' information, each node estimates its neighbors' (imprecise) values: $\hat{x}_j^{(k)} = \hat{x}_j^{(k-1)} + k^{-\gamma} C\big(k^{\gamma} d_j^{(k)}\big)$, $\forall j \in \mathcal{N}_i$. Then, each node updates its local value:

     $$x_i^{(k+1)} = W_{ii}\, x_i^{(k)} + \sum_{j \in \mathcal{N}_i} W_{ij}\, \hat{x}_j^{(k)} - \alpha_k \nabla f_i\big(x_i^{(k)}\big). \qquad (6)$$

     Each node updates its local differential: $d_i^{(k+1)} = x_i^{(k+1)} - \hat{x}_i^{(k)}$.

  3. Stop if a desired convergence criterion is met; otherwise, let $k \leftarrow k + 1$ and go to Step 2.

Several important remarks on Algorithm 2 are in order: i) Compared to the original DGD, each node under ADC-DGD requires additional memory to store the (imprecise) values of its neighbors from the previous iteration, i.e., $\hat{x}_j^{(k-1)}$, $\forall j \in \mathcal{N}_i$. This additional memory allows the neighbors to transmit only the differential between successive iterations rather than the iterates themselves. Note that this memory requirement is modest in practice since many computer networks are scale-free (i.e., the node degree distribution follows a power law and hence most nodes have low degrees); ii) Each node sends out a compressed version of the amplified-differential, $C(k^{\gamma} d_i^{(k)})$. This information is then de-amplified at the receiving nodes as $k^{-\gamma} C(k^{\gamma} d_i^{(k)})$, which is a noisy version of $d_i^{(k)}$. Based on the memory of the previous values, each node obtains estimates of its neighbors' values $\hat{x}_j^{(k)}$, $\forall j \in \mathcal{N}_i$. Clearly, ADC-DGD is more communication-efficient than the original DGD; iii) Once $\hat{x}_j^{(k)}$, $\forall j \in \mathcal{N}_i$, are available, the update in (6) follows the same structure as in DGD, which also contains a consensus step and a local gradient step. Therefore, the complexity of ADC-DGD is almost identical to that of the original DGD, which means that ADC-DGD enjoys the same low complexity.
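For concreteness, here is a hedged sketch of ADC-DGD as we read Algorithm 2. The update details (in particular, using each node's exact own copy in the consensus step) and all problem parameters are our reconstruction under the assumptions stated in the comments, not verbatim from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
def quantize(v):                        # unbiased stochastic rounding (Example 1)
    lo = np.floor(v)
    return lo + (rng.random(v.shape) < (v - lo))

def adc_dgd(W, grads, x0, alpha=0.05, gamma=1.0, iters=1000):
    """Sketch of ADC-DGD (Algorithm 2): each node broadcasts the compressed,
    amplified differential C(k^gamma * d_i); receivers de-amplify it and patch
    their running estimates x_hat, so the estimate error decays like k^(-gamma)."""
    n = x0.shape[0]
    D = np.diag(np.diag(W))             # self-weights (assumed: exact own copies)
    x = x0.copy()                       # exact local copies, shape (n, d)
    x_hat = np.zeros_like(x)            # shared imprecise estimates of all nodes
    for k in range(1, iters + 1):
        amp = float(k) ** gamma
        d = x - x_hat                                  # differential d_i^(k)
        x_hat = x_hat + quantize(amp * d) / amp        # de-amplified estimate update
        g = np.stack([grads[i](x[i]) for i in range(n)])
        x = D @ x + (W - D) @ x_hat - alpha * g        # consensus + gradient step, Eq. (6)
    return x

# Same hypothetical two-node instance as before; both copies end up near 0.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
grads = [lambda z: 2.0 * (z - 1.0), lambda z: 2.0 * (z + 1.0)]
print(adc_dgd(W, grads, np.array([[5.0], [-5.0]])))
```

Unlike the compress-then-mix variant above, the de-amplified compression noise here shrinks at rate $k^{-\gamma}$, so the iterates settle into the usual $O(\alpha)$ error ball.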

IV-B Main Convergence Results

Before presenting the convergence results of ADC-DGD, we first state several needed assumptions:

Assumption 1.

The local objective functions satisfy:

  • (Lower boundedness) There exists an optimal solution $x^*$ with finite $f^* \triangleq f(x^*)$ such that $f(x) \ge f^*$, $\forall x \in \mathbb{R}^d$.

  • (Lipschitz continuous gradient) There exists a constant $L > 0$ such that $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\,\|x - y\|$, $\forall x, y \in \mathbb{R}^d$, $\forall i \in \mathcal{N}$.

Assumption 2 (Growth rate at infinity).

If the domain of $f$ is unbounded, then there exist constants $c > 0$ and $M > 0$ such that

$$f(x) \ge c\,\|x\|, \qquad \forall\, \|x\| \ge M,$$

where $c$ and $M$ do not depend on $x$.

Assumption 1 is standard in the convergence analysis of gradient-descent-type algorithms: the first bullet ensures the existence of an optimal solution and the second bullet guarantees the smoothness of the local objectives. Assumption 2 is a technical condition coming out of our proofs and guarantees that, at infinity, the objective function grows at least linearly. We note that Assumption 2 is a mild assumption, which is evidenced by the following lemma (proof details are relegated to Appendix A).

Lemma 1.

Any strictly convex function satisfying Assumption 1 also satisfies Assumption 2.

Fig. 2: Examples of non-convex functions satisfying Assumption 2.

In addition to convex objectives, many non-convex functions also satisfy Assumption 2, as shown in Fig. 2:

Example 2.

(Non-convex functions satisfying Assumption 2): Fig. 2 plots two such functions; each is non-convex over a bounded region near the origin, yet grows at least linearly once the argument is sufficiently large.

Our first key result is on the convergence of local variables to the mean vector across nodes:

Theorem 1.

Let the mean vector at the $k$-th iteration be defined as $\bar{x}^{(k)} \triangleq \frac{1}{N} \sum_{i=1}^{N} x_i^{(k)}$. Under Assumption 1, if the local gradients $\|\nabla f_i(x_i^{(k)})\|$ are bounded and the amplifying exponent satisfies $\gamma \ge 1/2$, then:

  • For a constant step-size $\alpha_k = \alpha$, $\forall k$: $\limsup_{k \to \infty} \mathbb{E}\|x_i^{(k)} - \bar{x}^{(k)}\| = O(\alpha)$, $\forall i$;

  • For a diminishing step-size $\alpha_k = O(k^{-\sigma})$ with some $\sigma \in (1/2, 1]$: $\lim_{k \to \infty} \mathbb{E}\|x_i^{(k)} - \bar{x}^{(k)}\| = 0$, $\forall i$.

Remark 1.

Theorem 1 says that the local copies will converge to the mean vector asymptotically with a diminishing step-size, or stay within a bounded error ball of the mean vector if a constant step-size is adopted.

Our second key convergence result is on the convergence rate of ADC-DGD under constant step-sizes:

Theorem 2 (Constant Step-Size).

Let the step-size be constant, i.e., $\alpha_k = \alpha$, $\forall k$, with $\alpha \in (0, 1/L)$. Under Assumptions 1-2, if the amplified exponent satisfies $\gamma \ge 1/2$, then it holds that:

$$\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\big\|\nabla f\big(\bar{x}^{(k)}\big)\big\|^2 \le \frac{C_1}{\alpha K} + C_2\,\alpha, \qquad (7)$$

where $C_1$ and $C_2$ are two constants.

Remark 2.

Under the same conditions as Theorem 2, we immediately have that Algorithm 2 has a sublinear ergodic convergence rate until reaching an $O(\alpha)$ error ball, and the fastest rate is $O(1/K)$.

Our third key convergence result is concerned with the convergence rate of ADC-DGD under diminishing step-sizes:

Theorem 3 (Diminishing Step-Sizes).

Under Assumptions 1-2, if the local objectives have bounded gradients, i.e., there exists a positive constant $G$ such that $\|\nabla f_i(x)\| \le G$, $\forall x \in \mathbb{R}^d$ and $\forall i \in \mathcal{N}$, then with diminishing step-sizes $\alpha_k = O(k^{-\sigma})$, $\sigma \in (1/2, 1]$, it holds that $\liminf_{k \to \infty} \|\nabla f(\bar{x}^{(k)})\| = 0$ almost surely.

Remark 3.

In Theorem 3, the exponent $\sigma$ governing the diminishing rate of the step-size is lower bounded ($\sigma > 1/2$). Thus, the best convergence rate for this algorithm is $o(1/\sqrt{k})$, which is faster than the corresponding rate in [22]. We also note that our convergence result is in "small-o", which is stronger than conventional "big-O" convergence results.

Remark 4 (Intuition and Design Rationale of ADC-DGD).

To understand why ADC-DGD converges, a closer look at (6) in Algorithm 2 reveals that:

$$x_i^{(k+1)} = \sum_{j \in \mathcal{N}_i \cup \{i\}} W_{ij}\, x_j^{(k)} - \alpha_k \nabla f_i\big(x_i^{(k)}\big) + \sum_{j \in \mathcal{N}_i} W_{ij}\big(\hat{x}_j^{(k)} - x_j^{(k)}\big). \qquad (8)$$

Thanks to the properties of the unbiased stochastic compression operator (cf. Definition 1), the noise term in the last step of (8) has zero mean and a vanishing variance as $k$ gets large. This is in contrast to the accumulated non-vanishing noise term in DGD with direct compression (cf. Eq. (5)). Eq. (8) also shows that our ADC-DGD algorithm can be interpreted as a variance reduction method. Indeed, our proofs in Section IV-C are based on these intuitions.
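To make this variance-reduction effect concrete, the following short calculation (ours) uses only the unbiasedness and bounded variance in Definition 1, applied to the de-amplified differential:

```latex
\mathbb{E}\!\left[k^{-\gamma} C\!\left(k^{\gamma} d\right)\right]
   = k^{-\gamma} \cdot k^{\gamma} d = d,
\qquad
\mathbb{E}\!\left\|k^{-\gamma} C\!\left(k^{\gamma} d\right) - d\right\|^{2}
   = k^{-2\gamma}\, \mathbb{E}\!\left\|C\!\left(k^{\gamma} d\right) - k^{\gamma} d\right\|^{2}
   \le \frac{B^{2}}{k^{2\gamma}}.
```

Hence each exchanged differential is recovered without bias, with a per-iteration noise variance of order $k^{-2\gamma}$ that vanishes as $k \to \infty$ and is summable whenever $\gamma > 1/2$.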

IV-C Proofs of the Main Theorems

Due to space limitation, in this subsection we outline the key steps of the proofs of Theorems 1-3. We relegate proof details to the appendices; some appendices provide proof sketches due to the length of the proofs.

Step 1): Introducing a Lyapunov Function. Consider the following Lyapunov function, which is also used in [16, 21]:

$$L_{\alpha}(\mathbf{x}) \triangleq F(\mathbf{x}) + \frac{1}{2\alpha}\, \mathbf{x}^{\top} \big(\mathbf{I} - \tilde{\mathbf{W}}\big)\, \mathbf{x}, \qquad (9)$$

where $F(\mathbf{x}) \triangleq \sum_{i=1}^{N} f_i(x_i)$ and $\tilde{\mathbf{W}} \triangleq \mathbf{W} \otimes \mathbf{I}_d$, so that $\mathbf{I} - \tilde{\mathbf{W}} \succeq \mathbf{0}$. The following lemma is from [16]; it says that the Lyapunov function has a Lipschitz-continuous gradient.

Lemma 2.

Under Assumption 1, the Lyapunov function $L_{\alpha}(\cdot)$ has $\hat{L}$-Lipschitz gradient, i.e., $\|\nabla L_{\alpha}(\mathbf{x}) - \nabla L_{\alpha}(\mathbf{y})\| \le \hat{L}\,\|\mathbf{x} - \mathbf{y}\|$, where $\hat{L} \le L + 2/\alpha$.

Note that, using the notation $\mathbf{x}^{(k)} \triangleq [(x_1^{(k)})^{\top}, \ldots, (x_N^{(k)})^{\top}]^{\top}$ and $\hat{\mathbf{x}}^{(k)} \triangleq [(\hat{x}_1^{(k)})^{\top}, \ldots, (\hat{x}_N^{(k)})^{\top}]^{\top}$, we can compactly rewrite the updating step (6) in Algorithm 2 as follows:

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \alpha_k \big[\nabla L_{\alpha_k}\big(\mathbf{x}^{(k)}\big) + \mathbf{n}^{(k)}\big], \qquad (10)$$

where $\mathbf{x}^{(k)}$ is the parameter vector in the $k$-th iteration, $\hat{\mathbf{x}}^{(k)}$ is the vector of imprecise parameters, and $\mathbf{n}^{(k)}$ collects the compression noise $\hat{\mathbf{x}}^{(k)} - \mathbf{x}^{(k)}$ weighted by $\tilde{\mathbf{W}}$ and scaled by $1/\alpha_k$. It can be seen that Eq. (10) is a one-step stochastic gradient descent for $L_{\alpha_k}(\cdot)$, and the noise term $\mathbf{n}^{(k)}$ has zero mean and a variance with a diminishing bound, i.e.,

$$\mathbb{E}\big[\mathbf{n}^{(k)}\big] = \mathbf{0}, \qquad (11)$$
$$\mathbb{E}\big\|\mathbf{n}^{(k)}\big\|^2 \le \frac{N B^2}{\alpha_k^2\, k^{2\gamma}}, \qquad (12)$$

where (12) follows from the fact that the eigenvalues of $\tilde{\mathbf{W}}$ are in $(-1, 1]$ and from Definition 1.
Step 2) Convergence of the Objective Value. Note from (10) that the noise caused by compression is similar to the noise in the standard stochastic gradient descent (SGD) method. Hence, we can apply similar analysis techniques from SGD to the iterations of ADC-DGD to obtain the following result:

Theorem 4 (Bounded Gradient).

Under Assumptions 1-2, if the step-size satisfies $\alpha_k = O(k^{-\sigma})$ with $\sigma \in (1/2, 1]$ and the amplified exponent satisfies $\gamma \ge 1/2$ in Algorithm 2, then there exists a constant $G < \infty$ such that $\mathbb{E}\|\nabla L_{\alpha_k}(\mathbf{x}^{(k)})\| \le G$ and $\mathbb{E}[L_{\alpha_k}(\mathbf{x}^{(k)})]$ is bounded for all $k$. Moreover, the objective values converge.

Theorem 4 shows that, with an appropriate step-size and amplifying exponent, Algorithm 2 converges; due to the compression noise, however, the convergence rate is sublinear.

Step 3) Proving Theorem 1. Note from Algorithm 2 and (10) that the iterates can be unrolled as:

$$\mathbf{x}^{(k+1)} = \tilde{\mathbf{W}}^{k} \mathbf{x}^{(1)} - \sum_{s=1}^{k} \alpha_s\, \tilde{\mathbf{W}}^{k-s}\, \nabla F\big(\mathbf{x}^{(s)}\big) + \sum_{s=1}^{k} \tilde{\mathbf{W}}^{k-s}\, \boldsymbol{\epsilon}^{(s)}, \qquad (13)$$

where $\boldsymbol{\epsilon}^{(s)}$ denotes the compression noise injected in the $s$-th iteration. Eq. (13) characterizes the trajectory of the iterates: each iterate consists of two parts, one from gradients and the other from noise. Note that in (13), the variance of the accumulated noise involves sums of the form $\sum_{s=1}^{k} \beta^{k-s} s^{-\gamma}$. Next, we prove an interesting lemma for such sums, which is useful in proving Theorem 1.

Lemma 3.

Define $h(k) \triangleq \sum_{s=1}^{k} \beta^{k-s} s^{-\gamma}$, where $\beta \in (0, 1)$ and $\gamma \ge 1/2$. It follows that $\lim_{k \to \infty} h(k) = 0$.

Lemma 3 implies that the negative effect of the compression noise can be ignored asymptotically, which induces convergence. With (13), Theorem 4, and Lemma 3, we can finally prove Theorem 1; the details are relegated to Appendix D.

Step 4) Proving Theorems 2 and 3. With some algebraic derivation, we can show the following fundamental result:

Lemma 4.

Let $\{\mathcal{F}_k\}$ be the filtration generated by the iterates up to iteration $k$. Under Assumptions 1-2, the following inequality holds:

$$\mathbb{E}\big[L_{\alpha_{k+1}}\big(\mathbf{x}^{(k+1)}\big) \,\big|\, \mathcal{F}_k\big] \le L_{\alpha_k}\big(\mathbf{x}^{(k)}\big) - \frac{\alpha_k}{2}\, \big\|\nabla L_{\alpha_k}\big(\mathbf{x}^{(k)}\big)\big\|^2 + O\big(k^{-2\gamma}\big), \qquad (14)$$

where $\alpha_k$ is the step-size at the $k$-th iteration.

Eq. (14) in Lemma 4 is similar to the contraction inequality in the stochastic gradient descent algorithm, which relates the objective values and the gradient norm. Then, by telescoping and the supermartingale convergence theorem, we can prove Theorems 2 and 3 (see Appendices F and G).

IV-D Understanding the Role of the Amplifying Exponent

In our algorithm, the amplifying exponent $\gamma$ is a key component to adjust the communication rate. From Theorems 2 and 3, it can be seen that, within $\gamma \in [1/2, 1]$, a larger $\gamma$ means faster convergence. However, since the transmitted value is $C(k^{\gamma} d_i^{(k)})$, we can see that a larger $\gamma$ leads to a larger transmitted magnitude, which may cause overflow errors (for example, type 'int8' in Matlab can only represent data within $[-128, 127]$). Hence, it is necessary to guarantee that $k^{\gamma} d_i^{(k)}$ does not grow too fast. Recalling Eq. (10) and Eq. (6) in Algorithm 2, the differential $d_i^{(k)}$ shrinks with the step-size while the amplification grows as $k^{\gamma}$; combining Definition 1 with the boundedness in Theorem 4, the expected magnitude of the transmitted value is therefore bounded. We state this result in the following proposition:

Proposition 5.

Under Assumptions 1-2, with $\gamma \le 1$, the transmitted value satisfies $\mathbb{E}\|k^{\gamma} d_i^{(k)}\| = O(k^{\gamma - \sigma})$, $\forall i$.

The insight from Proposition 5 is that, with $\gamma \le 1$, the growth speed of the transmitted value is slower than $O(\sqrt{k})$, which is not very fast.

V Numerical Results

In this section, we will present several numerical experiments to further validate the performance of ADC-DGD.

1) Effect of Compression: First, we compare ADC-DGD with some existing methods to show its convergence rate and communication efficiency. Consider a four-node network as shown in Fig. 3, with a global objective function that is the sum of four local objectives, one of which is non-convex while the rest are convex. The communication consensus matrix used in this experiment is shown in Fig. 4.

Fig. 3: A four-node network.
Fig. 4: The consensus matrix for the network in Fig. 3.

In our simulation, we compare our ADC-DGD with the conventional DGD and the multi-consensus DGD$^t$ scheme [21], for which we consider two choices of $t$. In ADC-DGD, the amplifying exponent $\gamma$ is fixed. We use two step-size strategies: 1) a constant step-size and 2) a diminishing step-size $\alpha_k = O(k^{-\sigma})$. We adopt the quantized operator in [24] as the compression operator. After compression, the values are integers; hence, they can be stored as type 'int16', which costs 2 bytes per entry, whereas the uncompressed values are stored as type 'double', costing 8 bytes per entry. The convergence results for one trial are illustrated in Fig. 5 and Fig. 6.
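As a quick back-of-the-envelope check of the per-message savings quoted above (2 bytes per quantized 'int16' entry versus 8 bytes per 'double' entry), with an illustrative dimension of our choosing:

```python
import numpy as np

d = 10_000                                    # illustrative problem dimension
full = np.zeros(d, dtype=np.float64).nbytes   # uncompressed 'double': 8 bytes/entry
comp = np.zeros(d, dtype=np.int16).nbytes     # quantized 'int16': 2 bytes/entry
print(full, comp, full / comp)                # 80000 20000 4.0 -> 4x smaller messages
```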

Fig. 5: Convergence comparisons between ADC-DGD, DGD, and DGD$^t$.
Fig. 6: Amount of exchanged information (bytes) vs. gradient norm.

From the simulations, we can see that: 1) with a fixed step-size, all algorithms converge to an error ball, and the radii for the conventional DGD and ADC-DGD are relatively smaller; this is because a larger $t$ in DGD$^t$ enlarges the error ball; 2) by using compression, the convergence process of ADC-DGD is relatively less smooth, but the compression noise does not affect convergence: with the same step-size, the conventional DGD and ADC-DGD have almost the same convergence rate; 3) by using diminishing step-sizes, the convergence of ADC-DGD becomes slower, but the objective value keeps decreasing; 4) comparing the amount of exchanged information, ADC-DGD with the fixed step-size converges the fastest, using only 2000 bytes. This shows that our algorithm is the most communication-efficient.

2) Effect of the Amplifying Exponent: Next, we show the effect of the amplifying exponent $\gamma$. As discussed in Section IV-D, with a small $\gamma$, the noise caused by compression could lead to slow convergence. On the other hand, with a large $\gamma$, the transmitted value could be too large and cause overflow, especially for the quantized compression operator. Here, we vary $\gamma$ and keep the rest of the parameters the same. For each $\gamma$, we repeat the algorithm over multiple independent trials and compute the average objective values, as well as the maximum transmitted value over all nodes in each iteration. The simulation results are shown in Figs. 7 and 8. We can see that, with a larger $\gamma$-value, the algorithm converges faster and the curve is smoother, while the transmitted values increase somewhat faster. In this example, a moderate $\gamma$ strikes a good balance between convergence and the maximum transmitted value.

Fig. 7: Convergence behaviors under different choices of $\gamma$.
Fig. 8: Growth of transmitted values vs. number of iterations.

3) Effect of Network Size: The following simulations indicate that our algorithm scales to large networks. In our simulation, we consider the 'circle' topology: each node connects with its two neighboring nodes to form a ring. For example, Fig. 9 shows a five-node circle. We vary the network size $N$ in our experiment. The local objectives are quadratic functions whose parameters are independently and randomly generated. For each value of $N$, we repeat multiple trials and compute the average gradient norm. The convergence results are shown in Fig. 10. It can be seen that our algorithm works well as the network size increases, demonstrating the scalability of ADC-DGD.

Fig. 9: The 5-node circle topology.
Fig. 10: The effect of network size.

VI Conclusion

In this paper, we considered the design of communication-efficient network consensus optimization algorithms for networks with slow communication rates. We proposed a new algorithm called amplified-differential compression decentralized gradient descent (ADC-DGD), which is based on compression to reduce communication costs. We investigated the convergence behavior of ADC-DGD on smooth but possibly non-convex objectives. We showed that: 1) by employing a fixed step-size $\alpha$, ADC-DGD converges at a sublinear ergodic rate until reaching an error ball of size $O(\alpha)$, provided the amplified parameter satisfies $\gamma \ge 1/2$; 2) with diminishing step-sizes, ADC-DGD enjoys a best convergence rate of $o(1/\sqrt{k})$ and converges to a stationary point almost surely. Consensus optimization with compressed information is an important and under-explored area. An interesting future topic is to generalize our ADC-DGD algorithmic framework to cases with local stochastic gradients, which could further lower the implementation complexity of ADC-DGD.

References

Appendix A Proof of Lemma 1

Without loss of generality, we prove the case of a one-dimensional objective. First, consider the case where $f$ reaches its minimum at $x^* = 0$ with $f(0) = 0$. By the convexity of $f$, the difference quotient is non-decreasing: for any $x \ge 1$, $\frac{f(x) - f(0)}{x - 0} \ge \frac{f(1) - f(0)}{1 - 0} = f(1)$, and by strict convexity $f(1) > f(0) = 0$. Hence, $f(x) \ge f(1)\, x$ for all $x \ge 1$. It is easy to obtain the same result for negative values: $f(x) \ge f(-1)\, |x|$ for all $x \le -1$, with $f(-1) > 0$. Therefore, $f$ grows at least linearly at infinity. Next, if the minimizer $x^*$ and the minimum value $f(x^*)$ are not zero, consider the transformation $g(x) \triangleq f(x + x^*) - f(x^*)$; then $0$ is the minimal solution of $g$, $g(0) = 0$, and $g$ maintains the strict convexity. From the above, we know that $g$ grows at least linearly at infinity, and therefore so does $f$. Denoting $c \triangleq \min\{f(1), f(-1)\}$ in the centered case, we have the following limits:

$$\liminf_{x \to \infty} \frac{f(x)}{x} \ge c > 0, \qquad \liminf_{x \to -\infty} \frac{f(x)}{|x|} \ge c > 0,$$

which implies that Assumption 2 holds. Similarly, we can show the multi-dimensional case by applying the same argument along every ray from the minimizer.