Randomization and quantization for average consensus

04/29/2018 ∙ by Bernadette Charron-Bost, et al.

A variety of problems in distributed control involve a networked system of autonomous agents cooperating to carry out some complex task in a decentralized fashion, e.g., orienting a flock of drones, or aggregating data from a network of sensors. Many of these complex tasks reduce to the computation of a global function of values privately held by the agents, such as the maximum or the average. Distributed algorithms implementing these functions should rely on limited assumptions on the topology of the network or the information available to the agents, reflecting the decentralized nature of the problem. We present a randomized algorithm for computing the average in networks with directed, time-varying communication topologies. With high probability, the system converges to an estimate of the average in linear time in the number of agents, provided that the communication topology remains strongly connected over time. This algorithm leverages properties of exponential random variables, which allows for approximating sums by computing minima. It is completely decentralized, in the sense that it does not rely on agent identifiers, or global information of any kind. Besides, the agents do not need to know their out-degree; hence, our algorithm demonstrates how randomization can be used to circumvent the impossibility result established in [1]. Using a logarithmic rounding rule, we show that this algorithm can be used under the additional constraints of finite memory and channel capacity. We furthermore extend the algorithm with a termination test, by which the agents can decide irrevocably in finite time - rather than simply converge - on an estimate of the average. This terminating variant works under asynchronous starts and yields linear decision times while still using quantized - albeit larger - values.




1. Introduction

The subject of this paper is the average consensus problem. We fix a finite set  of  autonomous agents. Each agent  has a scalar input value , and all agents cooperate to estimate, within some error bound , the average  of the inputs. They do so by maintaining a local variable , which they drive close to the average  by exchanging messages with neighboring agents. An algorithm achieves average consensus if, in each of its executions, each variable  gets sufficiently close to the average , namely , after a finite number of computation steps.

The study of this problem is motivated by a wide array of practical distributed applications, which either directly reduce to computing the average of well-chosen values, like sensor fusion [2, 3, 4], or use average computation as a key subroutine, like load balancing [5, 6]. Other examples of such applications include formation control [7], distributed optimization [5], and task assignment [8].

1.1. Contribution

In this paper, our focus is on the design of efficient algorithms for average consensus. Specifically, we are concerned with the convergence time, defined as the number of communication phases needed to get each  within the range . It clearly depends on various parameters, including the input values, the error bound , connectivity properties of the network, and the number  of agents.

The main contribution of this paper is a linear time algorithm in  that achieves average consensus in a networked system of anonymous agents, i.e., without identifiers, with a time-varying topology that is continuously strongly connected. It is a Monte Carlo algorithm in the sense that the agents make use of private random oracles and may compute a wrong estimate of the average, but with a typically small probability. We do not assume any stability or bidirectionality of the communication links, nor do we provide the agents with any global knowledge (like a bound on the size of the network) or knowledge of the number of their out-neighbors. We also show how, by adding an initial quantization phase, we can make the memory and bandwidth requirements grow with  only as .

This is to be considered in the light of the impossibility result stated in [1]: deterministic, anonymous agents communicating by broadcast and without knowledge of their out-neighbors cannot compute functions dependent on the multiplicity or the order of the input values. In particular, computing the average in this context requires providing the agents with knowledge of their out-degrees (or some equivalent information), or centralized control in the form of agent identifiers. In contrast, our algorithm shows that, using randomization, computing the average can be done in a purely decentralized fashion, without using the out-degrees, even on a time-varying communication topology.

1.2. Related works

Average consensus is a specific case of the general consensus problem, where the agents only need to agree on any value in the range of the input values. Natural candidates to solve this general problem are convex combination algorithms, where each agent repeatedly broadcasts its latest estimate , and then picks  in the range of its incoming values. For agent  at time , this takes the form of the update rule


where  sets the weight  to  if it has not received agent ’s value at round . To specify a convex combination algorithm amounts to describing how each agent  selects the positive weights . The evolution of the system is then determined by the initial values  and the stochastic weight matrices .
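As an illustration, the update rule above can be simulated with one concrete, admissible choice of weights, namely equal weights over the values an agent actually receives in a round. This particular weight selection, and the small directed ring used below, are our own assumptions for the sketch, not prescribed by the paper:

```python
def convex_combination_round(estimates, in_neighbors):
    """One synchronous round: each agent replaces its estimate with a
    convex combination (here: the plain mean) of the values it received.
    in_neighbors[i] must contain i itself, modeling the self-loop."""
    new = []
    for i, nbrs in enumerate(in_neighbors):
        received = [estimates[j] for j in nbrs]
        # equal weights 1/|nbrs|: one admissible choice of positive weights
        new.append(sum(received) / len(received))
    return new

# three agents on a fixed directed ring with self-loops
estimates = [0.0, 3.0, 6.0]
neighbors = [[0, 2], [1, 0], [2, 1]]
for _ in range(100):
    estimates = convex_combination_round(estimates, neighbors)
# the estimates contract toward a common value in the range of the inputs
assert max(estimates) - min(estimates) < 1e-9
assert 0.0 <= estimates[0] <= 6.0
```

Note that in this particular run the weight matrix happens to be doubly stochastic, so the common limit is the average; in general a convex combination algorithm only guarantees a limit in the range of the inputs.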

Convex combination algorithms have been extensively studied; see e.g., [9, 5, 10, 11, 12, 13, 14, 15, 16, 6]. The estimates  have been shown to converge to a common limit under various assumptions of agent influence and network connectivity [17, 12]. Unfortunately, convex combination algorithms suffer from poor time complexity in general: as shown in [18, 16], they may exhibit an exponentially large convergence time, even on a fixed communication topology.

However, some specific convex combination algorithms are known to converge in polynomial time, e.g., [14, 5, 19]. They all have in common that the Perron eigenvectors of the weight matrices are constant, which is indeed shown in [20] to guarantee convergence in  if there exists a positive lower bound  on all positive weights and if the network is continuously strongly connected. Polynomial bounds are essentially optimal, as no convex combination algorithm can achieve convergence at an earlier time than  on every topology [21].

While convex combination algorithms achieve asymptotic consensus under a positive lower bound on the positive weights and high enough connectivity, the limit is only guaranteed to lie in the range of the initial values and may differ from the average . For example, the linear-time consensus algorithm in [22] works over any dynamic topology that is continuously rooted, but it converges to a value that is not equal to  in general. In contrast, the convex combination algorithm in [5] computes the average by selecting weights such that all the weight matrices  are doubly stochastic. To ensure this condition, agents need to collect some non-local information about the network, which requires some link stability over time, namely stability of each link over three rounds.

In [6] and subsequent works, agents enrich the update rule (1) with a second order term:

The parameter  is usually a function of the spectral values of the communication graph, which are hard to compute in a distributed fashion. A notable exception is the algorithm proposed by Olshevsky in [10], where the weights are locally computable, and the second order factor only depends on a bound  on the size  of the network. Unfortunately, the latter algorithm assumes a fixed bidirectional topology and an initial consensus since all agents must agree on the bound . Moreover, its time complexity is linear in , which may be arbitrarily large.

Other quadratic or linear-time average consensus algorithms elaborating on the update rule (1) have been proposed in [23, 24, 25, 26]. All of these actually solve a stronger problem, in that they achieve consensus on the exact average  in finite time, but they are computationally intensive and highly sensitive to numerical imprecisions. Moreover, they are designed in the context of a fixed topology and some centralized control.

Average consensus algorithms built around the update rule (1) typically require bidirectional communication links with some stability and assume that agents have access to global information. This is to be expected, as they operate by broadcast and over anonymous networks, and thus have to bypass the impossibility result in [1]: they do so through the use, at least implicitly, of the out-degree of the agents.

Another example is to be found in the Push-Sum algorithm [27] in which agents make explicit use of their out-degrees in the messages they send. This method converges on fixed strongly connected graphs [28], and on continuously strongly connected dynamic graphs [29].
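Push-Sum admits a compact centralized simulation. The sketch below, with names of our own choosing, illustrates how the explicit use of out-degrees makes the ratio of the two running sums converge to the average on a fixed strongly connected digraph:

```python
def push_sum(values, out_neighbors, rounds):
    """Push-Sum sketch: each agent i keeps a pair (s_i, w_i), initially
    (x_i, 1), and each round sends (s_i/d_i, w_i/d_i) to each of its d_i
    out-neighbors (self-loop included); the ratio s_i/w_i converges to
    the average of the inputs."""
    n = len(values)
    s = list(values)
    w = [1.0] * n
    for _ in range(rounds):
        new_s, new_w = [0.0] * n, [0.0] * n
        for i, nbrs in enumerate(out_neighbors):
            d = len(nbrs)  # out-degree, which Push-Sum assumes agents know
            for j in nbrs:
                new_s[j] += s[i] / d
                new_w[j] += w[i] / d
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]

# directed ring with self-loops; the true average is 2.0
est = push_sum([1.0, 2.0, 3.0], [[0, 1], [1, 2], [2, 0]], rounds=60)
assert all(abs(e - 2.0) < 1e-6 for e in est)
```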

Another way to circumvent the impossibility result in [1] consists in assuming unique agent identifiers: by tagging each initial value  with ’s identifier, at each step of the Flooding algorithm, the agents can compute the average of the input values that they have heard of so far, and thus compute the global average  after  communication steps when the topology is continuously strongly connected. Unfortunately, the price to pay for this simple average consensus algorithm is messages of size in  bits. By repeated leader election on the basis of agent identifiers and a shared bound on the network diameter, the quadratic-time algorithm in [30] also achieves average consensus, using message and memory sizes of only  bits over a fixed, strongly connected network.

Our approach is dramatically different from the ones sketched above: equipping the agents with private random oracles enables them to estimate the average with neither central control nor global information. In particular, our algorithm requires no global clock, and agents may start asynchronously. Communication links are no longer assumed to be bidirectional, and they may change arbitrarily over time, provided that the network remains permanently strongly connected.

Our algorithm leverages the fact that the minimum function can be easily computed in this general setting. By individually sampling exponential distributions with adequate rates and then computing the minimum of the random numbers so generated, agents can estimate the sum of the initial values and the size of the network, yielding an estimate of the average. This approach was first introduced in [31] for a gossip communication model, and later applied in [32] to the design of distributed counting algorithms in networked systems equipped with a global clock that delivers synchronous start signals to the agents.
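The minimum-based estimation principle can be checked numerically. The following sketch (names, the sample count K, and the seed are our own choices) estimates a sum from K independent minima of exponential samples, using the fact that the minimum of independent Exp(x_i) variables is itself exponential with rate equal to the sum of the x_i:

```python
import random

def estimate_sum(values, K, rng):
    """Estimate sum(values) from K independent minima of exponentials:
    min_i Exp(x_i) ~ Exp(sum_i x_i), whose mean is 1 / sum_i x_i, so
    K divided by the sum of K such minima estimates the sum."""
    minima = [min(rng.expovariate(x) for x in values) for _ in range(K)]
    return K / sum(minima)

rng = random.Random(42)
values = [1.0, 2.0, 3.0, 4.0]  # true sum is 10
est = estimate_sum(values, K=20000, rng=rng)
assert abs(est - 10.0) < 0.5  # within 5% with overwhelming probability
```

The relative error shrinks like 1/sqrt(K), which is why the algorithm replicates the random variables.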

The main features of some of the average consensus algorithms discussed above, including our own randomized algorithm, denoted , are summarized in Table 1.

Algorithm | Time | Message size | Restrictions
Flooding* |  |  | non-anonymous network
Ref. [5] |  |  | bidirectional topology; link stability over three rounds
Ref. [24]* |  |  | fixed and bidirectional topology; computationally intensive
Ref. [10] |  |  | fixed and bidirectional topology; a bound on the network size, known by all agents
Ref. [30]* |  |  | fixed topology; a bound on the network diameter, known by all agents; non-anonymous network
Algorithm  |  |  | Monte Carlo algorithm

  • * These algorithms compute the exact average

Table 1. Average consensus algorithms with continuous strong connectivity

1.3. Quantization

Most average consensus algorithms require agents to store and transmit real numbers. This assumption is unrealistic in digital systems, where agents have finite memory and communication channels have finite capacity. These constraints compel agents to use only quantized values.

Convex combination algorithms are not, in general, robust to quantization. However, those that compute the average using doubly stochastic influence matrices have been shown to degrade gracefully under several specific rounding schemes, either deterministic [5], where the degradation induced by rounding is bounded, or randomized [33], where the expected average in the network is kept constant.

Other methods elaborating on the update rule (1) have not, in general, been shown to behave well under rounding, such as the second-order algorithm in [34] or the various protocols in [25, 26, 23, 24]. In this context, one important feature of our algorithm is that it can be adapted to work with quantized values, following a logarithmic rounding scheme similar to the one in [35]. With this rounding rule, each quantized value can be represented using  bits.

1.4. Irrevocable decisions

The specification of average consensus can be strengthened by requiring that agents irrevocably decide on some good estimates of the average  in finite time. In other words, agents are required to detect when consensus on  has been reached within a given error margin. This is desirable for many applications, e.g., when the average consensus algorithm is to be used as a subroutine that returns an estimate of  to the calling program. Various decision strategies have been developed for fixed topologies, e.g., [36, 37, 38].

Here, we design a decision test that uses the approximate value of  computed on-line and that incorporates the randomized firing scheme developed in [39] to tolerate asynchronous starts. In this way, we show that the agents may still safely decide in linear time, but at the cost of larger messages. Moreover, this variant achieves exact consensus, since all the agents decide on the same estimate of .


The rest of this paper is organized as follows: in Section 2, we introduce our computational model and present some preliminary technical lemmas; we present our main algorithm in Section 3, its quantized version in Section 4, and its variant with decision tests in Section 5. Finally, we present concluding remarks in Section 6.

2. Preliminaries

2.1. Computation model

We consider a networked system with a finite set  of  agents, and assume a distributed computational model in the spirit of the Heard-Of model [40]. Computation proceeds in rounds: in a round, each agent broadcasts a message, receives messages from some agents, and finally updates its state according to some local algorithm. Rounds are communication closed, in the sense that a message sent at round  can only be received during round . Communications that occur in a given round  are thus modeled by a directed graph : the edge  is in  if agent  receives the message sent by agent  at round . Any agent can communicate with itself instantaneously, so we assume a self-loop at each node in all of the graphs .

We consider randomized distributed algorithms which execute an infinite sequence of rounds, and in which agents have access to private and independent random oracles. Thus, an execution of a randomized algorithm is entirely determined by the collection of input values, the sequence of directed graphs , called a dynamic graph, and the outputs of the random oracles. We assume that the dynamic graph is managed by an oblivious adversary that has no access to the outcomes of the random oracles.

We design algorithms to compute the average of initial values in a dynamic network. Consider an algorithm where the local variable  is used to estimate the average. We say that an execution of this algorithm -computes the average if there is a round  such that, for all subsequent rounds , all estimates are within distance  of the average  of the input values, namely  for all . The convergence time of this execution is the smallest such round  if it exists.

2.2. Directed graphs and dynamic graphs

Let  be a directed graph, with a finite set of nodes  of cardinality  and a set of edges . There is a path from one node to another either if the two nodes coincide or are joined by an edge, or if some edge leads from the first node to an intermediate node from which there is a path to the second. If every pair of nodes is connected by a path, then  is said to be strongly connected. The dynamic graph  is said to be continuously strongly connected if all the directed graphs  are strongly connected.

The product graph  of two directed graphs  and  is defined as , with . Let us recall that the product of  directed graphs on  that are all strongly connected and have self-loops at each node is the complete graph. It follows that, in every execution of the algorithm Min — given in Algorithm 1 —  over a continuously strongly connected dynamic graph, all agents have computed the smallest of the input values at the end of  rounds. The algorithm Min is a fundamental building block of our average consensus algorithms, and the latter observation will drive their convergence times.

3: for  do
4:     Send .
5:     Receive  from neighbors.
6:     Update  to the minimum of its current value and the received values.
7: end for
Algorithm 1 The algorithm Min, code for agent 
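A centralized simulation of Min over a time-varying topology may look as follows (the graph encoding as per-round edge sets is our own convention). It illustrates the observation above: after a number of strongly connected rounds equal to the number of agents minus one, every agent holds the global minimum:

```python
def min_rounds(values, graphs):
    """Run the Min building block: in each round every agent broadcasts
    its current value and keeps the minimum of what it holds and what it
    receives. graphs is a list of edge sets; edge (j, i) means agent i
    hears agent j in that round."""
    m = list(values)
    for edges in graphs:
        new = list(m)  # self-loop: each agent keeps its own value
        for (j, i) in edges:
            new[i] = min(new[i], m[j])
        m = new
    return m

# a time-varying, per-round strongly connected topology on 3 agents
ring_a = {(0, 1), (1, 2), (2, 0)}
ring_b = {(0, 2), (2, 1), (1, 0)}
result = min_rounds([5, 1, 7], [ring_a, ring_b])
# after 2 strongly connected rounds, everyone holds the global minimum
assert result == [1, 1, 1]
```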

2.3. Exponential random variables

For any positive real number , we denote by  a random variable following an exponential distribution with rate . One easily verifies the following property of exponential random variables.

Lemma 1.

Let be independent exponential random variables with rates , respectively. Let  be the minimum of . Then, follows an exponential distribution with rate .

The accuracy of our algorithm depends on some parameter  whose value is determined by the bound in the following lemma, which is an application of the Cramér-Chernoff method (see for instance [41], sections 2.2 and 2.4).

Lemma 2.

Let be i.i.d. exponential random variables with rate , and let . Then,

3. Randomized algorithm

In this section, we assume infinite bandwidth channels and infinite storage capabilities. For this model, we present a randomized algorithm  and prove that all agents compute the same value which, with high probability, is a good estimate of the average of the initial values.

The underlying idea is that each agent computes an estimate of , the sum of the input values, and an estimate of , the size of the network. They use the ratio of the two estimates as an estimate of the average .

The computation of the estimates of  and  is based on Lemma 1: each agent  samples two random numbers from two exponential distributions, with respective rates  and . Then agent  computes, in the variables  and , the two global minima of the random numbers so generated, using the algorithm Min. As recalled in Section 2.2, this takes at most  rounds when the dynamic graph is continuously strongly connected. Then  and  provide estimates of  and , respectively.

The probabilistic analysis requires all the input values to be at least equal to one. To overcome this limitation, we assume that the agents know some pre-defined interval in which all the input values lie and we apply a reduction to the case by simple translations of the inputs.

We then elaborate on the above algorithmic scheme to decrease the probability of incorrect executions, i.e., executions with errors in the estimates that are greater than . We replicate each random variable  times, and each node starts with the two vectors  and , instead of the sole variables  and . Using the Cramér-Chernoff bound given in Lemma 2, we choose the parameter  in terms of the maximal admissible error , the probability  of incorrect executions, and the amplitude  of the input values; namely, we set .

The pseudocode of the algorithm  is given in Algorithm 2.

4: Generate  random numbers from an exponential distribution of rate .
6: Generate  random numbers from an exponential distribution of rate .
8: In each round do
9:     Send .
10:     Receive  from neighbors.
11:     for  do
12:         Update each entry of the two vectors to the minimum of the corresponding received entries.
14:     end for
Algorithm 2 The algorithm , code for agent 
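Putting the pieces together, the estimate that the agents converge to can be sketched centrally: after the Min phase, every agent holds the component-wise minima of all agents' samples and forms the same ratio of the resulting sum and size estimates. Function names, the sample count K, and the seed below are our own choices:

```python
import random

def randomized_average(values, K, rng):
    """Centralized sketch of the common estimate: K minima of Exp(x_i)
    samples estimate the sum S, K minima of Exp(1) samples estimate the
    network size n, and their ratio estimates the average."""
    n = len(values)
    # component-wise minima over all agents, as computed by Min
    m = [min(rng.expovariate(x) for x in values) for _ in range(K)]
    w = [min(rng.expovariate(1.0) for _ in range(n)) for _ in range(K)]
    sum_hat = K / sum(m)   # estimates S = sum(values)
    size_hat = K / sum(w)  # estimates n
    return sum_hat / size_hat

rng = random.Random(1)
vals = [2.0, 4.0, 6.0, 8.0]  # true average is 5.0
avg_est = randomized_average(vals, K=20000, rng=rng)
assert abs(avg_est - 5.0) < 0.5
```

The sketch assumes positive inputs, matching the paper's reduction of the general case by translation.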
Theorem 1.

For any real numbers  and , in any continuously strongly connected network, the algorithm  -computes the average of initial values in  in at most  rounds with probability at least .


We first introduce some notation. If  is any variable of node , we denote by  the value of  at the end of round . We let


As an immediate consequence of the connectivity assumptions, for each node  and each index , we have  and  at every round . Hence, whenever .

We now show that  lies in the admissible range with probability at least . By considering the translated initial values , which all lie in , we obtain a reduction to the case .

So let us assume that . In this case, is positive, and we let . Since and , we have . This implies and . It follows that, if  and , i.e.,

then we have


Specializing Lemma 2 for  and , we get

where are i.i.d. exponential random variables of rate . In particular, and since, by Lemma 1, we have and with  and  that are both positive. The probability of the union of those two events is thus less than . Using the above argument and the fact that , we conclude that

which completes the proof. ∎

The convergence of the algorithm  in Theorem 1 is ensured by the assumption of continuous strong connectivity of the dynamic graph : the directed graph  is complete, and thus the entries  and  hold a global minimum at the end of round . This connectivity assumption may be dramatically weakened to eventual strong connectivity: for each round , there exists a round  such that  is the complete graph. Clearly, the algorithm  converges with any dynamic graph that is eventually strongly connected, but its finite convergence time is then unbounded.

An intermediate connectivity assumption has been proposed in [39]: a dynamic graph is strongly connected with delay if each product of  consecutive graphs  is strongly connected. Then, the convergence of the algorithm  is still guaranteed, but at the price of increasing the convergence time by a factor .

Conversely, the assumption of continuous strong connectivity can be strengthened in the following way: for any positive integer , a dynamic graph  is continuously -strongly connected if each directed graph  is -in-connected, i.e., any non-empty subset  has at least  incoming neighbors in . It can be shown that the product of  -in-connected directed graphs is complete [39]. Hence, the assumption of continuous -connectivity results in a speedup by a factor .

4. Quantization

In this section, we present a variant of the algorithm  which, unlike the former, works under the additional constraint that agents can only store and transmit quantized values. This model is intended for networked systems with communication bandwidth and storage limitations. We incorporate this constraint into our randomized algorithm by requiring each agent , first, to quantize the random numbers it generates and, second, to broadcast only one entry of each of the two vectors  and  in each round.

The quantization scheme consists in rounding values down along a logarithmic scale, to the previous integer power of some pre-defined number greater than one. Exponential random variables, when rounded in this way, continue to follow concentration inequalities similar to those of Lemma 2. This makes logarithmic rounding appealing to use in conjunction with the algorithm , as we retain control over incorrect executions simply by increasing the number  of samples by a constant factor.

This quantization method does not offer an absolute bound on the space and bandwidth potentially required by the algorithm: the generated random numbers may be arbitrarily large or small, and therefore the number of quantization levels used across all executions is unbounded. Instead, we provide a probabilistic bound on the number of quantization levels required, that is, a bound that holds with high probability. All the random numbers that are generated lie in some pre-defined interval  with high probability, and hence most executions of our algorithm require a pre-defined number  of quantization levels. In each of these “good” executions, random numbers can be represented efficiently, as  grows with .

This probabilistic guarantee for quantization could be turned into an absolute one by providing the agents with a bound . This is indeed the rounding scheme developed in [35], where each agent starts by normalizing the random numbers it generates before rounding them. Our quantization method provides a weaker guarantee, but it does not use any global information about the network.

In the following, our quantized algorithm is denoted ; its pseudocode is given in Algorithm 3. It uses the rounding function , where  is any positive number.
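The logarithmic rounding rule, which rounds a positive value down to the previous integer power of a base b greater than one, can be sketched as follows (the function name and signature are our own):

```python
import math

def log_round(x, b):
    """Round x > 0 down to the previous integer power of b > 1, i.e.
    b ** floor(log_b(x)): the logarithmic quantizer described above.
    Note that math.log(x, b) is subject to floating-point error near
    exact powers of b."""
    return b ** math.floor(math.log(x, b))

assert log_round(5.0, 2.0) == 4.0   # 2^2 <= 5 < 2^3
assert log_round(0.3, 2.0) == 0.25  # 2^-2 <= 0.3 < 2^-1
```

Only the exponent needs to be stored or transmitted, which is what keeps the number of quantization levels, and hence the message size, logarithmic in the range of the values.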

We start the correctness proof of  with a preliminary lemma that gives, for , concentration inequalities for the logarithmically rounded exponential random variable .

Lemma 3.

Let be i.i.d. exponential random variables with rate , and let  and . Then,


Let  and . For any , we have , and hence

It follows that if , then

The result follows from the latter inequality and Lemma 2. ∎

5: Generate  random numbers from an exponential distribution of rate .
7: Generate  random numbers from an exponential distribution of rate .
10: In each round do
12:     Send .
13:     Receive  from neighbors.
16:     if  then
19:     end if
Algorithm 3 The algorithm , code for agent
Proposition 1.

For any real numbers  and , in any continuously strongly connected network, the algorithm  -computes the average of initial values that all lie in  with probability at least  in at most  rounds.


We let


The main loop of the algorithm  consists in running many instances of the algorithm Min, interleaving their executions so that the variables  and  are updated at rounds . Since the topology is continuously strongly connected,  and  for every round . Hence,  whenever .

Now we show that  lies in the admissible range with probability at least . For that, we proceed as in Theorem 1: we reduce the general case to the case  by translation.

Since the rounding function is non-decreasing, rounding and taking minima commute. Therefore, by Lemma 1,  and  are the quantized values of two exponential random variables with respective rates  and .

We let  and . Since  and , we have . This implies that, if and , then .

Using Lemma 3 with , , and , we obtain and . Therefore,

Proposition 2.

For any real numbers  and , in any continuously strongly connected network, each entry of the vectors  and  in algorithm  can be represented over  quantization levels, with probability at least .


If  with , then for any ,


In particular, when  denotes the interval with , we obtain that, for each agent  and each index , and . Since the random numbers  and  are all independent, we deduce that

If all the random numbers  and  lie in the interval , then they are rounded into the finite set , which means that  different quantization levels are sufficient to represent their logarithmically rounded values. Since , we have for the  roundings of values in the interval . Observing that and thus , we have . With the values of the parameters  and  as defined in the algorithm , lines 2 and 3, we finally obtain

Combining Propositions 1 and 2, we deduce the following correctness result for the algorithm .

Theorem 2.

For any real numbers  and , in any continuously strongly connected network, the algorithm  -computes the average of initial values in  in at most  rounds and using messages in bits, with probability at least .

As sketched above, the algorithm  differs from  in several respects. First, the length  of the random vectors is larger. This is because the concentration inequality in Lemma 3 is looser than that in Lemma 2. Moreover, we retain a safety margin of  to control executions in which some of the random numbers generated by the agents lie outside the admissible interval for quantization.

Another discrepancy is that the agents send only one entry of each of the two vectors  and  in each round of , while they send the complete vectors in the algorithm . This sequentialization, implemented in the algorithm , reduces the size of messages by a factor , but at the price of increasing the convergence time by the same factor .

The use of this strategy also entails a greater sensitivity to network connectivity than broadcasting entire vectors at each round. Indeed, the convergence of  and  is now decoupled from that of  and  for . Global convergence requires that, for each index , the graph products of the form  are all complete from some integer  onward. This condition is not implied, for instance, by continuous strong connectivity with delay , and indeed an adversary with knowledge of  can pick a dynamic graph that is -delayed continuously strongly connected and for which no progress is ever made for some entries of the vectors  and .

5. Decision

So far, we have been concerned only with the convergence of each estimate  to the average . However, when used as a subroutine, an average consensus algorithm may have to return an estimate of the average  to the calling program. In other words, the agents have to decide irrevocably on an estimate of .

Formally, we equip each agent with a decision variable , initialized to . Agent  is said to decide the first time it writes to . The corresponding problem is specified by the following three properties:

  • Termination: every agent eventually decides.

  • Irrevocability: once an agent has decided, its decision value never changes.

  • Validity: every decision value lies within  of the average  of the input values.
In this section, we seek to augment the algorithms  and to solve the above problem with high probability. Our approach relies on the fact that in both algorithms, each agent converges in finite time.

A simple solution consists in providing the agents with a bound : each agent  stops executing  and decides at round . From Theorem 1, it follows that termination, irrevocability, and validity hold with probability at least . A similar scheme can be applied to the algorithm  and decisions at round .

Unfortunately, this approach suffers from two major drawbacks. First, the time complexity of the resulting algorithms is arbitrarily large, as it depends on the quality of the bound . Second, the decision tests involve the current round number , and hence require that the agents have access to this value, or at least start executing their code simultaneously. Charron-Bost and Moran [39] recently showed that synchronous starts can be emulated in continuously strongly connected networks, but at the price of a firing phase of  additional rounds.

To circumvent the above two problems, we propose another approach that consists in using the estimate of  computed by the algorithms  and  in the decision tests, and in incorporating the randomized firing scheme developed in [39] to tolerate asynchronous starts.

Let us briefly recall their model and techniques. Each agent is initially passive, i.e., it does nothing and emits only null messages (heartbeats). Eventually it becomes active, i.e., it starts executing the algorithm. An active agent  maintains a local virtual clock with the following property: under the assumption of a dynamic network that is continuously strongly connected, the local clocks remain smaller than  as long as some agents are passive, and when all the agents are active, they get synchronized to some value at most equal to . Let  denote the last round with passive agents. At the end of round , all agents have the same estimate  of , which lies in  with high probability. Hence, guarantees that , and thus agent  can safely decide.

The algorithm , given in Algorithm 4, integrates this decision mechanism in the algorithm , with the rounding of algorithm .

Theorem 3.

For any real numbers  and , in any continuously strongly connected network, with probability at least , the algorithm  decides on values within  of the average of initial values in  in  rounds, using messages in  bits.


We first observe that all the variables , , and are stationary. Since the dynamic graph is continuously strongly connected, their final values do not depend on agent . Let be the first round from which all these variables are constant. Section 2.2 shows that


We let and .

From [39], we know that the counters  satisfy the following:

  1. ;

  2. .

Since is upper bounded by , the property (ii) entails that the agent  eventually decides. Hence, the termination property is ensured. Let denote the first round at which the agent  decides, i.e., the first round such that

Observing that deciding in  coincides with firing in the randomized algorithm in [39], the first part of the correctness proof of the latter algorithm shows that


since . Combined with (2), we obtain

Because of the definition of , decisions in  are thus irrevocable with probability at least .

The proof of the randomized firing algorithm also shows that if , then


In other words, all the agents decide by round with probability at least .

Moreover, we observe that the computation of the estimate of in  corresponds to the algorithm . Then, Proposition 1 shows that


since . It follows that the validity property holds with probability at least .

As opposed to , each agent  sends all the entries of and in the messages of the algorithm . Moreover, the above argument shows that the agent  can stop sending when it has decided. Hence, in correct executions where the agents decide in linear time in , the counters  can be represented over  bits.

Reasoning as in Proposition 2, each entry of the vectors  and  can be represented over  quantization levels with probability , where . Therefore, each message of  uses  bits with probability .

By the union bound over the latter four events, we obtain that all the agents in the algorithm  decide on values in the range by round using messages of size in  bits, with probability at least . ∎

4: Generate  random numbers from an exponential distribution of rate .
6: Generate  random numbers from an exponential distribution of rate .
10: In each round do
11:     Send  to all and receive one message from each in-neighbor.
12:     if at least one received message is null then
16:     end if
17:     for  do
20:     end for
22:     if  then