1 Introduction
In this work we study Byzantine consensus in asynchronous systems under the local broadcast model. In the local broadcast model [2, 9], a message sent by any node is received identically by all of its neighbors in the communication network, preventing a faulty node from transmitting conflicting information to different neighbors. Our recent work [6] has shown that in the synchronous setting, the network connectivity requirements for Byzantine consensus are lower under the local broadcast model than under the classical point-to-point communication model. Here we show that the same is not true in the asynchronous setting: the network requirements for Byzantine consensus stay the same under local broadcast as under point-to-point communication.
A classical result [5] shows that it is impossible to reach exact consensus in an asynchronous system even with a single crash failure. However, despite asynchrony, approximate consensus among the nodes in the presence of Byzantine faulty nodes is possible in networks with sufficiently many nodes and sufficiently high vertex connectivity [3]. Motivated by results in the synchronous setting [6], one might expect a lower connectivity requirement under the local broadcast model. In this work we show that, in fact, the necessary network conditions do not change from the point-to-point communication model.
2 System Model and Notation
We represent the communication network by an undirected graph G = (V, E). Each node knows the graph G. Each node is represented by a vertex in V, and we use the terms node and vertex interchangeably. Two nodes u and v are neighbors if and only if uv is an edge of G.
Each edge uv ∈ E represents a FIFO link between the two nodes u and v. When a message sent by node u is received by node v, node v knows that it was sent by node u. We assume the local broadcast model, wherein a message sent by a node u is received identically and correctly by each node v such that uv ∈ E (i.e., by each neighbor of u). (Our results apply even for the stronger model where messages must be received at the same time by all the neighbors.) We assume an asynchronous system where the nodes proceed at varying speeds, in the absence of a global clock, and messages sent by a node are received after an unbounded but finite delay. (Our results apply even for the stronger model where messages are received after a known bounded delay, as well as, with slight modifications to the proofs, to the case where message delay is unbounded but nodes have a global clock for synchronization.)
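To make the communication model concrete, the following sketch (our own illustration; the class and function names are not from the paper) simulates asynchronous local broadcast: a message is queued identically for every neighbor of the sender, so a faulty sender cannot equivocate, yet each neighbor may receive the message after a different finite delay.

```python
import heapq

# A minimal simulation (our own illustration, not from the paper) of
# asynchronous local broadcast: the SAME message goes to every neighbor
# of the sender, but delivery times may differ per neighbor.
class LocalBroadcastNetwork:
    def __init__(self, edges):
        self.nbrs = {}
        for u, v in edges:
            self.nbrs.setdefault(u, set()).add(v)
            self.nbrs.setdefault(v, set()).add(u)
        self.queue = []  # (delivery_time, receiver, sender, message)

    def broadcast(self, sender, message, delays):
        # Local broadcast: identical message queued for every neighbor;
        # only the (finite) delivery delay varies per receiver.
        for receiver in self.nbrs[sender]:
            heapq.heappush(self.queue,
                           (delays[receiver], receiver, sender, message))

    def deliver_next(self):
        # Deliver the pending message with the earliest delivery time.
        _, receiver, sender, message = heapq.heappop(self.queue)
        return receiver, sender, message

net = LocalBroadcastNetwork([("a", "c"), ("b", "c")])
net.broadcast("c", "hello", {"a": 1.0, "b": 5.0})
first = net.deliver_next()
second = net.deliver_next()
```

Both neighbors of c receive the identical message "hello"; asynchrony shows up only in the ordering of deliveries.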
A Byzantine faulty node may exhibit arbitrary behavior. There are n nodes in the system, of which at most f nodes may be Byzantine faulty, where f ≥ 1. We consider the approximate Byzantine consensus problem, where each of the n nodes starts with a real-valued input, with known upper and lower bounds U and μ such that every input lies in the interval [μ, U] and U > μ. Each node must output a real value satisfying the following conditions. (The case where the agreement constant ε below satisfies ε ≥ U − μ is trivial, and the case where f = 0 is not of interest.)

1) Agreement: The outputs of any two nonfaulty nodes must be within a fixed constant ε > 0 of each other.
2) Validity: The output of each nonfaulty node must be in the convex hull of the inputs of the nonfaulty nodes.
3) Termination: Each nonfaulty node must decide on its output in finite time, which can depend on U, μ, and ε.
Once a node terminates, it takes no further steps.
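Stated operationally, agreement and validity are simple predicates on the inputs and outputs of the nonfaulty nodes. The following sketch uses our own phrasing and names (termination, being a liveness property, is not captured by a state-based check):

```python
# Sketch (our own phrasing, not from the paper) of the agreement and
# validity conditions. The maps hold the inputs/outputs of the nonfaulty
# nodes only; eps is the agreement constant.
def check_agreement(outputs, eps):
    # Every pair of nonfaulty outputs must be within eps of each other.
    vals = list(outputs.values())
    return max(vals) - min(vals) <= eps

def check_validity(inputs, outputs):
    # Every nonfaulty output must lie in the convex hull (for reals,
    # the interval) spanned by the nonfaulty inputs.
    lo, hi = min(inputs.values()), max(inputs.values())
    return all(lo <= y <= hi for y in outputs.values())

inputs = {"u": 0.0, "v": 1.0, "w": 0.5}
outputs = {"u": 0.4, "v": 0.6, "w": 0.5}
ok = check_agreement(outputs, 0.25) and check_validity(inputs, outputs)
```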
3 Impossibility Results
In this section we show two impossibility results.
Theorem 3.1.
If there exists an approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph G with n nodes tolerating at most f Byzantine faulty nodes, then n ≥ 3f + 1.
Theorem 3.2.
If there exists an approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph G tolerating at most f Byzantine faulty nodes, then G is (2f + 1)-connected.
Proof of Theorem 3.1: We assume that G is a complete graph; if consensus cannot be achieved on a complete graph consisting of n nodes, then it clearly cannot be achieved on a partially connected graph consisting of n nodes. Suppose for the sake of contradiction that n ≤ 3f and there exists an algorithm Π that solves approximate Byzantine consensus in an asynchronous system under the local broadcast model. Then there exists a partition {A, B, F} of V such that |A| ≤ f, |B| ≤ f, and |F| ≤ f. Since n ≥ 2, we can ensure that both A and B are nonempty. Π specifies a procedure Π_u for each node u that describes u's state transitions.
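For illustration, such a three-way split always exists when 2 ≤ n ≤ 3f: put up to f nodes in one part, up to f of the rest in a second part, and the remainder (at most f nodes) in the third. A hypothetical helper, with names of our own choosing:

```python
# Illustrative helper (our own, not from the paper): split a vertex set V
# with 2 <= |V| <= 3f into three parts of size at most f each, the first
# two nonempty, as in the proof of Theorem 3.1.
def split(V, f):
    V = sorted(V)
    n = len(V)
    assert 2 <= n <= 3 * f
    a = min(f, n - 1)          # first part takes up to f, leaving >= 1 node
    b = min(f, n - a)          # second part takes up to f of the rest
    return V[:a], V[a:a + b], V[a + b:]   # remainder has at most f nodes

A, B, F = split(range(9), 3)
```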
We first create a network H to model the behavior of the nodes in G in two different executions, E1 and E2, which we will describe later. Figure 1 depicts H. The network H consists of two copies of each node in F, the sets of which we denote by F_1 and F_2, and a single copy of each of the remaining nodes. For each node u in V, we have the following cases to consider:

1) If u ∈ A, then there is a single copy of u in H. With a slight abuse of terminology, we denote the copy by u as well.
2) If u ∈ B, then there is a single copy of u in H. With a slight abuse of terminology, we denote the copy by u as well.
3) If u ∈ F, then there are two copies of u in H. We denote the two copies by u_1 ∈ F_1 and u_2 ∈ F_2.
For each edge uv ∈ E, we create edges in H as follows:
1) If u, v ∈ A ∪ B, then there is an edge between the corresponding copies of u and v in H.
2) If u ∈ A ∪ B and v ∈ F, then there is a single edge in H: the edge (u, v_1) if u ∈ A, and the edge (u, v_2) if u ∈ B.
3) If u, v ∈ F, then there is an edge (u_1, v_1) and an edge (u_2, v_2) in H.
Note that the edges in G and H are both undirected. Observe that the structure of H ensures the following property: for each edge uv in the original graph G, each copy of u receives messages from at most one copy of v in H. This allows us to create an algorithm for H corresponding to Π by having each copy of node u run Π_u.
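The construction of H can be carried out mechanically. The sketch below (our own illustration, not the paper's code) builds H for a complete graph on A ∪ B ∪ F following the three edge rules, and checks the stated property that each copy is adjacent to at most one copy of any original node:

```python
from itertools import combinations

# Our own illustration (not the paper's code): build H for a complete graph
# on A | B | F. Nodes in A and B get one copy; each node u in F gets copies
# (u, 1) and (u, 2); A attaches to the first copies and B to the second.
def build_H(A, B, F):
    nodes = set(A) | set(B) | {(u, i) for u in F for i in (1, 2)}
    edges = set()
    for u, v in combinations(sorted(set(A) | set(B) | set(F)), 2):
        if u in F and v in F:
            edges.add(((u, 1), (v, 1)))          # rule 3: copy-wise edges
            edges.add(((u, 2), (v, 2)))
        elif u in F or v in F:
            x, g = (v, u) if u in F else (u, v)  # x in A or B, g in F
            edges.add((x, (g, 1)) if x in A else (x, (g, 2)))  # rule 2
        else:
            edges.add((u, v))                    # rule 1: both in A | B
    return nodes, edges

def hears_many(nodes, edges):
    # True iff some copy is adjacent to MORE than one copy of one original.
    for n in nodes:
        seen = set()
        for e in edges:
            if n in e:
                other = e[0] if e[1] == n else e[1]
                orig = other[0] if isinstance(other, tuple) else other
                if orig in seen:
                    return True
                seen.add(orig)
    return False

nodes, edges = build_H({"a"}, {"b"}, {"f1", "f2", "f3"})
ok = not hears_many(nodes, edges)
```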
The nodes in F_2 start off in a crashed state and never take any steps. The nodes in F_1 are “slow” and start taking steps after time T, where the value of T will be chosen later.
Consider an execution E_H of the above algorithm on H as follows. Each node in A ∪ F_1 has input U and each node in B ∪ F_2 has input μ. Observe that it is not guaranteed that the nodes in H will satisfy any of the conditions of approximate Byzantine consensus, including the termination property. We will show that the algorithm does indeed terminate, but that the outputs of the nodes do not satisfy the validity condition, which will give us the desired contradiction. We use E_H to describe two executions E1 and E2 of Π on the original graph G as follows.

E1: F is the set of faulty nodes, and these nodes crash immediately at the start of the execution. Each node in B has input μ while all other nodes have input U. Since Π solves approximate Byzantine consensus on G, the nodes in A ∪ B reach agreement and terminate within some finite time, without receiving any messages from the nodes in F. We set the delay T above for F_1 to be this time. Since ε < U − μ and all outputs of the nodes in A ∪ B are within ε of each other, either none of these outputs equals μ or none equals U. WLOG we assume that no output equals U. (For the other case, we can switch the faulty set in E2 to A and change the input of the nonfaulty nodes to μ.) Note that the behavior of the nonfaulty nodes in A and B for the first T time units is modeled by the corresponding (copies of) nodes in H, while the behavior of the (crashed) faulty nodes is captured by F_2.

E2: B is the set of Byzantine faulty nodes. A faulty node broadcasts the same messages as the corresponding node in H does in execution E_H. All other nodes are nonfaulty and have input U; the nodes in F are slow and take their first steps after time T. The outputs of the nonfaulty nodes will be described later. The behavior of the nodes (both faulty and nonfaulty) in A and B is modeled by the corresponding (copies of) nodes in H, while the behavior of the (nonfaulty) nodes in F is captured by F_1.
Due to the behavior of the nodes in A and B in E1, each of the corresponding copies in H decides on a value distinct from U and terminates within time T in execution E_H.
Therefore, the behavior of the nodes in A and B in E2 is completely captured by the corresponding copies in H.
It follows that in E2, the nodes in A have outputs other than U.
However, all nonfaulty nodes have input U in E2, so by validity every nonfaulty output must equal U.
Recall that, by construction, A is nonempty.
This violates validity, a contradiction.
Proof of Theorem 3.2: Suppose for the sake of contradiction that G is not (2f + 1)-connected and there exists an algorithm Π that solves approximate Byzantine consensus in an asynchronous system under the local broadcast model on G. Then there exists a vertex cut C of G of size at most 2f, with a partition {A, B} of V \ C such that A and B (both nonempty) are disconnected in G − C (so there is no edge between a node in A and a node in B). Since |C| ≤ 2f, there exists a partition {C_1, C_2} of C such that |C_1| ≤ f and |C_2| ≤ f. Π specifies a procedure Π_u for each node u that describes u's state transitions.
We first create a network H to model the behavior of the nodes in G in three different executions, E1, E2, and E3, which we will describe later. Figure 2 depicts H. The network H consists of three copies of each node in C_1, two copies of each node in A and in B, and a single copy of each node in C_2. We denote the three sets of copies of C_1 by C_1^1, C_1^2, and C_1^3. We denote the two sets of copies of A (resp. B) by A_1 and A_2 (resp. B_1 and B_2). For each edge uv ∈ E, we create edges in H as follows:

1) If u, v ∈ A (resp. B), then there are two copies of each of u and v, namely u_1, u_2 and v_1, v_2. There is an edge (u_1, v_1) and an edge (u_2, v_2) in H.
2) If u, v ∈ C_1, then there are three copies of each of u and v, namely u_1, u_2, u_3 and v_1, v_2, v_3. There are edges (u_1, v_1), (u_2, v_2), (u_3, v_3) in H.
3) If u, v ∈ C_2, then there is an edge between the corresponding (single) copies in H.
4) If u ∈ C_1 and v ∈ C_2, then there are three copies u_1, u_2, and u_3 of u, and a single copy of v. There is an undirected edge (u_3, v) and a directed edge (v, u_2) in H.
5) If u ∈ A and v ∈ C_1, then there are two copies u_1 and u_2 of u, and three copies v_1, v_2, and v_3 of v. There are two undirected edges (u_1, v_2) and (u_2, v_3) in H.
6) If u ∈ B and v ∈ C_1, then there are two copies u_1 and u_2 of u, and three copies v_1, v_2, and v_3 of v. There are two undirected edges (u_1, v_2) and (u_2, v_3) in H.
7) If u ∈ A and v ∈ C_2, then there are two copies u_1 and u_2 of u, and a single copy of v. There is an undirected edge (u_2, v) and a directed edge (v, u_1) in H.
8) If u ∈ B and v ∈ C_2, then there are two copies u_1 and u_2 of u, and a single copy of v. There is an undirected edge (u_2, v) and a directed edge (v, u_1) in H.
H has some directed edges; we describe their behavior next. We denote a directed edge from u to v as (u, v). All message transmissions in H are via local broadcast, as follows. When a node u in H transmits a message, the following nodes receive this message identically: each node with whom u has an undirected edge, and each node to whom there is an edge directed away from u. Note that a directed edge (u, v) behaves differently for u and v: all messages sent by u are received by v, but no message sent by v is received by u. Observe that, with this behavior of directed edges, the structure of H ensures the following property: for each edge uv in the original graph G, each copy of u receives messages from at most one copy of v in H. This allows us to create an algorithm for H corresponding to Π by having each copy of node u run Π_u.
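The "who hears whom" relation induced by the undirected and directed edges can be tabulated for a toy instance and the key property checked directly. Below is our own encoding (hypothetical node names, and one possible wiring consistent with the construction described above) for a single node per set with every allowed adjacency present:

```python
# Our own encoding (toy instance, hypothetical names) of the hearing
# relation in a Theorem 3.2-style network: sets A = {a}, B = {b},
# C1 = {p}, C2 = {q}, with edges a-p, a-q, b-p, b-q, p-q in G.
# Copies: a1, a2, b1, b2, p1 (crashed), p2, p3, and a single copy q.
hears = set()  # (receiver, sender) pairs

def undirected(x, y):
    hears.add((x, y))
    hears.add((y, x))

def directed(src, dst):
    # dst hears src, but src does not hear dst
    hears.add((dst, src))

# A-C1 and B-C1 edges become two undirected edges into the slow copies.
undirected("a1", "p2"); undirected("a2", "p3")
undirected("b1", "p2"); undirected("b2", "p3")
# A-C2, B-C2, and C1-C2 edges become one undirected plus one directed edge.
undirected("a2", "q"); directed("q", "a1")
undirected("b2", "q"); directed("q", "b1")
undirected("p3", "q"); directed("q", "p2")

ORIGINAL = {"a1": "a", "a2": "a", "b1": "b", "b2": "b",
            "p1": "p", "p2": "p", "p3": "p", "q": "q"}

def max_copies_heard():
    # Largest number of copies of any single original node that one copy
    # can hear; the construction should keep this at 1.
    worst = 0
    for x in ORIGINAL:
        per_orig = {}
        for rcv, snd in hears:
            if rcv == x:
                o = ORIGINAL[snd]
                per_orig[o] = per_orig.get(o, 0) + 1
        worst = max(worst, max(per_orig.values(), default=0))
    return worst
```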
The nodes in C_1^1 start off in a crashed state and never take any steps. The nodes in C_1^2 and C_1^3 are “slow” and start taking steps after time T, where the value of T will be chosen later.
Consider an execution E_H of the above algorithm on H as follows. Each node in A_1 ∪ B_1 ∪ C_1^2 has input μ and all other nodes have input U. Observe that it is not guaranteed that the nodes in H will satisfy any of the conditions of approximate Byzantine consensus, including the termination property. We will show that the algorithm does indeed terminate, but that the nodes do not reach agreement, which will be useful in deriving the desired contradiction. We use E_H to describe three executions E1, E2, and E3 of Π on the original graph G as follows.

E1: C_1 is the set of faulty nodes, and these nodes crash immediately at the start of the execution. Each node in A has input μ while all other nodes have input U; the nonfaulty nodes in C_2 are slow and take their first steps after time T. Since Π solves approximate Byzantine consensus on G, the nodes in A ∪ B reach agreement and terminate within some finite time, without receiving any messages from the nodes in C. We set the delay T above for C_1^2 and C_1^3 to be this time. The outputs of the nonfaulty nodes will be described later. Note that the behavior of the nonfaulty nodes in A, B, and C_2 for the first T time units is modeled by the corresponding (copies of) nodes in A_1, B_2, and the single copies of C_2 respectively, while the behavior of the (crashed) faulty nodes is captured by C_1^1.

E2: C_2 is the set of faulty nodes. A faulty node broadcasts the same messages as the corresponding node in H does in execution E_H. All nonfaulty nodes have input μ; the nodes in C_1 are slow and take their first steps after time T. The behavior of the nonfaulty nodes in A, B, and C_1 is modeled by the corresponding (copies of) nodes in A_1, B_1, and C_1^2 respectively, while the behavior of the faulty nodes is captured by the copies of C_2. Since Π solves approximate Byzantine consensus on G, and every nonfaulty input equals μ, validity implies that the nodes in A ∪ B ∪ C_1 decide on output μ.

E3: C_2 is the set of faulty nodes. A faulty node broadcasts the same messages as the corresponding node in H does in execution E_H. All nonfaulty nodes have input U; the nodes in C_1 are slow and take their first steps after time T. The behavior of the nonfaulty nodes in A, B, and C_1 is modeled by the corresponding (copies of) nodes in A_2, B_2, and C_1^3 respectively, while the behavior of the faulty nodes is captured by the copies of C_2. Since Π solves approximate Byzantine consensus on G, and every nonfaulty input equals U, validity implies that the nodes in A ∪ B ∪ C_1 decide on output U.
Due to the behavior of the nodes in A and B in E1, the corresponding copies in A_1 and B_2 decide on an output within time T in execution E_H. Therefore, the behavior of the nodes in A and in B in E1 is completely captured by the corresponding nodes in A_1 and B_2 in H. Now, due to the output of the nodes in A in E2, the nodes in A_1 output μ in E_H. Similarly, due to the output of the nodes in B in E3, the nodes in B_2 output U in E_H. It follows that in E1, the nodes in A have output μ while the nodes in B have output U. Recall that, by construction, both A and B are nonempty. Since ε < U − μ, this violates agreement, a contradiction.
4 Summary
In [6] we showed that the network requirements for Byzantine consensus in synchronous systems are lower under the local broadcast model than under the point-to-point communication model. One might expect a similar improvement in the asynchronous setting as well. In this work, we have presented two impossibility results, Theorems 3.1 and 3.2, which show that local broadcast does not improve the network requirements in asynchronous systems.
References
 [1] H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations and Advanced Topics. John Wiley & Sons, Inc., USA, 2004.
 [2] V. Bhandari and N. H. Vaidya. On reliable broadcast in a radio network. In Proceedings of the Twenty-fourth Annual ACM Symposium on Principles of Distributed Computing, PODC ’05, pages 138–147, New York, NY, USA, 2005. ACM.
 [3] D. Dolev, N. A. Lynch, S. S. Pinter, E. W. Stark, and W. E. Weihl. Reaching approximate agreement in the presence of faults. J. ACM, 33(3):499–516, May 1986.
 [4] M. J. Fischer, N. A. Lynch, and M. Merritt. Easy impossibility proofs for distributed consensus problems. Distributed Computing, 1(1):26–39, Mar 1986.
 [5] M. J. Fischer, N. A. Lynch, and M. S. Paterson. Impossibility of distributed consensus with one faulty process. Technical report, Massachusetts Institute of Technology, Laboratory for Computer Science, Cambridge, MA, USA, 1982.
 [6] M. S. Khan, S. S. Naqvi, and N. H. Vaidya. Exact byzantine consensus on undirected graphs under local broadcast model. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC ’19, pages 327–336, New York, NY, USA, 2019. ACM.
 [7] M. S. Khan, S. S. Naqvi, and N. H. Vaidya. Exact byzantine consensus on undirected graphs under local broadcast model. CoRR, abs/1903.11677, 2019.
 [8] M. S. Khan and N. H. Vaidya. Byzantine consensus under local broadcast model: Tight sufficient condition. CoRR, abs/1901.03804, 2019.
 [9] C.-Y. Koo. Broadcast in radio networks tolerating byzantine adversarial behavior. In Proceedings of the Twenty-third Annual ACM Symposium on Principles of Distributed Computing, PODC ’04, pages 275–282, New York, NY, USA, 2004. ACM.