Asynchronous Byzantine Consensus on Undirected Graphs under Local Broadcast Model

09/04/2019 · by Muhammad Samir Khan, et al. · Georgetown University · University of Illinois at Urbana-Champaign

In this work we look at Byzantine consensus in asynchronous systems under the local broadcast model. In the local broadcast model, a message sent by any node is received identically by all of its neighbors in the communication network, preventing a faulty node from transmitting conflicting information to different neighbors. Our recent work has shown that in the synchronous setting, the network connectivity requirements for Byzantine consensus are lower under the local broadcast model than under the classical point-to-point communication model. Here we show that the same is not true in the asynchronous setting: the network requirements for Byzantine consensus stay the same under local broadcast as under the point-to-point communication model.


1 Introduction

In this work we look at Byzantine consensus in asynchronous systems under the local broadcast model. In the local broadcast model [2, 9], a message sent by any node is received identically by all of its neighbors in the communication network, preventing a faulty node from transmitting conflicting information to different neighbors. Our recent work [6] has shown that in the synchronous setting, the network connectivity requirements for Byzantine consensus are lower under the local broadcast model than under the classical point-to-point communication model. Here we show that the same is not true in the asynchronous setting: the network requirements for Byzantine consensus stay the same under local broadcast as under the point-to-point communication model.

A classical result [5] shows that it is impossible to reach exact consensus in an asynchronous system even with a single crash failure. However, despite asynchrony, approximate Byzantine consensus among n nodes in the presence of at most f Byzantine faulty nodes is possible in networks with vertex connectivity at least 2f + 1 and n ≥ 3f + 1 [3]. Motivated by our results in the synchronous setting [6], one might expect a lower connectivity requirement under the local broadcast model. In this work we show that, in fact, the network conditions do not change from the point-to-point communication model.
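For context, the algorithmic side of these results is iterative: each node repeatedly exchanges values with its neighbors and applies a fault-tolerant averaging step until non-faulty values are within ε of each other. The sketch below is a minimal illustration of the standard trim-and-average update rule that such algorithms build on; the function name and interface are our own, not taken from [3].

```python
# A minimal sketch of the trim-and-average update rule used by iterative
# approximate agreement algorithms; names and interface are illustrative only.
def trimmed_mean_update(received_values, f):
    """One local update: sort the values heard from other nodes, discard the
    f smallest and f largest (a Byzantine value survives the trim only if it
    lies between values sent by non-faulty nodes), and average the rest."""
    assert len(received_values) > 2 * f, "need more than 2f values to trim"
    trimmed = sorted(received_values)[f:len(received_values) - f]
    return sum(trimmed) / len(trimmed)

# Example: with f = 1, a single outlier planted by a faulty node is discarded.
print(trimmed_mean_update([0.0, 0.5, 0.5, 0.5, 100.0], f=1))  # -> 0.5
```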

2 System Model and Notation

We represent the communication network by an undirected graph G = (V, E). Each node knows the graph G. Each node is represented by a vertex of G, and we use the terms node and vertex interchangeably. Two nodes u and v are neighbors if and only if {u, v} is an edge of G.

Each edge {u, v} represents a FIFO link between the two nodes u and v. When a message sent by node u is received by node v, node v knows that it was sent by node u. We assume the local broadcast model wherein a message sent by a node u is received identically and correctly by each node v such that {u, v} ∈ E (i.e., by each neighbor of u). (Our results apply even for the stronger model where messages must be received at the same time by all the neighbors.) We assume an asynchronous system where the nodes proceed at varying speeds, in the absence of a global clock, and messages sent by a node are received after an unbounded but finite delay. (Our results apply even for the stronger model where messages are received after a known bounded delay, as well as, with slight modifications to the proofs, to the case where message delay is unbounded but nodes have a global clock for synchronization.)
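The two modeling assumptions that the proofs exploit (identical delivery to all neighbors, and adversarially chosen but finite delays) can be made concrete in a small event-driven sketch. Everything here, including the class name and scheduler interface, is our own illustration rather than an API from the paper.

```python
import heapq

# Illustrative model of asynchronous local broadcast (names ours). A sender,
# faulty or not, cannot equivocate: every neighbor is handed the same message.
# The scheduler chooses an arbitrary finite delivery delay for each neighbor.
class LocalBroadcastNetwork:
    def __init__(self, neighbors):
        self.neighbors = neighbors   # dict: node -> iterable of neighbor ids
        self.pending = []            # min-heap of (delivery_time, seq, dst, src, msg)
        self.seq = 0                 # insertion counter used to break ties

    def broadcast(self, src, msg, now, delay_for):
        for dst in self.neighbors[src]:
            delay = delay_for(dst)   # adversarial but finite
            heapq.heappush(self.pending, (now + delay, self.seq, dst, src, msg))
            self.seq += 1
        # (Per-link FIFO additionally requires delay choices preserving send order.)

    def deliver_next(self):
        """Pop the earliest pending delivery: (time, seq, dst, src, msg)."""
        return heapq.heappop(self.pending) if self.pending else None

# Example: a scheduler that delays every delivery from node 'u' by 5 time units:
# net = LocalBroadcastNetwork({'u': ['v', 'w'], 'v': ['u', 'w'], 'w': ['u', 'v']})
# net.broadcast('u', 'hello', now=0.0, delay_for=lambda dst: 5.0)
```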

A Byzantine faulty node may exhibit arbitrary behavior. There are n nodes in the system, of which at most f may be Byzantine faulty, where n ≥ 2 and f ≥ 1. (The case f = 0 is trivial, and the case n = 1 is not of interest.) We consider the ε-approximate Byzantine consensus problem, where each of the n nodes starts with a real valued input, with known lower and upper bounds L and U such that L < U and ε < U − L. Each node must output a real value satisfying the following conditions.

  1) ε-Agreement: The outputs of any two non-faulty nodes must be within a fixed constant ε > 0 of each other.

  2) Validity: The output of each non-faulty node must be in the convex hull of the inputs of the non-faulty nodes.

  3) Termination: All non-faulty nodes must decide on their output in finite time, which can depend on n, ε, and the bounds L and U.

Once a node terminates, it takes no further steps.
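For concreteness, the three conditions can be phrased as a single predicate over the inputs and outputs of an execution. The checker below is our own illustration; since inputs are reals, the convex hull in the validity condition is just the interval between the minimum and maximum non-faulty inputs.

```python
def satisfies_consensus_conditions(inputs, outputs, faulty, epsilon):
    """Check eps-agreement, validity, and termination for non-faulty nodes.
    `inputs` maps every node to its input; `outputs` maps each decided node
    to its output; `faulty` is the set of faulty node ids (all names ours)."""
    honest = [v for v in inputs if v not in faulty]
    if any(v not in outputs for v in honest):        # Termination: all decided
        return False
    h_in = [inputs[v] for v in honest]
    h_out = [outputs[v] for v in honest]
    agreement = max(h_out) - min(h_out) <= epsilon   # eps-Agreement
    validity = all(min(h_in) <= o <= max(h_in) for o in h_out)  # Validity
    return agreement and validity
```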

3 Impossibility Results

In this section we show two impossibility results.

Theorem 3.1.

If there exists an ε-approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph G tolerating at most f Byzantine faulty nodes, then n ≥ 3f + 1.

Theorem 3.2.

If there exists an ε-approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph G tolerating at most f Byzantine faulty nodes, then G is (2f + 1)-connected.

Both proofs follow the state machine based approach [1, 3, 4].

Proof of Theorem 3.1:   We assume that G is a complete graph; if consensus cannot be achieved on a complete graph consisting of n nodes, then it clearly cannot be achieved on a partially connected graph consisting of n nodes. Suppose for the sake of contradiction that n ≤ 3f and there exists an algorithm Π that solves ε-approximate Byzantine consensus in an asynchronous system under the local broadcast model. Then there exists a partition {A, B, C} of V such that |A| ≤ f, |B| ≤ f, and |C| ≤ f. Since n ≥ 2, we can ensure that both A and B are non-empty. For each node v, Π specifies a procedure π_v that describes v's state transitions.

We first create a network G' to model the behavior of nodes in G in two different executions, E1 and E2, which we will describe later. Figure 1 depicts G'. The network G' consists of two copies of each node in C, denoted by C1 and C2, and a single copy of each of the remaining nodes. For each node v in G, we have the following cases to consider:

  1) If v ∈ A, then there is a single copy of v in G'. With a slight abuse of terminology, we denote the copy by v as well.

  2) If v ∈ B, then there is a single copy of v in G'. With a slight abuse of terminology, we denote the copy by v as well.

  3) If v ∈ C, then there are two copies of v in G'. We denote the two copies by v1 and v2.

For each edge {u, v} of G, we create edges in G' as follows:

  1) If u, v ∈ A ∪ B, then there is an edge between the corresponding copies of u and v in G'.

  2) If u ∈ A ∪ B and v ∈ C, then there is a single edge {u, v2} in G'.

  3) If u, v ∈ C, then there is an edge {u1, v1} and an edge {u2, v2} in G'.

Note that the edges in G and G' are both undirected. Observe that the structure of G' ensures the following property: for each edge {u, v} in the original graph G, each copy of v receives messages from at most one copy of u. This allows us to create an algorithm Π' for G' corresponding to Π by having each copy of node u run π_u.
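The construction of G' is mechanical, and the key property is easy to check programmatically. The following sketch (helper names ours; each undirected edge of G is assumed to be listed exactly once) builds the copies and edges for a given partition {A, B, C} and verifies that no copy hears from two copies of the same node.

```python
# Illustrative construction of G' for Theorem 3.1 (names ours). Nodes of G'
# are pairs (v, i): copy i of original node v. Nodes outside C (i.e., in
# A or B) get copy 1 only; nodes in C get copies 1 (crashed) and 2 (slow).
def build_g_prime(edges, C):
    g_prime = set()
    for u, v in edges:
        if u in C and v in C:
            g_prime.add(((u, 1), (v, 1)))        # edge between first copies
            g_prime.add(((u, 2), (v, 2)))        # edge between second copies
        elif u in C or v in C:
            w, c = (v, u) if u in C else (u, v)  # w outside C, c inside C
            g_prime.add(((w, 1), (c, 2)))        # single edge, to the slow copy
        else:
            g_prime.add(((u, 1), (v, 1)))        # both nodes have a single copy
    return g_prime

def hears_at_most_one_copy(g_prime):
    """Each copy must receive messages from at most one copy of any node."""
    seen = set()
    for x, y in g_prime:
        for receiver, sender in ((x, y), (y, x)):
            key = (receiver, sender[0])          # (receiver copy, sender's id)
            if key in seen:
                return False
            seen.add(key)
    return True

# Toy example: complete graph on {a, b, c} with A = {a}, B = {b}, C = {c}.
E = {('a', 'b'), ('a', 'c'), ('b', 'c')}
assert hears_at_most_one_copy(build_g_prime(E, C={'c'}))
```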

The nodes in C1 start off in a crashed state and never take any steps. The nodes in C2 are "slow" and start taking steps after time T, where the value of T will be chosen later.

Figure 1: Network G' used to model executions E1 and E2 in the proof of Theorem 3.1. Edges within the sets are not shown, while edges between sets are depicted as single edges. The labels adjacent to the sets are the corresponding inputs in execution E.

Consider an execution E of the algorithm Π' on G' as follows. Each node in B ∪ C2 has input U and each node in A ∪ C1 has input L. Observe that it is not guaranteed that the nodes in G' will satisfy any of the conditions of ε-approximate Byzantine consensus, including the termination property. We will show that the algorithm does indeed terminate, but that the outputs of the nodes do not satisfy the validity condition, which will give us the desired contradiction. We use E to describe two executions E1 and E2 of Π on the original graph G as follows.

  E1: C is the set of faulty nodes, which crash immediately at the start of the execution. Each node in B has input U while all other nodes have input L. Since Π solves ε-approximate Byzantine consensus on G, the nodes in A ∪ B reach ε-agreement and terminate within some finite time, without receiving any messages from the nodes in C. We set the delay T above for C2 to be this value. Since ε < U − L, the outputs of the (non-faulty) nodes in A ∪ B are either all different from L or all different from U. WLOG we assume that the outputs are not U. (For the other case, we can switch the faulty set in E2 to B and change the input of C2 to be L.) Note that the behavior of the non-faulty nodes in A and B for the first time period is modeled by the corresponding (copies of) nodes in G', while the behavior of the (crashed) faulty nodes is captured by C1.

  E2: A is the set of Byzantine faulty nodes. A faulty node broadcasts the same messages as the corresponding node in G' in execution E. Each node in A has input L while all other nodes have input U; the nodes in C are non-faulty but slow, taking no steps until the nodes in B have terminated. The outputs of the non-faulty nodes will be described later. The behavior of the nodes (both faulty and non-faulty) in A and B is modeled by the corresponding (copies of) nodes in G', while the behavior of the (non-faulty) nodes in C is captured by C2.

Due to the behavior of the nodes in A and B in E1, each of the corresponding copies in G' decides on a value distinct from U and terminates within time T in execution E. Therefore, the behavior of the nodes in A and B in E2 is completely captured by the corresponding copies in G'. It follows that in E2, the nodes in B have outputs other than U. However, all non-faulty nodes have input U in E2, so by validity every non-faulty output must equal U. Recall that, by construction, B is non-empty. This violates validity, a contradiction.
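The adversary used in E2 is a pure replay strategy: a faulty node ignores everything it receives in the current execution and re-broadcasts, step by step, the transcript produced by its counterpart in execution E on G'. A minimal sketch of this behavior, with class and method names of our own choosing:

```python
# Illustrative replay adversary (names ours): a Byzantine node under local
# broadcast cannot equivocate, but it can ignore its own input and receipts
# and simply re-broadcast a transcript recorded in another execution.
class ReplayNode:
    def __init__(self, recorded_transcript):
        self.transcript = sorted(recorded_transcript)  # [(time, message), ...]

    def broadcasts_up_to(self, t):
        """Messages this node broadcasts by time t: exactly the recorded ones,
        independent of anything it receives in the current execution."""
        return [msg for (ts, msg) in self.transcript if ts <= t]
```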

Proof of Theorem 3.2:   Suppose for the sake of contradiction that G is not (2f + 1)-connected and there exists an algorithm Π that solves ε-approximate Byzantine consensus in an asynchronous system under the local broadcast model on G. Then there exists a vertex cut K of G of size at most 2f and a partition {A, B} of V ∖ K such that A and B (both non-empty) are disconnected in G − K (so there is no edge between a node in A and a node in B). Since |K| ≤ 2f, there exists a partition {P, Q} of K such that |P| ≤ f and |Q| ≤ f. As before, for each node v, Π specifies a procedure π_v that describes v's state transitions.

We first create a network G' to model the behavior of nodes in G in three different executions, E1, E2, and E3, which we will describe later. Figure 2 depicts G'. The network G' consists of three copies of each node in P, two copies of each node in A and B, and a single copy of each node in Q. We denote the three sets of copies of P by P1, P2, and P3. We denote the two sets of copies of A (resp. B) by A1 and A2 (resp. B1 and B2). For each edge {u, v} of G, we create edges in G' as follows:

  1) If u, v ∈ A (resp. B), then there are two copies each of u and v: u1 and u2, and v1 and v2. There is an edge {u1, v1} and an edge {u2, v2} in G'.

  2) If u, v ∈ P, then there are three copies u1, u2, u3 and v1, v2, v3 of u and v. There are edges {u1, v1}, {u2, v2}, {u3, v3} in G'.

  3) If u, v ∈ Q, then there is an edge between the corresponding copies in G'.

  4) If u ∈ P and v ∈ Q, then there are three copies u1, u2, and u3 of u, and a single copy of v. There is an undirected edge {u1, v} and directed edges (v, u2) and (v, u3) in G'.

  5) If u ∈ A and v ∈ P, then there are two copies u1 and u2 of u, and three copies v1, v2, and v3 of v. There are two undirected edges {u1, v3} and {u2, v2} in G'.

  6) If u ∈ B and v ∈ P, then there are two copies u1 and u2 of u, and three copies v1, v2, and v3 of v. There are two undirected edges {u1, v2} and {u2, v3} in G'.

  7) If u ∈ A and v ∈ Q, then there are two copies u1 and u2 of u, and a single copy of v. There is an undirected edge {u1, v} and a directed edge (v, u2) in G'.

  8) If u ∈ B and v ∈ Q, then there are two copies u1 and u2 of u, and a single copy of v. There is an undirected edge {u1, v} and a directed edge (v, u2) in G'.

G' has some directed edges. We describe their behavior next. We denote a directed edge from u to v as (u, v). All message transmissions in G' are via local broadcast, as follows. When a node u in G' transmits a message, the following nodes receive this message identically: each node with whom u has an undirected edge, and each node to whom there is an edge directed away from u. Note that a directed edge (u, v) behaves differently for u and v: all messages sent by u are received by v, but no message sent by v is received by u. Observe that with this behavior of directed edges, the structure of G' ensures the following property: for each edge {u, v} in the original graph G, each copy of v receives messages from at most one copy of u. This allows us to create an algorithm Π' for G' corresponding to Π by having each copy of node u run π_u.
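The reception rule for G' can be stated compactly: a broadcast is heard by all undirected neighbors plus the heads of outgoing directed edges. The helper below is our own illustration of this rule, not code from the paper.

```python
# Illustrative reception rule for G' in Theorem 3.2 (names ours): a broadcast
# by `node` is heard identically by all its undirected neighbors, plus every
# head of an edge directed away from it; a directed edge (u, v) carries u's
# messages to v but never v's messages to u.
def receivers(node, undirected_edges, directed_edges):
    heard = set()
    for e in undirected_edges:                 # e is an unordered pair {x, y}
        if node in e:
            heard |= set(e) - {node}
    heard |= {v for (u, v) in directed_edges if u == node}
    return heard

# Toy example: Q's broadcasts reach A1 (undirected) and A2 (directed),
# while A2's broadcasts never reach Q.
und = [frozenset({'Q', 'A1'})]
dir_ = [('Q', 'A2')]
assert receivers('Q', und, dir_) == {'A1', 'A2'}
assert receivers('A2', und, dir_) == set()
```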

The nodes in P1 start off in a crashed state and never take any steps. The nodes in P2 and P3 are "slow" and start taking steps after time T, where the value of T will be chosen later.

Figure 2: Network G' used to model executions E1, E2, and E3 in the proof of Theorem 3.2. Edges within the sets are not shown, while edges between sets are depicted as single edges. The crossed dotted lines emphasize that there are no edges between the corresponding sets. The labels adjacent to the sets are the corresponding inputs in execution E.

Consider an execution E of the algorithm Π' on G' as follows. Each node in A1 ∪ B2 ∪ P3 has input U and all other nodes have input L. Observe that it is not guaranteed that the nodes in G' will satisfy any of the conditions of ε-approximate Byzantine consensus, including the termination property. We will show that the algorithm does indeed terminate, but that the nodes do not reach ε-agreement in the first of the executions defined below, which will be useful in deriving the desired contradiction. We use E to describe three executions E1, E2, and E3 of Π on the original graph G as follows.

  E1: P is the set of faulty nodes, which crash immediately at the start of the execution. Each node in A has input U while all other nodes have input L. Since Π solves ε-approximate Byzantine consensus on G, the nodes in V ∖ P reach ε-agreement and terminate within some finite time, without receiving any messages from the nodes in P. We set the delay T above for P2 and P3 to be this value. The outputs of the non-faulty nodes will be described later. Note that the behavior of the non-faulty nodes in A, B, and Q for the first time period is modeled by the corresponding (copies of) nodes in A1, B1, and Q respectively, while the behavior of the (crashed) faulty nodes is captured by P1.

  E2: Q is the set of faulty nodes. A faulty node broadcasts the same messages as the corresponding node in G' in execution E. All non-faulty nodes have input L; in particular, the nodes in P are non-faulty but slow, taking no steps until the other non-faulty nodes have terminated. The behavior of the non-faulty nodes in A, B, and P is modeled by the corresponding (copies of) nodes in A2, B1, and P2 respectively, while the behavior of the faulty nodes is captured by Q. Since Π solves ε-approximate Byzantine consensus on G, the nodes in V ∖ Q decide on output L.

  E3: Q is the set of faulty nodes. A faulty node broadcasts the same messages as the corresponding node in G' in execution E. All non-faulty nodes have input U; again, the nodes in P are non-faulty but slow. The behavior of the non-faulty nodes in A, B, and P is modeled by the corresponding (copies of) nodes in A1, B2, and P3 respectively, while the behavior of the faulty nodes is captured by Q. Since Π solves ε-approximate Byzantine consensus on G, the nodes in V ∖ Q decide on output U.

Due to the behavior of the nodes in A and B in E1, the nodes in A1 and B1 decide on an output within time T in execution E. Therefore, the behavior of the nodes in A in E3 and of the nodes in B in E2 is completely captured by the corresponding nodes in A1 and B1 in G'. Now, due to the output of the nodes in B in E2, the nodes in B1 output L in E. Similarly, due to the output of the nodes in A in E3, the nodes in A1 output U in E. It follows that in E1, the nodes in A have output U while the nodes in B have output L. Recall that, by construction, both A and B are non-empty. Since ε < U − L, this violates ε-agreement, a contradiction.

4 Summary

In [6] we showed that the network requirements for Byzantine consensus in synchronous systems are lower under the local broadcast model than under the point-to-point communication model. One might expect a lower connectivity requirement in the asynchronous setting as well. In this work, we have presented two impossibility results, Theorems 3.1 and 3.2, which show that local broadcast does not improve the network requirements in asynchronous systems: n ≥ 3f + 1 and (2f + 1)-vertex connectivity remain necessary, exactly as in the point-to-point communication model.

References

  • [1] H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations and Advanced Topics. John Wiley & Sons, Inc., USA, 2004.
  • [2] V. Bhandari and N. H. Vaidya. On reliable broadcast in a radio network. In Proceedings of the Twenty-fourth Annual ACM Symposium on Principles of Distributed Computing, PODC ’05, pages 138–147, New York, NY, USA, 2005. ACM.
  • [3] D. Dolev, N. A. Lynch, S. S. Pinter, E. W. Stark, and W. E. Weihl. Reaching approximate agreement in the presence of faults. J. ACM, 33(3):499–516, May 1986.
  • [4] M. J. Fischer, N. A. Lynch, and M. Merritt. Easy impossibility proofs for distributed consensus problems. Distributed Computing, 1(1):26–39, Mar 1986.
  • [5] M. J. Fischer, N. A. Lynch, and M. S. Paterson. Impossibility of distributed consensus with one faulty process. Technical report, Massachusetts Institute of Technology, Laboratory for Computer Science, Cambridge, MA, 1982.
  • [6] M. S. Khan, S. S. Naqvi, and N. H. Vaidya. Exact byzantine consensus on undirected graphs under local broadcast model. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC ’19, pages 327–336, New York, NY, USA, 2019. ACM.
  • [7] M. S. Khan, S. S. Naqvi, and N. H. Vaidya. Exact byzantine consensus on undirected graphs under local broadcast model. CoRR, abs/1903.11677, 2019.
  • [8] M. S. Khan and N. H. Vaidya. Byzantine consensus under local broadcast model: Tight sufficient condition. CoRR, abs/1901.03804, 2019.
  • [9] C.-Y. Koo. Broadcast in radio networks tolerating byzantine adversarial behavior. In Proceedings of the Twenty-third Annual ACM Symposium on Principles of Distributed Computing, PODC ’04, pages 275–282, New York, NY, USA, 2004. ACM.