1 Introduction
Following the widespread success of cryptocurrencies, recent advances in blockchain and its successors have offered secure, decentralized, consistent transaction ledgers in numerous domains, including the financial, logistics, and health care sectors. There has been extensive work [2, 3, 4, 5] addressing some known limitations of blockchain-powered distributed ledgers, such as long consensus confirmation time and high power consumption.
In distributed database systems, Byzantine fault tolerance (BFT) [6] addresses the reliability of the system when up to a certain number (e.g., one-third) of the participant nodes may be compromised. Consensus algorithms [7] ensure the integrity of transactions between participants over a distributed network [6] and are equivalent to the proof of BFT in distributed database systems [8, 9].
For a deterministic, completely asynchronous system, Byzantine consensus cannot be guaranteed under unbounded message delays [10]. Achieving consensus is, however, feasible for a non-deterministic system. In practical Byzantine fault tolerance (pBFT), all nodes can successfully reach a consensus for a block in the presence of a Byzantine node [11]. Consensus in pBFT is reached once a created block is shared with other participants and the shared information is further shared with others [12, 13].
There has been extensive research on consensus algorithms. Proof of Work (PoW) [14], used in the original Nakamoto consensus protocol in Bitcoin, requires exhaustive computational work from participants for block generation. Proof of Stake (PoS) [15, 16] uses participants’ stakes for generating blocks. With recent technological advances and innovations, consensus algorithms [2, 3, 4, 5] have aimed to improve the consensus confirmation time and power consumption of blockchain-powered distributed ledgers. These approaches utilize directed acyclic graphs (DAGs) [17, 4, 5, 18, 19] to facilitate consensus. Examples of DAG-based consensus algorithms include Tangle [20], Byteball [21], and Hashgraph [22]. The Lachesis protocol [1] presents a general model of DAG-based consensus protocols.
1.1 Motivation
Lachesis protocols, as introduced in our previous paper [1], are a family of consensus protocols that create a directed acyclic graph for distributed systems. We introduced a Lachesis consensus protocol, a DAG-based asynchronous non-deterministic protocol that guarantees pBFT. The protocol generates each block asynchronously and uses the OPERA chain (DAG) for faster consensus by confirming how many nodes share the blocks.
In BFT systems, a synchronous approach utilizes broadcast voting and asks each node to vote on the validity of each block. Instead, we aim for an asynchronous system where we leverage the concepts of distributed common knowledge and network broadcast to achieve a local view with a high probability of being a consistent global view. Each node receives transactions from clients and then batches them into an event block. The new event block is then communicated with other nodes through asynchronous event transmission. During communication, nodes share their own blocks as well as the ones they received from other nodes. Consequently, this spreads all information through the network. The process is asynchronous and thus throughput can increase near linearly as nodes enter the network.
The Lachesis protocols [1, 23] proposed new approaches that improve on the previous DAG-based approaches. However, both protocols have some limitations in their algorithms, which can be made simpler and more reliable.
In this paper, we are interested in a new consensus protocol that more reliably addresses pBFT in an asynchronous, scalable DAG. Specifically, we investigate the use of the notions of graph layering and hierarchical graphs from graph theory to develop an intuitive model of consensus.
Let G = (V, E) be a directed acyclic graph. In graph theory, a layering of G is a topological numbering φ of G that maps each vertex of G to an integer such that φ(u) ≥ φ(v) + 1 for every directed edge (u, v) ∈ E. That is, a vertex v is laid at layer j if φ(v) = j, and the j-th layer of G is L_j = {v ∈ V | φ(v) = j}. In other words, a layering of G partitions the set of vertices V into a finite number of non-empty disjoint subsets (called layers) L_1, L_2, …, L_h, such that V = L_1 ∪ L_2 ∪ … ∪ L_h, and for every edge (u, v) ∈ E with u ∈ L_i and v ∈ L_j, we have i > j. For a layering φ, H = (V, E, φ) is called a hierarchical (layered, or levelled) graph. An h-layer hierarchical graph can be represented as H = (L_1, L_2, …, L_h; E).
Figure 1(a) shows an example of a block DAG, which is a local snapshot of a node in a three-node network. Figure 1(b) depicts the result of layering applied on the DAG. In the figure, there are 14 layers. Vertices of the same layer are horizontally aligned. Every edge points from a vertex of a higher layer to a vertex of a lower layer. The hierarchical graph depicts a clear structure of the DAG, in which the dependency between blocks is shown uniformly from top to bottom.
The layering gives a better structure of the DAG. Thus, we aim to address the following questions: 1) Can we leverage the concepts of layering and hierarchical graphs on block DAGs? 2) Can we extend the layering algorithms to handle block DAGs that evolve over time, i.e., on block creation and receipt? 3) Can a model use layering to quickly compute and validate the global information and help quickly search for Byzantine nodes within the block DAG? 4) Is it possible to use a hierarchical graph to reach consensus on the partial ordering of blocks at finality across the nodes?
1.2 ONLAY framework
In this paper, we present ONLAY, a new framework for asynchronous distributed systems. ONLAY leverages asynchronous event transmission for practical Byzantine fault tolerance (pBFT), similar to our previous Lachesis protocol [1]. The core idea of ONLAY is to create a leaderless, scalable, asynchronous DAG. By computing asynchronous partially ordered sets with logical time ordering instead of blockchains, ONLAY offers a new practical alternative framework for distributed ledgers.
Specifically, we propose a consensus protocol based on the Lachesis protocol [1]. The protocol introduces an online layering algorithm to achieve practical Byzantine fault tolerance (pBFT) in a leaderless DAG. It achieves deterministic, scalable consensus in asynchronous pBFT by using assigned layers and asynchronous partially ordered sets with logical time ordering instead of blockchains. The partial ordering produced by the protocol is flexible but consistent across the distributed system of nodes.
We then present a formal model for the protocol that can be applied to an abstract asynchronous DAG-based distributed system. The formal model is built upon the model of concurrent common knowledge (CCK) [24].
Figure 2 shows the overview of our ONLAY framework. In the framework, each node stores and maintains its own DAG. There are multiple steps, including layering, selecting roots, assigning frames, selecting Clothos and Atropos, and topologically ordering event blocks to determine consensus. The major new step compared to previous approaches is the layering step.
The main concepts of ONLAY are given as follows:
- Event block: An event block is a holder of a set of transactions created by a node and is then transported to other nodes. An event block includes a signature, a timestamp, transaction records, and referencing hashes to previous (parent) blocks.
- Consensus protocol: The protocol sets the rules for event creation, communication, and reaching consensus in ONLAY.
- OPERA chain: The local view of the DAG held by each node. This local view is used to determine consensus.
- Layering: Layering assigns every block in the OPERA chain a number such that every edge points only from a higher layer to a lower layer.
- LOPERA chain: The LOPERA chain is the hierarchical graph obtained from layering the DAG held by each node.
- Lamport timestamp: A Lamport timestamp is a timestamp assigned to each event block by the logical clock of a node. An event block is said to have Happened-before another event block if the latter was created after the former. Lamport timestamps are used to determine a partial order amongst the blocks.
- Root: An event block is called a root if either (1) it is the first generated event block of a node, or (2) it can reach more than two-thirds of the other roots. A root set contains all the roots of a frame. A frame is a natural number that is assigned to root sets (and dependent event blocks).
- Root graph: A root graph contains roots as vertices and reachability between roots as edges.
- Clotho: A Clotho is a root satisfying that it is known by more than 2n/3 nodes and more than 2n/3 nodes know that information.
- Atropos: An Atropos is a Clotho that is assigned a consensus time.
- ONLAY chain: The ONLAY chain is the main subset of the LOPERA chain. It contains the Atropos blocks and the subgraphs reachable from those Atropos blocks.
1.3 Contributions
In summary, this paper makes the following contributions:
- We propose a new scalable framework, so-called ONLAY, aiming for practical DAG-based distributed ledgers.
- We introduce a novel consensus protocol that uses a layering algorithm and root graphs for faster root selection. The protocol uses layer assignment on the DAG to achieve deterministic and thus more reliable consensus.
- We define a formal model using continuous consistent cuts of a local view to achieve consensus via layer assignment.
- We formalize our proofs so that they can be applied to any generic asynchronous DAG-based solution.
1.4 Paper structure
The rest of this paper is organised as follows. Section 2 presents some preliminaries and background for our ONLAY framework. Section 3 introduces our consensus protocol. The section describes our consensus algorithm, which uses a layering algorithm to achieve a more reliable and scalable solution to the consensus problem in BFT systems. Section 4 presents important details of our ONLAY framework. Section 5 concludes. Proof of Byzantine fault tolerance is described in Section 6.
2 Preliminaries
This section presents some background information as well as related terminology used in the ONLAY framework.
2.1 Basic Definitions
The protocol sets the rules for all nodes, representing client machines, that form a network. A (participant) node is a server (machine) of the distributed system. Each node can create messages, send messages to, and receive messages from, other nodes. The communication between nodes is asynchronous.
[Node] Each machine participating in the protocol is called a node. Let n_i denote the node with identifier i, and let n denote the total number of nodes.
[Process] A process p_i represents a machine or a node. The process identifier of p_i is i. The set P = {1, …, n} denotes the set of process identifiers.
[Channel] A process p_i can send messages to a process p_j if there is a channel (i, j). Let C ⊆ {(i, j) | i, j ∈ P} denote the set of channels.
The basic units of the protocol are called event blocks: a data structure created by a single node as a container to wrap and transport transaction records across the network. Each event block references previous event blocks that are known to the node. The stream of event blocks thus forms a DAG recording a sequence of history.
The history stored on each node can be represented by a directed acyclic graph G = (V, E), where V is a set of vertices and E is a set of edges. Each vertex in a row (node) represents an event. Time flows bottom-to-top (or left-to-right) of the graph, so bottom (left) vertices represent earlier events in history. For a graph G, a path in G is a sequence of vertices (v_1, v_2, …, v_k) obtained by following the edges in E. Let v be a vertex in G. A vertex u is a parent of v if there is an edge from v to u. A vertex u is an ancestor of v if there is a path from v to u.
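As an illustration of the parent and ancestor relations, the following minimal Python sketch (our own illustrative code, not the paper's implementation; names such as `is_ancestor` are ours) follows reference edges to test reachability:

```python
def parents(dag, v):
    """Direct parents of v: the vertices v has an edge to."""
    return set(dag.get(v, ()))

def is_ancestor(dag, u, v):
    """True if u is an ancestor of v, i.e., a path leads from v to u."""
    stack, seen = [v], set()
    while stack:
        x = stack.pop()
        for p in dag.get(x, ()):
            if p == u:
                return True
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return False

# dag maps each event to the events it references (its parents):
# c references b, which references a.
dag = {"c": ["b"], "b": ["a"], "a": []}
```

A node would run such a reachability check against its local DAG; the real implementation would typically cache ancestry information rather than re-traverse.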
[Event block] An event block is a holder of a set of transactions. The structure of an event block includes the signature, generation time, transaction history, and references to previous event blocks. The first event block of each node is called a leaf event.
Each node can create event blocks and send (receive) messages to (from) other nodes. Suppose a node n_i creates an event v_c after an event v_s in n_i. Each event block has exactly k references. One of the references is a self-reference, and the other k − 1 references point to the top events of n_i's k − 1 peer nodes.
[Top event] An event v is a top event of a node n_i if there is no other event in n_i referencing v.
[Ref] An event v_r is called a “ref” of an event v_c if a reference hash of v_c points to the event v_r, denoted by v_c ↪ v_r. For simplicity, we use ↪ to denote a reference relationship (either a ref or a self-ref).
[Self-ref] An event v_s is called the “self-ref” of an event v_c if the self-ref hash of v_c points to the event v_s, denoted by v_c ↪_s v_s.
[Self-ancestor] An event block v_a is a self-ancestor of an event block v_c if there is a sequence of events such that v_c ↪_s v_1 ↪_s … ↪_s v_a, denoted by v_c ↪_sa v_a.
[Ancestor] An event block v_a is an ancestor of an event block v_c if there is a sequence of events such that v_c ↪ v_1 ↪ … ↪ v_a, denoted by v_c ↪_a v_a.
For simplicity, we use v_c ↪_a v_a to refer to both the ancestor and self-ancestor relationships, unless we need to distinguish the two cases.
2.2 OPERA chain
The protocol uses the DAG-based structure, called the OPERA chain, which was introduced in the Lachesis protocols [1]. For consensus, the algorithm examines whether an event block is reached by more than 2n/3 nodes, where n is the total number of participant nodes.
Let G = (V, E) be a DAG. We extend G with G̃ = (V ∪ {⊤, ⊥}, E, ⊤, ⊥), where ⊤ is a pseudo vertex, called top, which is the parent of all top event blocks, and ⊥ is a pseudo vertex, called bottom, which is the child of all leaf event blocks. With the pseudo vertices, ⊥ happened-before all event blocks, and all event blocks happened-before ⊤. That is, for every event v, ⊥ → v and v → ⊤.
[OPERA chain] The OPERA chain is a graph structure stored on each node. The OPERA chain consists of event blocks and references between them as edges.
The OPERA chain (DAG) is the local view of the DAG held by each node. The OPERA chain is a DAG G = (V, E) consisting of vertices and edges. Each vertex is an event block. An edge (v_c, v_r) ∈ E refers to a hashing reference from v_c to v_r; that is, v_c ↪ v_r. This local view is used to identify Root, Clotho, and Atropos vertices, and to determine a topological ordering of the event blocks.
[Leaf] The first created event block of a node is called a leaf event block.
[Root] An event block is a root if either (1) it is the leaf event block of a node, or (2) it can reach more than 2n/3 of the roots.
[Root set] The set of all first event blocks (leaf events) of all nodes forms the first root set R_1 (|R_1| = n). The root set R_k consists of all roots r_i such that r_i ∉ R_j, j = 1..(k−1), and r_i can reach more than 2n/3 of the other roots in the current frame R_{k−1}.
[Frame] A frame f is a natural number that separates root sets. The root set at frame f is denoted by R_f.
[Creator] If a node n_i creates an event block v, then the creator of v, denoted by cr(v), is n_i.
[Clotho] A root in frame f_{i+1} can nominate a root r in frame f_i as a Clotho if more than 2n/3 roots in frame f_{i+1} dominate r and r dominates the roots in frame f_{i−1}.
[Atropos] An Atropos is a Clotho that is decided as final.
Event blocks in the subgraph rooted at an Atropos are also final events. Atropos blocks form a Main-chain, which allows time consensus ordering and responses to attacks.
2.3 Layering Definitions
For a directed acyclic graph G = (V, E), a layering assigns a layer number to each vertex in V.
[Layering] A layering (or levelling) of G is a topological numbering of G, φ, mapping the set of vertices of G to integers such that φ(u) ≥ φ(v) + 1 for every directed edge (u, v) ∈ E. If φ(v) = j, then v is a layer-j vertex and L_j = {v ∈ V | φ(v) = j} is the j-th layer of G.
A layering φ of G partitions the set of vertices V into a finite number of non-empty disjoint subsets (called layers) L_1, L_2, …, L_h, such that V = L_1 ∪ L_2 ∪ … ∪ L_h. Each vertex v is assigned to a layer L_j, where 1 ≤ j ≤ h, such that for every edge (u, v) ∈ E with u ∈ L_i and v ∈ L_j, we have i > j.
[Hierarchical graph] For a layering φ, the produced graph H = (V, E, φ) is a hierarchical graph, which is also called an h-layered directed graph and can be represented as H = (L_1, L_2, …, L_h; E).
The span of an edge e = (u, v), with u ∈ L_i and v ∈ L_j, is i − j. If no edge in the hierarchical graph has a span greater than one, then the hierarchical graph is proper.
Let HG denote a hierarchical graph. Let N⁺(v) denote the outgoing neighbours of v: N⁺(v) = {u | (v, u) ∈ E}. Let N⁻(v) denote the incoming neighbours of v: N⁻(v) = {u | (u, v) ∈ E}.
[Height] The height h of a hierarchical graph is the number of layers.
[Width] The width of a hierarchical graph is the number of vertices in the largest layer; that is, w = max_{1 ≤ j ≤ h} |L_j|.
There are three common criteria for a layering of a directed acyclic graph. First, the hierarchical graph should be compact. Compactness aims at minimizing the width and the height of the graph. Finding a layering that minimizes the height with respect to a given width is NP-hard. Second, the hierarchical graph should be proper, e.g., by introducing dummy vertices into the layering for every long edge (u, v) with u ∈ L_i and v ∈ L_j, where i − j > 1. Each long edge (u, v) is replaced by a path (u, d_{j+1}, …, d_{i−1}, v), where a dummy vertex d_k, j + 1 ≤ k ≤ i − 1, is inserted in each intermediate layer L_k. Third, the number of dummy vertices should be kept to a minimum so as to reduce the running time.
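The dummy-vertex construction for proper layerings can be sketched as follows. This is an illustrative Python fragment with our own naming (`make_proper`, `phi`); note that ONLAY itself deliberately avoids dummy vertices, so this only demonstrates the classical construction:

```python
def make_proper(edges, phi):
    """Insert dummy vertices so every edge spans exactly one layer.
    Returns the new edge list and the extended layer assignment."""
    phi = dict(phi)                      # do not mutate the caller's layering
    new_edges, dummy_id = [], 0
    for (u, v) in edges:
        span = phi[u] - phi[v]           # edges point from higher to lower layers
        if span <= 1:
            new_edges.append((u, v))
            continue
        prev = u
        for layer in range(phi[u] - 1, phi[v], -1):   # intermediate layers
            d = f"dummy{dummy_id}"
            dummy_id += 1
            phi[d] = layer
            new_edges.append((prev, d))
            prev = d
        new_edges.append((prev, v))
    return new_edges, phi

# A single long edge spanning three layers becomes a path of three unit edges.
edges, phi = make_proper([("u", "v")], {"u": 4, "v": 1})
```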
There are several approaches to DAG layering, which are described as follows.
2.3.1 Minimizing The Height
[Longest path layering] The longest-path method assigns a vertex v to layer p, where p is the length of the longest path from a source to v.
Longest-path layering is a list-scheduling algorithm that produces a hierarchical graph with the smallest possible height [25, 26]. The main idea is that, given an acyclic graph, we place a vertex on the p-th layer, where p is the length of the longest path to the vertex from a source vertex.
Algorithm 1 shows the longest-path layering algorithm. The algorithm picks vertices whose incoming edges come only from past layers and assigns them to the current layer. When no more vertices satisfy that condition, the layer counter is incremented. The process repeats until all vertices have been assigned a layer.
The following algorithm, similar to the above, computes layerings of minimum height. First, all source vertices are placed in the first layer L_1. Then, the layer of every remaining vertex v is recursively defined by φ(v) = max{φ(u) | (v, u) ∈ E} + 1. This algorithm produces a layering in which many vertices stay close to the bottom, and hence the number of layers is kept minimal. The main drawback of this algorithm is that it may produce layers that are too wide. By using a topological ordering of the vertices [27], the algorithm can be implemented in linear time O(|V| + |E|).
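The recursive rule above can be sketched in a few lines of Python. Names (`longest_path_layering`, a `dag` mapping each event to the events it references) are our own illustrative choices, not the paper's pseudocode:

```python
def longest_path_layering(dag):
    """Assign ell(v) = 1 + length of the longest path from v down to a
    source (an event with no references), so that every edge (u, v)
    satisfies ell(u) >= ell(v) + 1."""
    ell = {}
    def layer(v):
        if v not in ell:
            refs = dag.get(v, ())
            ell[v] = 1 if not refs else 1 + max(layer(u) for u in refs)
        return ell[v]
    for v in dag:
        layer(v)
    return ell

# a is a source; d depends on both b and c, so it lands on layer 3.
ell = longest_path_layering({"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]})
```

With memoization each vertex and edge is visited once, matching the O(|V| + |E|) bound stated above.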
2.3.2 Fixed Width Layering
The longest-path layering algorithm minimizes the height, while the compactness of the hierarchical graph depends on both the width and the height. The problem of finding a layering with minimum height for general graphs is NP-complete if the fixed width is no less than three [28]. Minimizing the number of dummy vertices guarantees minimum height.
We now present the Coffman-Graham (CG) algorithm, which computes a layering with a given maximum width [29].
[Coffman-Graham algorithm] Coffman-Graham is a layering algorithm in which a maximum width is given. Vertices are ordered by their distance from the source vertices of the graph, and are then assigned to the layers as close to the bottom as possible.
The Coffman-Graham layering algorithm originated in multiprocessor scheduling; it minimizes the width (regardless of dummy vertices) as well as the height. The Coffman-Graham algorithm is currently the most commonly used layering method.
Algorithm 2 gives the Coffman-Graham algorithm. The algorithm takes as input a reduced graph, i.e., a graph with no transitive edges, and a given width w. An edge (u, v) is called transitive if a path (u = v_1, v_2, …, v_k = v) with k > 2 exists in the graph. The Coffman-Graham algorithm works in two phases. First, it orders the vertices by their distance from the source vertices of the graph. In the OPERA chain, the source vertices are the leaf event blocks. In the second phase, vertices are assigned to the layers as close to the bottom as possible.
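The two phases can be sketched as follows. This is a simplified illustration under our own naming (`coffman_graham`): phase 1 here orders vertices by plain distance from the sources, whereas the full algorithm uses a refined lexicographic ordering, so this sketch captures the structure but not every tie-breaking detail:

```python
from collections import defaultdict

def coffman_graham(dag, width):
    """Simplified Coffman-Graham layering with maximum width `width`.
    `dag` maps each vertex to the vertices it references (which must end
    up in strictly lower layers)."""
    # Phase 1: distance of each vertex from the sources (leaf events).
    dist = {}
    def d(v):
        if v not in dist:
            refs = dag.get(v, ())
            dist[v] = 0 if not refs else 1 + max(d(u) for u in refs)
        return dist[v]
    order = sorted(dag, key=d)            # sources first; refs precede users

    # Phase 2: place each vertex as low as possible, respecting both the
    # edge constraint and the width limit.
    ell, count = {}, defaultdict(int)
    for v in order:
        lo = 1 + max((ell[u] for u in dag.get(v, ())), default=0)
        while count[lo] >= width:         # layer full: move one layer up
            lo += 1
        ell[v] = lo
        count[lo] += 1
    return ell

ell = coffman_graham({"a": [], "b": [], "c": ["a", "b"], "d": ["a"]}, width=2)
```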
Lam and Sethi [30] showed that the number of layers of the computed layering with width w is bounded by (2 − 2/w)·h_min, where h_min is the minimum height over all layerings with width w. Thus, the Coffman-Graham algorithm is an exact algorithm for w = 2. In certain applications, the notion of width does not consider dummy vertices.
2.3.3 Minimizing the Total Edge Span
The objective of minimizing the total edge span (or edge length) is equivalent to minimizing the number of dummy vertices. It is a reasonable objective in certain applications such as hierarchical graph drawing. It can be shown that minimizing the number of dummy vertices guarantees minimum height. The problem of layering to minimize the number of dummy vertices can be modelled as an integer linear program (ILP), which has been shown experimentally to be very efficient, even though it does not guarantee a polynomial running time.
2.4 Lamport timestamps
Our Lachesis protocol relies on Lamport timestamps to define a topological ordering of event blocks in the OPERA chain. By using Lamport timestamps, we do not rely on physical clocks to determine a partial ordering of events.
The “happened before” relation, denoted by →, gives a partial ordering of events from a distributed system of nodes. For a pair of event blocks x and y, the relation → satisfies: (1) if x and y are events of the same node and x comes before y, then x → y; (2) if x is the send(m) by one process and y is the receive(m) by another process, then x → y; (3) if x → y and y → z, then x → z.
[Happened-Immediate-Before] An event block v_r is said to have Happened-Immediate-Before an event block v_c if v_r is a (self-)ref of v_c, denoted by v_r ↦ v_c.
[Happened-before] An event block v_a is said to have Happened-before an event block v_c if v_a is a (self-)ancestor of v_c, denoted by v_a → v_c.
The Happened-before relation is the transitive closure of Happened-Immediate-Before. “x Happened-before y” means that the node creating y knows the event block x. An event x happened before an event y if one of the following holds: (a) x is a (self-)ref of y, (b) x is a (self-)ancestor of y, or (c) x → z and z → y for some event z. The happened-before relations of events form an acyclic directed graph in which each edge corresponds to a (reversed) reference edge of the OPERA chain.
[Concurrent] Two event blocks x and y are said to be concurrent if neither of them happened before the other, denoted by x ∥ y.
Given two vertices x and y both contained in two OPERA chains (DAGs) G_1 and G_2 on two nodes, we have the following: (1) x → y in G_1 if x → y in G_2; (2) x ∥ y in G_1 if x ∥ y in G_2.
[Total ordering] Let ≺ denote an arbitrary total ordering of the nodes (processes). The total ordering ⇒ is a relation satisfying the following: for any event x in node n_i and any event y in node n_j, x ⇒ y if and only if either (i) C(x) < C(y) or (ii) C(x) = C(y) and n_i ≺ n_j, where C denotes the Lamport timestamp.
This defines a total ordering relation. The Clock Condition implies that if x → y then x ⇒ y.
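The clock rule and the tie-breaking total order can be sketched as follows; the class and function names are our own illustrative choices for Lamport's classical scheme, not ONLAY's implementation:

```python
class Node:
    """Minimal Lamport logical clock: increment on a local event; on
    receipt, take max(local, received) + 1."""
    def __init__(self, ident):
        self.ident = ident
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return (self.clock, self.ident)       # timestamp (C(x), node id)

    def receive(self, sent_ts):
        self.clock = max(self.clock, sent_ts[0]) + 1
        return (self.clock, self.ident)

def total_order(ts_a, ts_b):
    """x => y iff C(x) < C(y), ties broken by the node ordering."""
    return ts_a < ts_b      # tuple comparison: clock first, then node id

n1, n2 = Node(1), Node(2)
a = n1.local_event()        # event a on node 1
b = n2.receive(a)           # node 2 receives a, so a happened before b
```

Because b's clock is forced above a's, the Clock Condition holds: a → b implies total_order(a, b).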
2.5 State Definitions
Each node has a local state, a collection of histories, messages, event blocks, and peer information; we describe the components of each.
[State] A (local) state of a process i is denoted by s_i^j, consisting of a sequence of event blocks s_i^j = v_0, v_1, …, v_j.
In a DAG-based protocol, each event block is valid only if its referenced blocks exist before it. From a local state s_i^j, one can reconstruct a unique DAG. That is, the mapping from a local state to a DAG is injective, or one-to-one. Thus, for ONLAY, we can simply denote the j-th local state of a process i by the DAG G_i^j (often we simply use G_i to denote the current local state of process i).
[Action] An action is a function from one local state to another local state.
Generally speaking, an action can be one of: a send(m) action, where m is a message; a receive(m) action; or an internal action. A message m is a triple (i, j, B), where i ∈ P is the sender of the message, j ∈ P is the message recipient, and B is the body of the message.
In ONLAY, B consists of the content of an event block v. Let M denote the set of messages.
Semantics-wise, there are two actions that can change a process’s local state: creating a new event and receiving an event from another process.
[Event] An event is a tuple ⟨s, α, s′⟩ consisting of a state, an action, and a state. Sometimes, an event is represented by its end state s′.
The j-th event in the history of process i is denoted by e_i^j.
[Local history] A local history h_i of process i is a (possibly infinite) sequence of alternating local states and events, beginning with a distinguished initial state. A set H_i of possible local histories exists for each process i.
The state of a process can be obtained from its initial state and the sequence of actions or events that have occurred up to the current state. In the Lachesis protocol, we use append-only semantics. The local history may be equivalently described as any of the following: (1) h_i = s_i^0, α_i^1, α_i^2, α_i^3, …; (2) h_i = s_i^0, e_i^1, e_i^2, e_i^3, …; (3) h_i = s_i^0, s_i^1, s_i^2, s_i^3, ….
In Lachesis, a local history is equivalently expressed as h_i = G_i^0, G_i^1, G_i^2, …, where G_i^j is the j-th local DAG (local state) of process i.
Let A denote the set of asynchronous runs. We can now use Lamport’s theory to talk about global states of an asynchronous system. A global state of a run r is an n-vector of prefixes of the local histories of r, one prefix per process. The happened-before relation can be used to define a consistent global state, often termed a consistent cut, as follows.
2.6 Consistent Cut
An asynchronous system consists of the following sets: a set P of process identifiers, a set C of channels, a set H_i of possible local histories for each process i, a set A of asynchronous runs, and a set M of all messages. Consistent cuts represent the concept of scalar time in distributed computation; with them, it is possible to distinguish between a “before” and an “after” (see the CCK paper [24]).
[Consistent cut] A consistent cut of a run r is any global state such that if e → e′ and e′ is in the global state, then e is also in the global state, denoted by c(r).
The concept of a consistent cut formalizes such a global state of a run. A consistent cut consists of all consistent DAG chains. If a received event block exists in the global state, then the original event block exists in it as well. Note that a consistent cut is simply a vector of local states; we use the notation (c(r))_i to indicate the local state of process i in cut c of run r.
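The closure property defining a consistent cut can be sketched directly; this is an illustrative check of our own (the names `is_consistent_cut` and `hb` are not from the paper), treating the happened-before relation as an explicit set of ordered pairs:

```python
def is_consistent_cut(cut, happened_before):
    """A cut (set of events) is consistent iff it is closed under
    happened-before: whenever e -> f and f is in the cut, e is too.
    `happened_before` is a set of (e, f) pairs meaning e -> f."""
    return all(e in cut for (e, f) in happened_before if f in cut)

# a1 -> a2 on one process; a1 -> b1 on another via message receipt.
hb = {("a1", "a2"), ("a1", "b1")}
```

A cut containing the receive event b1 but not its cause a1 is inconsistent, matching the statement that a received event block implies the existence of the original event block.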
The formal semantics of an asynchronous system is given via the satisfaction relation ⊨. Intuitively, (c, r) ⊨ φ, “c satisfies φ,” if fact φ is true in cut c of run r.
We assume that we are given a function π that assigns a truth value to each primitive proposition p. The truth of a primitive proposition p in (c, r) is determined by π. This defines (c, r) ⊨ p.
[Equivalent cuts] Two cuts c and c′ are equivalent with respect to a process i if they contain the same local state of i: c ∼_i c′ if and only if (c)_i = (c′)_i.
[K_i(φ): i knows φ] K_i(φ) represents the statement “φ is true in all possible consistent global states that include i’s local state.”
[P_i(φ): i partially knows φ] P_i(φ) represents the statement “there is some consistent global state in this run that includes i’s local state, in which φ is true.”
[Majority concurrently knows] The next modal operator, written M^C, stands for “majority concurrently knows.” The definition of M^C(φ) is as follows:
M^C(φ) =def ⋀_{i ∈ S} K_i P_i(φ), where S ⊆ P and |S| > 2n/3.
This operator is adapted from the “everyone concurrently knows” operator of the CCK paper [24]. In the presence of one-third faulty nodes, the original “everyone concurrently knows” operator is sometimes not feasible. Our modal operator fits precisely the semantics of BFT systems, in which unreliable processes may exist.
[Concurrent common knowledge] The last modal operator is concurrent common knowledge (CCK), denoted by C^C. C^C(φ) is defined as a fixed point of M^C(φ ∧ X).
CCK defines a state of process knowledge that implies that all processes are in that same state of knowledge, with respect to φ, along some cut of the run. In other words, we want a state of knowledge X satisfying X = M^C(φ ∧ X). C^C is defined semantically as the weakest such fixed point, namely as the greatest fixed point of M^C(φ ∧ X). It therefore satisfies C^C(φ) ⇒ M^C(φ ∧ C^C(φ)).
Thus, P_i(φ) states that there is some cut in the same asynchronous run, including i’s local state, such that φ is true in that cut.
Note that C^C(φ) implies M^C(φ). But it is not the case, in general, that M^C(φ) implies φ, or even that C^C(φ) implies φ. The truth of φ is determined with respect to some cut c. A process cannot distinguish which cut, of the perhaps many cuts that are in the run and consistent with its local state, satisfies φ; it can only know the existence of such a cut.
[Global fact] A fact φ is valid in system R, denoted by R ⊨ φ, if φ is true in all cuts of all runs of R. A fact φ is valid, denoted ⊨ φ, if φ is valid in all systems, i.e., ∀R: R ⊨ φ.
[Local fact] A fact φ is local to a process i in system R if R ⊨ (φ ⇒ K_i(φ)).
3 The Layering-based Consensus Protocol
This section presents the main concepts and algorithms of the consensus protocol in our ONLAY framework. The key idea of our ONLAY framework is to use layering algorithms on the OPERA chain. The assigned layers are then used to determine consensus of the event blocks.
For a DAG G = (V, E), a layering of G is a mapping φ such that φ(u) ≥ φ(v) + 1 for every directed edge (u, v) ∈ E. If φ(v) = j, then v is a layer-j vertex and L_j = {v ∈ V | φ(v) = j} is the j-th layer of G. In terms of the happened-before relation, if x → y then φ(x) < φ(y). Layering partitions the set of vertices V into a finite number of non-empty disjoint subsets (called layers) L_1, L_2, …, L_h, such that V = L_1 ∪ L_2 ∪ … ∪ L_h. Each vertex is assigned to a layer L_j, where 1 ≤ j ≤ h, such that for every edge (u, v) ∈ E with u ∈ L_i and v ∈ L_j, we have i > j. Section 2.3 gives more details about layering.
3.1 HOPERA chain
Recall that the OPERA chain is a DAG G = (V, E) stored in each node. We introduce the new notion of the HOPERA chain, which is built on top of the OPERA chain. By applying a layering φ on the OPERA chain, one obtains the hierarchical graph of G, which is called the HOPERA chain.
Definition 3.1 (HOPERA chain).
An HOPERA chain is the resulting hierarchical graph H = (V, E, φ).
Thus, the HOPERA chain can also be represented by H = (L_1, L_2, …, L_h; E). In the HOPERA chain, vertices are assigned layers such that each edge (u, v) flows from a higher layer to a lower layer, i.e., φ(u) > φ(v).
Note that the hierarchical graph (HOPERA chain) produced by ONLAY is not proper. This is because there generally exist long edges (u, v) that span multiple layers, i.e., φ(u) − φ(v) > 1. Generally speaking, a layering can produce a proper hierarchical graph by introducing dummy vertices along every long edge. However, for a consensus protocol, such dummy vertices are not ideal, as they increase the computational cost of constructing the HOPERA chain itself and of the subsequent steps to determine consensus. Thus, in ONLAY we consider layering algorithms that do not introduce dummy vertices.
A simple approach to compute the HOPERA chain is to consider the OPERA chain as a static graph G and then apply a layering algorithm, such as the Longest-Path-Layering (LPL) algorithm or the Coffman-Graham (CG) algorithm, on the graph G. The LPL algorithm achieves a hierarchical graph of minimum height in linear time O(|V| + |E|). The CG algorithm has a higher, but still polynomial, time complexity for a given width w. The LPL and CG algorithms are given in Section 2.3.
The above simple approach, which takes a static G, works well when G is small, but may not be applicable when G becomes sufficiently large. As time goes on, the dynamic OPERA chain grows larger and larger, and thus takes more and more time to process.
3.2 Online Layer Assignment
To overcome the limitation of the above simple approach, we introduce online layering algorithms that apply to the evolving OPERA chain to reduce the computational cost.
In ONLAY, the OPERA chain in each node keeps evolving rather than being static. A node introduces a new vertex and a few edges when a new event block is created. The OPERA chain can also be updated with new vertices and edges that a node receives from the other nodes in the network.
Definition 3.2 (Dynamic OPERA chain).
The OPERA chain is a dynamic DAG that evolves as new event blocks are created and received.
We consider a simple model of the dynamic OPERA chain. Let the DAG G = (V, E) be the current OPERA chain of a node. Let D = (V_D, E_D) denote the diff graph, which consists of the changes to G at a time, either at block creation or block arrival. We can assume that the vertex sets V and V_D are disjoint, as are the edge sets E and E_D. That is, V ∩ V_D = ∅ and E ∩ E_D = ∅. At each graph update, the updated OPERA chain becomes G′ = (V ∪ V_D, E ∪ E_D).
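A diff-graph update under this model can be sketched as follows; `apply_diff` is our own illustrative name, and the DAG is represented as a mapping from each vertex to the vertices it references:

```python
def apply_diff(dag, diff):
    """Merge a diff graph (new vertices with their outgoing edges) into
    the current OPERA chain, enforcing the disjointness assumption
    V ∩ V_D = ∅ from the model above."""
    overlap = set(dag) & set(diff)
    if overlap:
        raise ValueError(f"diff re-introduces known vertices: {overlap}")
    merged = dict(dag)      # G' = (V ∪ V_D, E ∪ E_D)
    merged.update(diff)
    return merged

# A new block b arrives, referencing the existing block a.
g = apply_diff({"a": []}, {"b": ["a"]})
```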
An online version of a layering algorithm takes as input a new change to the OPERA chain together with the current layering information, and then efficiently computes the layers for the new vertices.
Definition 3.3 (Online Layering).
Online layering is an incremental algorithm that computes the layering of a dynamic DAG G efficiently as G evolves.
Specifically, we propose two online layering algorithms. Online Longest-Path Layering (OLPL) is an improved version of the LPL algorithm. Online Coffman-Graham (OCG) is a modified version of the original CG algorithm. The online layering algorithms assign layers to the vertices in the diff graph D = (V_D, E_D), consisting of new self events and received unknown events.
3.2.1 Online Longest Path Layering
Definition 3.4 (Online LPL).
Online Longest-Path Layering (OLPL) is a modified version of LPL to compute the layering of a dynamic DAG efficiently.
Algorithm 3 gives our new algorithm to compute the LPL layering for a dynamic OPERA chain. The OLPL algorithm takes as input the following information: the set of processed vertices, the set of already layered vertices, the current height (i.e., the maximum layer number), the set of new vertices V', and the set of new edges E'.
The algorithm assumes that the given value of the width k is sufficiently large. Section 3.3 discusses how to choose an appropriate value of k. The algorithm has a worst-case time complexity of O(|V'| + |E'|).
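The core idea of OLPL can be sketched as follows; this is our own simplification of Algorithm 3, assuming new vertices arrive in an order where their parents are already layered (event blocks reference only existing events).

```python
def olpl_update(layer, height, new_vertices, parents):
    """Online longest-path layering: assign layers only to the new
    vertices, reusing the layers already stored in `layer`.

    `layer` is updated in place; `parents[v]` lists the vertices
    that v references. Parents of a new vertex are already layered.
    """
    for v in new_vertices:
        ps = parents.get(v, [])
        layer[v] = 0 if not ps else 1 + max(layer[p] for p in ps)
        height = max(height, layer[v])
    return height
```

Only the diff graph is traversed, which matches the O(|V'| + |E'|) bound stated above.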
3.2.2 Online Coffman-Graham Algorithm
Definition 3.5 (Online CG).
The Online Coffman-Graham (OCG) algorithm is a modified version of the CG algorithm to compute the layering of a dynamic DAG efficiently.
Algorithm 4 gives a new layering algorithm that is based on the original Coffman-Graham algorithm. The algorithm takes as input the following: the fixed maximum width k, the set of unprocessed vertices, the current height (maximum layer), the layering assignment, the set of new vertices V', and the set of new edges E'. The algorithm has a worst-case time complexity of O(|V'|²).
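A rough sketch of width-bounded online layering in the spirit of OCG follows. This is our own simplification, not the exact Algorithm 4: a new vertex is placed on the lowest layer that is above all of its parents and still holds fewer than k vertices.

```python
from collections import Counter

def ocg_update(layer, counts, k, new_vertices, parents):
    """Width-bounded online layering sketch: place each new vertex on
    the lowest non-full layer above all of its parents.

    `counts[l]` tracks how many vertices layer l already holds;
    `layer` maps already-placed vertices to their layer numbers.
    """
    for v in new_vertices:
        ps = parents.get(v, [])
        l = 0 if not ps else 1 + max(layer[p] for p in ps)
        while counts[l] >= k:  # layer is full: push the vertex up
            l += 1
        layer[v] = l
        counts[l] += 1

layer, counts = {}, Counter()
ocg_update(layer, counts, 2, ["a", "b", "c"], {})  # width capped at 2
```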
3.3 Layer Width in BFT
This section presents our formal analysis of the layering with respect to the Byzantine fault tolerance.
In distributed database systems, Byzantine fault tolerance addresses the functioning stability of the network in the presence of at most ⌊n/3⌋ Byzantine nodes, where n is the number of nodes. A Byzantine node is a node that is compromised, malfunctioning, dead, or adversarially targeted. Such nodes are also called dishonest or faulty.
To address the BFT of our protocol, we present some formal analysis of the layering results of the OPERA chain. For simplicity, let φ_L denote the layering of G obtained from the Longest-Path-Layering algorithm. Let φ_C denote the layering of G obtained from the Coffman-Graham algorithm with some fixed width k.
3.3.1 Byzantine-free network
We consider the case in which the network is free from any faulty nodes. In this setting, all nodes are honest and thus no fork exists.
Theorem 3.1.
In the absence of dishonest nodes, the width of each layer is at most n.
Since there are n nodes, there are n leaf vertices. We prove the theorem by induction. The width of the first layer is at most n; otherwise there exists a fork, which is not possible. Assume that the theorem holds for every layer from 1 to j; that is, each of these layers has a width of at most n. We now prove that it holds for layer j + 1 as well. Since each honest node can create at most one event block on each layer, the width of each layer is at most n. We prove this by contradiction. Suppose there exists a layer of width greater than n. Then there must exist at least two events v_1 and v_2 on that layer such that v_1 and v_2 have the same creator, say node m. That means v_1 and v_2 are forks of node m, which contradicts the assumption that there is no fork. Thus, the width of each layer is at most n. The theorem is proved.
Theorem 3.2.
LongestPathLayering(G) and CoffmanGraham(G, n) compute the same layer assignment. That is, φ_L and φ_C are the same.
Proposition 3.3.
For each vertex v, φ_L(v) = φ_C(v).
In the absence of forks, the proofs of the above theorem and proposition are easily obtained, since each layer contains no more than n event blocks.
3.3.2 1/3-BFT
Now, let us consider the layering result for a network that has at most ⌊n/3⌋ dishonest nodes.
Without loss of generality, let p_f be the probability that a node can create a fork. Let k_f be the maximum number of forks a faulty node can create at a time. The number of forked events a node can create at a time is then p_f · k_f. The number of fork events at each layer is thus at most

⌊n/3⌋ · p_f · k_f.

Thus, the maximum width of each layer is given by:

W = n + ⌊n/3⌋ · p_f · k_f.
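As a numeric illustration of this width bound (the symbols follow our reconstruction of the analysis, and the numbers are made up for illustration):

```python
import math

def max_layer_width(n, p_fork, k_fork):
    """Upper bound on layer width when at most floor(n/3) faulty
    nodes each create up to k_fork forks at a time, with fork
    probability p_fork."""
    return n + math.floor(n / 3) * p_fork * k_fork

# e.g. 100 nodes, fork probability 0.1, at most 3 forks at a time:
w = max_layer_width(100, 0.1, 3)  # 100 honest slots + up to 9.9 fork events
```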
Thus, if we set the maximum width k high enough to tolerate the potential existence of forks, we can achieve BFT of the layering result. The following theorem states the BFT of the LPL and CG algorithms.
Theorem 3.4.
LongestPathLayering(G) and CoffmanGraham(G, k) compute the same layer assignment. That is, φ_L and φ_C are identical.
Proposition 3.5.
For each vertex v, φ_L(v) = φ_C(v).
With the assumption about the maximum number of forks at each layer, the above theorem and proposition are easily proved.
3.4 Root Graph
We propose a new data structure, called the root graph, which is used to determine frames.
Definition 3.6 (Root graph).
A root graph R = (V_R, E_R) is a directed graph whose vertices are roots and whose edges represent reachability between them.
In the root graph R, the set of roots V_R is a subset of V. The set of edges E_R is reduced from E, in that (r_1, r_2) ∈ E_R only if r_1 and r_2 are roots and r_1 can reach r_2 following the edges in E, i.e., r_1 → r_2.
A root graph contains the genesis vertices, i.e., the leaf event blocks. A vertex that can reach at least 2n/3 + 1 of the current set of all roots is itself a root. For each root r that a new root r' reaches, we include a new edge (r', r) in the set of root edges E_R. Note that, if a root r' reaches two other roots r_1 and r_2 of the same node, then we retain only one of the edges (r', r_1) or (r', r_2). This requirement makes sure each root of a node can have at most one edge to any other node.
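The root test described above can be sketched as follows; this is a toy reachability check (real implementations would cache reachability rather than recompute it).

```python
def reachable(v, parents):
    """All vertices reachable from v by following parent references."""
    seen, stack = set(), [v]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def is_root(v, current_roots, parents, n):
    """v is a root if it is a leaf (genesis event), or if it reaches
    at least 2n/3 + 1 of the current roots."""
    if not parents.get(v):
        return True  # leaf event blocks are roots by definition
    return len(reachable(v, parents) & current_roots) >= 2 * n / 3 + 1
```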
Figure (a) shows an example of the HOPERA chain, i.e., the hierarchical graph resulting from applying a layering algorithm to the OPERA chain, on a single node in a 5-node network. Vertices of the same layer are placed on a horizontal line. There are 14 layers, labelled L0, L1, and so on. Roots are shown in red. Figure (b) gives an example of the root graph constructed from the HOPERA chain.
3.5 Frame Assignment
We now present a deterministic approach to frame assignment, which assigns each event block a unique frame number. First, we show how to assign frames to root vertices via so-called root-layering. Second, we assign non-root vertices to the determined frame numbers with respect to the topological ordering of the vertices.
Definition 3.7 (Rootlayering).
A root-layering assigns each root a unique layer (frame) number.
Here, we model the root-layering problem as a modified version of the original layering problem. For a root graph R, a root-layering is a layering ψ that assigns each root vertex r of R an integer ψ(r) such that:

ψ(r_1) > ψ(r_2), for each edge (r_1, r_2) ∈ E_R.

if a root r reaches at least 2n/3 of the roots of frame f_i, then ψ(r) = i + 1.
If ψ(r) = i, then r belongs to the i-th frame f_i of R. In terms of the happened-before relation, ψ(r_1) > ψ(r_2) if r_1 → r_2. Similarly to the original layering problem, root-layering partitions the set of root vertices into a finite number of nonempty disjoint subsets (called frames) f_1, f_2, …, f_k, such that V_R = f_1 ∪ f_2 ∪ … ∪ f_k. Each vertex r is assigned to a root-layer f_i, where 1 ≤ i ≤ k, such that for every edge (r_1, r_2) ∈ E_R with r_1 ∈ f_i and r_2 ∈ f_j, we have i > j.
Figure (c) depicts an example of root-layering for the root graph in Figure (b). There are five frames assigned to the root graph, labelled F0, F1, F2, F3 and F4.
We now present our new approach that uses the root-layering information to assign frames to non-root vertices.
Definition 3.8 (Frame assignment).
A frame assignment σ assigns every vertex v in the HOPERA chain a frame number σ(v) such that

σ(v_1) ≥ σ(v_2) for every directed edge (v_1, v_2) in G;

for each root r, σ(r) = ψ(r);

for every pair of vertices v_1 and v_2: if v_1 → v_2, then σ(v_1) ≥ σ(v_2); if v_2 → v_1, then σ(v_2) ≥ σ(v_1).
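The frame-assignment conditions above can be satisfied by letting each non-root vertex inherit the largest frame among the vertices it references. The following is a sketch under the assumption that the root frames have already been computed by the root-layering:

```python
def assign_frames(parents, root_frame):
    """Frame assignment sketch: roots keep their root-layering frame;
    every other vertex takes the maximum frame of its parents, so
    frame numbers never decrease along edges."""
    frame = {}

    def f(v):
        if v not in frame:
            if v in root_frame:
                frame[v] = root_frame[v]
            else:
                frame[v] = max(f(p) for p in parents[v])
        return frame[v]

    for v in parents:
        f(v)
    return frame
```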
3.6 Consensus
The protocol uses several novel concepts, such as the OPERA chain, HOPERA chain, root graph, and frame assignment, to define a deterministic solution to the consensus problem. We now present our formal model for the consistency of knowledge across the distributed network of nodes. We use the consistent-cut model described in Section 2.6. For more details, see the CCK paper [24].
For an OPERA chain G, let G[v] denote the subgraph of G that contains the vertices and edges reachable from v.
Definition 3.9 (Consistent chains).
Two chains G_1 and G_2 are consistent if, for any event v contained in both chains, G_1[v] = G_2[v]. This is denoted by G_1 ∼ G_2.
For any two nodes, if their OPERA chains contain the same event v, then they have the same hashes contained within v.
A node must already have the references of v in order to accept v. Thus, both OPERA chains must contain the references of v. Presuming that the cryptographic hashes are secure, the references must be the same between nodes. By induction, all ancestors of v must be the same. When two consistent chains contain the same event v, both chains contain the same set of ancestors of v, with the same reference and self-ref edges between those ancestors. Consequently, the two OPERA chains are consistent.
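Definition 3.9 can be checked mechanically on toy chains. In this sketch events are strings, a chain is a parent map, and G[v] is the parent map restricted to v's ancestors:

```python
def subgraph(v, chain):
    """G[v]: the part of the parent map reachable from v, v included."""
    sub, stack = {}, [v]
    while stack:
        u = stack.pop()
        if u not in sub:
            sub[u] = chain[u]
            stack.extend(chain[u])
    return sub

def consistent(g1, g2):
    """Two chains are consistent if every event contained in both
    has the same subgraph in both chains."""
    return all(subgraph(v, g1) == subgraph(v, g2)
               for v in g1.keys() & g2.keys())
```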
Definition 3.10 (Global OPERA chain).
A global consistent chain G_C is a chain such that G_i ⊑ G_C for all i.
Let G' ⊑ G denote that G' is a subgraph of G. Some properties of G_C are given as follows:

(∀ i)(G_i ⊑ G_C).

(∀ v ∈ G_C)(∃ i)(v ∈ G_i).

(∀ v ∈ G_C)(∀ i)((v ∈ G_i) ⇒ (G_i[v] = G_C[v])).
The layering of consistent OPERA chains is itself consistent.
Definition 3.11 (Consistent layering).
For any two consistent OPERA chains G_1 and G_2, the layering results φ_1 and φ_2 are consistent if φ_1(v) = φ_2(v) for any vertex v common to both chains. This is denoted by φ_1 ∼ φ_2.
Theorem 3.6.
For two consistent OPERA chains G_1 and G_2, the resulting HOPERA chains using layering φ are consistent.
The theorem states that, for any event v contained in both OPERA chains, φ_1(v) = φ_2(v). Since G_1 ∼ G_2, we have G_1[v] = G_2[v]. Thus, the height of v is the same in both G_1 and G_2, and hence the layer assigned by φ is the same for v in both chains.
Proposition 3.7 (Consistent root graphs).
Two root graphs R_1 and R_2 from two consistent HOPERA chains are consistent.
Definition 3.12 (Consistent root).
Two chains G_1 and G_2 are root consistent if, for every v contained in both chains, whenever v is a root of the i-th frame in G_1, v is also a root of the i-th frame in G_2.
Proposition 3.8.
Any two consistent OPERA chains G_1 and G_2 are root consistent.
By chain consistency, if v belongs to both chains, then G_1[v] = G_2[v]. We prove the proposition by induction. For i = 0, the first root set is the same in both G_1 and G_2; hence, the claim holds for i = 0. Suppose that the proposition holds for every i from 0 to k. We prove that it also holds for i = k + 1. Suppose that v is a root of frame f_{k+1} in G_1. Then there exists a set S of more than 2n/3 roots of frame f_k in G_1 such that every s ∈ S happens before v. As G_1 ∼ G_2 and v is in G_2, every such s is also in G_2[v]. Since the proposition holds for i = k, each s that is a root of frame f_k in G_1 is also a root of frame f_k in G_2. Hence, more than 2n/3 roots of frame f_k happen before v in G_2, so v belongs to frame f_{k+1} in G_2.
Thus, all nodes have the same consistent root sets, which are the root sets in G_C. Frame numbers are consistent for all nodes.
Proposition 3.9 (Consistent Clotho).
A root r in a later frame can nominate a root r' of frame f_i as Clotho if more than 2n/3 roots in frame f_{i+1} dominate r' and r dominates those roots in frame f_{i+1}.
Proposition 3.10 (Consistent Atropos).
An Atropos is a Clotho that is decided as final. Event blocks in the subgraph rooted at an Atropos are also final. Atropos blocks form the Main chain, which allows consensus time ordering and responses to attacks.
Proposition 3.11 (Consistent Mainchain).
The Main chain is a special subgraph of the OPERA chain that stores the Atropos vertices.
An event block is called a root if it is linked to more than two-thirds of the previous roots. A leaf vertex is itself a root. With root event blocks, we can keep track of “vital” blocks that 2/3 of the network agree on. Each participant node computes the same Main chain, which is the consensus of all nodes, from its own event blocks.
3.7 Topological sort
In this section, we present an approach to ordering the event blocks that reach finality. The new approach is deterministic, given any deterministic layering algorithm.
After layers are assigned to obtain the HOPERA chain, root graphs are constructed. Then frames are determined for every event block. Certain roots then become Clothos, which are known by 2n/3 + 1 of the roots that are in turn known by 2n/3 + 1 of the nodes. The final step is to compute the ordering of the event blocks that are final.
Figure 6 shows a HOPERA chain of event blocks. The Clotho/Atropos vertices are shown in red. For each Atropos vertex, we compute the subgraph under it; these subgraphs are shown in different colors. We process the Atropos vertices from lower layers to higher layers.
Our algorithm for topologically ordering the finalized event blocks is given in Algorithm 5. The algorithm requires that the HOPERA chain and the layering assignment φ are precomputed. The algorithm takes as input the set of Atropos vertices. We first order the Atropos vertices using a sort function, which sorts vertices based on their layer, then their Lamport timestamp, and then the hash information of the event blocks. Second, we process every Atropos in the sorted order and compute the subgraph under it. The set of vertices S contains the vertices from that subgraph that have not yet been processed. We then apply a topological sort to order the vertices in S and append the ordered result to the final ordered list.
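The ordering step can be sketched as follows; this is our own simplification of Algorithm 5, where `sort_key` stands in for the (layer, Lamport timestamp, hash) triple used in the paper:

```python
def final_order(atropos, parents, sort_key):
    """Process Atropos vertices in sorted order; for each, emit the
    not-yet-processed vertices of its subgraph in topological order
    (parents before children)."""
    processed, order = set(), []

    def visit(v):
        if v not in processed:
            processed.add(v)
            for p in sorted(parents.get(v, []), key=sort_key):
                visit(p)
            order.append(v)

    for a in sorted(atropos, key=sort_key):
        visit(a)
    return order
```

Because every vertex is appended only after all of its parents, the result is a topological order, and ties are broken deterministically by `sort_key`.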
With the final ordering computed using the above algorithm, we can assign the consensus time to the finally ordered event blocks.
4 ONLAY Framework
We now present our ONLAY framework, which is a practical DAG-based solution to distributed ledgers.
Algorithm 6 shows the main function, which is the entry point for launching the ONLAY framework. There are two main loops in the main function; they run asynchronously in parallel. The first loop attempts to create new event blocks and then communicates with other nodes about the new blocks. The second loop accepts incoming sync requests from other nodes: the node retrieves the incoming event blocks and sends responses consisting of its known events.
Specifically, in the first loop, a node makes synchronization requests to other nodes and then creates event blocks. Once created, the blocks are broadcast to all other nodes. In line 3, a node runs the node selection algorithm to select the other nodes it needs to communicate with. In lines 4 and 5, the node sends synchronization requests to get the latest OPERA chain from the other nodes. Once it receives the latest event blocks from the responses, the node creates a new event block (line 6). The node then broadcasts the created event block to all other nodes (line 7). After the new event block is created, the node updates its OPERA chain and then applies layering (line 8); after layering is performed, the HOPERA chain is obtained. It then checks whether the block is a root (line 9) and computes the root graph (line 10). In line 11, it computes the global state from its HOPERA chain. The node then decides which roots are Clothos (line 12) and which of those become Atropos (line 13). Once the Atropos vertices are confirmed, the algorithm runs a topological sorting algorithm to get the final ordering of the vertices at consensus (line 14). The main chain is constructed from the Atropos vertices that are found (line 15).
4.1 Peer selection algorithm
In order to create a new event block, a node needs to synchronize with other nodes to obtain their latest top event blocks. The peer selection algorithm computes the set of nodes to which a node will send synchronization requests.
There are multiple ways to select peers from the set of nodes. A simple approach is random selection from the pool of nodes. A more complex approach is to define a cost model for node selection. In general, the protocol does not depend on how peer nodes are selected.
In ONLAY, we have tried a few peer selection algorithms, described as follows: (1) Random: randomly select a peer from the n peers; (2) Least Used: select the least used peer(s); (3) Most Used (MFU): select the most used peer(s); (4) Fair: select a peer that aims for a balanced distribution; (5) Smart: select a peer based on other criteria, such as successful throughput rates or the number of own events.
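The first three strategies can be sketched as follows; the function and variable names are ours, not the framework's API:

```python
import random

def select_peer(peers, usage, strategy="random"):
    """Pick a sync peer. `usage[p]` counts past selections of peer p."""
    if strategy == "random":
        return random.choice(peers)
    if strategy == "least_used":
        return min(peers, key=lambda p: usage.get(p, 0))
    if strategy == "most_used":
        return max(peers, key=lambda p: usage.get(p, 0))
    raise ValueError(f"unknown strategy: {strategy}")
```

The Fair and Smart strategies would replace the selection key with a balance- or cost-based score, while keeping the same interface.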
4.2 Peer synchronization
Now we describe the steps to synchronize events between the nodes, as presented in Algorithm 7. Each node selects a random peer and sends a sync request indicating the local known events of