1 Introduction
Blockchain has emerged as a technology for secure, decentralized transaction ledgers, with broad applications in financial systems, supply chains, and health care. Byzantine fault tolerance [2] is addressed in distributed database systems, in which up to one-third of the participant nodes may be compromised. Consensus algorithms [3] ensure the integrity of transactions between participants over a distributed network [2]; achieving consensus is equivalent to proving Byzantine fault tolerance in distributed database systems [4, 5].
A large number of consensus algorithms have been proposed. For example, the original Nakamoto consensus protocol in Bitcoin uses Proof of Work (PoW) [6]. Proof of Stake (PoS) [7, 8] uses participants' stakes to generate blocks. Our previous paper gives a survey of earlier DAG-based approaches [1].
In the previous paper [1], we introduced a new consensus protocol, called Lachesis. The Lachesis protocol is a DAG-based asynchronous non-deterministic protocol that guarantees pBFT. Lachesis generates each block asynchronously and uses the OPERA chain (DAG) for faster consensus by confirming how many nodes share the blocks.
The Lachesis protocol as previously proposed is a set of protocols that create a directed acyclic graph for distributed systems. Each node can receive transactions and batch them into an event block. An event block is then shared with its peers. When peers communicate, they share this information again, and thus spread it through the network. In BFT systems, we would use a broadcast voting approach and ask each node to vote on the validity of each block. This approach is synchronous in nature. Instead, we proposed an asynchronous system in which we leverage the concepts of distributed common knowledge, dominator relations in graph theory, and broadcast-based gossip to achieve a local view that is, with high probability, a global view. It accomplishes this asynchronously, meaning that we can increase throughput near-linearly as nodes enter the network.
In this work, we propose a further enhancement on these concepts and we formalize them so that they can be applied to any asynchronous distributed system.
1.1 Contributions
In summary, this paper makes the following contributions:

We introduce the n-row flag table for faster root selection in the Lachesis protocol.

We define continuous consistent cuts of a local view to achieve consensus.

We present a proof of how domination relationships can be used to share information.

We formalize our proofs so that they can be applied to any generic asynchronous DAG solution.
1.2 Paper structure
2 Preliminaries
The protocol is run via nodes representing users' machines, which together create a network. The basic units of the protocol are called event blocks: a data structure created by a single node to share transaction and user information with the rest of the network. These event blocks reference previous event blocks that are known to the node. This flow or stream of information creates a sequence of history.
The history of the protocol can be represented by a directed acyclic graph G = (V, E), where V is a set of vertices and E is a set of edges. Each vertex in a row (node) represents an event. Time flows left-to-right in the graph, so left vertices represent earlier events in history.
For a graph G, a path in G is a sequence of vertices (v1, v2, …, vk) obtained by following the edges in E. Let vc be a vertex in G. A vertex vp is a parent of vc if there is an edge from vc to vp. A vertex va is an ancestor of vc if there is a path from vc to va.
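As a tiny illustration, ancestors can be computed by following reference edges. The event names and the dict-of-parents representation below are ours, not part of the protocol:

```python
# Each event maps to the list of events it references (its parents).
parents = {
    "leaf": [],
    "b": ["leaf"],
    "c": ["b"],
    "d": ["b"],
}

def ancestors(v):
    """All events reachable from v by following reference edges."""
    seen, stack = set(), list(parents[v])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return seen

print(ancestors("c") == {"b", "leaf"})  # True: b is c's parent, leaf an ancestor
```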
Figure 1 shows an example of an OPERA chain (DAG) constructed through the Lachesis protocol. Event blocks are represented by circles. Blocks of the same frame have the same color.
2.1 Basic Definitions
(Lachesis) The set of protocols
(Node) Each machine that participates in the Lachesis protocol is called a node. Let n denote the total number of nodes.
(k) A constant defined in the system.
(Peer node) A node has k peer nodes.
(Process) A process p represents a machine or a node. The process identifier of a process is i. A set P = {1, …, n} denotes the set of process identifiers.
(Channel) A process i can send messages to a process j if there is a channel (i, j). Let C ⊆ {(i, j) s.t. i, j ∈ P} denote the set of channels.
(Event block) Each node can create event blocks and send (receive) messages to (from) other nodes. The structure of an event block includes the signature, generation time, transaction history, and hash information of its references.
All nodes can create event blocks. The information of the referenced event blocks can be copied by each node. The first event block of each node is called a leaf event.
Suppose a node ni creates an event vc after an event vs in ni. Each event block has exactly k references. One of the references is a self-reference, and the other k−1 references point to the top events of ni's k−1 peer nodes.
(Top event) An event v is a top event of a node ni if there is no other event in ni referencing v.
(Indegree Vector) The indegree vector stores the number of edges from event blocks created by other nodes to the top event block of this node. The top event block is the most recently created event block of this node.
(Ref) An event vr is called a “ref” of an event vc if the reference hash of vc points to the event vr. Denoted by vc ↪r vr. For simplicity, we can use ↪ to denote a reference relationship (either ↪r or ↪s).
(Selfref) An event vs is called the “selfref” of an event vc if the selfref hash of vc points to the event vs. Denoted by vc ↪s vs.
(k references) Each event block has at least k references. One of the references is a self-reference, and the other k−1 references point to the top events of the node's k−1 peer nodes.
(Selfancestor) An event block va is a self-ancestor of an event block vc if there is a sequence of events such that vc ↪s v1 ↪s … ↪s va. Denoted by vc ⤳s va.
(Ancestor) An event block va is an ancestor of an event block vc if there is a sequence of events such that vc ↪ v1 ↪ … ↪ va. Denoted by vc ⤳ va.
For simplicity, we simply use vc ⤳ va to refer to both the ancestor and self-ancestor relationships, unless we need to distinguish the two cases.
(Flag Table) The flag table is an n × k matrix, where n is the number of nodes and k is the number of roots that an event block can reach. If an event block created by the ith node can reach the jth root, then the flag table stores the hash value of the jth root.
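A flag table can be sketched as a mapping from root identifiers to root hashes; the dict representation, names, and quorum check below are ours (the paper's structure is an n × k matrix):

```python
# Flag table of one event block: root identifier -> root hash (illustrative).
flag_table = {
    ("n1", "frame1"): "hash_r1",
    ("n2", "frame1"): "hash_r2",
    ("n3", "frame1"): "hash_r3",
}

def reach_count(table):
    """Number of roots this event block can reach (sum of all reachabilities)."""
    return len(table)

def is_root_candidate(table, n):
    """True if the event block reaches more than 2n/3 roots of the prior frame."""
    return reach_count(table) > 2 * n / 3

print(reach_count(flag_table), is_root_candidate(flag_table, 3))  # 3 True
```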
2.2 Lamport timestamps
Our Lachesis protocol relies on Lamport timestamps to define a topological ordering of event blocks in the OPERA chain. By using Lamport timestamps, we do not rely on physical clocks to determine a partial ordering of events.
The “happened before” relation, denoted by →, gives a partial ordering of events from a distributed system of nodes. Each node (also called a process) is identified by its process identifier i. For a pair of event blocks a and b, the relation “→” satisfies: (1) If a and b are events of the same process, and a comes before b, then a → b. (2) If a is the send(m) event by one process and b is the receive(m) event by another process, then a → b. (3) If a → b and b → c, then a → c. Two distinct events a and b are said to be concurrent if neither a → b nor b → a.
For an arbitrary total ordering ≺ of the processes, a relation ⇒ is defined as follows: if a is an event in process i and b is an event in process j, then a ⇒ b if and only if either (i) Ci(a) < Cj(b) or (ii) Ci(a) = Cj(b) and i ≺ j, where Ci denotes the logical clock of process i. This defines a total ordering, and the Clock Condition implies that if a → b then a ⇒ b.
We use this total ordering in our Lachesis protocol. This ordering is used to determine consensus time, as described in Section 3.
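As a concrete illustration, the rules above can be sketched as a minimal Lamport clock; the class and method names are ours, not part of the protocol:

```python
class LamportClock:
    """Minimal Lamport clock sketch; process ids break ties for the total order."""
    def __init__(self, pid):
        self.pid = pid
        self.time = 0

    def local_event(self):
        self.time += 1                             # rule (1): local events advance the clock
        return (self.time, self.pid)

    def send(self):
        self.time += 1
        return self.time                           # timestamp attached to the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1   # rule (2): receive follows send
        return (self.time, self.pid)

a, b = LamportClock(1), LamportClock(2)
t = a.send()                # a's clock: 1
e_b = b.receive(t)          # b's clock: 2
e_a = a.local_event()       # a's clock: 2 -- concurrent with e_b
print(sorted([e_b, e_a]))   # [(2, 1), (2, 2)]: tie broken by process id
```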
(HappenedImmediateBefore) An event block vx is said to have HappenedImmediateBefore an event block vy if vx is a (self-)ref of vy. Denoted by vx ↦ vy.
(Happenedbefore) An event block vx is said to have HappenedBefore an event block vy if vx is a (self-)ancestor of vy. Denoted by vx → vy.
Happened-before is the relationship between event blocks across nodes. If there is a path from an event block vx to vy following the reference edges, then vx Happened-before vy. “vx Happened-before vy” means that the node creating vy knows the event block vx. This relation is the transitive closure of Happened-immediate-before. Thus, an event vx happened before an event vy if one of the following holds: (a) vx ↦ vy, (b) vx ↦ vz and vz → vy, or (c) vx → vz and vz → vy. The happened-before relation of events forms a directed acyclic graph G′ such that each edge in G′ has the reverse direction of the corresponding edge in G.
(Concurrent) Two event blocks vx and vy are said to be concurrent if neither of them happened before the other. Denoted by vx ∥ vy.
Given two vertices vx and vy both contained in two OPERA chains (DAGs) G1 and G2 on two nodes, we have the following:

(1) vx → vy in G1 if vx → vy in G2.

(2) vx ∥ vy in G1 if vx ∥ vy in G2.
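Under the ancestor view of the DAG, happened-before is just reachability over reference edges, and concurrency is its negation in both directions. The event names and structure below are illustrative:

```python
parents = {"x": [], "y": ["x"], "z": ["x"]}   # y and z both reference x

def happened_before(a, b):
    """True if a -> b, i.e. a is reachable from b via reference edges."""
    seen, stack = set(), list(parents[b])
    while stack:
        u = stack.pop()
        if u == a:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return False

def concurrent(a, b):
    return not happened_before(a, b) and not happened_before(b, a)

print(happened_before("x", "y"), concurrent("y", "z"))  # True True
```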
2.3 State Definitions
Each node has a local state consisting of a collection of histories, messages, event blocks, and peer information; we describe the components of each below.
(State) A state of a process i is denoted by si.
(Local State) A local state of a process consists of a sequence of event blocks si = v0, v1, …, vm.
In a DAG-based protocol, each event block is valid only if its reference blocks exist before it. From a local state si, one can reconstruct a unique DAG. That is, the mapping from a local state into a DAG is injective (one-to-one). Thus, for Fantom, we can simply denote the jth local state of a process i by the DAG Gi(j) (often we simply use Gi to denote the current local state of a process i).
(Action) An action is a function from one local state to another local state.
Generally speaking, an action can be one of: a send(m) action where m is a message, a receive(m) action, and an internal action. A message m is a triple (i, j, B) where i ∈ P is the sender of the message, j ∈ P is the message recipient, and B is the body of the message. Let M denote the set of messages. In the Lachesis protocol, B consists of the content of an event block v.
Semanticswise, in Lachesis, there are two actions that can change a process’s local state: creating a new event and receiving an event from another process.
(Event) An event is a tuple ⟨s, α, s′⟩ consisting of a state, an action, and a state. Sometimes, the event can be represented by the end state s′.
The jth event in the history of process i is ⟨si(j−1), α, si(j)⟩, denoted by ei(j).
(Local history) A local history hi of process i is a (possibly infinite) sequence of alternating local states and events, beginning with a distinguished initial state. Let Σi denote the set of possible local histories for each process i in P.
The state of a process can be obtained from its initial state and the sequence of actions or events that have occurred up to the current state. In the Lachesis protocol, we use append-only semantics. A local history is thus equivalently expressed as:

hi = Gi(0), Gi(1), Gi(2), …

where Gi(j) is the jth local DAG (local state) of the process i.
(Run) Each asynchronous run is a vector of local histories, one per process. Denoted by r = ⟨h1, h2, …, hn⟩.
Let R denote the set of asynchronous runs. We can now use Lamport's theory to talk about global states of an asynchronous system. A global state of run r is an n-vector of prefixes of local histories of r, one prefix per process. The happened-before relation can be used to define a consistent global state, often termed a consistent cut, as follows.
2.4 Consistent Cut
Consistent cuts represent the concept of scalar time in distributed computation: they make it possible to distinguish between a “before” and an “after”.
In the Lachesis protocol, an OPERA chain is a directed acyclic graph (DAG) G = (V, E), where V is a set of vertices and E is a set of edges. A DAG is a directed graph with no cycles: there is no path that has its source and destination at the same vertex. A path is a sequence of vertices (v1, v2, …, vk) that uses no edge more than once.
An asynchronous system consists of the following sets: a set P of process identifiers; a set C of channels; a set Σi of possible local histories for each process i; a set R of asynchronous runs; and a set M of all messages.
Each process/node in Lachesis selects k other nodes as peers. For certain gossip protocols, nodes may be constrained to gossip only with their peers. In such a case, the set of channels can be modelled as follows: if node i selects node j as a peer, then (i, j) ∈ C. In general, one can express the history of each node in a DAG-based protocol in general, or in the Lachesis protocol in particular, in the same manner as in the CCK paper [9].
(Consistent cut) A consistent cut of a run r is any global state such that if ex → ey and ey is in the global state, then ex is also in the global state. Denoted by c(r).
The concept of a consistent cut formalizes such a global state of a run. A consistent cut consists of all consistent DAG chains. If a received event block exists in the global state, then the original (sent) event block exists in it as well. Note that a consistent cut is simply a vector of local states; we will use the notation (c(r))i to indicate the local state of process i in cut c of run r.
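The downward-closure condition can be checked directly on a sketch DAG; the structures and event names here are ours:

```python
parents = {"a": [], "b": ["a"], "c": ["b"]}

def all_ancestors(v):
    seen, stack = set(), list(parents[v])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return seen

def is_consistent_cut(cut):
    """A cut is consistent if every event's ancestors are also in the cut."""
    return all(all_ancestors(v) <= cut for v in cut)

print(is_consistent_cut({"a", "b"}))  # True
print(is_consistent_cut({"a", "c"}))  # False: c is included but b is not
```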
A message chain of an asynchronous run is a sequence of messages m1, m2, …, mk such that, for all i, receive(mi) → send(mi+1). Consequently, send(m1) → receive(m1) → send(m2) → … → receive(mk).
The formal semantics of an asynchronous system is given via the satisfaction relation ⊨. Intuitively (r, c) ⊨ φ, “(r, c) satisfies φ,” if fact φ is true in cut c of run r.
We assume that we are given a function π that assigns a truth value to each primitive proposition p. The truth of a primitive proposition p in (r, c) is determined by π and c. This defines (r, c) ⊨ p.
(Equivalent cuts) Two cuts c and c′ are equivalent with respect to process i if the local state of i is the same in both cuts: (r, c) ∼i (r′, c′) ⟺ (c(r))i = (c′(r′))i.
(Ki: i knows) Ki(φ) represents the statement “φ is true in all possible consistent global states that include i's local state”.
(Pi: i partially knows) Pi(φ) represents the statement “there is some consistent global state in this run that includes i's local state, in which φ is true”.
(Majority concurrently knows) The next modal operator is written MC and stands for “majority concurrently knows.” The definition of MC(φ) is as follows:

MC(φ) = ⋀i∈S Ki Pi(φ),

where S ⊆ P and |S| > 2n/3.
This is adapted from the “everyone concurrently knows” in CCK paper [9].
In the presence of one-third of faulty nodes, the original operator “everyone concurrently knows” is sometimes not feasible.
Our modal operator fits precisely the semantics for BFT systems, in which unreliable processes may exist.
(Concurrent common knowledge) The last modal operator is concurrent common knowledge (CCK), denoted by CC. CC(φ) is defined as a fixed point of MC(φ ∧ X).
CCK defines a state of process knowledge that implies that all processes are in that same state of knowledge, with respect to φ, along some cut of the run. In other words, we want a state of knowledge X satisfying X = MC(φ ∧ X). CC will be defined semantically as the weakest such fixed point, namely as the greatest fixed point of MC(φ ∧ X). It therefore satisfies: CC(φ) ⇒ MC(φ ∧ CC(φ)).
Thus, Pi(φ) states that there is some cut in the same asynchronous run including i's local state, such that φ is true in that cut.
Note that CC(φ) implies MC(φ ∧ CC(φ)). But it is not the case, in general, that MC(φ) implies φ, or even that CC(φ) implies φ. The truth of Pi(φ) is determined with respect to some cut (r, c). A process cannot distinguish which cut, of the perhaps many cuts that are in the run and consistent with its local state, satisfies φ; it can only know the existence of such a cut.
(Global fact) Fact φ is valid in system R, denoted by R ⊨ φ, if φ is true in all cuts of all runs of R.
Fact φ is valid, denoted ⊨ φ, if φ is valid in all systems, i.e. R ⊨ φ for all R.
(Local fact) A fact φ is local to process i in system R if R ⊨ (φ ⇒ Ki φ).
2.5 Dominator (graph theory)
In a graph, a dominator is a relation between two vertices. A vertex v is dominated by another vertex d if every path in the graph from the root to v has to go through d. Furthermore, the immediate dominator of a vertex v is the last of v's dominators, through which every path in the graph to v has to go.
(Pseudo top) A pseudo vertex, called top, is the parent of all top event blocks. Denoted by ⊤.
(Pseudo bottom) A pseudo vertex, called bottom, is the child of all leaf event blocks. Denoted by ⊥.
With the pseudo vertices, ⊥ happened-before all event blocks, and all event blocks happened-before ⊤. That is, for every event v, ⊥ → v and v → ⊤.
(Dom) An event vd dominates an event vx if every path from ⊥ to vx must go through vd. Denoted by vd ≫ vx.
(Strict dom) An event vd strictly dominates an event vx if vd ≫ vx and vd does not equal vx. Denoted by vd ≫s vx.
(Domfront) A vertex vd is said to “domfront” a vertex vx if vd dominates an immediate predecessor of vx, but vd does not strictly dominate vx. Denoted by vd ≫f vx.
(Dominance frontier) The dominance frontier of a vertex vd is the set of all vertices vx such that vd ≫f vx. Denoted by DF(vd).
From the above definitions of domfront and dominance frontier, the following holds: if vd ≫f vx, then vx ∈ DF(vd).
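Dominator sets on a small DAG can be computed with the classic iterative data-flow algorithm. This is a sketch with illustrative vertex names, where "bot" stands in for the pseudo bottom:

```python
preds = {"bot": [], "a": ["bot"], "b": ["bot"], "c": ["a", "b"]}
order = ["bot", "a", "b", "c"]          # a topological order from the root

dom = {v: set(order) for v in order}    # start from "everything dominates v"
dom["bot"] = {"bot"}
changed = True
while changed:
    changed = False
    for v in order[1:]:
        # v's dominators: v itself plus whatever dominates all predecessors.
        new = set.intersection(*(dom[p] for p in preds[v])) | {v}
        if new != dom[v]:
            dom[v], changed = new, True

print(sorted(dom["c"]))  # ['bot', 'c']: neither a nor b dominates c
```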
2.6 OPERA chain (DAG)
The core idea of the Lachesis protocol is to use a DAG-based structure, called the OPERA chain, for our consensus algorithm. In the Lachesis protocol, a (participant) node is a server (machine) of the distributed system. Each node can create messages, send messages to, and receive messages from, other nodes. The communication between nodes is asynchronous.
Let n be the number of participant nodes. For consensus, the algorithm examines whether an event block is dominated by more than 2n/3 nodes. The happened-before relation of an event block with 2n/3 nodes means that more than two-thirds of all nodes in the OPERA chain know the event block.
The OPERA chain (DAG) is the local view of the DAG held by each node; this local view is used to identify topological ordering, select Clotho, and create time consensus through Atropos selection. The OPERA chain is a DAG G = (V, E) consisting of vertices V and edges E. Each vertex is an event block. An edge (vx, vy) ∈ E refers to a hashing reference from vx to vy; that is, vx ↪ vy.
(Leaf) The first created event block of a node is called a leaf event block.
(Root) The leaf event block of a node is a root. When an event block v can reach more than 2n/3 of the roots in the previous frames, v becomes a root.
(Root set) The set of all first event blocks (leaf events) of all nodes forms the first root set R1 (|R1| = n). The root set Rs consists of all roots ri such that ri ∉ Rk for k = 1..(s−1), and ri can reach more than 2n/3 of the other roots in the previous frame Rs−1.
(Frame) A frame fi is a natural number that separates root sets. The root set at frame fi is denoted by Ri.
(Consistent chains) OPERA chains G1 and G2 are consistent if, for any event v contained in both chains, G1[v] = G2[v]. Denoted by G1 ∼ G2.
When two consistent chains contain the same event v, both chains contain the same set of ancestors for v, with the same reference and selfref edges between those ancestors.
If two nodes have OPERA chains containing event v, then they have the same hashes contained within v. A node will not accept an event during a sync unless that node already has all references for that event, so both OPERA chains must contain the references for v. The cryptographic hashes are assumed to be secure; therefore the references must be the same. By induction, all ancestors of v must be the same. Therefore, the two OPERA chains are consistent.
(Creator) If a node nx creates an event block v, then the creator of v, denoted by cr(v), is nx.
(Consistent chain) A global consistent chain GC is a chain such that GC ⊑ Gi for all i.

We denote G1 ⊑ G2 to stand for “G1 is a subgraph of G2”. Some properties of GC are given as follows:

(1) ∀i (GC ⊑ Gi).

(2) ∀v ∈ GC ∀i (v ∈ Gi).

(3) ∀v ∈ GC ∀i (GC[v] = Gi[v]).
(Consistent root) Two chains G1 and G2 are root consistent if, for every v contained in both chains, whenever v is a root of the jth frame in G1, v is also a root of the jth frame in G2.
By consistent chains, if G1 ∼ G2 and v belongs to both chains, then G1[v] = G2[v]. We can prove the proposition by induction. For j = 0, the first root set is the same in both G1 and G2; hence, it holds for j = 0. Suppose that the proposition holds for every j from 0 to k. We prove that it also holds for j = k + 1. Suppose that v is a root of frame fk+1 in G1. Then there exists a set S reaching more than 2/3 of the members in G1 of frame fk such that every u ∈ S happened before v. As G1 ∼ G2 and v is in G2, every u ∈ S happened before v in G2. Since the proposition holds for j = k, as each u is a root of frame fk in G1, u is a root of frame fk in G2. Hence, the set S of more than 2/3 of the members happens before v in G2. So v belongs to Rk+1 in G2.
Thus, all nodes have the same consistent root sets, which are the root sets in GC. Frame numbers are consistent for all nodes.
(Flag table) A flag table stores the reachability from an event block to other roots. The sum of all reachabilities, namely all values in the flag table, indicates the number of roots reachable from an event block.
(Consistent flag table) For any top event v contained in both OPERA chains G1 and G2, with G1 ∼ G2, the flag tables of v are consistent if they are the same in both chains.
From the above, the root sets of G1 and G2 are consistent. If v is contained in both G1 and G2, and v is a root of the jth frame in G1, then v is a root of the jth frame in G2. Since G1 ∼ G2, we have G1[v] = G2[v]. The reference event blocks of v are the same in both chains. Thus, the flag tables of v in both chains are the same.
(Clotho) A root r in the frame fi+2 can nominate a root c in the frame fi as a Clotho if more than 2n/3 roots in the frame fi+1 dominate c and r dominates those roots in the frame fi+1.
Each node nominates a root as Clotho via the flag table. If all nodes have an OPERA chain with the same shape, the values in the flag tables will be equal across OPERA chains. Thus, all nodes nominate the same root as Clotho, since the OPERA chains of all nodes have the same shape.
(Atropos) An Atropos is assigned consensus time through the Lachesis consensus algorithm and is utilized for determining the order between event blocks. Atropos blocks form a Mainchain, which allows time consensus ordering and responses to attacks.
For any root set in the frame fi, the time consensus algorithm checks whether more than 2n/3 roots in the frame select the same value. Each node selects one of the values collected from the root set in the previous frame via the time consensus algorithm and the Reselection process. Based on the Reselection process, the time consensus algorithm can reach agreement. However, there is a possibility that a consensus time candidate does not reach agreement [10]. To solve this problem, the time consensus algorithm periodically includes a minimal-value selection frame. In the minimal value selection algorithm, each root selects the minimum value among the values collected from the previous root set. Thus, the consensus time reaches agreement via the time consensus algorithm.
(Mainchain (Blockchain)) For faster consensus, the Mainchain is a special subgraph of the OPERA chain (DAG).
The Main chain, a core subgraph of the OPERA chain, plays the important role of ordering the event blocks. The Main chain stores shortcuts to connect between the Atropos blocks. After the topological ordering is computed over all event blocks through the Lachesis protocol, Atropos blocks are determined and form the Main chain. To improve path searching, we use a flag table, a local hash table structure used as a cache to quickly determine the closest root to an event block.
In the OPERA chain, an event block is called a root if the event block is linked to more than two-thirds of the previous roots. A leaf vertex is also a root itself. With root event blocks, we can keep track of “vital” blocks that more than two-thirds of the network agree on.
Figure 2 shows an example of the Main chain composed of Atropos event blocks. In particular, the Main chain consists of Atropos blocks that are derived from root blocks and so are agreed upon by more than two-thirds of the network nodes. Thus, this guarantees that at least 2n/3 of the nodes have come to consensus on the Main chain.
Each participant node has a copy of the Main chain and can search the consensus position of its own event blocks. Each event block can compute its own consensus position by checking the nearest Atropos event block. Assigning and searching consensus positions are introduced in the consensus time selection section.
The Main chain provides quick access to the previous transaction history to efficiently process new incoming event blocks. From the Main chain, information about unknown participants or attackers can be easily viewed. The Main chain can be used efficiently in transaction information management by providing quick access to new event blocks that have been agreed on by the majority of nodes. In short, the Mainchain gives the following advantages:
 Event blocks and nodes do not all need to store all information, which makes data management efficient.
 Access to previous information is efficient and fast.
Based on these advantages, the OPERA chain can support efficient transaction processing and respond robustly to attacks through its Mainchain.
3 Lachesis Protocol
Algorithm 1 shows the pseudocode for the Lachesis core procedure. The algorithm consists of two parts, which run in parallel.
 In part one, each node requests synchronization and creates event blocks. In line 3, a node runs the Node Selection Algorithm, which returns the IDs of other nodes to communicate with. In lines 4 and 5, the node synchronizes the OPERA chain (DAG) with the other nodes. Line 6 runs the Event block creation, at which step the node creates an event block and checks whether it is a root. The node then broadcasts the created event block to all other known nodes in line 7; the step in this line is optional. In lines 8 and 9, the Clotho selection and Atropos time consensus algorithms are invoked. These algorithms determine whether the specified root can be a Clotho, assign the consensus time, and then confirm the Atropos.
 The second part responds to synchronization requests. In lines 10 and 11, the node receives a synchronization request and then sends its response about the OPERA chain.
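A single iteration of the first part can be sketched over toy node objects. Everything here (the Node class, the event representation, and the peer handling) is a stand-in for the paper's pseudocode, not the real Algorithm 1:

```python
class Node:
    """Toy stand-in for a Lachesis node; events are (creator, index) pairs."""
    def __init__(self, nid):
        self.nid, self.peers, self.dag = nid, [], []

    def select_peers(self, k):
        return self.peers[:k - 1]                 # placeholder for Algorithm 3

    def sync_with(self, others):
        for other in others:                      # pull events we do not know
            self.dag.extend(e for e in other.dag if e not in self.dag)

    def create_event_block(self):
        block = (self.nid, len(self.dag))         # toy event block
        self.dag.append(block)
        return block

a, b = Node("a"), Node("b")
a.peers, b.peers = [b], [a]
b.create_event_block()                            # b's leaf event
a.sync_with(a.select_peers(k=2))                  # a learns b's leaf
a.create_event_block()                            # a's new event references it
print(a.dag)  # [('b', 0), ('a', 1)]
```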
3.1 Peer selection algorithm
In order to create an event block, a node needs to select other nodes. The Lachesis protocol does not depend on how peer nodes are selected. One simple approach is random selection from the pool of nodes. Another approach is to define some criteria or cost function to select a node's peers.
Within a distributed system, a node can select other nodes with low communication costs, low network latency, high bandwidth, and high successful transaction throughput.
3.2 Dynamic participants
Our Lachesis protocol allows an arbitrary number of participants to dynamically join the system. The OPERA chain (DAG) can still operate with new participants. Computation on flag tables is set-based and independent of which and how many participants have joined the system. The algorithms for selection of Roots, Clothos, and Atroposes are flexible enough and do not depend on a fixed number of participants.
3.3 Peer synchronization
We describe an algorithm that synchronizes events between the nodes.
The algorithm assumes that a node always needs the events in topological order (specifically, with respect to the Lamport timestamps); an alternative would be to use an invertible Bloom lookup table (IBLT) to synchronize arbitrary sets of events.
Alternatively, one can simply use a fixed incrementing index to keep track of the top event for each node.
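Using per-creator height indices, a sync response can be computed as the events the requester is missing. This is a sketch; the data layout and names are ours:

```python
def missing_events(local_heights, remote_heights, remote_events):
    """remote_events[creator] is the ordered list of that creator's events."""
    wanted = []
    for creator, h in remote_heights.items():
        have = local_heights.get(creator, 0)
        if h > have:
            # send only the suffix the requester has not seen yet
            wanted.extend(remote_events[creator][have:h])
    return wanted

local = {"n1": 2, "n2": 1}
remote = {"n1": 2, "n2": 3}
events = {"n1": ["e11", "e12"], "n2": ["e21", "e22", "e23"]}
print(missing_events(local, remote, events))  # ['e22', 'e23']
```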
3.4 Node Structure
This section gives an overview of the node structure in Lachesis.
Each node has a height vector, an indegree vector, a flag table, frames, a Clotho check list, a max-min value, a Mainchain (blockchain), and its own local view of the OPERA chain (DAG). The ith entry of the height vector is the number of event blocks created by the ith node. The indegree vector refers to the number of edges from event blocks created by other nodes to the top event block of this node. The top event block is the most recently created event block of this node. The flag table is an n × k matrix, where n is the number of nodes and k is the number of roots that an event block can reach. If an event block created by the ith node can reach the jth root, then the flag table stores the hash value of the jth root. Each node maintains the flag table of each top event block.
Frames store the root set of each frame. The Clotho check list has two types of check points: Clotho candidate and Clotho. If a root in a frame becomes a Clotho candidate, a node checks the candidate part; if a root becomes a Clotho, a node checks the Clotho part. The max-min value is a timestamp used for Atropos selection. The Mainchain is a data structure storing the hash values of the Atropos blocks.
Figure 3 shows an example of the node structure components of a node. In the figure, each value excluding the self height in the height vector is 1, since the initial state is shared with all nodes. In the indegree vector, the node stores the number of edges from event blocks created by other nodes to its top event block; the indegrees of the other nodes are 1. In the flag table, the node knows the other two root hashes, since its top event block can reach those two roots. The node also knows that the other nodes know their own roots. In the example situation there is no Clotho candidate and no Clotho, and thus the Clotho check list is empty. The Mainchain and max-min value are empty for the same reason.
3.5 Peer selection algorithm via Cost function
We define three versions of the cost function (CF). Version one focuses on sharing updated information and is discussed below. The other two versions focus on root creation and consensus facilitation; these will be discussed in a following paper.
We define a cost function (CF) for preventing the creation of lazy nodes. A lazy node is a node that has a lower work portion in the OPERA chain (i.e., has created fewer event blocks). When a node creates an event block, the node selects other nodes with low value outputs from the cost function and refers to the top event blocks of those reference nodes. Equation (1) for CF is as follows:
CF = I/H,  (1)
where I and H denote the values of the indegree vector and the height vector, respectively. If the number of nodes with the lowest CF is more than k−1, one of those nodes is selected at random. The reason for preferring a high H is that a high H indicates that the node has had more communication opportunities than others with a low H, so we can expect a higher possibility of creating a root. Conversely, the nodes that have a high CF (the case of I > H) have generated fewer event blocks than the nodes that have a low CF.
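Assuming the cost function is the indegree-over-height ratio I/H (our reading of equation (1)), peer selection can be sketched as follows; the vector contents and names are illustrative:

```python
import random

def select_peers(indegree, height, k):
    """Pick k-1 peers with the lowest cost CF = I/H; ties broken at random."""
    costs = {nid: indegree[nid] / height[nid] for nid in height}
    lowest = min(costs.values())
    candidates = [nid for nid, c in costs.items() if c == lowest]
    if len(candidates) <= k - 1:
        return candidates
    return random.sample(candidates, k - 1)

indegree = {"n2": 1, "n3": 2, "n4": 1}
height = {"n2": 2, "n3": 2, "n4": 1}
print(select_peers(indegree, height, k=2))  # ['n2']: lowest I/H ratio
```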
Figure 4 shows an example of node selection based on the cost function after the creation of leaf events by all nodes. In this example, there are five nodes and each node has created its leaf event. All nodes know the other leaf events. A node creates an event block and calculates the cost functions. Step 2 in Figure 4 shows the results of the cost functions based on the node's height and indegree vectors. In the initial step, each value in the vectors is the same because all nodes have only leaf events. The node randomly selects other nodes and connects to the leaf events of the selected nodes. In this example, we set k = 3 and assume that the node selects two other nodes.
Figure 5 shows an example of node selection after a few steps of the simulation in Figure 4. In Figure 5, the most recent event block is created by one of the nodes. That node calculates the cost function and selects the other two nodes that have the lowest results of the cost function. In this example, one node has 0.5 as its result while the other nodes share the same higher value; because of this, the creating node selects that node first and randomly selects the rest among the remaining nodes.
In the example, the height of one node is 2 (its leaf event and a later event block). On the other hand, the height recorded for that node in the creating node's structure is 1. The creating node is still not aware of the presence of the newer event block: there is no path from the event blocks it created to that event block. Thus, the creating node records 1 as that node's height.
Algorithm 3 shows the algorithm for selecting reference nodes. The algorithm operates on each node to select communication partners from the other nodes. Lines 4 and 5 set min_cost and the candidate set to their initial states. Line 7 calculates the cost function cf for each node. In lines 8, 9, and 10, we find the minimum value of the cost function and set min_cost to cf and record the ID of each such node. Lines 11 and 12 append the ID of each node to the candidate set if its cf equals min_cost. Finally, line 13 randomly selects k−1 node IDs from the candidate set as communication partners. The time complexity of Algorithm 3 is O(n), where n is the number of nodes.
After the reference nodes are selected, each node communicates and shares the information of all event blocks known to it. A node creates an event block by referring to the top event block of each reference node. The Lachesis protocol works and communicates asynchronously; this allows a node to create an event block even while another node is creating one. The communication between nodes does not allow simultaneous communication with the same node.
Figure 6 shows an example of node selection in the Lachesis protocol. In this example, there are five nodes, and each node generates its first event block, called a leaf event. All nodes share the leaf events with each other. In the first step, one node generates a new event block and then calculates the cost function to connect to other nodes. In this initial situation, all nodes have one event block (the leaf event), so the height vector and the indegree vector of the creating node have the same values: the height of each node is 1 and the indegrees are 0. The creating node randomly selects two other nodes and connects its new event block to their top event blocks. Step 2 shows the situation after these connections; only the creating node knows the situation of step 2.
After that, in the example, node generates a new event block and also calculates the cost function. It randomly selects two other nodes, and , since it only has information about the leaf events. Node sends requests to and to connect ; nodes and then send information about their top event blocks to node in response. The top event block of node is , and that of node is its leaf event. The event block is connected to and to the leaf event of node . Step 4 shows these connections.
3.6 Event block creation
In the Lachesis protocol, every node can create an event block. Each event block refers to other event blocks using their hash values. In the Lachesis protocol, a new event block refers to neighbor event blocks under the following conditions:

Each of the referenced event blocks is the top event block of its own node.

One reference should be a selfref, which references an event block of the same node.

The other k-1 references refer to the top event blocks of k-1 other nodes.
Figure 7 shows an example of event block creation with a flag table. In this example, the most recently created event block is , by node . The figure shows the node structure of node . We omit other information such as the height and in-degree vectors, since we focus only on how the flag table changes with event block creation. The flag table of in Figure 7 is updated with the information of the previously connected event blocks , , and . Thus, the flag table of is the result of an OR operation among the three root sets for (, , and ), (), and (, , and ).
Figure 8 shows the communication process, divided into five steps, for two nodes to create an event block. Simply, one node sends a request to its peer; the peer then responds directly.
3.7 Topological ordering of events using Lamport timestamps
Every node has a physical clock and needs physical time to create an event block. For consensus, however, the Lachesis protocol relies on a logical clock at each node. For this purpose, we use "Lamport timestamps" [11] to determine the time ordering between event blocks in an asynchronous distributed system.
The Lamport timestamps algorithm is as follows:

Each node increments its count value before creating an event block.

When sending a message, a node includes its count value; the receiver considers the sender's count value and increments its own count value as follows.

If the current counter is less than or equal to the received count value from another node, then the count value of the recipient is updated to the received value plus one.

If the current counter is greater than the received count value from another node, then the current count value is simply incremented.
We use Lamport's algorithm to enforce a topological ordering of event blocks, and use it in the Atropos selection algorithm.
Since an event block is created based on logical time, the order between event blocks is determined immediately. Because the Lamport timestamps algorithm gives a partial order over all events, the whole time-ordering process can be used for Byzantine fault tolerance.
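The four rules above can be condensed into a small sketch. This is a minimal illustration of Lamport timestamps as used for event block creation; the class and method names are assumptions, not the paper's API.

```python
class Node:
    """Minimal sketch of a node's logical clock (Lamport timestamps)."""

    def __init__(self):
        self.counter = 0

    def create_event(self):
        # Rule 1: increment the count value before creating an event block.
        self.counter += 1
        return self.counter

    def receive(self, received_counter):
        # Rules 3-4 combined: take the larger of the own and received
        # counters, then increment, so the new event is ordered after
        # both the local history and the sender's history.
        self.counter = max(self.counter, received_counter) + 1
        return self.counter
```

Because the counter only ever moves forward, any event block a node creates after receiving a message carries a timestamp strictly greater than the sender's, which yields the partial order used for topological sorting.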
3.8 Domination Relation
Here, we introduce a new idea that extends the concept of domination.
For a vertex in a DAG , let denote an induced subgraph of such that consists of all ancestors of including , and is the induced edges of in .
For a set of vertices, an event dominates if there are more than 2/3 of vertices in such that dominates .
Recall that is the set of all leaf vertices in . The dom set is the same as the set . The dom set is defined as follows:
A vertex belongs to a dom set within the graph , if dominates .
The dom set consists of all roots such that , = 1..(1), and dominates .
The dom set is the same with the root set , for all nodes.
3.9 Examples of domination relation in DAGs
This section gives several examples of DAGs and the domination relation between their event blocks.
Figure 10 shows an example of a DAG and its dominator trees.
Figure 11 depicts an example of a DAG and 2/3 dom sets.
Figure 12 shows an example of dependency graphs. On each row, the leftmost figure shows the latest OPERA chain. The other figures on each row depict the dependency graphs of each node, in their compact form. When no fork is present, each of the compact dependency graphs is a tree.
Figure 13 shows an example of a pair of fork events. Each row shows an OPERA chain (leftmost) and the compact dependency graphs of each node (right). The fork events are shown as red and green vertices.
3.10 Root Selection
All nodes can create event blocks, and an event block becomes a root when it satisfies specific conditions; not all event blocks can be roots. First, the first event blocks created are themselves roots. These leaf event blocks form the first root set of the first frame . If there are total nodes and these nodes create event blocks, then the cardinality of the first root set is . Second, if an event block can reach at least 2n/3 roots, then is called a root. This event does not belong to , but to the next root set of the next frame . Thus, excluding the first root set, the range of the cardinality of a root set is . The event blocks including before are in the frame . The roots in do not belong to the frame ; those are included in the frame when a root belonging to occurs.
We introduce the use of a flag table to quickly determine whether a new event block becomes a root. Each node maintains a flag table of its top event block. Every newly created event block is assigned the hashes of its referenced event blocks. We apply an OR operation on each set in the flag tables of the referenced event blocks.
Figure 14 shows an example of how to use flag tables to determine a root. In this example, is the most recently created event block. We apply an OR operation on each set of the flag tables for 's referenced event blocks. The result is the flag table of . If the cardinality of the root set in 's flag table is more than 2n/3, is a root. In this example, the cardinality of the root set in is 4, which is greater than 2n/3 (n=5). Thus, becomes a root, and is added to frame since becomes a new root.
The root selection algorithm is as follows:

The first event blocks are considered as roots.

When a new event block is added to the OPERA chain (DAG), we check whether the event block is a root by applying an OR operation on each set of the flag tables connected to the new event block. If the cardinality of the root set in the flag table of the new event block is more than 2n/3, the new event block becomes a root.

When a new root appears on the OPERA chain, nodes update their frames. If one of the new event blocks becomes a root, all nodes that share the new event block add the hash value of the event block to their frames.

The new root set is created if the cardinality of the previous root set is more than 2n/3 and the new event block can reach roots in .

When the new root set is created, the event blocks from the previous root set to before belong to the frame .
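The flag-table root check described above can be sketched as follows. Here a flag table is represented simply as a set of reachable root hashes; this representation, and the function names, are assumptions for illustration only.

```python
def merge_flag_tables(parent_tables):
    """OR-merge the flag tables of the referenced (parent) event blocks.

    A flag table is sketched as a set of root hashes the event can
    reach; set union plays the role of the OR operation on the
    per-root flags.
    """
    merged = set()
    for table in parent_tables:
        merged |= table
    return merged

def is_root(parent_tables, n):
    """A new event block becomes a root if its merged flag table shows
    it can reach more than 2n/3 roots of the current frame."""
    return len(merge_flag_tables(parent_tables)) > 2 * n / 3
```

Because the merge is a plain set union over the parents' tables, the check costs time proportional to the number of references rather than a graph traversal, which is the point of maintaining flag tables.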
3.11 Clotho Selection
A Clotho is a root that satisfies the Clotho creation conditions: more than 2n/3 nodes know the root, and a root knows this information.
In order for a root in frame to become a Clotho, must be reached by more than n/3 roots in the frame . Based on the definition of a root, each root reaches more than 2n/3 roots in previous frames. If more than n/3 roots in the frame can reach , then is spread to all roots in the frame . This means that all nodes know the existence of . Any root in the frame therefore knows that is spread to more than 2n/3 nodes, which satisfies the Clotho creation conditions.
In the example in Figure 15, n is 5 and each circle indicates a root in a frame. Each arrow means one root can reach (happened-before) a root in the previous frame. Each root has 4 or 5 arrows (out-degree) since n is 5 (more than 2n/3 means at least 4). and in frame are roots that can reach in frame . and can also reach , but we only mark and (when n is 5, more than n/3 means at least 2), since we show just the more-than-n/3 condition in this example. These are marked with blue bold arrows (namely, the roots that can reach root carry the blue bold arrow). In this situation, an event block must be able to reach or in order to become a root in frame (in our example, n=5, more than n/3 means at least 2, and more than 2n/3 means at least 4; thus, to be a root, either must be reached). All roots in frame reach in frame .
To be a root in frame , an event block must reach more than 2n/3 roots in frame , which in turn can reach . Therefore, if any root in frame exists, that root must have happened-before more than 2n/3 roots in frame . Thus, the root in knows that is spread over more than 2n/3 of the entire nodes, and we can select as a Clotho.
Figure 16 shows an example of a Clotho. In this example, all roots in the frame have happened-before more than n/3 roots in the frame . We can select all roots in the frame as Clothos, since the most recent frame is .
Algorithm 4 shows the pseudo code for Clotho selection. The algorithm takes a root as input. Lines 4 and 5 set and to and 0, respectively. Lines 6-8 check whether any root in has happened-before with the 2n/3 condition, where is the current frame. In lines 9-10, if the number of roots in that happened-before is more than , the root is set as a Clotho. The time complexity of Algorithm 4 is , where is the number of nodes.
Figure 17 shows the state of node A when a Clotho is selected. In this example, node A knows that all roots in the frame have become Clothos. Node A prunes unnecessary information from its own structure: it prunes the root set of the frame , since all roots in the frame have become Clothos and the Clotho check list stores the Clotho information.
3.12 Atropos Selection
The Atropos selection algorithm is the process in which the candidate time generated by Clotho selection is shared with other nodes, and each root reselects the candidate time repeatedly until all nodes have the same candidate time for a Clotho.
After a Clotho is nominated, each node computes a candidate time for the Clotho. If more than two-thirds of the nodes compute the same value for the candidate time, that time value is recorded. Otherwise, each node reselects a candidate time. Through this reselection process, each node reaches time consensus on the candidate time of a Clotho as the OPERA chain (DAG) grows. The candidate time reaching consensus is called the Atropos consensus time. After the Atropos consensus time is computed, the Clotho is nominated as an Atropos, and each node stores the hash value of the Atropos and the Atropos consensus time in the Main chain (blockchain). The Main chain is used for the time ordering between event blocks. The proof of Atropos consensus time selection is shown in Section 5.2.
Figure 18 shows an example of Atropos selection. As in Figure 16, all roots in the frame are selected as Clothos through the existence of roots in the frame . Each root in the frame computes a candidate time using the timestamps of the reachable roots in the frame , and stores the candidate time in the min-max value space. A root in the frame can reach more than 2n/3 roots in and thus knows the candidate times those reachable roots hold. If the same candidate time is held by more than 2n/3 of them, we select that candidate time as the Atropos consensus time. Then all Clothos in the frame become Atropos.
Figure 19 shows the state of node B when an Atropos is selected. In this example, node B knows that all roots in the frame have become Atropos. Node B then prunes the information of the frame from the Clotho check list, since all roots in the frame have become Atropos and the Main chain stores the Atropos information.
Algorithms 5 and 6 show the pseudo code for Atropos consensus time selection and consensus time reselection. In Algorithm 5, line 6 saves the difference in frames between the root sets of and . Thus, line 8 means that is one of the elements in the root set of the frame , where the frame includes . In line 10, each root in the frame selects its own Lamport timestamp as the candidate time of when it confirms root as a Clotho. In lines 12, 13, and 14, , , and save the set of roots that happened-before with the 2n/3 condition, the result of the function, and the number of roots in having , respectively. Line 15 checks whether there is a difference of at least between and , where is a constant for the minimum selection frame. Lines 16-20 check whether more than two-thirds of the roots in the frame nominate the same candidate time; if so, the root is assigned that consensus time as . Line 22 is the minimum selection frame, in which the minimum value of the candidate times is selected in order to reach Byzantine agreement. Algorithm 6 operates in the middle of Algorithm 5. In Algorithm 6, the input is a root set and the output is a reselected candidate time. Lines 4-5 compute the frequencies of each candidate time from all the roots in . In lines 6-11, the smallest candidate time among those nominated most often is selected. The time complexity of Algorithm 6 is , where is the number of nodes. Since Algorithm 5 includes Algorithm 6, the time complexity of Algorithm 5 is , where is the number of nodes.
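The reselection step of Algorithm 6 can be sketched as a few lines of Python. This is an illustrative reading of the rule "pick the smallest time among those nominated most often"; the function name is an assumption.

```python
from collections import Counter

def reselect_candidate_time(candidate_times):
    """Sketch of consensus time reselection (Algorithm 6): among the
    candidate times nominated by the reachable roots, return the
    smallest time that is nominated most often.
    """
    freq = Counter(candidate_times)          # frequency of each candidate time
    top = max(freq.values())                 # highest nomination count
    # Ties on the count are broken deterministically by taking the minimum,
    # so every node computing this from the same DAG gets the same answer.
    return min(t for t, c in freq.items() if c == top)
```

The deterministic tie-break is what lets all nodes converge without exchanging their choices: the input multiset is derived from the shared OPERA chain, so the output is identical everywhere.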
In the Atropos consensus time selection algorithm, nodes reach agreement on the candidate time of a Clotho without additional communication (i.e., without exchanging candidate times) with each other. As each node communicates through the Lachesis protocol, the OPERA chains of all nodes grow into the same shape. This allows each node to know the candidate times of other nodes based on its own OPERA chain and to reach agreement. The proof that agreement based on the OPERA chain becomes agreement in action is shown in Section 5.2.
An Atropos is determined by the consensus time of each Clotho. It is an event block whose finality is determined and which is non-modifiable. Furthermore, all event blocks reachable from an Atropos are guaranteed finality.
3.13 Lachesis Consensus
Figure 20 illustrates how consensus is reached through the domination relation in the OPERA chain. In the figure, the leaf set, denoted by , consists of the first event blocks created by the individual participant nodes. is the set of event blocks that belong neither to nor to any root set . Given a vertex in , there exists a path from that reaches a leaf vertex in . Let and be root event blocks in the root sets and , respectively. is the block at which a quorum or more blocks exist on a path reaching a leaf event block. Every path from to a leaf vertex contains a vertex in . Thus, if there exists a vertex in such that is created by more than a quorum of participants, then is already included in . Likewise, is a block that can be reached from through blocks made by a quorum of participants. All leaf event blocks that can be reached by are connected with more than a quorum of participants through the presence of . The existence of the root shows that the information of is connected with more than a quorum. This kind of path search allows the chain to reach consensus in a similar manner to pBFT consensus processes. It is essential to keep track of the blocks satisfying the pBFT consensus process for quicker path search; our OPERA chain and Main chain keep track of these blocks.
The sequential order of event blocks is an important aspect of Byzantine fault tolerance. In order to determine the order between all event blocks, we use the Atropos consensus time, the Lamport timestamp algorithm, and the hash values of the event blocks.
First, when each node creates event blocks, it assigns them a logical timestamp based on the Lamport timestamp; this gives a partial ordering between the relevant event blocks. Each Clotho carries the consensus time of its Atropos. This consensus time is computed based on the logical times nominated by the other nodes at the time of the 2n/3 agreement.
Each event block is ordered, to reach an agreement, based on the following three rules:

If there is more than one Atropos with different consensus times in the same frame, the event block with the smaller consensus time has higher priority.

If there is more than one Atropos with the same consensus time in the same frame, the order is determined based on their own logical times from the Lamport timestamps.

When there is more than one Atropos with the same consensus time and the local logical times are also the same, the event block with the smaller hash value is given priority.
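The three rules above amount to a lexicographic sort key. The following sketch assumes event blocks are plain records with `consensus_time`, `lamport_time`, and `hash` fields; these names are illustrative, not the paper's data structure.

```python
def atropos_order_key(event):
    """Sketch of the three tie-breaking rules as one sort key:
    consensus time first, then the local Lamport timestamp,
    then the hash value as the final tie-breaker.
    """
    return (event["consensus_time"], event["lamport_time"], event["hash"])

def final_order(events):
    """Sorting with this key yields the final topological consensus order."""
    return sorted(events, key=atropos_order_key)
```

Because Python compares tuples element by element, rule 1 dominates, rule 2 applies only on a consensus-time tie, and rule 3 only when both earlier components are equal.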
Figure 22 shows the part of the OPERA chain in which the final consensus order is determined based on these three rules. The number in each event block is its logical time based on the Lamport timestamp. The final topological consensus order of the event blocks is based on agreement from the Atropos. Based on each Atropos, the event blocks have different colors depending on their range.
3.14 Detecting Forks
(Fork) A pair of events (, ) is a fork if and have the same creator, but neither is a selfancestor of the other. Denoted by .
For example, let be an event in node and let and be two child events of . If , , , , then (, ) is a fork. The fork relation is symmetric; that is, iff .
By definition, (, ) is a fork if , and . Using HappenedBefore, the second part means and . By definition of concurrent, we get .
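The definition can be checked mechanically. The sketch below assumes two hypothetical lookups: a `creator` mapping from event to its creating node, and a `self_parent` mapping from event to its selfref (None for a leaf event).

```python
def is_fork(x, y, creator, self_parent):
    """Sketch of the fork check: x and y form a fork if they share a
    creator but neither is a self-ancestor of the other."""

    def is_self_ancestor(a, b):
        # Walk b's selfref chain back toward its leaf, looking for a.
        cur = b
        while cur is not None:
            if cur == a:
                return True
            cur = self_parent.get(cur)
        return False

    return (creator[x] == creator[y]
            and not is_self_ancestor(x, y)
            and not is_self_ancestor(y, x))
```

Since the fork condition is symmetric in x and y, `is_fork(x, y, ...)` and `is_fork(y, x, ...)` always agree, matching the symmetry noted above.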
Lemma 3.1.
If there is a fork , then and cannot both be roots on honest nodes.
Here, we show a proof by contradiction. An honest node cannot accept a fork, so and cannot both be roots on the same honest node. Now we prove the more general case. Suppose that is a root of and is a root of , where and are honest nodes. Since is a root, it reached events created by more than 2/3 of the member nodes. Similarly, since is a root, it reached events created by more than 2/3 of the member nodes. Thus, there must be an overlap of more than n/3 members between those two sets of events. Since we assume fewer than n/3 members are dishonest, there must be at least one honest member in the overlap. Let be such an honest member. Because is honest, it does not allow the fork.
4 Conclusion
We further optimize the OPERA chain and Main chain for faster consensus. By using Lamport timestamps and the domination relation, the topological ordering of event blocks in the OPERA chain and Main chain is more intuitive and reliable in a distributed system.
5 Appendix
5.1 Preliminaries
The history of a Lachesis protocol can be represented by a directed acyclic graph , where is a set of vertices and is a set of edges. Each vertex in a row (node) represents an event. Time flows left-to-right in the graph, so vertices on the left represent earlier events in history. A path in is a sequence of vertices (, , , ) following the edges in . Let be a vertex in . A vertex is a parent of if there is an edge from to . A vertex is an ancestor of if there is a path from to .
Definition 5.1 (node).
Each machine that participates in the Lachesis protocol is called a node.
Let denote the total number of nodes.
Definition 5.2 (event block).
Each node can create event blocks and send (receive) messages to (from) other nodes.
Definition 5.3 (vertex).
An event block is a vertex of the OPERA chain.
Suppose a node creates an event after an event in . Each event block has exactly k references. One of the references is a self-reference, and the other k-1 references point to the top events of 's k-1 peer nodes.
Definition 5.4 (peer node).
A node has peer nodes.
Definition 5.5 (top event).
An event is a top event of a node if there is no other event in referencing .
Definition 5.6 (selfref).
An event is called “selfref” of event , if the selfref hash of points to the event . Denoted by .
Definition 5.7 (ref).
An event is called “ref” of event if the reference hash of points to the event . Denoted by .
For simplicity, we can use to denote a reference relationship (either or ).
Definition 5.8 (selfancestor).
An event block is selfancestor of an event block if there is a sequence of events such that