1. Introduction
A blockchain is a sequential data structure in which each element depends in a structured, predefined manner on every prior element. Most blockchains implement this property recursively by including in each data element a hash of the previous element. This makes it easy to append an element to the end of a blockchain, but difficult to alter or insert elements in the middle, since every subsequent element must then be modified to preserve validity. In parallel, the word ‘blockchain’ has also come to mean the network and consensus algorithms that enable a distributed set of nodes to maintain such a data structure robustly and consistently.
In practice, there are many obstacles to maintaining a distributed blockchain, including peer churn, adversarial behavior, and unreliable networks. In this paper, we focus on the latter challenge and consider how to build efficient blockchains over unreliable networks. Although the research community is increasingly studying peer-to-peer (P2P) networks in blockchain systems (Decker and Wattenhofer, 2013; Miller et al., 2015; Biryukov et al., 2014; Heilman et al., 2015; Basu et al., [n. d.]), network effects are arguably the aspect of blockchains that has received the least attention thus far. In particular, we are interested in how the network affects blockchain performance metrics like latency and throughput for new data elements. To explain the problem, we start with a brief description of blockchain functionality.
Blockchain Primer. Blockchain systems are typically used to track sequential events, such as financial transactions in a cryptocurrency. A block is simply a data structure that stores a batch of such events, along with a hash of the previous block contents. The core problem in blockchain systems is determining (and agreeing on) the next block in the data structure. Many leading cryptocurrencies (e.g., Bitcoin, Ethereum, Cardano, EOS, Monero) handle this problem by electing a proposer who is responsible for producing a new block and sharing it with the network. This proposer election happens via a distributed, randomized protocol chosen by the system designers.
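The hash-chaining structure described in the primer can be sketched in a few lines. This is a minimal illustration, not any particular system's block format: the field names and the use of SHA-256 over a JSON serialization are assumptions of the sketch.

```python
# Minimal sketch of hash chaining; field names and SHA-256 are illustrative.
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_block=None):
    """Create a block committing to its parent via the parent's hash."""
    return {
        "transactions": transactions,
        "prev_hash": block_hash(prev_block) if prev_block else "0" * 64,
    }

def is_valid_chain(chain):
    """Valid iff each block stores the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

genesis = make_block(["genesis"])
b1 = make_block(["alice->bob"], genesis)
b2 = make_block(["bob->carol"], b1)
chain = [genesis, b1, b2]
assert is_valid_chain(chain)

# Altering a middle block breaks every subsequent link, as described above.
b1["transactions"] = ["alice->mallory"]
assert not is_valid_chain(chain)
```

Appending to the end only requires hashing the current tip, while editing the middle invalidates every later block, which is exactly the asymmetry the primer describes.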
In Bitcoin, proposers are selected with probability proportional to the computational energy they have expended; this mechanism is called proof-of-work (PoW). Under PoW, each node solves a computational puzzle of random duration; upon solving the puzzle, the node relays its block over the underlying P2P network, along with proof that it solved the puzzle. Due to the high energy cost of solving PoW puzzles (or mining) (Lee, 2017), a new paradigm recently emerged called proof-of-stake (PoS). Under PoS, a proposer is elected with probability proportional to their stake in the system. This election process happens at fixed time intervals. When a node is elected proposer, its job is to propose a new block, which contains a hash of the previous block’s contents. Hence the proposer must choose where in the blockchain to append her new block. Most blockchains use a longest-chain fork choice rule, under which the proposer always appends her new block to the end of the longest chain of blocks in the proposer’s local view of the blocktree. If there is no network latency and no adversarial behavior, this rule ensures that the blockchain will always be a perfect chain. However, in a network with random delays, it is possible that the proposer may not have received all blocks when she is elected. As such, she might propose a block that causes the blockchain to fork (e.g., Figure 2). In longest-chain blockchains, this forking is eventually resolved with probability 1 because one fork eventually overtakes the other.
Forking occurs in almost all major blockchains, and it implies that blockchains are often not chains at all, but blocktrees. For many consensus protocols (particularly chain-based ones like Bitcoin’s), forking reduces throughput, because blocks that are not on the main chain are discarded. It also has security implications; even protocols that achieve good block throughput in the high-forking regime have thus far been prone to security vulnerabilities, a problem resolved only in recent work (Bagaria et al., [n. d.]) that also guarantees low latency. Nonetheless, forking is a significant obstacle to practical performance in existing blockchains. There are two common approaches to mitigate forking. One is to improve the network itself, e.g., by upgrading hardware and routing. This idea has been the basis for recent projects like the Falcon network (Basu et al., [n. d.]) and Bloxroute. The other is to design consensus algorithms that tolerate network latency by making use of forked branches. Examples include GHOST (Sompolinsky and Zohar, 2015), SPECTRE (Sompolinsky et al., 2016), and Inclusive/Conflux (Lewenberg et al., 2015; Li et al., 2018). In this paper, we design a P2P protocol called Barracuda that effectively reduces forking for a wide class of existing consensus algorithms.
Contributions. We propose a novel probabilistic framework that allows one to formally investigate the tradeoff between network delays and throughput. We propose a new block proposal protocol to mitigate the forking caused by those network delays. We prove that when the proposer node polls ℓ − 1 randomly selected nodes for their local blocktree information, this has the same effect as speeding up the communication network by a factor of ℓ, thus reducing forking significantly. This is stated informally in the following and precisely in Theorem 1.
Theorem 1 (Informal).
In a fully connected network with exponential network delays of mean Δ, consider the (random) number of blocks included in the longest chain at time t. For sufficiently small Δ, under the proposed Barracuda ℓ-polling, the resulting height of the longest chain is close to what it would be if the mean network delay were Δ/ℓ, for any arbitrary block arrival process and any local attachment protocol.
These results hold without actually changing any network hardware, and they apply generally to any block arrival process or fork choice rule. In fact, we prove a significantly stronger statement: the entire blocktree probability mass function changes as if the network were faster by a factor of ℓ, not just the downstream statistic of longest-chain length. The analysis also has connections to load balancing in balls-and-bins problems, which may be of independent interest. We make the following three specific contributions:

We propose a new probabilistic model for the evolution of a blockchain in proof-of-stake cryptocurrencies, where the main source of randomness comes from the network delay. This captures the network delays measured in real-world P2P cryptocurrency networks (Decker and Wattenhofer, 2013). Simulations under this model explain the gap observed in real-world cryptocurrencies between the achievable block throughput and the best block throughput possible in an infinite-capacity network. Our model differs from that of prior theoretical papers, which typically assume a worst-case network model that allows significant simplification in the analysis (Garay et al., 2015; Sompolinsky and Zohar, 2015). We analyze the effect of average network delay on system throughput and provide a lower bound on the block throughput.

To mitigate forking due to network delays, we propose a new block proposal algorithm called Barracuda, under which nodes poll ℓ − 1 randomly selected nodes for their local blocktree information before proposing a new block. We show that for small values of ℓ, Barracuda has approximately the same effect as if the entire network were a factor of ℓ faster.

We provide guidelines on how to implement Barracuda in practice in order to provide robustness against several real-world factors, such as network model mismatch and adversarial behavior.
Outline.
We begin by describing a stochastic model for blocktree evolution in Section 2; we analyze the block throughput of this model in Section 3. Next, we present Barracuda and analyze its block throughput in Section 4. Finally, we describe real-world implementation issues in Section 5, such as how to implement polling and how to analyze adversarial robustness.
2. Model
We propose a probabilistic model for blocktree evolution with two sources of randomness: randomness in the timing and the proposer of each new block, and randomness in the delay in transmitting messages over the network. The whole system is parametrized by the number of nodes N, the average network propagation delay Δ, the proposer waiting time Δ_p, and the number of concurrent proposers K.
2.1. Modeling block generation
We model block generation as a discrete-time arrival process, where the j-th block is generated at time T_j. We previously discussed the election of a single proposer for each block; in practice, some systems elect multiple proposers at once to provide robustness if one proposer fails or is adversarial. Hence at time T_j, K nodes are chosen uniformly at random as proposers, each of which proposes a distinct block. The index j is a positive integer, which we also refer to as time when it is clear from the context whether we are referring to j or T_j. The randomness in choosing the proposers is independent across time and of other sources of randomness in the model. We index the K blocks proposed at time T_j accordingly. The block arrival process {T_j} follows the distribution of a certain point process, which is independent of all other randomness in the model.
Two common block arrival processes are Poisson and deterministic. Under a Poisson arrival process, the inter-arrival times T_{j+1} − T_j are i.i.d. exponential random variables with some constant rate, independent of j. In proof-of-work (PoW) systems like Bitcoin, block arrivals are determined by independent attempts at solving a cryptographic puzzle, where each attempt has a fixed probability of success. With high probability, one proposer is elected each time a block arrival occurs (i.e., K = 1), and the arrival times can be modeled as a Poisson arrival process.
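The two arrival processes above can be sketched as follows. This is an illustrative sketch, not part of the paper's formal model; the rate parameter and function names are assumptions.

```python
# Sketch of the two arrival processes discussed above: Poisson (i.i.d.
# exponential inter-arrival gaps, as in PoW) and deterministic unit slots
# (as in slotted PoS).
import random

def poisson_arrivals(rate, num_blocks, seed=0):
    """Arrival times T_1 < T_2 < ... with exponential inter-arrival gaps."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(num_blocks):
        t += rng.expovariate(rate)  # exponential gap => Poisson process
        times.append(t)
    return times

def deterministic_arrivals(num_blocks):
    """One block per time slot: T_j = j."""
    return [float(j) for j in range(1, num_blocks + 1)]

pois = poisson_arrivals(rate=1.0, num_blocks=5)
det = deterministic_arrivals(5)
assert all(a < b for a, b in zip(pois, pois[1:]))  # strictly increasing
assert det == [1.0, 2.0, 3.0, 4.0, 5.0]
```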
In many PoS protocols (e.g., Cardano, Qtum, and Particl), time is split into quantized intervals. Some protocols give each user a fixed probability of being chosen to propose the next block in each time interval, leading to a geometrically distributed block arrival time. If the probability of selecting any proposer in each time slot is smaller than one, the expected inter-block arrival time will be greater than one, as in Qtum and Particl. Other protocols explicitly designate one proposer per time slot (e.g., Cardano (Cardano, [n. d.])). Assuming all nodes are active, such protocols can be modeled with a deterministic arrival process, T_j = j for all j. The deterministic arrival process may even be a reasonable approximation for certain parameter regimes of protocols like Qtum and Particl: if the probability of electing some proposer in a time step is close to one, there will be at least one block proposer in each time slot with high probability, which can be approximated by a deterministic arrival process. Regardless, our main results apply to arbitrary arrival processes {T_j}, including geometric and deterministic. When a block is generated by a proposer, the proposer attaches the new block to one of the existing blocks, which we refer to as the parent block of the new block. The proposer chooses this parent block according to a predetermined rule called a fork-choice rule; we discuss this further in Section 2.2. Upon creating a block, the proposer broadcasts a message containing the following information:
to all the other nodes in the system. The broadcasting process is governed by our network model, which is described in Section 2.2.
In this work, we focus mainly on the PoS setting due to subtleties in the practical implementation of Barracuda (described in Section 4). In particular, PoW blockchains require candidate proposers to choose a block’s contents (including the parent block) before generating the block. But in PoW, block generation itself takes an exponentially distributed amount of time. Hence, if a proposer were to poll nodes before proposing, that polling information would already be (somewhat) stale by the time the block gets broadcast to the network. In contrast, PoS cryptocurrencies allow block creation to happen after a proposer is elected; hence polling results can be simultaneously incorporated into a block and broadcast to the network. Because of this difference, PoS cryptocurrencies benefit more from Barracuda than PoW ones.

Global view of the blocktree. Notice that the collection of all messages forms a rooted tree, called the blocktree. Each node represents a block, and each directed edge represents a pointer to a parent block. The root is called the genesis block, and is visible to all nodes. All blocks generated at the first arrival time point to the genesis block as a parent. The blocktree grows with each new block, since the block’s parent must be an existing block in the blocktree; since each block can specify only one parent, the data structure remains a tree. Formally, we define the global blocktree as follows.
Definition 1 (Global tree).
We define the global tree at time t to be the graph whose vertices are the genesis block together with all blocks generated up to time t, and whose edges are the parent pointers among these blocks.
If there is no network delay in communicating the messages, then all nodes will have the same view of the blocktree. However, due to network delays and the distributed nature of the system, a proposer might add a block before receiving all of the previous blocks. Hence, the choice of the parent block depends on the local view of the blocktree at the proposer node.
Local view of the blocktree. Each node has its own local view of the blocktree, depending on which messages it has received. Upon receiving a message, a node updates its local view as follows. If the local view contains the parent block referred to in the message, then the new block is attached to it. If the local view does not contain the parent block, then the message is stored in an orphan cache until the parent block is received. Notice that each node’s local view is random and is a subgraph of the global tree.
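The local-view update rule just described can be sketched as follows. The parent-pointer representation and identifiers are illustrative assumptions of this sketch.

```python
# Sketch of the local-view update rule: attach a block if its parent is known,
# otherwise hold it in an orphan cache until the parent arrives.
def receive_block(local_view, orphans, block_id, parent_id):
    """local_view maps block_id -> parent_id; orphans maps parent_id -> [block_ids]."""
    if parent_id in local_view or parent_id == "genesis":
        local_view[block_id] = parent_id
        # A newly attached block may unlock cached orphans, recursively.
        for child in orphans.pop(block_id, []):
            receive_block(local_view, orphans, child, block_id)
    else:
        orphans.setdefault(parent_id, []).append(block_id)

view, orphans = {}, {}
# Block b2 arrives before its parent b1: it waits in the orphan cache.
receive_block(view, orphans, "b2", "b1")
assert "b2" not in view and orphans == {"b1": ["b2"]}
# Once b1 arrives, both blocks join the local view.
receive_block(view, orphans, "b1", "genesis")
assert view == {"b1": "genesis", "b2": "b1"} and orphans == {}
```

This is why out-of-order delivery keeps each local view a subgraph of the global tree rather than corrupting it.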
2.2. Network model and fork choice rule
We avoid modeling the topology of the underlying communication network by instead modeling the (stochastic) end-to-end delay of a message from any source to any destination node. Stochastic network models have been studied for measuring the effects of selfish mining (Göbel et al., 2016) and blockchain throughput (Papadis et al., 2018). We assume each block reaches a given node with delay distributed as an independent exponential random variable with mean Δ. This exponential delay captures the varying and dynamic network effects of real blockchain networks, as empirically measured in (Decker and Wattenhofer, 2013) on Bitcoin’s P2P network. In particular, this exponential delay encompasses both network propagation delay and processing delays caused by nodes checking message validity prior to relaying it; these checks are often used to protect against denial-of-service attacks, for instance.

When a proposer is elected to generate a new block at time T_j, she waits time Δ_p and then decides where to append the new block in her local blocktree. The choice of parent block is governed by the fork choice rule. The most common one is the Nakamoto protocol (longest chain), though other fork choice rules do exist. When a node is elected as a proposer under the Nakamoto protocol (or longest-chain rule), the node attaches the block to the leaf of the longest chain in its local blocktree. When there is a tie, the proposer chooses one arbitrarily. The longest-chain rule is widely used, including in Bitcoin, ZCash, and Monero. The Nakamoto protocol belongs to the family of local attachment protocols, in which the proposer decides where to attach the block solely based on the snapshot of her local tree at proposal time, stripping away the information on the proposer of each block. In other words, we require that the protocol be invariant to the identities of the proposers of the existing blocks. We show in Section 4 that our analysis applies generally to all local attachment protocols; in practice, almost all blockchains use local attachment protocols.

Notice that if Δ is much smaller than the block inter-arrival time and all nodes obey the protocol, then the global blocktree is more likely to form a chain. On the other hand, if Δ is much larger than the block inter-arrival time, then the global blocktree is more likely to be a star (i.e., a depth-one rooted tree). To maximize blockchain throughput, it is desirable to design protocols that maximize the expected length of the longest chain of the global tree. Intuitively, a faster network infrastructure with a smaller Δ implies less forking. In this work, we are interested primarily in settings where Δ is larger than the mean inter-block time. This is admittedly not a conventional setting for existing blockchain systems, but a current trend in next-generation blockchains is to minimize block times and/or to run blockchains on increasingly unreliable networks (e.g., ad hoc networks, wireless networks, etc.). In both settings, we may expect Δ to be comparable to or larger than the block time. Hence our paper aims in part to understand the feasibility of operating blockchains in this regime.
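The longest-chain (Nakamoto) fork choice just described can be sketched on a parent-pointer representation of a local blocktree. This is an illustrative sketch; the representation and arbitrary tie-breaking are assumptions.

```python
# Sketch of Nakamoto (longest-chain) fork choice on a local blocktree,
# represented as a dict mapping each block to its parent ("genesis" is root).
def depth(tree, block):
    """Hop distance from the genesis block."""
    d = 0
    while block != "genesis":
        block = tree[block]
        d += 1
    return d

def longest_chain_parent(tree):
    """Return a deepest leaf: the block a new block should attach to."""
    parents = set(tree.values())
    leaves = [b for b in tree if b not in parents] or ["genesis"]
    # Ties between equally deep leaves are broken arbitrarily, as in the rule.
    return max(leaves, key=lambda b: depth(tree, b))

# A forked tree: genesis -> a -> b and genesis -> c.
tree = {"a": "genesis", "b": "a", "c": "genesis"}
assert longest_chain_parent(tree) == "b"  # the deeper fork wins
```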
3. Block Throughput Analysis
A key performance metric in blockchains is transaction throughput, i.e., the number of transactions that can be processed per unit time. Transaction throughput is closely related to a property called block throughput, also known as the main chain growth rate. Given a blocktree, the length of the main chain is defined as the number of hops from the genesis block to the farthest leaf. Precisely, the length of the main chain is the maximum of d(b₀, b) over leaf blocks b of the blocktree, where b₀ denotes the genesis block and d(u, v) denotes the hop distance between two vertices u and v in the blocktree. We define block throughput as the growth rate of the main chain length over time. Block throughput describes how quickly blocks are added to the blockchain; if each block is full and contains only valid transactions, then block throughput is proportional to transaction throughput. In practice, this is not the case, since adversarial strategies like selfish mining (Eyal and Sirer, 2018) can be used to reduce the number of valid transactions per block. Regardless, block throughput is frequently used as a stepping stone for quantifying transaction throughput (Sompolinsky and Zohar, 2015; Garay et al., 2015; Bagaria et al., [n. d.]).
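The main-chain-length definition above can be sketched on the same parent-pointer representation (an assumption of this sketch, not the paper's data structure):

```python
# Sketch of the main-chain length: the hop distance from the genesis block
# to the farthest leaf of the blocktree.
def main_chain_length(tree):
    """tree maps block -> parent; 'genesis' is the implicit root."""
    def depth(b):
        d = 0
        while b != "genesis":
            b, d = tree[b], d + 1
        return d
    return max((depth(b) for b in tree), default=0)

# Six blocks, but the main chain contains only three of them: forked blocks
# off the main chain do not contribute to block throughput.
tree = {"a": "genesis", "b": "a", "c": "b", "d": "genesis", "e": "a", "f": "a"}
assert main_chain_length(tree) == 3          # genesis -> a -> b -> c
assert main_chain_length(tree) < len(tree)   # forking wastes blocks
```

The gap between the number of blocks and the main-chain length is exactly the throughput loss due to forking discussed in this section.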
For this reason, a key objective of our work is to quantify block throughput, both with and without polling. We begin by studying block throughput without polling under the Nakamoto (longest-chain) fork-choice rule, as in Bitcoin. This has been previously studied in (Sompolinsky and Zohar, 2015; Garay et al., 2015; Bagaria et al., [n. d.]) under a simple network model in which there is a fixed deterministic delay between any pair of nodes. This simple network model is justified by arguing that if all message transmissions are guaranteed to arrive within a fixed maximum delay, then the worst case for block throughput occurs when all transmissions incur exactly that maximum delay. This practice ignores all other network effects for the sake of tractable analysis. In this section, we focus on capturing such network effects on the block throughput. We ask the fundamental question of how block throughput depends on the average network delay, under a more realistic network model where each communication delay is a realization of an exponential random variable with mean Δ. In the following (Theorem 1), we provide a lower bound on the block throughput under the more nuanced network model from Section 2 and the Nakamoto fork-choice rule. This result holds for a deterministic arrival process. We refer to a longer version of this paper (Fanti et al., 2018) for a proof.
Theorem 1.
Suppose there is a single proposer (K = 1) at each discrete time T_j = j, with no waiting time (Δ_p = 0). For any number of nodes N, any time t, and any average delay Δ, under the Nakamoto protocol, we have that
Notice that trivially the expected main chain length is at most t, with equality when there is no network delay (Δ = 0). Theorem 2 and our experiments in Figure 1 suggest that Theorem 1 is tight. Hence there is an (often substantial) gap between the realized block throughput and the desired upper bound. This gap is caused by network delays; since proposers may not have an up-to-date view of the blocktree due to network latency, they may append to blocks that are not necessarily at the end of the global main chain, thereby causing the blockchain to fork.
One goal is to obtain a blocktree with no forking at all, i.e., a perfect blockchain whose main chain contains every block. The following result characterizes how small the average delay Δ must be, relative to the number of blocks t, for the blocktree to form a chain with high probability.
Theorem 2.
Fix a confidence parameter δ ∈ (0, 1). For the Nakamoto protocol, if
(1) 
then the blocktree forms a chain with probability at least 1 − δ as N → ∞ and t → ∞.
Conversely, when
(2) 
then the blocktree forms a chain with probability at most δ as N → ∞. Here the asymptotic notation hides the dependence on the confidence parameter δ, which is fixed throughout.
The proof is included in Section 6.1. This result shows the prevalence of forking. For example, if we conservatively plug Bitcoin’s parameter settings into equation (2), it implies that forking occurs with high probability once enough blocks have been produced. Hence forking is pervasive even in systems with parameters chosen specifically to avoid it.
A natural question is how to reduce forking, and thereby increase block throughput. To this end, we next introduce a blockchain evolution protocol called Barracuda that effectively reduces forking without changing the network delay parameter Δ, which is determined by network bandwidth.
4. Barracuda
To reduce forking and increase block throughput, we propose ℓ-Barracuda, which works as follows: upon arrival of a block, the proposer of that block selects ℓ − 1 nodes in the network uniformly at random and inquires about their local trees. (We use the name Barracuda to refer to the general principle, and ℓ-Barracuda to refer to an instantiation with polling parameter ℓ.) The proposer aggregates the information from the other nodes and decides where to attach the block based on the local attachment protocol it follows. One key observation is that there is no conflict between the local trees of different nodes, so ℓ-Barracuda simply merges the local trees into a single tree whose edge set is the union of all the edges in the polled local trees. Note that we poll ℓ − 1 nodes, so that a total of ℓ local trees contribute, as the proposer’s own local tree also contributes to the union.
We assume that when Barracuda polling happens, the polling requests arrive at the polled nodes instantaneously, and it takes the proposer node time Δ_p to make the decision on where to attach the block. The instantaneous polling assumption is relaxed in Section 5. Recall that in our model, Δ accounts for both network delay and processing delays. In live blockchain P2P networks, a substantial fraction of block propagation delay originates from the processing (e.g., validity checks) done by each node before relaying the block. These delays could grow more pronounced for blockchains with more complex processing requirements, such as smart contracts. Since these computational checks are not included in the polling process, the polling delay can be much smaller than the overall network propagation delay. To simplify the analysis, we also assume that each node processes the additional polled information in real time but does not store it; in other words, the information a node obtains from polling at time T_j is forgotten at time T_{j+1}. This modeling choice simplifies the analysis; it results in a lower bound on the improvements due to polling, since nodes discard information. In practice, network delay affects polling communication as well, and we investigate these effects experimentally in Section 5.1.
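The aggregation step described above can be sketched as follows; since every block has a unique parent fixed at creation, local trees never conflict and the merge is a plain union of edge sets. The representation and sampling details are illustrative assumptions.

```python
# Sketch of the Barracuda aggregation step: the proposer's local tree and the
# trees of the l-1 polled nodes are merged by taking the union of their edges.
import random

def barracuda_merge(own_tree, peer_trees, ell, seed=0):
    """Union the proposer's tree with l-1 uniformly polled local trees.

    Each tree is a dict block -> parent; because each block's parent is fixed
    at creation, dict.update never overwrites an edge inconsistently.
    """
    rng = random.Random(seed)
    merged = dict(own_tree)
    for tree in rng.sample(peer_trees, ell - 1):
        merged.update(tree)  # edge-set union
    return merged

# Two peers each saw a different fork; the proposer polls both (l = 3).
peer_views = [{"a": "genesis"}, {"b": "genesis", "c": "b"}]
merged = barracuda_merge({"a": "genesis"}, peer_views, ell=3)
assert merged == {"a": "genesis", "b": "genesis", "c": "b"}
```

After the merge, the proposer applies its usual fork choice rule to the merged tree, exactly as it would to its own local view.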
To investigate the effect of polling on the blockchain, we define appropriate events on the probabilistic model of block arrival and blocktree growth. We denote by Exp(Δ) an exponential random variable with mean Δ (its probability density function is f(x) = (1/Δ) e^{−x/Δ} for x ≥ 0), and define the set [n] = {1, …, n} for any integer n. For the message announcing block j, denote its arrival time at node i as τ_{i,j}. If node i is the proposer of block j, then τ_{i,j} = T_j. If node i is not the proposer of block j, then τ_{i,j} = T_j + ξ_{i,j}, where ξ_{i,j} ∼ Exp(Δ). It follows from our assumptions that the random variables ξ_{i,j} are mutually independent over all i and j. We also denote the proposer of block j as p_j. To denote polled nodes, we also write p_j as p_j^(1), and denote the ℓ − 1 other nodes polled by p_j as p_j^(2), …, p_j^(ℓ).
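As a concrete illustration of this delay model, the following sketch samples the arrival time of each block at each node, assuming (as above) that a block's proposer sees it immediately while every other node incurs an independent exponential delay. All parameter values and names are illustrative.

```python
# Sketch of the delay model: block j is created at time T_j, reaches its
# proposer immediately, and reaches every other node after an independent
# exponential delay with mean delta.
import random

def arrival_times(block_times, proposers, num_nodes, delta, seed=0):
    """Return tau[i][j]: the time node i receives block j."""
    rng = random.Random(seed)
    tau = [[0.0] * len(block_times) for _ in range(num_nodes)]
    for j, (t_j, p_j) in enumerate(zip(block_times, proposers)):
        for i in range(num_nodes):
            # The proposer sees its own block with zero delay.
            tau[i][j] = t_j if i == p_j else t_j + rng.expovariate(1.0 / delta)
    return tau

tau = arrival_times(block_times=[1.0, 2.0], proposers=[0, 1], num_nodes=3, delta=0.5)
assert tau[0][0] == 1.0 and tau[1][1] == 2.0        # proposers: no delay
assert all(tau[i][0] > 1.0 for i in (1, 2))         # others: positive delay
```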
When block j is being proposed, we define the following random variables. Let the random variable
(3) 
Here . For any , we denote .
Since we will aggregate the information from the ℓ nodes in total whenever a proposer proposes, we also define the event that, when block j was proposed, at least one of these nodes had received a given earlier block. The crucial observation is that when the proposer tries to propose block j, the complete information it utilizes for its decision is the collection of random variables
(4) 
The global tree at time T_j is a tree consisting of the genesis block and all blocks proposed up to time T_j, and we are interested in its distribution. To illustrate how to compute the probability of a certain tree structure, we demonstrate the computation through an example with small values of N, K, and ℓ.
For simplicity, we drop the proposer index, since there is a single proposer per block in this example. The probability of some of the configurations of the global tree in Figure 2a can be written as
Note that for the event in Figure 2, it does not matter whether the node in question has received the block in question, as that block’s parent is missing from the node’s local tree; the block is therefore not included in the node’s local tree at that point in time.
4.1. Main result
Under any local attachment protocol and any block arrival distribution, the probability that the global tree takes a particular structure depends on the random choices of proposers and polled nodes, on the messages received at those respective nodes, and on additional outside randomness in the network delays and the block arrival times. The following theorem characterizes the dependence of the distribution of the global tree on the system parameters for a general local attachment protocol (including the longest-chain protocol). We provide a proof in Section 6.2.
Theorem 1.
For any local attachment protocol and any inter-block arrival distribution, define a random variable taking values in the set of all possible tree structures such that (this random variable is well defined, since the protocol is assumed not to depend on the identities of the block proposers; hence the conditional probability is identical conditioned on each specific choice of proposers and polled nodes, whenever all of these nodes are distinct)
(5) 
We have the following results:

There exists a function, independent of all the parameters in the model, such that for any possible tree structure,
(6) 
The total variation distance between the distribution of the actual blocktree and that of its conditioned counterpart is upper bounded:
(7)
In the definition in Eq. (5), we condition on the event that all proposers and polled nodes are distinct. This conditioning ensures that the blocks received at those nodes are independent over time. This in turn allows us to capture the precise effect of ℓ in the main result in Eq. (6). Further, the bound in Eq. (7) implies that such conditioning is not too far from the actual evolution of the blockchain, as long as the number of nodes N is large enough. While this condition may seem restrictive, notice that in practice, many blockchains operate in epochs of finite duration, such that the state of the blockchain is finalized between epochs (Buterin and Griffith, 2017; Kiayias et al., 2017). Finalization means that the system chooses a single fork, and builds on the last block of that fork in the subsequent epoch. Hence, the above condition can be physically met with finite N. Moreover, in practice, N need not be so large, as we show in Figure 3: even with moderate N for 4-polling, the experiments support the predictions of Theorem 1.

The main message of the above theorem is that ℓ-Barracuda effectively reduces the network delay by a factor of ℓ. For any local attachment protocol and any block arrival process, up to a small total variation distance, the distribution of the evolution of the blocktree with ℓ-Barracuda is the same as the distribution of the evolution of the blocktree with no polling but with a network that is ℓ times faster. We confirm this in numerical experiments (plotted in Figure 3) under the longest-chain fork choice rule and the network model from Section 2.2. In the inset we show the same results with the x-axis rescaled by the polling parameter; as predicted by Theorem 1, the curves converge to a single curve and are indistinguishable from one another.
Without polling, the throughput degrades quickly as the network delay increases. This becomes critical as we try to scale up PoS systems: blocks should be generated more frequently, pushing network infrastructure to its limits. With polling, we can achieve an effective speedup of the network without investing resources in hardware upgrades. Note that in this figure, we compare the average block throughput, which is the main property of interest. We make this connection precise in the following. Define the length of the longest chain in the global tree, excluding the genesis block; throughput is defined as the expected value of this length normalized by time. We have the following corollary of Theorem 1.
Corollary 1.
There exists a function independent of all the parameters in the model such that
(8) 
In other words, in this regime the expectation of the length of the longest chain depends on the delay parameter Δ and the polling parameter ℓ only through their ratio Δ/ℓ. Hence, the block throughput enjoys the same polling gain as the distribution of the resulting blocktrees.
4.2. Connections to ballsinbins example
In this section, we give a brief explanation of the ballsinbins problem and the power of two choices in load balancing. We then make a concrete connection between the blockchain problem and the power of polling in information balancing.
In the classical balls-in-bins example, we have n balls and n bins, and we sequentially throw each ball into a uniformly randomly selected bin. Then, the maximally loaded bin has load (i.e., number of balls in that bin) scaling as log n / log log n (Mitzenmacher and Upfal, 2005). The power-of-two-choices result states that if every time we select d ≥ 2 bins uniformly at random and throw the ball into the least loaded of them, the maximum load enjoys a near-exponential reduction to log log n / log d + O(1) (Mitzenmacher and Upfal, 2005).
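A quick simulation sketch of the two classical facts above; the loads are random, so the test only checks the qualitative ordering for one fixed seed and large n.

```python
# Sketch of n balls into n bins: uniform placement (d=1) versus the
# least-loaded of d=2 uniformly chosen bins (power of two choices).
import random

def max_load_least(n, d, seed=0):
    """Throw n balls into n bins; each ball goes to the least-loaded of d picks."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        picks = [rng.randrange(n) for _ in range(d)]
        best = min(picks, key=lambda b: bins[b])  # least-loaded choice
        bins[best] += 1
    return max(bins)

one_choice = max_load_least(100_000, 1, seed=1)
two_choices = max_load_least(100_000, 2, seed=1)
assert two_choices <= one_choice  # two choices balance the load better
```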
Our polling idea is inspired by this power of two choices in load balancing. We make this connection gradually more concrete in the following. First, consider the case where the underlying network is extremely slow, so that no broadcast block is ever received. When there is no polling, each node is only aware of its local blockchain, consisting of only those blocks it generated itself. There is a one-to-one correspondence with the balls-in-bins setting, as blocks (balls) arriving at each node (bin) build up a load (local blockchain). When there are N nodes and N blocks, it trivially follows that the length of the longest chain scales as log N / log log N when there is no polling.
The main departure is that in blockchains, the goal is to maximize the length of the longest chain (the maximum load). This leads to the following fundamental question in the balls-in-bins problem, which has not been solved, to the best of our knowledge: if we throw each ball into the most loaded bin among ℓ randomly chosen bins at each step, how does the maximum load scale with n and ℓ? That is, if one wanted to maximize the maximum load, leading to load unbalancing, how much gain does the power of ℓ choices give? We give a precise answer in the following.
Theorem 3.
Given n empty bins and n balls, we sequentially allocate the balls to bins as follows. For each ball, we select ℓ bins uniformly at random, and put the ball into the maximally loaded bin among the chosen ones. Then, the maximum load of the bins after the placement of all balls is at most
(9) 
with probability at least 1 − o(1), where C is a universal constant.
We refer to a longer version of this paper (Fanti et al., 2018) for a proof. This shows that the gain of polling in maximizing the maximum load is linear in ℓ. Even though this is not as dramatic as the exponential gain in the load-balancing case, it gives a precise characterization of the gain in the throughput of ℓ-Barracuda in blockchains as Δ → ∞. This is under a slightly modified protocol in which polling happens in a bidirectional manner, such that the local tree and the newly appended block of the proposer are also sent to the polled nodes.
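The unbalanced variant in Theorem 3 can be sketched as follows: each ball goes to the *most* loaded of ℓ randomly chosen bins, mimicking a proposer extending the longest chain it can see. This sketch only checks the qualitative claim that more choices grow the maximum load, on one fixed seed; it does not verify the precise scaling.

```python
# Sketch of the unbalanced (load-maximizing) balls-in-bins process of
# Theorem 3: each ball joins the most-loaded of ell uniformly chosen bins.
import random

def max_load_greedy(n_bins, n_balls, ell, seed=0):
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        picks = [rng.randrange(n_bins) for _ in range(ell)]
        target = max(picks, key=lambda b: bins[b])  # most-loaded choice
        bins[target] += 1
    return max(bins)

l1 = max_load_greedy(10_000, 10_000, ell=1, seed=1)
l4 = max_load_greedy(10_000, 10_000, ell=4, seed=1)
assert l4 >= l1  # more choices lengthen the "longest chain"
```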
In the moderate to small delay regime, which is the operating regime of real systems, blocktree evolution is connected to a generalization of the balls-in-bins model: it is as if each ball were copied and broadcast to all other bins over a communication network. This is where the intuitive connection to balls-in-bins stops, as we store the information in a specific data structure that we call a blocktree. However, we borrow the terminology of 'load balancing' and refer to the effect of polling as 'information balancing', even though load balancing refers to minimizing the maximum load, whereas information balancing refers to maximizing the maximum load (the longest chain) by spreading information throughout the nodes via polling.
5. System and implementation issues
We empirically verify the robustness of our proposed protocol under various issues that might arise in a practical implementation of Barracuda. Our experiment consists of nodes connected via a network that emulates the end-to-end delay as an exponential distribution; this model is inspired by measurements of the Bitcoin P2P network in (Decker and Wattenhofer, 2013).
Each node maintains a local blocktree, which is a subset of the global blocktree. We use a deterministic block arrival process with unit inter-arrival time, also termed an epoch in this section. This represents an upper bound on block arrivals in real-world PoS systems, where blocks can only arrive at fixed time intervals. At the start of each epoch, proposers are chosen at random and each of them proposes a block.
When there is no polling, each proposer chooses the most eligible block from its blocktree to be the parent of the block it proposes, based on the fork choice rule. In the case of Barracuda, the proposer sends a pull message to ℓ randomly chosen nodes, and these nodes send their blocktrees back to the proposer. The proposer receives the blocktrees from the polled nodes after a delay, and updates her local blocktree by taking the union of all received blocktrees. The same fork choice rule is then applied to decide the parent of the newly generated block. In all experiments, the Nakamoto longest-chain fork choice rule is used, and experiments are run over many epochs on this network.
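The proposer's update step described above can be sketched with simplified data structures. The representation below (a blocktree as a dict of child-to-parent pointers, with made-up block ids) is illustrative, not the paper's implementation.

```python
# Minimal sketch of a proposer's view update under Barracuda-style polling.
# A blocktree is a dict mapping block id -> parent id (genesis parent: None).

def depth(tree, block, cache=None):
    """Length of the path from `block` back to genesis."""
    if cache is None:
        cache = {}
    if block not in cache:
        parent = tree[block]
        cache[block] = 0 if parent is None else 1 + depth(tree, parent, cache)
    return cache[block]

def longest_chain_tip(tree):
    """Nakamoto fork choice: the deepest block (ties broken by id)."""
    return max(tree, key=lambda b: (depth(tree, b), b))

def propose(local_tree, polled_trees, new_block):
    """Union the polled blocktrees into the local one, then attach the
    new block to the tip selected by the longest-chain rule."""
    merged = dict(local_tree)
    for t in polled_trees:
        merged.update(t)
    merged[new_block] = longest_chain_tip(merged)
    return merged

# Example: the proposer only knows a short fork, but a polled node
# reveals a longer chain, so the new block extends the longer one.
local = {"g": None, "a1": "g"}
polled = {"g": None, "b1": "g", "b2": "b1"}
updated = propose(local, [polled], "c")
assert updated["c"] == "b2"
```

The union step is what reduces forking: without polling, the new block would have extended the proposer's short local fork instead.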
5.1. Effect of polling delay
In reality, there is a delay between initiating a poll request and receiving the blocktree information. We expect the polling delay to be smaller than the delay of the P2P relay network, because polling communication is point-to-point rather than routed through the P2P relay network. To understand the effects of polling delay, we ran simulations in which a proposer polls ℓ nodes at the time of proposal, and each piece of polled information arrives after its own delay. The proposer determines the pointer of the new block once all polled messages are received.
Figure 7 shows the effect of such polling delay, as measured by the largest polling delay that still achieves a block throughput of at least 0.8 under Barracuda.
Under this model, polling more nodes means waiting for more responses; the gains of polling hence saturate for large enough ℓ, and there is an appropriate practical choice of ℓ that depends on the interplay between the P2P network speed and the polling delay.
In practice, there is a strategy to get a large polling gain, even with delays: the proposer polls a large number of nodes, but only waits a fixed amount of time before making a decision. Under this protocol, polling more nodes can only help; the only cost of polling is the communication cost. The results of our experiments under this protocol are illustrated in Figure 7 (‘poll delay fixed wait’ curve).
This implies a gap in our model, which does not fully account for the practical cost of polling. To account for polling costs, we make the model more realistic by assigning a small, constant delay to set up a connection with each polled node, and assume that connection setup occurs sequentially across the polled nodes. The proposer follows the same strategy as above: waiting a fixed amount of time before making the decision. Under this model, there is a finite optimal ℓ, as shown in Figure 7.
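The interplay between sequential connection setup and a fixed wait budget can be sketched as follows; the function and parameter names are illustrative, not the paper's simulation code.

```python
def responses_within_budget(delays, setup_cost, wait_budget):
    """Under the fixed-wait strategy, the proposer contacts polled nodes
    sequentially (each connection costs `setup_cost` to set up) and uses
    only the responses that arrive before `wait_budget` has elapsed.
    `delays[i]` is the round-trip delay of the i-th polled node."""
    received = 0
    for i, d in enumerate(delays):
        start = (i + 1) * setup_cost      # sequential connection setup
        if start + d <= wait_budget:
            received += 1
    return received

# Polling more nodes can only add responses, but with sequential setup
# the later polls start too late to return in time, so the number of
# useful responses saturates -- consistent with a finite optimal choice.
assert responses_within_budget([0.3, 0.3, 0.3], setup_cost=0.1, wait_budget=1.0) == 3
assert responses_within_budget([0.3] * 20, setup_cost=0.1, wait_budget=1.0) == 7
```

In the second call, only the first seven polls can both be set up and answered within the budget, no matter how many nodes are polled.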
As practical polling delays might be larger, we also compare against a more practical setting with larger polling delay and a fixed threshold wait time in Figure 4. Even with these larger delays, performance still increases continuously with ℓ, and polling still provides a 250% improvement at ℓ = 10.
5.2. Heterogeneous networks
The theoretical and experimental evidence for the benefits of Barracuda has so far been presented in the context of a homogeneous network: all nodes have the same bandwidth and processing speeds, and individual variation in end-to-end delay due to network traffic is captured by statistically identical exponential random variables. In practice, heterogeneity is natural: some nodes have stronger network capabilities. We model this by clustering the nodes into groups according to average network speed, where the speed of a connection is determined by the slower of its two endpoints; that is, the average delay of transmitting a block between two nodes is governed by the larger of their individual delays. We compare the performance of Barracuda against no polling (which has worse performance and serves as a lower bound), using a uniform polling strategy. In Figure 7, we show the performance of a heterogeneous network in which half of the nodes are fast and the other half are slow. Every node has the same proposer election probability. Barracuda gives a throughput increase in line with Theorem 1.
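As a small illustration of this delay model (all numbers are made up for the example), the mean link delay of a half-fast, half-slow network can be computed directly, with each link's delay set by its slower endpoint:

```python
import itertools

def link_delay(du, dv):
    """The speed of a connection is determined by the slower endpoint,
    so the mean link delay is the larger of the two node delays."""
    return max(du, dv)

n = 10
delays = [1.0] * (n // 2) + [2.0] * (n // 2)   # half fast, half slow nodes
pairs = list(itertools.combinations(range(n), 2))
mean = sum(link_delay(delays[u], delays[v]) for u, v in pairs) / len(pairs)

# Of the 45 links, only the 10 fast-fast links have delay 1.0; the other
# 35 links are dragged down to delay 2.0 by a slow endpoint.
assert abs(mean - 80 / 45) < 1e-9
```

This shows why heterogeneity hurts more than averaging node speeds would suggest: any link touching a slow node inherits the slow delay.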
5.3. Other practical issues
Three major practical issues remain. First, the polling studied in this paper requires syncing the complete local blocktree, which is redundant and unnecessarily wastes network resources. For efficient bandwidth usage, we propose a windowed form of polling, in which the polled nodes send only the blocks generated within a recent time window. Second, to ensure timely responses from polled nodes, we propose an appropriate incentive mechanism, motivated by the reputation systems used in BitTorrent. Finally, a fraction of the participants may deviate from the proposed protocol with explicit malicious intent of harming the key performance metrics, and it is natural to explore the potential security vulnerabilities exposed by the Barracuda protocol proposed in this paper. All these practical issues are treated in detail, with supporting numerical experiments, in the longer version of this paper (Fanti et al., 2018).
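A minimal sketch of such bandwidth-saving polling, assuming each node records the generation time of every block it holds (the data layout and names here are illustrative):

```python
def poll_response(blocktree_times, window_start, window_end):
    """Instead of its full local blocktree, a polled node returns only
    the blocks whose generation time falls in the requested window.
    `blocktree_times` maps block id -> generation time (illustrative)."""
    return {b: t for b, t in blocktree_times.items()
            if window_start <= t <= window_end}

# A poller that already synced up to time 4 asks only for newer blocks.
tree = {"g": 0, "a": 1, "b": 2, "c": 5, "d": 6}
assert poll_response(tree, 5, 6) == {"c": 5, "d": 6}
```

The response size then scales with the block production rate over the window rather than with the full chain history.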
6. Proofs of the main results
6.1. Proof of Theorem 2
We apply Theorem 1 with a general parameter and then specialize it to obtain the theorem statement. With notation as in Theorem 1, the event in question can be written as
(10) 
Consider the event that every node has proposed or been polled at most once. Conditioned on this event, and with the definitions below:
We now claim that if , we have . Let . Indeed, in this case, we have . Hence,
where the step follows from Lemma 1 below. Conversely, we show the reverse implication. Indeed, in this case we have
Lemma 1.
Let be fixed. Then we have that
The proof is included in the extended version (Fanti et al., 2018). Note that the relevant distribution is independent of the conditioning variable; hence we can condition on a specific realization and compute the conditional probability of the event that the final global tree is a chain. We claim:
Lemma 2.
For any , letting , we have .
The full proof is included in (Fanti et al., 2018). The upper bound uses the independence of the propagation delays, whereas the lower bound relies on the fact that all the events in the indicator functions are non-negatively correlated.
Since Lemma 2 does not depend on the values of the delays, the bounds apply here as well. For the polling strategy, it can be verified that both the upper and lower bound computations in Lemma 2 remain valid after replacing the relevant parameter. Indeed, for the upper bound, each block contributes at least ℓ independent random variables; for the lower bound, we can show that the events are positively correlated. We now claim that, to ensure the desired probability lower bound, the required parameter must grow at the stated rate for large system sizes. In one direction, if the condition holds, then the probability lower bound is satisfied. Indeed, in this case,
as . We also claim that if , then the probability lower bound is asymptotically not satisfied. Indeed, in this case
as . Hence, the threshold we aim for should be precisely . Replacing it with we get
6.2. Proof of Theorem 1
Part (1). One key observation is that if every node has been polled or has proposed at most once, i.e., the sequence contains distinct nodes, then conditioned on this specific sequence, all the relevant random variables are mutually independent. Furthermore, conditioned on this specific sequence, we have
(11)  
(12) 
for all indices in the stated range. Consider the corresponding event. It follows from the definition of the local attachment protocol that the identity holds for all such indices. Note that this event depends only on the indicated quantities plus some additional outside randomness. It then follows from this independence and equation (11) that
(13) 
for all such indices. Hence, we have
Now we show the second part of Theorem 1. Denote by an arbitrary collection of distinct tree structures in which the blocktrees may take values. Then, we have
(14)  
(15) 
It follows from the birthday paradox computation (Mitzenmacher and Upfal, 2005, p. 92) that this probability is small. Hence, we have shown that for any measurable set in which either tree takes values, the two probabilities are close. The result follows from the definition of the total variation distance.
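The birthday-paradox step can be checked numerically; the sketch below compares the exact probability of a repeated draw with the standard union bound (parameter values are illustrative):

```python
def collision_probability(n, m):
    """Exact probability that m uniform draws from n values contain a
    repeat (the birthday-paradox probability)."""
    p_distinct = 1.0
    for i in range(m):
        p_distinct *= (n - i) / n
    return 1.0 - p_distinct

def union_bound(n, m):
    """Standard m(m-1)/(2n) upper bound on the same probability."""
    return m * (m - 1) / (2 * n)

# When the number of proposed/polled nodes is much smaller than the
# square root of the network size, the collision probability -- and
# hence the total variation distance in the argument -- is small.
n = 10_000
for m in (10, 50, 100):
    assert collision_probability(n, m) <= union_bound(n, m)
```

For instance, with n = 10,000 and m = 10, the union bound is 45/10,000, so the coupling between the two processes fails with probability under half a percent.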
Part (2). We note that there exists some function, independent of all the parameters in the model, such that the expectation of the longest chain length can be expressed through it. To obtain the final result, it suffices to use the variational representation of the total variation distance, noting that the length of the longest chain in the tree is at most the total number of blocks.
7. Related Work
Four main approaches exist for reducing forking.
(1) Reducing proposer diversity. A natural approach is to make the same node propose consecutive blocks; for instance, Bitcoin-NG (Eyal et al., 2016) proposers use the longest-chain fork choice rule, but within a given time epoch, only a single proposer can propose blocks. This allows the proposer to quickly produce blocks without forking effects. Although Bitcoin-NG has high throughput, it exhibits a few problems. When a single node is in charge of block proposal for an extended period of time, attackers may be able to learn that node's IP address and take it down. The idea of fixing the proposer is also used in other protocols, such as Thunderella (Pass and Shi, 2018) and ByzCoin (Kogias et al., 2016).
(2) Embracing forking. Other protocols use forking to contribute to throughput. Examples include GHOST (Sompolinsky and Zohar, 2015), PHANTOM (Sompolinsky and Zohar, [n. d.]), SPECTRE (Sompolinsky et al., 2016), and Inclusive/Conflux (Lewenberg et al., 2015; Li et al., 2018). GHOST describes a fork choice rule that tolerates honest forking by building on the heaviest subtree of the blocktree. SPECTRE, PHANTOM, and Conflux instead use existing fork choice rules, but build a directed acyclic graph (DAG) over the produced blocks to define a transaction ordering. A formal understanding of such DAG-based protocols is evolving; their security properties are not yet well understood.
(3) Structured DAGs. A related approach is to allow structured forking. The Prism consensus mechanism explicitly co-designs a consensus protocol and fork choice rule to securely deal with concurrent blocks, thereby achieving optimal throughput and latency (Bagaria et al., [n. d.]). The key intuition is to run many concurrent blocktrees, where a single proposer tree is in charge of ordering transactions, and the remaining voter trees are in charge of confirming blocks in the proposer tree. Barracuda is designed to be integrated into existing consensus protocols, whereas (Bagaria et al., [n. d.]) is a new consensus protocol.
(4) Fork-free consensus. Consensus protocols like Algorand (Gilad et al., 2017), Ripple, and Stellar (Mazieres, 2015) prevent forking entirely by conducting a full round of consensus for every block. Although voting-based consensus protocols consume additional time for each block, they may improve overall efficiency by removing the need to resolve forks later; this hypothesis remains untested. A challenge in such protocols is that BFT voting protocols can be communication-intensive and require a known set of participants. Although some work addresses these challenges (Kogias et al., 2016; Pass and Shi, 2017), many industrial blockchain systems running on BFT voting protocols require some centralization.
Our approach can be viewed as a partial execution of a pollingbased consensus protocol. Polling has long been used in consensus protocols (Cruise and Ganesh, 2014; Abdullah and Draief, 2015; Fischer et al., 1985; Rocket, 2018). Our approach differs in part because we do not use polling to reach complete consensus, but to reduce the number of inputs to a (separate) consensus protocol.
8. Conclusion
In this paper, we propose polling as a technique for improving block throughput in proof-of-stake cryptocurrencies. We show that for small ℓ, polling ℓ nodes has the same effect on block throughput as if the mean network delay were reduced by a factor of ℓ. This simple, lightweight method improves throughput without substantially altering either the underlying consensus protocol or the network. Several open questions remain, particularly with regard to analyzing adversarial behavior in polling. We have avoided them in this paper by proposing a symmetric version of the protocol (cf. Section 5.3), but even within the original polling protocol, it is unclear how much an adversary could affect block throughput and/or chain quality by responding untruthfully to poll requests.
Acknowledgements.
We thank Sanjay Shakkottai for helpful discussions on the impact of polling on load balancing. This work was supported by NSF grants CCF-1705007, CCF-1617745, and CNS-1718270, ARO grant W911NF-18-1-0332 (73198-NS), the Distributed Technologies Research Foundation, and Input Output Hong Kong.
References
 Abdullah and Draief (2015) Mohammed Amin Abdullah and Moez Draief. 2015. Global majority consensus by local majority polling on graphs of a given degree sequence. Discrete Applied Mathematics 180 (2015), 1–10.
 Bagaria et al. ([n. d.]) Vivek Bagaria, Sreeram Kannan, David Tse, Giulia Fanti, and Pramod Viswanath. [n. d.]. Deconstructing the Blockchain to Approach Physical Limits. https://arxiv.org/abs/1810.08092.
 Basu et al. ([n. d.]) S Basu, Ittay Eyal, and EG Sirer. [n. d.]. The Falcon Network.
 Biryukov et al. (2014) Alex Biryukov, Dmitry Khovratovich, and Ivan Pustogarov. 2014. Deanonymisation of clients in Bitcoin P2P network. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, 15–29.
 Buterin and Griffith (2017) Vitalik Buterin and Virgil Griffith. 2017. Casper the friendly finality gadget. arXiv preprint arXiv:1710.09437 (2017).
 Cardano ([n. d.]) Cardano. [n. d.]. Cardano Settlement Layer Documentation. https://cardanodocs.com/technical/.
 Cruise and Ganesh (2014) James Cruise and Ayalvadi Ganesh. 2014. Probabilistic consensus via polling and majority rules. Queueing Systems 78, 2 (2014), 99–120.
 Decker and Wattenhofer (2013) Christian Decker and Roger Wattenhofer. 2013. Information propagation in the bitcoin network. In Peer-to-Peer Computing (P2P), 2013 IEEE Thirteenth International Conference on. IEEE, 1–10.
 Eyal et al. (2016) Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, and Robbert Van Renesse. 2016. Bitcoin-NG: A Scalable Blockchain Protocol. In NSDI. 45–59.
 Eyal and Sirer (2018) Ittay Eyal and Emin Gün Sirer. 2018. Majority is not enough: Bitcoin mining is vulnerable. Commun. ACM 61, 7 (2018), 95–102.
 Fanti et al. (2018) Giulia Fanti, Jiantao Jiao, Ashok Makkuva, Sewoong Oh, Ranvir Rana, and Pramod Viswanath. 2018. Barracuda: The Power of ℓ-polling in Proof-of-Stake Blockchains. (2018). Available at http://swoh.web.engr.illinois.edu/polling.pdf and arXiv.
 Fischer et al. (1985) Michael J. Fischer, Nancy Lynch, and Michael S. Paterson. 1985. Impossibility of Distributed Consensus with One Faulty Process. J. ACM 32, 2 (1985), 374–382.
 Garay et al. (2015) Juan Garay, Aggelos Kiayias, and Nikos Leonardos. 2015. The bitcoin backbone protocol: Analysis and applications. In Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, 281–310.
 Gilad et al. (2017) Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, and Nickolai Zeldovich. 2017. Algorand: Scaling byzantine agreements for cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 51–68.
 Göbel et al. (2016) Johannes Göbel, Holger Paul Keeler, Anthony E Krzesinski, and Peter G Taylor. 2016. Bitcoin blockchain dynamics: The selfishmine strategy in the presence of propagation delay. Performance Evaluation 104 (2016), 23–41.
 Heilman et al. (2015) Ethan Heilman, Alison Kendler, Aviv Zohar, and Sharon Goldberg. 2015. Eclipse Attacks on Bitcoin’s PeertoPeer Network.. In USENIX Security Symposium. 129–144.
 Kiayias et al. (2017) Aggelos Kiayias, Alexander Russell, Bernardo David, and Roman Oliynykov. 2017. Ouroboros: A provably secure proofofstake blockchain protocol. In Annual International Cryptology Conference. Springer, 357–388.
 Kogias et al. (2016) Eleftherios Kokoris Kogias, Philipp Jovanovic, Nicolas Gailly, Ismail Khoffi, Linus Gasser, and Bryan Ford. 2016. Enhancing bitcoin security and performance with strong consistency via collective signing. In 25th USENIX Security Symposium (USENIX Security 16). 279–296.
 Lee (2017) Timothy Lee. 2017. Bitcoin's insane energy consumption, explained. Ars Technica (2017).
 Lewenberg et al. (2015) Yoad Lewenberg, Yonatan Sompolinsky, and Aviv Zohar. 2015. Inclusive block chain protocols. In International Conference on Financial Cryptography and Data Security. Springer, 528–547.
 Li et al. (2018) Chenxing Li, Peilun Li, Wei Xu, Fan Long, and Andrew Chichih Yao. 2018. Scaling Nakamoto Consensus to Thousands of Transactions per Second. arXiv preprint arXiv:1805.03870 (2018).
 Mazieres (2015) David Mazieres. 2015. The stellar consensus protocol: A federated model for internetlevel consensus. Stellar Development Foundation (2015).
 Miller et al. (2015) Andrew Miller, James Litton, Andrew Pachulski, Neal Gupta, Dave Levin, Neil Spring, and Bobby Bhattacharjee. 2015. Discovering Bitcoin's public topology and influential nodes. (2015).
 Mitzenmacher and Upfal (2005) Michael Mitzenmacher and Eli Upfal. 2005. Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge university press.
 Papadis et al. (2018) Nikolaos Papadis, Sem Borst, Anwar Walid, Mohamed Grissa, and Leandros Tassiulas. 2018. Stochastic Models and WideArea Network Measurements for Blockchain Design and Analysis. In IEEE INFOCOM 2018IEEE Conference on Computer Communications. IEEE, 2546–2554.
 Pass and Shi (2017) Rafael Pass and Elaine Shi. 2017. Hybrid consensus: Efficient consensus in the permissionless model. In LIPIcsLeibniz International Proceedings in Informatics, Vol. 91. Schloss DagstuhlLeibnizZentrum fuer Informatik.
 Pass and Shi (2018) Rafael Pass and Elaine Shi. 2018. Thunderella: Blockchains with optimistic instant confirmation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, 3–33.
 Rocket (2018) Team Rocket. 2018. Snowflake to avalanche: A novel metastable consensus protocol family for cryptocurrencies. (2018).
 Sompolinsky et al. (2016) Yonatan Sompolinsky, Yoad Lewenberg, and Aviv Zohar. 2016. SPECTRE: A Fast and Scalable Cryptocurrency Protocol. IACR ePrint Archive 2016 (2016), 1159.
 Sompolinsky and Zohar ([n. d.]) Yonatan Sompolinsky and Aviv Zohar. [n. d.]. PHANTOM: A Scalable BlockDAG Protocol. ([n. d.]).
 Sompolinsky and Zohar (2015) Yonatan Sompolinsky and Aviv Zohar. 2015. Secure highrate transaction processing in bitcoin. In International Conference on Financial Cryptography and Data Security. Springer, 507–527.