Agent-Based Simulations of Blockchain protocols illustrated via Kadena's Chainweb

04/29/2019 ∙ by Tarun Chitra, et al. ∙ Kadena LLC ∙ Association for Computing Machinery

While many distributed consensus protocols provide robust liveness and consistency guarantees under the presence of malicious actors, quantitative estimates of how economic incentives affect security are few and far between. In this paper, we describe a system for simulating how adversarial agents, both economically rational and Byzantine, interact with a blockchain protocol. This system provides statistical estimates for the economic difficulty of an attack and how the presence of certain actors influences protocol-level statistics, such as the expected time to regain liveness. This simulation system is influenced by the design of algorithmic trading and reinforcement learning systems that use explicit modeling of an agent's reward mechanism to evaluate and optimize a fully autonomous agent. We implement and apply this simulation framework to Kadena's Chainweb, a parallelized Proof-of-Work system, that contains complexity in how miner incentive compliance affects security and censorship resistance. We provide the first formal description of Chainweb that is in the literature and use this formal description to motivate our simulation design. Our simulation results include a phase transition in block height growth rate as a function of shard connectivity and empirical evidence that censorship in Chainweb is too costly for rational miners to engage in. We conclude with an outlook on how simulation can guide and optimize protocol development in a variety of contexts, including Proof-of-Stake parameter optimization and peer-to-peer networking design.






I Introduction

Blockchain systems provide security via the economic disincentivization of malicious behaviors, such as double-spending or long-range attacks. Formally, this security is established by proving that the system achieves liveness, consistency, and persistence under suitable networking conditions (e.g. partial synchrony or asynchrony) and under the assumption that honest and rational agents are in the majority [1, 2, 3]. Many of these results give precise quantitative conditions that prescribe when a protocol will fail as a function of parameters, such as the difficulty adjustment period [1] or epoch length [2]. However, it becomes exceedingly difficult to provide similarly strong results when dealing with sharded or parallelized blockchains. There is a marked loss in quantitative strength in the results of the existing literature, as accommodating communication complexity and resource (e.g. energy, space, or stake) redistribution leads to significantly worse liveness performance [4, 5] and weaker tolerances for Byzantine actors. Mathematically, the main reason for this loss in statistical power is an inability to apply non-trivial concentration inequalities to all chains simultaneously, due to residual correlations that stem from cross-shard transactions and the overhead of repeated committee selection. Moreover, the increased communication cost for cross-shard or cross-chain transactions is only described asymptotically, which makes it hard to tune practical peer-to-peer networking algorithms for cross-shard transactions. Finally, as one increases the number of shards, there is often a dramatic increase in the number of parameters, such as timeouts and resource limits per shard, that a protocol designer must choose. The choice of these parameters can dramatically impact the real-world performance and security of a sharded or parallelized blockchain. This paper will introduce agent-based simulation, which is used in a variety of other fields, as a way for protocol designers and practitioners to overcome some of these problems.

Agent-based simulation is used by practitioners and researchers in algorithmic trading [6], artificial intelligence [7], autonomous vehicles [8], cybersecurity [9], economics [10], energy allocation [11], and by the US Commodities and Futures Trading Commission to detect fraudulent market activity [12]. These systems define a set of agents that each have a state space and interact with each other via a prescribed set of actions. Each agent receives updates to their state, caused either by the actions chosen by other agents or by exogenous signals (e.g. external market prices), and then computes a policy function that selects an action for the agent to take. After running a number of simulations of agents interacting under different ensembles of initial conditions, exogenous data, and policies, a practitioner usually computes statistical averages and/or estimators of game-theoretic quantities [13] to provide estimates of ‘macroscopic’ quantities that are emergent properties of the agent interactions. The statistical sampling of these ensembles is usually implemented via Monte Carlo (MC) and Markov chain Monte Carlo (MCMC) methods, while the design of policies, state spaces, and action spaces is performed via analytic modeling and, more recently, deep reinforcement learning.
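The agent loop described above can be sketched in a few lines. This is a toy illustration, not our platform's code: the state (a single "price"), the two policies, and all parameter names are assumptions made purely for exposition.

```python
import random

def run_simulation(agents, steps, seed=0):
    """Minimal agent-based simulation loop: each step, every agent
    observes the shared state, selects an action via its policy, and
    the environment applies all actions plus an exogenous shock."""
    rng = random.Random(seed)
    state = {"price": 100.0}
    history = []
    for _ in range(steps):
        actions = [agent["policy"](state, rng) for agent in agents]
        # Exogenous noise plus the net effect of agent actions.
        state["price"] += rng.gauss(0, 1) + sum(actions)
        history.append(state["price"])
    return history

# Two toy policies: a mean-reverting agent and a noise trader.
agents = [
    {"policy": lambda s, rng: -0.01 * (s["price"] - 100.0)},
    {"policy": lambda s, rng: rng.choice([-0.1, 0.1])},
]
prices = run_simulation(agents, steps=50)
```

Macroscopic statistics (e.g. averages over many such trajectories under different seeds) are then computed over `history`.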


While these techniques have been successfully deployed in practice within other financial disciplines, the use of agent-based simulations for blockchain development has been scant. As far as the authors can tell, agent-based blockchain simulation has focused on refining selfish mining rewards [14], reducing the energy usage of Proof-of-Work systems [15], and on disproving the claims of various block-free ledgers (e.g. IOTA [16]). However, agent-based simulation can be used as a production tool to help protocol designers optimize parameters, estimate latency and bandwidth usage, and to estimate the true cost of security for Proof-of-Stake systems. Moreover, the pervasive usage of agent-based simulations in production environments within algorithmic trading intimates that the technique is useful for monitoring and estimating risk within live blockchain systems.

We aim to illustrate the versatility of agent-based simulation via an analysis of Chainweb, a parallelized Proof-of-Work protocol that has eluded a closed-form security proof due to explicit correlation between chains that makes it difficult to use standard probabilistic tools. Our simulations, which are designed to handle arbitrary distributed consensus protocols, will show that we can get high-fidelity estimates for the latency-security trade-off in Chainweb and an estimate for how much miners lose when they try to censor particular chains.

Finally, we note that estimating the precise cost of security is a difficult challenge that involves a number of variables, some of which are exogenous to the underlying chain like rental markets [17], arbitrage opportunities on centralized and decentralized exchanges, and the difficulty of estimating the market impact of attacks. The simulations in this paper illustrate how endogenous design choices, such as difficulty adjustment periods or shard correlations, affect practical protocol performance. Our final section will conclude with a description of how simulations can interact with exogenous data (akin to trading strategies) and how we can use simulation to model the effects of volatile exchange prices and derivatives markets on chain security. These uses are increasingly important as the security of Proof-of-Stake protocols is inseparably tied to exogenous data and judicious parameter selection (e.g. slashing rates in live protocols such as Tezos [18]). Even alternative Sybil resistance mechanisms such as Proof-of-Replication (used in FileCoin [19]) and privacy data marketplaces [20] end up relying on market making and order matching functionality that is best optimized via agent-based simulation.

II Simulation Methodology

We will discuss our simulation methodology in two parts: one that evinces the features of our generic simulation platform and another that describes how we modeled agents within our system.

II-A Simulation Platform

Our C++ simulation platform consists of a custom discrete event simulation that emulates the peer-to-peer network of a blockchain system and uses a probabilistic generative model to generate events that correspond to actions that agents can take within a blockchain system. We use MCMC methods to sample from this model to generate the next event and to propagate this event via the network graph. Each protocol, whether it be Proof-of-Work, Proof-of-Stake, or Proof-of-Replication, has a standardized interface to define its generative model. This interface forces the protocol designer to specify a model for event arrival times, a method for selecting an agent or committee to produce an event, and the actions that agents are supposed to take. The protocol designer also has to select a routing algorithm that represents how protocol users gossip blocks and off-chain packets with each other. We deliberately separate the routing algorithm from the underlying graph of miners and users, allowing us to test how the system behaves with different gossip protocols. This allows a protocol designer to test their protocol under deterministic algorithms, such as Kademlia [21], or a randomized gossip method [22]. Moreover, the design of this platform is based on a combination of high-frequency financial back-testing simulations, which use highly optimized, low-latency versions of probabilistic networking models, and computational physics simulations, which provide optimized methods for evolving systems of interacting objects.

The simulation also includes a domain-specific language for describing the statistical calculations and policies of individual agents. In order to mimic cryptographic simulation-based proofs [23] and ideal functionalities, we allow agents to directly interact with the MCMC model. This allows honest agents to mutate stochastic components of the simulation related to their local state (e.g. hash power, bonded stake, or space) and lets Byzantine agents mutate the state of other agents. Agent computations are not restricted and may be fully generic; for instance, an agent can use a trained TensorFlow or PyTorch model as a policy. Given that blockchains store, at best, a modicum of data, all agent policies used in the sequel will be rules-based and closer to high-precision, low-recall trading strategies than to low-precision, high-recall reinforcement learning policies.

II-B Agent Design Methodology

We use agent-based models to model rational, Byzantine, and adversarial miner strategies. We assume, without loss of generality, that our agents satisfy the Byzantine-Altruistic-Rational assumption [24]. Agents are represented via a state and action space model, in which agents provide a utility function that is used to adapt a policy, , where the state space is and the action space is . In this paper, we define our state space to be , where is the miner’s local arboretum at time and represents the miner’s hash power distribution. Note that we explicitly exclude block withholding from the action space, as the only choices that a miner can make involve changing their hash power distribution. In particular, this also means that our action space is simply , as any action corresponds to a choice of hash power distribution. As this is preliminary work, we excluded selfish mining and withholding attacks to simplify the statistical analysis. (Our future work, discussed in §4, will use adaptive sampling methods to deal with the non-stationarity that occurs when adding block withholding.) We will expand upon these results and include selfish mining and withholding attacks in future work. Agents adjust their policies by attempting to optimize their expected reward under a prescribed utility function. For simplicity, we assume that the utility function is , so that we can optimize it via gradient descent. Drawing inspiration from the reinforcement learning literature, we define the expected reward as the -step exponential moving average () of the utility function:

where is the state observed by the agent at time . In this paper, is simply the agent’s local copy of the Chainweb arboretum at time and their current hash power distribution. From here on, we fix and let . Thus, given a utility function, an agent will update their state via the ordinary gradient descent iteration,

Using gradient descent also helps us enforce a locality constraint that ensures that miner strategies are purely local and not reliant on long-term historical results.
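A minimal sketch of this update rule follows. The smoothing factor, step size, and the simplex projection (clip-and-renormalize) are our own illustrative assumptions; the paper's exact iteration may differ.

```python
import numpy as np

def ema_reward(utilities, alpha=0.5):
    """Exponential moving average of observed utilities, used as the
    agent's expected-reward estimate. alpha is an assumed smoothing
    factor."""
    r = 0.0
    for u in utilities:
        r = alpha * u + (1 - alpha) * r
    return r

def gradient_step(x, grad, eta=0.1):
    """One gradient ascent step on the hash power distribution x,
    followed by projection back onto the probability simplex
    (clip negatives, renormalize). eta is an assumed step size."""
    x = np.clip(np.asarray(x, dtype=float) + eta * grad, 0.0, None)
    return x / x.sum()

# Shift hash power toward the chain with the larger utility gradient.
x = gradient_step(np.array([0.5, 0.5]), np.array([1.0, -1.0]))
```

The projection step keeps each iterate a valid hash power distribution, mirroring the constraint that the action space is a choice of hash power distribution.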

In order to program the agents within our simulation environment, we have developed a custom domain-specific language that aims to optimize the computation of utilities and policies for each simulated agent. This will be discussed in more detail in a subsequent paper by a subset of the authors. Using this language, we can specify which statistical signals trigger agents to take actions, allowing us to describe complex agent strategies and policies via a simple script. Our language also lets us treat agents as templates, so that the same strategy can be assigned to an entire set of nodes. This allows for easy specification of agent distributions, letting a user specify policies and their relative proportions.

II-B1 Utility Function

The authors of [25] show that in the continuous time and zero latency (e.g. no peer-to-peer network) setting, Markovian agents maximize their expected reward in Bitcoin by optimizing a utility function proportional to , where is the number of rewards that the agent has accrued at time and is the agent’s hash power expenditure at time . Since this paper assumes that is constant and uniform for all agents, we can simply define the single chain utility function to be:

where is the fraction of hash power placed on chain , is the th chain of the arboretum, and is the fraction of blocks in chain that agent produced (see equation 4). Since both values fall in , we’ve removed the issue of ensuring that the dynamic ranges of these quantities are compatible and will not under/overflow. In future work, we plan on allowing to be an arbitrary positive real number, representing energy costs, which will force us to add a scaling constant to this utility function.

In the multiple chain setting, we simply define the th miner’s utility function as:

III Chainweb Details

To demonstrate the proposed modeling techniques, we will analyze Chainweb [26], a parallelized Proof-of-Work network architecture designed to provide high security and high transaction throughput. Chainweb serves as an ideal use case for simulation because designers have to choose parameters and because it is difficult to formally prove that Chainweb is resistant to miner censorship. As there has not been a formal description of Chainweb in the literature, we will provide both an informal description (with visualization) and a formal description that proves that the parallelized system achieves liveness.

III-A Informal Description

Chainweb is a parallel chain Proof of Work architecture that combines hundreds or thousands of individually mined peer chains into a single network. Figure 1 depicts ten chains that are connected via the Petersen graph. The three-dimensional figures show the dependencies of blocks at different heights on blocks of lower heights. Each chain in the Chainweb braid maintains its own ledger, and blocks on each chain are mined with Nakamoto-style Proof of Work hashing. Peer chains incorporate each other’s Merkle roots to enforce a single super chain that offers an effective hash power that is the sum of the hash rate of each individual chain. Each chain in the network mines the same cryptocurrency which can be transferred cross-chain via a Proof-of-Burn verified with trustless Simple Payment Verification (SPV) at the smart contract level. The configuration of the Chainweb braid and the relationship between chains is fixed at launch and can be hard forked to larger configurations as throughput demands.

Fig. 1: Visualization of Chainweb Base Graph using the Petersen graph as base protocol configuration

The majority of miners are expected to mine the entire Chainweb braid, and as such each miner will maintain its own local version of Chainweb, the braid of which will be recombined with those of other miners as blocks are created.

III-B Formal Description


  • For any

  • is the set of directed trees with maximum height and maximum degree

  • . We can take this limit as there is a natural lattice: if and

  • : The space of admissible blocks

  • : The tree with one node that represents the genesis block (e.g. the empty blockchain). We define

  • If we have a graph , define the boundary operator by

Fig. 2: A visualization of the multiple local copies of Chainweb that are maintained by miners, here depicted as ovals, with blocks being generated from left to right and Merkle root references as arrows; this replication generates the arboretum structure. The red block represents two conflicting blocks in the same location, creating a fork which must be reconciled.

From single chain to multiple chains with constraints

In order to fully specify a single chain PoW system, one needs to define the following [1]:

  1. Network graph that describes how miners and users communicate

  2. Merkle trees for each miner that represent each miner’s copy of the blockchain.

  3. A process for dynamically growing and updating the Merkle tree

When we refer to trees, we mean block trees, where each vertex of the tree represents a block. In the multiple chain world, in order to balance security and throughput, we need to replace the single Merkle tree rooted at each miner with a set of Merkle trees and a set of constraints that do not let the different chains get too far out of sync with each other. To formalize this, we will first need to define the terms network graph, base graph, and arboretum:

Definition 1.

A network graph is a weighted, undirected graph , where each vertex represents a miner, each edge represents a peer-to-peer network connection between miners and , and maps an edge to a latency. An -base graph is an unweighted, undirected graph , where each vertex represents a separate chain and each edge represents a constraint.

In practice, the network graph changes as users join and leave the network, and the edges change in accordance with the peer-to-peer networking protocol. (Formally, if we think of all possible node sets as being a subset of , then we can view the networking protocol as a map , which takes a node set to an edge set.) In this work, we will consider the network graph static for the remainder of our analysis. We next need to define how our constraint set, represented by , affects the allowable Merkle trees.

Definition 2.

An arboretum is a triple where is a vertex set, is an edge set, and is an operator that maps a vertex to a tree. If is a predicate, a -arboretum is an arboretum such that holds for all . Finally, if is a subgraph of , we define a subarboretum centered on to be

An arboretum is distinct from a forest, as a forest does not prescribe connectivity between trees, whereas the edge set in an arboretum provides a natural graph distance between any two trees. Figure 2 depicts an arboretum where is the triangle graph and is a length-1 path with two vertices and one edge. The use of an arboretum is key to Chainweb’s performance claims, as the choice of graph can dramatically affect performance in a sharded PoW setting. More precisely, if an edge , this means that the th block of chain can only be added if the st block of has been added. Thus, a valid Chainweb arboretum will need to satisfy the following predicate for all :


The last predicate ensures that the block at the tip of at height includes a header from a block at height of . This condition is illustrated in figure 2, where we see the two blocks in red depending on both their predecessor on the same chain and the adjacent chain. By assumption, we have . Finally, given a predicate , we define a dynamic process on an arboretum that represents a single miner’s mining activity:

Data: An initial arboretum , where ,
while true do
       largest -subarboretum of ;
       Apply mining strategy and find chain that is admissible and mine a valid block on it (possibly causing a fork);
       Apply block to
end while
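The admissibility check at the heart of this loop can be sketched as follows. This is a toy version of the constraint, with heights stored as plain integers per chain; the data layout is our own assumption, not Kadena's implementation.

```python
def admissible_chains(heights, base_edges):
    """Chains on which the next block may be mined under the Chainweb
    predicate: chain c can grow from height h_c to h_c + 1 only if
    every neighbour of c in the base graph has already reached height
    h_c, so the new block can reference a header at the previous
    height. heights: dict chain -> tip height; base_edges: iterable
    of (c1, c2) pairs."""
    neighbours = {c: set() for c in heights}
    for a, b in base_edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    return {
        c for c in heights
        if all(heights[n] >= heights[c] for n in neighbours[c])
    }

# Triangle base graph: only the lagging chain may be extended.
cut_set = admissible_chains({0: 1, 1: 1, 2: 0}, [(0, 1), (1, 2), (0, 2)])
```

In the example, chains 0 and 1 are blocked because their neighbour (chain 2) has not yet reached their tip height, so the cut set is {2}; note that it is non-empty, consistent with Claim 1 below.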

In order to prove that the system achieves liveness, assuming that there is a non-zero amount of hash power on every shard, we need to show that this process can continue in an infinite loop without halting for Chainweb. This can happen, for instance, if there are no -subarboreta of . We will show that this cannot happen:

Claim 1.

If is connected, then for all , there exists a non-empty -subarboretum of

This claim is proved in the appendix. Let be the non-empty -subarboretum of . We call the cut set of the chain at time . The above claim says that the cut set is never empty, so we can always make progress, and the liveness of the set of chains reduces to the liveness of the individual chains. Chainweb thus consists of a miner graph and a series of arboreta, one for each vertex, denoted . We note for completeness that one can also frame Chainweb as an arboretum on the strong graph product of a network graph and base graph.

III-C Implementation of Chainweb

We represent Chainweb via a graph of miners, each of whom holds a copy of their local arboretum. We sample new blocks from a probabilistic model representing Chainweb that has the following parameters:

  • : Block frequency for chain

  • : Hash power distribution of miner

  • : Hash power of miner

  • : The total hash power of the system, defined as

  • : The fraction of hash power on chain , defined as:

We sample a new block on chain at time created by miner , , via the following generative model:
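A toy version of this generative model can be sketched as follows. We assume (as an illustration, not the paper's exact model) that each chain produces blocks as a Poisson process with its own rate and that the winning miner is drawn with probability proportional to its hash power on the producing chain.

```python
import random

def sample_next_block(rates, hash_power, rng):
    """Sample the next block event: each chain c emits blocks with
    exponential inter-arrival times at rate rates[c]; the next event
    is the chain with the minimum waiting time, and the block's miner
    is drawn proportionally to hash_power[m][c], miner m's hash power
    on chain c. Returns (wait_time, chain, miner)."""
    waits = {c: rng.expovariate(r) for c, r in rates.items()}
    chain = min(waits, key=waits.get)
    miners = list(hash_power)
    weights = [hash_power[m][chain] for m in miners]
    miner = rng.choices(miners, weights=weights, k=1)[0]
    return waits[chain], chain, miner

# Two chains, two miners, each miner concentrated on one chain.
rng = random.Random(1)
rates = {0: 1.0, 1: 2.0}
hash_power = {"a": {0: 1.0, 1: 0.0}, "b": {0: 0.0, 1: 1.0}}
dt, chain, miner = sample_next_block(rates, hash_power, rng)
```

Sampling this event repeatedly, subject to the admissibility predicate of §III-B, yields one trajectory of the web.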

Given this model and initial conditions (such as a choice of genesis block and random number seeds), our simulation platform can sample a trajectory of how Chainweb might evolve. Each miner’s policy is represented via a function that adapts and optimizes their own hash power distribution, , which allows the miner to affect the sampled trajectory. We also note that this model shares a number of similarities with multivariate Hawkes processes [27]. In all simulations performed in this paper, we make the following assumptions about the above parameters (as these likely reflect the choices in Chainweb upon launch [26]):

  • All chains have the same block production rate:

  • We use uniform hash power, e.g.

  • Our network graph will only contain miners (e.g. we are assuming that all transaction-generating participants are also miners)

  • All simulations in this paper route blocks using either -nearest neighbor routing (like Bitcoin) or use randomized gossip [22]

  • All latencies were sampled from the Bitcoin latency distribution in [28]

Finally, we note that we do not include an explicit difficulty adjustment in our simulations, even though it is supported within the platform. While changing the difficulty adjustment time window can have dramatic effects on the profitability of selfish mining [29], we wanted to reduce the noise in our statistics and plan on describing selfish mining optimization in a subsequent work.

IV Results

We performed two experiments to validate our simulation methodology and to estimate endogenous costs of security in Chainweb. Our goal is to illustrate how modeling adversarial and networking behavior can help choose design parameters such as and the choice of base graph that is used in the production client.

IV-A Network Analysis

Our first experiment aims to test how different network and base graph configurations lead to changes in system dynamics under different routing profiles. As Claim 1 only guarantees that we will eventually achieve liveness in Chainweb provided that all chains have non-trivial chain growth, our goal is to perform a numerical study of what realistic chain growth looks like and to find an estimate for how long it takes for a block to reach the whole network. Statistically estimating chain growth and the time to achieve liveness is crucial for deciding how to set parameters such as the block production rates, , as there is a natural trade-off between liveness time and block production rate [30]. Since Chainweb’s throughput is bounded by the diameter of the base graph [26], we aimed to use graphs that are the best known solutions to the degree-diameter problem. For these experiments, we used and , with the choice of shards due to the existence of a Moore graph, the Hoffman-Singleton graph, that solves the degree-diameter problem [31].

We will describe the statistics of interest. For a miner and chain , let be the height of the longest branch of chain that miner has seen. For notational convenience, we assume that for . Define the height function, as:


where is the expectation over all environments up to time (our Chainweb model is composed of Markov models and inherits a natural filtration that is defined on , regardless of initial environment). In practice, if we simulate trajectories until time , we can approximate this expectation as:
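In practice this estimator reduces to averaging height differences over sliding windows; a sketch, with an assumed uniform time discretisation and illustrative names:

```python
def height_derivative(heights, window, stride=1):
    """Windowed estimate of expected height change: heights is a list
    of chain heights sampled at regular times; for each start index t
    we take heights[t + window] - heights[t] and average over starts,
    de-noising variation across different initial conditions."""
    diffs = [
        heights[t + window] - heights[t]
        for t in range(0, len(heights) - window, stride)
    ]
    return sum(diffs) / len(diffs)
```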

Intuitively, this tells us what the expected height change of the blockchain is within a window of size . By averaging over windows that start at different times , we can de-noise the variations of this measurement and observe behaviors that are persistent across different initial conditions. Finally, let be the time that the th miner receives block on chain . Let be the random variable that is conditional on block being on the main branch of chain . Define the liveness time to be:


This measures the average time that it takes for a block that reaches the main chain to reach all miners. If we use randomized gossip, one expects this time to simply be the covering time of a random walk on the miner graph, which is controlled by the spectral gap of the miner graph [32]. However, since we are conditioning on blocks that make up the main chain, this number can be significantly greater than the covering time.
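A sketch of this statistic, under an assumed data layout (per-block dictionaries of receipt times, restricted to main-chain blocks):

```python
def liveness_time(receipt_times, main_chain_blocks):
    """Average time for a main-chain block to reach all miners:
    receipt_times[b][m] is the time miner m received block b; for
    each block on the main branch we take the spread between the
    first and last receipt, then average over such blocks."""
    spreads = []
    for b in main_chain_blocks:
        times = receipt_times[b].values()
        spreads.append(max(times) - min(times))
    return sum(spreads) / len(spreads)

# Two main-chain blocks with spreads of 3.0 and 1.0 seconds.
receipts = {0: {"a": 0.0, "b": 3.0}, 1: {"a": 1.0, "b": 2.0}}
avg = liveness_time(receipts, [0, 1])
```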

IV-A1 Verification Runs

We performed a few verification runs to show that our simulation replicates the behavior of Chainweb. For these runs, we set the base graph equal to the complete graph, . In this situation, since every shard depends on every other shard, we expect the time between successive blocks on the same chain to have super-linear scaling in the size of the base graph. We ran 100 simulations with 16,384 miners and a base graph with and calculated this time. In figure 3, we see super-linear scaling (note that both scales are logarithmic) and decreasing variance as we increase the base graph size. We also note that one expects some finite-size effects as , which is the likely cause of the decay in the rate of growth of this curve.

Fig. 3: This figure shows the average inter-arrival time (e.g. time between successive blocks on the same chain) as a function of with , the complete graph on vertices. We expect super-linear growth in this time, as each chain has to wait on all other chains before it can make progress. This continues to be true even up to .

IV-A2 Liveness Time

To assess liveness time (3) under realistic conditions, we constructed a realistic internet graph, the Barabási small world graph [33], used a randomized gossip protocol [22], and sampled latencies from the Bitcoin latency distribution [28]. In order to stress the system, we assumed a block production rate of . In figure 4, we see a plot of the expected degree of a Barabási graph versus liveness time. We used a miner graph of size 32,768 and the Hoffman-Singleton graph as a base graph to generate these figures. The different curves correspond to different numbers of simultaneous connections, which is how we quantify bandwidth; this corresponds to the upper bound on the number of neighbors a miner forwards to. We can see that by the time we get to 10 simultaneous connections, we are close to saturating the high-bandwidth limit, even for low-connectivity Barabási graphs. Data of this form helps protocol developers assess design decisions and choose peer-to-peer networking capabilities whose bandwidth-latency tradeoff matches the expectations of the consensus mechanism.

Fig. 4: In this figure, which plots expected latency (and confidence intervals, represented via error bands) against the expected degree of a Barabási graph, we see that by increasing the bandwidth used, we quickly saturate the graph.

IV-A3 Random Miner Graph

In order to assess how Chainweb performs under more extreme network graphs, we looked at a modified version of Erdős–Rényi random graphs such that if , then the edge is included with probability , and such that the edge is always included. We chose this ensemble because it ensures that our graph is connected (via the inclusion of the line edges ), but still inherits the Erdős–Rényi phase transition. In these experiments, we set the block production rate to be that of Bitcoin () so that these results could be interpreted in terms of real-world data. Moreover, we averaged over 10 instances of each random graph and used the Hoffman-Singleton graph as a base graph.
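Generating this ensemble is straightforward; the sketch below returns an edge set, with vertices labelled 0..n-1 and the guaranteed path edges (i, i+1) playing the role of the line edges described above.

```python
import random

def connected_er_graph(n, p, rng):
    """Erdős–Rényi-style random graph kept connected by always
    including the path edges (i, i+1); every other pair (i, j) is
    included independently with probability p. Returns a set of
    edges as ordered pairs (i < j)."""
    edges = {(i, i + 1) for i in range(n - 1)}
    for i in range(n):
        for j in range(i + 2, n):
            if rng.random() < p:
                edges.add((i, j))
    return edges

rng = random.Random(0)
edges = connected_er_graph(5, 0.0, rng)  # p = 0: only the path remains
```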

In Figure 5, we see computations of within one block interval. We see that for high values of (e.g. when the miner graph is more connected), there is a sharp uptick in expected height increase around a half block time (300s). As we decrease connectivity, we expect a more gradual height increase, as some of the graph will have received the latest block while others are still waiting to receive it and/or are on other forks. This is the expected behavior: when height increases propagate uniformly and rapidly throughout the network graph, we expect a sharp uptick in expected height change, as most participants receive the block (on average) at the same time (since miners have the same hash power). However, it is curious to note that the Erdős–Rényi phase transition [34] takes place around for our graphs, and yet we only observe the signs of the transition (e.g. the sharp derivative for ) far away from it. This suggests that the presence of forks can dramatically slow down block propagation and adds a non-linear latency effect. Miners could likely measure this effect via their deviation from these expected curves and potentially use this information to boost the rewards from selfish mining, akin to latency arbitrage in high-frequency trading.

Fig. 5: Computation of (denoted height derivative) for times within one block interval. The legend indicates the Erdős–Rényi probability that was used to generate .

IV-B Adversarial Censorship

In order to test how resilient Chainweb is to miners attempting to censor a chain by not allocating any hash power to , we used our domain-specific language to construct two agents that do the following:

  1. Honest Agent: Computes utility and gradient on all chains and updates its hash power distribution via gradient descent

  2. Adversarial Agent: Computes utility and gradient on all chains, updates its hash power via gradient descent, sets the hash power allocated to the censored chain to zero, and then redistributes the excess hash power equally.
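The adversarial agent's redistribution step can be sketched as follows; a toy version with assumed names, applied after the ordinary gradient update.

```python
import numpy as np

def censor_redistribute(x, censored):
    """Adversarial censorship step: zero the hash power placed on the
    censored chain and spread the excess equally across the remaining
    chains, keeping x a valid distribution on the simplex."""
    x = np.asarray(x, dtype=float).copy()
    excess = x[censored]
    x[censored] = 0.0
    others = [i for i in range(len(x)) if i != censored]
    x[others] += excess / len(others)
    return x

# Censor chain 0: its 0.4 of hash power is split over chains 1 and 2.
y = censor_redistribute([0.4, 0.3, 0.3], censored=0)
```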

We chose a sequence of twenty-five evenly-spaced adversary fractions and, for each of these, ran 256 simulations (with the Hoffman-Singleton base graph) of 30,000 blocks produced with a fraction of honest agents and censoring agents. For each miner and chain , we computed the fraction of blocks on the main branch of chain that were created by miner at block height , where . For simplicity, we assume that the block reward is constant and equal to one coin per block throughout our simulations (as all of our plots concern relative rewards, this choice of units does not affect our results). We define the following two quantities:


where is the set of honest miners and . In Figure 6, we see the plot for a single trajectory with , where we see that and that the honest curve is much smoother than the noisy adversary curve.
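The per-miner block fractions used here amount to a running count over the main branch; a minimal sketch (our own illustrative helper, not the paper's code) assuming the main branch is given as a height-ordered list of miner identifiers:

```python
def honest_block_fractions(blocks, honest):
    # blocks: miner ids of main-branch blocks, in height order
    # honest: set of honest miner ids
    # Returns the running fraction of honest-mined blocks
    # at each block height.
    fracs, count = [], 0
    for height, miner in enumerate(blocks, start=1):
        if miner in honest:
            count += 1
        fracs.append(count / height)
    return fracs
```

Plotting this running fraction for the honest set and its complement against block height gives curves of the kind shown in Figure 6.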

Fig. 6: In this image, we see that the expected fraction of honest blocks stabilizes rapidly, whereas the adversary's fraction fluctuates wildly, with excursions toward the stable point achieved by the honest agents. This data is taken from a single simulation.

In Figure 7, we see how the expected difference in rewards and risk-adjusted rewards varies as a function of the adversary fraction. We measure risk-adjusted rewards using the Sharpe Ratio, a common measure of trading-strategy performance, computed as the mean reward (over all sampled trajectories) divided by the standard deviation of the reward. As the plots illustrate, honest, profit-maximizing agents beat the censoring agents until the adversary fraction reaches 50-60%, at which point the censoring adversary is increasingly favored.
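The Sharpe Ratio as used above, applied per strategy over sampled trajectories, can be sketched directly (the function name and zero-variance guard are our own conventions):

```python
import statistics

def sharpe_ratio(rewards):
    # Mean reward over all sampled trajectories divided by the
    # (population) standard deviation of the reward; a
    # zero-variance reward stream is treated as infinitely
    # favorable risk-adjusted performance.
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards)
    return mu / sigma if sigma > 0 else float("inf")

# Example: rewards collected from three simulated trajectories.
print(sharpe_ratio([1.0, 2.0, 3.0]))
```

Comparing `sharpe_ratio` of the honest and censoring reward samples at each adversary fraction yields the risk-adjusted curves in the bottom panel of Figure 7.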

Fig. 7: In the top figure, we see the relative expected reward between expected adversary rewards and honest rewards. We see that the honest miner advantage decays quickly from 7% to 1%, although honesty appears to be the dominant strategy. On the other hand, the risk-adjusted returns, measured via the Sharpe Ratio, exhibit a stronger transition around 80%.

From this data, we conclude that it appears difficult for miners aiming to censor a chain to do so successfully, as long as there are enough rational, profit-maximizing miners in the system.

V Conclusions and Further Work

We explored how adversarial, agent-based simulation can be used to assess claims and measure networking behavior for a blockchain protocol. Our techniques helped us assess different design choices, such as how the choice of base graph affects liveness time, and we were able to statistically estimate the rewards sacrificed by an adversarial, censoring agent. These techniques helped us evaluate the safety of Chainweb and statistically justify that degree-diameter-minimizing base graphs prevent censorship attacks. As our experiments with liveness time show, one can use these techniques to optimize peer-to-peer networking performance based on various metrics, such as the number of simultaneous connections in Figure 4. Finally, we note that more complicated consensus protocols, such as Proof-of-Stake and Proof-of-Space, tend to have a variety of other parameters (slashing, market-making or auction parameters, etc.), and these can be optimized in a similar fashion, given a certain mixture of honest and adversarial agents. Our future work will include an analysis of block withholding and selfish mining, as well as analyzing Chainweb (once it is live) by incorporating exogenous data to estimate the true cost of security and selfish mining, conditional on the existence of active fiat and derivatives markets.

VI Acknowledgments

The authors would like to thank Joseph Bonneau, Yi Sun, Emily Pillmore, and the anonymous reviewers for insightful and constructive comments.


  • [1] Rafael Pass, Lior Seeman, and Abhi Shelat. Analysis of the blockchain protocol in asynchronous networks. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 643–673. Springer, 2017.
  • [2] Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, and Nickolai Zeldovich. Algorand: Scaling byzantine agreements for cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles, pages 51–68. ACM, 2017.
  • [3] Timo Hanke, Mahnush Movahedi, and Dominic Williams. Dfinity technology overview series, consensus system. arXiv preprint arXiv:1805.04548, 2018.
  • [4] Mahdi Zamani, Mahnush Movahedi, and Mariana Raykova. Rapidchain: scaling blockchain via full sharding. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 931–948. ACM, 2018.
  • [5] Eleftherios Kokoris-Kogias, Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Ewa Syta, and Bryan Ford. Omniledger: A secure, scale-out, decentralized ledger via sharding. In 2018 IEEE Symposium on Security and Privacy (SP), pages 583–598. IEEE, 2018.
  • [6] Sandrine Jacob Leal, Mauro Napoletano, Andrea Roventini, and Giorgio Fagiolo. Rock around the clock: An agent-based model of low-and high-frequency trading. Journal of Evolutionary Economics, 26(1):49–76, 2016.
  • [7] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
  • [8] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • [9] Igor Kotenko, Alexey Konovalov, and Andrey Shorov. Agent-based modeling and simulation of botnets and botnet defense. In Conference on Cyber Conflict. CCD COE Publications. Tallinn, Estonia, pages 21–44, 2010.
  • [10] J Doyne Farmer and Duncan Foley. The economy needs agent-based modelling. Nature, 460(7256):685, 2009.
  • [11] George Grozev, David Batten, Miles Anderson, Geoff Lewis, John Mo, and Jack Katzfey. Nemsim: Agent-based simulator for australia’s national electricity market. In SimTecT 2005 Conference Proceedings. Citeseer, 2005.
  • [12] Steve Yang, Mark Paddrik, Roy Hayes, Andrew Todd, Andrei Kirilenko, Peter Beling, and William Scherer. Behavior based learning in identifying high frequency trading strategies. In Computational Intelligence for Financial Engineering & Economics (CIFEr), 2012 IEEE Conference on, pages 1–8. IEEE, 2012.
  • [13] Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Perolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems, pages 4190–4203, 2017.
  • [14] Johannes Göbel, Holger Paul Keeler, Anthony E Krzesinski, and Peter G Taylor. Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay. Performance Evaluation, 104:23–41, 2016.
  • [15] Kei-Leo Brousmichc, Andra Anoaica, Omar Dib, Tesnim Abdellatif, and Gilles Deleuze. Blockchain energy market place evaluation: an agent-based approach. In 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pages 321–327. IEEE, 2018.
  • [16] Michele Bottone, Franco Raimondi, and Giuseppe Primiero. Multi-agent based simulations of block-free distributed ledgers. 2018.
  • [17] Joseph Bonneau. Why buy when you can rent? In International Conference on Financial Cryptography and Data Security, pages 19–26. Springer, 2016.
  • [18] LM Goodman. Tezos—a self-amending crypto-ledger white paper. URL: https://www.tezos.com/static/papers/white_paper.pdf, 2014.
  • [19] Ben Fisch. Poreps: Proofs of space on useful data. Technical report, Cryptology ePrint Archive, Report 2018/678, 2018. https://eprint.iacr.org …, 2018.
  • [20] Nick Hynes, David Dao, David Yan, Raymond Cheng, and Dawn Song. A demonstration of sterling: a privacy-preserving data marketplace. Proceedings of the VLDB Endowment, 11(12):2086–2089, 2018.
  • [21] Petar Maymounkov and David Mazieres. Kademlia: A peer-to-peer information system based on the xor metric. In International Workshop on Peer-to-Peer Systems, pages 53–65. Springer, 2002.
  • [22] Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. IEEE transactions on information theory, 52(6):2508–2530, 2006.
  • [23] Yehuda Lindell. How to simulate it–a tutorial on the simulation proof technique. In Tutorials on the Foundations of Cryptography, pages 277–346. Springer, 2017.
  • [24] Amitanand S Aiyer, Lorenzo Alvisi, Allen Clement, Mike Dahlin, Jean-Philippe Martin, and Carl Porth. Bar fault tolerance for cooperative services. In ACM SIGOPS operating systems review, volume 39, pages 45–58. ACM, 2005.
  • [25] Swapnil Dhamal, Tijani Chahed, Walid Ben-Ameur, Eitan Altman, Albert Sunny, and Sudheer Poojary. A stochastic game framework for analyzing computational investment strategies in distributed computing with application to blockchain mining. arXiv preprint arXiv:1809.03143, 2018.
  • [26] Will Martino, Monica Quaintance, and Stuart Popejoy. Chainweb: A proof-of-work parallel-chain architecture for massive throughput. January 2018.
  • [27] Paul Embrechts, Thomas Liniger, and Lu Lin. Multivariate hawkes processes: an application to financial data. Journal of Applied Probability, 48(A):367–378, 2011.
  • [28] Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert Van Renesse, and Emin Gün Sirer. Decentralization in bitcoin and ethereum networks. arXiv preprint arXiv:1801.03998, 2018.
  • [29] Cyril Grunspan and Ricardo Pérez-Marco. On profitability of selfish mining. arXiv preprint arXiv:1805.08281, 2018.
  • [30] Yonatan Sompolinsky and Aviv Zohar. Secure high-rate transaction processing in bitcoin. In International Conference on Financial Cryptography and Data Security, pages 507–527. Springer, 2015.
  • [31] Mirka Miller and Jozef Sirán. Moore graphs and beyond: A survey of the degree/diameter problem. The electronic journal of combinatorics, 1000:DS14–Dec, 2005.
  • [32] László Lovász et al. Random walks on graphs: A survey. Combinatorics, Paul Erdős is Eighty, 2(1):1–46, 1993.
  • [33] Réka Albert, Hawoong Jeong, and Albert-László Barabási. Internet: Diameter of the world-wide web. Nature, 401(6749):130, 1999.
  • [34] Remco Van Der Hofstad. Random graphs and complex networks. Available on http://www.win.tue.nl/rhofstad/NotesRGCN.pdf, 11, 2009.

VII Appendix: Proof of Claim 1


We will prove this by induction. By construction, is a -arboretum since and . Now assume the induction hypothesis that has a -subarboretum and suppose for a contradiction that does not have a -arboretum. We first note that if is not satisfied, but all of the height conditions are satisfied, then one can start a fork at or that forces compatibility. By assumption and are admissible, so this is possible. Thus, we can assume that the compatibility condition is satisfied. Since has a -subarboretum, was non-empty. Let be the index of the block mined in the second step of the loop. From the definition of , this means that and this set is non-empty since is connected. We have two cases:

  1. : This implies that , so is admissible in the next round

  2. : This implies that in the next round, so both are admissible

Thus is admissible and there must be a non-empty -arboretum. ∎