Nonlinear Blockchain Scalability: a Game-Theoretic Perspective

01/22/2020 ∙ by Lin Chen, et al. ∙ University of Houston, Yahoo! Inc.

Recent advances in blockchain research have been made in two important directions. One is refined resilience analysis utilizing game theory to study the consequences of selfish behaviors of users (miners), and the other is the extension from a linear (chain) structure to a non-linear (graphical) structure for performance improvements, such as IOTA and Graphcoin. The first question that comes to mind is what improvements a blockchain system would see by leveraging these new advances. In this paper, we consider three major metrics for a blockchain system: full verification, scalability, and finality-duration. We establish a formal framework and prove that no blockchain system can achieve full verification, high scalability, and low finality-duration simultaneously. We observe that classical blockchain systems like Bitcoin achieve full verification and low finality-duration, while Harmony and Ethereum 2.0 achieve low finality-duration and high scalability. As a complement, we design a non-linear blockchain system that achieves full verification and scalability. We also establish, for the first time, the trade-off between scalability and finality-duration.







1. Introduction

Blockchain technology provides a mechanism for untrusted parties to carry out transactions without a fully trusted central system. The basic idea behind this is that, instead of having trust in a centralized system or any other specific participant, each participant chooses to trust the majority of the participants and accepts the outcome achieved through consensus among them.

One major reason that hinders the adoption of blockchain is scalability (Vukolić, 2015). For example, the Bitcoin network can only process fewer than 10 transactions per second on average (França, 2015), while typical payment systems like Visa can process thousands of transactions per second.

Recently, a variety of approaches have been proposed to address the scalability issue. Most of them follow the general framework of divide and conquer, e.g., Zilliqa (The Zilliqa Team, 2017), Harmony (The Harmony Team, 2018), and Ethereum 2.0 (The Ethereum Team, 2019), and use a sharding scheme that allows transactions to be processed by a subgroup of nodes (a sharding committee). A sharding scheme usually has a critical issue in terms of resilience, as the correctness of each transaction now depends solely on a subgroup of voters. Consequently, if common consensus protocols like Proof-of-Work (PoW) or Byzantine Fault Tolerance (BFT) are used within subgroups, then the fraction of malicious nodes within every subgroup must not exceed 1/2 or 1/3, which is a significantly stronger assumption than that of a standard blockchain system, which only requires that the fraction of malicious nodes among all nodes does not exceed 1/2 or 1/3. We remark that both Harmony and Ethereum 2.0 claim that if subgroups are generated in a perfectly randomized way, then the percentage of honest nodes within each subgroup is almost the same as their percentage in the whole group of nodes; however, this requires perfect distributed random number generation as a separate procedure, which adds an assumption on the security of that procedure.
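To make the resilience concern concrete, the following sketch (parameter values are mine, not from the paper) computes the probability that a uniformly sampled committee ends up with more than 1/3 malicious members, modeling each member as independently malicious with the global malicious fraction:

```python
from math import comb

def bad_committee_prob(size: int, frac_malicious: float, threshold: float = 1/3) -> float:
    """Probability that strictly more than `threshold` of a randomly
    sampled committee of `size` members is malicious, assuming each
    member is malicious independently with probability `frac_malicious`."""
    cutoff = int(threshold * size)  # committee is bad if strictly more than cutoff
    return sum(
        comb(size, k) * frac_malicious**k * (1 - frac_malicious)**(size - k)
        for k in range(cutoff + 1, size + 1)
    )

# Larger committees concentrate around the global fraction,
# so the failure probability drops with committee size.
p_small = bad_committee_prob(50, 0.25)
p_large = bad_committee_prob(200, 0.25)
```

This illustrates why sharding designs need large (and honestly sampled) committees: with a small committee, the chance that the local malicious fraction exceeds the protocol's tolerance is non-negligible even when the global fraction is comfortably below it.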

If we aim to guarantee that every transaction is correctly executed while relying only on the standard assumption that the majority of the nodes are honest, then it is natural to require every transaction to be verified by all the nodes. Consequently, it is a natural question whether scalability is achievable at all, as any divide-and-conquer based solution appears to inevitably reduce the total number of verifications received by a transaction. A “non-linear” blockchain structure recently introduced by IOTA (Popov, 2016) and Graphcoin (Boyen et al., 2017) brings hope. The basic idea is to allow blocks to be connected as a directed acyclic graph (DAG) instead of a chain. Such a non-linear structure implements a divide-and-conquer approach implicitly by allowing multiple blocks to be appended simultaneously, as a general graph can be extended in multiple directions. Meanwhile, if we treat different growing directions as soft forks or branches, then they have the possibility (depending on system parameters) to “merge” again in the future (see Figure 2, where two sequences of blocks branch and later meet at a common block). Therefore, if a node that tries to append a new block is required to verify a few previous blocks, a block may still eventually be verified by all the nodes, albeit with a delay.
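As a toy illustration of this merging behavior (the data structures here are my own encoding, not the paper's), the sketch below builds a small DAG in which two branches grow from a common block and are later joined by a block that references a tip of each branch; a backward traversal from the joining block then reaches every earlier block:

```python
# Each block records the blocks it references (its parents in the DAG).
blocks: dict[str, list[str]] = {
    "genesis": [],
    "a1": ["genesis"], "a2": ["a1"],   # branch A
    "b1": ["genesis"], "b2": ["b1"],   # branch B
    "merge": ["a2", "b2"],             # one block joins both branches
}

def ancestors(block: str) -> set[str]:
    """All blocks reachable by following references backward."""
    seen, stack = set(), list(blocks[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(blocks[b])
    return seen

# The merging block transitively reaches every block in both branches,
# which is why delayed full verification becomes possible in a DAG.
```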

On a high level, there are three crucial metrics involved in a general blockchain system: full verification, scalability, and finality-duration. In a nutshell, full verification requires every transaction to be verified by essentially all the nodes (which ensures resilience under the standard assumption that the majority of the nodes follow the protocol); scalability means the system throughput, i.e., the total number of transactions executed per unit of time, is proportional to the total number of participating nodes; and finality-duration is the delay in reaching consensus on the correctness of the execution of each transaction. We give precise definitions later.

Classical blockchain systems like Bitcoin achieve full verification and low finality-duration, but not scalability. This is because Bitcoin requires every block, and hence the transactions within a block, to be verified by all the nodes; meanwhile, it has a constant finality-duration because every block is finalized after a constant number of blocks are appended afterward. However, it does not scale, as an increase in the number of nodes does not allow the system to handle more transactions per unit of time, as has been pointed out in many prior papers. On the other hand, blockchain systems like Harmony (The Harmony Team, 2018) and Ethereum 2.0 (The Ethereum Team, 2019) achieve constant finality-duration and scalability, but not full verification.

In this paper, we provide a view of the relationships between full verification, low finality-duration, and scalability. We also consider the potential trade-offs among these three metrics. Our detailed contributions are summarized as follows:

Our contributions. Based on a counting argument, we show that it is impossible to achieve full verification, low finality-duration, and scalability simultaneously.

Given our impossibility result, and the fact that (i) Bitcoin achieves full verification and low (constant) finality-duration, but not scalability, and (ii) Harmony and Ethereum 2.0 achieve low (constant) finality-duration and scalability, but not full verification, it is natural to ask whether there exists a blockchain system that satisfies both full verification and scalability. We prove that, interestingly, a modified version of a non-linear blockchain system, in line with the well-known IOTA system, can achieve both full verification and scalability at the cost of a high (non-constant) finality-duration. We further employ a game-theoretic analysis to characterize the trade-off between scalability and finality-duration for such a non-linear blockchain system. Informally speaking, the following hold simultaneously for the non-linear blockchain system:

  • a number of new blocks proportional to the block-generation rate are generated per unit of time on average;

  • after a number of time units proportional to that rate, with very high probability, each block will be verified by all users in the system.

Here the block-generation rate is a system parameter that can be set suitably at the genesis block. When it is set to a constant, the non-linear system degenerates to a linear system with a fixed block generation rate that is independent of the number of nodes in the system, while the delay is a constant; this coincides with the classical Bitcoin system. On the other hand, the rate can grow linearly in the number of nodes, in which case the system is fully scalable, albeit that only a sufficiently long (linear) delay can ensure full verification. However, if we let the rate grow linearly to enforce scalability and meanwhile force the delay to be a constant, then full verification cannot be guaranteed.

We remark that the big-O notation in our statements hides a constant that is roughly the average time for a block to be generated; that is, we measure the delay in terms of the number of blocks. Therefore, Bitcoin is considered as having a low finality-duration, as its delay is a constant number of blocks. Our result does not conflict with prior research that complains about the “high” finality-duration of Bitcoin due to the long time it takes to generate a single block. Research that tries to decrease the block generation time is orthogonal to this paper: for example, if a lighter version of PoW can be used in the existing Bitcoin system, then it can also be used directly in our non-linear blockchain system, while our impossibility result, as well as the trade-off between finality-duration and scalability, remains the same.

2. Related Works

The study of e-cash systems dates back to 1983 (Chaum, 1983; Sander and Ta-Shma, 1999). However, all such systems require a centrally or quasi-centrally controlling authority. A well-known exception, Bitcoin, was introduced by Nakamoto (Nakamoto, 2008) in 2008; it uses a public ledger known as a blockchain to record transactions carried out between users. Following this line of research, various alternative blockchain-based transaction systems have been proposed (Triantafyllidis and Oskar van Deventer, 2016; Miers et al., 2013; Sasson et al., 2014), further improving the performance and security of Bitcoin as well as extending the system to applications beyond transactions (e.g., smart contracts). We refer the readers to several surveys on blockchain systems (Tschorsch and Scheuermann, 2016; Conti et al., 2018; Khalilov and Levi, 2018; Salman et al., 2018; Ali et al., 2018; Yang et al., 2019; Liu et al., 2019). In particular, (Tschorsch and Scheuermann, 2016) provides a comprehensive introduction to the Bitcoin network, (Conti et al., 2018; Khalilov and Levi, 2018; Salman et al., 2018) focus on security and privacy results on blockchain, and (Ali et al., 2018) focuses on applications of blockchain. The survey most relevant to this paper is (Liu et al., 2019), which summarizes recent results on game-theoretic studies of blockchain. However, most existing game-theoretic research focuses primarily on the traditional linear blockchain system; only a very recent paper by Popov et al. (Popov et al., 2019) gives the first game-theoretic analysis of IOTA. Their result, however, does not establish the trade-off between scalability and finality-duration.

2.1. Classical and Non-linear Blockchain

Chain-structured blockchain. Most of the existing blockchain systems, e.g., Bitcoin, Ethereum, Hyperledger, follow the classical structure where blocks form a chain as illustrated by Fig 1.

Figure 1. Chain-structured blockchain. White squares form the main chain, and gray squares form the side chains that are discarded eventually.

Non-linear (graph-structured) blockchain. Popov introduced the concept of the tangle (Popov, 2016), which allows a blockchain to adopt a directed acyclic graph (DAG) architecture. We summarize the abstract model of a non-linear blockchain in Section 3. Here we briefly review IOTA, the most well-known non-linear blockchain system so far. On a high level, IOTA allows each transaction to be an individual node linked in the distributed ledger; we may interpret a transaction as a block in such a system. In the tangle, each user selects one transaction from the pool as well as two previous blocks (transactions) in the system. The user verifies these two previous transactions and mines a new block referring to them. This new block (transaction) is then broadcast to the tangle network. Figure 2 gives a simple example of a non-linear blockchain.

Figure 2. A non-linear blockchain. White squares are verified transactions/blocks.

3. The Abstract Model

We describe an abstract model of a non-linear blockchain which is general enough to incorporate existing well-known non-linear blockchain systems like IOTA and Graphcoin.

A non-linear blockchain NLB is defined by a quadruple of rules:

  • The first defines how to build and add a new block to the blockchain. Since we are considering a non-linear blockchain, this rule allows multiple blocks to be added simultaneously.

  • The second defines how to check a block, including validity verification, such as whether the block has the correct format and whether the transactions included in the block are valid, as well as whether the block is finalized.

  • The third defines how the reward is assigned to a user who adds a new block to the DAG. An NLB needs to encourage users to participate in the construction of the blockchain by rewarding those who add new blocks.

  • The fourth defines the rules to eliminate conflicting blocks. As in a linear blockchain, multiple participants may hold different local copies of the blockchain, and this rule determines which version should be kept.

Next, we provide formal definitions of the three metrics of a blockchain system mentioned earlier.

Definition 1 (Full verification).

A blockchain system satisfies the property of full verification if every block is verified by all the nodes in the system before it is finalized.

If a blockchain system satisfies full verification, then resilience follows directly from standard assumptions on the percentage of honest nodes among all nodes; e.g., if the blockchain uses PoW or BFT as the consensus protocol, then it is resilient as long as the majority or 2/3 of the nodes follow the protocol.

Definition 2 (Scalability).

The throughput of a blockchain system is the number of blocks that can be added to the system in a fixed period of time. A blockchain system scales with the number of nodes if its throughput grows unboundedly as the number of nodes grows. In particular, a blockchain system fully scales with the number of nodes if its throughput grows linearly in the number of nodes.

It should be clear that the definition of scalability or full scalability does not depend on the length of the time period chosen for the throughput. It captures the possibility of speeding up block generation with more participating nodes; consequently, classical blockchain systems like Bitcoin do not scale at all.

Definition 3 (Finality-duration).

The finality-duration of a blockchain system is the time elapsed between the moment a block is appended and the moment it receives full verification.

We say the finality-duration of a blockchain system is low (or constant) if it is independent of the number of nodes in the system. Classical blockchain systems like Bitcoin have a low finality-duration: after a fixed number of blocks are appended, all the nodes follow the main chain, so blocks on the main chain receive full verification.

4. Impossibility Result

Theorem 1.

There does not exist a blockchain system that simultaneously satisfies (i) scalability; (ii) low finality-duration; and (iii) full verification.

The proof follows from a counting argument on the total number of verifications. See full version.

5. Achieving Full Verification with Trade-off between Scalability and Finality-duration

As mentioned before, classical blockchain systems like Bitcoin achieve full verification and low finality-duration at the cost of scalability, while Harmony and Ethereum 2.0 achieve scalability and low finality-duration at the cost of full verification. In this section, we complete the picture by constructing a non-linear blockchain system, a modified version of IOTA, and show that it achieves full verification and scalability at the cost of finality-duration. We further characterize the trade-off between scalability and finality-duration, allowing the system to balance these two parameters for different applications.

5.1. Non-linear Blockchain (NLB) Construction

We first propose a concrete construction of an NLB that achieves both security and scalability under the agent model. Without loss of generality, we assume that each block includes only one transaction; in the following, we use the terms “block” and “transaction” interchangeably. We first define some concepts.

Definition 4 (Block distance, descendant, and ancestor).

Given two blocks, we define the distance from the first to the second as the length of the shortest directed path from the first to the second; if no such directed path exists, we define the distance to be infinite. If the distance from one block to another is finite, we say the former is a descendant of the latter, and the latter is an ancestor of the former. For a block and each distance value up to a given depth parameter, we consider the set of its descendants at exactly that distance.
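Definition 4 amounts to a shortest-path computation on the reference DAG. The sketch below (my own encoding, using `float("inf")` for unreachable pairs) follows edges from a block to the blocks it references, so a finite distance from one block to another means the former is a descendant of the latter:

```python
from collections import deque

# parents[b] = blocks that b directly references (earlier in time).
parents = {"g": [], "x": ["g"], "y": ["g"], "z": ["x", "y"]}

def dist(src: str, dst: str) -> float:
    """Length of the shortest directed path from src to dst, following
    references backward in time; inf if dst is not an ancestor of src."""
    frontier = deque([(src, 0)])
    visited = set()
    while frontier:
        b, k = frontier.popleft()
        if b == dst:
            return k
        for p in parents[b]:
            if p not in visited:
                visited.add(p)
                frontier.append((p, k + 1))
    return float("inf")
```

The descendant sets of the definition are then the blocks whose distance to a given block equals each level up to the depth parameter.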

The new NLB is constructed as follows:

  • Building and adding blocks. The new NLB assumes that there is a pool of new transactions from which a user can select one to construct a new block, which refers to two previous blocks. (Our analysis in this paper also works if a new block refers to any fixed constant number, at least 2, of previous blocks; for ease of presentation, we take this number to be 2 throughout this paper.) The user then does lightweight mining to fix this information in the newly constructed block. The user also verifies the ancestors of the newly built block up to a bounded distance, where the verification depth is a pre-defined system parameter that determines how many previous blocks the producer of a new block should verify.

  • Checking blocks. To check a block, the algorithm first checks whether the block format is correct, including the verification of the mining outcome. The algorithm also checks whether the block is finalized, which is determined by comparing a measure of the support the block has accumulated against a pre-defined system threshold: once the measure exceeds the threshold, the block is finalized.

  • Reward assignment. Each block has a reward value, and the system imposes an upper bound on the maximal reward offered by a transaction, so that the largest and smallest rewards among transactions (and blocks) differ by a bounded factor. The producer of a new block also receives rewards from previous blocks. Specifically, each block is associated with a uniform verification reward vrf, which is divided into one part per distance level up to the verification depth. For each level, the corresponding part of a block's verification reward is evenly distributed among all of its descendants at that distance. Note that the reward is only collected when the new block is finalized.

  • Conflict elimination. The constructed NLB adopts the largest-weighted-descendants (LWD) principle to eliminate disagreement. The weight of a block is determined by its set of descendants. If there are two conflicting blocks, then the one with the larger weight prevails; that is, users abandon the other block together with all its descendants, in the sense that a new block will not refer to any of these blocks.
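A minimal sketch of the reward and conflict rules, with concrete choices of my own where the text leaves the formulas abstract: the verification reward vrf is split into one equal part per distance level up to the verification depth, each part shared evenly among the descendants at that level, and conflicts are resolved in favor of the block with more descendants (the LWD rule):

```python
def verification_reward(vrf: float, depth: int,
                        descendants_at: dict[int, list[str]]) -> dict[str, float]:
    """Share of a block's verification reward earned by each descendant.
    Assumed split: vrf is divided into `depth` equal parts, one per
    distance level; the part for level i is shared evenly among the
    descendants at distance exactly i."""
    shares: dict[str, float] = {}
    part = vrf / depth
    for i in range(1, depth + 1):
        level = descendants_at.get(i, [])
        for b in level:
            shares[b] = shares.get(b, 0.0) + part / len(level)
    return shares

def lwd_winner(weight_a: int, weight_b: int) -> str:
    """Largest-weighted-descendants rule: of two conflicting blocks,
    the one with more descendants prevails."""
    return "a" if weight_a >= weight_b else "b"
```

For instance, with vrf = 6 and depth 2, two descendants at distance 1 each earn 1.5, and a single descendant at distance 2 earns the full level-2 part of 3.0.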

5.2. Scalability and Finality-duration Analysis

We first give a high-level summary of the workflow of the proposed NLB system. Transactions are generated over time and form a pool. Each transaction is associated with a distinct transaction reward and a fixed verification reward vrf. Each time, a miner selects one transaction from the pool and appends a block, which refers to two previous blocks. Here the miner needs to decide two things: (i) which transaction to include, and (ii) which two previous blocks to refer to. As we assume that miners are rational players, they will strategically make their decisions to maximize their profits, and this section is devoted to analyzing the scalability and security of the system under an arbitrary Nash equilibrium.

We formalize the problem as follows. Let the pool consist of a number of transactions, each with its transaction reward, and let there be a number of miners, each with its computational power. As mentioned, each miner mines a new block by including one transaction from the pool. If multiple miners, say the miners in some subset, all choose the same transaction, then they compete and only one of them succeeds; the probability that a particular miner in the subset succeeds is proportional to its computational power within the subset. If, however, all miners choose different transactions, then each of them can append a new block.
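The competition model can be sketched as follows (names are mine): when several miners pick the same transaction, exactly one wins, and a natural instantiation, consistent with PoW-style mining, is that each competitor wins with probability proportional to its computational power within the competing subset:

```python
def win_probability(powers: dict[str, float], competitors: set[str], miner: str) -> float:
    """Probability that `miner` wins the competition for a transaction
    chosen by every miner in `competitors`, assuming the winner is drawn
    proportionally to computational power (a modeling assumption)."""
    if miner not in competitors:
        return 0.0
    total = sum(powers[m] for m in competitors)
    return powers[miner] / total

powers = {"m1": 3.0, "m2": 1.0, "m3": 4.0}
# If m1 and m2 collide on the same transaction, m1 wins 3/4 of the time.
```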

In the following section, we will analyze the scalability and finality-duration of the constructed NLB separately.

5.2.1. Scalability

For scalability, we are interested in how many different transactions from the pool can be selected by the miners simultaneously: the more different transactions are chosen, the higher the scalability. By considering the situation where miners choose transactions simultaneously, we are considering the worst case, because if miners select transactions at different times, later ones may be able to avoid conflicts with earlier ones. Given the number of available transactions and the number of miners, the following Theorem 1 implies that the system is scalable even in the worst case: when there are sufficiently many transactions, the throughput is proportional to the number of miners, up to a factor governed by a system parameter of the building rule. By controlling this parameter, we can control the scalability of the system. In particular, when the parameter is set to a constant, the system fully scales with the number of nodes.
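For intuition on why the number of distinct transactions chosen tracks the number of miners, the following sketch computes the expected number of distinct transactions when miners choose uniformly at random among the pool (a symmetric special case of the mixed strategies analyzed below, not the equilibrium itself): each transaction is selected with probability 1 - (1 - 1/T)^n.

```python
def expected_distinct(num_txs: int, num_miners: int) -> float:
    """Expected number of distinct transactions chosen when each of
    `num_miners` miners picks one of `num_txs` transactions uniformly
    and independently (a balls-into-bins calculation)."""
    p_chosen = 1.0 - (1.0 - 1.0 / num_txs) ** num_miners
    return num_txs * p_chosen

# With a large pool, collisions are rare and the expectation
# approaches the number of miners.
```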

The remainder of this subsection is devoted to proving Theorem 1.

Theorem 1.

With probability at least , the number of blocks mined by miners in an arbitrary Nash equilibrium is at least for some universal constants .

Notice that a Nash equilibrium always exists if mixed strategies are allowed (Nash, 1951). Towards the proof, we introduce some notation. For simplicity, let all the transaction rewards be equal; by the design of our system, they can differ by at most a bounded factor. Note that the strategy of a miner is to select one transaction. We consider the general mixed strategy of a miner, where he/she can specify a probability for each transaction.

Consider an arbitrary Nash equilibrium and let be the strategy of miner in the equilibrium, where is the probability that he chooses transaction . It is obvious that for any . Let be the -random variable that indicates whether miner chooses transaction . Then with probability and with probability .

Consider the above Nash equilibrium. Intuitively, if only a small number of transactions are selected, then the miners must have concentrated their probabilities on a few transactions. Therefore, to show that a sufficient number of distinct transactions are selected in expectation, we need to show that the miners distribute their probabilities fairly among the transactions, as implied by the following lemma.

Lemma 2.

If there exists some transaction such that then for every transaction it holds that .


Suppose on the contrary that the lemma is not true, that is, there exist two transactions such that one receives a total amount of probability above the threshold while the other receives less. Consider the set of miners that choose the over-chosen transaction with positive probability. We show in the following that one such miner can change his strategy to get a strictly higher profit, contradicting the fact that this is a Nash equilibrium, and consequently the lemma is proved. More precisely, we argue that this miner can obtain a strictly larger expected profit by increasing his probability of choosing the under-chosen transaction and meanwhile decreasing his probability of choosing the over-chosen one.

The expected profit that miner can get from transaction and using his current strategy is equal to

where for , we have

If changes his strategy by choosing with the probability of and choosing with the probability of , then the expected profit he can get from and is equal to where

and is the - random variable that takes the value with the probability of .

In the following we show that

which implies the correctness of the lemma. We prove the following two claims.

Claim 1.

Claim 2.

Given the two claims and the fact that , follows and the lemma is proved. The proofs of the two claims are quite involved; please refer to the full version for details. ∎

Lemma 2 shows that: Either no transaction has received a total amount of probability that is larger than , or every transaction receives a total amount of probability at least . Note that the two cases are not mutually exclusive. Nevertheless, we show in the following that in both cases, miners will select sufficiently many transactions with very high probability. The proofs of the following lemmas are mathematically involved. Please refer to the full version for details.

Lemma 3.

Let such that . Let be numbers such that , . Let . Then we have

Lemma 4.

If holds for every transaction , and , then the probability that only different transactions are selected by miners is at most .

Note that if , . As miners complete at least transaction, the lemma is trivially true. Now we consider the other case and have the following.

Lemma 5.

If holds for every transaction , then the probability that no more than transactions are selected is at most .

Given Lemma 4 and Lemma 5, Theorem 1 follows directly.

5.2.2. Finality-duration

For ease of presentation, we abbreviate the throughput guarantee of Theorem 1 by a single quantity (recall Theorem 1), and we will characterize the finality-duration in terms of it.

Recall that a miner needs to make two decisions: (i) which transaction to include in the new block, and (ii) which two previous blocks to refer to. The two decisions are independent. In the previous subsection we have discussed (i), and in this subsection we focus on (ii), as this affects how the DAG grows.

It should be clear that, since the verification reward of a transaction (block) is evenly distributed among miners who append a block at the same distance to it, a miner always prefers a block with no descendants. At any particular time, we call a block without descendants a leaf, and we are interested in the size of the leaf set. Notice that in the classical blockchain system the leaf set has size 1, since the blockchain is a chain; in a non-linear model this is no longer the case. In principle, the leaf set could grow arbitrarily large, but what we show in this section is that its size is always bounded when miners play their equilibrium strategies. In this case, although we are considering a non-linear model, it is “almost linear”, as implied by Theorem 8. Based on this result, we further leverage techniques from random walks to prove that, for every block, after a bounded delay, all subsequent blocks will be its descendants (Theorem 10); consequently, if we set the verification depth accordingly in our design, every block will be verified by all the users, and security follows.

As mentioned before, each new block refers to two leaves. As every block offers the same total amount of verification reward, every leaf appears the same to the miners (unless it is in conflict with previous blocks, in which case miners will be biased based on the LWD rule). Therefore, a new block will select two leaves to refer to uniformly at random. Assuming leaves are not in conflict with previous blocks, we show that the size of the leaf set remains bounded in the long run with extremely high probability.

First, it is easy to see that the leaf set cannot become too small, as the newly appended blocks are themselves leaves. The following lemma shows that if the leaf set is sufficiently large, then with very high probability it will shrink after enough time.
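The leaf-set dynamics can be checked with a small Monte Carlo simulation (all parameters here are mine): in each round a fixed number of new blocks arrive, each referencing two leaves chosen uniformly at random from the current leaf set, after which the referenced blocks stop being leaves. In such runs the leaf count stabilizes near a small multiple of the arrival rate rather than growing without bound:

```python
import random

def simulate_leaf_count(rate: int, steps: int, seed: int = 0) -> int:
    """Number of leaves after `steps` rounds, with `rate` new blocks per
    round, each referencing two uniformly random current leaves.
    Requires rate >= 2 so that two distinct leaves always exist."""
    rng = random.Random(seed)
    leaves = list(range(2 * rate))  # start from an arbitrary leaf set
    next_id = 2 * rate
    for _ in range(steps):
        referenced = set()
        new_blocks = []
        for _ in range(rate):
            referenced.update(rng.sample(leaves, 2))  # two distinct leaves
            new_blocks.append(next_id)
            next_id += 1
        # Referenced blocks are no longer leaves; new blocks are.
        leaves = [b for b in leaves if b not in referenced] + new_blocks
    return len(leaves)
```

Intuitively, a large leaf set sees almost no reference collisions, so about twice as many leaves are consumed as are created per round and the set shrinks; a small leaf set sees many collisions and grows, yielding the bounded equilibrium size that the lemmas below establish formally.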

Lemma 0 ().

Let be an arbitrary small constant. If and , then with sufficiently high probability (at least ), , i.e., decreases by at least .


Consider an arbitrary . For any and any new block , the probability that refers to is , hence, the probability that none of the new blocks refer to is , i.e., the probability that an arbitrary has descendant(s) at is

Let a random variable indicate whether has descendant(s) at ; then and . Denote by the total number of leaves in that have descendant(s) at .

According to the Chernoff bound, with probability at least , . Now we estimate . It is easy to verify that

Hence, if and , then with sufficiently high probability (at least ), , and consequently,

The above lemma shows that if the leaf set is large, then with high probability it shall decrease; however, what we are interested in is the probability that the bound holds at all times. Towards this, we cast the problem as a random walk. Lemma 6 shows that with high probability the leaf count decreases by a certain amount, while with small probability it increases by at most a bounded amount. This can be interpreted as a random walk that moves right (increases) with small probability and moves left (decreases) with high probability. The following lemma is proved for a general random walk.

Lemma 7 ((Feller, 1968), p. 272).

Consider a random walk starting at , , where and . If , then

If , the above limit is .
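The bound invoked here is in the spirit of the classical gambler's-ruin estimate for a simple random walk; for unit steps (standard textbook material, not the exact constants of the lemma above) it reads:

```latex
% Simple random walk: S_0 = 0, S_{t+1} = S_t + X_t, with
%   P(X_t = +1) = p,  P(X_t = -1) = q = 1 - p,  p < q.
% The probability that the walk ever reaches level r > 0 is
\Pr\Bigl[\,\max_{t \ge 0} S_t \ge r\Bigr] \;=\; \Bigl(\frac{p}{q}\Bigr)^{r},
% which decays geometrically in r; for p = q = 1/2 the probability is 1.
```

This is what lets the analysis convert "the leaf count shrinks in expectation" into "the leaf count exceeds its bound only with geometrically small probability."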

Now we are ready to prove the following theorem.

Theorem 8.

Let be a small constant such that . With very high probability (at least ), for all .


Recall that . Let be the smallest time where ; then . Now we take this time as the start of the random walk interpretation, with the corresponding leaf count as the starting point. Using Lemma 7, we have that

Therefore, the probability that is bounded by for all is at least . ∎

Lemma 9.

Let be a small constant such that . For any transaction at that is not in conflict with prior transactions, with sufficiently high probability (at least ) every block appended at or after will be its descendant.

Given the above lemma, if we set the verification depth accordingly, then any transaction will be verified by all the users after the stated number of time units with high probability. The following theorem is thus true.

Theorem 10.

If and , then with probability at least , any transaction at will be verified by all the users after units of time.

Remark. Recall that the block-generation rate can be adjusted by setting different values of the system parameter in the building rule. Theorem 10 thus shows the trade-off between scalability and finality-duration.

6. Conclusion

We provide the first systematic analysis of blockchain systems with respect to three major properties: full verification, scalability, and finality-duration. We establish an impossibility result showing that no blockchain system can simultaneously achieve all three. We complement existing blockchain systems by constructing the first NLB that achieves both full verification and scalability. We also reveal, for the first time, the trade-off between scalability and finality-duration in an NLB. Whether a better trade-off exists remains open.


  • Ali et al. [2018] Muhammad Salek Ali, Massimo Vecchio, Miguel Pincheira, Koustabh Dolui, Fabio Antonelli, and Mubashir Husain Rehmani. Applications of blockchains in the internet of things: A comprehensive survey. IEEE Communications Surveys & Tutorials, 2018.
  • Boyen et al. [2017] Xavier Boyen, Christopher Carr, and Thomas Haines. Blockchain-free cryptocurrencies. 2017.
  • Chaum [1983] David Chaum. Blind signatures for untraceable payments. In Advances in cryptology, pages 199–203. Springer, 1983.
  • Conti et al. [2018] Mauro Conti, E Sandeep Kumar, Chhagan Lal, and Sushmita Ruj. A survey on security and privacy issues of bitcoin. IEEE Communications Surveys & Tutorials, 20(4):3416–3452, 2018.
  • Feller [1968] William Feller. An introduction to probability theory and its applications: volume I, volume 3. John Wiley & Sons, New York, 1968.
  • França [2015] BF França. Homomorphic mini-blockchain scheme, 2015.
  • Khalilov and Levi [2018] Merve Can Kus Khalilov and Albert Levi. A survey on anonymity and privacy in bitcoin-like digital cash systems. IEEE Communications Surveys & Tutorials, 20(3):2543–2585, 2018.
  • Liu et al. [2019] Ziyao Liu, Nguyen Cong Luong, Wenbo Wang, Dusit Niyato, Ping Wang, Ying-Chang Liang, and Dong In Kim. A survey on applications of game theory in blockchain. arXiv preprint arXiv:1902.10865, 2019.
  • Miers et al. [2013] Ian Miers, Christina Garman, Matthew Green, and Aviel D Rubin. Zerocoin: Anonymous distributed e-cash from bitcoin. In Security and Privacy (SP), 2013 IEEE Symposium on, pages 397–411. IEEE, 2013.
  • Nakamoto [2008] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008.
  • Nash [1951] John Nash. Non-cooperative games. Annals of mathematics, pages 286–295, 1951.
  • Popov et al. [2019] Serguei Popov, Olivia Saa, and Paulo Finardi. Equilibria in the tangle. Computers & Industrial Engineering, 136:160–172, 2019.
  • Popov [2016] Serguei Popov. The tangle. IOTA white paper, 2016.
  • Salman et al. [2018] Tara Salman, Maede Zolanvari, Aiman Erbad, Raj Jain, and Mohammed Samaka. Security services using blockchains: A state of the art survey. IEEE Communications Surveys & Tutorials, 21(1):858–880, 2018.
  • Sander and Ta-Shma [1999] Tomas Sander and Amnon Ta-Shma. Auditable, anonymous electronic cash. In Annual International Cryptology Conference, pages 555–572. Springer, 1999.
  • Sasson et al. [2014] Eli Ben Sasson, Alessandro Chiesa, Christina Garman, Matthew Green, Ian Miers, Eran Tromer, and Madars Virza. Zerocash: Decentralized anonymous payments from bitcoin. In 2014 IEEE Symposium on Security and Privacy, pages 459–474. IEEE, 2014.
  • The Ethereum Team [2019] The Ethereum Team. On sharding blockchains, 2019.
  • The Harmony Team [2018] The Harmony Team. Harmony - technical whitepaper, 2018.
  • The Zilliqa Team [2017] The Zilliqa Team. The zilliqa technical whitepaper, 2017.
  • Triantafyllidis and van Deventer [2016] Nikolaos Petros Triantafyllidis and Oskar van Deventer. Developing an ethereum blockchain application. 2016.
  • Tschorsch and Scheuermann [2016] Florian Tschorsch and Björn Scheuermann. Bitcoin and beyond: A technical survey on decentralized digital currencies. IEEE Communications Surveys & Tutorials, 18(3):2084–2123, 2016.
  • Vukolić [2015] Marko Vukolić. The quest for scalable blockchain fabric: Proof-of-work vs. bft replication. In International Workshop on Open Problems in Network Security, pages 112–125. Springer, 2015.
  • Yang et al. [2019] Ruizhe Yang, F Richard Yu, Pengbo Si, Zhaoxin Yang, and Yanhua Zhang. Integrated blockchain and edge computing systems: A survey, some research issues and challenges. IEEE Communications Surveys & Tutorials, 2019.


Appendix A Proof of Theorem 1

Proof of Theorem 1.

Suppose, on the contrary, that there exists such a blockchain system. Then by definition, every block or transaction will receive verifications from all the nodes within a constant delay. Let $\Delta$ be this constant delay. Consider an arbitrary node $v$ and let $c_v$ be the fixed time it takes for node $v$ to perform one verification. Let the throughput of the blockchain be $\varphi(n)$, where $n$ is the number of nodes; then by the definition of scalability we have $\varphi(n)\rightarrow\infty$ when $n\rightarrow\infty$. Note that all the blocks generated shall be verified by every node within the delay of $\Delta$, which means every node should perform $\varphi(n)\cdot\Delta$ verifications within time $\Delta$. However, node $v$ can only perform $\Delta/c_v$ verifications within that time, which is a constant. Since $\varphi(n)\rightarrow\infty$, when $n$ is sufficiently large we have $\varphi(n)\cdot\Delta > \Delta/c_v$. Therefore, it is impossible for an arbitrary node to complete all the verifications. Hence, the three properties, scalability, low finality-duration and full verification, cannot be satisfied simultaneously. ∎
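The counting argument above can be illustrated numerically. The sketch below is a hypothetical instantiation, not part of the proof: `DELTA`, `C_V`, and the linear `throughput` function are assumed values standing in for the constant delay, the per-verification time, and any unboundedly growing throughput.

```python
# Hypothetical numeric illustration of the Theorem 1 counting argument.
# DELTA, C_V, and throughput() are assumptions chosen for illustration only.

DELTA = 2.0   # assumed constant finality delay (seconds)
C_V = 0.01    # assumed fixed time one node needs per verification (seconds)

def throughput(n):
    """Assumed scalable throughput: grows without bound in the node count n."""
    return 0.5 * n  # transactions per second

def node_capacity():
    """Verifications one node can complete within DELTA -- a constant in n."""
    return DELTA / C_V

def required_verifications(n):
    """Full verification: each node must verify everything produced in DELTA."""
    return throughput(n) * DELTA

# For small n the node keeps up; beyond some threshold it provably cannot.
for n in [10, 100, 1000, 10000]:
    print(n, required_verifications(n), node_capacity(),
          required_verifications(n) <= node_capacity())
```

Since the required load grows linearly in `n` while the capacity stays constant, the comparison flips from true to false once `n` is large enough, mirroring the contradiction in the proof.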

Appendix B Proof of Claim 1

Proof of Claim 1.

Let and . For any , we have

Given that , we know that

Meanwhile, by , we have

According to Chernoff bound, we know that


Consider the function , it is easy to verify that is convex in , therefore, by Jensen’s inequality we have

Now consider the function . It is easy to verify that the function decreases when , therefore for , hence, for , we have

Using the fact that and taking , we have

Appendix C Proof of Claim 2

Proof of Claim 2.

Let . Then

Note that and are independent, hence we have the following,

Further notice that and are independent, thus

Now we have

For simplicity, let and , then we have

Notice that is a quadratic function in whose quadratic term has negative coefficient, therefore if and , then for any we have and the claim is proved. It is easy to see that , and , hence, the claim is proved. ∎

Appendix D Proof of Lemma 3


Without loss of generality we assume that . We define

For arbitrary and some small such that , , we prove in the following that

For simplicity we write . We have

Consider the function . Due to its convexity, for , we have

Hence, let , then for any , we have

therefore, . Now we can iteratively change into such that for , and otherwise, and get

It is not difficult to compute that

where for the last inequality we make use of Stirling's approximation that . Hence, the lemma is proved. ∎
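One standard two-sided form of Stirling's approximation, valid for all integers $n \ge 1$, that suffices for bounds of this kind is:

```latex
\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} \;\le\; n! \;\le\; e\sqrt{n}\left(\frac{n}{e}\right)^{n}.
```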

Appendix E Proof of Lemma 5

Proof of Lemma 5.

Consider the event that at most  transactions are selected by miners for some . Note that in this case , hence . Again we define  as the set of all the subsets of cardinality . The probability that miner  does not select any transaction in some  is . Given that miners select transactions independently, the probability that no miner selects any transaction in  is

where the first inequality follows by inequality of arithmetic and geometric means, the second inequality follows by the fact that

, and the third inequality follows by for . Taking the summation over all possible , the probability that at most transactions are selected is at most