SoK: Tools for Game Theoretic Models of Security for Cryptocurrencies

05/21/2019 · by Sarah Azouvi, et al. · UCL

Cryptocurrencies have garnered much attention in recent years, both from the academic community and industry. One interesting aspect of cryptocurrencies is their explicit consideration of incentives at the protocol level. Understanding how to incorporate this into the models used to design cryptocurrencies has motivated a large body of work, yet many open problems still exist and current systems rarely deal with incentive related problems well. This issue arises due to the gap between Cryptography and Distributed Systems security, which deals with traditional security problems that ignore the explicit consideration of incentives, and Game Theory, which deals best with situations involving incentives. With this work, we aim to offer a systematization of the work that relates to this problem, considering papers that blend Game Theory with Cryptography or Distributed systems and discussing how they can be related. This gives an overview of the available tools, and we look at their (potential) use in practice, in the context of existing blockchain based systems that have been proposed or implemented.


I Introduction

Since the deployment of Bitcoin in 2009, cryptocurrencies have garnered much attention from both academia and industry. Many challenges in this area have since been recognized, from privacy and scalability to governance and economics. In particular, the explicit consideration of incentives in the protocol design of cryptocurrencies (sometimes referred to as “cryptoeconomics”) has become an important topic.

The importance of economic considerations in security has been acknowledged since early work by Anderson [1, 2], who recognized that many security failures could be explained by applying established ideas from the fields of Game Theory and Economics. However, it tends to be the case that the incentives at play are external to the system design (and sometimes implicit). This leads to failures as the intended use of systems is misaligned with the incentives users respond to.

Cryptocurrencies, on the other hand, explicitly define some incentives in their protocols, for example in the form of mining rewards. The fact that incentives are considered by default in the design of the system suggests that they could be properly aligned with the intended use of the system, avoiding traditional modes of failure. Unfortunately, this has not yet been the case, as many incentive-related attacks on cryptocurrencies have been found [3, 4, 5].

Failures here arise from the inadequate models used in the design process of cryptocurrencies. While many projects and papers aim to consider both standard security and game theoretic guarantees, the vast majority end up considering them separately, despite their relation in practice. To this end, we consider the ways in which models in Cryptography and Distributed Systems can be made to explicitly consider game theoretic properties. We also consider how these can be tied together into a whole system, looking at requirements based on existing blockchain based cryptocurrencies.

Methodology

As we are covering a topic that incorporates many different fields (Economics, Cryptography and Distributed Systems security), coming up with an exhaustive list of papers would have been quite challenging and would have led to an output of much greater length. In order to pick a representative subset of papers, we started by looking at existing surveys on the topic of Game Theory and Security [6, 7, 8, 9, 10, 11], as well as specific book chapters on the topic, e.g., in the book by Nisan et al. [12]. We then looked at work published in popular Cryptography venues (e.g., CRYPTO) where papers on rational cryptography have been presented, as well as Distributed Systems venues (e.g., PODC) and interdisciplinary venues (e.g., WEIS, ACM Economics and Computation, P2PECON), looking specifically for papers that cover both Game Theory and Security. Interestingly, few of the papers that we present here come from interdisciplinary venues, as the papers published there focus more on applying game theoretic methods to solve a problem than on blending Game Theory and Security ideas, which is what we aim to discuss.

In many cases, different papers present definitions for similar concepts, so for the sake of exposition we do not always include all these definitions. We also omit work on security and game theory that does not directly relate to what we discuss, e.g., the body of work by Tambe et al. [13] about (physical) security and how to apply game theory to allocate limited security resources (e.g., police forces), or by Grossklags et al. [14] about security investments. Instead, we focus on some specific models of interest (e.g., Rational Protocol Design and Bayesian machine games in Section IV) when we think they are worth more attention. We then match these concepts with open problems in the security of cryptocurrencies. For each paper or proposed model, we consider the following questions. What are the assumptions and security models? How is the notion of security captured in practice? How are game theoretic aspects included? What are the gaps or areas that could be expanded on?

Our contributions

The goal of this work is to give an overview of the intersection of the three fields that are essential to the design of cryptocurrencies: Cryptography, Distributed Systems and Game Theory. Our contribution is an analysis of existing work that proposes solutions to this problem. Our analysis highlights new concepts introduced by these papers, as well as deficiencies. We do this in the context of security requirements that we formulate, arguing that they address deficiencies in existing security models that fail to cover all aspects of a decentralized monetary system. Finally, we discuss open challenges and how they could be addressed.

We give a brief introduction to these fields in Section III, along with cryptocurrencies, and discuss security in the context of a decentralized system. In Section IV we then look at the intersection of Cryptography and Game Theory, followed by the intersection of Distributed Systems and Game Theory in Section V. This allows us to review the low level building blocks of existing solutions, which we put in the context of our security notions and requirements. We then look at how these results are used in Section VI, where we look at proposed systems and their failures, tracing back to deficiencies identified in the two previous sections. We then discuss in Section VII the open challenges posed by failures that are observed, and how they could be addressed. Finally, Appendix A collects formal definitions for the concepts presented in the paper.

II Related Work

The work that most closely resembles ours is the set of previous surveys bridging Computer Science and Game Theory [6, 7, 8, 9, 10, 11]. They were of great inspiration for this work, but they are quite outdated (dating back to 2002, 2005, 2007, 2008, 2010) given the recent output of research tied to cryptocurrencies and other blockchain based systems.

On the topic of blockchains, many SoK papers and surveys exist that cover consensus protocols and security [15, 16, 17, 18, 19, 20, 21]. These are very different from our work, as we present general concepts and definitions related to designing decentralized systems with incentives. In particular, many concepts presented in this paper were not introduced in the context of consensus, but rather in the context of secure multiparty computation (MPC) or other problems tied to distributed systems. Most of the work presented in this SoK does not directly concern blockchains, even though blockchains are the motivation behind this work.

III Background

In this section we briefly introduce game theoretic tools that are mentioned throughout the paper, such as solution concepts and mechanism design. For a complete introduction to Game Theory, the reader is invited to look at any of the books (or other resources) on the topic [22, 23]. For the sake of exposition we do not cover some otherwise relevant concepts, e.g., Pareto efficiency and the single deviation test, as these do not appear in the papers we mention. We also discuss the interface between Game Theory and Cryptography and Security in practice, from how both areas define their agents to the models used to reason about them. We then give a brief introduction to Distributed Systems and cryptocurrencies.

III-A Game Theory and Mechanism Design

III-A1 Games and solution concepts

A game is defined by a set of players $N = \{1, \dots, n\}$ and a set of actions $A_i$ for each player $i$. A strategy for player $i$, denoted $\sigma_i$, is a function from $i$'s local state to actions in $A_i$. We denote by $\Sigma_i$ the set of all possible strategies for player $i$. We denote by $\sigma = (\sigma_1, \dots, \sigma_n)$ the joint strategy of all players, and by $\sigma_{-i}$ the joint strategy of all players other than $i$. We denote by $u_i(\sigma)$ player $i$'s utility when $\sigma$ is played. Players may be of different types, denoted $t_i$ for player $i$.

Game Theory uses solution concepts in order to predict the outcome of a game, the most well known being the Nash equilibrium (NE). A joint strategy $\sigma$ is a Nash equilibrium if, given that all the other players follow $\sigma_{-i}$, player $i$ is better off playing $\sigma_i$ as well. More formally, for all players $i$ and all strategies $\sigma_i' \in \Sigma_i$: $u_i(\sigma_i, \sigma_{-i}) \geq u_i(\sigma_i', \sigma_{-i})$. Note that multiple NE can exist for a given game.
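To make the definition concrete, the following sketch (our own illustration, not from the surveyed papers; the payoffs are the standard prisoner's dilemma) checks the NE condition by brute force, testing every unilateral deviation:

```python
# Brute-force check of the NE condition u_i(s_i, s_-i) >= u_i(s_i', s_-i)
# in a two-player normal-form game (prisoner's dilemma payoffs).
from itertools import product

ACTIONS = ["cooperate", "defect"]
# payoffs[(a1, a2)] = (u1, u2)
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def is_nash(profile):
    """A profile is a NE if no player gains by a unilateral deviation."""
    for player in (0, 1):
        current = payoffs[profile][player]
        for deviation in ACTIONS:
            candidate = list(profile)
            candidate[player] = deviation
            if payoffs[tuple(candidate)][player] > current:
                return False
    return True

print([p for p in product(ACTIONS, repeat=2) if is_nash(p)])
# [('defect', 'defect')] -- the unique pure NE, despite (cooperate,
# cooperate) being better for both players.
```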

Extensive form games

In practice, many games involve multiple rounds of play with moves made sequentially by players, as in chess for example. These are generally described as extensive form games, which can be represented in a tree form that captures the possible sequences of actions in the game. Going from a normal form game to an extensive form game requires specifying an ordering of play, payoffs as a function of moves, information sets (the moves that could have taken place given what a player has observed), and a probability distribution over Nature's moves (moves by players with no strategic interest in the game's outcome, e.g., a dealer).

When considering the tree representation of the game, one can find subgames i.e., subsets of a game that have an initial node such that it is the only member of its information set, its successors are in the subgame, and the nodes in the information sets of the subgame are in the subgame. Subgames are very relevant because they allow us to define a subgame perfect equilibrium (SPE), a refinement of the Nash equilibrium that eliminates Nash equilibria involving irrational subgame behavior. As in the case of a NE, at least one SPE is guaranteed to exist for a finite extensive form game, and we define it as follows: a strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game.
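For finite games with perfect information, an SPE can be computed by backward induction, solving each subgame from the leaves up. The sketch below is our own toy example, not taken from the paper:

```python
# Backward induction: each node records the player to move and its
# children; leaves record a payoff vector (player 0, player 1).
def backward_induction(node):
    """Return (payoff_vector, chosen_actions) for the subgame at node."""
    if "payoffs" in node:                  # leaf node
        return node["payoffs"], []
    player = node["player"]
    best = None
    for action, child in node["children"].items():
        payoffs, plan = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [(player, action)] + plan)
    return best

# Tiny two-stage game: player 0 moves first, player 1 responds.
game = {"player": 0, "children": {
    "L": {"player": 1, "children": {
        "l": {"payoffs": (2, 1)}, "r": {"payoffs": (0, 0)}}},
    "R": {"player": 1, "children": {
        "l": {"payoffs": (3, 0)}, "r": {"payoffs": (1, 2)}}},
}}
print(backward_induction(game))
# ((2, 1), [(0, 'L'), (1, 'l')]): player 0 plays L because player 1
# would rationally answer R with r, not l.
```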

Incomplete and imperfect information

So far we have assumed that players have complete information about the game they are in, but this is not always realistic. A game where players do not always know exactly what has taken place earlier in the game is said to have imperfect information. In the case where players do not know exactly the type of the other players, which determines their payoff function, the game is said to have incomplete information.

Imperfect and incomplete extensive form games can be related by having Nature make the first move in the game and randomly assign types to players, turning an incomplete game into an imperfect game, as different outcomes of Nature's initial move are possible. The players then have a probability distribution over the types of other players. (Formally speaking, the probability distribution is over the set of states of Nature, which are mapped to player types.) The beliefs of players are expressed as conditional probabilities of a history in an information set, given the information set. Players can update their beliefs when they gain new information according to Bayes' theorem, leading games of this form to be called Bayesian games. A collection of beliefs is called a belief system, which paired with a strategy profile forms an assessment.

Now that we no longer have perfect information, we must leave behind some solution concepts. The lack of perfect information means that players cannot always tell which subgame they are in, so an SPE is no longer applicable. Players now also reason about expected payoffs, so the standard definition of a NE is no longer ideal, but we can define a Bayesian Nash equilibrium (BNE) analogously by replacing utilities with expected utilities, although we usually still refer to them simply as utilities. This does not, however, take into account the beliefs of the players, which now have to be considered.

Going further, we can define a sequential equilibrium which requires an assessment that is sequentially rational and consistent i.e., given any information set reached with positive probability (according to the strategy profile) the beliefs at that information set are derived using Bayes’ rule and the strategy profile. This gives a slightly stronger equilibrium due to the stricter constraint on the beliefs.

III-A2 Correlated Equilibria

NE and BNE are very powerful tools used widely in Game Theory, but they also have drawbacks (e.g., computing them is not tractable [24]) and do not always yield the best overall outcome (see the famous prisoner's dilemma for example [25]). For this reason other solution concepts exist, one notable example being the correlated equilibrium. The intuition behind a correlated equilibrium is as follows: assume that there exists a public signal that tells players what to play; based on the information they receive, players can make inferences about the moves that the other players are going to play. A traffic light is an example of such a public signal: if a player sees a red light they can assume that a player in the crossing lane sees a green light, and thus that their best decision is to stop at the intersection. In a correlated equilibrium, each player chooses their action according to the information given by the public signal. The role of the public signal is usually to help players achieve the optimal outcome of the game.
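The traffic-light intuition can be checked numerically; the payoffs and signal distribution below are our own illustrative choices for a "chicken"-style crossing game. A public signal recommends an action to each player, and we verify that obeying the recommendation is optimal given the conditional beliefs it induces:

```python
# Correlated equilibrium check: the signal recommends (go, stop) or
# (stop, go) with equal probability; neither player gains by disobeying.
ACTIONS = ["go", "stop"]
payoffs = {
    ("go",   "go"):   (-10, -10),   # both enter the intersection: crash
    ("go",   "stop"): (  1,   0),
    ("stop", "go"):   (  0,   1),
    ("stop", "stop"): (  0,   0),
}
signal = {("go", "stop"): 0.5, ("stop", "go"): 0.5}  # the traffic light

def obeying_is_optimal():
    for player in (0, 1):
        for rec in ACTIONS:  # the recommendation this player receives
            # Conditional distribution over full profiles given `rec`.
            cond = {p: w for p, w in signal.items() if p[player] == rec}
            total = sum(cond.values())
            if total == 0:
                continue
            def eu(action):
                s = 0.0
                for prof, w in cond.items():
                    play = list(prof)
                    play[player] = action
                    s += (w / total) * payoffs[tuple(play)][player]
                return s
            if any(eu(dev) > eu(rec) for dev in ACTIONS):
                return False
    return True

print(obeying_is_optimal())  # True: the signal is a correlated equilibrium
```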

III-A3 Mechanism design (MD) and Implementations

While Game Theory is typically about understanding the behavior of players in a given game, systems are usually designed and implemented with a goal in mind, e.g., preventing double spending in cryptocurrencies. To achieve this, our goals can be expressed as a social choice function (SCF), a function that, given the preferences (or types) of all players, outputs an outcome. For example, in a voting system, given all the ranked preferences of voters, a SCF will choose a candidate.

Once we have a target outcome in mind, the idea is to make sure that the incentives are designed in a way such that selfish players reach this outcome. In some ways, this can be thought of as reversing the basic idea of Game Theory, designing a game that leads to a specific outcome.

To do this, we use a mechanism $M$ that maps the action profile of players to a distribution over outcomes. A mechanism $M$ is then said to implement a SCF $f$ if $M(S(u)) = f(u)$, where $u = (u_1, \dots, u_n)$ is the vector of all of the players' utility functions and $S(u)$ represents all strategy vectors that could reasonably result from selfish behavior (as captured by a solution concept $S$). Informally, the previous equality means that whatever selfish strategy players choose, the outcome of the game will correspond to the SCF. The solution concept is supposed to reflect reality, in the sense that when it holds, players' selfish strategies lead to the desired outcome. For example, in a voting system we would like to design a system where, given all the preferences of the players, the candidate chosen by the SCF is elected; one way to do this is to incentivize players to report their truthful preferences. A mechanism can also be viewed as a protocol, with the corresponding game being thought of as having the protocol as the recommended strategy, and deviations from the protocol as other possible strategies.

It must be pointed out that doing this in practice is not always easy, as experimental Game Theory reveals. Gneezy and Rustichini [26] looked at the effects of implementing incentives at a nursery in order to reduce the rate at which parents collected their children late. This was done by punishing late parents with a fine, which intuitively should motivate parents to arrive on time. Instead, parents interpreted the fine as a way of paying for extra childcare, and started coming even later. Furthermore, once the fine was removed as it was counterproductive, the parents' behavior did not revert, so the system had in fact been irreversibly damaged.

A very important result in MD is the revelation principle that states that any social choice function that can be implemented by any mechanism can be implemented by a direct truthful mechanism. (A mechanism is direct if players need only reveal their type/utility function to the designer of the game and truthful if players’ best strategy is to reveal their true type.)
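A classic instance of a direct truthful mechanism is the second-price (Vickrey) auction: reporting one's true value is optimal regardless of what others bid. The sketch below is our own illustration, with example values and an integer grid of possible misreports, verifying truthfulness by brute force:

```python
# Second-price auction: highest bidder wins and pays the second price.
def second_price_outcome(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(i, true_value, bids):
    winner, price = second_price_outcome(bids)
    return true_value - price if winner == i else 0

true_values = [10, 7, 3]
for i, v in enumerate(true_values):
    truthful = utility(i, v, true_values)
    for lie in range(0, 20):           # a grid of possible misreports
        bids = list(true_values)
        bids[i] = lie
        assert utility(i, v, bids) <= truthful, (i, lie)
print("truthful bidding is optimal for every bidder (on this grid)")
```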

III-B Agents in Game Theory and Security

Both Game Theory and Security deal with the interaction of agents in a given setting, but they differ noticeably in the way they model these agents. At the same time, security proofs are often game based, which points to some similarities.

Security scholars deal with adversaries, agents that aim to circumvent or otherwise break a security property of the system. The value that an adversary attaches to their success is not usually given, as security should ideally be robust to any adversary, although they may have computational limitations. Game theorists, on the other hand, deal with rational (sometimes also called selfish) agents. These assign a value to their goals, and would rather optimize their payoff than achieve an arbitrary goal, but they typically do not have restrictions (e.g., computational) like a security adversary would.

In practice, this translates to different assumptions being used when formally modeling a game or the security of a system. This makes it difficult to then prove statements that involve security and game theoretic properties. As both deal with different types of agents, a proof will involve the complexity from both sides and quickly become hard to manage. This leads to a disjoint treatment of both aspects in works that attempt to cover both.

There are nonetheless inherent connections, as security often uses game based proofs, although an adversary wins if they have a high enough probability of succeeding in their attack rather than if their utility is high enough. But if we assume that the adversary attaches a high payoff to the success of their attack, then we start to recover game theoretic intuition. Similarly, for proofs based on the simulation (ideal/real world) paradigm, some connections are evident.

The idea behind the simulation paradigm first originated in the context of secure computation. Goldreich et al. [27] introduced the idea of bypassing the need for a trusted third party (i.e., a mediator) in games of incomplete information, by replacing it with a protocol that effectively simulates it, such that any information known by players at any step of the game is the same as they would have known in an execution of the game involving the trusted third party. This was then refined by Micali and Rogaway [28] in terms of ideal and secure function evaluation. The ideal function evaluation corresponds to the evaluation of the function with a trusted third party that receives the private inputs of the parties and evaluates the function before returning the result to each party. Intuitively, this achieves the best possible result. The secure function evaluation involves the parties trying to simulate the ideal trusted third party; the protocol used to do so is then considered secure if the parties cannot distinguish between the ideal and secure function evaluations. This means that the protocol performs close enough to the standard of the ideal function evaluation, and an adversary would not have anything to gain.

Simulation has become a popular tool for cryptographers [29]. In particular, Canetti's Universal Composability Model [30] expands on these ideas to provide a framework for secure composability of protocols. As the UC model is used in some of the work that is covered in Section IV, we quickly introduce it here. Canetti considers an ideal functionality $\mathcal{F}$, an algorithm for the trusted party, that is executed in an instance of the ideal protocol. A protocol $\pi$ then UC-realizes an ideal functionality $\mathcal{F}$ if it emulates the instance of the ideal protocol for $\mathcal{F}$. The protocol $\pi$ UC-emulates a protocol $\phi$ if for any adversary $\mathcal{A}$ there exists an adversary $\mathcal{S}$ (also called the simulator) such that, for any environment $\mathcal{Z}$, the output of $\mathcal{Z}$ interacting with $\mathcal{A}$ and parties running $\pi$ is indistinguishable from the output of $\mathcal{Z}$ interacting with $\mathcal{S}$ and $\phi$. This builds on the idea of secure function evaluation, where $\phi$ will be the ideal protocol of some ideal functionality $\mathcal{F}$. If the outputs of $\pi$ are then indistinguishable from those provided by $\mathcal{F}$, we recover Micali and Rogaway's idea of secure computation. Using this, the UC model guarantees composability in the sense that if $\pi$ UC-emulates $\phi$, then for any composed protocol $\rho^{\phi}$, $\rho^{\pi}$ UC-emulates $\rho^{\phi}$ i.e., if $\rho$ has $\phi$ as a subroutine, then replacing calls to $\phi$ by calls to $\pi$ does not change the behavior of $\rho$. This is formalized with interactive Turing machines (ITM) as the model of computation, and allows for parallel and sequential executions of a protocol that is secure.

III-C Distributed Systems (DS)

A distributed system is a system that consists of a set of connected components that communicate with each other in order to achieve a common goal. One important problem is consensus, or Byzantine agreement, where, informally, the components of the system have to agree on a single value.

DS consider two types of agents: good agents (i.e., agents that follow the protocol) and bad agents, which can exhibit faults of different types (e.g., a crash fault, where a component halts). In this paper we will focus on the most general type of fault, Byzantine faults, where a bad agent may act in an arbitrary way. Traditionally, DS focus on proving two properties: safety and liveness. Intuitively, these properties are often summarized as follows: safety means that nothing bad happens, and liveness says that something good eventually happens.

For example, a solution to a consensus problem usually requires three conditions to be met: agreement (all good agents should agree on the same value), validity (if all good agents have the same initial value then they should agree on that value) and termination (all good agents should eventually decide on a value). Agreement and validity fall under safety, and termination under liveness. A system where the safety and liveness conditions are met even in the presence of Byzantine faults is called Byzantine fault tolerant.

A famous solution to the consensus problem in the presence of Byzantine adversaries is the Practical Byzantine Fault Tolerance algorithm (PBFT) [31], which can tolerate up to a third of the network being Byzantine. PBFT proceeds in three phases: pre-prepare, prepare and commit. At each stage, each player usually needs to multicast a message to their peers, making the message complexity of this protocol very high and thus rendering it inapplicable to large networks.
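A back-of-the-envelope count (our approximation of the protocol in [31], not a precise accounting) shows why the all-to-all prepare and commit phases give quadratic message complexity:

```python
# Approximate PBFT message counts for n replicas in one round.
def pbft_message_count(n):
    pre_prepare = n - 1          # the leader multicasts to the backups
    prepare = (n - 1) * (n - 1)  # each backup multicasts to all others
    commit = n * (n - 1)         # every replica multicasts to all others
    return pre_prepare + prepare + commit

for n in (4, 16, 64):
    print(n, pbft_message_count(n))
# 4 24 / 16 480 / 64 8064: the count grows roughly as 2*n^2, which is
# why PBFT does not scale to networks with thousands of participants.
```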

Additionally, in traditional Distributed Systems, the set of participants is not fully open, in order to defend against Sybil attacks where an adversary creates many identities to take control of the network.

III-D Decentralization, incentives and security

Because the security of most decentralized systems, like cryptocurrencies, is linked not only to the security of the protocols, but also to having a majority of participants following the rules, decentralization and incentives have to be considered.

The ideal system does not depend in any way on any single party, which requires it to be decentralized. Troncoso et al. [32] give an overview of decentralized systems, defining a decentralized system as “a distributed system in which multiple authorities control different components and no single authority is fully trusted by all others”. This highlights the fact that every component of the system should be decentralized, and in particular a single authority distributing its own system (or component) is not decentralized. This can be hard to achieve in practice, and the level of decentralization of a system should always be looked at critically. A decentralized system where all important parties are independent but under the jurisdiction of a single government may not truly be decentralized. All these independent parties may also depend on a very few hardware manufacturers (or other service providers).

Incentives are key to achieving an honest majority. Azouvi et al. [33] give an overview of the role incentives play in security protocols, including cryptocurrencies. This highlights the fact that achieving guarantees of equilibria on paper may not be meaningful in practice when the wrong assumptions and models are used.

What does security mean in this context? Clearly, protocols that are cryptographically secure and that achieve safety and liveness guarantees are needed, otherwise everything else would not work. But if the security of the system also depends on achieving a high enough degree of decentralization, more than standard security properties is required. In particular, decentralization relates to the participants and their behavior rather than solely the protocol. Any decentralized protocol can always be run in a centralized manner, so it is not enough to design a system that can be used in a decentralized manner. Rather, the requirement is to design a system that is advantageous to use in a decentralized manner.

Doing this naturally requires a better understanding of why users would want to be decentralized rather than try to gain more individual control of the system for themselves, so security is no longer just about the protocol itself, but also about how it can be used and how it is used.

Bitcoin and cryptocurrencies

Bitcoin represents an important innovation from classical consensus protocols as it is fully open and decentralized. In the Bitcoin consensus protocol (sometimes called Nakamoto consensus) participants can join and leave as they wish, and Sybils are handled through the use of Proof-of-Work (PoW).

The data structure that keeps track of the state of the system in Bitcoin is a chain of chronologically ordered blocks i.e., the blockchain, with each block containing a list of transactions. To win the right to append a block (and win the block reward), participants compete to solve a computational puzzle i.e., a PoW. They include in their block the solution to that puzzle, the PoW, such that other players can verify its correctness. This block then initiates a new puzzle to be solved. This process of creating new blocks is called mining, and participants in this protocol are called miners.
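The puzzle can be sketched as follows: find a nonce such that the hash of the block contents and nonce falls below a target, which anyone can verify with a single hash. This is a toy version of hash-based PoW with an illustrative difficulty; real parameters and block formats differ:

```python
# Toy PoW: costly to find a valid nonce, cheap for anyone to verify.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16):
    """Search for a nonce whose SHA-256 hash is below the target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the PoW: include it in the block as proof
        nonce += 1

print("valid nonce:", mine(b"prev_hash||transactions"))
```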

If two participants find a solution at roughly the same time, two blocks will be created at the same height in the chain, creating a fork, which is problematic as the two blocks can contain conflicting transactions. To resolve this, each participant will choose one of the two blocks and start creating a new block on top of it by solving the associated PoW. Whenever one chain becomes longer than the other, participants will abandon the shorter one. As it is unlikely that two chains will keep the same length for long, participants thus reach consensus by following the longest chain rule.
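The fork-choice rule itself is simple to state in code; the sketch below is our own simplification, with a chain represented as a list of block ids:

```python
# Longest-chain fork choice: adopt the longest chain seen so far
# (ties are broken arbitrarily here; real clients keep the first seen).
def fork_choice(known_chains):
    """Return the chain a participant should mine on."""
    return max(known_chains, key=len)

chain_a = ["genesis", "b1", "b2a"]         # one side of a fork at height 2
chain_b = ["genesis", "b1", "b2b", "b3b"]  # the other side, now longer
print(fork_choice([chain_a, chain_b]))     # participants abandon chain_a
```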

The security of Bitcoin relies on a majority of the mining power (i.e., hashing power) in the network following the protocol, whether because they are honest or simply rational. Taking control of half of the computational power of Bitcoin for only an hour has a considerable cost (around 670k USD [34] as of May 2019), although it is within reach of potential adversaries. This cost depends on the hash rate of the network (i.e., the cost of mining) and the price of Bitcoin in USD, as once mining is no longer profitable for some miners they are likely to stop mining, reducing the hashing power required to control a majority of the network.
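The cost estimate is a simple product of assumed quantities. The sketch below uses illustrative numbers only: the hash rate and rental price are assumptions chosen to roughly reproduce the order of magnitude cited above, not measurements:

```python
# All inputs are assumptions for illustration, not measured values.
network_hashrate = 50e18      # hashes per second for the whole network
rental_price = 0.027          # USD per TH/s per hour, assumed
hours = 1.0

majority_fraction = 0.5       # the attacker needs just over half
terahashes = network_hashrate * majority_fraction / 1e12
cost = terahashes * rental_price * hours
print(f"~{cost / 1e3:.0f}k USD for one hour")  # ~675k USD with these inputs
```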

This is one of the reasons why Bitcoin's security is so tightly linked to incentives: when mining is no longer worthwhile, the security of the network decreases. Participation in the network is also rewarded by financial gain (through block rewards and transaction fees). The more participants there are, the harder it is to attack the network, since the cost of mounting a 51% attack (where an adversary takes control of more than half of the computational power) increases. These financial motivations are thus also paramount.

Since Bitcoin's deployment, many alternative cryptocurrencies that similarly rely on a blockchain have emerged. The most popular of these is Ethereum [35], which differs from Bitcoin in that it provides a more complex scripting language, meaning that rather than processing simple transactions, nodes in the system execute scripts that allow users to perform a multitude of functionalities (so-called smart contracts).

Because PoW consumes a large amount of energy, alternative consensus protocols have been proposed, e.g., Proof-of-Stake (PoS) [36]. Considering PoW as a mechanism that elects a leader based on their computational power (i.e., the participant who solves the PoW first wins), PoS can be thought of as a mechanism that elects a leader based on the amount of stake (i.e., coins) that they have in the system.
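A stake-weighted election can be sketched in a few lines. This is our own illustration; the seed stands in for an unbiasable randomness beacon, which is itself a hard problem in real PoS designs:

```python
# Toy PoS leader election: probability of election is proportional
# to stake. A real protocol must source the seed in an unbiasable way.
import random

def elect_leader(stakes: dict, seed: int):
    rng = random.Random(seed)
    participants, weights = zip(*stakes.items())
    return rng.choices(participants, weights=weights, k=1)[0]

stakes = {"alice": 50, "bob": 30, "carol": 20}  # coins staked
print(elect_leader(stakes, seed=42))  # alice wins ~50% of seeds
```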

IV Cryptography and Game Theory

Cryptography considers a worst-case adversary. By relaxing this assumption, it is possible to design protocols that bypass impossibility results or achieve better efficiency than existing ones, while maintaining a realistic adversarial model. This section considers work at the intersection of Game Theory and Cryptography. First, in Section IV-A, we introduce the subfield of rational cryptography, which considers a rational adversary and incorporates game theoretic notions like utility functions into cryptographic schemes. We particularly focus on the Rational Protocol Design framework of Garay et al. [37]. The approach taken there is to combine the UC framework (introduced in Section III) with some MD notions. Next, in Section IV-B, we look at another approach that consists in adapting game theoretic notions to consider computational aspects in games.

IV-A Cryptography meets Game Theory: Rational Cryptography

Initiated by Dodis, Halevi and Rabin [38], rational cryptography is a subfield of cryptography that incorporates incentives into cryptographic protocols. In this context, new adversaries and their capabilities have to be defined, as well as how to account for incentives and how protocols can be proven secure against such adversaries.

First, we note that most of this work [39, 40, 41, 42, 43, 44, 45] focuses on multi-party secret sharing or secure function evaluation. Thus, no monetary incentive is usually considered. As pointed out by Dodis and Rabin [45], in a rational cryptographic context, the utilities of the players are usually dependent on cryptographic considerations such as: correctness (a player prefers to compute the function correctly), exclusivity (a player prefers that other players do not learn the value of the function correctly), privacy (a player does not want to leak information to other players), voyeurism (a player wants to learn as much as possible about the other parties).

In addition to the above, other interesting parameters can come into play in the adversary's utility function. For example, Aumann and Lindell [46] formalized the concept of covert adversaries, which may deviate from the protocol but only if they are not caught doing so. As they argue, there are many obvious situations where parties cannot afford the consequences of being caught cheating. Security is based on the ideal/real simulation paradigm, with successful cheating defined as behavior that cannot be simulated in the ideal model. This is done by allowing the ideal model simulator to fail, meaning that the output distribution of the protocol in the real world cannot be simulated. If these output distributions can be distinguished with probability $\Delta$, the honest parties will detect a corrupt party cheating with probability at least $\epsilon \cdot \Delta$, where $\epsilon$ is the deterrence factor. Thus the probability that honest parties will detect cheating is directly related to the probability that the simulator may fail in its simulation. A special cheat instruction that can be sent by the adversary to the trusted party is added to the ideal model.

Covert adversaries are somewhat similar to adding a punishment to the utility function: rational players do not want to be caught cheating, as the punishment decreases their utility. In Aumann and Lindell's setting the protocol detects the cheating, but in practice we need to incentivize participants to do so. Some work considers adversarial behavior together with rational adversaries [47]; we consider this further in Section V.
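The deterrence effect can be seen with a one-line expected utility calculation; the payoff numbers below are our own illustration, not values from [46]:

```python
# If cheating is detected with probability eps and punished, the
# expected gain from cheating can be driven below the honest payoff.
def expected_cheating_utility(gain, punishment, eps):
    return (1 - eps) * gain + eps * punishment

honest = 1.0
for eps in (0.1, 0.5, 0.9):
    u = expected_cheating_utility(gain=2.0, punishment=-5.0, eps=eps)
    print(eps, u, "deters" if u <= honest else "does not deter")
# 0.1 -> 1.3 (does not deter); 0.5 -> -1.5; 0.9 -> -4.3 (both deter):
# the deterrence factor determines whether rational players cheat.
```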

In terms of equilibria, the solution concepts proposed in these works are often extensions of a NE. For example, Halpern and Teague look for a NE that survives iterated deletion of weakly dominated strategies (i.e., strategies that are at best only as good as others), where all dominated strategies are removed at each step [39]. Asharov et al. [44] adapt the simulation-based definition to capture game-theoretic notions of (for example) fairness, meaning that one party learns the output of a computation if and only if the other does as well. The approach they take is to add a utility function for each notion of security considered (e.g., for correctness the utility will be $1$ if the output is correct and $0$ otherwise). They then show an equivalence theorem that states, roughly, that following the protocol is a Nash equilibrium if the protocol correctly computes the multi-party function in the presence of fail-stop adversaries. As their notions are weaker than standard cryptographic definitions, they can be achieved in some settings where impossibility results usually hold in traditional cryptography.

Rational Protocol Design

Following the work presented above, Garay et al. propose Rational Protocol Design (RPD) [37]. In this setting, they define a game between the designer of the protocol $D$ and the attacker $A$. The number of parties is known to $D$ and $A$. The game is parametrized by a multi-party functionality $\mathcal{F}$ and consists of two sequential moves. In the first step, $D$ sends to $A$ the description of the protocol $\Pi$ that honest parties are supposed to execute. In the second step, $A$ chooses a polynomial-time interactive Turing machine (ITM) $\mathcal{A}$ to attack the protocol. The corresponding game is denoted $\mathcal{G}_{\mathcal{M}}$, where $\mathcal{M}$ corresponds to the attack model, which specifies the functionality, the action sets and the utilities. A strategy profile of an attack game is defined as a vector $(\Pi, \mathcal{A})$. The game is zero-sum in the original paper, but was later adapted to be a non-zero sum game in the context of Bitcoin [48].

The methodology used to define the attacker's utility consists of three steps. First, relaxing the functionality $\mathcal{F}$ to a weaker functionality that explicitly allows for some security breaches. Second, defining the payoff of any ideal-world adversary as a function of the view of any ideal evaluation of the relaxed functionality. Third, assigning to each adversarial strategy for a given protocol the expected payoff achieved by the best simulator. The best simulator here is the simulator that successfully emulates the attack while achieving the minimum score (the idea being that the adversary should be rewarded only if it forces the simulator to provoke such an event).

In follow-up work [49], notions of fairness are also considered, and provide a means of comparison between protocols i.e., which protocol is fairer. Informally, a protocol $\Pi$ will be at least as fair as another protocol $\Pi'$ if the utility of the best adversary attacking $\Pi$ (i.e., the adversary which maximizes its utility) is no larger than the utility of the best adversary attacking $\Pi'$, except for some negligible quantity.

The solution concept introduced within the RPD framework is a subgame perfect equilibrium in which the parties' utilities are close to their best-response utilities. When it comes to security, the RPD framework defines the notion of attack-payoff security. Informally, attack-payoff security states that an adversary has no incentive to deviate from the protocol.

Another concept, incentive compatibility, was introduced in follow-up work on RPD [48]. Here, the definition is slightly different from the definition usually given within MD, where participants achieve the best outcome by revealing their true preferences. Informally, incentive compatibility states that agents gain some utility when participating in the protocol i.e., they choose to play instead of “staying at home”.

To sum up, a protocol is secure in the RPD framework if it UC-emulates the relaxed functionality and an adversary has no incentive to exploit the allowed attacks. In their work on Bitcoin [48], Badertscher et al. state that it is only necessary to consider the real world, and not the ideal world, in proofs. This makes us wonder whether this framework could be simplified further.

Apart from the recent work of Badertscher et al. [48], rational cryptography does not consider monetary payments.

One important drawback of RPD is that it does not consider the presence of irrational adversaries, despite the fact that in security we do not always know the motivation of an attacker. RPD uses a relaxed functionality to allow for some defined attacks, but this may not cover all attacks, leaving the door open to others. The UC model is meant to account for everything that could happen in the environment (i.e., the universe of the protocol), but it is a computational model: adding incentives to it does not automatically make it account for all possible incentives. This is a clear flaw, as we know that considering incentives only arbitrarily leads to failures (e.g., failing to consider outside incentives, or more generally “soft” incentives such as political and other external motivations [33]).

IV-B Game Theory meets Cryptography: computational games

The classic MD literature largely ignores computational considerations. The challenge is to make mechanisms computationally feasible without sacrificing useful game-theoretic properties, such as efficiency and strategy-proofness.

Rather than starting from a cryptographic setting and incorporating game theoretic notions, as presented in Section IV-A, one can also start from a game theoretic setting and from there move towards cryptographic notions by considering the computational aspects of games. This approach is taken in a body of work by Halpern and Pass that considers Bayesian machine games, first introduced in a preprint [50] that later appeared in different forms [51, 52, 53], primarily in venues focused on Economics rather than Security.

A Bayesian machine game (BMG) is defined very similarly to a standard Bayesian game (introduced in Section III); it differs only in that it considers the complexity (in computation, storage cost, time or otherwise) of actions in the game. This is done by having players pick machines (e.g., a TM or ITM) that will execute their actions and defining a complexity function for that machine, which the utility takes into account.

A Nash equilibrium for a BMG is expressed in the usual way, but it now takes into account the machine profile rather than a strategy profile. There is, however, an important distinction to make between a standard Nash equilibrium and a Nash equilibrium in machine games, which is that the latter may not always exist. The necessary conditions for the existence of a Nash equilibrium in a machine game are given by Halpern and Pass [50] to be a finite type space, bounded machines and a computable game. A follow up paper by Halpern et al. [54] discusses the general question of the existence of a Nash equilibrium for resource bounded players.

So far, the discussion of computational games has not yet touched on security related issues, but Halpern and Pass prove an equivalence theorem that relates the idea of universal implementation in a BMG to the standard notion of secure computation in Cryptography [55, 27]. Intuitively, this goes back to the work of Goldreich, Micali and Wigderson [27] that first expressed (to the best of our knowledge) the idea of secure computation as the replacement of a mediator in a game that preserves an equilibrium.

A universal implementation corresponds to the idea that a BMG implements a mediator if, whenever a set of players want to truthfully provide their input to the mediator, they also want to run their machine using the same input, preserving the equilibrium and action distribution. There are then multiple equivalence theorems of different strengths (up to the information theoretic case) that relate flavors of secure computation to flavors of implementation. The relation is important, as it not only implies that secure computation leads to a form of game theoretic implementation, but also the reverse. This opens up the option that the guarantees of (some flavor of) secure computation could be achieved by considering the Game Theory of a problem, although it is not clear whether this process would be more efficient.

Taking a step back from implementations, there are other aspects of BMGs to consider. As outlined in Section III, a Nash equilibrium is not the only solution concept one might look for, and many situations that might be encountered in security can be looked at in the form of incomplete information extensive form games involving sequential actions. The information sets of players then correspond to the histories of computation where a player is in a given state at the end of the history. This means that players choose their information sets, much like they choose their machine. Halpern, Pass and Seeman [56] cover the topic in much detail. A useful solution concept to consider in such a setting is that of sequential equilibrium, which for the case of BMGs is considered in yet another paper by Halpern and Pass [57]. They define a sequential equilibrium in computational games in the same way as a traditional sequential equilibrium. The difference is that, as in the case of a NE, the utilities are given for machines with respect to a machine profile.

The belief portion of the sequential equilibrium plays an important role. For many systems of interest in, for example, the cryptocurrency community, participation is a voluntary action. Participants in the network can choose to join and leave at any time. They may also choose to be involved simply as a user, by running a full node, or by being a miner. Each of these represents different levels of initial investment and continuous cost that depend on the belief the participant has about the network in its current state, taking into account past information, and its possible future states.

The equilibrium comes in two variants, ex ante and interim. In the ex ante case the player commits to their strategy before the game starts but chooses it such that it remains optimal even off the equilibrium path in the game’s tree, while in the interim case the player can reconsider whether they are doing the right thing at each information set and change accordingly. Both are related by the fact that every interim equilibrium is an ex ante equilibrium, and in a machine game with a local complexity function, the interim and ex ante equilibria coincide.

They are also related to the Nash equilibrium defined above. To start off, as is more intuitive, every ex ante sequential equilibrium, and hence also every interim sequential equilibrium, is a Nash equilibrium. In the other direction, a NE can also imply an ex ante equilibrium if it is lean i.e., if for all players playing the machine profile of the equilibrium, the local states of their machines are reached with positive probability. It is then also necessary that the belief assessment be compatible with the machine profile.

BMGs have natural applications to known security problems. For example, dealing with covert adversaries as described by Aumann and Lindell [46] (already introduced in this section) can be done by introducing a (two player, for example) mediated game where the honest strategy is to report your input to the mediator and output its reply (for a fixed utility), and where a special punish string can be output by a player to ensure the other receives a low payoff. Then any secure computation with respect to covert adversaries with deterrent $\epsilon$ (the probability of getting caught cheating) is an implementation of the mediator, as the punishment can be calibrated so that the expected utility of a cheating player is the same as that of the honest strategy.

V Distributed Systems and Game Theory

Algorithmic mechanism design (AMD) is concerned with designing games such that self-interested players achieve the game designer's goals, in the same way that distributed systems designers aim, for example, to achieve agreement in the presence of Byzantine players. AMD was first introduced by Nisan and Ronen [58], who proposed that an algorithm designer should ensure that the interests of participants in a distributed setting are best served by behaving correctly i.e., the algorithm designer should aim for incentive compatibility. The framework of Nisan and Ronen is defined for a centralized computation, but it has been extended to distributed algorithmic mechanism design (DAMD) following work by Feigenbaum et al. on cost sharing algorithms for multicast transmissions [59]. This led to further developments and applications of DAMD to interdomain routing, web caching, peer-to-peer file sharing, application layer overlay networks and distributed task allocation, which are summarized in a review by Feigenbaum and Shenker [60].

In this section, we present the work that is at the intersection of Game Theory and Distributed Systems, looking at concrete problems that have been well studied. For each case we will illustrate important concepts and techniques used.

V-A Public goods, free riding and hidden actions

Public goods, which are produced at a cost but available to use for free, naturally occur in distributed systems. In a public goods game, players choose to contribute a certain amount, with all contributions being combined and distributed among all players. Naturally, this can lead to players rationally deciding to contribute less to maximize their utility. They may even contribute nothing, which is generally referred to as free riding.

Varian first considered modeling the reliability of a system as a public good [61]. The reliability can depend either on the total effort (the sum of the efforts exerted by the individuals), on the weakest link (the minimum effort) or on the best shot (the maximum effort). For example, if there is a wall defending a city, its reliability can depend on the sum of all the work provided by the builders (total effort), on the lowest height (weakest link) or, if we consider several walls, on the highest one (best shot). In the case of total effort, the NE corresponds to all players free riding on the player with the highest benefit-cost ratio. Moreover, the effort exerted in the NE is always lower than the social optimum i.e., the best outcome across all players.
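The three cases differ only in how individual efforts aggregate, which a few lines make explicit (the effort levels are our own example values):

```python
# Three ways a system's reliability can aggregate individual efforts [61].
efforts = {"p1": 0.2, "p2": 0.5, "p3": 0.9}

total_effort = sum(efforts.values())   # e.g., total work on a single wall
weakest_link = min(efforts.values())   # e.g., the wall's lowest section
best_shot    = max(efforts.values())   # e.g., the highest of several walls
print(total_effort, weakest_link, best_shot)  # 1.6 0.2 0.9
```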

Peer-to-peer file sharing also provides an interesting case of a networked system that has faced free riding [62]. As explained by Babaioff et al. [63], solutions to this problem could be based on a reputation system, barter or currency. In practice, these solutions are not always implemented in the system itself but rather added in an ad hoc way by users. Another approach, which does not need to keep any long term state information, is to replace indirect reciprocity with direct reciprocity. For example, a file in BitTorrent is partitioned into smaller chunks, requiring repeat interactions among peers and enforcing more collaboration between them [64]. In practice, however, this has been shown not to be very effective, as it is not robust to strategic agents [65] and induces free riding [66]. There is also the problem of dealing with newcomers, as an adversary can create new identities in order to abuse the system. Analyzing the incentives at play, Feldman et al. [67] suggest that penalizing all newcomers may be an effective way of dealing with the problem, as it is not possible to penalize only users abusing the system.

In addition to free riding, there are many other parameters that a selfish player could abuse in a P2P file sharing system, e.g., when to join or leave, who to connect to, untruthfully sharing information, and so on. This is the problem of hidden actions i.e., how peers selfishly behave when their actions are hidden from the rest of the network. In order to analyze the degradation due to hidden actions, Babaioff et al. [63] apply the principal-agent framework, due to the similarity of the hidden action problem with that of moral hazard. This framework is used in economics when one entity, the principal, employs a set of agents to take actions on its behalf. The principal will pay each agent to reward them for their effort, based on each observable action. In order to capture the efficiency of a system in that framework, they define the Price of Unaccountability of a technology as the worst ratio between the principal's utility in the observable-actions case and in the hidden-actions case. Dealing with observable and hidden actions relates to the transparency of the system, which can be approached from a cryptographic point of view to ensure that agents all see the same set of actions (assuming that they are logged) [68].

Another solution, Karma [69], proposes a system for peer-to-peer resource sharing that avoids free riding, based on a combination of a reputation system and consensus protocols. It can be seen as a precursor of Bitcoin, as it considers both a version of PoW and the idea of rewarding peers in the system for their effort.

Another important problem in Distributed Systems where rationality can cause problems is routing [70]. The problem is to find a path that minimizes the latency between a source and a target. One of the difficulties in doing so is that in a decentralized communication network it is not always possible to impose a routing strategy on nodes in order to, for example, regulate the load on a route. As highlighted in Section III, nodes usually act according to their own interests, which can be orthogonal to the overall optimal outcome. A game theoretic measure used by Roughgarden and Tardos in the context of routing is the Price of Anarchy [70], which quantifies how much a system degrades due to selfish behavior. More formally, assuming we have a measure of the efficiency of each outcome, the Price of Anarchy is the ratio between the worst equilibrium and the optimal outcome.
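A worked instance of this measure is Pigou's two-road network, a standard selfish-routing example (the grid search below is our own quick check): one unit of traffic chooses between a road with constant latency 1 and a road whose latency equals its load. Selfish players all take the load-dependent road, while the optimum splits the traffic:

```python
# Pigou's example: total cost when a fraction x of the traffic takes
# the load-dependent road (latency x) and 1-x takes the constant road.
def total_cost(x):
    return x * x + (1 - x) * 1

selfish = total_cost(1.0)                                # everyone on x
optimum = min(total_cost(x / 1000) for x in range(1001)) # grid search
print(selfish / optimum)  # ~4/3, the Price of Anarchy of this network
```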

Inspired by this measure, Grossklags et al. [71] introduce the Price of Uncertainty, which measures the cost of incomplete information compared to that of complete information. An important observation is that, assuming fixed possible losses, which is reasonable in the case of mining where one can at most lose the fixed cost of hardware (and electricity) or stake, the more players there are in the network, the less information matters. This also ties into the value of information i.e., the possible change in utility from gaining information, which is defined for a computational setting by Halpern and Pass [72].

V-B Consensus

We now look at the example of Byzantine Agreement or, in other words, consensus. The approach here is to use incentives to bypass impossibility results on Byzantine Agreement or to improve on existing constructions. In order to apply GT to DS, additional adjustments have to be made. For example, traditional GT considers deviations by only one agent (as in a NE), while in practice agents form coalitions. In addition, in a DS it is important to consider multiple types of failures (e.g., processors may crash) that are not considered in GT.

In order to account for both of these requirements, the BAR model defined by Aiyer et al. [73] introduces three different types of players: Byzantine, altruistic (players that simply follow the rules) and rational players. In this case, the expected utility of a rational player is usually defined by considering the worst configuration of Byzantine players and the worst set of strategies that those Byzantine players could take, assuming all other non-Byzantine players obey the specified strategy profile. The goal of the BAR model is to provide guarantees similar to those of Byzantine fault tolerance to all rational and altruistic nodes, as opposed to all correct nodes. Two classes of protocols meet this goal: Incentive-Compatible Byzantine Fault Tolerant (IC-BFT) protocols and Byzantine Altruistic Rational Tolerant (BART) protocols. IC-BFT protocols, which are a subset of BART protocols, ensure both that the protocol satisfies the security properties and that following it is optimal for rational nodes, while a BART protocol simply ensures the security properties.

Groce et al. [74] introduce similar notions, perfect and statistical security, which state that in the presence of a rational adversary, the protocol still satisfies the security properties (e.g., consistency and correctness for consensus). They show feasibility results of information-theoretic (both perfect and statistical) Byzantine Agreement, assuming a rational adversary and complete or partial knowledge of the adversary preferences. Their protocols are also more efficient than traditional Byzantine Agreement protocols.

In the DAMD setting [75], participants are split into obedient, faulty, strategic and adversarial nodes of the network. Obedient nodes are correctly functioning machines that have no strategic goals and thus simply perform what they are programmed to do. Faulty nodes are incorrectly functioning nodes that also do not have strategic goals but suffer from bugs or misconfiguration. Strategic nodes are selfish agents that aim to maximize their utility. Adversarial nodes are adversaries in the security sense of the word, ranging from honest but curious to Byzantine in their strategic goals.

The split follows the same lines as that of the BAR model, but separates the adversarial nodes from those that are faulty with no strategic goal. Computational restrictions here are expressed with regards to the solution concepts rather than the agents. This ties into topics in computational Game Theory, as a solution to a DAMD problem requires not only that incentive compatibility is achieved, but also that the solution be computationally tractable, which is not always the case. (The tractability of computing Nash equilibria, or approximations, is out of the scope of this paper.) The takeaway is that many solutions on paper are not straightforwardly obtained in an algorithmic setting, whether centralized or decentralized, and even approximations may not be enough.

Robustness

When it comes to adapting a NE to consider coalitions and irrational players, Abraham et al. [76] extend the work of Halpern and Teague [77] to consider multiple players. They introduce the concept of robustness, which encompasses two notions: resilience and immunity. Resilience captures the fact that a coalition of players has no incentive to deviate from the protocol, and is similar to the concept of collusion-proof NE [9]. Immunity captures the fact that even if some irrational players are present in the system, the utilities of the other players are not affected. An equilibrium that is both resilient to coalitions of up to $k$ players and immune to up to $t$ irrational players is then said to be $(k,t)$-robust.

Robustness is a very strong property, but it is hard to achieve in practice. Clement et al. [78] show that no protocol is $(k,t)$-robust if any node may crash and communication is necessary and costly. When designing cryptocurrencies, however, it is not unusual to consider that communication is free.

As discussed in Section IV-A with covert adversaries, it can be helpful to add a form of punishment to enforce correct behavior by rational players. Abraham et al. [76] define a punishment strategy such that, for a coalition of at most $k$ players and up to $t$ irrational players, as long as enough of the remaining players use the punishment strategy and the rest play the equilibrium strategy, the colluding players will be worse off than they would have been if they had played the equilibrium strategy. The idea is that the threat of enough players using the punishment strategy is sufficient to stop players from colluding and deviating from the equilibrium strategy.

Price of Malice

As systems realistically involve rational and irrational players, it is important to consider how rational players react to the presence of irrational players. Moscibroda et al. [79] do this by considering a system with only rational and Byzantine players. They differentiate between an oblivious and a non-oblivious model, i.e., whether selfish players know of the existence of Byzantine players or not. They define a Byzantine Nash equilibrium that extends the NE to the case where irrational players are present. In a Byzantine Nash equilibrium, no selfish player can reduce their perceived expected cost, which depends on their information, by changing their strategy, given that the strategies of all other players are fixed.

In GT and MD, a concept very often discussed is the Price of Anarchy [11], which was introduced in the context of selfish routing. Moscibroda et al. [79] extend this to their setting by defining the Byzantine Price of Anarchy, which quantifies how much an optimal system degrades due to selfish behavior when malicious players are introduced. More formally, it is the ratio between the worst social cost of a Byzantine Nash equilibrium and the minimal social cost, where the social cost of a strategy profile is the sum of all individual costs and measures the optimality of each outcome.

The Price of Malice measures how a system of purely selfish players degrades in the presence of malicious, irrational players. More formally, it is the ratio between the Byzantine Price of Anarchy in the presence of malicious players and the Price of Anarchy of the purely selfish system.

Moscibroda et al. [79] also introduce the idea that Byzantine players can improve the overall system, which they call the fear factor. The intuition is that rational players will adapt their strategies out of fear of the actions of irrational players, making the overall system better. The example where they observe this is virus inoculation: based on the assumption that some players are irrational and will not get vaccinated, rational players are incentivized to get vaccinated. In the case where everyone is rational, universal vaccination is not an equilibrium since, as long as enough people get vaccinated, the rest of the population is safe. Irrational players can thus make the overall system better.
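
The following toy computation is our own illustration, far simpler than the grid-based inoculation game of Moscibroda et al. [79]; the costs C and L and the infection model are hypothetical. It shows how the Price of Anarchy and Price of Malice can be computed by enumeration, with Byzantine players modeled as never inoculating (their worst behavior in this game).

```python
# n players each either inoculate at cost C or risk infection, which
# costs L and strikes an uninoculated player with probability m/n,
# where m is the total number of uninoculated players.

def social_cost(v, n, C=1.0, L=4.0):
    m = n - v                       # v players inoculate, m do not
    return v * C + m * (L * m / n)

def is_equilibrium(v, n, b=0, C=1.0, L=4.0):
    """v rational players inoculate; b Byzantine players never do."""
    m = n - v
    # A rational uninoculated player must not prefer to inoculate...
    if m > b and L * m / n > C:
        return False
    # ...and an inoculated one must not prefer to opt out.
    if v > 0 and L * (m + 1) / n < C:
        return False
    return True

n, b = 12, 5
opt = min(social_cost(v, n) for v in range(n + 1))
worst_ne = max(social_cost(v, n) for v in range(n + 1) if is_equilibrium(v, n))
worst_byz = max(social_cost(v, n) for v in range(n - b + 1)
                if is_equilibrium(v, n, b))
poa = worst_ne / opt                # Price of Anarchy, no Byzantine players
bpoa = worst_byz / opt              # Byzantine Price of Anarchy
print(poa, bpoa, bpoa / poa)        # the last ratio is the Price of Malice
```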

VI Blockchains

We now consider blockchain based cryptocurrencies, which are an important example of systems involving both traditional security and game theoretic aspects.

Incentives here are explicit, as they are monetary and coupled with security. Participants are incentivized to participate in the network (i.e., become a miner) by the financial reward associated with doing so. Moreover, participants are incentivized to “follow the rules” by the cost of creating blocks and the risk of financial loss associated with deviating from them.

Despite this, and probably due to their recent appearance, the community has not agreed on a formal model that fully incorporates incentives and security, although some work is being done in this direction [80, 81, 82] with varying degrees of success. Existing work has highlighted failures in incentive models, e.g., selfish mining [3], the verifier’s dilemma [4] and the miner’s dilemma [5].

In this section we review the work done by the security and distributed systems communities on blockchains that considers game-theoretic notions, and we pinpoint where those failures arise. We also highlight new concepts of interest introduced in this field, as well as how they relate to what we have previously discussed in this paper.

Many incentive related papers present attacks, i.e., deviations from the protocol that rational agents may follow [5, 3, 83, 4], with fewer papers focusing on proving security while incorporating incentives [48]. This general approach of proposing an attack and then a patch for it is similar to the one taken in Cryptography before provable security existed.

VI-A Blockchain Consensus Protocols

Nakamoto’s original Bitcoin paper [84] provided only informal security arguments, but several papers have since formally argued the security of Bitcoin in different models [85, 86, 87], usually based on the simulation setting presented in Section III, but without any consideration of incentives.

In early work in this area, Kroll et al. [88] show that there is a NE in which all players behave consistently with Bitcoin’s reference implementation, along with infinitely many equilibria in which they behave otherwise, e.g., where they all agree to change a rule. Attacks like selfish mining [3, 83, 89] put this into question, showing that their model did not encompass behavior that could realistically occur.

More recently, Badertscher et al. [48] proved the security of Bitcoin in the RPD framework introduced in Section IV-A. Their approach is based on the observation that Bitcoin works despite its flaws, and they prove that Bitcoin is secure by relying on the rationality of players rather than an honest majority. This model inherits the flaws discussed in Section IV-A, e.g., it does not consider fully malicious players. Their model also does not encompass attacks on Bitcoin’s incentive structure, which we now describe.

Selfish mining [3] involves a rational miner increasing their expected utility by withholding their blocks instead of broadcasting them to the rest of the network, giving them an advantage in solving the next proof-of-work and making the rest of the network waste computation by mining on a block that is not the tip of the chain. Inspired by techniques introduced by Gervais et al. [90], Sapirshtein et al. [83] use Markov Decision Processes (MDPs) to find the optimal selfish mining strategy. (MDPs are used to help make decisions in a discrete state space where outcomes are partially random.) They show that with this strategy, an adversary could mount a 51% attack with less than 25% of the computational power.
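
The following Monte Carlo sketch is our own illustration of Eyal and Sirer’s basic withholding strategy, not the optimal MDP strategy of Sapirshtein et al.; the function name and parameters are ours. It estimates the selfish miner’s share of main-chain blocks given its hash power alpha and the tie-breaking parameter gamma.

```python
import random

def selfish_mining_share(alpha, gamma, rounds=1_000_000, seed=1):
    """Estimate the selfish miner's share of main-chain blocks.
    alpha: selfish hash power; gamma: fraction of honest miners that
    mine on the selfish block during a 1-vs-1 race."""
    rng = random.Random(seed)
    lead = 0            # private lead of the selfish miner
    tie = False         # a 1-vs-1 race is being resolved
    s_blocks = h_blocks = 0
    for _ in range(rounds):
        selfish_found = rng.random() < alpha
        if tie:
            if selfish_found:
                s_blocks += 2          # selfish extends own branch, wins race
            elif rng.random() < gamma:
                s_blocks += 1          # honest miners extend the selfish branch
                h_blocks += 1
            else:
                h_blocks += 2          # the honest branch wins
            tie = False
        elif selfish_found:
            lead += 1                  # keep the new block private
        elif lead == 0:
            h_blocks += 1              # honest block, nothing withheld
        elif lead == 1:
            tie = True                 # publish the private block and race
            lead = 0
        elif lead == 2:
            s_blocks += 2              # publish both, orphan the honest block
            lead = 0
        else:
            s_blocks += 1              # reveal one block, stay ahead
            lead -= 1
    return s_blocks / (s_blocks + h_blocks)

# With ~1/3 of the hash power the selfish share exceeds 1/3:
print(selfish_mining_share(alpha=1/3, gamma=0.5))
```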

Another attack, the verifier’s dilemma [4], shows that miners are not incentivized to verify the content of blocks, especially when doing so requires significant computation on their end.

Mining gaps are another type of attack on incentives [91, 92], where the time between the creation of blocks increases because miners wait to include enough transactions (in order to collect the transaction fees). Both papers use simulations to quantify the attacks, using techniques such as MDPs or no-regret learning, where miners update their strategy at every “stage” of the game so as to perform as close as possible to the best strategy, had it been known from the beginning; a minimal learner of this kind is sketched below.
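
The following multiplicative-weights update is a standard no-regret algorithm and our own sketch; the toy payoffs and names are hypothetical and not taken from [91, 92].

```python
import math, random

def multiplicative_weights(payoff, n_actions, rounds, eta=0.1, seed=0):
    """Minimal no-regret learner. `payoff(a, r)` returns the payoff of
    action a at round r, assumed to lie in [0, 1]."""
    rng = random.Random(seed)
    w = [1.0] * n_actions
    total = 0.0
    for r in range(rounds):
        s = sum(w)
        probs = [x / s for x in w]
        a = rng.choices(range(n_actions), probs)[0]   # sample an action
        total += payoff(a, r)
        # Update every action's weight with its counterfactual payoff.
        for i in range(n_actions):
            w[i] *= math.exp(eta * payoff(i, r))
    return total / rounds

# Toy fee-collection game: action 0 = publish a block immediately,
# action 1 = wait for more transaction fees (hypothetical payoffs).
avg = multiplicative_weights(
    lambda a, r: 0.4 if a == 0 else 0.5, n_actions=2, rounds=5000)
print(avg)  # converges towards the better action's payoff, 0.5
```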

Bribery attacks are another family of attacks. They are often thought of as an example of the tragedy of the commons, which describes a situation in which individuals acting selfishly affect the common good [93]. In our context, it captures the fact that miners have to balance their aim of maximizing their profit against the risk of affecting the long-term health of the cryptocurrency they mine, potentially reducing its price and thus their profit.

Bonneau [94] first proposed that an adversary could mount a 51% attack at a much reduced cost by renting the necessary hardware for the length of the attack rather than purchasing it. More generally, a briber could pay existing miners to mine in a certain way, without ever needing to acquire any hardware. This led to a series of papers [95, 96, 97, 98, 99] showing that it is possible to introduce new incentives into an existing cryptocurrency, internally or externally, in ways that do not require trust between the miners and the briber.

Mechanisms like Ethereum’s uncle reward, which allows blocks that were mined but not appended to the blockchain to later be referenced in another block for a reward, can be used to subsidize the cost of bribery attacks [98] and selfish mining [100, 101]. This is unfortunate, as uncle rewards were originally introduced to aid decentralization [102] but have now been found to introduce incentives that work against it, by reducing the mining power required to perform certain attacks.

This puts into question the value of calling a cryptocurrency incentive compatible if new incentives can later be added. A cryptocurrency also does not exist in a vacuum, and external incentives can always manifest in adversarial ways. Goldfinger attacks, proposed by Kroll et al. [88], involve an adversary paying miners of a cryptocurrency to sabotage it by mining empty blocks. In some cases, even the threat of this type of attack can be enough to kill off a cryptocurrency: users would not want their investments to disappear if the attack happened, and thus would not invest. As a Goldfinger attack can be implemented through a smart contract in another cryptocurrency [98], it is not inconceivable that this could be attempted in practice. This clearly shows that incentives from outside the cryptocurrency itself must be considered.

Budish [103] proposes an economic analysis of 51% attacks and double spending, and shows that Nakamoto consensus has inherent economic limitations. In particular, he shows, from a strictly economic point of view, that the security of the blockchain relies on the scarce, non-repurposable resources (i.e., ASICs) used by miners, as opposed to Nakamoto’s vision of “one-CPU-one-vote”, and that the blockchain is vulnerable to sabotage at a cost linear in the amount of specialized computational equipment devoted to its maintenance.
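
The flavor of the flow-cost argument can be seen with back-of-the-envelope arithmetic; the sketch below is ours, with illustrative 2019-era numbers, and only captures the intuition behind the linearity claim, not Budish’s exact model.

```python
# An attack that takes `blocks` blocks to execute is only unprofitable
# if renting a majority of the hash power for that long costs more than
# the attacker's gain.
block_reward = 12.5        # BTC per block (circa 2019)
btc_price = 4_000          # USD, illustrative
blocks = 6                 # confirmations to double spend against

# With free entry, per-block mining cost is roughly per-block mining
# revenue, so renting a majority for `blocks` blocks costs roughly:
rental_cost = blocks * block_reward * btc_price
print(f"attack cost ~ ${rental_cost:,.0f}")   # ~ $300,000

# Any double spend worth more than this is profitable in the model:
# security scales with the flow of payments to miners, not their stock.
```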

Systems based on blockDAGs rather than blockchains have been proposed to address the outlined incentive failures [104, 105, 106, 107, 82, 108]. In this model, the data structure is a Directed Acyclic Graph (DAG) of blocks, meaning that each block can have more than one parent. When creating a new block, a miner points to all the blocks that they are aware of, revealing their view of the blockchain. This exposes more of the decision making of the players and relates to the idea of hidden actions discussed in Section V. Players could still lie and pretend not to be aware of blocks, but this can be disincentivized by the protocol (e.g., a block that references more blocks receives a bigger reward), thus incentivizing miners to reveal the truth and addressing the hidden action problem.

Due to some additional inherent flaws in Bitcoin, e.g., scalability and energy consumption, new design papers are constantly being proposed by both the academic community and industry, but many leave the treatment of incentives as future work. In particular, very few papers propose an incentive scheme alongside their consensus protocol [109, 110, 111, 82]. Moreover, the solution concepts considered in these papers are often overly simplistic, e.g., a coalition-proof NE that does not consider the impact of irrational players [109, 110, 112, 111]. Only Solidus [81] and Fantômette [82] consider robustness (introduced in Section V), although Solidus leaves a formal proof as future work. In his draft work on incentives in Casper [113], Buterin introduces the griefing factor, the ratio between the penalty incurred by the victim of an attack and the penalty incurred by the attacker. The idea of a griefing factor intuitively makes sense, as disputes in the real world can be resolved by fining a party according to the damages caused; from a modelling point of view, it gives a quantifiable punishment that can be explicitly taken into account when computing equilibria. He also proves that following the protocol in Casper is a NE as long as no player holds more than a third of the deposit at stake.

Due to the lack of formal models, more incentive related attacks can be expected. For example, attacks on cryptocurrencies using PoS are already appearing [114, 115, 116], further highlighting the need for better models.

In addition to the consensus rules, another route to improving the incentivization of cryptocurrencies is through their transaction fee market. As pointed out by Lavi et al. [117], “competition in the fee market is what keeps the rational behavior of Bitcoin’s users (partially) aligned with the goal of buying enough security for the entire system” and is thus crucial for its security. This problem is related to auction theory [118], and some of the literature from that field could be applied here.

VI-B Incentivizing Decentralization

Looking back at the initial success of Bitcoin, it has evolved to become different in many ways from the intended design and the idea of “one-CPU-one-vote” envisioned by Nakamoto. Because the price of mining has increased dramatically with the popularity of Bitcoin, miners have formed mining pools, where they join their resources to mine more blocks together. This runs counter to the decentralization envisioned in the original Bitcoin paper. By pooling their resources, miners mine more blocks and share the gains accordingly: their average payoff is in theory the same, but its variance is reduced, which, given the depreciation of their hardware, effectively increases their payoff (as the sketch below illustrates). Mining hardware itself has also changed, as ASICs have become more popular and are now essential for profitable mining.
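
A quick calculation (ours, with hypothetical numbers) shows the variance-reduction effect: under a proportional-reward pool the expected payoff per block interval is unchanged, while the variance shrinks roughly by the pool size.

```python
# Compare a lone miner's payoff to the same miner's payoff inside a
# pool of n_pool equally powerful miners splitting rewards evenly.
p_solo = 1e-6        # probability a lone miner wins a given block
reward = 12.5        # block reward
n_pool = 10_000      # miners of equal power pooling together

# Solo mining: Bernoulli(p) * reward.
mean_solo = p_solo * reward
var_solo = p_solo * (1 - p_solo) * reward ** 2

# Pool: Bernoulli(n*p) * reward, split n ways.
p_pool = n_pool * p_solo
mean_pool = p_pool * (reward / n_pool)
var_pool = p_pool * (1 - p_pool) * (reward / n_pool) ** 2

print(mean_solo, mean_pool)     # identical expectations
print(var_solo / var_pool)      # variance ratio ~ n_pool
```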

This is obviously a big threat to the security of cryptocurrencies, as it could enable 51% attacks, which have already happened to other cryptocurrencies. As of January 2019, the most significant such attack targeted Ethereum Classic, the 16th largest cryptocurrency by market capitalization [119]. As these attacks have gradually targeted cryptocurrencies ranked higher on the list, this trend can be expected to continue, particularly as market downturns lower the cost of mounting such attacks.

The centralization of cryptocurrencies has been empirically analyzed by Gencer et al. [120], who measured how decentralized the Bitcoin and Ethereum networks are. They found that three or four mining pools control more than half of the hash power of each network. This highlights the need for further research studying this centralization and how decentralization can be maintained in practice.

Several papers propose a game-theoretic analysis of mining pools. Arnosti et al. [121] model hardware investments by miners as a game, and Leonardos et al. [122] model mining as an oceanic game, a class of games used to analyze decision making in settings with a small number of “big” players and a large number of individually insignificant players. Lewenberg et al. [123] model the mining game as a transferable utility coalitional game, which allows players to form coalitions and to divide their payoffs amongst themselves. Such a game is defined by a set of players and a characteristic function that specifies the monetary value that any coalition can achieve when cooperating. As a solution concept, they use the core: the set of feasible allocations that cannot be improved upon by any coalition, which describes stability in coalitional games. It captures the condition under which agents would want to form coalitions rather than not, i.e., whether there exists any subcoalition whose agents could have gained more on their own (a brute-force membership check is sketched after this paragraph). This concept is often contrasted with the Shapley value in Game Theory, which defines a fair way to divide the payment among the members of a coalition based on their respective contributions, but without any consideration for stability, unlike the core. Lewenberg et al. additionally define a defection function that, intuitively, captures the fact that not every subset of agents can collaborate and form a new coalition; they focus on defection functions that allow one coalition to merge with a subset of another coalition, or a subset of a coalition to split off from it. They show that mining pools are generally unstable: no matter how the revenue is shared, some miners would be incentivized to switch to a different pool.
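
As an illustration, the following brute-force check is ours; the three-miner characteristic function is hypothetical and not taken from [123]. It tests whether an allocation lies in the core of a small coalitional game.

```python
from itertools import combinations

def in_core(players, v, alloc, tol=1e-9):
    """Check whether allocation `alloc` lies in the core of the game
    with characteristic function `v`; exponential in the player count,
    so only usable for tiny examples."""
    # Efficiency: the grand coalition's value is fully distributed.
    if abs(sum(alloc.values()) - v(frozenset(players))) > tol:
        return False
    # Stability: no sub-coalition can do better on its own.
    for size in range(1, len(players)):
        for coalition in combinations(players, size):
            if v(frozenset(coalition)) > sum(alloc[p] for p in coalition) + tol:
                return False
    return True

# Toy 3-miner game: a coalition's value grows superadditively with its
# combined hash power (numbers are hypothetical).
power = {"a": 5, "b": 3, "c": 2}
v = lambda S: sum(power[p] for p in S) ** 2 / 10
alloc = {"a": 5.0, "b": 3.0, "c": 2.0}
print(in_core(list(power), v, alloc))   # True: no coalition gains by leaving
```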

Eyal [5] also studies the stability of mining pools and proposes an attack in which pools infiltrate other pools to sabotage them, joining the pool and earning rewards without actually contributing, i.e., not revealing when they find a PoW solution. There exist configurations in which this attack constitutes a NE, an example of a tragedy of the commons.

Another way mining pools can attack each other is through distributed denial of service (DDoS) attacks, with the aim of lowering the expected success of a competing pool (large ones in particular) rather than increasing their own computational power [124]. Mining pools are frequently subject to denial of service: over a two-year period, Vasek et al. [125] found that 62.5% of mining pools accounting for more than 5% of the Bitcoin network’s power had been targeted, while only 17.1% of smaller pools had been. This has general implications for the mining ecosystem, as a peaceful equilibrium would require an increase in the cost of attacks and in the miner migration rate (i.e., miners switching pools), with no pool being significantly more attractive than the others [126].

Brünjes et al. [127] introduce and study reward sharing schemes that promote the fair formation of stake pools in a PoS blockchain. They argue that a NE only considers myopic players, i.e., players who ignore the responses to their own actions. As a result, they consider the notion of a non-myopic Nash equilibrium (based on previous work by Fiat et al. [128]), which captures the effects that a certain move will have, anticipating the strategic responses of the other players.

Luu et al. [80] use smart contracts to decentralize mining, charging mining fees lower than those of centralized mining pools. Miller et al. [129] present several definitions and constructions for “non-outsourceable” puzzles. Both of these papers use informal arguments to justify their constructions, as opposed to a formal model.

This once more highlights the attack-then-patch trend and is another argument for a strong incentive model that deals with (de)centralization.

VI-C Payment Channels

In order to overcome the scalability issues of Bitcoin, a new concept, referred to as layer 2 or payment channels, has been proposed [130]. The idea is that, since the network cannot handle enough transactions, participants can take some transactions off-chain, i.e., outside the main network, by opening a channel between themselves. This is done by locking a deposit on the blockchain, opening the channel and transacting on it, then settling the overall balance of all transactions on-chain, so that the blockchain sees only two transactions (locking the funds and settling the balance).

Several designs have been proposed to achieve this [130, 131, 132]. The high-level idea is that participants create evidence of each of their transactions (e.g., using signatures) so that whenever someone tries to cheat, the other party can prove it and receive the cheating party’s deposit as compensation. For example, if Alice pays 1 bitcoin to Bob, who then pays 2 bitcoin back to Alice, Bob could try to cheat by broadcasting the first transaction, with the obsolete balances, to the blockchain; Alice could then broadcast the later transaction signed by Bob to prove that cheating has occurred (a toy model of this punishment logic is sketched below).
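
The following toy model is ours: real designs [130, 131, 132] use digital signatures and on-chain dispute periods, which we stand in for with plain tuples and a single `settle` function. It illustrates why broadcasting a stale channel state is irrational.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    seq: int              # monotonically increasing version number
    balance_alice: int
    balance_bob: int

def sign(party, state):
    return (party, state)   # stand-in for a real digital signature

def settle(broadcast, evidence, deposit=10):
    """`broadcast` is the state the closer put on chain; `evidence` is
    the newest state the counterparty can show, signed by the closer."""
    signer, newest = evidence
    if newest.seq > broadcast.seq:
        # Cheating proven: the honest party also takes the deposit.
        print(f"{signer} cheated and forfeits the {deposit} coin deposit")
        return newest
    return broadcast

old = State(seq=1, balance_alice=9, balance_bob=11)   # after Alice pays 1
new = State(seq=2, balance_alice=11, balance_bob=9)   # after Bob pays 2
# Bob tries to close the channel with the stale state...
final = settle(broadcast=old, evidence=sign("bob", new))
print(final)   # ...but Alice's evidence yields the up-to-date balances
```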

In this setting, security relies on the fact that cheating is easily detectable thanks to cryptographic evidence, and on the financial punishment associated with it. So, again, incentives are tightly linked to security. A few papers [131, 133] present formal models to analyze the security of these payment channels. They are based on the UC model presented in Section III but do not consider utilities, even though these are an important part of the security of the system.

In order to facilitate payment channels, a routing solution has been proposed [134]. The idea is that if Alice wants to open a channel with Bob, and both Alice and Bob already have an existing channel open with Charlie, then Charlie can act as a router between Alice and Bob, without them needing to open a new channel. Difficulties usually arise in this case due to the collateral that must be locked by everyone on the routing path. This work relates to the work on selfish routing discussed in Section V.

A problem with payment channels is the requirement for participants to be online to detect cheating, i.e., the cheater broadcasting an old balance to the blockchain. McCorry et al. [135] propose delegating this task to a third party, but it is unclear how incentives should be designed with regard to that third party.

VI-D Fairness

Fairness in cryptocurrencies is implicitly captured by the notion of chain quality introduced by Garay et al. [87], which informally states that an adversary should not contribute more blocks to the blockchain than they are supposed to, i.e., more than proportionally to their computational power in the PoW setting.

In the PoS setting, Fanti et al. [114] define the notion of equitability, which corresponds to how much a node’s initial investment (i.e., stake) can grow or shrink, and address the “rich get richer” problem in PoS cryptocurrencies (which arguably also exists in PoW cryptocurrencies). They propose a geometric reward function that they prove is more equitable, i.e., under which the distribution of stake stays more stable. In general, the problem of the compounding of wealth is reinforced by the fact that early adopters of a cryptocurrency have a significant advantage, benefiting from the ease of mining (or staking) and much cheaper coin prices in the early days. Dealing with this is more of a macroeconomic problem that, to the best of our knowledge, has not yet received any attention.
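
A small simulation (ours, not the model of Fanti et al. [114]; the parameters are hypothetical) illustrates the effect under a constant block reward: each round one staker is chosen proportionally to stake, so the stake distribution spreads out over time even though its expectation is unchanged.

```python
import random

def compound_stake(share, rounds=10_000, reward=1e-4, seed=3):
    """Each round, one of two stakers (a large staker vs the rest of
    the network) is chosen proportionally to stake and receives a
    constant reward; returns the large staker's final share."""
    rng = random.Random(seed)
    stake = [share, 1.0 - share]
    for _ in range(rounds):
        total = sum(stake)
        winner = 0 if rng.random() < stake[0] / total else 1
        stake[winner] += reward
    return stake[0] / sum(stake)

# Starting from 1/3 of the stake, the expected share stays 1/3 but
# individual runs drift apart, which is exactly the equitability issue.
runs = [compound_stake(1/3, seed=s) for s in range(10)]
print(min(runs), sum(runs) / len(runs), max(runs))
```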

VI-E Summary

We see that the blockchain community has not made much use of the concepts introduced in Sections IV and V. Most of the work done in this space either considers only rational adversaries [48], considers only cryptographic properties and thus fully malicious adversaries without any incentives [87, 85, 86], or considers solution concepts such as the NE that are too basic for this setting [88]. Consequently, new attacks on incentives are being discovered at a high rate.

Finally, we note that all of the above analysis holds assuming that Bitcoin (or the underlying cryptocurrency considered) indeed has monetary value. The question of what confers that value on a cryptocurrency is an interesting open problem, given the volatility of cryptocurrency exchange rates.

VII Discussion

We now discuss the ways in which the concepts presented in the previous sections could be used.

First, we have already argued that the NE is too weak a solution concept for blockchain protocols because it does not consider coalitions or fully malicious players. We argue that robustness, presented in Section V, is a better-suited solution concept. So far this concept has only been used in Fantômette [82] and a preliminary version of Solidus [81].

Even though it does not consider malicious adversaries, the RPD framework would still work well in the case of payment channels. There, even a fully malicious adversary can only cheat in ways that allow the other party to claim all of their deposit, leaving the non-cheating party better off than before. The RPD model is well adapted here, as even an irrational adversary cannot harm an honest player. Moreover, as it is clear how the adversary can cheat (by broadcasting an old transaction), RPD is useful since it provides a relaxed functionality that specifically allows bad behaviour, with a cost attached: in the case of payment channels, the whole deposit of the cheating player is taken away. Furthermore, the formal models that have been proposed for payment channels are already based on the UC model [131, 133], so extending them to consider RPD should not be too hard, as the tedious part is very often the UC emulation.

Alongside RPD, we also presented work on BMG. This could have very fruitful applications to cryptocurrencies, given that many situations, e.g., mining, are straightforwardly modeled as games involving computation. The equivalence theorems between secure computation and game-theoretic implementations also give a direct connection to standard notions of security, although they are given for specific variants of implementations and secure computation that may not match those required in practice. Further results in this domain could nonetheless prove very useful. As the cryptographic side is founded on the ideas that also form the basis of simulation-based Cryptography and the UC model, it may be possible to connect both frameworks. The given example of a game that deals with covert adversaries also shows that this can be used in practice, even if it is not clear that applying this method will produce more efficient solutions to problems.

In Bitcoin, the blockchain takes up a few hundred gigabytes of space, and most users do not run a full node, which requires storing the whole blockchain and verifying every block. Instead, they use light clients (i.e., simplified payment verification, SPV) that store only the block headers and connect to full nodes that they must trust, which is not ideal. This problem can be compared to the problem of free riding in P2P file systems, and ideas from that context could be used to help incentivize users to run full nodes. In the same vein, Bitcoin’s PoW is comparable to the total-effort and best-shot public goods discussed in Section V. The security of the system depends both on the total hashrate of the network (it is less expensive to attack a network with a smaller hashrate) and on the maximum hashrate of a single player or pool (as a single player holding half, or close to half, of the total hash power represents a threat to the system).

The fear factor, introduced in Section V, could also be used. As explained in Section VI with the verifier’s dilemma [4], verifying the content of blocks is not an equilibrium; however, some users may be motivated to do it by fear that others will not. For example, a user may choose to run a full node rather than an SPV client, which would improve the overall system. Under this assumption, an equilibrium may be reachable.

The Price of Malice could also inspire a new measure that quantifies the trade-off between blockchain based systems and traditional consensus protocols. Indeed, blockchain-based systems are intended to be more scalable, as they are meant to handle open participation, compared to classical consensus, which requires many messages to be exchanged; but in the case of Bitcoin this comes at the price of PoW, so there is an incurred economic cost.

As differentiating between malicious behavior and genuine latency is hard, especially in PoS systems, the Price of Unaccountability could be a useful measure in evaluating them. Since this problem is related to that of hidden actions, we could also explore using the principal-agent model there, as presented in Section V.

Cryptocurrencies are the first family of distributed systems to include the creation of new money, so transfer functions could be used here, e.g., by the designer to find the optimal reward scheme of a cryptocurrency. Could we find an equivalent of the revelation principle here, for example by using blockDAGs that reveal more about the decision making of a player?

Another avenue for the design of blockchain based systems could be to investigate implementing direct reciprocity, as opposed to indirect reciprocity, to incentivize good behavior and force miners to cooperate more, as discussed in Section V.

Lastly, something missing from the blockchain literature is a general framework for comparing and evaluating consensus protocols. Most of them operate under very different assumptions, and the outcomes would differ greatly if those assumptions changed. For example, Algorand does not consider rational players, relying on stronger assumptions than other protocols, and it is not clear how rational players could behave in this setting. Others do not consider the centralization problem or the compounding of wealth.

VIII Conclusion

Security researchers and cryptographers have been interested in incorporating game theoretic notions into their models for many years. In this work, we have highlighted existing concepts and explained how and where they could be used for specific applications.

The approach taken in most of the papers described here is to extend a field by, for example, incorporating utility functions (Rational Cryptography) or computation (Bayesian Machine Games). No completely new theory has appeared, and it would be interesting to see a theory built from the ground up to address considerations of incentives at all stages of the design process, rather than adapting existing models. We hope that this paper will provide some inspiration towards new formal models.

Acknowledgment

Alexander Hicks is supported by OneSpan (https://www.onespan.com) and UCL through an EPSRC Research Studentship.

References

  • [1] R. Anderson, “Why information security is hard - an economic perspective,” in Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC).   IEEE, 2001, pp. 358–365.
  • [2] R. Anderson and T. Moore, “The economics of information security,” Science, vol. 314, no. 5799, pp. 610–613, 2006.
  • [3] I. Eyal and E. G. Sirer, “Majority is not enough: Bitcoin mining is vulnerable,” Commun. ACM, vol. 61, no. 7, pp. 95–102, jun 2018. [Online]. Available: http://doi.acm.org/10.1145/3212998
  • [4] L. Luu, J. Teutsch, R. Kulkarni, and P. Saxena, “Demystifying incentives in the consensus computer,” in Proceedings of the 22Nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’15.   New York, NY, USA: ACM, 2015, pp. 706–719. [Online]. Available: http://doi.acm.org/10.1145/2810103.2813659
  • [5] I. Eyal, “The miner’s dilemma,” in Proceedings of the 2015 IEEE Symposium on Security and Privacy, ser. SP ’15.   Washington, DC, USA: IEEE Computer Society, 2015, pp. 89–103. [Online]. Available: https://doi.org/10.1109/SP.2015.13
  • [6] N. Linial, “Games computers play: Game-theoretic aspects of computing,” 01 2002.
  • [7] J. Katz, “Bridging game theory and cryptography: Recent results and future directions,” in Proceedings of the 5th Conference on Theory of Cryptography, ser. TCC’08.   Berlin, Heidelberg: Springer-Verlag, 2008, pp. 251–272. [Online]. Available: http://dl.acm.org/citation.cfm?id=1802614.1802635
  • [8] J. Y. Halpern, “Computer science and game theory: A brief survey,” CoRR, vol. abs/cs/0703148, 2007. [Online]. Available: http://arxiv.org/abs/cs/0703148
  • [9] ——, “Beyond Nash equilibrium: Solution concepts for the 21st century,” CoRR, vol. abs/0806.2139, 2008. [Online]. Available: http://arxiv.org/abs/0806.2139
  • [10] Y. Shoham, “Computer science and game theory,” Communications of the ACM, vol. 51, no. 8, pp. 74–79, 2008.
  • [11] T. Roughgarden, “Algorithmic game theory,” Communications of the ACM, vol. 53, no. 7, pp. 78–86, 2010.
  • [12] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, Algorithmic game theory.   Cambridge University Press, 2007.
  • [13] M. Tambe, Security and game theory: algorithms, deployed systems, lessons learned.   Cambridge university press, 2011.
  • [14] J. Grossklags, N. Christin, and J. Chuang, “Secure or insure?: A game-theoretic analysis of information security games,” in Proceedings of the 17th International Conference on World Wide Web, ser. WWW ’08.   New York, NY, USA: ACM, 2008, pp. 209–218. [Online]. Available: http://doi.acm.org/10.1145/1367497.1367526
  • [15] S. Bano, A. Sonnino, M. Al-Bassam, S. Azouvi, P. McCorry, S. Meiklejohn, and G. Danezis, “Consensus in the age of blockchains,” arXiv preprint arXiv:1711.03936, 2017.
  • [16] J. Garay and A. Kiayias, “Sok: A consensus taxonomy in the blockchain era.”
  • [17] W. Wang, D. T. Hoang, Z. Xiong, D. Niyato, P. Wang, P. Hu, and Y. Wen, “A survey on consensus mechanisms and mining management in blockchain networks,” arXiv preprint arXiv:1805.02707, 2018.
  • [18] N. Stifter, A. Judmayer, P. Schindler, A. Zamyatin, and E. Weippl, “Agreement with satoshi–on the formalization of nakamoto consensus.”
  • [19] J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, and E. W. Felten, “SoK: Research perspectives and challenges for bitcoin and cryptocurrencies,” in 2015 IEEE Symposium on Security and Privacy, May 2015, pp. 104–121.
  • [20] C. Cachin and M. Vukolić, “Blockchains consensus protocols in the wild,” arXiv preprint arXiv:1707.01873, 2017.
  • [21] Z. Liu, N. C. Luong, W. Wang, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim, “A survey on blockchain: A game theoretical perspective,” IEEE Access, vol. 7, pp. 47 615–47 643, 2019.
  • [22] G. Owen, Game theory.   Saunders, 1968. [Online]. Available: https://books.google.co.uk/books?id=v6lMAAAAMAAJ
  • [23] M. O. Jackson, “Mechanism theory,” California Institute of Technology, Survey, 2003.
  • [24] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou, “The complexity of computing a nash equilibrium,” SIAM Journal on Computing, vol. 39, no. 1, pp. 195–259, 2009.
  • [25] S. Kuhn, “Prisoner’s dilemma,” in The Stanford Encyclopedia of Philosophy, spring 2017 ed., E. N. Zalta, Ed.   Metaphysics Research Lab, Stanford University, 2017.
  • [26] U. Gneezy and A. Rustichini, “A fine is a price,” The Journal of Legal Studies, vol. 29, no. 1, pp. 1–17, 2000.
  • [27] O. Goldreich, S. Micali, and A. Wigderson, “How to play any mental game,” in Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, ser. STOC ’87.   New York, NY, USA: ACM, 1987, pp. 218–229. [Online]. Available: http://doi.acm.org/10.1145/28395.28420
  • [28] S. Micali and P. Rogaway, “Secure computation,” in Annual International Cryptology Conference.   Springer, 1991, pp. 392–404.
  • [29] Y. Lindell, “How to simulate it - A tutorial on the simulation proof technique,” Cryptology ePrint Archive, Report 2016/046, 2016, http://eprint.iacr.org/2016/046.
  • [30] R. Canetti, “Universally composable security: A new paradigm for cryptographic protocols,” 2001, pp. 136–145.
  • [31] M. Castro, B. Liskov et al., “Practical byzantine fault tolerance,” in OSDI, vol. 99, 1999, pp. 173–186.
  • [32] C. Troncoso, M. Isaakidis, G. Danezis, and H. Halpin, “Systematizing decentralization and privacy: Lessons from 15 years of research and deployments,” Proceedings on Privacy Enhancing Technologies, vol. 2017, no. 4, pp. 404 – 426, 2017. [Online]. Available: https://content.sciendo.com/view/journals/popets/2017/4/article-p404.xml
  • [33] S. Azouvi, A. Hicks, and S. J. Murdoch, “Incentives in security protocols,” in Security Protocols XXVI, V. Matyáš, P. Švenda, F. Stajano, B. Christianson, and J. Anderson, Eds.   Cham: Springer International Publishing, 2018, pp. 132–141.
  • [34] “Crypto51,” https://www.crypto51.app/.
  • [35] “Ethereum,” https://www.ethereum.org/.
  • [36] https://bitcointalk.org/index.php?topic=27787.0, 2011.
  • [37] J. Garay, J. Katz, U. Maurer, B. Tackmann, and V. Zikas, “Rational protocol design: Cryptography against incentive-driven adversaries,” Cryptology ePrint Archive, Report 2013/496, 2013, http://eprint.iacr.org/2013/496.
  • [38] Y. Dodis, S. Halevi, and T. Rabin, “A cryptographic solution to a game theoretic problem,” in Advances in Cryptology — CRYPTO 2000, M. Bellare, Ed.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2000, pp. 112–130.
  • [39] J. Halpern and V. Teague, “Rational secret sharing and multiparty computation: Extended abstract,” in Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, ser. STOC ’04.   New York, NY, USA: ACM, 2004, pp. 623–632. [Online]. Available: http://doi.acm.org/10.1145/1007352.1007447
  • [40] G. Kol and M. Naor, “Games for exchanging information,” in Proceedings of the fortieth annual ACM symposium on Theory of computing.   ACM, 2008, pp. 423–432.
  • [41] ——, “Cryptography and game theory: Designing protocols for exchanging information,” in Theory of Cryptography Conference.   Springer, 2008, pp. 320–339.
  • [42] S. D. Gordon and J. Katz, “Rational secret sharing, revisited,” in International Conference on Security and Cryptography for Networks.   Springer, 2006, pp. 229–241.
  • [43] G. Fuchsbauer, J. Katz, and D. Naccache, “Efficient rational secret sharing in standard communication networks,” in Theory of Cryptography Conference.   Springer, 2010, pp. 419–436.
  • [44] G. Asharov, R. Canetti, and C. Hazay, “Towards a game theoretic view of secure computation,” in Advances in Cryptology – EUROCRYPT 2011, K. G. Paterson, Ed.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 426–445.
  • [45] Y. Dodis, T. Rabin et al., “Cryptography and game theory,” Algorithmic Game Theory, pp. 181–207, 2007.
  • [46] Y. Aumann and Y. Lindell, “Security against covert adversaries: Efficient protocols for realistic adversaries,” in Theory of Cryptography Conference.   Springer, 2007, pp. 137–156.
  • [47] A. Lysyanskaya and N. Triandopoulos, “Rationality and adversarial behavior in multi-party computation,” in Advances in Cryptology - CRYPTO 2006, C. Dwork, Ed.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 180–197.
  • [48] C. Badertscher, J. Garay, U. Maurer, D. Tschudi, and V. Zikas, “But why does it work? a rational protocol design treatment of bitcoin,” Cryptology ePrint Archive, Report 2018/138, 2018.
  • [49] J. Garay, J. Katz, B. Tackmann, and V. Zikas, “How fair is your protocol? a utility-based approach to protocol optimality,” Cryptology ePrint Archive, Report 2015/187, 2015, https://eprint.iacr.org/2015/187.
  • [50] J. Y. Halpern and R. Pass, “Game Theory with Costly Computation,” ArXiv e-prints, Aug. 2008.
  • [51] R. Pass and J. Halpern, “Game theory with costly computation: Formulation and application to protocol security,” in Proceedings of the Behavioral and Quantitative Game Theory: Conference on Future Directions, ser. BQGT ’10.   New York, NY, USA: ACM, 2010, pp. 89:1–89:1. [Online]. Available: http://doi.acm.org/10.1145/1807406.1807495
  • [52] J. Y. Halpern and R. Pass, “Algorithmic rationality: Adding cost of computation to game theory,” SIGecom Exch., vol. 10, no. 2, pp. 9–15, Jun. 2011. [Online]. Available: http://doi.acm.org/10.1145/1998549.1998551
  • [53] ——, “Algorithmic rationality: Game theory with costly computation,” Journal of Economic Theory, vol. 156, pp. 246–268, 2015.
  • [54] J. Y. Halpern, R. Pass, and D. Reichman, “On the Non-Existence of Nash Equilibrium in Games with Resource-Bounded Players,” ArXiv e-prints, Jul. 2015.
  • [55] M. Ben-Or, S. Goldwasser, and A. Wigderson, “Completeness theorems for non-cryptographic fault-tolerant distributed computation,” in Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, ser. STOC ’88.   New York, NY, USA: ACM, 1988, pp. 1–10. [Online]. Available: http://doi.acm.org/10.1145/62212.62213
  • [56] J. Y. Halpern, R. Pass, and L. Seeman, “Computational extensive-form games,” in Proceedings of the 2016 ACM Conference on Economics and Computation, ser. EC ’16.   New York, NY, USA: ACM, 2016, pp. 681–698. [Online]. Available: http://doi.acm.org/10.1145/2940716.2940733
  • [57] J. Y. Halpern and R. Pass, “Sequential equilibrium in computational games.” in IJCAI, 2013, pp. 171–176.
  • [58] N. Nisan and A. Ronen, “Algorithmic mechanism design,” Games and Economic behavior, vol. 35, no. 1-2, pp. 166–196, 2001.
  • [59] J. Feigenbaum, C. H. Papadimitriou, and S. Shenker, “Sharing the cost of multicast transmissions,” Journal of Computer and System Sciences, vol. 63, no. 1, pp. 21–41, 2001.
  • [60] J. Feigenbaum and S. Shenker, “Distributed algorithmic mechanism design: Recent results and future directions,” 2002.
  • [61] H. Varian, “System reliability and free riding,” in Economics of information security.   Springer, 2004, pp. 1–15.
  • [62] M. Feldman and J. Chuang, “Overcoming free-riding behavior in peer-to-peer systems,” ACM sigecom exchanges, vol. 5, no. 4, pp. 41–50, 2005.
  • [63] M. Babaioff, J. Chuang, and M. Feldman, “Incentives in peer-to-peer systems,” Algorithmic Game Theory, pp. 593–611, 2007.
  • [64] B. Cohen, “Incentives build robustness in bittorrent,” in Workshop on Economics of Peer-to-Peer systems, vol. 6, 2003, pp. 68–72.
  • [65] M. Piatek, T. Isdal, T. Anderson, A. Krishnamurthy, and A. Venkataramani, “Do incentives build robustness in bittorrent,” in Proc. of NSDI, vol. 7, 2007.
  • [66] S. Jun and M. Ahamad, “Incentives in bittorrent induce free riding,” in Proceedings of the 2005 ACM SIGCOMM workshop on Economics of peer-to-peer systems.   ACM, 2005, pp. 116–121.
  • [67] M. Feldman, C. Papadimitriou, J. Chuang, and I. Stoica, “Free-riding and whitewashing in peer-to-peer systems,” IEEE Journal on selected areas in communications, vol. 24, no. 5, pp. 1010–1019, 2006.
  • [68] M. Chase and S. Meiklejohn, “Transparency overlays and applications,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’16.   New York, NY, USA: ACM, 2016, pp. 168–179. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978404
  • [69] V. Vishnumurthy, S. Chandrakumar, and E. G. Sirer, “Karma: A secure economic framework for peer-to-peer resource sharing,” in Workshop on Economics of Peer-to-peer Systems, vol. 35, no. 6, 2003.
  • [70] T. Roughgarden and É. Tardos, “How bad is selfish routing?” Journal of the ACM (JACM), vol. 49, no. 2, pp. 236–259, 2002.
  • [71] J. Grossklags, B. Johnson, and N. Christin, “The price of uncertainty in security games,” in Economics of Information Security and Privacy.   Springer, 2010, pp. 9–32.
  • [72] J. Y. Halpern, “I don’t want to think about it now: Decision theory with costly computation,” in Twelfth international conference on the principles of knowledge representation and reasoning, 2010.
  • [73] A. S. Aiyer, L. Alvisi, A. Clement, M. Dahlin, J.-P. Martin, and C. Porth, “BAR Fault Tolerance for cooperative services,” SIGOPS Oper. Syst. Rev., vol. 39, no. 5, pp. 45–58, oct 2005. [Online]. Available: http://doi.acm.org/10.1145/1095809.1095816
  • [74] A. Groce, J. Katz, A. Thiruvengadam, and V. Zikas, “Byzantine agreement with a rational adversary,” in Automata, Languages, and Programming, A. Czumaj, K. Mehlhorn, A. Pitts, and R. Wattenhofer, Eds.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 561–572.
  • [75] J. Feigenbaum, M. Schapira, and S. Shenker, “Distributed algorithmic mechanism design,” Algorithmic Game Theory, pp. 363–384, 2007.
  • [76] I. Abraham, D. Dolev, R. Gonen, and J. Halpern, “Distributed computing meets game theory: Robust mechanisms for rational secret sharing and multiparty computation,” in Proceedings of the Twenty-fifth Annual ACM Symposium on Principles of Distributed Computing, ser. PODC ’06.   New York, NY, USA: ACM, 2006, pp. 53–62. [Online]. Available: http://doi.acm.org/10.1145/1146381.1146393
  • [77] J. Halpern and V. Teague, “Rational secret sharing and multiparty computation: Extended abstract,” in Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, ser. STOC ’04.   New York, NY, USA: ACM, 2004, pp. 623–632. [Online]. Available: http://doi.acm.org/10.1145/1007352.1007447
  • [78] A. Clement, J. Napper, H. Li, J. Martin, L. Alvisi, and M. Dahlin, “Theory of BAR games,” in Brief Announcements: Proceedings of the Symposium on Principles of Distributed Computing (PODC 2007 ), Aug 2007.
  • [79] T. Moscibroda, S. Schmid, and R. Wattenhofer, “When selfish meets evil: Byzantine players in a virus inoculation game,” in Proceedings of the Twenty-fifth Annual ACM Symposium on Principles of Distributed Computing, ser. PODC ’06.   New York, NY, USA: ACM, 2006, pp. 35–44. [Online]. Available: http://doi.acm.org/10.1145/1146381.1146391
  • [80] L. Luu, Y. Velner, J. Teutsch, and P. Saxena, “Smartpool: Practical decentralized pooled mining,” in 26th USENIX Security Symposium (USENIX Security 17).   Vancouver, BC: USENIX Association, 2017, pp. 1409–1426. [Online]. Available: https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/luu
  • [81] I. Abraham, D. Malkhi, K. Nayak, L. Ren, and A. Spiegelman, “Solidus: An incentive-compatible cryptocurrency based on permissionless Byzantine consensus,” CoRR, vol. abs/1612.02916, 2016. [Online]. Available: http://arxiv.org/abs/1612.02916
  • [82] S. Azouvi, P. McCorry, and S. Meiklejohn, “Betting on blockchain consensus with fantomette,” CoRR, vol. abs/1805.06786, 2018. [Online]. Available: http://arxiv.org/abs/1805.06786
  • [83] A. Sapirshtein, Y. Sompolinsky, and A. Zohar, “Optimal selfish mining strategies in bitcoin,” in Financial Cryptography and Data Security, J. Grossklags and B. Preneel, Eds.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2017, pp. 515–532.
  • [84] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” 2008, bitcoin.org/bitcoin.pdf.
  • [85] R. Pass, L. Seeman, and A. Shelat, “Analysis of the blockchain protocol in asynchronous networks,” in Annual International Conference on the Theory and Applications of Cryptographic Techniques.   Springer, 2017, pp. 643–673.
  • [86] L. Kiffer, R. Rajaraman, and a. shelat, “A better method to analyze blockchain consistency,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’18.   New York, NY, USA: ACM, 2018, pp. 729–744. [Online]. Available: http://doi.acm.org/10.1145/3243734.3243814
  • [87] J. Garay, A. Kiayias, and N. Leonardos, “The bitcoin backbone protocol: Analysis and applications,” in Advances in Cryptology - EUROCRYPT 2015, E. Oswald and M. Fischlin, Eds.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2015, pp. 281–310.
  • [88] J. A. Kroll, I. C. Davey, and E. W. Felten, “The economics of bitcoin mining, or bitcoin in the presence of adversaries,” in Proceedings of WEIS, 2013.
  • [89] K. Nayak, S. Kumar, A. Miller, and E. Shi, “Stubborn mining: Generalizing selfish mining and combining with an eclipse attack,” in Security and Privacy (EuroS&P), 2016 IEEE European Symposium on.   IEEE, 2016, pp. 305–320.
  • [90] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun, “On the security and performance of proof of work blockchains,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’16.   New York, NY, USA: ACM, 2016, pp. 3–16. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978341
  • [91] M. Carlsten, H. Kalodner, S. M. Weinberg, and A. Narayanan, “On the instability of bitcoin without the block reward,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’16.   New York, NY, USA: ACM, 2016, pp. 154–167. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978408
  • [92] I. Tsabary and I. Eyal, “The gap game,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’18.   New York, NY, USA: ACM, 2018, pp. 713–728. [Online]. Available: http://doi.acm.org/10.1145/3243734.3243737
  • [93] G. Hardin, “The tragedy of the commons,” science, vol. 162, no. 3859, pp. 1243–1248, 1968.
  • [94] J. Bonneau, “Why buy when you can rent? - Bribery attacks on bitcoin-style consensus,” 2016, pp. 19–26.
  • [95] K. Liao and J. Katz, “Incentivizing blockchain forks via whale transactions,” in International Conference on Financial Cryptography and Data Security.   Springer, 2017, pp. 264–279.
  • [96] Y. Velner, J. Teutsch, and L. Luu, “Smart contracts make bitcoin mining pools vulnerable,” in International Conference on Financial Cryptography and Data Security.   Springer, 2017, pp. 298–316.
  • [97] J. Teutsch, S. Jain, and P. Saxena, “When cryptocurrencies mine their own business,” in International Conference on Financial Cryptography and Data Security.   Springer, 2016, pp. 499–514.
  • [98] P. McCorry, A. Hicks, and S. Meiklejohn, “Smart contracts for bribing miners.” IACR Cryptology ePrint Archive, vol. 2018, p. 581, 2018.
  • [99] J. Bonneau, “Hostile blockchain takeovers (short paper),” in Bitcoin’18: Proceedings of the 5th Workshop on Bitcoin and Blockchain Research, 2018.
  • [100] F. Ritz and A. Zugenmaier, “The impact of uncle rewards on selfish mining in ethereum,” in 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS PW), April 2018, pp. 50–57.
  • [101] J. Niu and C. Feng, “Selfish mining in ethereum,” arXiv preprint arXiv:1901.04620, 2019.
  • [102] V. Buterin. Uncle rate and transaction fee analysis. Ethereum Foundation. [Online]. Available: https://blog.ethereum.org/2016/10/31/uncle-rate-transaction-fee-analysis/
  • [103] E. Budish, “The economic limits of bitcoin and the blockchain,” National Bureau of Economic Research, Tech. Rep., 2018.
  • [104] Y. Sompolinsky and A. Zohar, “Secure high-rate transaction processing in bitcoin,” in International Conference on Financial Cryptography and Data Security.   Springer, 2015, pp. 507–527.
  • [105] Y. Sompolinsky, Y. Lewenberg, and A. Zohar, “Spectre: A fast and scalable cryptocurrency protocol,” Cryptology ePrint Archive, Report 2016/1159, 2016, https://eprint.iacr.org/2016/1159.
  • [106] Y. Sompolinsky and A. Zohar, “Phantom: A scalable blockdag protocol,” Cryptology ePrint Archive, Report 2018/104, 2018, https://eprint.iacr.org/2018/104.
  • [107] “Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Crypto,” https://ipfs.io/ipfs/QmUy4jh5mGNZvLkjies1RWM4YuvJh5o2FYopNPVYwrRVGV.
  • [108] G. Danezis and D. Hrycyszyn, “Blockmania: from block dags to consensus,” CoRR, vol. abs/1809.01620, 2018. [Online]. Available: http://arxiv.org/abs/1809.01620
  • [109] A. Kiayias, A. Russell, B. David, and R. Oliynykov, “Ouroboros: A provably secure Proof of stake blockchain protocol,” in Annual International Cryptology Conference.   Springer, 2017, pp. 357–388.
  • [110] R. Pass and E. Shi, “Fruitchains: A fair blockchain,” in Proceedings of the ACM Symposium on Principles of Distributed Computing.   ACM, 2017, pp. 315–324.
  • [111] S. Park, K. Pietrzak, A. Kwon, J. Alwen, G. Fuchsbauer, and P. Gaži, “SpaceMint: A cryptocurrency based on proofs of space,” Cryptology ePrint Archive, Report 2015/528, 2015, https://eprint.iacr.org/2015/528.
  • [112] I. Bentov, R. Pass, and E. Shi, “Snow White: Provably secure proofs of stake,” IACR Cryptology ePrint Archive, vol. 2016, p. 919, 2016.
  • [113] V. Buterin, “Incentives in casper the friendly finality gadget,” 2017, https://github.com/ethereum/research/blob/master/papers/casper-economics/casper_economics_basic.pdf.
  • [114] G. Fanti, L. Kogan, S. Oh, K. Ruan, P. Viswanath, and G. Wang, “Compounding of wealth in proof-of-stake cryptocurrencies,” arXiv preprint arXiv:1809.07468, 2018.
  • [115] S. Kanjalkar, J. Kuo, Y. Li, and A. Miller, “Short paper: I can’t believe it’s not stake! resource exhaustion attacks on pos,” in International Conference on Financial Cryptography and Data Security.   Springer, 2019.
  • [116] J. Brown-Cohen, A. Narayanan, C.-A. Psomas, and S. M. Weinberg, “Formal barriers to longest-chain proof-of-stake protocols,” arXiv preprint arXiv:1809.06528, 2018.
  • [117] R. Lavi, O. Sattath, and A. Zohar, “Redesigning bitcoin’s fee market,” arXiv preprint arXiv:1709.08881, 2017.
  • [118] R. Lavi, “Computationally efficient approximation mechanisms,” Algorithmic Game Theory, pp. 301–329, 2007.
  • [119] “Coin market cap,” https://www.coinmarketcap.com/coins.
  • [120] A. E. Gencer, S. Basu, I. Eyal, R. van Renesse, and E. G. Sirer, “Decentralization in bitcoin and ethereum networks,” in Financial Cryptography and Data Security.   Springer Berlin Heidelberg, 2018.
  • [121] N. Arnosti and S. M. Weinberg, “Bitcoin: A natural oligopoly,” arXiv preprint arXiv:1811.08572, 2018.
  • [122] N. Leonardos, S. Leonardos, and G. Piliouras, “Oceanic games: Centralization risks and incentives in blockchain mining,” arXiv preprint arXiv:1904.02368, 2019.
  • [123] Y. Lewenberg, Y. Bachrach, Y. Sompolinsky, A. Zohar, and J. S. Rosenschein, “Bitcoin mining pools: A cooperative game theoretic analysis,” in Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, ser. AAMAS ’15.   Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2015, pp. 919–927. [Online]. Available: http://dl.acm.org/citation.cfm?id=2772879.2773270
  • [124] B. Johnson, A. Laszka, J. Grossklags, M. Vasek, and T. Moore, “Game-theoretic analysis of ddos attacks against bitcoin mining pools,” in International Conference on Financial Cryptography and Data Security.   Springer, 2014, pp. 72–86.
  • [125] M. Vasek, M. Thornton, and T. Moore, “Empirical analysis of denial-of-service attacks in the bitcoin ecosystem,” in International conference on financial cryptography and data security.   Springer, 2014, pp. 57–71.
  • [126] A. Laszka, B. Johnson, and J. Grossklags, “When bitcoin mining pools run dry,” in International Conference on Financial Cryptography and Data Security.   Springer, 2015, pp. 63–77.
  • [127] L. Brünjes, A. Kiayias, E. Koutsoupias, and A.-P. Stouka, “Reward sharing schemes for stake pools,” CoRR, vol. abs/1807.11218, 2018.
  • [128] A. Fiat, E. Koutsoupias, K. Ligett, Y. Mansour, and S. Olonetsky, “Beyond myopic best response (in cournot competition),” Games and Economic Behavior, 2013.
  • [129] A. Miller, A. Kosba, J. Katz, and E. Shi, “Nonoutsourceable scratch-off puzzles to discourage bitcoin mining coalitions,” in Proceedings of the 22Nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’15.   New York, NY, USA: ACM, 2015, pp. 680–691. [Online]. Available: http://doi.acm.org/10.1145/2810103.2813621
  • [130] P. Mccorry, M. Möser, S. F. Shahandasti, and F. Hao, “Towards bitcoin payment networks,” in Proceedings, Part I, of the 21st Australasian Conference on Information Security and Privacy - Volume 9722.   New York, NY, USA: Springer-Verlag New York, Inc., 2016, pp. 57–76. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-40253-6_4
  • [131] A. Miller, I. Bentov, R. Kumaresan, and P. McCorry, “Sprites: Payment channels that go faster than lightning,” CoRR, vol. abs/1702.05812, 2017. [Online]. Available: http://arxiv.org/abs/1702.05812
  • [132] S. Dziembowski, L. Eckey, S. Faust, and D. Malinowski, “Perun: Virtual payment hubs over cryptocurrencies,” in IEEE Symposium on Security and Privacy.   IEEE, 2019.
  • [133] S. Dziembowski, S. Faust, and K. Hostáková, “General state channel networks,” in Proceedings of the 25th ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’18.   ACM, 2018. [Online]. Available: https://eprint.iacr.org/2018/320
  • [134] J. Poon and T. Dryja, “The bitcoin lightning network:scalable off-chain instant payments,” https://lightning.network/lightning-network-paper.pdf, 2016.
  • [135] P. McCorry, S. Bakshi, I. Bentov, A. Miller, and S. Meiklejohn, “Pisa: Arbitration outsourcing for state channels,” Cryptology ePrint Archive, Report 2018/582, 2018, https://eprint.iacr.org/2018/582.

Appendix A Glossary

In this appendix, we provide formal definitions for some of the concepts presented in the main body of the paper that are not formally defined.

Game Theory

To start off, we introduce the standard definitions for Bayesian games and mechanisms.

Bayesian game setting

A Bayesian game setting is a tuple $(N, O, \Theta, p, u)$, where:

  • $N$ is a finite set of players;

  • $O$ is a set of outcomes;

  • $\Theta = \Theta_1 \times \dots \times \Theta_n$ is a set of possible joint type vectors;

  • $p$ is a (common prior) probability distribution on $\Theta$; and

  • $u = (u_1, \dots, u_n)$, where $u_i \colon \Theta \times O \to \mathbb{R}$ is the utility function for each player $i$.

Mechanism for a Bayesian game setting

A mechanism for a Bayesian game setting $(N, O, \Theta, p, u)$ is a pair $(A, M)$, where:

  • $A = A_1 \times \dots \times A_n$, where $A_i$ is the set of actions available to agent $i \in N$; and

  • $M \colon A \to \Pi(O)$ maps each action profile to a distribution over outcomes.

Game Theory and Cryptography

We now move on to concepts presented in Section IV.

$\epsilon$-subgame perfect equilibrium [37]

Let $\mathcal{G}$ be an attack game between a protocol designer $D$ and an attacker $A$. A strategy profile $(\Pi, \mathcal{A})$ is an $\epsilon$-subgame perfect equilibrium in $\mathcal{G}$ if: (1) for any protocol $\Pi'$, $u_D(\Pi', \mathcal{A}) \le u_D(\Pi, \mathcal{A}) + \epsilon$, and (2) for any adversary $\mathcal{A}'$, $u_A(\Pi, \mathcal{A}') \le u_A(\Pi, \mathcal{A}) + \epsilon$.

Attack-payoff security [37]

Let $\mathcal{M} = (\mathcal{F}, \langle \mathcal{F} \rangle, v_A, v_D)$ be an attack model and let $\Pi$ be a protocol that realizes the functionality $\langle \mathcal{F} \rangle$. $\Pi$ is attack-payoff secure in $\mathcal{M}$ if $\hat{u}_A(\Pi) \le \hat{u}_A(\phi^{\mathcal{F}})$, where $\phi^{\mathcal{F}}$ is the “dummy” $\mathcal{F}$-hybrid protocol (i.e., the protocol that forwards all inputs to and outputs from the functionality $\mathcal{F}$; see Section III) and $\hat{u}_A(\cdot)$ is the maximized ideal expected payoff of an adversary attacking the protocol.

Incentive compatibility [48]

Let $\Pi$ be a protocol and $\mathcal{C}$ be a set of PT protocols that have access to the same hybrids as $\Pi$. We say that $\Pi$ is incentive compatible in the attack model $\mathcal{M}$ if and only if $(\Pi, \mathcal{A})$, for some adversary $\mathcal{A}$, is an $\epsilon$-subgame perfect equilibrium in the attack game defined by $\mathcal{M}$.

Bayesian Machine Game [50]

A Bayesian machine game $G$ is described by a tuple $([m], \mathcal{M}, T, \Pr, C_1, \dots, C_m, u_1, \dots, u_m)$ where:

  • $[m] = \{1, \dots, m\}$ is the set of players and $\mathcal{M}$ is the set of possible machines;

  • $T \subseteq (\{0,1\}^*)^{m+1}$ is the set of type profiles, where the $(m+1)$st element in the profile corresponds to nature’s type;

  • $\Pr$ is a distribution on $T$;

  • $C_i$ is a complexity function; and

  • $u_i$ is player $i$’s utility function.

Given a Bayesian machine game $G$, a machine profile $\vec{M} = (M_1, \dots, M_m)$, and $\epsilon \ge 0$, $M_i$ is an $\epsilon$-best response to $\vec{M}_{-i}$ (the tuple consisting of all machines in $\vec{M}$ other than $M_i$) if, for every $M_i' \in \mathcal{M}$,

$U_i[(M_i, \vec{M}_{-i})] \ge U_i[(M_i', \vec{M}_{-i})] - \epsilon. \qquad (1)$

$\vec{M}$ is an $\epsilon$-Nash equilibrium of $G$ if, for all players $i$, $M_i$ is an $\epsilon$-best response to $\vec{M}_{-i}$. A Nash equilibrium is a $0$-Nash equilibrium.

Universal implementation [50]

Suppose that $\mathcal{G}$ is a set of $n$-player canonical games, $\mathcal{Z}$ is a set of subsets of $\{1, \dots, n\}$, $\mathcal{F}$ and $\mathcal{F}'$ are mediators, $M_1, \dots, M_n$ are interactive machines, and $p \colon \mathbb{N} \to [0,1]$. $(\vec{M}, \mathcal{F}')$ is a $(\mathcal{G}, \mathcal{Z}, p)$-universal implementation of $\mathcal{F}$ with error $\epsilon$ if, for all $n$, all games $G \in \mathcal{G}$ with input length $n$, and all $Z \in \mathcal{Z}$, if $\vec{\Lambda}$ (the strategy of playing through the mediator) is a $Z$-robust $p$-safe Nash equilibrium in the mediated machine game $(G, \mathcal{F})$, then:

  1. (Preserving equilibrium) $\vec{M}$ is a $Z$-robust $p$-safe $\epsilon$-Nash equilibrium in the mediated machine game $(G, \mathcal{F}')$.

  2. (Preserving action distributions) For each type profile $\vec{t}$, the action profile induced by $\vec{M}$ in $(G, \mathcal{F}')$ is identically distributed to the action profile induced by $\vec{\Lambda}$ in $(G, \mathcal{F})$.

Sequential equilibrium in computational games [57]

A pair $(\vec{M}, \mu)$ consisting of a machine profile $\vec{M}$ and a belief system $\mu$ is called a belief assessment. A belief assessment $(\vec{M}, \mu)$ is an interim (resp. ex ante) sequential equilibrium in a machine game $G$ if $\vec{M}$ is compatible with $\mu$ and, for all players $i$, states $q$ of $M_i$, and machines $M_i'$ compatible with $M_i$ and $q$ such that $M_i' \in \mathcal{M}$ (the set of possible machines) (resp. $M_i'$ is a local variant of $M_i$), we have

$U_i^{\mu}[(M_i', \vec{M}_{-i}) \mid q] \le U_i^{\mu}[(M_i, \vec{M}_{-i}) \mid q]. \qquad (2)$

Game Theory and Distributed Design

Finally, we give definitions for concepts presented in Section V.

Incentive-Compatible Byzantine Fault Tolerant (IC-BFT) protocols [73]

A protocol is IC-BFT if it guarantees the specified set of safety and liveness properties and if it is in the best interest of all rational nodes to follow the protocol exactly.

Byzantine Altruistic Rational Tolerant (BART) protocols [73]

A protocol is BART if it guarantees the specified set of safety and liveness properties in the presence of all rational deviations from the protocol.

Perfect security [74]

A protocol for broadcast or consensus is perfectly secure against rational adversaries controlling $t$ players with utility $U$ if for every $t$-adversary there is a strategy $S$ such that, for any choice of input for the honest players: 1. ($S$ is tolerable): $S$ induces a distribution of final outputs $D$ in which no security condition is violated with nonzero probability; and 2. ($S$ is Nash): for any strategy $S'$ with induced output distribution $D'$, $U(D) \ge U(D')$.

Statistical Security [74]

A protocol for broadcast or consensus is statistically secure against rational adversaries controlling $t$ players with utility $U$ if for every $t$-adversary there is a strategy $S$ such that, for any choice of input for the honest players, $S$ induces a distribution of final outputs $D_k$ when the security parameter is $k$ and the following properties hold: 1. ($S$ is tolerable): no security condition is violated with nonzero probability in $D_k$ for any $k$; and 2. ($S$ is statistical Nash): for any strategy $S'$ with induced output distributions $D_k'$, there is a negligible function $\epsilon$ such that $U(D_k) \ge U(D_k') - \epsilon(k)$.

(k,t)-robustness [76]

A strategy profile $\vec{\sigma}$ is a $(k,t)$-robust equilibrium if, for all disjoint sets $C, T \subseteq N$ with $|C| \le k$ and $|T| \le t$, all strategies $\vec{\tau}_T$ of the players in $T$, and all strategies $\vec{\varphi}_C$ of the players in $C$, there is some player $i \in C$ for whom $u_i(\vec{\sigma}_{-T}, \vec{\tau}_T) \ge u_i(\vec{\sigma}_{-(C \cup T)}, \vec{\varphi}_C, \vec{\tau}_T)$.

(k,t)-punishment [76]

A joint strategy $\vec{\rho}$ is a $(k,t)$-punishment strategy with respect to $\vec{\sigma}$ if, for all disjoint sets $C, T, P \subseteq N$ with $|C| \le k$ and $|T| \le t$, for all strategies $\vec{\tau}_{C \cup T}$ of the players in $C \cup T$, and for all $i \in C$, we have $u_i(\vec{\sigma}) > u_i(\vec{\tau}_{C \cup T}, \vec{\rho}_P, \vec{\sigma}_{-(C \cup T \cup P)})$.