A Survey of Decision Making in Adversarial Games

07/16/2022
by Xiuxian Li, et al.
Game theory has by now found numerous applications in various fields, including economics, industry, jurisprudence, and artificial intelligence, where each player only cares about its own interest in a noncooperative or cooperative manner, but without obvious malice toward other players. However, in many practical applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard, cyber-security, and national defense, players often have apparently adversarial stances, that is, the selfish actions of each player inevitably or intentionally inflict loss or wreak havoc on other players. Along this line, this paper provides a systematic survey on three main game models widely employed in adversarial games, i.e., zero-sum normal-form and extensive-form games, Stackelberg (security) games, and zero-sum differential games, from an array of perspectives, including basic knowledge of the game models, (approximate) equilibrium concepts, problem classifications, research frontiers, (approximate) optimal strategy seeking techniques, prevailing algorithms, and practical applications. Finally, promising future research directions are also discussed for relevant adversarial games.


I Introduction

Game theory has long been a powerful and conventional paradigm for modeling complex and intelligent interactions among a group of players and improving decision making for selfish players, since the seminal work [1, 2, 3] by John von Neumann, John Nash, and others. Hitherto, it has found a vast range of real-world applications in a variety of domains, including economics, biology, finance, computer science, politics, and so forth, where each individual player is only concerned with its own interest [4, 5, 6]. It played an extremely important role even during the Cold War in the 1960s, and has been employed by many national institutions for defense, such as United States agencies for security control [7].

Fig. 1: A general framework of adversarial games with simultaneous or sequential moves, perfect or imperfect information, and symmetric or asymmetric information, where triangles denote players and there exist teams, within which team members play in a cooperative manner, while the play among teams is adversarial and usually zero-sum, i.e., $\sum_{m=1}^{M} \sum_{i=1}^{n_m} u_{i,m}(x_{i,m}, x_{-(i,m)}) = 0$ for all strategies, with the subscript $(i, m)$ representing the $i$-th player in the $m$-th team, whose strategy and utility function are denoted as $x_{i,m}$ and $u_{i,m}$, respectively, and $x_{-(i,m)}$ is the strategy profile of all players except the $i$-th player in team $m$.

Adversarial games are a class of particularly important game models, where players deliberately compete with each other while simultaneously pursuing their own utility maximization. To date, adversarial games have been an orthodox framework for shaping highly efficient decision making in numerous realistic applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard, cyber-security, and national defense. For example, in Texas Hold'em poker, which has been one of the primary benchmark competitions for testing researchers' proposed algorithms in game theory and artificial intelligence (AI), held by well-known international conferences such as AAAI, multiple players compete against each other to win the game by seeking sophisticated strategy techniques [8]. Generally speaking, adversarial games enjoy several main features as follows: 1) hardness of efficient and fast algorithm design with limited computing resources and/or samples; 2) imperfect information in many practical problems, that is, some information is private to one or more players while hidden from the others, as in the card game of poker; 3) large models, including large action spaces and information sets; for example, the adversary space in the road-network security problem is of an astronomically large order [9]; 4) incomplete information in a multitude of real-life applications, that is, one or more agents do not know what game is being played (e.g., the number of players, the strategies available to each player, and the payoff for each strategy), in which case the game being played is generally represented with players' uncertainties, like uncertain payoff functions with uncertain parameters; and 5) possibly dynamic traits, i.e., the played game is sometimes time-varying instead of static; for example, a poacher may adopt different poaching strategies in a wildlife park as the environment varies with the seasons. It is worth pointing out that incomplete information is understood distinctly from imperfect information here, as distinguished by some researchers, although the two are used interchangeably in some literature. In addition, other possible characteristics include bounded rationality, where players may not be fully rational, such as arbitrarily random lone-wolf attacks by terrorists. However, it is noteworthy that not all adversarial games involve imperfect and/or incomplete information; for example, the game of Go has both perfect and complete information, since it has explicit game rules and all stones' positions, as well as the actions of the opponent, are visible to both players at all times, and it has been well solved by well-known AI agents such as AlphaGo and AlphaZero [10, 11, 12].

As the competitive feature is ubiquitous in a large number of real-world applications, adversarial games have been extensively investigated until now [13, 14, 15, 16, 17, 18]. For example, the authors in [13] provided a broad survey of technical advances in Stackelberg security games (SSG) in 2018; the authors in [14] reviewed some main Nash equilibrium (NE) computing algorithms for extensive-form games with imperfect information based on counterfactual regret minimization (CFR) methods; the authors in [15] reviewed the combined use of game theory and optimization algorithms along with a new categorization of research conducted in this area; the authors in [16] reviewed distributed online optimization and federated optimization from the perspective of privacy-preserving mechanisms, and cooperative/non-cooperative games from two facets, i.e., minimizing global costs and minimizing individual costs; and the authors in [17] surveyed recent advances in decentralized online learning, including decentralized online optimization and online games, from the perspectives of problem classifications, performance metrics, state-of-the-art performance results, and potential future research directions. Additionally, in consideration of the importance of game theory in national defense, some reviews of game theory in defense applications were succinctly provided in [18, 19], and a survey of defensive deception based on game theory and machine learning (ML) approaches was presented in [20]. Nonetheless, a thorough overview of adversarial games from the perspectives of basic model knowledge, equilibrium concepts, optimal strategy seeking techniques, research frontiers, and prevailing algorithms is still lacking.

Motivated by the above facts, this survey aims to provide a systematic review of adversarial games along several dimensions, including the three main models frequently employed in adversarial games (i.e., zero-sum normal-form and extensive-form games, Stackelberg (security) games, and zero-sum differential games), (approximate) optimal strategy concepts (i.e., NE, correlated equilibrium, coarse correlated equilibrium, strong Stackelberg equilibrium, team-maxmin equilibrium, and the corresponding approximate ones), (approximate) optimal strategy computing techniques (e.g., CFR methods, AI methods), state-of-the-art results, prevailing algorithms, potential applications, and promising future research directions. To the best of our knowledge, this survey is the first systematic overview of adversarial games, generally providing an orthogonal and complementary component to the aforementioned survey papers, which may aid researchers and practitioners in relevant domains. Please note that the three game models are not mutually exclusive, but may overlap for the same game from different viewpoints; for example, Stackelberg games and differential games can also be zero-sum games, etc. In addition, there actually exist other models leveraged for adversarial games, such as Bayesian games, Markov games (or stochastic games), signaling games, behavioral game theory, and evolutionary game theory. However, we do not attempt to review all of them in this survey, since each of them is of independent interest and is already treated abundantly in diverse existing materials.

This survey is organized as follows. The detailed game models and solution concepts are introduced in Section II, the existing main literature is reviewed along with state-of-the-art results in Section III, some prevailing algorithms are expounded in Section IV, an array of applications are presented in Section V, promising future research directions are discussed in Section VI, and finally the conclusion is drawn in Section VII.

Notations: Define $[n] := \{1, 2, \ldots, n\}$ to be the set of positive integers up to $n$ for an integer $n > 0$. Denote by $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^n_+$ the sets of real numbers, $n$-dimensional real vectors, and nonnegative $n$-dimensional real vectors, respectively. For a finite set $S$ with $|S|$ elements, define $\Delta(S) := \{p \in \mathbb{R}^{|S|}_+ : \sum_{s \in S} p_s = 1\}$ (i.e., the simplex of dimension $|S| - 1$), and let $|S|$ be the cardinality of $S$. Let $\mathbb{P}(\cdot)$ and $\mathbb{E}(\cdot)$ denote the mathematical probability and expectation, respectively. Let $x^\top$ denote the transpose of $x$, and $\langle \cdot, \cdot \rangle$ be the inner product. $\mathbf{0}$ and $\mathbf{1}$ denote vectors or matrices of all entries $0$ and $1$ with compatible dimension in the context, respectively, sometimes with an explicit subscript indicating the dimension.

II Models of Adversarial Games

This section provides three main models for adversarial games, i.e., zero-sum normal-form and extensive-form games, Stackelberg (security) games, and zero-sum differential games, along with solution concepts in these game models; a general framework of adversarial games is illustrated in Fig. 1.

II-A Zero-Sum Normal-Form and Extensive-Form Games

Normal-form and extensive-form games are two widely employed game models, accounting for simultaneous or sequential actions committed by the players in a game.

Normal-Form Games (NFGs). A normal-form (or strategic-form) game is denoted by a tuple $G = (N, A, u)$ [4], where $N = \{1, 2, \ldots, n\}$ is a finite set of players. In the meantime, $A = A_1 \times \cdots \times A_n$ is the action profile set for all players, where $A_i$ is the set of pure actions or strategies available to player $i$, and $a = (a_1, \ldots, a_n) \in A$ is a joint action profile. Moreover, $u = (u_1, \ldots, u_n)$, where $u_i : A \to \mathbb{R}$ is a real-valued utility (or payoff) function for player $i$. Also, a mixed strategy/policy for player $i$ is a probability distribution over its action set $A_i$, denoted by $x_i \in \Delta(A_i)$, and $x_i(a_i)$ denotes the probability for player $i$ to commit an action $a_i \in A_i$. The expected utility of player $i$ can be expressed as $u_i(x) = u_i(x_i, x_{-i}) = \mathbb{E}_{a \sim x}[u_i(a)]$, where $x = (x_1, \ldots, x_n)$ is the joint (mixed) action profile and $x_{-i}$ denotes the joint action profile of all players except player $i$. Similarly, let $a_{-i}$ be the joint (pure) action profile of all players except player $i$, and denote $a = (a_i, a_{-i})$ for manifesting the dependency of a joint pure action profile. The social welfare is defined as $\mathrm{sw}(a) := \sum_{i \in N} u_i(a)$ for a pure action profile $a$, whose mixed strategy correspondence is given as $\mathrm{sw}(x) := \sum_{i \in N} u_i(x)$. In addition, the game is called constant-sum if for any action profile $a \in A$, it holds that $\sum_{i \in N} u_i(a) = c$ for a constant $c$, and called zero-sum if $c = 0$, as illustrated in Fig. 2.
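As a concrete toy instance (our illustration, not from the original survey), the classic matching pennies game is a two-player zero-sum NFG:

    % Matching pennies: N = {1, 2}, A_1 = A_2 = {Heads, Tails}.
    % Player 1 wins on a match, player 2 on a mismatch, so u_1 + u_2 = 0
    % for every joint action.
    u_1 = \begin{pmatrix} +1 & -1 \\ -1 & +1 \end{pmatrix}, \qquad u_2 = -u_1.
    % The unique NE is the uniform mixed profile x_1 = x_2 = (1/2, 1/2),
    % with expected utility u_1(x) = x_1^\top u_1 x_2 = 0 for player 1.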

Fig. 2: A schematic illustration of zero-sum games with $n$ players.

Note that games with continuous action sets, generally assumed closed and convex, are usually called continuous games.

In what follows, extensive-form games with imperfect information are introduced, which reduce to ones with perfect information when every information set of each player is a singleton [5].

Imperfect-Information Extensive-Form Games (II-EFGs). An II-EFG is a tuple $\Gamma = (N, H, Z, A, P, u, \mathcal{I})$, where $N = \{1, \ldots, n\}$ is a finite set of players, $H$ is a set of histories (i.e., nodes), representing the possible sequences of actions, and $Z \subseteq H$ denotes the set of terminal nodes, which have no further actions and award a value to each player. Outside of $N$, a different "player" exists, denoted $c$, representing chance decisions. Moreover, the empty sequence $\emptyset$ is included in $H$, standing for a unique root node. At a nonterminal node $h \in H \setminus Z$, $A(h)$ is the action function assigning a set of available actions at $h$ (here $A$ is different from $A$ in normal-form games, which should be clear from the context), and $P(h) \in N \cup \{c\}$ is the player function assigning a player to the node who takes an action at that node, with $P(h) = c$ if chance determines the action at $h$. And $h \sqsubseteq h'$ means that $h'$ is led to by a sequence of actions from $h$, i.e., $h$ is a prefix of $h'$. $u = (u_1, \ldots, u_n)$ is the set of utility functions, where $u_i : Z \to \mathbb{R}$ is the utility function of player $i$. If there is a constant $c_0$ such that $\sum_{i \in N} u_i(z) = c_0$ for all $z \in Z$, then the game is called a constant-sum game, and a zero-sum game when $c_0 = 0$.

The main feature "imperfect information" is represented by information sets (infosets) for all players. Specifically, $\mathcal{I} = (\mathcal{I}_1, \ldots, \mathcal{I}_n)$ is the set of information sets, where $\mathcal{I}_i$ is a partition of $\{h \in H : P(h) = i\}$ satisfying that $A(h) = A(h')$ and $P(h) = P(h')$ for any $h, h' \in I$ for some $I \in \mathcal{I}_i$. That is, all nodes in the same infoset $I$ of $\mathcal{I}_i$ are indistinguishable to player $i$. Note that each node is only in one infoset for each player. When all players can remember all historical information, it is called perfect recall. Formally, let $h, h', g, g'$ be histories such that $h \sqsubseteq g$ and $h' \sqsubseteq g'$, and then perfect recall means that if $h$ and $h'$ do not share an infoset and each is not a prefix of the other, then $g$ and $g'$ also do not share an infoset.

A normal-form plan (or pure strategy) of player $i$ is a tuple $\pi_i \in \Pi_i := \times_{I \in \mathcal{I}_i} A(I)$, which assigns an action to each infoset of player $i$. A normal-form strategy $\mu_i$ means a probability distribution over $\Pi_i$, i.e., $\mu_i \in \Delta(\Pi_i)$. A behavioral strategy (or simply, strategy) $\sigma_i$ is a probability distribution over $A(I)$ for each infoset $I$ of player $i$. A joint strategy profile is composed of all players' strategies $\sigma_i$, i.e., $\sigma = (\sigma_1, \ldots, \sigma_n)$, with $\sigma_{-i}$ representing all the strategies except $\sigma_i$. Denote by $\sigma_i(I, a)$ (or $\sigma_i(a)$) the probability of a specific action $a$ at infoset $I$, and $\pi^\sigma(h)$ the reach probability of history $h$ if all the players select their actions according to $\sigma$. For a strategy profile $\sigma$, player $i$ has its total expected payoff as $u_i(\sigma) = \sum_{z \in Z} \pi^\sigma(z) u_i(z)$. Denote by $\Sigma_i$ the set of all possible strategies for player $i$.

A best response for player $i$ to $\sigma_{-i}$ is a strategy $BR(\sigma_{-i}) \in \arg\max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i', \sigma_{-i})$. In a two-player zero-sum game, the exploitability of a strategy $\sigma_i$ is defined as $e_i(\sigma_i) := u_i(\sigma^*) - \min_{\sigma_{-i}} u_i(\sigma_i, \sigma_{-i})$, where $\sigma^*$ is a Nash equilibrium, as defined later. In multi-player games, the total exploitability (or NashConv) of a strategy profile $\sigma$ is defined as [21] $e(\sigma) := \sum_{i \in N} \big( \max_{\sigma_i'} u_i(\sigma_i', \sigma_{-i}) - u_i(\sigma) \big)$, and the average exploitability (or simply exploitability) is defined as $e(\sigma)/|N|$, which is leveraged to measure how much players can gain by unilaterally deviating to their best responses, generally interpreted as a distance from a Nash equilibrium.
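For concreteness, here is a minimal sketch (ours, not from the survey) of the NashConv computation for a two-player bimatrix game in Python, where a pure best response always attains the maximum over mixed strategies:

    import numpy as np

    def nash_conv(A, B, x, y):
        """Total exploitability (NashConv) of the mixed profile (x, y) in a
        bimatrix game with payoff matrices A (row player) and B (column player).

        Each deviation gain reduces to a max over matrix rows or columns,
        since pure best responses suffice.
        """
        gain_row = np.max(A @ y) - x @ A @ y   # row player's best deviation gain
        gain_col = np.max(x @ B) - x @ B @ y   # column player's best deviation gain
        return gain_row + gain_col

    # Matching pennies: uniform play is the NE, so NashConv is 0.
    A = np.array([[1., -1.], [-1., 1.]])
    B = -A
    x = y = np.array([0.5, 0.5])
    print(nash_conv(A, B, x, y))                    # 0.0
    print(nash_conv(A, B, np.array([1., 0.]), y))   # 1.0: pure Heads is exploitable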

Note that besides the above normal-form and extensive-form games, other classes of games may be conducive as well in adversarial games, such as Markov games (or stochastic games) [22], where the game state changes according to a transition probability based on the current game state and players' actions, Bayesian games [23], which model game uncertainties with incomplete information, and so forth.

In what follows, some solution concepts for related games are introduced.

The Nash equilibrium is the most widely adopted notion in the literature [2].

Definition 1 ($\epsilon$-Nash Equilibrium ($\epsilon$-NE)).

For both normal-form and extensive-form games, a strategy profile $\sigma^* = (\sigma_i^*, \sigma_{-i}^*)$ is called an $\epsilon$-NE for a constant $\epsilon \ge 0$ if

$u_i(\sigma_i, \sigma_{-i}^*) \le u_i(\sigma_i^*, \sigma_{-i}^*) + \epsilon, \quad \forall \sigma_i \in \Sigma_i, \ \forall i \in N,$   (1)

that is, the gain is at most $\epsilon$ if any player solely changes its own strategy. Moreover, it is called an NE when $\epsilon = 0$, that is, $\sigma_i^*$ is a best response to $\sigma_{-i}^*$ for any player $i$, i.e., $\sigma_i^* \in BR(\sigma_{-i}^*)$.

It is well known that there exists at least one NE in mixed strategies for games with a finite number of players and a finite number of pure strategies for each player [2].

Even though an NE may exist for many games and computing one is computationally efficient for two-player zero-sum games, it is well known from complexity theory that approximating an NE in $n$-player ($n \ge 3$) zero-sum games and even in two-player nonzero-sum games is computationally hard, that is, it is PPAD-complete for general games [24, 25, 26]. As an alternative, the (coarse) correlated equilibrium is often considered for normal-form games in the literature, which is efficiently computable in all normal-form games, as defined in the following [27].
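To illustrate the tractable two-player zero-sum case, the following minimal sketch (ours, not from the survey) solves the standard maximin linear program with scipy:

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_ne(A):
        """Row player's maximin strategy and game value for payoff matrix A
        (row player maximizes x^T A y), via the standard LP:
            max_{x, v} v   s.t.  A^T x >= v * 1,  sum(x) = 1,  x >= 0.
        """
        m, n = A.shape
        # Decision variables: [x_1..x_m, v]; linprog minimizes, so use -v.
        c = np.zeros(m + 1); c[-1] = -1.0
        # v - (A^T x)_j <= 0 for each column j of A.
        A_ub = np.hstack([-A.T, np.ones((n, 1))])
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]

    x, v = zero_sum_ne(np.array([[1., -1.], [-1., 1.]]))  # matching pennies
    print(x, v)  # [0.5 0.5], 0.0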

Definition 2 ($\epsilon$-Correlated Equilibrium ($\epsilon$-CE)).

For a normal-form game $G = (N, A, u)$, a probability distribution $\mu$ over $A$ is an $\epsilon$-CE if for each player $i \in N$ and any swap function $\phi_i : A_i \to A_i$ (usually called a strategy modification),

$\mathbb{E}_{a \sim \mu}[u_i(\phi_i(a_i), a_{-i})] \le \mathbb{E}_{a \sim \mu}[u_i(a)] + \epsilon.$   (2)

That is, no player can gain more than $\epsilon$ in payoff by unilaterally deviating from the action privately recommended by a coordinator who samples a joint action from that distribution. Furthermore, another relevant notion is defined below [28].

Definition 3 ($\epsilon$-Coarse Correlated Equilibrium ($\epsilon$-CCE)).

For a normal-form game $G = (N, A, u)$, a probability distribution $\mu$ over $A$ is an $\epsilon$-CCE if for each player $i \in N$ and all actions $a_i' \in A_i$,

$\mathbb{E}_{a \sim \mu}[u_i(a_i', a_{-i})] \le \mathbb{E}_{a \sim \mu}[u_i(a)] + \epsilon.$   (3)

The above condition is almost the same as that for an $\epsilon$-CE, except for the removal of the conditioning on the recommended action $a_i$: the player deviates by arbitrarily selecting an action on its own, instead of swapping the action advised by the coordinator. For NE, CE, and CCE, it is known that they are payoff equivalent to each other in two-player zero-sum games by the minimax theorem [29]. Recently, the notions of CE and CCE have been extended to extensive-form games in [30, 31], which, however, have been less studied by now.
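As a classical illustration (Aumann's Chicken example, our addition rather than the survey's), a CE can achieve payoffs outside the convex hull of the NE payoffs:

    % Chicken; rows/columns ordered (Dare, Chicken); entries are (u_1, u_2):
    \begin{pmatrix} (0,0) & (7,2) \\ (2,7) & (6,6) \end{pmatrix}
    % Let \mu put probability 1/3 on each of (D,C), (C,D), (C,C).
    % Told "C", a player infers the opponent plays D or C with probability 1/2
    % each: obeying yields (2 + 6)/2 = 4, deviating to D yields (0 + 7)/2 = 3.5.
    % Told "D", the opponent surely plays C, and 7 >= 6. Hence \mu is a CE,
    % with expected payoff 5 per player, above the mixed NE payoff of 14/3.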

In an II-EFG, let us consider the case where all the players in $\{1, \ldots, n-1\}$ are cooperative, thus forming a team, who take actions independently and play against an adversary (player $n$), with $u_1 = \cdots = u_{n-1}$ and $\sum_{i \in N} u_i = 0$, called a zero-sum single-team single-adversary extensive-form team game (or simply zero-sum team game (TG)) [32]. Before introducing the notion of team-maxmin equilibrium, it is necessary to first prepare some essentials. Let $Q_i$ denote the set of action sequences of player $i$, where an action sequence of player $i$, defined by a node $h$, is the ordered set of actions of player $i$ that are on the path from the root to $h$. Let $q_\emptyset$ be the dummy sequence to the root. A realization plan is a function $r_i : Q_i \to [0, 1]$ mapping each action sequence to a probability, satisfying

$r_i(q_\emptyset) = 1, \qquad \sum_{a \in A(I)} r_i(q_I \cdot a) = r_i(q_I), \quad \forall I \in \mathcal{I}_i,$   (4)

where $q_I$ denotes the action sequence leading to infoset $I$.

With the above preparations, the team-maxmin equilibrium, first introduced in [33], is defined as follows [32].

Definition 4 (Team-Maxmin Equilibrium (TME)).

A TME is defined as

$(r_1^*, \ldots, r_n^*) \in \arg\max_{r_1, \ldots, r_{n-1}} \min_{r_n} u_T(r_1, \ldots, r_n),$   (5)

where $u_T$ stands for the team's utility, defined as the expected team payoff if at least one terminal node is reached by the joint plan (i.e., the set of reachable terminal nodes is nonempty), with the probabilities determined by chance nodes, and $0$ otherwise.

A TME is generally unique and it is an NE which maximizes the team's utility. In addition, the concept of $\epsilon$-TME can be similarly defined, at which both the team and the adversary can gain at most $\epsilon$ if any player unilaterally changes its strategy.

Besides the aforementioned optimal strategy concepts, it is worth noting that there are other notions as well, such as subgame perfect NE [4] and $\alpha$-rank [34], which, however, are beyond the scope of this survey.

II-B Stackelberg Games

Stackelberg games (SGs, or leader-follower games) date back to the Stackelberg competition introduced in [35] to model a strategic game between two firms, the leader and the follower, where the leader takes actions first. SGs, as games with sequential actions and asymmetric information, have many practical applications, for example, PROTECT, a system that the United States Coast Guard utilizes to assign patrols in Boston, New York, and Los Angeles [36], and ARMOR, an assistant deployed at Los Angeles International Airport in 2007 for randomly scheduling checkpoints on the roadways entering the airport. In what follows, general Stackelberg games and Stackelberg security games [37] are introduced, where the latter is an important special case of the former.

Fig. 3: A schematic illustration of general Stackelberg games, where directed edges mean that the leader commits an action first and then the followers play actions in response to the leader’s action.

General Stackelberg Game (GSG). A GSG consists of a leader, who commits to an action first, and followers, who can observe and learn the leader's strategy and then take actions in response to the leader's strategy, see Fig. 3. Denote by $F = \{1, \ldots, K\}$, $X$, and $Q$ the sets of followers, the leader's pure strategies, and each follower's pure strategies, respectively. The leader knows the probability of facing follower $k \in F$, denoted as $p^k$. Denote by $x \in \Delta(X)$ the mixed strategy of the leader, where the $i$-th component $x_i$ represents the probability of choosing the $i$-th pure strategy by the leader. Let $q_j^k \in \{0, 1\}$ denote the decision of follower $k$ to take a pure strategy $j \in Q$ such that $\sum_{j \in Q} q_j^k = 1$ for all $k \in F$. Note that it is enough for the rational followers to only consider pure strategies [38]. For the leader and each follower $k$, the utilities (or payoffs, rewards) are captured by a pair of matrices $(R^k, C^k)$, where $R^k$ is the utility matrix of the leader when facing follower $k$, and $C^k$ is the utility matrix of follower $k$. Then, the expected utilities of the leader and follower $k$ can be, respectively, given as

$U_L(x, q^1, \ldots, q^K) = \sum_{k \in F} p^k \sum_{i \in X} \sum_{j \in Q} R_{ij}^k x_i q_j^k,$   (6)

$U_F^k(x, q^k) = \sum_{i \in X} \sum_{j \in Q} C_{ij}^k x_i q_j^k,$   (7)

where $x_i \ge 0$ and $\sum_{i \in X} x_i = 1$ for each $k \in F$.

Stackelberg Security Game (SSG). In an SSG, as a specific case of GSG, the leader and followers are viewed as the defender and attackers, where the defender aims to schedule a limited number of $m$ security resources to protect (or cover) a subset of the target set $T$ from the attackers' attacks, with $m < |T|$. The notations are the same as defined in the above GSG. Note that in this case, the leader's pure strategy set $X$ is composed of all possible subsets of at most $m$ targets that can be safeguarded simultaneously, and $q_t^k \in \{0, 1\}$ indicates whether attacker $k$ attacks target $t \in T$. Let $c_t := \sum_{i \in X : t \in i} x_i$ be the probability of coverage of target $t$, where $t \in i$ connotes that target $t$ is covered by pure strategy $i$. When facing attacker $k$ who attacks target $t$, the defender's utility is $U_d^c(t, k)$ if the target is covered or protected, or $U_d^u(t, k)$ if the target is uncovered or unprotected. The utility of attacker $k$ is $U_a^c(t, k)$ when attacking target $t$ that is covered, or $U_a^u(t, k)$ when attacking target $t$ that is uncovered. It is generally assumed that $U_d^c(t, k) > U_d^u(t, k)$ and $U_a^u(t, k) > U_a^c(t, k)$, which is in line with common sense. The expected utilities for the defender and attacker $k$ are, respectively, expressed as

$U_d(x, q^k) = \sum_{t \in T} q_t^k \big( c_t U_d^c(t, k) + (1 - c_t) U_d^u(t, k) \big),$   (8)

$U_a^k(x, q^k) = \sum_{t \in T} q_t^k \big( c_t U_a^c(t, k) + (1 - c_t) U_a^u(t, k) \big).$   (9)

The most widely adopted solution concept for GSGs and SSGs is the so-called strong Stackelberg equilibrium, which always exists in all Stackelberg games [39, 37]. Recall that it is enough for each follower to play pure strategies.

Definition 5 (Strong Stackelberg Equilibrium (SSE)).

A strategy profile $(x, q^1(x), \ldots, q^K(x))$ for a GSG forms an SSE, if

  1. $x$ is optimal for the leader:

    $U_L(x, q^1(x), \ldots, q^K(x)) \ge U_L(x', q^1(x'), \ldots, q^K(x'))$ for any mixed strategy $x'$,

    where $q^k(x)$ denotes follower $k$'s best response against $x$.

  2. Each follower always plays a best response, i.e.,

    $U_F^k(x, q^k(x)) \ge U_F^k(x, q^k)$ for any pure strategy $q^k$ of follower $k$.

  3. Each follower breaks ties in favor of the leader:

    $U_L(x, q^k(x)) \ge U_L(x, \tilde{q}^k)$ for any best response $\tilde{q}^k$ of follower $k$ against $x$.
The tie-breaking rule is reasonable in cases of indifference since the leader can often induce the favorable equilibrium by choosing a strategy arbitrarily close to the equilibrium that makes the follower prefer the desired strategy [40]. When the tie-breaking rule is in favor of the followers, then the equilibrium is called weak Stackelberg equilibrium (WSE), which however does not always exist [41]. Moreover, the concept of SSE can be similarly defined for SSGs.
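To make the classic multiple-LPs idea of [38] concrete, here is a minimal sketch (ours, assuming scipy is available) for a single-follower GSG: for each follower pure strategy $j$, it maximizes the leader's expected utility subject to $j$ being a best response, then keeps the best $j$, which also realizes the SSE tie-breaking in the leader's favor:

    import numpy as np
    from scipy.optimize import linprog

    def sse_single_follower(R, C):
        """Multiple-LPs method for a single-follower GSG with leader payoff
        matrix R and follower payoff matrix C (rows: leader pure strategies,
        columns: follower pure strategies).

        For each follower pure strategy j, solve
            max_x  x^T R[:, j]
            s.t.   x^T C[:, j] >= x^T C[:, j']  for all j',
                   x in the simplex,
        and keep the j with the highest leader value.
        """
        m, n = R.shape
        best_val, best_x, best_j = -np.inf, None, None
        for j in range(n):
            c = -R[:, j]                      # linprog minimizes
            A_ub = (C - C[:, [j]]).T          # x^T C[:, j'] - x^T C[:, j] <= 0
            b_ub = np.zeros(n)
            A_eq, b_eq = np.ones((1, m)), np.array([1.0])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * m)
            if res.success and -res.fun > best_val:
                best_val, best_x, best_j = -res.fun, res.x, j
        return best_x, best_j, best_val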

II-C Zero-Sum Differential Games

Differential games (DGs), also known as dynamic games [41], are a natural extension of sequential games to the continuous-time scenario, expressed by differential equations and first introduced by Isaacs [42]. DGs can be regarded as an extension of optimal control [43], which usually has a single decision maker with a single objective function, whereas a DG involves multiple players with noncooperative objectives. Since this survey is concerned with adversarial games, zero-sum DGs (mostly involving two players in the literature) are considered here, although many other types of DGs emerge in the literature, including nonzero-sum differential games, mean-field games, differential graphical games, Dynkin games, and so on [44, 45].

A two-player zero-sum differential game (TP-ZS-DG) is described by a dynamical system as

$\dot{x}(t) = f(t, x(t), u_1(t), u_2(t)), \qquad x(t_0) = x_0,$   (10)

where $x(t)$ is the state vector at time $t$, $t_0$ is the initial time, $x_0$ is the initial state, $U_1, U_2$ are control constraint sets for players 1 and 2, respectively, $u_1(t) \in U_1$ and $u_2(t) \in U_2$ are control actions (or signals) for players 1 and 2, respectively, and $f$ is the dynamics, as illustrated in Fig. 4.

Fig. 4: A schematic illustration of two-player differential games.

For different setups in the literature, distinct cost functions are generally employed, most of which, however, are either based on or variants of an essential and important cost function, as given below:

$J(u_1, u_2) = \int_{t_0}^{T} g(t, x(t), u_1(t), u_2(t)) \, dt + q(x(T)),$   (11)

where $g$ is the running cost (or stage cost) and $q$ is the terminal cost (or final cost).

With (11), the goal of DG (10) is for player 1 to minimize the cost $J$, while player 2 aims at maximizing it, i.e.,

$\min_{u_1} \max_{u_2} J(u_1, u_2).$   (12)

For (12), the optimal cost of $J$ is called the value of the game, expressed as a value function $V(t, x)$. Moreover, the solution notion is still the NE as in normal-form and extensive-form zero-sum games, also called the minimax equilibrium (or minimax point, saddle point) in the literature, since the studied problem is in fact a saddle point game (or saddle point problem/optimization).
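Under standard regularity and the Isaacs condition (our addition, stated in textbook form rather than as a result from this survey), the value function of (10)-(12) is characterized by the Hamilton-Jacobi-Isaacs (HJI) equation:

    % HJI equation for the value V(t, x) of the game (10)-(12):
    -\frac{\partial V}{\partial t}(t, x)
      = \min_{u_1 \in U_1} \max_{u_2 \in U_2}
        \left\{ g(t, x, u_1, u_2)
          + \nabla_x V(t, x)^\top f(t, x, u_1, u_2) \right\},
    \qquad V(T, x) = q(x),
    % where the Isaacs condition requires the min and max to commute.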

Note that the dynamics (10) are deterministic. In the meantime, stochastic DGs have also been addressed in the literature, described by stochastic differential equations with standard Brownian motion [44]. It is also noteworthy that the above DGs are usually studied under a set of assumptions, such as the compactness of the control constraint sets $U_1$ and $U_2$, and the Lipschitz continuity of the dynamics $f$, among others [45].

Finally, the main features of the aforementioned games are summarized in Table I.

Game models | Player numbers | Action order | Information | Dynamics
Zero-sum NFG | $\ge 2$ | mostly simultaneous | symmetric | discrete-time or continuous-time
Zero-sum EFG | $\ge 2$ (perfect information); mostly $2$ (imperfect information) | sequential | symmetric | mostly discrete-time
GSG and SSG | mostly one leader and $K$ followers | sequential | asymmetric | mostly discrete-time
Zero-sum DG | mostly $2$ | mostly simultaneous | mostly symmetric | continuous-time
TABLE I: Main features of various adversarial games.

III Research Classification and Frontiers

This section aims to succinctly summarize the relevant literature on zero-sum games, GSGs, SSGs, and TP-ZS-DGs along with the emerging state-of-the-art research. However, the relevant literature on adversarial games is too vast to cover in full, and thus only the literature of our interest is reviewed throughout this survey.

III-A Zero-Sum Games (ZSGs)

Both normal-form and extensive-form ZSGs studied in the literature can be generally categorized into the following main aspects: bilinear games, saddle point problems, multi-player ZSGs, team games, and imperfect-information ZSGs, as discussed below.

  1. Bilinear Games. Bilinear games are simple models for delineating two-player games, generally in normal form as [46]: maximizing the utilities $x^\top A y$ and $x^\top B y$ for players 1 and 2, respectively, where $A$ and $B$ are payoff matrices, subject to strategy sets $x \in X$ and $y \in Y$ for some polytopes $X$ and $Y$. A bilinear game is usually denoted by the payoff matrix pair $(A, B)$, which is zero-sum when $B = -A$, and, as an important notion, the rank of a game is defined as the rank of the matrix $A + B$. Several interesting games can be viewed as special cases of bilinear games, such as bimatrix games [47, 48, 49], where $X$ and $Y$ are simplexes, imitation games (a special case of bimatrix games in which one player's payoff matrix is the identity) [50], and the Colonel Blotto game (i.e., two colonels simultaneously allocate their troops across different battlefields) [51]. In addition, multi-player polymatrix games [52] can also be equivalently transformed into bilinear games [46]. Generally speaking, the existing literature mainly focuses on the computational complexity and polynomial-time algorithm design for approximating NE of bilinear games [53], bimatrix games [54], polymatrix games [55], and the Colonel Blotto game [56]. Recently, it was shown that NE computation in two-player nonzero-sum games of constant rank is PPAD-hard [57, 58]. Computing a sufficiently accurate approximate NE is PPAD-hard even for imitation games [50], where the approximation threshold depends on the number of moves available to the players, and a polynomial-time algorithm was developed for finding an approximate NE in [50]. Also, computing an NE in a tree polymatrix game with twenty actions per player is PPAD-hard [55], and a polynomial-time algorithm for approximate NE in bimatrix games with the currently best approximation guarantee, the state of the art in the literature, was proposed in [54]. For the Colonel Blotto game, efficient and simple algorithms have recently been provided in [59, 60, 61], and meanwhile, various extended scenarios have been studied for this game, including the dynamic Colonel Blotto game [62], generalized Colonel Blotto and generalized lottery Blotto games [63], and multi-player cases [64, 61]. Furthermore, bilinear games are generalized to hidden bilinear games in [65], where the inputs controlled by players are first processed by a smooth function, i.e., a hidden layer, before entering the conventional bilinear game.

  2. Saddle Point Problems (SPPs). SPPs are also called saddle point optimization, min-max/minimax games, or min-max/minimax optimization in the literature. The formulation of a general SPP [66] is given as $\min_{x \in X} \max_{y \in Y} f(x, y)$, where $X$ and $Y$ are closed and convex, possibly the entire Euclidean spaces or their compact subsets. For general SPPs, besides zero-sum bilinear games, two other types have been extensively considered, that is, non-bilinear SPPs and bilinear SPPs. A non-bilinear SPP [67, 68] is expressed as $\min_x \max_y f(x, y)$, where $f$ is a general coupling function, and as a special case, when $f(x, y) = g(x) + x^\top A y - h(y)$ with convex $g$ and $h$, the game is called a bilinear SPP [69, 70, 71] due to the bilinear coupling. The existing research mainly centers on equilibrium existence, computational and sampling complexity, and efficient algorithm design, for instance, as done in the aforementioned recent works (see also the gradient descent ascent sketch after this list). Meanwhile, various scenarios have been investigated in the literature, including projection-free methods by applying the Frank-Wolfe algorithm [72, 73], nonconvex-nonconcave general SPPs [74, 75], linear last-iterate convergence [76], SPPs with adversarial bandits and delays [77], periodic zero-sum bimatrix games with continuous strategy spaces [78], compositional SPPs [79], the decentralized setup [80], and hidden general SPPs [81], where the controlled inputs are first fed into smooth functions whose outputs are then treated as inputs for the traditional general SPPs. Finally, it is noteworthy that general SPPs with sequential actions have also been studied, called min-max Stackelberg games, for example, the recent work [82] with dependent feasible sets.

  3. Multi-Player Zero-Sum Games (MP-ZSGs). The games discussed above usually involve two players. It is well known that approximating an NE in multi-player zero-sum games and even two-player nonzero-sum games is PPAD-complete [24, 25, 26]. Moreover, it is known that multi-player symmetric zero-sum games might have only asymmetric equilibria, which is consistent with the case of two-player and multi-player symmetric nonzero-sum games, but in contrast with the case of two-player symmetric zero-sum games, which always have symmetric equilibria (if equilibria exist) [83]. In the literature, most works focus on multi-player zero-sum polymatrix games (also called network matrix games in some works), where the utility of each player is composed of the sum of utilities gained by playing with its neighbors in an undirected graph [52]. The authors in [84] generalized von Neumann's minimax theorem to multi-player zero-sum polymatrix games, thus implying convexity of equilibria, polynomial-time tractability, and convergence of no-regret learning algorithms to NEs, and last-iterate convergence was studied in [85] for multi-player polymatrix zero-sum games. A time-average convergence rate over the time horizon $T$ was established by using alternating gradient descent in [86]. Moreover, it is shown that for continuous-time algorithms, time-average convergence may fail even in a simple periodic multi-player zero-sum polymatrix game or replicator dynamics, while being Poincaré recurrent [87, 88]. Furthermore, it is realized that mutual cooperation among players may yield more benefit than pursuing selfish exploitation, and in this case, team/alliance formation is also studied in the literature, for example, in [89], where it was demonstrated that team formation may be seen as a social dilemma. Additionally, other pertinent research encompasses multi-player general-sum games [90, 91, 92] and machine learning based studies [93], etc.

  4. Team Games (TGs). Generically, team games refer to those games where at least one team exists, with the cooperation of team members through communications either before the play, or during the play, or both before and during the play, or without any communications. In general, team games in the literature can be classified from two perspectives. One perspective depends upon the number of teams, i.e., one-team games (or adversarial team games) [94], where the players in the team, enjoying the same utility function, independently play against an adversary, and two-team games [95], consisting of two teams in a game. The other perspective is on perfect-information versus imperfect-information games. For team games, TME is an important solution concept, for which it is known that computing a TME is FNP-hard and inapproximable in the additive sense [96, 97]. Even so, efficient algorithms for computing a TME in perfect-information zero-sum NFGs have been developed, e.g., [94]. Meanwhile, a class of zero-sum two-team games in perfect-information normal form was studied in [95], where finding an NE is shown to be CLS-hard, i.e., unlikely to admit a polynomial-time NE computing algorithm. Moreover, as two-team games, two-network zero-sum games are also addressed, where each network is thought of as a team [98, 99, 100]. For imperfect-information zero-sum team games, researchers have investigated a variety of scenarios centering around computational complexity and efficient algorithms, such as one-team games [32, 101, 102], one-team games with two members in the team [103], and the computation of team correlated equilibria in two-team games [104].

  5. Imperfect-Information ZSGs (II-ZSGs). Unlike perfect-information games, such as chess, Go, and backgammon, II-ZSGs, involving individual players' private information that is hidden from other players, are more challenging due to information privacy and uncertainty, especially for large games with large action spaces and/or infosets. For example, the game of heads-up (i.e., two-player) limit Texas Hold'em poker, with an enormous number of infosets, had been a challenging problem for AI for many years before being essentially solved by Cepheus [105], the first computer program to handle an imperfect-information game that is played competitively by humans. Also, the game of no-limit Texas Hold'em poker has vastly more infosets, for which DeepStack [106] and Libratus [107] are the first line of AI agents/algorithms to defeat professional humans in heads-up no-limit Texas Hold'em poker. As such, most research focuses on computing NEs in two-player II-ZSGs [108, 109], aiming to develop efficient superhuman AI agents in the face of the challenges of imperfect information, large models, and uncertainties. To handle large games with imperfect information, several techniques have been successively proposed, for example, pruning, abstraction, and search [110, 111, 112]. Roughly speaking, pruning aims to avoid traversing the whole game tree while simultaneously ensuring the same convergence, including regret-based pruning, dynamic thresholding, best-response pruning, and so on [113]. Abstraction aims to generate a smaller version of the original game by bucketing similar infosets or actions, while maintaining as much as possible the strategic features of the original game [114], mainly including information abstraction and action abstraction. Meanwhile, search tries to improve upon the (approximate) solution of a game abstraction, which may be far from the true solution of the original game, by seeking a more precise equilibrium solution for a faced subgame, such as depth-limited search [111, 115]. Moreover, it has been shown recently that some two-player poker games can be represented as perfect-information sequential Bayesian extensive games with efficient implementation [116]. The authors in [117] recently bridged several standing gaps between NFG and EFG learning by directly transferring desirable properties in NFGs to EFGs, simultaneously guaranteeing last-iterate convergence, lower dependence on the game size, and constant regret in games. Besides, bandit feedback is of practical importance in real-world applications of II-ZSGs [118, 119], where only the interactive trajectory and the payoff of the reached terminal node can be observed, without prior knowledge of the game, such as the tree structure, the observation/state space, and transition probabilities (for Markov games) [120]. On the other hand, multi-player II-ZSGs are more challenging and have thus been less researched, except for a handful of works; for example, Pluribus [121], the first multi-player poker agent, has defeated top humans in six-player no-limit Texas Hold'em poker (the most prevalent poker in the world) [122], along with other endeavors [123, 124, 125, 119, 126]. Aside from deterministic methods, AI approaches based on reinforcement learning, deep neural networks, and so on have achieved great success in II-ZSGs [127, 106, 128, 129, 130, 131, 132, 120, 133, 134, 135, 136, 137, 138], for instance, AlphaGo (the first AI agent to achieve superhuman level in Go) [10], AlphaZero (with initial training independent of human data and Go-specific features, reaching state-of-the-art performance in Go, chess, and shogi with minimal domain knowledge) [11], and DeepStack [106], to name a few. More details can be found in a recent survey of AI in games [139]. Note that other closely related research subsumes imperfect-information general-sum games with full and bandit feedback [140, 141, 142], two-player zero-sum Markov games [143], and multi-player general-sum Markov games [144].
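As flagged in item 2 above, here is a minimal sketch (ours, not from the survey) contrasting plain and optimistic gradient descent ascent on the unconstrained bilinear SPP $\min_x \max_y x^\top A y$; in line with the last-iterate results surveyed around [76], the plain iterates spiral outward while the optimistic iterates approach the saddle point at the origin:

    import numpy as np

    def gda(A, steps=500, eta=0.1, optimistic=True):
        """(Optimistic) gradient descent ascent on f(x, y) = x^T A y.

        Plain GDA: x <- x - eta * A y,  y <- y + eta * A^T x.
        OGDA extrapolates with the previous gradient: the effective step
        uses 2 g_t - g_{t-1} for each player.
        """
        rng = np.random.default_rng(0)
        x = rng.standard_normal(A.shape[0])
        y = rng.standard_normal(A.shape[1])
        gx_prev, gy_prev = A @ y, A.T @ x
        for _ in range(steps):
            gx, gy = A @ y, A.T @ x            # current gradients
            if optimistic:
                x = x - eta * (2 * gx - gx_prev)
                y = y + eta * (2 * gy - gy_prev)
            else:
                x, y = x - eta * gx, y + eta * gy
            gx_prev, gy_prev = gx, gy
        return np.linalg.norm(x) + np.linalg.norm(y)  # distance to the saddle (0, 0)

    A = np.array([[1.0]])
    print(gda(A, optimistic=False))  # grows: plain GDA diverges on bilinear SPPs
    print(gda(A, optimistic=True))   # shrinks: OGDA's last iterate converges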

It should be noted that incomplete information is also important in adversarial games, mainly comprising Bayesian games (cf. a recent survey [145]).

III-B Stackelberg Games

Stackelberg games are roughly summarized from four perspectives, i.e., GSGs, SSGs, continuous Stackelberg games, and incomplete-information Stackelberg games.

  1. GSGs. The research on GSGs mainly lies in three aspects, i.e., computational complexity, solution methods, and their applications. For computational complexity, when there is only one follower in a GSG, it is known that the problem can be solved in polynomial time, while it is NP-hard in the multiple-followers case [38]. Regarding solution methods, an array of methods have been proposed in the literature, primarily depending upon approaches for coping with linear programming (LP) and mixed integer linear programming (MILP), including cutting plane methods, enumerative methods, and hybrid methods, among others [146, 147]. Note that both GSGs and SSGs can be formulated as bilevel optimization problems [146, 147], where bilevel optimization has a hierarchical structure with two levels of optimization, one lower-level optimization (follower) nested in another upper-level optimization (leader) as constraints, which is an active research area unto itself [148]. As for practical applications, a multitude of real-world problems have been tackled using Stackelberg games, such as economics [149], smart grids [150, 151], wireless networks [152], dynamic inspection problems [153], the industrial internet of things [154], etc. It should be noted that other relevant cases have also been studied in the literature, such as multi-leader cases [155, 156, 157, 158, 159], the case with bounded rationality [160], and general-sum games [161], etc.

  2. SSGs. In general, SSGs can be classified by the functionality of security resources. To be specific, when every resource is capable of protecting every target, the resources are called homogeneous, and when resources are restricted to protecting only some subset of targets, they are called heterogeneous. Meanwhile, resources can also be distinguished by how many targets they are able to cover simultaneously, and in this case, a notion called a schedule is assigned to a resource, with the size of the schedule defined as the number of targets that can be simultaneously covered by the resource, including the case with size 1 [162] and sizes greater than 1 [163]. For these scenarios, the computational complexity was addressed in [164] for the case of a single attacker, as shown in Table II. With regard to solution methods, methods similar to those for solving GSGs can be applied to handle SSGs. Moreover, the practical applications of SSGs encompass wildlife protection [165], passenger screening at airports [166], crime prevention [167], cyber-security [168], information security [169], border patrol [170, 171], and so forth. In the meantime, other scenarios are addressed in the literature, like multi-defender cases [172, 173], Bayesian generalizations [174], and the case with bounded rationality [175] and ambiguous information [176], etc.

    SSGs | schedules of size 1 | schedules of size 2 | schedules of size $\ge 3$
    Homogeneous resources | P | P | NP-hard
    Heterogeneous resources | P | NP-hard | NP-hard
    TABLE II: Complexity results with a single attacker [164].
  3. Continuous Stackelberg games. This sort of game means Stackelberg games with continuous strategy spaces. In general, there exist two players, a leader and a follower, who have cost functions $f_L(x, y)$ and $f_F(x, y)$ with $x \in X$ and $y \in Y$, respectively, where $X$ and $Y$ are closed convex and possibly compact strategy sets for the leader and the follower, respectively. Then, the problem can be formally written as

    $\min_{x \in X} f_L(x, y^*), \quad \text{s.t. } y^* \in BR(x) := \arg\min_{y \in Y} f_F(x, y),$   (13)

    where the follower still takes actions in response to the leader after the leader makes its decision first. In this case, a strategy $x^*$ of the leader is called a Stackelberg equilibrium strategy [177] if

    $\sup_{y \in BR(x^*)} f_L(x^*, y) \le \sup_{y \in BR(x)} f_L(x, y), \quad \forall x \in X,$   (14)

    where $BR(x)$ is the set of best responses of the follower against $x$. Along this line, a hierarchical Stackelberg v/s Stackelberg game was studied in [178], where the first general existence result for the games' equilibria was established without positing a single-valuedness assumption on the equilibrium of the follower-level game. Furthermore, the connections between the NE and the Stackelberg equilibrium were addressed in [177], where convergent learning dynamics are also proposed by using Stackelberg gradient dynamics, which can be regarded as a sequential variant of the conventional gradient descent algorithm, and both zero-sum and general-sum games are considered therein. Additionally, as a special case of the above game (13), min-max Stackelberg games receive attention as well, where the problem is of the form $\min_{x \in X} \max_{y \in Y} f(x, y)$ with $f$ being the cost function. This problem has been investigated in the literature, especially for the case with a dependent strategy set [82, 179], i.e., inequality constraints $g(x, y) \ge 0$ are imposed on the follower for some function $g$, for which the prominent minimax theorem [29] no longer holds.

  4. Incomplete-Information Stackelberg Games. Incomplete information means that the leader can only access partial information, or cannot access any information, about the followers' utility functions, moves, or behaviors. This is in contrast with traditional Stackelberg games, where the followers' information is available to the leader. This weak scenario has been extensively considered in recent years, motivated by practical applications. For example, the authors in [180] studied situations in which only partial information on the attacker behavior can be observed by the leader. And a single-leader-multiple-followers SSG was considered in [181] with two types of misinformed information, i.e., misperception and deception, for which a stability criterion is provided for both strategic stability and cognitive stability of equilibria based on hyper NE. Additionally, one interesting direction is information deception by the follower, that is, the follower is inclined to deceive the leader by sending misinformation, such as fake payoffs, in order to benefit itself as much as possible, while, at the same time, the leader needs to recognize the deceptive information to minimize the loss incurred by the deception. Recently, an interesting result on the nexus between the follower's deception and the leader's maximin utility was obtained for optimally deceiving the leader in [182]: through deception, almost any (fake) Stackelberg equilibrium can be induced by the follower if and only if the leader procures at least their maximin utility at this equilibrium.

III-C Zero-Sum Differential Games

According to the existing literature, zero-sum DGs are categorized along five main dimensions, which, however, are not mutually exclusive but view the studied problems from different angles, i.e., linear-quadratic DGs, DGs with nonlinear dynamical systems, Stackelberg DGs, stochastic DGs, and terminal time and state constraints.

  1. Linear-Quadratic DGs. This relatively simple model has been widely studied for DGs, where the dynamical systems are linear differential equations and the cost functions are quadratic [183, 184]. In general, linear-quadratic DGs are analytically and numerically solvable and find a variety of applications in reality, such as pursuit-evasion problems [185, 186]. Recently, singular linear-quadratic DGs were studied in [187], which can be handled neither by the Isaacs MinMax principle nor by the Bellman-Isaacs equation approach; to solve this problem, an interception differential game was introduced with an appropriately regularized cost functional and dual representation. The authors in [188] studied a linear-quadratic-Gaussian asset-defending differential game where the state information of the attacker and the defender is not accessible to each other, but the trajectory of a moving asset is known to both. Meanwhile, a two-player linear-quadratic-Gaussian pursuit-evasion DG was investigated in [189] with partial information and selected observations, where the state of one player can be observed at any time preferred by the other player, and the cost function of each player consists of the direct cost of observing and the implicit cost of exposing its state. A linear-quadratic DG with two defenders and two attackers against a stationary target was considered in [190]. Two-player mean-field linear-quadratic stochastic DGs over an infinite horizon were investigated in [191], where the existence of both open-loop and closed-loop saddle points is studied by resorting to coupled generalized algebraic Riccati equations.

  2. Nonlinear DGs. DGs with nonlinear state dynamics have also been taken into account in the literature, given that many practical applications cannot be dealt with by linear-quadratic DGs. For example, the authors in [192] considered a class of nonlinear TP-ZS-DGs by appealing to adaptive dynamic programming. TP-ZS-DGs were addressed in [193] by proposing an approximate optimal critic learning algorithm based on policy iteration with a single neural network. Nonlinear DGs were also considered with time delays [194, 195, 196] and fractional-order systems [197], and were further studied in [198] with the dynamical system depending on the system's distribution and a random initial condition. Besides two players, multi-player zero-sum DGs with uncertain nonlinear dynamics were considered and tackled using a new iterative adaptive dynamic programming algorithm in [199].

  3. Stackelberg DGs. Motivated by the sequential actions arising in some practical applications, like Stackelberg games, DGs with sequential actions, called Stackelberg DGs, have been broadly addressed in the literature. For instance, a linear-quadratic Stackelberg DG was considered in [200] with mixed deterministic and stochastic controls, where the follower can select adapted random processes as its controller. The Stackelberg DG was employed to fight terrorism in [201]. Then, the authors in [202] investigated two classes of state-constrained Stackelberg DGs with a nonzero running cost and state constraints, for which Hamilton-Jacobi equations are established.

  4. Stochastic DGs. In many realistic problems, the dynamics of the concerned system may not be completely modeled but undergo some uncertainties and/or noises, and thereby stochastic differential equations have been leveraged to model the system dynamics in stochastic DGs [203, 204]. In this respect, the authors in [205] considered two-person zero-sum stochastic linear-quadratic DGs, along with the investigation of the open-loop saddle point and the open-loop lower and upper values. A class of stochastic DGs with ergodic payoff was studied in [206], where the diffusion system need not be non-degenerate. In addition, linear-quadratic stochastic Stackelberg DGs were taken into consideration in [207] with asymmetric roles for players, in [208] for jump-diffusion systems, in [209] without the solvability assumption on the associated Riccati equations, and in [210] with model uncertainty. And a Stackelberg stochastic DG with nonlinear dynamics and asymmetric noisy observations was addressed in [211].

  5. Terminal Time and State Constraint. A basic classification of zero-sum DGs can be made based on the terminal time and state constraints, that is, whether the terminal time is finite (including two cases, i.e., a fixed constant or a variable to be specified) or infinite, and whether the system state is unconstrained or constrained. Along this line, the case with fixed terminal time and unconstrained state was first addressed [212], and the state-constrained case with fixed terminal time was also studied [213]. Meanwhile, the case with the terminal time being a variable was investigated in the literature, such as [214] without state constraints and [215, 216] in the presence of state constraints but with zero running cost. Recently, the case with a nonzero running cost, state constraints, and underdetermined terminal time was investigated in [202]. Besides the above finite-horizon cases, the infinite-horizon case has also been considered in the literature, e.g., [191, 217].

Lastly, it is worth pointing out that other possible forms of zero-sum DGs exist in the literature, such as the case with continuous and/or impulse controls [217], mean-field DGs [218, 219], risk-sensitive zero-sum DGs [204], and so forth.

IV Prevailing Algorithms and Approaches

This section aims at encapsulating some main efficient algorithms and approaches for handling the reviewed adversarial games as discussed in Section II.

IV-A Zero-Sum Normal- and Extensive-Form Games

The bundle of algorithms can be roughly divided into two parts according to their applicability to normal-form games or imperfect-information extensive-form games.

For normal-form games, a large number of algorithms have so far been proposed, e.g., regret matching (RM for short, first proposed by Hart and Mas-Colell in 2000 [220]), RM+ [221], fictitious play [222, 223], double oracle [224], and online double oracle [49], among others. Therein, the most prevalent algorithms are based on regret learning, usually called no-regret (or sublinear-regret) learning algorithms, generally relying on external and internal regrets, as defined below.

The external regret and internal regret [225] for each player $i$ are, respectively, defined as

$R_{ext}^T := \max_{a \in A_i} \sum_{t=1}^{T} \big( u_i(a, a_{-i}^t) - u_i(a^t) \big),$   (15)

$R_{int}^T := \max_{a, a' \in A_i} \sum_{t=1}^{T} \mathbb{1}_{\{a_i^t = a\}} \big( u_i(a', a_{-i}^t) - u_i(a^t) \big),$   (16)

where the superscript $t$ stands for the iteration number, $T$ is the time horizon, and $\mathbb{1}_E$ is the indicator function of an event $E$. Generally speaking, the external regret measures the greatest regret for not always playing some fixed action $a$, and the internal regret indicates the greatest regret for not swapping to action $a'$ each time action $a$ was actually played. Note that weighted external and internal regrets are also defined by adding a weight at each time $t$ [226], and other regrets are considered as well in the literature, including swap regret [91] and several dynamic/static NE-based regrets [227, 228, 229, 230, 17].

With regrets at hand, it is now ready to present two of the most widely employed algorithms, i.e., optimistic (or predictive) follow the regularized leader (Optimistic FTRL for brevity) and optimistic mirror descent (OMD for short) [85], which are, respectively, given as

$x^{t+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \eta \Big\langle x, m^{t+1} + \sum_{s=1}^{t} g^s \Big\rangle + R(x) \Big\},$   (17)

and

$x^{t+1} = \arg\min_{x \in \mathcal{X}} \big\{ \eta \langle x, m^{t+1} \rangle + D_R(x, \hat{x}^t) \big\}, \qquad \hat{x}^{t+1} = \arg\min_{x \in \mathcal{X}} \big\{ \eta \langle x, g^{t+1} \rangle + D_R(x, \hat{x}^t) \big\},$   (18)

where $\mathcal{X}$ is a generic closed convex constraint set, $\eta > 0$ is the stepsize, $g^t$ is a subgradient of a loss function returned by the environment after the player commits an action at time $t$, $m^{t+1}$ is a subgradient prediction, often assumed $m^{t+1} = g^t$ in the literature, and $R$ is a strongly convex function, serving as the base function for defining the Bregman divergence $D_R(x, y) := R(x) - R(y) - \langle \nabla R(y), x - y \rangle$ for any $x, y \in \mathcal{X}$.

Note that many widely employed algorithms, such as optimistic gradient descent ascent (OGDA) [76] and optimistic multiplicative weights update (OMWU, or optimistic hedge) [231], are special cases or variants of Optimistic FTRL and OMD, and other efficient algorithms also exist, such as optimistic dual averaging (OptDA) [232], greedy weights [226], and so forth.
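For instance, instantiating OMD (18) on the simplex with the entropy regularizer and the prediction $m^{t+1} = g^t$ yields the OMWU update; a minimal self-play sketch on a zero-sum matrix game (our illustration, with an assumed stepsize) is:

    import numpy as np

    def omwu_selfplay(A, T=2000, eta=0.1):
        """Optimistic multiplicative weights update (OMWU) in self-play on
        the zero-sum game min_x max_y x^T A y; with entropy regularization,
        the OMD update reduces to multiplicative weights driven by the
        predicted gradient 2 g^t - g^{t-1}.
        """
        m, n = A.shape
        x, y = np.ones(m) / m, np.ones(n) / n
        gx_prev, gy_prev = A @ y, A.T @ x
        for _ in range(T):
            gx, gy = A @ y, A.T @ x                    # losses for x, gains for y
            x = x * np.exp(-eta * (2 * gx - gx_prev))  # optimistic descent step
            y = y * np.exp(+eta * (2 * gy - gy_prev))  # optimistic ascent step
            x, y = x / x.sum(), y / y.sum()            # renormalize to the simplexes
            gx_prev, gy_prev = gx, gy
        return x, y

    x, y = omwu_selfplay(np.array([[1., -1.], [-1., 1.]]))
    print(x, y)  # both close to (0.5, 0.5), the NE of matching pennies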

For imperfect-information games, the most popular algorithms are based on counterfactual regret minimization (CFR) [233], whose details are introduced as follows, with the same notations as for the extensive-form games in Section II-A.

Recall that $\pi^\sigma(h)$ denotes the reach probability of history $h$ with strategy profile $\sigma$. For an infoset $I$, let $\pi^\sigma(I)$ denote the probability of reaching the infoset via all possible histories in $I$, i.e., $\pi^\sigma(I) = \sum_{h \in I} \pi^\sigma(h)$. And denote by $\pi_i^\sigma(I)$ the reach probability of infoset $I$ for player $i$ according to the strategy $\sigma_i$, and $\pi_{-i}^\sigma(h)$ the counterfactual reach probability, i.e., the probability of reaching $h$ with strategy profile $\sigma$ except that the probability of the current actions of player $i$ is treated as $1$, i.e., without the contribution of player $i$ to reach $h$. Meanwhile, $\pi^\sigma(h, h')$ denotes the probability of going from history $h$ to a nonterminal node $h'$. Then, for player $i$, the counterfactual value at a nonterminal history $h$ is defined as

$v_i^\sigma(h) = \sum_{z \in Z, \, h \sqsubseteq z} \pi_{-i}^\sigma(h) \, \pi^\sigma(h, z) \, u_i(z),$   (19)

the counterfactual value of an infoset $I$ is defined as

$v_i^\sigma(I) = \sum_{h \in I} v_i^\sigma(h),$   (20)

and the counterfactual value of an action $a$ at infoset $I$ is defined as

$v_i^\sigma(I, a) = \sum_{h \in I} v_i^\sigma(h \cdot a).$   (21)

The instantaneous regret at iteration $t$ and the counterfactual regret at iteration $T$ for action $a$ in infoset $I$ are, respectively, defined as

$r^t(I, a) = v_i^{\sigma^t}(I, a) - v_i^{\sigma^t}(I),$   (22)

$R^T(I, a) = \sum_{t=1}^{T} r^t(I, a),$   (23)

where $\sigma^t$ is the joint strategy profile leveraged at iteration $t$.

By defining $R_+^T(I, a) := \max\{R^T(I, a), 0\}$, applying regret matching by Hart and Mas-Colell [220] generates the strategy update as

$\sigma^{T+1}(I, a) = \dfrac{R_+^T(I, a)}{\sum_{a' \in A(I)} R_+^T(I, a')}$ if $\sum_{a' \in A(I)} R_+^T(I, a') > 0$, and $\sigma^{T+1}(I, a) = \dfrac{1}{|A(I)|}$ otherwise,   (24)

with the uniform initialization $\sigma^1(I, a) = 1/|A(I)|$, and (24) is the essential CFR method for player $i$'s strategy selection. Moreover, it is known that the CFR method can guarantee convergence to NEs for the average strategy of the players, i.e.,

$\bar{\sigma}^T(I, a) = \dfrac{\sum_{t=1}^{T} \pi_i^{\sigma^t}(I) \, \sigma^t(I, a)}{\sum_{t=1}^{T} \pi_i^{\sigma^t}(I)}.$   (25)

Hitherto, various famous variants of CFR have been developed with superior performance, including CFR+ [221, 234], discounted CFR (DCFR) [235], linear CFR (LCFR) [236], exponential CFR (ECFR) [237], AutoCFR [238], etc. More details can be found in [239, 14, 112].
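For concreteness, here is a minimal sketch (ours, not from the survey) of the per-infoset bookkeeping behind (22)-(25): given the counterfactual values of each action at an infoset on the current iteration, it accumulates regrets and produces the next regret-matching strategy and the running average:

    import numpy as np

    class InfosetNode:
        """Regret and average-strategy accumulators for one infoset, as in CFR."""

        def __init__(self, num_actions):
            self.cum_regret = np.zeros(num_actions)    # R^T(I, a) in (23)
            self.cum_strategy = np.zeros(num_actions)  # numerator of (25)

        def strategy(self):
            """Regret matching (24): positive regrets normalized; else uniform."""
            pos = np.maximum(self.cum_regret, 0.0)
            total = pos.sum()
            return pos / total if total > 0 else np.full(len(pos), 1.0 / len(pos))

        def update(self, action_values, my_reach):
            """Accumulate regrets (22)-(23) and the reach-weighted strategy sum.

            action_values: counterfactual values v(I, a) of each action, cf. (21);
            my_reach: player i's own reach probability of this infoset.
            """
            sigma = self.strategy()
            v_infoset = sigma @ action_values  # v(I) under sigma, cf. (20)
            self.cum_regret += action_values - v_infoset
            self.cum_strategy += my_reach * sigma

        def average_strategy(self):
            """Average strategy (25), the iterate that converges to an NE."""
            total = self.cum_strategy.sum()
            if total > 0:
                return self.cum_strategy / total
            return np.full(len(self.cum_strategy), 1.0 / len(self.cum_strategy))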

Meanwhile, lots of AI methods have been brought forward in the literature [93], such as policy space response oracles (PSRO) [21, 240], neural fictitious self-play [127], deep CFR [236], single deep CFR [241], unified deep equilibrium finding (UDEF) [136], player of games (PoG) [133], neural auto-curricula (NAC) [137], and so forth. Among these methods, PSRO has been an effective approach in recent years, which unifies fictitious play and double oracle algorithms. Meanwhile, UDEF provides a unified framework for PSRO and CFR, which are generally considered independently with their own advantages, and UDEF is thus superior to both PSRO and CFR, as demonstrated by experiments on Leduc poker [136]. The recently developed PoG algorithm has unified several previous approaches by integrating guided search, self-play learning, and game-theoretic reasoning, and demonstrated, theoretically and experimentally, strong empirical performance in large perfect- and imperfect-information games, defeating the state-of-the-art agent in heads-up no-limit Texas Hold'em poker (Slumbot) [133]. Moreover, NAC, a meta-learning algorithm proposed recently in [137], provides a potential future direction for developing general multi-agent reinforcement learning (MARL) algorithms solely from data, since it can learn its own objective solely from interactions with the environment, without the need for human-designed knowledge about game-theoretic principles, and it can decide by itself what the meta-solution, i.e., who to compete with, should be during training. Furthermore, it is shown that NAC is comparable or even superior to state-of-the-art population-based game solvers, such as PSRO, on a series of games, like Games of Skill, differentiable Lotto, non-transitive mixture games, Iterated Matching Pennies, and Kuhn poker [137].

Finally, it is worth pointing out that CFR methods guarantee convergence to NEs in the sense of the empirical distribution (i.e., time average) of play, but generally fail to converge in day-to-day play (i.e., in the last iterate) [242, 243], although last-iterate convergence does hold in two-player zero-sum games [85]. In this respect, last-iterate convergence is also important to explore, as demonstrated in economics and elsewhere [244, 245, 246, 76, 85].

IV-B Stackelberg Games

GSGs and SSGs can be expressed as bilevel linear programs (BLPs) or mixed integer linear programs (MILPs), which can be further transformed or relaxed into linear programs (LPs) [146]. As mentioned in Section III-B, solving GSGs and SSGs is generally NP-hard, and most existing solution methods are variants of solution approaches for MILPs and LPs, including cutting plane methods, enumerative methods, hybrid methods, and so on [147]. Some of the most widely used approaches in the literature are introduced in the sequel.

  1. Multiple LP Approach. This approach, proposed in [38], is most widely employed for easy problems that can be solved in polynomial time, including the case of a single follower type in GSGs [38], and was further improved in [247] by merging the LPs into a single MILP. It has also been extended to deal with SSGs in [164], and is generally quite efficient in the case where each schedule has size one, as well as in the case of schedules of size at most two with homogeneous resources, as shown in Table II. A minimal LP sketch of this approach is given after this list.

  2. Benders Decomposition. The Benders decomposition method, developed in [248], is effective for handling general MILP problems. The crux of this method is to divide the original problem into two problems, a master problem obtained by relaxing some constraints and a subproblem, along with a separation problem that is the dual of the subproblem. The solution-seeking procedure then involves solving the master problem first, followed by the separation problem, and finally checking the feasibility and optimality conditions of the subproblem, with different contingent operations. Moreover, this approach can be improved by combining it with other techniques, such as Farkas' lemma [249] and normalized cuts [250], leading to a recent efficient algorithm called normalized Benders decomposition [147].

  3. Branch and Cut. The branch and cut method, as a hybrid method, combines the cutting plane method [251] with the branch and bound method [252]. This approach is quite effective for solving various (mixed) integer programming problems while still ensuring optimality. In general, the branch and cut algorithm follows the same spirit as the branch and bound scheme, but appends new constraints, when necessary, at each node by resorting to cutting plane approaches [147].

  4. Cut and Branch. This method is similar to the branch and cut approach; the difference is that the extra cuts are added only at the root node, while only branching constraints are added at the other nodes. It is found in [147] that, with a suitable choice of the variables kept in the master problem and with stabilization, cut and branch outperforms the other methods in some settings.

  5. Gradient Descent Ascent. Gradient descent ascent, i.e., the classical gradient descent and ascent algorithm [253], is the most notable algorithm for solving continuous Stackelberg games, where descent and ascent operations are performed for the leader and the follower, respectively, but in a sequential order, and most other methods rest on this algorithm [82, 177]. For example, the max-oracle gradient descent algorithm [82] is a variant of gradient descent ascent in which the ascent operation of the follower is replaced with an approximate best response provided by a max-oracle. A gradient-based sketch is likewise given after this list.
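As promised in the discussion of the multiple LP approach above, the following is a minimal sketch of the idea of [38] on a small assumed GSG with a single follower type; the payoff matrices and the use of scipy are illustrative choices, not taken from [38] or [164]. One LP is solved per follower action, each maximizing the leader's expected utility subject to that action being a follower best response, and the best feasible LP yields the leader's optimal commitment.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative leader/follower payoff matrices (rows: leader actions,
# columns: follower actions); the numbers are purely hypothetical.
U_L = np.array([[2.0, 4.0], [1.0, 3.0]])
U_F = np.array([[1.0, 0.0], [0.0, 2.0]])
m, k = U_L.shape

best_value, best_x, best_j = -np.inf, None, None
for j in range(k):  # one LP per candidate follower best response j
    # max_x  sum_i x_i * U_L[i, j]
    # s.t.   sum_i x_i * (U_F[i, j'] - U_F[i, j]) <= 0  for all j'
    #        sum_i x_i = 1,  x >= 0
    c = -U_L[:, j]                         # linprog minimizes, so negate
    A_ub = (U_F - U_F[:, [j]]).T           # best-response constraints over x
    b_ub = np.zeros(k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, m)), b_eq=[1.0], bounds=[(0, None)] * m)
    if res.success and -res.fun > best_value:
        best_value, best_x, best_j = -res.fun, res.x, j

print("leader mixed strategy:", best_x)
print("induced follower action:", best_j, "leader value:", best_value)
```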
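Likewise, for the gradient descent ascent item above, here is a minimal sketch on an assumed strongly-convex-strongly-concave objective; the quadratic objective, step sizes, and inner-loop length are all illustrative. The inner ascent loop serves as a crude approximate best-response oracle for the follower, in the spirit of (though not identical to) the max-oracle variant of [82].

```python
# Sequential gradient descent ascent on the assumed objective
# f(x, y) = x^2 + 2*x*y - y^2 (convex in x, concave in y; saddle point at (0, 0));
# the leader minimizes over x and the follower maximizes over y.
def grad_x(x, y):
    return 2 * x + 2 * y    # leader's gradient (descend along its negative)

def grad_y(x, y):
    return 2 * x - 2 * y    # follower's gradient (ascend along it)

x, y, lr = 1.0, -1.0, 0.05
for _ in range(2000):
    # Follower: a few ascent steps as a crude approximate best-response oracle.
    for _ in range(10):
        y += lr * grad_y(x, y)
    # Leader: one descent step against the (approximately) best-responding follower.
    x -= lr * grad_x(x, y)

print(f"approximate Stackelberg point: x = {x:.4f}, y = {y:.4f}")  # both near 0
```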

Finally, it is worth pointing out that AI methods have also been leveraged to cope with Stackelberg games; see, e.g., [254] and the survey [255].

IV-C Zero-Sum Differential Games

Among the methods for solving zero-sum DGs, the viscosity solution approach is the most widely exploited one, for which it is known that the value function is the solution of the Hamilton-Jacobi-Isaacs (HJI) equation. In the sequel, this approach is introduced for DGs (10) and (11); other detailed cases can be found in [45, 256].

For DGs (10) and (11), the Hamiltonian is defined as

$H(t,x,s)=\min_{u\in U}\max_{v\in V}\big\{\langle f(t,x,u,v),s\rangle+g(t,x,u,v)\big\}$,   (26)

and the HJI equation is given as

$\partial_t\varphi(t,x)+H\big(t,x,\nabla_x\varphi(t,x)\big)=0,\qquad \varphi(T,x)=\sigma(x)$,   (27)

where the second condition is called the terminal condition, $\varphi$ is a function of $(t,x)$, and $\partial_t\varphi$ and $\nabla_x\varphi$ represent the subgradients with respect to $t$ and $x$, respectively.

Fig. 5: A schematic illustration of applications of adversarial games.

Let $\Phi$ denote the set of functions $\varphi(t,x)$ satisfying the continuity condition in $(t,x)$ and the Lipschitz condition in $x$ on every bounded subset of the state space. From [195], it is known that if a function $\varphi\in\Phi$ is coinvariantly differentiable at each point $(t,x)$, satisfies the HJI equation (27), and $\varphi(T,x)=\sigma(x)$, then $\varphi$ is the value function of the differential game (10) and (11), and the optimal control strategies for the two players are given as

$u^{\circ}(t,x)\in\arg\min_{u\in U}\max_{v\in V}\mathcal{H}(t,x,u,v),\qquad v^{\circ}(t,x)\in\arg\max_{v\in V}\min_{u\in U}\mathcal{H}(t,x,u,v)$,   (28)

where

$\mathcal{H}(t,x,u,v)=\langle f(t,x,u,v),\nabla_x\varphi(t,x)\rangle+g(t,x,u,v)$.   (29)
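To illustrate how (26)-(29) can be used numerically, the following is a minimal sketch for an assumed scalar pursuit-evasion game, not taken from [195] or the specific DGs (10) and (11): with dynamics $\dot{x}=u+v$, $u\in[-1,1]$ (minimizer), $v\in[-1/2,1/2]$ (maximizer), zero running cost, and terminal cost $\sigma(x)=|x|$, the Hamiltonian (26) reduces to $H(p)=-|p|/2$, and the HJI equation (27) can be marched backward in time with a monotone Lax-Friedrichs scheme.

```python
import numpy as np

# Assumed 1-D game: xdot = u + v, u in [-1, 1] (minimizer), v in [-1/2, 1/2]
# (maximizer), no running cost, terminal cost sigma(x) = |x|.  Then (26) gives
# H(p) = min_u max_v p*(u + v) = -|p|/2, and (27) reads V_t - |V_x|/2 = 0 with
# V(T, x) = |x|.  In backward time tau = T - t this becomes W_tau + |W_x|/2 = 0.
L_dom, nx, T = 2.0, 401, 1.0
x = np.linspace(-L_dom, L_dom, nx)
dx = x[1] - x[0]
alpha = 0.5                      # bound on |H'(p)|, sets the numerical dissipation
dt = 0.5 * dx / alpha            # CFL-stable (monotone) step size
W = np.abs(x)                    # terminal condition sigma(x) = |x|

for _ in range(int(round(T / dt))):
    p_minus = np.diff(W, prepend=W[0]) / dx   # one-sided backward differences
    p_plus = np.diff(W, append=W[-1]) / dx    # one-sided forward differences
    # Lax-Friedrichs numerical Hamiltonian for H_hat(p) = |p|/2.
    H_num = 0.5 * np.abs(0.5 * (p_minus + p_plus)) - 0.5 * alpha * (p_plus - p_minus)
    W = W - dt * H_num

exact = np.maximum(np.abs(x) - 0.5 * T, 0.0)  # Hopf-Lax solution for this game
print("max abs error vs exact value:", np.max(np.abs(W - exact)))
# The error shrinks as the grid is refined, as expected for a monotone scheme.
```

The recovered value $V(0,x)=\max(|x|-T/2,\,0)$ reflects that the stronger minimizing player drives the state toward the origin at net rate $1/2$.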

Moreover, it should be noted that AI methods have also been applied to solve differential games; for example, reinforcement learning was employed to deal with multi-player nonlinear differential games in [257], where a novel two-level value-iteration-based integral reinforcement learning algorithm was proposed that depends only on partial information of the system dynamics.

V Applications

This section provides some practical applications of adversarial games. As a matter of fact, adversarial games have been leveraged to solve a wide range of realistic problems in the literature, as illustrated in Fig. 5, including poker [133], StarCraft [258], politics [259], infrastructure security [13], pursuit-evasion problems [186], border defense [170, 260, 19], national defense [18], communication scheduling [261], autonomous driving [262], homeland security [263], etc. In what follows, three well-known examples are provided to illustrate such applications.

Example 1 (Radar Jamming).

Radar jamming is one of the widely studied applications of zero-sum games in modern electronic warfare [264, 265]. In radar jamming, there exist two players: a radar, which aims to detect a target with as high a probability as possible, and a jammer, which aims to minimize the radar's detection probability by jamming it. Therefore, the two players are diametrically opposed, and the scenario forms a two-player zero-sum game (cf. Fig. 6 for a schematic illustration). Usually, according to the type of the target, utility functions can be constructed for distinct jamming scenarios, and constraints can be described mathematically based on physical limitations, such as the jammer power, the spatial extent of jamming, and the threshold parameter and reference window size of the radar. For example, a Swerling Type II target is assumed in [266] in the presence of Rayleigh-distributed clutter, for which utility functions are built for cell-averaging and order-statistic constant false alarm rate (CFAR) processors in three jamming scenarios, i.e., ungated range noise, range-gated noise, and false-target jamming. A minimal numerical sketch is given after Fig. 6.

Fig. 6: A schematic illustration of radar jamming.
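Once a particular jamming scenario has been reduced to a finite payoff matrix of detection probabilities, the radar's maximin (mixed) strategy and the game value can be computed by the standard LP formulation of zero-sum matrix games; the following sketch uses a purely hypothetical 3x3 matrix, not the utility functions of [266].

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical detection-probability matrix: entry [i, j] is the radar's
# detection probability under radar configuration i and jamming mode j.
P = np.array([[0.8, 0.3, 0.5],
              [0.4, 0.7, 0.4],
              [0.5, 0.4, 0.6]])
m, k = P.shape

# Standard LP for the row (radar) player: maximize w s.t. P^T x >= w, sum x = 1.
# Decision variables z = (x_1, ..., x_m, w); linprog minimizes, so c = (0, ..., 0, -1).
c = np.r_[np.zeros(m), -1.0]
A_ub = np.c_[-P.T, np.ones(k)]                # w - sum_i P[i, j] x_i <= 0 for each j
b_ub = np.zeros(k)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # probabilities sum to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
print("radar mixed strategy:", res.x[:m])
print("guaranteed detection probability (game value):", res.x[-1])
```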
Example 2 (Border Patrols).

It is an important task for a country to secure its national borders against illicit activities such as drug trafficking, contraband smuggling, and illegal entry. In this spirit, border patrols are introduced here as one application of SSGs, developed for Carabineros de Chile [170, 171] to thwart drug trafficking, contraband, and illegal entry. To this end, both day- and night-shift patrols along the border are arranged by Carabineros according to distinct requirements.

Here the focus is on the night-shift patrols. To make the problem practically implementable, the region is partitioned into police precincts, some of which are paired up when scheduling the patrols because of the vast expanses and harsh landscape at the border and the limited manpower. In addition, a set of vantage locations has been identified by Carabineros along the border of the region, which are suited for conducting surveillance with high-tech equipment, such as heat sensors and night goggles. A night-shift action means the deployment of a joint detail, with personnel from two paired precincts, to carry out vigilance overnight at a vantage location within the realm of the paired precincts. Meanwhile, in consideration of logistical constraints, a joint detail is deployed for every precinct pair to a surveillance location once a week. Fig. 7 illustrates a feasible weekly schedule with paired precincts and surveillance locations.

Fig. 7: Feasible schedule for a week, where stars and squares denote precinct headquarters and border outposts, respectively; adapted from [170].
Example 3 (Pursuit-Evasion Problems).

Pursuit-evasion problems are among the most prevalent applications of zero-sum DGs and have been widely applied to many practical problems, such as surveillance and navigation, in robotics, aerospace, and so forth. In pursuit-evasion problems, there usually exist a collection of pursuers and evaders (one pursuer and one evader in the simplest case), possibly with a moving target or a stationary target set/area, and the pursuers aim to capture or intercept the evaders, who have opposed objectives [186]. As a concrete example, consider a case with one pursuer (or defender) protecting a maritime coastline or border from attack by two slower aircraft (or evaders). The pursuer needs to pursue the evaders sequentially and strives to intercept them as far as possible from the coastline. Meanwhile, the two evaders can collaborate and strive to minimize their combined distance to the coastline before being intercepted. For this problem, a regular solution of the differential game was provided in [267].

VI Possible Future Directions

In view of the remaining challenges in adversarial games, this section presents potential future research directions, as discussed in the sequel.

  • Efficient Algorithm Design. Even though a wide range of algorithms have been proposed in the literature, as introduced above, efficient, fast, and optimal algorithms under limited computing, storage, and memory capabilities remain an overarching research direction in (adversarial) games and artificial intelligence, which is far from fully explored and spans a plethora of scenarios, e.g., equilibrium computation [226], real-time strategy (RTS) making [268], exploiting suboptimal opponents [269], attack resiliency [270], and so forth.

  • Last-Iterate Convergence. In general, no-regret learning can guarantee the convergence of the empirical distribution of play (i.e., time-average convergence) for each player to the set of NEs. However, last-iterate convergence fails in general [242, 243], although restricted classes of games, such as two-player zero-sum games [85], do enjoy last-iterate convergence under no-regret learning algorithms. Note that last-iterate convergence is important in many practical applications, for example, generative adversarial networks (GANs) [271] and economics [231], and it has been receiving growing interest in recent years [272].

  • Imperfect Information. Imperfect information, a main feature of many practical adversarial games, poses a major challenge and is still under active exploration, although an array of works have focused on it, e.g., [273, 115].

  • Large Games. For adversarial games with large action spaces and/or large numbers of infosets, practical limitations, such as limited computing resources, impose the need for efficient algorithms amenable to implementation with limited computation, storage, and even communication [274].

  • Incomplete Information. Incomplete information is another main hallmark of many adversarial games and one of their principal sources of difficulty. Generally speaking, game uncertainties, such as parameter uncertainty, action outcome uncertainty, and underlying world state uncertainty, can be subsumed in the category of incomplete information, and the main studied models are Bayesian and interval models [275, 145, 276].

  • Bounded Rationality. Completely rational players are often assumed in the study of games. Nonetheless, irrational players naturally appear in practice, which has triggered increasing interest in games with bounded rationality, e.g., behavioral models such as lens-QR models, prospect-theory-inspired models, and quantal response models [277, 278, 279].

  • Dynamic Environments. Most games have been investigated as static ones, i.e., with time-invariant game rules. However, due to the possibly dynamic characteristics of the environment within which players compete, online (or time-varying) games, where each player's utility function is time-varying or even adversarial without any distributional assumptions, deserve further attention in the future [227, 228, 229, 230, 17].

  • Hybrid Games. Many realistic adversarial games involve both continuous and discrete physical dynamics that govern the players' motion or the changing of rules, which can be cast in the framework of hybrid games [280, 281]. In this respect, how to combine game theory with hybrid control dynamics is an important yet challenging research area.

  • AI in Games. Recent years have witnessed great progress in applying AI methods to games, integrating advanced approaches from reinforcement learning, neural networks, meta-learning, and so on [282, 283, 135, 284]. With the advent of modern high-tech and big-data complex missions, AI methods provide an effective means of committing to real-time strategies by exploiting offline or real-time streaming data [139].

VII Conclusion

Adversarial games play a significant role in practical applications, and this survey provided a systematic overview of them in terms of three main categories, i.e., zero-sum normal- and extensive-form games, Stackelberg (security) games, and zero-sum differential games. To this end, several distinct angles have been employed to anatomize adversarial games, ranging from game models, solution concepts, problem classifications, research frontiers, prevailing algorithms, and real-world applications to potential future directions. In general, this survey has attempted to summarize past research in a complete manner, although the existing literature is too vast to cover in its entirety. To the best of our knowledge, this survey is the first to present a systematic overview of adversarial games. Finally, possible future directions have also been discussed.

References

  • [1] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, 2nd ed.   Princeton University Press, 1947.
  • [2] J. F. Nash, “Equilibrium points in n-person games,” Proceedings of the National Academy of Sciences, vol. 36, no. 1, pp. 48–49, 1950.
  • [3] J. Nash, “Non-cooperative games,” Annals of Mathematics, vol. 54, no. 2, pp. 286–295, 1951.
  • [4] D. Fudenberg and J. Tirole, Game Theory.   MIT Press, 1991.
  • [5] M. J. Osborne and A. Rubinstein, A Course in Game Theory.   MIT Press, 1994.
  • [6] T. Başar and G. Zaccour, Handbook of Dynamic Game Theory.   Springer International Publishing, 2018.
  • [7] R. J. Aumann, M. Maschler, and R. E. Stearns, Repeated Games with Incomplete Information.   MIT Press, 1995.
  • [8] N. Bard, J. Hawkin, J. Rubin, and M. Zinkevich, “The annual computer poker competition,” AI Magazine, vol. 34, no. 2, pp. 112–112, 2013.
  • [9] T. H. Nguyen, D. Kar, M. Brown, A. Sinha, A. X. Jiang, and M. Tambe, “Towards a science of security games,” in Mathematical Sciences with Multidisciplinary Applications, 2016, pp. 347–381.
  • [10] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
  • [11] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of Go without human knowledge,” Nature, vol. 550, no. 7676, pp. 354–359, 2017.
  • [12] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel et al., “A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play,” Science, vol. 362, no. 6419, pp. 1140–1144, 2018.
  • [13] A. Sinha, F. Fang, B. An, C. Kiekintveld, and M. Tambe, “Stackelberg security games: Looking beyond a decade of success,” in International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, 2018, pp. 5494–5501.
  • [14] H. Li, X. Wang, F. Jia, Y. Li, and Q. Chen, “A survey of Nash equilibrium strategy solving based on CFR,” Archives of Computational Methods in Engineering, vol. 28, no. 4, pp. 2749–2760, 2021.
  • [15] M. K. Sohrabi and H. Azgomi, “A survey on the combined use of optimization methods and game theory,” Archives of Computational Methods in Engineering, vol. 27, no. 1, pp. 59–80, 2020.
  • [16] J. Wang, Y. Hong, J. Wang, J. Xu, Y. Tang, Q.-L. Han, and J. Kurths, “Cooperative and competitive multi-agent systems: From optimization to games,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 5, pp. 763–783, 2022.
  • [17] X. Li, L. Xie, and N. Li, “A survey of decentralized online learning,” arXiv preprint arXiv:2205.00473, 2022.
  • [18] E. Ho, A. Rajagopalan, A. Skvortsov, S. Arulampalam, and M. Piraveenan, “Game theory in defence applications: A review,” Sensors, vol. 22, no. 3, p. 1032, 2022.
  • [19] D. Shishika and V. Kumar, “A review of multi-agent perimeter defense games,” in International Conference on Decision and Game Theory for Security, College Park, USA, 2020, pp. 472–485.
  • [20] M. Zhu, A. H. Anwar, Z. Wan, J.-H. Cho, C. A. Kamhoua, and M. P. Singh, “A survey of defensive deception: Approaches using game theory and machine learning,” IEEE Communications Surveys & Tutorials, vol. 23, no. 4, pp. 2460–2493, 2021.
  • [21] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, J. Pérolat, D. Silver, and T. Graepel, “A unified game-theoretic approach to multiagent reinforcement learning,” in Advances in Neural Information Processing Systems, vol. 30, Long Beach, CA, USA, 2017.
  • [22] M. L. Littman, “Markov games as a framework for multi-agent reinforcement learning,” in Machine Learning Proceedings, 1994, pp. 157–163.
  • [23] S. Zamir et al., “Bayesian games: Games with incomplete information,” Tech. Rep., 2008.
  • [24] X. Chen, X. Deng, and S.-H. Teng, “Settling the complexity of computing two-player Nash equilibria,” Journal of the ACM (JACM), vol. 56, no. 3, pp. 1–57, 2009.
  • [25] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou, “The complexity of computing a Nash equilibrium,” SIAM Journal on Computing, vol. 39, no. 1, pp. 195–259, 2009.
  • [26] A. Rubinstein, Hardness of Approximation Between P and NP.   Morgan & Claypool, 2019.
  • [27] R. J. Aumann, “Subjectivity and correlation in randomized strategies,” Journal of Mathematical Economics, vol. 1, no. 1, pp. 67–96, 1974.
  • [28] J. Hannan, “Approximation to Bayes risk in repeated play,” Contributions to the Theory of Games, vol. 3, no. 2, pp. 97–139, 1957.
  • [29] J. von Neumann, “Zur theorie der gesellschaftsspiele,” Mathematische Annalen, vol. 100, no. 1, pp. 295–320, 1928.
  • [30] G. Farina, T. Bianchi, and T. Sandholm, “Coarse correlation in extensive-form games,” in AAAI Conference on Artificial Intelligence, vol. 34, no. 2, 2020, pp. 1934–1941.
  • [31] A. Celli, S. Coniglio, and N. Gatti, “Computing optimal coarse correlated equilibria in sequential games,” arXiv preprint arXiv:1901.06221, 2019.
  • [32] A. Celli and N. Gatti, “Computational results for extensive-form adversarial team games,” in AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
  • [33] B. von Stengel and D. Koller, “Team-maxmin equilibria,” Games and Economic Behavior, vol. 21, no. 1-2, pp. 309–321, 1997.
  • [34] S. Omidshafiei, C. Papadimitriou, G. Piliouras, K. Tuyls, M. Rowland, J.-B. Lespiau, W. M. Czarnecki, M. Lanctot, J. Perolat, and R. Munos, “α-rank: Multi-agent evaluation by evolution,” Scientific Reports, vol. 9, no. 1, pp. 1–29, 2019.
  • [35] H. Von Stackelberg, Marktform und gleichgewicht.   Springer-Verlag, Berlin, 1934.
  • [36] B. An, F. Ordóñez, M. Tambe, E. Shieh, R. Yang, C. Baldwin, J. DiRenzo III, K. Moretti, B. Maule, and G. Meyer, “A deployed quantal response-based patrol planning system for the U.S. coast guard,” Interfaces, vol. 43, no. 5, pp. 400–420, 2013.
  • [37] C. Casorrán, B. Fortz, M. Labbé, and F. Ordóñez, “A study of general and security Stackelberg game formulations,” European Journal of Operational Research, vol. 278, no. 3, pp. 855–868, 2019.
  • [38] V. Conitzer and T. Sandholm, “Computing the optimal strategy to commit to,” in Proceedings of the 7th ACM conference on Electronic Commerce, Michigan, USA, 2006, pp. 82–90.
  • [39] G. Leitmann, “On generalized Stackelberg strategies,” Journal of Optimization Theory and Applications, vol. 26, no. 4, pp. 637–643, 1978.
  • [40] H. von Stackelberg, Market Structure and Equilibrium.   Springer Science & Business Media, 2011.
  • [41] T. Başar and G. J. Olsder, Dynamic Noncooperative Game Theory.   SIAM, 1998.
  • [42] R. Isaacs, Differential Games.   Wiley, New York, 1965.
  • [43] F. L. Lewis, D. Vrabie, and V. L. Syrmos, Optimal Control.   John Wiley & Sons, 2012.
  • [44] R. Buckdahn, P. Cardaliaguet, and M. Quincampoix, “Some recent aspects of differential game theory,” Dynamic Games and Applications, vol. 1, no. 1, pp. 74–114, 2011.
  • [45] A. Friedman, Differential Games.   Courier Corporation, 2013.
  • [46] J. Garg, A. X. Jiang, and R. Mehta, “Bilinear games: Polynomial time algorithms for rank based subclasses,” in International Workshop on Internet and Network Economics, Singapore, 2011, pp. 399–407.
  • [47] C. E. Lemke and J. T. Howson, Jr, “Equilibrium points of bimatrix games,” Journal of the Society for Industrial and Applied Mathematics, vol. 12, no. 2, pp. 413–423, 1964.
  • [48] I. Anagnostides and P. Penna, “Solving zero-sum games through alternating projections,” arXiv preprint arXiv:2010.00109, 2021.
  • [49] L. C. Dinh, Y. Yang, Z. Tian, N. P. Nieves, O. Slumbers, D. H. Mguni, H. B. Ammar, and J. Wang, “Online double oracle,” arXiv preprint arXiv:2103.07780, 2021.
  • [50] A. Murhekar, “Approximate Nash equilibria of imitation games: Algorithms and complexity,” in International Conference on Autonomous Agents and Multiagent Systems, 2020, pp. 887–894.
  • [51] E. Borel, “La théorie du jeu et les équations intégrales à noyau symétrique,” Comptes rendus de l’Académie des Sciences, vol. 173, no. 1304-1308, p. 58, 1921.
  • [52] J. T. Howson Jr, “Equilibria of polymatrix games,” Management Science, vol. 18, no. 5-part-1, pp. 312–318, 1972.
  • [53] G. Sengodan and C. Arumugasamy, “Linear complementarity problems and bilinear games,” Applications of Mathematics, vol. 65, no. 5, pp. 665–675, 2020.
  • [54] A. Deligkas, M. Fasoulakis, and E. Markakis, “A polynomial-time algorithm for 1/3-approximate Nash equilibria in bimatrix games,” arXiv preprint arXiv:2204.11525, 2022.
  • [55] A. Deligkas, J. Fearnley, and R. Savani, “Tree polymatrix games are PPAD-hard,” arXiv preprint arXiv:2002.12119, 2020.
  • [56] S. Seddighin, “Campaigning via LPs: Solving Blotto and Beyond,” Ph.D. dissertation, University of Maryland, College Park, 2019.
  • [57] R. Mehta, “Constant rank two-player games are PPAD-hard,” SIAM Journal on Computing, vol. 47, no. 5, pp. 1858–1887, 2018.
  • [58] S. Boodaghians, J. Brakensiek, S. B. Hopkins, and A. Rubinstein, “Smoothed complexity of 2-player Nash equilibria,” in Annual Symposium on Foundations of Computer Science, 2020, pp. 271–282.
  • [59] S. Behnezhad, A. Blum, M. Derakhshan, M. Hajiaghayi, C. H. Papadimitriou, and S. Seddighin, “Optimal strategies of Blotto games: Beyond convexity,” in Proceedings of ACM Conference on Economics and Computation, Phoenix, AZ, USA, 2019, pp. 597–616.
  • [60] S. Behnezhad, S. Dehghani, M. Derakhshan, M. Hajiaghayi, and S. Seddighin, “Fast and simple solutions of Blotto games,” Operations Research, DOI: 10.1287/opre.2022.2261, 2022.
  • [61] D. Beaglehole, “An efficient approximation algorithm for the Colonel Blotto game,” arXiv preprint arXiv:2201.10758, 2022.
  • [62] V. Leon and S. R. Etesami, “Bandit learning for dynamic Colonel Blotto game with a budget constraint,” arXiv preprint arXiv:2103.12833, 2021.
  • [63] D. Q. Vu, P. Loiseau, and A. Silva, “Approximate equilibria in generalized Colonel Blotto and generalized Lottery Blotto games,” arXiv preprint arXiv:1910.06559, 2019.
  • [64] E. Boix-Adserà, B. L. Edelman, and S. Jayanti, “The multiplayer Colonel Blotto game,” Games and Economic Behavior, vol. 129, pp. 15–31, 2021.
  • [65] E.-V. Vlatakis-Gkaragkounis, L. Flokas, and G. Piliouras, “Poincaré recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games,” in Advances in Neural Information Processing Systems, vol. 32, Vancouver, BC, Canada, 2019, pp. 1–12.
  • [66] G. Zhang, Y. Wang, L. Lessard, and R. B. Grosse, “Near-optimal local convergence of alternating gradient descent-ascent for minimax optimization,” in International Conference on Artificial Intelligence and Statistics, 2022, pp. 7659–7679.
  • [67] E. Y. Hamedani and N. S. Aybat, “A primal-dual algorithm with line search for general convex-concave saddle point problems,” SIAM Journal on Optimization, vol. 31, no. 2, pp. 1299–1329, 2021.
  • [68] V. Tominin, Y. Tominin, E. Borodich, D. Kovalev, A. Gasnikov, and P. Dvurechensky, “On accelerated methods for saddle-point problems with composite structure,” arXiv preprint arXiv:2103.09344, 2021.
  • [69] G. Xie, Y. Han, and Z. Zhang, “DIPPA: An improved method for bilinear saddle point problems,” arXiv preprint arXiv:2103.08270, 2021.
  • [70] D. Kovalev, A. Gasnikov, and P. Richtárik, “Accelerated primal-dual gradient method for smooth and convex-concave saddle-point problems with bilinear coupling,” arXiv preprint arXiv:2112.15199, 2021.
  • [71] K. K. Thekumparampil, N. He, and S. Oh, “Lifted primal-dual method for bilinearly coupled smooth minimax optimization,” arXiv preprint arXiv:2201.07427, 2022.
  • [72] G. Gidel, T. Jebara, and S. Lacoste-Julien, “Frank-Wolfe algorithms for saddle point problems,” in International Conference on Artificial Intelligence and Statistics, Florida, USA, 2017, pp. 362–371.
  • [73] C. Chen, L. Luo, W. Zhang, and Y. Yu, “Efficient projection-free algorithms for saddle point problems,” in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 10 799–10 808.
  • [74] H. Li, Y. Tian, J. Zhang, and A. Jadbabaie, “Complexity lower bounds for nonconvex-strongly-concave min-max optimization,” in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 1–13.
  • [75] Y.-P. Hsieh, P. Mertikopoulos, and V. Cevher, “The limits of min-max optimization algorithms: Convergence to spurious non-critical sets,” in International Conference on Machine Learning, 2021, pp. 4337–4348.
  • [76] C.-Y. Wei, C.-W. Lee, M. Zhang, and H. Luo, “Linear last-iterate convergence in constrained saddle-point optimization,” in International Conference on Learning Representations, 2021, pp. 1–12.
  • [77] I. Bistritz, Z. Zhou, X. Chen, N. Bambos, and J. Blanchet, “No weighted-regret learning in adversarial bandits with delays,” Journal of Machine Learning Research, vol. 23, pp. 1–43, 2022.
  • [78] T. Fiez, R. Sim, S. Skoulakis, G. Piliouras, and L. Ratliff, “Online learning in periodic zero-sum games,” vol. 34, 2021, pp. 1–13.
  • [79] H. Gao, X. Wang, L. Luo, and X. Shi, “On the convergence of stochastic compositional gradient descent ascent method,” in International Joint Conference on Artificial Intelligence, 2021, pp. 1–7.
  • [80] A. Beznosikov, G. Scutari, A. Rogozin, and A. Gasnikov, “Distributed saddle-point problems under data similarity,” vol. 34, 2021.
  • [81] E.-V. Vlatakis-Gkaragkounis, L. Flokas, and G. Piliouras, “Solving min-max optimization with hidden structure via gradient descent ascent,” in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 1–14.
  • [82] D. Goktas and A. Greenwald, “Convex-concave min-max Stackelberg games,” in Advances in Neural Information Processing Systems, vol. 34, 2021.
  • [83] D. Xefteris, “Symmetric zero-sum games with only asymmetric equilibria,” Games and Economic Behavior, vol. 89, pp. 122–125, 2015.
  • [84] Y. Cai and C. Daskalakis, “On minmax theorems for multiplayer games,” in Proceedings of Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, 2011, pp. 217–234.
  • [85] I. Anagnostides, I. Panageas, G. Farina, and T. Sandholm, “On last-iterate convergence beyond zero-sum games,” arXiv preprint arXiv:2203.12056, 2022.
  • [86] J. P. Bailey, “O(1/T) time-average convergence in a generalization of multiagent zero-sum games,” arXiv preprint arXiv:2110.02482, 2021.
  • [87] T. Fiez, R. Sim, S. Skoulakis, G. Piliouras, and L. Ratliff, “Online learning in periodic zero-sum games: von Neumann vs Poincaré.”
  • [88] S. Skoulakis, T. Fiez, R. Sim, G. Piliouras, and L. Ratliff, “Evolutionary game theory squared: Evolving agents in endogenously evolving zero-sum games,” in AAAI Conference on Artificial Intelligence, 2021, pp. 1–9.
  • [89] E. Hughes, T. W. Anthony, T. Eccles, J. Z. Leibo, D. Balduzzi, and Y. Bachrach, “Learning to resolve alliance dilemmas in many-player zero-sum games,” arXiv preprint arXiv:2003.00799, 2020.
  • [90] S. Ganzfried, “Fast complete algorithm for multiplayer Nash equilibrium,” arXiv preprint arXiv:2002.04734, 2020.
  • [91] I. Anagnostides, C. Daskalakis, G. Farina, M. Fishelson, N. Golowich, and T. Sandholm, “Near-optimal no-regret learning for correlated equilibria in multi-player general-sum games,” arXiv preprint arXiv:2111.06008, 2021.
  • [92] I. Anagnostides, G. Farina, C. Kroer, A. Celli, and T. Sandholm, “Faster no-regret learning dynamics for extensive-form correlated and coarse correlated equilibria,” arXiv preprint arXiv:2202.05446, 2022.
  • [93] G. Gidel, “Multi-player games in the era of machine learning,” Ph.D. dissertation, Université de Montréal, 2020.
  • [94] Y. Zhang and B. An, “Converging to team-maxmin equilibria in zero-sum multiplayer games,” in International Conference on Machine Learning, 2020, pp. 11 033–11 043.
  • [95] F. Kalogiannis, E.-V. Vlatakis-Gkaragkounis, and I. Panageas, “Teamwork makes von Neumann work: Min-max optimization in two-team zero-sum games,” arXiv preprint arXiv:2111.04178, 2021.
  • [96] K. A. Hansen, T. D. Hansen, P. B. Miltersen, and T. B. Sørensen, “Approximability and parameterized complexity of minmax values,” in International Workshop on Internet and Network Economics, 2008, pp. 684–695.
  • [97] C. Borgs, J. Chayes, N. Immorlica, A. T. Kalai, V. Mirrokni, and C. Papadimitriou, “The myth of the folk theorem,” Games and Economic Behavior, vol. 70, no. 1, pp. 34–43, 2010.
  • [98] B. Gharesifard and J. Cortés, “Distributed convergence to Nash equilibria in two-network zero-sum games,” Automatica, vol. 49, no. 6, pp. 1683–1692, 2013.