Nash Games Among Stackelberg Leaders

October 14, 2019, by Margarida Carvalho et al.

We analyze Nash games played among leaders of Stackelberg games (NASP). We show that it is Σ^p_2-hard to decide whether the game has a mixed-strategy Nash equilibrium (MNE), even when there are only two leaders and each leader has one follower. We provide a finite-time algorithm with a running time bounded by O(2^{2^n}) that computes an MNE for a NASP when one exists and reports infeasibility if no MNE exists. We also provide two ways to improve the algorithm, which involve constructing a series of inner approximations (alternatively, outer approximations) to the leaders' feasible regions that provably yield the required MNE. Finally, we test our algorithms on a range of NASPs arising from a game in the energy market, where countries act as Stackelberg leaders who play a Nash game among themselves, and the domestic producers act as the followers.

Code repository: EPECsolve (code to compute mixed equilibria in linear EPECs).

1 Introduction

Game-theoretical frameworks are powerful tools to model complex interactions among multiple strategic agents, where each agent solves an optimization problem that is affected by the remaining agents' decisions. Typically, the agents (otherwise referred to as the players) have conflicting objectives, and each agent's decision could affect the feasible sets and/or the objectives of the other agents. In non-cooperative game theory, these agents optimize their decisions to maximize their utility in a non-cooperative way, and this setting has been studied extensively in the literature. For a general view of game theory we refer the reader to the classic books of Fudenberg and Tirole (1991) and Owen (1985), and, for algorithmically oriented treatments, to Nisan et al. (2007) and Shoham and Leyton-Brown (2009).

In this paper, we exclusively consider the setting where the agents do not cooperate with each other, commonly referred to as a non-cooperative game. We also work under the standard assumption that all players have complete information about every other player's optimization problem. Such games are further classified as Nash games or Stackelberg games, depending on whether the agents decide simultaneously or sequentially.

In Nash games, a finite set of players typically makes decisions simultaneously, each with an objective and an understanding of the other players in the game and of their influence on that objective. Nash games gained popularity with the Nobel prize-winning work of Nash (1950, 1951), which proved the existence of an equilibrium (which came to be known as a Nash equilibrium) for games in which each player has finitely many strategies. The concept was later extended to more general games where players can have a continuum of strategies.

Such games are now extensively used to model interactions between agents in various markets. For example, the world gas model (Egging et al., 2010), the North American natural gas models (Feijoo et al., 2016, Sankaranarayanan et al., 2018, Feijoo et al., 2018), the European gas models (Holz et al., 2008, Egging et al., 2008) and the competitive transportation model (Stein and Sudermann-Merx, 2018) solve Nash games where each player simultaneously solves a convex optimization problem parameterized in the other players' variables. There are also examples of such games where the players' optimization problems are non-convex (due to integer decision variables), for instance, the cross-border kidney exchange program model (Carvalho et al., 2017), the competitive lot-sizing models (Li and Meissner, 2011, Carvalho et al., 2018), and the fixed-charge transportation models (Sudermann-Merx et al., 2018).

In sequential games, there is a strict ordering of the players in terms of who decides first. Sequential games have been studied at least since the seminal results of Jeroslow (1985), which prove that the computational complexity of solving a sequential game goes one level up in the polynomial hierarchy for every additional level (understood as a round) in the game. This strong negative result explains why games with more than two levels are rarely used in practice. Sequential games with exactly two levels are referred to as Stackelberg games or bilevel games, and they have been analyzed extensively. These games have a leader who decides first, keeping in mind that one or more players, called the followers, will make decisions based upon her decision. They are relevant, for instance, when a country (the leader) wants to set taxes, perhaps in order to maximize tax revenue or stabilize the economy, and the indigenous producers (the followers) react to the leader's decision by choosing the quantities they produce so as to maximize their profit or welfare. For example, bilevel formulations are used by Bard et al. (1998, 2000) to determine the government's (leader) optimal tax credits for biofuel production, which impact the behavior of the agricultural sector (followers); in pricing problems where the leader sets prices on activities and the followers react by selecting subsets of activities (Labbé and Violin, 2013); and in network pricing problems where the leader sets tariffs for a subset of arcs of a multicommodity transportation network (Brotcorne et al., 2008). Such models are also used in the context of pricing and environmental policies for electricity markets, where the power generators are normally considered as the leaders and the network operator (also known as the Independent System Operator, ISO) as the follower (Hobbs et al., 2000, Gabriel and Leuthold, 2010, Feijoo and Das, 2014).

In this paper, we discuss a combination of these themes that we refer to as a Nash game among Stackelberg players (NASP). It refers to a setting where the leaders of two or more Stackelberg games are engaged in a Nash game. An example of such a problem, which will be referred to as a trivial NASP, has two leaders, each with a single follower that solves a linear program; an illustration is sketched below.
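The explicit formulation is not reproduced here; the following sketch, written entirely in hypothetical notation of our own, only illustrates the structure of such a trivial NASP: two leaders play a Nash game through their objectives, and each leader is constrained by the optimal response of a single linear-programming follower.

\begin{align*}
\text{(Latin leader)}\quad & \min_{x,\,y}\;\; c^\top x + (C\xi)^\top x + d^\top y\\
& \;\text{s.t.}\;\; A x + B y \le b,\qquad
   y \in \arg\min_{y'} \bigl\{ (D x)^\top y' + f^\top y' \;:\; E y' \le g,\; y' \ge 0 \bigr\},\\[4pt]
\text{(Greek leader)}\quad & \min_{\xi,\,\psi}\;\; \gamma^\top \xi + (\Gamma x)^\top \xi + \theta^\top \psi\\
& \;\text{s.t.}\;\; \Lambda \xi + \Theta \psi \le \lambda,\qquad
   \psi \in \arg\min_{\psi'} \bigl\{ (\Delta \xi)^\top \psi' + \phi^\top \psi' \;:\; \Xi \psi' \le \eta,\; \psi' \ge 0 \bigr\}.
\end{align*}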

NASPs form a subclass of the games referred to as equilibrium problems with equilibrium constraints (EPECs), which are used extensively in electricity markets (Hu and Ralph, 2007, Ralph and Smeers, 2006, Neetzow et al., 2019, Feijoo and Das, 2014). Hu and Ralph (2007) provide sufficient conditions under which a pure-strategy Nash equilibrium (defined formally later) exists for a given EPEC. For general problems, they provide algorithms to obtain weaker equilibrium points, which they call local Nash and Nash-stationary equilibria. Kim and Ferris (2019) recently introduced a computational framework to model EPECs, also addressing issues related to their complementarity reformulations.

The motivation for our paper is international energy trade amidst climate change. We model the governments of countries interacting with each other in energy trade as a Nash game. Further, each government is also involved in a Stackelberg game with its domestic energy producers. The government decides the tax for the producers in its country, or alternatively a carbon tax based on the emissions the domestic producers generate. The government is obliged to ensure energy sufficiency in the country by trading energy internationally and imposing sufficiently low taxes so that the domestic producers produce enough, while also pursuing an objective of minimizing emissions, which motivates higher taxes on producers emitting more greenhouse gases. Given the tax policy of the country, the domestic producers, who act as followers of the Stackelberg game, decide the quantity of energy to produce in order to maximize their profits. A schematic representation of the interaction between the players is shown in Figure 1.

Figure 1: Schematic representation of NASP

In an attempt to solve this problem, we have two concrete contributions.

  1. To the best of our knowledge, we provide the first exact algorithm to solve NASPs, even under the restricted setting where the objectives and constraints of all players are linear, except the followers' objectives, which could be convex quadratic. Without further assumptions on the compactness of the domain or on the type of interaction between the players (such as a perfect-competition or Cournot-competition assumption), we solve the problem to obtain mixed-strategy Nash equilibria. We are also precise in identifying only Nash equilibria and never a relaxed version of them; conversely, we always identify a Nash equilibrium whenever one exists. Our algorithm performs O(2^{2^n}) elementary operations (where n is the number of bits required to represent the problem) to identify a mixed-strategy Nash equilibrium for a NASP.

  2. The number of elementary operations required to solve a NASP is doubly exponential in the size of the binary representation of the problem. This might make the algorithm look inefficient, and one might wonder whether the problem could be solved faster. We show that, without strong consequences in complexity theory, namely the collapse of the polynomial hierarchy, one cannot produce algorithms asymptotically faster than the one provided in this paper, except for lower-order factors. We show that this result holds irrespective of whether one is interested in identifying, or asserting the existence of, pure-strategy or mixed-strategy Nash equilibria for NASPs.

In addition, we provide a computational framework to model energy trade games between countries as described above. We use instances of this type of problem to test our algorithms and provide computational results.

We organize the manuscript as follows. Section 2 is devoted to definitions and previously known results used in this paper. Section 3 reduces the SUBSET SUM INTERVAL problem to NASP in order to state and prove computational complexity results on NASPs. Section 4 presents an algorithm to find an MNE for a NASP, proves its finiteness and correctness, and extends it with a primal-dual approach for acceleration.

2 Preliminaries

This section is devoted to defining the concepts used in the rest of the manuscript. In the following subsections, we define concepts corresponding to Nash games and bilevel games, and recall relevant known results in game theory.

2.1 Nash game

A Nash game corresponds to a problem where the players of the game decide simultaneously and non-cooperatively, each trying to optimize their own objective subject to some constraints. We formally define a Nash game and some of its sub-types as follows.

Definition 1 (Nash games).
  1. A Nash game is defined as a finite tuple of optimization problems , where each is in the following form.

    where is defined as . We call the problem of the player. With the same terminology, and are respectively the objective function and the feasible set of the player.

  2. A Nash game is simple if, for each in Statement 1, the objective is of the form

    (3)

    for some , of appropriate dimensions. If, for each player in Statement 1, , the simple Nash game is said to be linear (i.e., the objective function of each leader is linear).

  3. A facile Nash game is a simple Nash game where, for each in Statement 1, is a polyhedron.
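The explicit form of the players' problems and of the objective (3) is not reproduced above. As a hedged illustration with hypothetical symbols of our own ($c^i$, $C^i$, $\mathcal{F}^i$ are not the paper's), the $i$-th player of a simple Nash game solves a problem of the following flavor, in which the opponents' variables $x^{-i}$ enter only through a bilinear term in the objective:

\[
\min_{x^i} \;\Bigl\{\; (c^i)^\top x^i + (x^{-i})^\top C^i x^i \;:\; x^i \in \mathcal{F}^i \;\Bigr\},
\]

so that, once $x^{-i}$ is fixed, the objective is linear in $x^i$; the game is facile when, in addition, each $\mathcal{F}^i$ is a polyhedron.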

Definition 2 (Pure strategies).

The set of all feasible solutions to the Nash game is called the set of pure strategies and is denoted by .

(4)
Definition 3 (Pure-strategy Nash equilibrium).

Let be a Nash game as in Definition 1. is a pure-strategy Nash equilibrium (PNE) for if for all , solves

A generalization of pure strategies is mixed strategies, a concept motivated by the fact that each player can choose randomly from a finite set of pure strategies, each with some probability, with the probabilities summing to one. We formalize the corresponding equilibrium concept below.

Definition 4 (Mixed-strategy Nash equilibria).

Let each component of the strategy profile be a Borel probability distribution with finite support on the corresponding player's set of pure strategies. Then the profile is a mixed-strategy Nash equilibrium (MNE) if, for every player, no unilateral change of their own distribution improves their expected objective, where the expected objectives are defined analogously to the pure-strategy case.
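As an illustration with hypothetical notation (the symbols below are ours), a finitely supported mixed strategy of player $i$ over pure strategies $x^i_1, \dots, x^i_{k_i}$ can be written as

\[
\sigma^i \;=\; \sum_{j=1}^{k_i} p^i_j\, \delta_{x^i_j},
\qquad p^i_j \ge 0,
\qquad \sum_{j=1}^{k_i} p^i_j = 1,
\]

where $\delta_{x}$ denotes the Dirac measure at $x$, and the profile $\sigma = (\sigma^1,\dots,\sigma^n)$ is an MNE if, for every player $i$ and every alternative finitely supported distribution $\tilde{\sigma}^i$,

\[
\mathbb{E}_{x \sim \sigma}\bigl[f^i(x^i; x^{-i})\bigr]
\;\le\;
\mathbb{E}_{x \sim (\tilde{\sigma}^i,\, \sigma^{-i})}\bigl[f^i(x^i; x^{-i})\bigr],
\]

i.e., no player can decrease their expected objective by a unilateral change of their distribution.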

Under some weak assumptions, an MNE always exists (Nash, 1950, 1951). However, computing an MNE can be difficult in practice: even for games with two players and finitely many strategies each, the problem is PPAD-complete (Chen and Deng, 2006).

2.2 Stackelberg games

In contrast to Nash games, where players take decisions simultaneously, in a bilevel (or, more generally, a multi-level) game players take decisions in a given order. This structure arises naturally in the so-called Stackelberg game, where the leader decides first, optimizing their objective subject to some constraints. Subsequently, the follower decides, with its objective and constraints now depending upon the leader's decision (Candler and Townsley, 1982). In this manuscript, we restrict ourselves to bilevel games of a specific form. In order to establish a canonical form to model how the leader's variables affect the follower's, we introduce the concept of simple parameterization.

Definition 5 (Simple parameterization).
  1. An optimization problem in is said to have a simple parameterization with respect to if the problem is of the following form:

    objective (5a), subject to constraints (5b)–(5c),

    where is some function, and the remaining matrices and vectors are of appropriate dimensions.

    In particular, the optimization problem is a convex quadratic program with simple parameterization with respect to if, for some and , and is a polyhedron.

  2. A Nash game is said to have a simple parameterization with respect to if each optimization problem has a simple parameterization with respect to .
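The explicit form (5) is omitted above. Purely as an illustration, and under our own assumption that the parameterizing variables $x$ enter only through a bilinear term in the objective, a convex quadratic program with a simple parameterization with respect to $x$ could look like

\[
\min_{y}\;\tfrac{1}{2}\, y^\top Q y + c^\top y + (Cx)^\top y
\quad\text{s.t.}\quad A y \le b,\;\; y \ge 0,
\]

with $Q \succeq 0$, so that the feasible set $\{y : A y \le b,\ y \ge 0\}$ is a polyhedron that does not depend on $x$; the exact placement of $x$ follows the paper's eq. (5), which we do not reproduce here.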

Definition 6 (Stackelberg game).

Let be a Nash game with a simple parameterization with respect to some . Let . Then an optimization problem of the form

subject to constraints (6a)–(6c)

is called a Stackelberg game.

Remark 7.

Note that, in the optimization problem eq. 6, the follower's variables appear under the minimization. Thus, the optimistic version of eq. 6 is being considered. In other words, if for a given leader's decision the follower has multiple optimal solutions, then the follower chooses, among them, the one that benefits the leader the most.
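As a sketch with hypothetical notation of our own, in the single-follower case highlighted in Remark 9 the optimistic Stackelberg game of eq. (6) has the structure

\[
\min_{x,\,y}\; F(x,y)
\quad\text{s.t.}\quad (x,y) \in \mathcal{X},
\qquad y \in \arg\min_{y'} \bigl\{\, g(y'; x) \;:\; y' \in \mathcal{Y} \,\bigr\},
\]

where minimizing jointly over $x$ and $y$ encodes the optimistic assumption: among the follower's optimal responses to $x$, the one most favorable to the leader is selected. In a simple Stackelberg game, $F$ is linear, the leader's feasible set is a polyhedron, and the lower level is a facile Nash game with a simple parameterization with respect to $x$.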

Definition 8 (Simple Stackelberg game).

A Stackelberg game is simple if is a facile Nash game with a simple parameterization with respect to , is a polyhedron and is a linear function.

Remark 9.

Observe that particular structures in Stackelberg games result in the following:

  • If is an optimization problem, eq. 6 reduces to a bilevel programming problem.

  • In eq. 6, if is a polyhedron, is a linear function, and is a linear program with a simple parameterization with respect to , then we obtain a continuous bilevel linear programming problem, which is known to be NP-hard (Bard, 1991). Note that this is indeed a simple bilevel problem.

  • In eq. 6, if is the intersection of a polyhedron and , is a linear function, and is a mixed-integer linear program with a simple parameterization with respect to , we obtain a mixed-integer bilevel linear program. This problem is known to be Σ^p_2-hard (Lodi et al., 2014). Therefore, it is very unlikely that these problems admit computationally efficient algorithms. Note that it can be shown that this is not a simple bilevel problem.

Below, we define linear complementarity problems (LCPs), whose connection with the PNE of facile Nash games will be established in Theorem 14. In particular, LCPs enable us to derive and model first-order optimality conditions for quadratic programs (Cottle et al., 2009) and hence to reformulate a class of games.

Definition 10 (Linear complementarity problem).

Given $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$, the linear complementarity problem (LCP) is to find $x \in \mathbb{R}^n$ such that

$0 \le x \perp Mx + q \ge 0$,   (7)

or to show that no such $x$ exists. Here $x \perp Mx + q$ means $x^\top (Mx + q) = 0$, which is equivalent to $x_i (Mx + q)_i = 0$ for every $i$. Also, we denote the set of all $x$ satisfying eq. 7 as the feasible set induced by the LCP.

Deciding whether an LCP is feasible is NP-complete, but there are algorithms that solve LCPs efficiently in practice (Dirkse and Ferris, 1995).
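To make the combinatorial nature of eq. (7) concrete, below is a minimal brute-force sketch that searches over complementary index sets of a small LCP. It is purely illustrative (the function name and instance are ours) and is not the kind of method used in practice, e.g., the methods of Dirkse and Ferris (1995).

import itertools
import numpy as np

def solve_lcp_bruteforce(M, q, tol=1e-9):
    """Find x >= 0 with Mx + q >= 0 and x'(Mx + q) = 0 by trying, for every
    index set S, the guess x_i = 0 for i not in S and (Mx + q)_i = 0 for i in S.
    Exponential in n; for illustration only."""
    n = len(q)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            x = np.zeros(n)
            if S:
                try:
                    # Solve (Mx + q)_S = 0 with x zero outside S.
                    x[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            w = M @ x + q
            if (x >= -tol).all() and (w >= -tol).all() and abs(x @ w) <= tol:
                return x
    return None  # no solution found by this naive enumeration (degenerate cases may be missed)

# Tiny example: M is positive definite, so a solution exists.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
print(solve_lcp_bruteforce(M, q))  # approximately [1.333, 2.333]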

Definition 11 (Nasp).

A Nash game among Stackelberg players (NASP) is a linear simple Nash game where for each , is a simple bilevel problem.

A schematic representation of a NASP is given in Figure 1. The central problem considered in this paper is to find an MNE for a NASP or to determine that none exists.

Definition 12 (Trivial NASP).

A trivial NASP is a NASP where , and and are simple bilevel games whose lower levels are linear programs with a simple parameterization.

The additional assumptions on a trivial NASP (Definition 12) with respect to a general NASP (Definition 11) are seemingly strong. We require that each leader has precisely one follower, as opposed to finitely many followers, and that each follower solves a linear program, as opposed to a convex quadratic program, with a simple parameterization. However, in Section 3, we show that even the trivial NASP is computationally hard to solve.

Below, in eqs. 8 and 9, we give a canonical form of a trivial NASP. For convenience of notation, we call the first player the Latin player (all parameters and variables of this player's problem are denoted by Latin letters) and the second player the Greek player (all parameters and variables of this player's problem are denoted by Greek letters).

Latin player: objective subject to constraints (8a)–(8c).

Greek player: objective subject to constraints (9a)–(9c).

2.3 Known results

First, we summarize some well-known results for solving facile Nash games as LCPs.

Theorem 13.

Let be a facile Nash game. Then has an MNE if and only if it has a PNE.

Theorem 14.

Let be a facile Nash game. Then, there exist such that every solution to the LCP defined by is a PNE for and every PNE of solves the LCP.

We refer the reader to standard textbooks on complementarity problems for proofs (Facchinei and Pang, 2015, Cottle, Pang, and Stone, 2009).
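As a hedged sketch of the construction behind Theorem 14, suppose each player of a facile Nash game solves a problem of the illustrative form given after Definition 1, with polyhedral feasible set $\{x^i : A^i x^i \le b^i,\ x^i \ge 0\}$ (our notation). Stacking the KKT conditions of all players, with multipliers $\mu^i$ for the constraints $A^i x^i \le b^i$, yields the complementarity system

\[
0 \le x^i \;\perp\; c^i + (C^i)^\top x^{-i} + (A^i)^\top \mu^i \;\ge\; 0,
\qquad
0 \le \mu^i \;\perp\; b^i - A^i x^i \;\ge\; 0,
\qquad i = 1,\dots,n,
\]

which is affine in the stacked vector of all $x^i$ and $\mu^i$ and hence an LCP of the form of eq. (7) for suitable $M$ and $q$.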

Theorem 15 (Basu et al. (2019)).

Let be the feasible set of a simple Stackelberg game. Then is a finite union of polyhedra. Conversely, let be a finite union of polyhedra. Then there exists a simple Stackelberg game whose lower-level Nash game contains exactly one player, i.e., a simple bilevel program, such that the feasible region of the simple Stackelberg game provides an extended formulation of .

Theorem 16 (Balas (1985)).

Let be polyhedra such that . Then

(10)

In other words, given a finite union of polyhedra, one can find the closure of the convex hull of their union using the above Theorem 16.
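Formula (10) is not reproduced above; the standard description due to Balas (1985), under the assumption that each $P_i = \{x \in \mathbb{R}^d : A_i x \le b_i\}$ is nonempty, is the projection onto the $x$-space of a lifted polyhedral set:

\[
\operatorname{cl\,conv}\Bigl(\bigcup_{i=1}^{k} P_i\Bigr)
=
\operatorname{proj}_x\Bigl\{(x, x^1,\dots,x^k,\delta) \;:\;
x = \sum_{i=1}^{k} x^i,\;\;
A_i x^i \le \delta_i b_i \;\;(i=1,\dots,k),\;\;
\sum_{i=1}^{k}\delta_i = 1,\;\; \delta \ge 0\Bigr\}.
\]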

We define SUBSET SUM INTERVAL below, a decision problem which is no harder than NASP as we show in Section 3.

Definition 17 (Subset Sum Interval).

Given , with none of them equal to zero, and , decide the following:

In other words, we seek, within an interval of integers, a number that cannot be expressed as the sum of a subset of the given integers, or alternatively show that no such number exists. Here, the interval length can be chosen to be a power of 2.
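The following brute-force sketch makes the decision problem concrete, assuming the formulation of Eggermont and Woeginger (2013) in which one is given positive integers $q_1, \dots, q_k$ together with $R$ and an interval length $r$, and asks whether some integer $S$ with $R \le S < R + r$ is not the sum of any subset of the $q_i$. The function name and instance are ours, and the enumeration is exponential in $k$.

def subset_sum_interval(q, R, r):
    """Return True (YES) if some integer S with R <= S < R + r cannot be
    written as the sum of a subset of q; brute force, for illustration only."""
    reachable = {0}
    for qi in q:
        reachable |= {s + qi for s in reachable}
    return any(S not in reachable for S in range(R, R + r))

# Toy instance: subset sums of [2, 3] are {0, 2, 3, 5}, so S = 4 is a YES witness.
print(subset_sum_interval([2, 3], R=4, r=1))   # True  (4 is not a subset sum)
print(subset_sum_interval([2, 3], R=5, r=1))   # False (5 = 2 + 3)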

Theorem 18 (Eggermont and Woeginger (2013)).

Under the stated choice of the parameters, SUBSET SUM INTERVAL is Σ^p_2-hard.

Theorem 18 implies that, unless the polynomial hierarchy collapses to the second level, there is no polynomial-time algorithm to solve SUBSET SUM INTERVAL.

3 Hardness of finding a Nash equilibrium

We characterize the hardness of finding Nash equilibria for NASPs starting in this section. The results imply that, unless the polynomial hierarchy collapses to the second level, one cannot decide the existence of, and hence find, a Nash equilibrium for a trivial NASP with asymptotically fewer elementary operations than the doubly-exponential bound achieved by our algorithm. The three theorems are formally presented below.

Theorem 19.

It is Σ^p_2-hard to decide whether a trivial NASP has a PNE.

Theorem 20.

If the feasible set of each player in a NASP is a bounded set, an MNE exists.

Theorem 21.

It is Σ^p_2-hard to decide whether a trivial NASP has an MNE.

Proof of Theorem 19.

To show the hardness of NASP, we rewrite SUBSET SUM INTERVAL as a trivial NASP of comparable size. Then we appeal to Theorem 18 to establish the hardness of the trivial NASP. Finally, since NASP is a generalization of the trivial NASP, it cannot be any easier.

For the sake of clarity, we call one of the Stackelberg games in the trivial NASP the Latin game and the other the Greek game. The decision variables of the Latin game's leader and of its follower are denoted by Latin letters, and those of the Greek game's leader and follower by Greek letters. For the SUBSET SUM INTERVAL instance, we keep the notation introduced in Definition 17.

First, we define as the unique -bit binary representation of : for instance,   satisfies . Then, we introduce , , and , which can be computed in polynomial time with respect to the data in SUBSET SUM INTERVAL.


Latin player

The Latin player is a Stackelberg game leader; its variables, along with those of its only follower, are denoted by Latin letters.

Objective (11a), subject to constraints (11b)–(11g).

Greek player

The Greek player is also a Stackelberg game leader; its variables, along with those of its only follower, are denoted by Greek letters.

Objective (11h), subject to constraints (11i)–(11n).

We now claim that the game in Section 3 has a pure-strategy Nash equilibrium if and only if the SUBSET SUM INTERVAL instance has answer YES.

Claim 1.

The game defined in Section 3 is a trivial NASP.

Proof of Claim. Note that all constraints are linear and, if the variables of the other player are fixed, the objectives are also linear. Each follower is simply parameterized in its leader's variables. Also, the interaction between the two leaders follows the definition of a simple Nash game. Finally, there are precisely two leaders. Hence, by definition, the game in Section 3 is a trivial NASP.

Claim 2.

The region in the space of defined by eqs. 11c and 11g is the Cartesian product of for . Similarly the region in the space of defined by eqs. 11k and 11n is the Cartesian product of for .

Proof of Claim. Notice that the constraints in eq. 11g enforce , and since is minimized, it is necessarily chosen to be equal to . However, since this quantity must be non-negative, as enforced in eq. 11c, either or must hold, and the claim follows.

Claim 3.

If is a pure-strategy equilibrium for Section 3, then .

Proof of Claim. Note that changing is a feasible deviation which improves the Greek player’s objective, irrespective of the Latin player’s decision.

Claim 4.

If is a pure-strategy equilibrium for Section 3, then .

Proof of Claim. If , then we are done by Claim 3. Suppose and for some , . Observe that the Latin player has no incentive to keep and obtain an objective value of . Instead, she can choose , and for all , together with any feasible value of for . One can quickly check that this is feasible and optimal for the Latin player, given . This also means that the Greek player's objective is , as each of the summands in their objective vanishes, and makes the first term vanish. Hence, this cannot be a Nash equilibrium, since the Greek player has a profitable deviation by setting and for , which is feasible and has an objective value of .

Claim 5.

If SUBSET SUM INTERVAL has decision YES, then Section 3 has a pure-strategy Nash equilibrium.

Proof of Claim. Suppose there exists such that such that for all , . Recall , namely the unique -bit binary representation of . Hence, consider the following strategy.

The strategy is specified componentwise in eqs. (12a)–(12h).

It is easy to check that the strategy in eq. 12 is feasible. Given , we observe that the strategy is optimal for the Latin player as follows. Due to the choice of , all but the first term of the Latin player's objective vanish, and the first term attains its largest value for this choice of . The leftover terms do not affect the Latin player's objective, as long as they are feasible.

As for the Greek player, the current objective value is . We show that it is not possible to improve it. With , no other deviation is possible. Hence, consider the deviation : with such a strategy, the first term in the objective vanishes. Let . Observe that , and let . Note that we require for ; otherwise the fifth term in the objective would be a large negative term, and the objective value could never exceed . With such a choice of for , the fifth term in the objective evaluates to , and the fourth term evaluates to . Therefore, the objective value results in . However, since this is a YES instance of SUBSET SUM INTERVAL, the deficit in the objective value can never be made up by any choice of for making the second term equal to : if they are chosen to exceed , then eq. 11m is violated, and if it is strictly less than , the objective cannot exceed . Hence this is not a profitable deviation, and eq. 12 is indeed a Nash equilibrium.

Claim 6.

If SUBSET SUM INTERVAL has decision NO, then Section 3 has no pure-strategy Nash-equilibrium.

Proof of Claim. We prove the result by contradiction. Assume that the SUBSET SUM INTERVAL instance has answer NO and that there exists a pure-strategy Nash equilibrium for Section 3, with . From Claims 2 and 4, any pure-strategy Nash equilibrium necessarily has . Observe that, from eq. 11m, enforces that for , and hence the Greek player has an objective value of . Now, with , observe that the Latin player's objective is . Thus, we necessarily have . From eq. 11f, we deduce for , while from eq. 11d we obtain for . The only value of that satisfies this condition along with eq. 11g is for

That only leaves for . Now, we show that, for any value of , the Greek player has a profitable deviation, namely she can make her objective strictly greater than .

Let . From eq. 11d, note that . We choose some such that , and for , we set . Since and , the third term in the Greek player's objective evaluates to . The fourth term lies between and , and the fifth term vanishes. Keeping in mind that , the objective now evaluates to a number between and . Since this is a NO instance of SUBSET SUM INTERVAL, there exists such that . Set if , and if . This is feasible and makes the objective value equal to , which is a profitable deviation. Therefore is not a Nash equilibrium. ∎

This implies a corollary about bounded linear integer programming Nash games.

Corollary 22.

Consider a linear Nash game where each player's problem is a mixed-integer linear program. It is Σ^p_2-hard to decide whether such a game has a PNE.

Proof.

The proof follows from the fact that bounded bilevel programs can be reformulated as bounded integer programs (Basu et al., 2019). The problem defined in Section 3 is a bounded bilevel program, with each variable necessarily taking values in a bounded set, and it can be reformulated as a mixed-integer linear program of similar size. ∎

Next we show that, under an assumption of boundedness, an MNE always exists.

Proof of Theorem 20.

Let the feasible region of the -th player be , namely a bounded set. Since the objective is linear (given the other players' decisions), there always exists an optimal solution that is an extreme point of . Moreover, since these are the feasible sets of bilevel linear programs, we know from Theorem 15 that the feasible region of each player is a finite union of polyhedra. It then follows from Theorem 16 that the closure of the convex hull of this region is a polyhedron and, since we also assume boundedness, it is indeed a polytope. Thus, the -th player's strategies can be restricted to the set of extreme points of this polytope, which is finite. Since the same holds for each player, this is a Nash game with finitely many strategies. From Nash (1950, 1951), this game has an MNE. ∎
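The proof above reduces each player's choice to the finitely many extreme points of a polytope, i.e., to a finite game. As a small illustrative sketch (hypothetical cost matrices, not the paper's algorithm), one can verify whether a candidate pair of probability vectors is an MNE of a finite two-player game by comparing expected costs against all pure deviations, which suffices because any mixed deviation is a convex combination of pure ones.

import numpy as np

def is_mne(A, B, p, q, tol=1e-9):
    """Check whether mixed strategies p (row player) and q (column player)
    form an MNE of the finite game with cost matrices A and B, where the
    row player pays p' A q, the column player pays p' B q, and both minimize.
    A profile is an MNE iff no pure-strategy deviation lowers a player's cost."""
    row_cost = p @ A @ q
    col_cost = p @ B @ q
    best_row = (A @ q).min()   # best pure response of the row player to q
    best_col = (p @ B).min()   # best pure response of the column player to p
    return row_cost <= best_row + tol and col_cost <= best_col + tol

# Matching pennies written as a cost game: the uniform profile is the unique MNE.
A = np.array([[-1.0, 1.0], [1.0, -1.0]])
B = -A
half = np.array([0.5, 0.5])
print(is_mne(A, B, half, half))                   # True
print(is_mne(A, B, np.array([1.0, 0.0]), half))   # False: the column player can deviate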

From Theorem 20, deciding the existence of a mixed-strategy Nash equilibrium is trivial if each player has a bounded feasible set. We extend this result with Theorem 21, showing that if the feasible region of even one player is unbounded, then deciding the existence of an MNE is Σ^p_2-hard.

Before proving Theorem 21, we introduce the technical Lemma 23. While Theorem 15 shows that any finite union of polyhedra can be written as a feasible region of a bilevel problem in a lifted space, Lemma 23 explicitly provides the description of this set for a given union of two polyhedra.

Lemma 23.

Consider a set defined as the union of two polyhedra, namely the set in eq. (13). This set has an extended formulation as the feasible set of a simple bilevel program.

Proof.

The bilevel problem given by eqs. (14a)–(14g) provides the necessary extended formulation; the additional variables in the lifted space can be projected out.
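The explicit system (14) is not reproduced above. As one possible construction, and not necessarily the formulation of eq. (14), assume $P_1 = \{x : A_1 x \le b_1\}$ and $P_2 = \{x : A_2 x \le b_2\}$ are nonempty and bounded. Then $P_1 \cup P_2$ is the projection onto $x$ of the feasible set of the simple bilevel system

\begin{align*}
& x = z^1 + z^2,\qquad A_1 z^1 \le \delta\, b_1,\qquad A_2 z^2 \le (1-\delta)\, b_2,\qquad 0 \le \delta \le 1,\qquad w \ge 1 - \delta,\\
& w \in \arg\min_{w'} \bigl\{\, \delta\, w' \;:\; 0 \le w' \le 1 \,\bigr\},
\end{align*}

where, in the optimistic setting, the lower-level problem forces $\delta \in \{0,1\}$: if $\delta > 0$ the follower must set $w = 0$, so $w \ge 1 - \delta$ requires $\delta = 1$, while if $\delta = 0$ the follower may optimistically choose $w = 1$.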

From Lemma 23 we can further derive Lemma 24.

Lemma 24.

Suppose and have an extended formulation as bilevel programs. So does .

Proof.

If has an extended formulation and has an extended formulation, then combining the two systems yields an extended formulation of .

With Lemmata 24 and 23, we can prove Theorem 21.

Proof of Theorem 21.

We reduce SUBSET SUM INTERVAL to the problem of deciding the existence of a mixed-strategy Nash equilibrium for a trivial NASP. We keep the notation previously introduced for the Latin and the Greek players.

Latin player

The Latin player is a Stackelberg game leader. The variables of the leader and of the follower are denoted by Latin letters.

Objective (15a), subject to constraints (15b)–(15i).

Greek player

The Greek player is, like the Latin one, a Stackelberg game leader; its leader and follower variables are denoted by Greek letters.

Objective (15j), subject to constraints (15k)–(15o).
Claim 7.

The game defined in Section 3 is a trivial NASP.

Proof of Claim. Note that all constraints are linear and, if the variables of the other player are fixed, the objectives are also linear. The constraints in eq. 15h are valid due to Lemma 23, and due to Lemma 24 we can have multiple bilevel constraints in eqs. 15h and 15i. Each follower is simply parameterized in its leader's variables. Also, the interaction between the two leaders follows the definition of a simple Nash game. Finally, there are precisely two leaders. Therefore, this is a trivial NASP.

Claim 8.

The region in the space of defined by eqs. 15i and 15c is the Cartesian product of for . Similarly the region in the space of defined by eqs. 15l and 15o is the Cartesian product of for .

Proof of Claim. Analogous to Claim 2.

Claim 9.

takes integer values only.

Proof of Claim. From eq. 15h, each for can take a value of either or , depending upon which of the two polyhedra (in the definition of ) the variable falls in. Moreover, since the RHS of eq. 15f is a sum of integers, the LHS, which is , is also an integer.

Claim 10.

holds for the Latin player’s feasible set.

Proof of Claim. Consider the set defined in eq. 13. For a point in the first polyhedron, and , so one can write . Similarly, for a point in the second polyhedron, and , so again one can write . Thus, the nonlinear equation is valid for the set .

Now multiplying both sides of eq. 15f with , one gets

The second equality follows from eq. 15e, and the third equality from the fact that is valid for and eq. 15h.

Claim 11.

Given some between and , the Latin player has a profitable unilateral deviation for any feasible strategy with .

Proof of Claim. Note that, if satisfies the given conditions, then is feasible for the Latin player. Now observe the last two terms of the objective function. From Claim 10, we can rewrite them as . Considered in isolation, these last two terms attain their maximum value for the feasible choice . Now, we argue that the player can never be optimal by choosing . As established in Claim 9,