On Tightness of the Tsaknakis-Spirakis Algorithm for Approximate Nash Equilibrium

07/03/2021 ∙ by Zhaohua Chen et al. ∙ Shanghai Jiao Tong University ∙ Peking University

Finding the minimum approximation ratio for Nash equilibria of bi-matrix games has driven a series of studies, starting with 3/4, followed by 1/2, 0.38 and 0.36, and finally the best-known approximation ratio of 0.3393 by Tsaknakis and Spirakis (the TS algorithm for short). Efforts to improve this result have remained unsuccessful for the past 14 years. This work makes the first progress by showing that the bound of 0.3393 is indeed tight for the TS algorithm. Next, we characterize all possible tight game instances for the TS algorithm. This allows us to conduct extensive experiments to study the nature of the TS algorithm and to compare it with other algorithms. We find that this lower bound is not smoothed for the TS algorithm, in that any perturbation of the initial point may cause the algorithm to deviate from this tight-bound approximate solution. Other approximation algorithms, such as Fictitious Play and Regret Matching, also find better approximate solutions on these instances. However, the new distributed algorithm for approximate Nash equilibrium by Czumaj et al. performs consistently at the same bound of 0.3393. This shows that the lower-bound instances generated against the TS algorithm can serve as a benchmark in the design and analysis of approximate Nash equilibrium algorithms.



1 Introduction

Computing a Nash equilibrium is a problem of great importance in a variety of fields, including theoretical computer science, algorithmic game theory and learning theory.

It has been shown that computing a Nash equilibrium lies in the complexity class PPAD introduced by Papadimitriou [18]. Computing an approximate solution has been shown to be PPAD-complete for 3NASH by Daskalakis, Goldberg and Papadimitriou [6], and for 2NASH by Chen, Deng and Teng [3], indicating its computational intractability in general. This has led to a great many efforts to find an $\epsilon$-approximate Nash equilibrium in polynomial time for a small constant $\epsilon$.

Early works by Kontogiannis et al. [12] and Daskalakis et al. [8] introduce simple polynomial-time algorithms, based on searching over strategies with small supports, that reach approximation ratios of $3/4$ and $1/2$, respectively. Conitzer [4] also analyzes the approximation guarantee that the well-known fictitious play algorithm [2] achieves within constantly many rounds, combining Feder et al.'s result [10]. Subsequently, Daskalakis et al. [7] give an algorithm with an approximation ratio of 0.38 by enumerating arbitrarily large supports. The same ratio is achieved by Czumaj et al. in 2016 [5] with a totally different approach, solving the Nash equilibria of two zero-sum games and making a further adjustment. Bosse et al. [1] provide another algorithm, based on the previous work by Kontogiannis and Spirakis [13], that reaches a 0.36-approximate Nash equilibrium. Concurrently, Tsaknakis and Spirakis [20] establish the currently best-known approximation ratio of 0.3393.

The original paper proves that the algorithm always outputs a 0.3393-approximate Nash equilibrium. However, it leaves open whether the bound 0.3393 is tight for the algorithm. In the literature, the experimental performance of the algorithm is far better than 0.3393 [19]: even the worst ratio observed in the empirical trials of Fearnley et al. [9] corresponds to a game on which the TS algorithm still returns an approximation strictly better than 0.3393.

In this work, we prove that 0.3393 is indeed the tight bound for the TS algorithm [20] by giving a game instance attaining it, thereby solving this open problem regarding the well-followed TS algorithm.

Despite the tightness of 0.3393 for the TS algorithm, our extensive experiments show that it is rather difficult to find a tight instance in practice by brute-force enumeration. The experiments imply that the practical bound is inconsistent with the theoretical bound. This rather large gap is a result of the instability of both the stationary point (following [20], a stationary point of the maximum of the two players' deviations is a strategy pair at which the directional derivatives in all directions are nonnegative; the formal definition is given in Definition 2) and the descent procedure searching for a stationary point.

Furthermore, we mathematically characterize all game instances that attain the tight bound. We conduct a further experiment to explore for which games the ratio becomes tight. Based on it, we identify a region in which the generated games are more likely to be tight instances.

We use the generated game instances to measure the worst-case performance of Czumaj et al.'s algorithm [5], the regret-matching algorithm from online learning [11] and the fictitious play algorithm [2]. The experiments suggest that the regret-matching algorithm and the fictitious play algorithm perform well on these instances. Surprisingly, the algorithm of Czumaj et al. always reaches an approximation ratio of 0.3393, implying that the tight-instance generator designed against the TS algorithm also defeats a totally different algorithm.

This paper is organized as follows. In Section 2, we introduce the basic definitions and notations used throughout the paper. In Section 3, we restate the TS algorithm [20] and propose two auxiliary methods that help to analyze the original algorithm. With these preparations, we prove the existence of a game instance on which the TS algorithm reaches the tight bound by giving an explicit example in Section 4. Further, we characterize all tight game instances and present a generator that outputs tight game instances in Section 5. We conduct extensive experiments to reveal the properties of the TS algorithm and compare it with other approximate Nash equilibrium algorithms in Section 6. Finally, we point out key directions for future work by raising several open problems in Section 7.

2 Definitions and Notations

We focus on finding an approximate Nash equilibrium in general 2-player games, where the row player and the column player have $m$ and $n$ strategies, respectively. We use $R$ and $C$ to denote the payoff matrices of the row player and the column player, respectively. We suppose that both $R$ and $C$ are normalized so that all their entries belong to $[0, 1]$. In fact, concerning Nash equilibria, any game is equivalent to a normalized game after appropriate shifting and scaling of both payoff matrices.
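As a small illustration of this normalization (the matrix names $R$ and $C$ follow the notation above; the code and function name are ours, not from the paper), one can shift and scale each payoff matrix into $[0, 1]$ as follows:

```python
# A minimal sketch of payoff normalization: shift and scale a payoff matrix so
# that its entries lie in [0, 1]. Constant matrices are mapped to all-zeros.
# A positive affine transformation of a player's payoffs preserves that
# player's best responses, hence the Nash equilibria of the game.
import numpy as np

def normalize_payoffs(A):
    lo, hi = A.min(), A.max()
    if hi == lo:                      # constant payoffs: any strategy is a best response
        return np.zeros_like(A, dtype=float)
    return (A - lo) / (hi - lo)       # affine transformation into [0, 1]
```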

For two vectors $a$ and $b$ of the same dimension, we write $a \geq b$ if each entry of $a$ is greater than or equal to the corresponding entry of $b$. Meanwhile, we denote by $\mathbf{1}_k$ the $k$-dimensional vector with all entries equal to $1$. We use a probability vector to describe either player's behavior, i.e., the probability with which the player chooses each pure strategy. More specifically, the row player's strategy $x$ and the column player's strategy $y$ lie in the simplices $\Delta_m$ and $\Delta_n$ respectively, where
$$\Delta_k = \{\, u \in \mathbb{R}^k : u \geq 0,\ \mathbf{1}_k^\top u = 1 \,\}.$$

For a strategy pair $(x, y) \in \Delta_m \times \Delta_n$, we call it an $\epsilon$-approximate Nash equilibrium if for any $x' \in \Delta_m$ and $y' \in \Delta_n$, the following inequalities hold:
$$x'^\top R y \leq x^\top R y + \epsilon, \qquad x^\top C y' \leq x^\top C y + \epsilon.$$

Therefore, a Nash equilibrium is an $\epsilon$-approximate Nash equilibrium with $\epsilon = 0$.

To simplify our further discussion, for any probability vector $u$, we use $\mathrm{supp}(u)$ to denote the support of $u$ (the set of indices of its positive entries), and for any vector $v$ we use $\mathrm{suppmax}(v)$ and $\mathrm{suppmin}(v)$ to denote the index sets of all entries equal to the maximum and minimum entry of $v$, respectively.

At last, we use $\max(v)$ to denote the value of the maximal entry of a vector $v$, and $\max_S(v)$ to denote the value of the maximal entry of $v$ confined to an index set $S$.

3 Algorithms

In this section, we first restate the TS algorithm [20] and then propose two auxiliary adjustment methods, which help to analyze the bound of the TS algorithm.

The TS algorithm formulates the approximate Nash equilibrium problem as an optimization problem. Specifically, we define the following functions:
$$f_R(x, y) = \max(Ry) - x^\top R y, \qquad f_C(x, y) = \max(C^\top x) - x^\top C y, \qquad f(x, y) = \max\{\, f_R(x, y), f_C(x, y) \,\}.$$
The goal is to minimize $f(x, y)$ over $\Delta_m \times \Delta_n$.

The relationship between the above function and approximate Nash equilibria is as follows. Given a strategy pair $(x, y)$, $f_R(x, y)$ and $f_C(x, y)$ are the respective deviations (regrets) of the row player and the column player. By definition, $(x, y)$ is an $\epsilon$-approximate Nash equilibrium if and only if $f(x, y) \leq \epsilon$. In other words, as long as we obtain a point $(x, y)$ with $f$-value no greater than $\epsilon$, an $\epsilon$-approximate Nash equilibrium is reached.
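To make this concrete, the following is a small numerical sketch (our own illustration, not code from the paper) of the regret functions $f_R$, $f_C$, the objective $f$ as reconstructed above, and the resulting $\epsilon$-approximate-equilibrium check:

```python
# A minimal sketch of the TS objective, assuming the regret functions
# f_R(x, y) = max(Ry) - x^T R y and f_C(x, y) = max(C^T x) - x^T C y;
# (x, y) is an eps-approximate Nash equilibrium exactly when f(x, y) <= eps.
import numpy as np

def regrets(R, C, x, y):
    f_R = np.max(R @ y) - x @ R @ y      # row player's gain from a best deviation
    f_C = np.max(C.T @ x) - x @ C @ y    # column player's gain from a best deviation
    return f_R, f_C

def f_value(R, C, x, y):
    return max(regrets(R, C, x, y))

def is_approx_nash(R, C, x, y, eps):
    return f_value(R, C, x, y) <= eps

# Usage example on matching pennies: the uniform pair is an exact equilibrium.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
C = 1.0 - R
x = y = np.array([0.5, 0.5])
print(f_value(R, C, x, y))   # 0.0, i.e., a 0-approximate Nash equilibrium
```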

The idea of the TS algorithm is to find a stationary point of the objective function by a descent procedure and then make a further adjustment on the stationary point (we will see in Remark 1 that finding a stationary point alone is not enough to reach a good approximation ratio; therefore the adjustment step is necessary). To give the formal definition of stationary points, we need to define the scaled directional derivative of $f$ as follows:

Definition 1.

Given $(x, y), (x', y') \in \Delta_m \times \Delta_n$, the scaled directional derivative of $f$ at $(x, y)$ in direction $(x', y')$ is
$$Df(x, y, x', y') = \lim_{\epsilon \to 0^+} \frac{f\big((1-\epsilon)x + \epsilon x',\ (1-\epsilon)y + \epsilon y'\big) - f(x, y)}{\epsilon}.$$
$Df_R$ and $Df_C$ are defined similarly with respect to $f_R$ and $f_C$.

Now we give the definition of stationary points.

Definition 2.

$(x, y)$ is a stationary point if and only if for any $(x', y') \in \Delta_m \times \Delta_n$, $Df(x, y, x', y') \geq 0$.

We use a descent procedure to find a stationary point; the procedure is presented in Appendix 0.B. It has already been proved that the procedure runs in time polynomial in the precision parameter to find a nearly stationary point [19].
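Since the descent procedure itself is deferred to Appendix 0.B and is not reproduced here, the following is only a naive local-search sketch of the underlying idea, under the assumption of the regret functions reconstructed above: repeatedly move a small step towards a pure-strategy pair that decreases $f$, and stop when no probed direction improves the value. It is not the algorithm analyzed in the paper.

```python
# A naive illustrative search for an (approximately) stationary point of f.
# This is NOT the descent procedure of Appendix 0.B; it only probes descent
# directions towards pure-strategy pairs (e_i, e_j) with a fixed step size.
import numpy as np

def f_value(R, C, x, y):
    f_R = np.max(R @ y) - x @ R @ y
    f_C = np.max(C.T @ x) - x @ C @ y
    return max(f_R, f_C)

def naive_descent(R, C, steps=500, eta=0.05, seed=0):
    m, n = R.shape
    rng = np.random.default_rng(seed)
    x, y = rng.dirichlet(np.ones(m)), rng.dirichlet(np.ones(n))
    for _ in range(steps):
        current, best = f_value(R, C, x, y), None
        for i in range(m):                      # try every direction towards (e_i, e_j)
            for j in range(n):
                x_new = (1 - eta) * x + eta * np.eye(m)[i]
                y_new = (1 - eta) * y + eta * np.eye(n)[j]
                val = f_value(R, C, x_new, y_new)
                if val < current - 1e-12:
                    current, best = val, (x_new, y_new)
        if best is None:                        # no probed direction improves f
            break
        x, y = best
    return x, y
```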

Now let , . To better deal with , we introduce a new function as follows:

where , , , , . (Throughout the paper, we suppose that , and , , , , ; these restrictions are omitted afterward for fluency.) One can verify that when (which is a necessary condition for a stationary point, as proved in Proposition 3),

Now let

By Definition 2, is a stationary point if and only if . Further, notice that

therefore, we have the following proposition.

Proposition 1.

is a stationary point if and only if

In the following context, we use $(x^*, y^*)$ to denote a stationary point. By von Neumann's minimax theorem [16], we have

Proposition 2.

and there exist such that

We call the tuple a dual solution, as it can be calculated by dual linear programming.
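Since the exact min-max expression of Proposition 2 is not reproduced here, the following is only a generic sketch of how such a dual solution is obtained in practice: the value and an optimal strategy of a finite zero-sum game are computed by a linear program, and the dual of that program yields the opponent's optimal strategy. The matrix name `G` and the use of `scipy.optimize.linprog` are our own illustrative choices, not the concrete program of Proposition 2.

```python
# Generic sketch: compute a max-min strategy and the value of a zero-sum game
# with payoff matrix G by linear programming; the dual variables of this LP
# correspond to the other player's min-max strategy. This only illustrates the
# "dual linear programming" computation mentioned after Proposition 2.
import numpy as np
from scipy.optimize import linprog

def maxmin_by_lp(G):
    m, n = G.shape
    # Variables: p_1, ..., p_m (row strategy) and v (guaranteed value).
    # Maximize v subject to (G^T p)_j >= v for all j, sum(p) = 1, p >= 0.
    c = np.concatenate([np.zeros(m), [-1.0]])          # minimize -v
    A_ub = np.hstack([-G.T, np.ones((n, 1))])          # v - (G^T p)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    p, v = res.x[:m], res.x[m]
    return p, v
```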

Nevertheless, a stationary point alone may not be satisfactory (i.e., its approximation ratio can be as large as 1/2 in the worst case; see Remark 1). In this case, we adjust the stationary point to another point lying in the following rectangle:

Different adjustments of the stationary point yield different algorithms for finding an approximate Nash equilibrium. We present three such methods below: the first is the adjustment used by the TS algorithm, and the other two are introduced for the analysis in Section 4. For brevity, we define the following two subsets of the boundary of the rectangle:

Method in the TS algorithm [20]. The first method is the original adjustment given by [20] (known as the TS algorithm in the literature). Define the quantities

The adjusted strategy pair is

Minimum point on . For the second method, define

In a geometric view, our goal is to find the minimum point of on .

The strategy pair given by the second method is

Intersection point of the linear bounds of $f_R$ and $f_C$ on the boundary. As we will see later, the second method always behaves no worse than the TS adjustment theoretically. However, it is rather hard to quantitatively analyze the exact approximation ratio of the second method. Therefore, we propose a third adjustment method. Notice that $f_R$, $f_C$ and $f$ are all convex and piecewise-linear functions when either coordinate is fixed. Therefore, they are bounded by linear functions on the boundary of the rectangle. Formally, for , we have

(1)
(2)
(3)
(4)

Taking the minimum of the terms on the right-hand sides of (1) and (2), and of (3) and (4), respectively, we derive the following quantities. (The denominator of or may be zero; in this case, we simply define or to be .)

The adjusted strategy pair is

We remark that the outcome of all three methods can be calculated in time polynomial in $m$ and $n$.

4 A Tight Instance for All Three Methods

We now establish the tight bound of the TS algorithm presented in the previous section, with the help of the two auxiliary adjustment methods proposed in Section 3. [20] has shown that the TS algorithm achieves an approximation ratio of no more than 0.3393. In this section, we construct a game on which the TS algorithm attains the tight bound 0.3393. In detail, the game has the payoff matrices in (5), built from the tight bound and the quantities defined formally in Lemma 6. The game attains the tight bound at a stationary point together with a corresponding dual solution. Additionally, the bound persists for this game even when we search for the minimum point of $f$ on the entire adjustment rectangle.

(5)

The formal statement of this result is presented in the following Theorem 4.1.

Theorem 4.1.

There exists a game such that, for some stationary point with a corresponding dual solution, the value of $f$ is no smaller than the tight bound of Lemma 6 (approximately 0.3393) at every point of the adjustment rectangle.

The proof of Theorem 4.1 is completed by verifying the tight instance (5) above. Nevertheless, some preparations are required to carry out the verification; they also illustrate the approach by which we found the tight instance.

The preparation work consists of three parts. First, we give an equivalent condition for a stationary point in Proposition 3, which makes it easier to construct payoff matrices with a given stationary point and its corresponding dual solution. Second, we give a panoramic picture of the functions involved and subsequently reveal the relationship among the three adjusted strategy pairs presented in Section 3. Finally, we give some estimations and show when these estimations are exactly tight. Below we present all the propositions and lemmas we need; all proofs are deferred to Appendix 0.C.

The following proposition shows how to construct payoff matrices with a given stationary point and dual solution.

Proposition 3.

Let

Then is a stationary point if and only if and there exist such that

(6)
(7)

Now we define the following quantities:

Lemma 1.

If , then .

For the sake of brevity below, we define

Then we have the following lemma:

Lemma 2.

The following two statements hold:

  1. Given , is an increasing, convex and piecewise-linear function of ; is a decreasing and linear function of .

  2. Given , is an increasing, convex and piecewise-linear function of ; is a decreasing and linear function of .

Recall that the second adjustment method yields the strategy pair . We have the following lemma indicating that and are the minimum points on the boundary of .

Lemma 3.

The following two statements hold:

  1. is the minimum point of on .

  2. is the minimum point of on .

Now we are ready to give an analysis of the third adjustment method.

Lemma 4.

The following two statements hold:

  1. is a linear function of if and only if

    (8)
  2. is a linear function of if and only if

    (9)

With all the previous results, we can finally give a comparison of the three adjustment methods presented in Section 3.

Proposition 4.

and always hold. Meanwhile, holds if and only if

There is a final step to prepare for the proof of the tight bound. We present the following estimations and inequalities.

Lemma 5.

The following two estimations hold:

  1. If , then

    And symmetrically, when , we have

    Furthermore, if is not a Nash equilibrium, the equality holds if and only if .

  2. .

Remark 1.

Lemma 5 tells us that the worst value a stationary point can attain is 1/2, and this worst value can indeed be realized. We now give the following game to demonstrate this. Consider the payoff matrices:

One can verify by Proposition 3 that the indicated strategy pair is a stationary point with the stated dual solution. Therefore, a stationary point by itself cannot beat the straightforward algorithm given by [8], which always finds a 1/2-approximate Nash equilibrium.

Lemma 6 ([20]).

Let

Then , which is attained exactly at and .

Finally, we prove Theorem 4.1 by verifying the tight instance (5) with its stationary point and corresponding dual solution.

Proof Sketch.

The verification is divided into 4 steps.

  Step 1. Verify that is a stationary point by Proposition 3.

  Step 2. Verify that and . Then by Proposition 4, and by Lemma 4, is a linear function of .

  Step 3. Verify that , , , and . Then by Lemma 5 and Lemma 6, .

  Step 4. Verify that for any .

The last step needs more elaboration. First, we do a verification similar to Step 2: , and thus is a linear function of . Second, we define and prove that for all , which completes the proof. ∎

From the proof of Theorem 4.1, we obtain the following useful corollaries.

Corollary 1.

Suppose . If either of the following two statements holds:

  1. and ,

  2. and ,

then for any on the boundary of , .

Corollary 2.

Suppose , and . Then for any , .

It is worth noting that the game with payoff matrices (5) has a pure Nash equilibrium, while the stationary point is a strictly dominated strategy pair. However, a Nash equilibrium is never supported on dominated strategies! We can also construct a rich family of games that attain the tight bound yet have distinct characteristics. For instance, we can give a game with no dominated strategies that still attains the tight bound. Some examples are listed in Appendix 0.E. Such results suggest that stationary points may not be an optimal concept (in theory) as a basis for computing approximate Nash equilibria.
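As a small side illustration (our own, not part of the paper's analysis), strictly dominated pure strategies can be detected directly from the payoff matrices; the function below only tests domination by another pure strategy, which is the simplest sufficient condition and ignores domination by mixed strategies.

```python
# A minimal sketch of a pure-strategy strict-domination check for the row
# player: strategy i is strictly dominated by strategy k if R[k, j] > R[i, j]
# for every column j. Domination by mixed strategies would require an LP.
import numpy as np

def strictly_dominated_rows(R):
    m = R.shape[0]
    dominated = []
    for i in range(m):
        if any(k != i and np.all(R[k] > R[i]) for k in range(m)):
            dominated.append(i)
    return dominated

# For the column player, apply the same check to C^T: strictly_dominated_rows(C.T)
```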

5 Generating Tight Instances

In Section 4, we proved the existence of a tight game instance, and we can do more than that. Specifically, we can mathematically profile all games that are able to attain the tight bound. In this section, we gather the properties established in the previous sections and present an algorithm that generates games of this kind. Using the generator, we can dig into the three approximate Nash equilibrium algorithms mentioned previously and reveal their behavior and, further, the features of stationary points. Algorithm 1 gives the generator of tight instances, whose inputs are an arbitrary candidate stationary point and dual solution. The algorithm outputs games for which the given point is a stationary point and the given tuple is a corresponding dual solution, or outputs “NO” if no such game exists.

The main idea of the algorithm is as follows. Proposition 3 gives an easier-to-verify equivalent condition for a stationary point, and all additional conditions required by a tight instance are stated in Proposition 4, Lemma 5 and Lemma 6. Therefore, once we enumerate each possible pair of pure strategies, deciding whether a tight instance exists becomes a linear programming feasibility problem. (A structural sketch of this enumerate-and-solve scheme in code is given after the pseudocode below.)

  Algorithm 1 Tight Instance Generator

 

0:  .
1:  if  or  then
2:     Output “NO”
3:  end if
4:  .
5:  // Enumerate and .
6:  for  do
7:     Solve a feasible from the following LP with no objective function:
8:         // basic requirements.
9:          for ,
10:         , ,
11:         , ,
12:         // ensure is a stationary point.
13:         ,
14:         ,
15:         // ensure .
16:         ,
17:         // ensure .
18:         ,
19:          for , for ,
20:         , ,
21:         // ensure .
22:         .
23:     if LP is feasible then
24:        Output feasible solutions
25:     end if
26:  end for
27:  if the LP is infeasible in every round then
28:     Output “NO”
29:  end if

 
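To complement the pseudocode, the following is a structural sketch of the enumerate-and-solve scheme in Python. It only illustrates the control flow of Algorithm 1: the concrete linear constraints from Proposition 3, Proposition 4, Lemma 5 and Lemma 6 are not reproduced here, so the helper `build_constraints` is a hypothetical placeholder that the reader would have to supply.

```python
# Structural sketch of Algorithm 1: enumerate pairs of pure strategies and, for
# each pair, solve an LP feasibility problem over the entries of R and C
# (flattened into one variable vector). The constraint builder is a placeholder
# for the (omitted) conditions of Propositions 3-4 and Lemmas 5-6.
import itertools
import numpy as np
from scipy.optimize import linprog

def generate_tight_instances(m, n, x, y, dual, build_constraints):
    num_vars = 2 * m * n                      # entries of R followed by entries of C
    found = []
    for i, j in itertools.product(range(m), range(n)):
        A_ub, b_ub, A_eq, b_eq = build_constraints(i, j, x, y, dual)
        res = linprog(c=np.zeros(num_vars),   # pure feasibility: no objective
                      A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, 1.0)] * num_vars)   # normalized payoffs
        if res.success:                       # one tight game per feasible round
            R = res.x[:m * n].reshape(m, n)
            C = res.x[m * n:].reshape(m, n)
            found.append((R, C))
    return found or None                      # None plays the role of "NO"
```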

Proposition 5.

Given