1 Introduction
Computing a Nash equilibrium is a problem of great importance in a variety of fields, including theoretical computer science, algorithmic game theory, and learning theory.
It has been shown that Nash equilibrium computation lies in the complexity class PPAD introduced by Papadimitriou [18]. Computing an approximate solution has been shown to be PPAD-complete for 3-NASH by Daskalakis, Goldberg and Papadimitriou [6], and for 2-NASH by Chen, Deng and Teng [3], indicating its computational intractability in general. This has led to a great many efforts to find an ε-approximate Nash equilibrium in polynomial time for a small constant ε.
Early works by Kontogiannis et al. [12] and Daskalakis et al. [8] introduce simple polynomial-time algorithms reaching approximation ratios of 3/4 and 1/2, respectively. Their algorithms are based on searching over strategies with small supports. Conitzer [4] also shows that the well-known fictitious play algorithm [2] gives a constant-ratio approximate Nash equilibrium within constantly many rounds, combining Feder et al.'s result [10]. Subsequently, Daskalakis et al. [7] give an algorithm with an approximation ratio of 0.38 by enumerating arbitrarily large supports. The same ratio is achieved by Czumaj et al. in 2016 [5] with a totally different approach: solving the Nash equilibria of two zero-sum games and making a further adjustment. Bosse et al. [1] provide another algorithm, based on the earlier work by Kontogiannis and Spirakis [13], that reaches a 0.36-approximate Nash equilibrium. Concurrently, Tsaknakis and Spirakis [20] establish the currently best-known approximation ratio of 0.3393.
The original paper proves that the algorithm gives an upper bound of 0.3393 on the approximation ratio. However, it leaves open the question of whether 0.3393 is tight for the algorithm. In the literature, the empirical performance of the algorithm is far better than this bound [19]. The worst ratio in an empirical trial by Fearnley et al. comes from a game on which the TS algorithm still stays below the theoretical bound [9].
In this work, we prove that 0.3393 is indeed the tight bound for the TS algorithm [20] by giving a matching game instance, thereby solving the open problem regarding this well-studied algorithm.
Despite the tightness of 0.3393 for the TS algorithm, our extensive experiments show that it is rather difficult to find a tight instance in practice by brute-force enumeration. The experiments imply that the practical bound is inconsistent with the theoretical bound. This rather large gap results from the instability of both the stationary point (following [20], a stationary point of the maximum of the two players' deviations is a strategy pair at which the directional derivatives in all directions are nonnegative; the formal definition is presented in Definition 2) and the descent procedure searching for a stationary point.
Furthermore, we mathematically characterize all game instances that attain the tight bound. We conduct a further experiment to explore on which games the ratio becomes tight. Based on it, we identify a region in which the generated games are more likely to be tight instances.
We use the generated game instances to measure the worst-case performance of Czumaj et al.'s algorithm [5], the regret-matching algorithm from online learning [11], and the fictitious play algorithm [2]. The experiments suggest that the regret-matching algorithm and the fictitious play algorithm perform well. Surprisingly, the algorithm of Czumaj et al. always reaches the same approximation ratio on these instances, implying that the tight instance generator for the TS algorithm also defeats a totally different algorithm.
This paper is organized as follows. In Section 2, we introduce the basic definitions and notations used throughout the paper. In Section 3, we restate the TS algorithm [20] and propose two auxiliary methods that help to analyze the original algorithm. With these preparations, we prove the existence of a game instance on which the TS algorithm reaches the tight bound by giving an example in Section 4. Further, we characterize all tight game instances and present a generator that outputs them in Section 5. We conduct extensive experiments to reveal the properties of the TS algorithm and compare it with other approximate Nash equilibrium algorithms in Section 6. Finally, we point out key directions for future work by raising several open problems in Section 7.
2 Definitions and Notations
We focus on finding an approximate Nash equilibrium in general two-player games, where the row player and the column player have m and n strategies, respectively. Further, we use R and C to denote the payoff matrices of the row player and the column player. We suppose that both R and C are normalized so that all their entries belong to [0, 1]. In fact, as far as Nash equilibria are concerned, any game is equivalent to a normalized game after appropriate shifting and scaling of both payoff matrices.
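To make the normalization step concrete, here is a minimal sketch (in Python with NumPy; the function name and the example matrix are illustrative, not from the paper) of the affine rescaling that maps an arbitrary payoff matrix into [0, 1]:

```python
import numpy as np

def normalize(M):
    """Affinely rescale a payoff matrix so all entries lie in [0, 1].

    Shifting every entry by the same constant and scaling by the same
    positive factor does not change either player's ranking over
    strategies, so the Nash equilibria are preserved.
    """
    M = np.asarray(M, dtype=float)
    lo, hi = M.min(), M.max()
    if hi == lo:                     # constant matrix: any strategy is optimal
        return np.zeros_like(M)
    return (M - lo) / (hi - lo)

R = np.array([[3.0, -1.0], [0.0, 2.0]])
R_norm = normalize(R)
print(R_norm.min(), R_norm.max())    # 0.0 1.0
```

Because the rescaling is a positive affine transformation applied uniformly, best responses (and hence equilibria) of the normalized game coincide with those of the original game.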
For two vectors u and v, we write u ≥ v if each entry of u is greater than or equal to the corresponding entry of v. Meanwhile, we denote by 1_k the k-dimensional vector with all entries equal to 1. We use a probability vector to describe either player's behavior, namely the probability with which the player chooses each pure strategy. More specifically, the row player's strategy x and the column player's strategy y lie in Δ_m and Δ_n respectively, where Δ_k = { p ∈ R^k : p ≥ 0, 1_k^T p = 1 }. For a strategy pair (x, y), we call it an ε-approximate Nash equilibrium if for any x' ∈ Δ_m and y' ∈ Δ_n, the following inequalities hold:

x'^T R y ≤ x^T R y + ε,   x^T C y' ≤ x^T C y + ε.
Therefore, a Nash equilibrium is exactly an ε-approximate Nash equilibrium with ε = 0.
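The definition above can be sketched as a direct check. The snippet below (assuming the common notation of payoff matrices R, C and mixed strategies x, y as NumPy arrays, which is our choice rather than fixed by the text) tests only pure-strategy deviations; this suffices because the payoff of any mixed deviation is a convex combination of pure-deviation payoffs:

```python
import numpy as np

def is_eps_nash(R, C, x, y, eps):
    """Check the eps-approximate Nash condition via pure deviations."""
    row_payoff = x @ R @ y
    col_payoff = x @ C @ y
    best_row = (R @ y).max()         # best pure response of the row player
    best_col = (C.T @ x).max()       # best pure response of the column player
    return best_row <= row_payoff + eps and best_col <= col_payoff + eps

# Matching pennies normalized to [0, 1]; uniform play is an exact NE.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
C = 1.0 - R
x = y = np.array([0.5, 0.5])
print(is_eps_nash(R, C, x, y, eps=0.0))   # True
```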
To simplify our further discussion, for any probability vector p, we use supp(p) to denote the support of p (the set of indices of its nonzero entries), and suppmax(p) (resp. suppmin(p)) to denote the index set of all entries equal to the maximum (resp. minimum) entry of p. At last, we use max(p) to denote the value of the maximal entry of p, and max_S(p) to denote the value of the maximal entry of p restricted to an index set S.
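These index sets and restricted maxima translate directly into code. A small sketch follows (the function names are ours, and a numerical tolerance is added because floating-point strategies are rarely exactly zero or exactly maximal):

```python
import numpy as np

def supp(v, tol=1e-12):
    """Indices of the nonzero entries of a probability vector."""
    v = np.asarray(v, dtype=float)
    return set(np.flatnonzero(v > tol))

def suppmax(v, tol=1e-12):
    """Indices of the entries attaining the maximum of v."""
    v = np.asarray(v, dtype=float)
    return set(np.flatnonzero(v >= v.max() - tol))

def max_on(v, S):
    """Maximum entry of v restricted to the index set S."""
    v = np.asarray(v, dtype=float)
    return max(v[i] for i in S)

v = np.array([0.5, 0.0, 0.5])
print(supp(v), suppmax(v), max_on(v, {1, 2}))
```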
3 Algorithms
In this section, we first restate the TS algorithm [20] and then propose two auxiliary adjustment methods that help to analyze the bound of the TS algorithm.
The TS algorithm formulates the approximate Nash equilibrium problem as an optimization problem. Specifically, we define the following functions:

f_R(x, y) = max(Ry) − x^T R y,   f_C(x, y) = max(C^T x) − x^T C y,   f(x, y) = max{ f_R(x, y), f_C(x, y) }.

The goal is to minimize f over Δ_m × Δ_n.
The relationship between the function f and approximate Nash equilibria is as follows. Given a strategy pair (x, y), f_R(x, y) and f_C(x, y) are the respective best-response deviations of the row player and the column player. By definition, (x, y) is an ε-approximate Nash equilibrium if and only if f(x, y) ≤ ε. In other words, as long as we obtain a point with f-value no greater than ε, an ε-approximate Nash equilibrium is reached.
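Under the same assumed notation (payoff matrices R and C), the two deviation functions and the objective can be written in a few lines; the sketch below checks that f vanishes at an exact equilibrium of a normalized matching-pennies game:

```python
import numpy as np

def f_R(R, x, y):
    """Row player's deviation: best pure-response value minus current payoff."""
    return (R @ y).max() - x @ R @ y

def f_C(C, x, y):
    """Column player's deviation."""
    return (C.T @ x).max() - x @ C @ y

def f(R, C, x, y):
    """Objective of the TS formulation: the larger of the two deviations.

    f is nonnegative everywhere, and (x, y) is an eps-approximate Nash
    equilibrium exactly when f(x, y) <= eps.
    """
    return max(f_R(R, x, y), f_C(C, x, y))

# Matching pennies normalized to [0, 1]: uniform play gives f = 0.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
C = 1.0 - R
x = y = np.array([0.5, 0.5])
print(f(R, C, x, y))   # 0.0
```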
The idea of the TS algorithm is to find a stationary point of the objective function by a descent procedure and then make a further adjustment at the stationary point. (We will see in Remark 1 that finding a stationary point alone is not enough to reach a good approximation ratio; therefore, the adjustment step is necessary.) To give the formal definition of stationary points, we need the scaled directional derivative of f, defined as follows:
Definition 1.
Given (x, y) ∈ Δ_m × Δ_n and a direction (x', y') ∈ Δ_m × Δ_n, the scaled directional derivative of f at (x, y) in the direction (x', y') is the (suitably scaled) one-sided derivative of f along the segment from (x, y) toward (x', y'). The scaled directional derivatives of f_R and f_C are defined similarly.
Now we give the definition of stationary points.
Definition 2.
A pair (x, y) is a stationary point if and only if for any direction (x', y') ∈ Δ_m × Δ_n, the scaled directional derivative of f at (x, y) in the direction (x', y') is nonnegative.
We use a descent procedure to find a stationary point; the procedure is presented in Appendix 0.B. It has already been proved that the procedure finds a nearly stationary point in time polynomial in the desired precision [19].
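As a rough numerical illustration (not the descent procedure of the paper), one can sanity-check stationarity by finite differences: move a small step from (x, y) toward a candidate direction (x', y') along the convex combination (1 − ε)(x, y) + ε(x', y') and check that the slope of f is nonnegative. The parametrization and the step size here are our assumptions:

```python
import numpy as np

def f(R, C, x, y):
    # objective: the larger of the two players' deviations
    return max((R @ y).max() - x @ R @ y, (C.T @ x).max() - x @ C @ y)

def directional_slope(R, C, x, y, xp, yp, eps=1e-6):
    """Finite-difference slope of f when moving from (x, y) toward (xp, yp)
    along (1 - eps) * (x, y) + eps * (xp, yp).  At a stationary point this
    slope should be nonnegative for every feasible direction."""
    xe = (1 - eps) * x + eps * xp
    ye = (1 - eps) * y + eps * yp
    return (f(R, C, xe, ye) - f(R, C, x, y)) / eps

# At an exact Nash equilibrium f = 0 is a global minimum, so every
# directional slope must be >= 0 (up to numerical error).
R = np.array([[1.0, 0.0], [0.0, 1.0]])
C = 1.0 - R
x = y = np.array([0.5, 0.5])
for i in range(2):
    for j in range(2):
        print(directional_slope(R, C, x, y, np.eye(2)[i], np.eye(2)[j]) >= -1e-6)
```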
To better deal with f, we introduce a new function as follows:
where the arguments are subject to the stated range restrictions. (Throughout the paper, we suppose these restrictions hold; they are omitted afterward for fluency.) One can verify that when f_R(x, y) = f_C(x, y) (which is a necessary condition for a stationary point, as proved in Proposition 3),
Now let
By Definition 2, is a stationary point if and only if . Further, notice that
therefore, we have the following proposition.
Proposition 1.
is a stationary point if and only if
In what follows, we use (x*, y*) to denote a stationary point. By von Neumann's minimax theorem [16], we have
Proposition 2.
and there exist such that
We call the resulting tuple a dual solution, as it can be computed by solving the dual linear program.
Nevertheless, a stationary point by itself may not be satisfactory (i.e., its approximation ratio can be as poor as 1/2 in the worst case; see Remark 1). In this case, we adjust the stationary point to another point lying in the following rectangle:
Different adjustments of the stationary point yield different algorithms for finding an approximate Nash equilibrium. We present three such methods below; the first is the one used by the TS algorithm, and the other two serve the analysis in Section 4. For brevity, we define the following two subsets of the boundary of the rectangle:
Method in the TS algorithm [20]. The first method is the original adjustment given by [20] (known in the literature as the TS algorithm). Define the quantities
The adjusted strategy pair is
Minimum point on . For the second method, define
In a geometric view, our goal is to find the minimum point of on .
The strategy pair given by the second method is
Intersection point of the linear bounds of f_R and f_C on the boundary. As we will see later, the second method always behaves no worse than the first in theory. However, it is rather hard to quantitatively analyze the exact approximation ratio of the second method. Therefore, we propose a third adjustment method. Notice that f_R, f_C and f are all convex, piecewise-linear functions when either coordinate is fixed. Therefore, they are bounded by linear functions on the boundary of the rectangle. Formally, we have
(1)  
(2)  
(3)  
(4) 
Taking the minimum of the terms on the right-hand sides of (1) and (2), and of (3) and (4), respectively, we derive the following quantities. (The denominator of either quantity may be zero; in this case, the corresponding quantity is simply assigned a default value.)
The adjusted strategy pair is
We remark that the outcomes of all three methods can be computed in time polynomial in the numbers of the two players' strategies.
4 A Tight Instance for All Three Methods
We now establish the tight bound of the TS algorithm presented in the previous section, with the help of the two auxiliary adjustment methods proposed in Section 3. [20] has shown that the TS algorithm attains an approximation ratio of no greater than 0.3393. In this section, we construct a game on which the TS algorithm attains exactly this bound. In detail, the game has the payoff matrices (5), whose parameters involve the tight bound and the quantities defined formally in Lemma 6. The game attains the tight bound at a stationary point together with a corresponding dual solution. Additionally, the bound persists for this game even when we search for the minimum point of f on the entire rectangle.
(5) 
The formal statement of this result is presented in the following Theorem 4.1.
Theorem 4.1.
There exists a game such that for some stationary point with dual solution , holds for any .
The proof of Theorem 4.1 proceeds by verifying the tight instance (5) above. Nevertheless, some preparations are required to complete the verification theoretically; they also illustrate the approach by which we found the tight instance.
The preparation work consists of three parts. First, we give an equivalent condition for a stationary point in Proposition 3, which makes it easier to construct payoff matrices with a given stationary point and its corresponding dual solution. Second, we illustrate a panoramic picture of the functions f_R and f_C on the rectangle and subsequently reveal the relationship among the three adjusted strategy pairs presented in Section 3. Finally, we give some estimations and show when they are exactly tight. Below we present all the propositions and lemmas we need; all proofs are deferred to Appendix 0.C. The following proposition shows how to construct payoff matrices with a given stationary point and dual solution.
Proposition 3.
Let
Then is a stationary point if and only if and there exist such that
(6)  
(7) 
Now we define the following quantities:
Lemma 1.
If , then .
For the sake of brevity below, we define
Then we have the following lemma:
Lemma 2.
The following two statements hold:

Given , is an increasing, convex and piecewise-linear function of ; is a decreasing and linear function of .

Given , is an increasing, convex and piecewise-linear function of ; is a decreasing and linear function of .
Recall that the second adjustment method yields the strategy pair . We have the following lemma indicating that and are the minimum points on the boundary of .
Lemma 3.
The following two statements hold:

is the minimum point of on .

is the minimum point of on .
Now we are ready to give an analysis of the third adjustment method.
Lemma 4.
The following two statements hold:

is a linear function of if and only if
(8) 
is a linear function of if and only if
(9)
With all the previous results, we can finally compare the three adjustment methods presented in Section 3.
Proposition 4.
and always hold. Meanwhile, holds if and only if
There is one final preparation step for the proof of the tight bound. We present the following estimations and inequalities.
Lemma 5.
The following two estimations hold:

If , then
And symmetrically, when , we have
Furthermore, if is not a Nash equilibrium, the equality holds if and only if .

.
Remark 1.
Lemma 5 tells us that the worst value of f that a stationary point could attain is 1/2. In fact, this value can be attained, and we give the following game to demonstrate it. Consider the payoff matrices:
One can verify by Proposition 3 that this game has a stationary point, with a corresponding dual solution, at which f equals 1/2. Therefore, a stationary point by itself cannot beat the straightforward algorithm of [8], which always finds a 1/2-approximate Nash equilibrium.
Lemma 6 ([20]).
Let
Then , which is attained exactly at and .
Finally, we prove Theorem 4.1 by verifying that the tight instance (5), with its stationary point and dual solution, attains the bound.
Proof Sketch.
The verification is divided into four steps. The first step is to verify that the given point is a stationary point by Proposition 3; the intermediate steps verify the stated conditions for every point in the relevant ranges. The last step needs more elaboration: first, we do a verification similar to the preceding steps, which shows that the objective is a linear function of its argument; second, we define an auxiliary quantity and prove the required inequality over the whole range, which completes the proof. ∎
From the proof of Theorem 4.1, we obtain the following useful corollaries.
Corollary 1.
Suppose . If either of the following two statements holds:

and ,

and ,
then for any on the boundary of , .
Corollary 2.
Suppose , and . Then for any , .
It is worth noting that the game with payoff matrices (5) has a pure Nash equilibrium, while the stationary point is a strictly dominated strategy pair. However, a Nash equilibrium is never supported on dominated strategies! We can also construct a rich family of games that attain the tight bound yet have distinct characteristics. For instance, we can give a game that has no dominated strategies but still attains the tight bound. Some examples are listed in Appendix 0.E. Such results suggest that stationary points may not be an optimal concept (in theory) for further approximate Nash equilibrium computation.
5 Generating Tight Instances
In Section 4, we proved the existence of a tight game instance, and we can do more than that. Specifically, we can mathematically characterize all games that attain the tight bound. In this section, we gather the properties established in the previous sections and present an algorithm that generates games of this kind. Using the generator, we can dig into the three approximate Nash equilibrium algorithms above and reveal their behavior and, even further, the features of stationary points. Algorithm 1 gives the generator of tight instances, whose inputs are an arbitrary candidate stationary point and dual solution. The algorithm outputs games for which the given point is a stationary point and the given tuple is a corresponding dual solution, or outputs “NO” if no such game exists.
The main idea of the algorithm is as follows. Proposition 3 gives an easier-to-verify equivalent condition for a stationary point, and all the additional conditions required by a tight instance are stated in Proposition 4, Lemma 5 and Lemma 6. Therefore, if we enumerate each pair of possible pure strategies in the two relevant supports, deciding whether a tight instance exists becomes a linear programming feasibility problem.
Algorithm 1 Tight Instance Generator
Proposition 5.
Given