# On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum Games

We propose a two-timescale algorithm for finding local Nash equilibria in two-player zero-sum games. We first show that previous gradient-based algorithms cannot guarantee convergence to local Nash equilibria due to the existence of non-Nash stationary points. By taking advantage of the differential structure of the game, we construct an algorithm for which the local Nash equilibria are the only attracting fixed points. We also show that the algorithm exhibits no oscillatory behaviors in neighborhoods of equilibria and show that it has the same per-iteration complexity as other recently proposed algorithms. We conclude by validating the algorithm on two numerical examples: a toy example with multiple Nash equilibria and a non-Nash equilibrium, and the training of a small generative adversarial network (GAN).


## 1 Introduction

The classical problem of finding Nash equilibria in multi-player games has been a focus of intense research in computer science, control theory, economics and mathematics (Basar and Olsder, 1998; Nisan et al., 2007; Daskalakis, 2009). Some connections have been made between this extensive literature and machine learning (see, e.g., Cesa-Bianchi and Lugosi, 2006; Banerjee and Peng, 2003; Foerster et al., 2017), but these connections have focused principally on decision-making by single agents and multiple agents, and not on the burgeoning pattern-recognition side of machine learning, with its focus on large data sets and simple gradient-based algorithms for prediction and inference. This gap has begun to close in recent years, due to new formulations of learning problems as involving competition between subsystems that are construed as adversaries (Goodfellow et al., 2014), the need to robustify learning systems against actual adversaries (Xu et al., 2009) and against mismatch between assumptions and data-generating mechanisms (Yang, 2011; Giordano et al., 2018), and an increasing awareness that real-world machine-learning systems are often embedded in larger economic systems or networks (Jordan, 2018).

These emerging connections bring significant algorithmic and conceptual challenges to the fore. Indeed, while gradient-based learning has been a major success in machine learning, both in theory and in practice, work on gradient-based algorithms in game theory has often highlighted their limitations. For example, gradient-based approaches are known to be difficult to tune and train (Daskalakis et al., 2017; Mescheder et al., 2017; Hommes and Ochea, 2012; Balduzzi et al., 2018), and recent work has shown that gradient-based learning will almost surely avoid a subset of the local Nash equilibria in general-sum games (Mazumdar and Ratliff, ). Moreover, there is no shortage of work showing that gradient-based algorithms can converge to limit cycles or even diverge in game-theoretic settings (Benaïm and Hirsch, 1999; Hommes and Ochea, 2012; Daskalakis et al., 2017; Mertikopoulos et al., 2018b).

These drawbacks have led to a renewed interest in approaches to finding the Nash equilibria of zero-sum games, or equivalently, to solving saddle point problems. Recent work has attempted to use second-order information to reduce oscillations around equilibria and speed up convergence to fixed points of the gradient dynamics (Mescheder et al., 2017; Balduzzi et al., 2018). Other recent approaches have attempted to tackle the problem from the variational inequality perspective but also with an eye on reducing oscillatory behaviors (Mertikopoulos et al., 2018a; Gidel et al., 2018).

None of these approaches, however, address a fundamental issue that arises in zero-sum games. As we will discuss, the set of attracting fixed points for the gradient dynamics in zero-sum games can include critical points that are not Nash equilibria. In fact, any saddle point of the underlying function that does not satisfy a particular alignment condition of a Nash equilibrium is a candidate attracting equilibrium for the gradient dynamics. Further, as we show, these points are attracting for a variety of recently proposed adjustments to gradient-based algorithms, including consensus optimization (Mescheder et al., 2017), the symplectic gradient adjustment (Balduzzi et al., 2018), and a two-timescale version of simultaneous gradient descent (Heusel et al., 2017). Moreover, we show by counterexample that these algorithms can all converge to non-Nash stationary points.

We present a new gradient-based algorithm for finding the local Nash equilibria of two-player zero-sum games and prove that the only stationary points to which the algorithm can converge are local Nash equilibria. Our algorithm makes essential use of the underlying structure of zero-sum games. To obtain our theoretical results we work in continuous time—via an ordinary differential equation (ODE)—and our algorithm is obtained via a discretization of the ODE. While a naive discretization would require a matrix inversion and would be computationally burdensome, our discretization is a two-timescale discretization that avoids matrix inversion entirely and is of a similar computational complexity as that of other gradient-based algorithms.

The paper is organized as follows. In Section 2 we define our notation and the problem we address. In Section 3 we define the limiting ODE that we would like our algorithm to follow and show that it has the desirable property that its only limit points are local Nash equilibria of the game. In Section 4 we introduce local symplectic surgery, a two-timescale procedure that asymptotically tracks the limiting ODE and show that it can be implemented efficiently. Finally, in Section 5 we present two numerical examples to validate the algorithm. The first is a toy example with three local Nash equilibria, and one non-Nash fixed point. We show that simultaneous gradient descent and other recently proposed algorithms for zero-sum games can converge to any of the four points while the proposed algorithm only converges to the local Nash equilibria. The second example is a small generative adversarial network (GAN), where we show that the proposed algorithm converges to a suitable solution within a similar number of steps as simultaneous gradient descent.

## 2 Preliminaries

We consider a two-player game in which one player tries to minimize a function f: ℝ^{d1} × ℝ^{d2} → ℝ with respect to their decision variable x ∈ ℝ^{d1}, and the other player aims to maximize f with respect to their decision variable y ∈ ℝ^{d2}, where d1 + d2 = d. We write such a game as {min_x f, min_y −f}, since the second player can be seen as minimizing −f. We assume that neither player knows anything about the critical points of f, but that both players follow the rules of the game. Such a situation arises naturally when training machine learning algorithms (e.g., training generative adversarial networks or in multi-agent reinforcement learning). Without restricting f, and assuming both players are non-cooperative, the best they can hope to achieve is a local Nash equilibrium; i.e., a point (x∗, y∗) that satisfies

 f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗),

for all x and y in neighborhoods of x∗ and y∗ respectively. Such equilibria are locally optimal for both players with respect to their own decision variable, meaning that neither player has an incentive to unilaterally deviate from such a point. As was shown in Ratliff et al. (2013), generically, local Nash equilibria will satisfy slightly stronger conditions, namely they will be differential Nash equilibria (DNE):

A strategy (x∗, y∗) is a differential Nash equilibrium if:

• Dxf(x∗, y∗) = 0 and Dyf(x∗, y∗) = 0;

• D²xxf(x∗, y∗) ≻ 0 and −D²yyf(x∗, y∗) ≻ 0.

Here Dxf and Dyf denote the partial derivatives of f with respect to x and y respectively, and D²xxf and D²yyf denote the matrices of second derivatives of f with respect to x and y. Both differential and local Nash equilibria in two-player zero-sum games are, by definition, special saddle points of f that satisfy a particular alignment condition with respect to the players' decision variables. Indeed, the definition of differential Nash equilibria, which holds for almost all local Nash equilibria in a formal mathematical sense, makes this condition clear: the directions of positive and negative curvature of f at a local Nash equilibrium must be aligned with the minimizing and maximizing player's decision variables respectively.

We note that the key difference between local and differential Nash equilibria is that D²xxf and −D²yyf are required to be definite instead of semidefinite. This distinction simplifies our analysis while still allowing our results to hold for almost all continuous games.

### 2.1 Issues with gradient-based algorithms in zero-sum games

Having introduced local Nash equilibria as the solution concept of interest, we now consider how to find such solutions, and in particular we highlight some issues with gradient-based algorithms in zero-sum continuous games. The most common method of finding local Nash equilibria in such games is to have both players randomly initialize their variables and then follow their respective gradients. That is, at each step n, each agent updates their variable as follows:

 x_{n+1} = x_n − γ_n Dxf(x_n, y_n)
 y_{n+1} = y_n + γ_n Dyf(x_n, y_n),

where {γ_n} is a sequence of step sizes. The minimizing player performs gradient descent on their cost while the maximizing player ascends their gradient. We refer to this algorithm as simultaneous gradient descent (simGD). To simplify the notation, we let z = (x, y), and define the vector-valued function ω: ℝ^d → ℝ^d as:

 ω(z) = (Dxf(x, y), −Dyf(x, y)).

In this notation, the simGD update is given by:

 z_{n+1} = z_n − γ_n ω(z_n). (1)
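As a concrete illustration (our own, not from the paper), the following sketch runs the simGD update (1) on the bilinear game f(x, y) = xy, whose unique Nash equilibrium is the origin. With any constant step size the iterates spiral outward, previewing the convergence issues discussed below.

```python
import numpy as np

def omega(z):
    # For f(x, y) = x * y: Dxf = y, Dyf = x, so omega(z) = (y, -x).
    x, y = z
    return np.array([y, -x])

def simgd(z0, gamma=0.05, steps=200):
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - gamma * omega(z)  # simGD update (1)
    return z

z_final = simgd([1.0, 0.0])
# Each step multiplies ||z|| by sqrt(1 + gamma^2) > 1, so simGD
# spirals away from the Nash equilibrium at the origin.
print(np.linalg.norm(z_final))
```

Here each update is a rotation composed with an expansion, so the iterates never settle at the equilibrium.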

Since (1) is in the form of a discrete-time dynamical system, it is natural to examine its limiting behavior through the lens of dynamical systems theory. Intuitively, given a properly chosen sequence of step sizes,  (1) should have the same limiting behavior as the continuous-time flow:

 ˙z=−ω(z). (2)

We can analyze this flow in neighborhoods of equilibria by studying the Jacobian matrix of ω, denoted J:

 J(z) = [D²xxf(x,y), D²yxf(x,y); −D²xyf(x,y), −D²yyf(x,y)]. (3)

We remark that the diagonal blocks of J(z) are always symmetric and that D²xyf = (D²yxf)ᵀ. Thus J(z) can be written as the sum of a block symmetric matrix S(z) and a block anti-symmetric matrix A(z), where:

 S(z) = [D²xxf(z), 0; 0, −D²yyf(z)];  A(z) = [0, D²yxf(z); −D²xyf(z), 0].
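To make the decomposition concrete, here is a small sketch (ours, not from the paper) for the scalar quadratic game f(x, y) = (a/2)x² + cxy − (b/2)y², for which ω and J can be written in closed form.

```python
import numpy as np

# Quadratic zero-sum game f(x, y) = (a/2)*x**2 + c*x*y - (b/2)*y**2.
a, b, c = 2.0, 1.0, 3.0

# omega(z) = (Dxf, -Dyf) = (a*x + c*y, -c*x + b*y), with Jacobian J.
J = np.array([[a,  c],
              [-c, b]])

# Block-symmetric part S = diag(D2xx f, -D2yy f) and
# block-antisymmetric part A holding the mixed derivatives.
S = np.diag([a, b])
A = np.array([[0.0,  c],
              [-c, 0.0]])

assert np.allclose(J, S + A)   # J = S + A
assert np.allclose(A, -A.T)    # A is anti-symmetric
assert np.allclose(S, S.T)     # S is symmetric
print("decomposition verified")
```

For this f the Nash conditions reduce to a > 0 and b > 0, i.e., S positive definite, while A carries the coupling between the players.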

Given the structure of the Jacobian, we can now draw links between differential Nash equilibria and equilibrium concepts in dynamical systems theory. We focus on hyperbolic critical points of ω.

A strategy z is a critical point of ω if ω(z) = 0. It is a hyperbolic critical point if Re(λ_i(J(z))) ≠ 0 for all i, where Re(λ_i(J(z))) denotes the real part of the i-th eigenvalue of J(z). It is well known that hyperbolic critical points are generic among critical points of smooth dynamical systems (see, e.g., Sastry, 1999), meaning that our focus on hyperbolic critical points is not very restrictive. Of particular interest are locally asymptotically stable equilibria of the dynamics.

A strategy z∗ is a locally asymptotically stable equilibrium (LASE) of the continuous-time dynamics (2) if ω(z∗) = 0 and Re(λ_i(J(z∗))) > 0 for all i. LASE have the desirable property that they are locally exponentially attracting under the flow of (2). This implies that a properly discretized version of (2) will also converge exponentially fast in a neighborhood of such points. LASE are the only attracting hyperbolic equilibria. Thus, making statements about all the LASE of a certain continuous-time dynamical system allows us to characterize all of its attracting hyperbolic equilibria.

As shown in Ratliff et al. (2013) and Nagarajan and Kolter (2017), the fact that all differential Nash equilibria are critical points of ω, coupled with the structure of J in zero-sum games, guarantees that all differential Nash equilibria of the game are LASE of the gradient dynamics. However, the converse is not true. The structure present in zero-sum games is not enough to ensure that the differential Nash equilibria are the only LASE of the gradient dynamics. When either D²xxf or −D²yyf is indefinite at a critical point of ω, the Jacobian J can still have eigenvalues with strictly positive real parts.

Consider a matrix having the form:

 M = [a, c; −c, −b],

where a, b ∈ ℝ satisfy ab > 0, and c ∈ ℝ. These conditions imply that the symmetric part of M is indefinite, so M cannot be the Jacobian of ω at a local Nash equilibrium. However, if a > b and c² > ab, both of the eigenvalues of M will have strictly positive real parts, and such a point could still be a LASE of the gradient dynamics.
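A concrete numeric instance of the matrix M above (values ours): with a = 2, b = 1, c = 2 the symmetric part of M is indefinite, ruling out a local Nash equilibrium, yet both eigenvalues of M have strictly positive real parts, so the point attracts the gradient dynamics.

```python
import numpy as np

a, b, c = 2.0, 1.0, 2.0       # ab > 0, a > b, c**2 > ab
M = np.array([[a,  c],
              [-c, -b]])

# Symmetric part diag(a, -b) is indefinite, so M cannot be the
# Jacobian of omega at a local Nash equilibrium.
sym = 0.5 * (M + M.T)
eigs_sym = np.linalg.eigvalsh(sym)
assert eigs_sym[0] < 0 < eigs_sym[-1]

# Yet every eigenvalue of M has strictly positive real part, so the
# corresponding critical point is attracting under the flow of -omega.
eigs = np.linalg.eigvals(M)
assert np.all(eigs.real > 0)
print(eigs)
```

The anti-symmetric off-diagonal entries supply the stabilizing rotation discussed below.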

Such points, which we refer to as non-Nash LASE of (2), are what makes obtaining guarantees on the convergence of algorithms in zero-sum games particularly difficult. Non-Nash LASE are not locally optimal for both players, and may not even be optimal for one of the players. By definition, at least one of the two players has a direction in which they could move to unilaterally decrease their cost. Such points arise solely due to the gradient dynamics, and persist even in other gradient-based dynamics suggested in the literature. In Appendix B, we show that three recent algorithms for finding local Nash equilibria in zero-sum continuous games (consensus optimization, the symplectic gradient adjustment, and a two-timescale version of simGD) can converge to such points and therefore have no guarantees of convergence to local Nash equilibria. We note that such points can be very common, since every saddle point of f that is not a local Nash equilibrium is a candidate non-Nash LASE of the gradient dynamics. Further, local minima or maxima of f could also be non-Nash LASE of the gradient dynamics.

To understand how non-Nash equilibria can be attracting under the flow of (2), we again analyze the Jacobian of ω. At such points, the symmetric matrix S(z) must have both positive and negative eigenvalues. The sum of S(z) with A(z), however, has eigenvalues with strictly positive real parts. Thus, the anti-symmetric matrix A(z) can be seen as stabilizing such points.

Previous gradient-based algorithms for zero-sum games have also pinpointed the matrix A(z) as the source of problems in zero-sum games; however, they focus on a different issue. Consensus optimization (Mescheder et al., 2017) and the symplectic gradient adjustment (Balduzzi et al., 2018) both seek to adjust the gradient dynamics to reduce oscillatory behaviors in neighborhoods of stable equilibria. Since the matrix A(z) is anti-symmetric, it has only imaginary eigenvalues. If it dominates S(z), then the eigenvalues of J(z) can have large imaginary components. This leads to oscillations around equilibria that have been shown empirically to slow down convergence (Mescheder et al., 2017). Both of these adjustments rely on tunable hyperparameters to achieve their goals. Their effectiveness is therefore highly reliant on the choice of parameter. Further, as shown in Appendix B, neither of the adjustments is able to rule out convergence to non-Nash equilibria.

A second promising line of research into theoretically sound methods of finding the Nash equilibria of zero-sum games has approached the issue from the perspective of variational inequalities (Mertikopoulos et al., 2018a; Gidel et al., 2018). In Mertikopoulos et al. (2018a), extragradient methods were used to solve coherent saddle point problems and reduce oscillations when converging to saddle points. In such problems, however, all saddle points of the function are assumed to be local Nash equilibria, and thus the issue of converging to non-Nash equilibria is assumed away. Similarly, by assuming that ω is monotone, as in the theoretical treatment of the averaging scheme proposed in Gidel et al. (2018), the cost function f is implicitly assumed to be convex-concave. This in turn implies that the Jacobian J satisfies the conditions for a Nash equilibrium everywhere. The behavior of these approaches in more general zero-sum games with less structure (like the training of GANs) is therefore not well known. Moreover, since the approach of Gidel et al. (2018) relies on averaging the gradients, it does not fundamentally change the nature of the critical points of simGD.

In the following sections we propose an algorithm for which the only LASE are the differential Nash equilibria of the game. We also show that, regardless of the choice of hyper-parameter, the Jacobian of the new dynamics at LASE has real eigenvalues, which means that the dynamics cannot exhibit oscillatory behaviors around differential Nash equilibria.

## 3 Constructing the limiting differential equation

In this section we define the continuous-time flow that our discrete-time algorithm should ideally follow.

###### Assumption 1 (Lipschitz assumptions on f and J)

Assume that f is twice continuously differentiable, and that ω and J are L_ω-Lipschitz and L_J-Lipschitz respectively. Finally, assume that all critical points of ω are hyperbolic.

We do not require J to be invertible everywhere, but only at the critical points of ω.

Now, consider the continuous-time flow:

 ˙z = −h(z) = −½(ω(z) + Jᵀ(z)(Jᵀ(z)J(z) + λ(z)I)⁻¹Jᵀ(z)ω(z)), (4)

where λ: ℝ^d → ℝ₊ is such that λ(z) ≥ 0 for all z, and λ(z) = 0 if and only if ω(z) = 0.

The function λ ensures that, even when J is not invertible everywhere, the inverse matrix in (4) exists. The vanishing condition ensures that the Jacobian of the adjustment term is exactly Jᵀ at differential Nash equilibria.

The dynamics introduced in (4) can be seen as an adjusted version of the gradient dynamics where the adjustment term only allows trajectories to approach critical points of ω along the players' axes. If a critical point is not locally optimal for one of the players (i.e., it is a non-Nash critical point) then that player can push the dynamics out of a neighborhood of that point. The mechanism is easier to see if we assume J is invertible everywhere and set λ ≡ 0. This results in the following dynamics:

 ˙z = −½(ω(z) + Jᵀ(z)J⁻¹(z)ω(z)). (5)

In this simplified form we can see that the Jacobian of the adjustment term is approximately Jᵀ(z) when ω(z) is small. This approximation is exact at critical points of ω. Adding this adjustment term to ω exactly cancels out the rotational part of the vector field contributed by the anti-symmetric matrix A(z) in neighborhoods of critical points. Since we identified A(z) as the source of oscillatory behaviors and non-Nash equilibria in Section 2, this adjustment addresses both of these issues. The following theorem establishes this formally.
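The cancellation can be checked directly on a linear vector field (an illustration of ours, with λ ≡ 0): for ω(z) = Mz the Jacobian is the constant matrix M, and the adjusted field (5) reduces to −½(M + Mᵀ)z, i.e., the anti-symmetric part is removed exactly. Taking M = [2, 2; −2, −1] (the non-Nash example above with a = 2, b = 1, c = 2), the origin becomes a linear saddle of the adjusted dynamics rather than an attractor.

```python
import numpy as np

# Linear vector field omega(z) = M z; constant Jacobian J = M, lambda = 0.
M = np.array([[2.0,  2.0],
              [-2.0, -1.0]])

def h(z):
    # Adjusted field (5): 0.5 * (omega(z) + J^T J^{-1} omega(z)).
    w = M @ z
    return 0.5 * (w + M.T @ np.linalg.solve(M, w))

# For linear omega, h(z) = 0.5 * (M + M^T) z = S z exactly: symmetric,
# hence real eigenvalues, and indefinite here, so the origin repels
# the adjusted dynamics in one direction.
S = 0.5 * (M + M.T)
z = np.array([1.0, 1.0])
assert np.allclose(h(z), S @ z)

eigs = np.linalg.eigvalsh(S)
assert eigs[0] < 0 < eigs[-1]   # saddle of the adjusted dynamics
print(eigs)
```

The same point attracts the plain gradient flow ˙z = −Mz, so the adjustment is what destabilizes the non-Nash critical point.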

Under Assumption 1, and assuming that ω(z) is never an eigenvector of Jᵀ(z)(Jᵀ(z)J(z) + λ(z)I)⁻¹Jᵀ(z) with eigenvalue −1, the continuous-time dynamical system ˙z = −h(z) satisfies:

• z∗ is a LASE of ˙z = −h(z) if and only if z∗ is a differential Nash equilibrium of the game {min_x f, min_y −f}.

• If z∗ is a critical point of ω, then the Jacobian of h at z∗ has real eigenvalues.

We first show that:

 h(z) = 0 ⟺ ω(z) = 0.

Clearly, ω(z) = 0 ⟹ h(z) = 0. To show the converse, we assume that h(z) = 0 but ω(z) ≠ 0. This implies that:

 Jᵀ(z)(Jᵀ(z)J(z) + λ(z)I)⁻¹Jᵀ(z)ω(z) = −ω(z).

Since we assumed that ω(z) is never an eigenvector of this matrix with eigenvalue −1, this cannot be true, and we must have that ω(z) = 0.

Having shown that, under our assumptions, the critical points of h are the same as those of ω, we now note that the Jacobian of h at a critical point z must have the form:

 J_h(z) = ½(J(z) + Jᵀ(z)(Jᵀ(z)J(z))⁻¹Jᵀ(z)J(z)) = ½(J(z) + Jᵀ(z)) = S(z).

By assumption, at critical points, J is invertible and λ(z) = 0. Given that ω(z) = 0, terms that include ω disappear, and the adjustment term contributes only a factor of Jᵀ(z) to the Jacobian of h at a critical point. This exactly cancels out the anti-symmetric part of the Jacobian of ω. The Jacobian of h is therefore symmetric at critical points of ω, and it has only positive eigenvalues exactly when D²xxf(z) ≻ 0 and −D²yyf(z) ≻ 0.

Since these are also the conditions for differential Nash equilibria, all differential Nash equilibria of the game must be LASE of ˙z = −h(z). Further, non-Nash LASE of (2) cannot be LASE of ˙z = −h(z), since by definition either D²xxf or −D²yyf is indefinite at such points. To show the second part of the theorem, we simply note that J_h must be symmetric at all critical points, which in turn implies that it has only real eigenvalues.

The continuous-time dynamical system ˙z = −h(z) therefore solves both of the problems we highlighted in Section 2, for any choice of the function λ that satisfies our assumptions. The assumption that ω(z) is never an eigenvector of Jᵀ(z)(Jᵀ(z)J(z) + λ(z)I)⁻¹Jᵀ(z) with an eigenvalue of −1 ensures that the adjustment does not create new critical points. In high dimensions this assumption is mild since the scenario is extremely specific, but it is also possible to show that this assumption can be removed entirely by adding a time-varying term to h while still retaining the theoretical guarantees. We show this in Appendix A.

Theorem 3 shows that the only attracting hyperbolic equilibria of the limiting ordinary differential equation (ODE) are the differential Nash equilibria of the game. Also, since J_h is symmetric at critical points of ω, if either D²xxf or −D²yyf has at least one negative eigenvalue at such a point, then that point is a linearly unstable equilibrium of (4), and linearly unstable equilibria are almost surely avoided when the algorithm is randomly initialized (Benaïm and Hirsch, 1995; Sastry, 1999).

Theorem 3 also guarantees that the continuous-time dynamics do not oscillate near critical points. Oscillatory behaviors, as outlined in Mescheder et al. (2017), are known to slow down convergence of the discretized version of the process. Reducing oscillations near critical points is the main goal of consensus optimization (Mescheder et al., 2017) and the symplectic gradient adjustment (Balduzzi et al., 2018). However, for both algorithms, the extent to which they are able to reduce the oscillations depends on the choice of hyperparameter. The proposed dynamics achieve this for any λ that satisfies our assumptions.

We close this section by noting that one can pre-multiply the adjustment term by a strictly positive function ρ(z) satisfying ρ(z) = 1 whenever ω(z) = 0 while still retaining the theoretical properties described in Theorem 3. Such a function can be used to ensure that the dynamics closely track a trajectory of simGD except in neighborhoods of critical points. For example, if the matrix Jᵀ(z)J(z) is ill-conditioned, such a term could be used to ensure that the adjustment does not dominate the underlying gradient dynamics. In Section 5 we give an example of such a damping function.

## 4 Two-timescale approximation

Given the limiting ODE, we could perform a straightforward Euler discretization to obtain a discrete-time update having the form:

 z_{n+1} = z_n − γ h(z_n).

However, due to the matrix inversion, such a discrete-time update would be prohibitively expensive to implement in high-dimensional parameter spaces like those encountered when training GANs. To solve this problem, we now introduce a two-timescale approximation to the continuous-time dynamics that has the same limiting behavior as the simple discretization, but is much faster to compute at each iteration. Since this procedure serves to exactly remove the symplectic part of the Jacobian in neighborhoods of hyperbolic critical points, we refer to it as local symplectic surgery (LSS). In Appendix A we derive the two-timescale update rule for the time-varying version of the limiting ODE and show that it has the same properties.

The two-timescale approximation to (4) is given by:

 z_{n+1} = z_n − a_n h1(z_n, v_n)
 v_{n+1} = v_n − b_n h2(z_n, v_n), (6)

where h1 and h2 are defined as:

 h1(z, v) = ½(ω(z) + Jᵀ(z)v)
 h2(z, v) = Jᵀ(z)J(z)v − Jᵀ(z)ω(z) + λ(z)v,

and the sequences of step sizes {a_n} and {b_n} satisfy the following assumptions:

###### Assumption 2 (Assumptions on the step sizes)

The sequences {a_n} and {b_n} satisfy:

• Σ_n a_n = ∞ and Σ_n a_n² < ∞;

• Σ_n b_n = ∞ and Σ_n b_n² < ∞;

• a_n/b_n → 0.

We note that h2 is Lipschitz continuous in v, uniformly in z on compact sets, under Assumption 1.
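One standard choice satisfying these conditions (our example; the paper does not prescribe specific sequences here) is a_n = n⁻¹ and b_n = n⁻²ᐟ³: both are non-summable with summable squares, and a_n/b_n = n⁻¹ᐟ³ → 0, which places v_n on the faster timescale. A quick numerical check:

```python
import numpy as np

n = np.arange(1, 100001, dtype=float)
a = 1.0 / n                # sum a_n = inf, sum a_n^2 < inf
b = n ** (-2.0 / 3.0)      # sum b_n = inf, sum b_n^2 < inf

# The ratio a_n / b_n = n^(-1/3) vanishes monotonically, so the
# z_n updates are eventually negligible relative to the v_n updates.
ratio = a / b
assert ratio[0] == 1.0
assert ratio[-1] < 0.05            # 100000^(-1/3) is about 0.0215
assert np.all(np.diff(ratio) < 0)  # strictly decreasing
print(ratio[-1])
```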

The v_n process performs gradient descent on a regularized least-squares objective, where the regularization is governed by λ(z). Since the v_n process is on a faster timescale, the intuition is that it will first converge to v∗(z) = (Jᵀ(z)J(z) + λ(z)I)⁻¹Jᵀ(z)ω(z), after which z_n will track the limiting ODE in (4). In the next section we show that this behavior holds even in the presence of noise.

The key benefit of the two-timescale process is that h1 and h2 can be computed efficiently since neither requires a matrix inversion. In fact, as we show in Appendix C, the computation can be done with Jacobian-vector products with the same order of complexity as that of simGD, consensus optimization, and the symplectic gradient adjustment. This insight gives rise to the procedure outlined in Algorithm 1.
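A minimal deterministic sketch of the update (6) (ours; a toy quadratic game stands in for Algorithm 1, and λ(z) = min(1, ‖ω(z)‖²) is one admissible choice of regularizer, not the paper's):

```python
import numpy as np

# Toy quadratic game f(x, y) = x**2 + 2*x*y - y**2, whose unique
# differential Nash equilibrium is the origin.
def omega(z):
    x, y = z
    return np.array([2*x + 2*y, -(2*x - 2*y)])  # (Dxf, -Dyf)

J = np.array([[2.0, 2.0],    # constant Jacobian of omega for this f
              [-2.0, 2.0]])

def lam(z):
    # Admissible lambda: nonnegative, vanishing only at critical points.
    return min(1.0, float(np.dot(omega(z), omega(z))))

z = np.array([1.0, 1.0])
v = np.zeros(2)
g1, g2 = 0.01, 0.1           # slow (z) and fast (v) constant step sizes
for _ in range(3000):
    w = omega(z)
    h1 = 0.5 * (w + J.T @ v)                       # h1(z, v)
    h2 = J.T @ (J @ v) - J.T @ w + lam(z) * v      # h2(z, v)
    z = z - g1 * h1          # slow timescale: adjusted gradient step
    v = v - g2 * h2          # fast timescale: regularized least squares

print(np.linalg.norm(z))     # close to the Nash equilibrium at 0
```

Note that h1 and h2 only involve matrix-vector products; in high dimensions the products Jv and Jᵀv would be formed with Jacobian-vector products rather than an explicit J.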

### 4.1 Long-term behavior of the two-timescale approximation

We now show that LSS asymptotically tracks the limiting ODE even in the presence of noise. This implies that the algorithm has the same limiting behavior as (4). In particular, our setup allows us to treat the case where one only has access to unbiased estimates of h1 and h2 at each iteration. This is the setting most likely to be encountered in practice, for example when training GANs in a mini-batch setting. We therefore assume access to estimators ĥ1 and ĥ2 satisfying:

 E[ĥ1(z, v)] = ½(ω(z) + Jᵀ(z)v)
 E[ĥ2(z, v)] = Jᵀ(z)J(z)v − Jᵀ(z)ω(z) + λ(z)v.

To place this in the form of classical two-timescale stochastic approximation processes, we write each estimator ĥ1 and ĥ2 as the sum of its mean and a zero-mean noise process, M^z and M^v respectively. This results in the following two-timescale process:

 z_{n+1} = z_n − a_n[½(ω(z_n) + Jᵀ(z_n)v_n) + M^z_{n+1}]
 v_{n+1} = v_n − b_n[Jᵀ(z_n)J(z_n)v_n − Jᵀ(z_n)ω(z_n) + λ(z_n)v_n + M^v_{n+1}]. (7)

We assume that the noise processes satisfy the following standard conditions (Benaïm, 1999; Borkar, 2008):

###### Assumption 3 (Assumptions on the noise)

Define the filtration:

 F_n = σ(z_0, v_0, M^z_1, M^v_1, ..., M^z_n, M^v_n),

for n ≥ 1. Given F_n, we assume that:

• M^z_{n+1} and M^v_{n+1} are conditionally independent given F_n for all n;

• E[M^z_{n+1} | F_n] = 0 and E[M^v_{n+1} | F_n] = 0 for all n;

• E[‖M^z_{n+1}‖² | F_n] ≤ C_z(1 + ‖z_n‖² + ‖v_n‖²) and E[‖M^v_{n+1}‖² | F_n] ≤ C_v(1 + ‖z_n‖² + ‖v_n‖²) almost surely, for some positive constants C_z and C_v.

Given our assumptions on the estimators, cost function, and step sizes, we now show that (7) asymptotically tracks a trajectory of the continuous-time dynamics almost surely. Since h, h1, and h2 are not uniformly Lipschitz continuous in both z and v, we cannot directly invoke results from the literature. Instead, we adapt the proof of Theorem 2 in Chapter 6 of Borkar (2008) to show that v_n − v∗(z_n) → 0 almost surely. We then invoke Proposition 4.1 from Benaïm (1999) to show that z_n asymptotically tracks (4). We note that this approach only holds on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}. Thus, if the stochastic approximation process remains bounded, then under our assumptions we are sure to track a trajectory of the limiting ODE.

Under Assumptions 1-3, and on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}:

 (z_n, v_n) → {(z, v∗(z)) : z ∈ ℝ^d}

almost surely.

We first rewrite (7) as:

 z_{n+1} = z_n − b_n[(a_n/b_n) h1(z_n, v_n) + M̄^z_{n+1}]
 v_{n+1} = v_n − b_n[h2(z_n, v_n) + M^v_{n+1}],

where M̄^z_{n+1} = (a_n/b_n) M^z_{n+1}. By assumption, a_n/b_n → 0. Since h1 is locally Lipschitz continuous, it is bounded on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}. Thus (a_n/b_n) h1(z_n, v_n) → 0 almost surely.

From Lemma 1 in Chapter 6 of Borkar (2008), the above processes, on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}, converge almost surely to internally chain-transitive invariant sets of ˙z = 0 and ˙v = −h2(z, v). Since, for a fixed z, h2(z, ·) is a Lipschitz continuous function of v with a globally asymptotically stable equilibrium at v∗(z), the claim follows.

Having shown that v_n − v∗(z_n) → 0 almost surely, we now show that z_n will asymptotically track a trajectory of the limiting ODE. Let us first define z(t, t_0, z_0) for t ≥ t_0 to be the trajectory of ˙z = −h(z) starting at z_0 at time t_0.

Given Assumptions 1-3, let t_n = Σ_{i=0}^{n−1} a_i. On the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}, for any integer K > 0 we have:

 lim_{n→∞} sup_{0≤k≤K} ‖z_{n+k} − z(t_{n+k}, t_n, z_n)‖ = 0.

The proof makes use of Propositions 4.1 and 4.2 in Benaïm (1999) which are supplied in Appendix E.

We first rewrite the z_n process as:

 z_{n+1} = z_n − a_n[h(z_n) − ½Jᵀ(z_n)(v∗(z_n) − v_n) + M^z_{n+1}].

We note that, from Lemma 4.1, v_n − v∗(z_n) → 0 almost surely. Since Jᵀ is bounded on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞}, we can write this process as:

 z_{n+1} = z_n − a_n[h(z_n) − χ_n + M^z_{n+1}],

where χ_n → 0 almost surely. Since h is continuously differentiable, it is locally Lipschitz, and on the event {sup_n(‖z_n‖ + ‖v_n‖) < ∞} it is bounded. It thus induces a continuous globally integrable vector field, and therefore satisfies the assumptions of Proposition 4.1 in Benaïm (1999). Further, by assumption, the sequence of step sizes and the martingale difference sequences satisfy the assumptions of Proposition 4.2 in Benaïm (1999). Invoking Propositions 4.1 and 4.2 in Benaïm (1999) gives us the desired result.

Theorem 4.1 guarantees that LSS asymptotically tracks a trajectory of the limiting ODE. The approximation will therefore avoid non-Nash equilibria of the gradient dynamics. Further, the only locally asymptotically stable points for LSS must be the differential Nash equilibria of the game.

## 5 Numerical Examples

We now present two numerical examples that illustrate the performance of both the limiting ODE and LSS. The first is a zero-sum game played over a function on ℝ², which allows us to observe the behavior of the limiting ODE around both local Nash and non-Nash equilibria. In the second example we use LSS to train a small generative adversarial network (GAN) to learn a mixture of eight Gaussians. Further numerical experiments and comments are provided in Appendix D.

### 5.1 2-D example

For the first example, we consider the game based on the following function on ℝ²:

 f(x, y) = e^{−0.01(x² + y²)}((0.3x² + y)² + (0.5y² + x)²).

This function is a fourth-order polynomial scaled by an exponential to ensure that it is bounded. The gradient dynamics associated with this function have four LASE. By evaluating the Jacobian of ω at these points we find that three of the LASE are local Nash equilibria. These are denoted by 'x' in Figure 1. The fourth LASE is a non-Nash equilibrium, denoted with a star. In Figure 1, we plot the sample paths of both simGD and our limiting ODE from the same initial positions, shown with red dots. We clearly see that simGD converges to any of the four LASE, depending on the initialization. Our algorithm, on the other hand, converges only to the local Nash equilibria. When initialized close to the non-Nash equilibrium it diverges from the simGD path and ends up converging to a local Nash equilibrium.

This numerical example also allows us to study the behavior of our algorithm around LASE. By focusing on a local Nash equilibrium, as in Figure 1B, we observe that the limiting ODE approaches it directly even when simGD displays oscillatory behaviors. This empirically validates the second part of Theorem 3.

In Figure 2 we empirically validate that LSS asymptotically tracks the limiting ODE. When the fast timescale has not yet converged, the process tracks the gradient dynamics. Once it has converged, however, we see that the process closely tracks the limiting ODE, which leads it to converge only to the local Nash equilibria. This behavior highlights an issue with the two-timescale approach: since the non-Nash equilibria of the gradient dynamics are saddle points for the new dynamics, they can slow down convergence. However, the process will eventually escape such points (Benaïm, 1999).

In our numerical experiments we let . We also make use of a damping function as described in Section 3. The limiting ODE is therefore given by:

 ˙z=−(ω(z)+e−ξ2||v||2v),

where . For the two-timescale process, since there is no noise we use constant step sizes and the following update:

 z_{n+1} = z_n - \gamma_1\left(\omega(z_n) + e^{-\xi_2\|J^T(z_n)v_n\|^2}\, J^T(z_n)v_n\right)
 v_{n+1} = v_n - \gamma_2\left(J^T(z_n)J(z_n)v_n + \lambda(z_n)v_n - J^T(z_n)\omega(z_n)\right),

where \gamma_1 and \gamma_2 are constant step sizes for the slow and fast processes, respectively, and \xi_2 and the damping function \lambda(\cdot) are as described above.
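The constant-step-size update above can be sketched as follows, reusing the toy function from this section. The Jacobian J(z) of ω is approximated by finite differences, and the values of γ1, γ2, ξ2, and the constant damping value λ are illustrative placeholders, not the paper's choices.

```python
import numpy as np

def f(x, y):
    return np.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

def omega(z, eps=1e-5):
    # Gradient dynamics (df/dx, -df/dy) via central differences.
    x, y = z
    return np.array([(f(x + eps, y) - f(x - eps, y)) / (2 * eps),
                     -(f(x, y + eps) - f(x, y - eps)) / (2 * eps)])

def jac(z, eps=1e-4):
    # Finite-difference Jacobian of omega.
    n = len(z)
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (omega(z + dz) - omega(z - dz)) / (2 * eps)
    return J

def lss_step(z, v, g1=1e-3, g2=1e-2, xi2=1.0, lam=1.0):
    # One iteration of the two-timescale update with constant step sizes:
    # the slow z-update uses the damped Jacobian-vector product, while the
    # fast v-update tracks the solution of (J^T J + lam I) v = J^T omega.
    J, w = jac(z), omega(z)
    Jv = J.T @ v
    z_new = z - g1 * (w + np.exp(-xi2 * (Jv @ Jv)) * Jv)
    v_new = v - g2 * (J.T @ J @ v + lam * v - J.T @ w)
    return z_new, v_new
```

For fixed z, the v-update is a linear iteration whose fixed point solves (JᵀJ + λI)v = Jᵀω, which gives a simple sanity check on the implementation.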

We now train a generative adversarial network with LSS. Both the discriminator and generator are fully connected neural networks with four hidden layers of 16 neurons each. The tanh activation function is used since it satisfies the smoothness assumptions on our functions. For the latent space, we use a 16-dimensional Gaussian with zero mean and fixed covariance. The ground truth distribution is a mixture of eight Gaussians with their modes uniformly spaced around the unit circle, each with the same fixed covariance.
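For reference, the ground-truth distribution can be sampled as below; the standard deviation `sigma` is an assumed placeholder, since the exact covariance used in the experiments is not restated here.

```python
import numpy as np

def sample_ring_mixture(n, sigma=0.05, seed=None):
    # Mixture of eight Gaussians whose modes are uniformly spaced
    # around the unit circle; sigma is an assumed scale, not the
    # paper's exact covariance.
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(8) / 8.0
    modes = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (8, 2)
    idx = rng.integers(0, 8, size=n)                            # random mode per sample
    return modes[idx] + sigma * rng.standard_normal((n, 2))
```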

In Figure 3, we show the progression of the generator at four successive stages of training for a GAN initialized with the same weights and biases and then trained with (A) simGD and (B) LSS. We see empirically that, in this example, LSS converges to the true distribution while simGD quickly suffers mode collapse, showing how the adjusted dynamics can lead to convergence to better equilibria. Further numerical experiments are shown in Appendix D.

We caution that convergence rate per se is not necessarily a reasonable metric on which to compare performance in the GAN setting or in other game-theoretic settings. Competing algorithms may converge faster than our method when used to train GANs, but only because the competitors could be converging quickly to a non-Nash equilibrium, which is not desirable. Indeed, the optimal solution for GANs is known to be a local Nash equilibrium (Goodfellow et al., 2014; Nagarajan and Kolter, 2017). LSS may initially move towards a non-Nash equilibrium, but subsequently escapes the neighborhood of such points before converging. This leads to a slower convergence rate but a better quality solution.

## 6 Discussion

We have introduced local symplectic surgery, a new two-timescale algorithm for finding the local Nash equilibria of two-player zero-sum continuous games. We have established that this comes with the guarantee that the only hyperbolic critical points to which it can converge are the local Nash equilibria of the underlying game. This significantly improves upon previous methods for finding such points, which, as shown in Appendix B, cannot give such guarantees. We have analyzed the asymptotic properties of the proposed algorithm and have shown that it can be implemented efficiently. Altogether, these results show that the proposed algorithm yields limit points with game-theoretic relevance while ruling out oscillations near those equilibria and having a per-iteration complexity similar to that of existing methods which do not come with the same guarantees. Our numerical examples allow us to empirically observe these properties.

It is important to emphasize that our analysis has been limited to neighborhoods of equilibria; in principle, the proposed algorithm can converge to limit cycles elsewhere in the space. These are hard to rule out completely. Moreover, some of these limit cycles may actually have game-theoretic relevance (Hommes and Ochea, 2012; Benaim and Hirsch, 1997). Another limitation of our analysis is that we have assumed the existence of local Nash equilibria in games. Showing that they exist and finding them is very hard to do in general. Our algorithm will converge to local Nash equilibria, but may diverge when the game does not admit equilibria or when the algorithm never enters the region of attraction of any equilibrium. Thus, divergence of our algorithm is not a certificate that no equilibria exist. Such caveats, however, are the same as those for other gradient-based approaches for finding local Nash equilibria.

Another drawback to our approach is the use of second-order information. Though the two-timescale approximation does not need access to the full Jacobian of the gradient dynamics, the update does involve computing Jacobian-vector products. This is similar to other recently proposed approaches but will be inherently slower to compute than pure first- or zeroth-order methods. Bridging this gap while retaining similar theoretical properties remains an interesting avenue of further research.

In all, we have shown that some of the inherent flaws to gradient-based methods in zero-sum games can be overcome by designing our algorithms to take advantage of the game-theoretic setting. Indeed, by using the structure of local Nash equilibria we designed an algorithm that has significantly stronger theoretical support than existing approaches.

## References

• Balduzzi et al. (2018) D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning, 2018.
• Banerjee and Peng (2003) B. Banerjee and J. Peng. Adaptive policy gradient in multiagent learning. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003.
• Basar and Olsder (1998) T. Basar and G. Olsder. Dynamic Noncooperative Game Theory. Society for Industrial and Applied Mathematics, 2 edition, 1998.
• Benaïm (1999) M. Benaïm. Dynamics of stochastic approximation algorithms. In Séminaire de Probabilités XXXIII, pages 1–68. Springer Berlin Heidelberg, 1999.
• Benaïm and Hirsch (1995) M. Benaïm and M. Hirsch. Dynamics of Morse-Smale urn processes. Ergodic Theory and Dynamical Systems, 15(6), 12 1995.
• Benaim and Hirsch (1997) M. Benaim and M. Hirsch. Learning processes, mixed equilibria and dynamical systems arising from repeated games. Games and Economic Behavior, 1997.
• Benaïm and Hirsch (1999) M. Benaïm and M. Hirsch. Mixed equilibria and dynamical systems arising from fictitious play in perturbed games. Games and Economic Behavior, 29:36–72, 1999.
• Borkar (2008) V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press, 2008.
• Daskalakis et al. (2009) C. Daskalakis, P. Goldberg, and C. Papadimitriou. The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39:195–259, 02 2009.
• Cesa-Bianchi and Lugosi (2006) N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, UK, 2006.
• Daskalakis et al. (2017) C. Daskalakis, A. Ilyas, V. Syrgkanis, and H. Zeng. Training GANs with Optimism. arxiv:1711.00141, 2017.
• Foerster et al. (2017) J. Foerster, R. Y. Chen, M. Al-Shedivat, S. Whiteson, P. Abbeel, and I. Mordatch. Learning with opponent-learning awareness. CoRR, abs/1709.04326, 2017.
• Gidel et al. (2018) G. Gidel, H. Berard, P. Vincent, and S. Lacoste-Julien. A variational inequality perspective on generative adversarial nets. CoRR, 2018. URL http://arxiv.org/abs/1802.10551.
• Giordano et al. (2018) R. Giordano, T. Broderick, and M. I. Jordan. Covariances, robustness, and variational Bayes. Journal of Machine Learning Research, 2018.
• Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. arxiv:1406.2661, 2014.
• Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30, 12 2017.
• Hommes and Ochea (2012) C. H. Hommes and M. I. Ochea. Multiple equilibria and limit cycles in evolutionary games with logit dynamics. Games and Economic Behavior, 74(1):434–441, 2012.
• Jordan (2018) M. I. Jordan. Artificial intelligence: The revolution hasn’t happened yet. Medium, 2018.
• Mazumdar and Ratliff E. Mazumdar and L. J. Ratliff. On the convergence of gradient-based learning in continuous games. ArXiv e-prints.
• Mertikopoulos et al. (2018a) P. Mertikopoulos, H. Zenati, B. Lecouat, C. Foo, V. Chandrasekhar, and G. Piliouras. Mirror descent in saddle-point problems: Going the extra (gradient) mile. CoRR, abs/1807.02629, 2018a.
• Mertikopoulos et al. (2018b) P. Mertikopoulos, C. H. Papadimitriou, and G. Piliouras. Cycles in adversarial regularized learning. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, 2018b.
• Mescheder et al. (2017) L. M. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs. In Advances in Neural Information Processing Systems 30, 2017.
• Nagarajan and Kolter (2017) V. Nagarajan and Z. Kolter. Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30. 2017.
• Nisan et al. (2007) N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani. Algorithmic Game Theory. Cambridge University Press, Cambridge, UK, 2007.
• Ratliff et al. (2013) L. J. Ratliff, S. A. Burden, and S. S. Sastry. Characterization and computation of local Nash equilibria in continuous games. In Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing, pages 917–924, Oct 2013.
• Sastry (1999) S. S. Sastry. Nonlinear Systems. Springer New York, 1999.
• Xu et al. (2009) H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485–1510, December 2009. ISSN 1532-4435.
• Yang (2011) L. Yang. Active learning with a drifting distribution. In Advances in Neural Information Processing Systems. 2011.

In this section we analyze a slightly different version of (4) that allows us to remove the assumption that \omega(z) is never an eigenvector of the adjustment matrix J^T(z)\left(J^T(z)J(z)+\lambda(z)I\right)^{-1}J^T(z) with associated eigenvalue -1. Though this assumption is relatively mild, since intuitively it will be very rare that \omega(z) is exactly an eigenvector of the adjustment matrix, we show that by adding a third term to (4) we can remove it entirely while retaining our theoretical guarantees. The new dynamics are constructed by adding a time-varying term that goes to zero only when \omega(z) is zero. This guarantees that the only critical points of the limiting dynamics are the critical points of \omega. The analysis of these dynamics is slightly more involved and requires generalizations of the definition of a LASE to handle time-varying dynamics. We first define an equilibrium of a potentially time-varying dynamical system \dot{z} = -h(z,t) as a point z^* such that h(z^*,t) = 0 for all t \geq 0. We can now generalize the definition of a LASE to the time-varying setting.
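To make the role of this assumption concrete, the following short derivation (a sketch in the notation of Section 3, where the adjusted dynamics take the form h(z) = ω(z) plus an adjustment term) shows why a spurious critical point of h requires exactly this eigenvector condition:

```latex
% A critical point of the adjusted dynamics h with \omega(z) \neq 0
% requires the adjustment term to cancel \omega(z) exactly:
h(z) = 0, \quad \omega(z) \neq 0
\;\Longleftrightarrow\;
J^T(z)\bigl(J^T(z)J(z) + \lambda(z) I\bigr)^{-1} J^T(z)\,\omega(z) = -\omega(z),
% i.e., \omega(z) is an eigenvector of the adjustment matrix with
% associated eigenvalue -1. Ruling this out ensures the critical
% points of h coincide with those of \omega.
```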

A strategy z^* is a locally uniformly asymptotically stable equilibrium of the time-varying continuous-time dynamics \dot{z} = -h(z,t) if z^* is an equilibrium of h, the linearization D_z h(z,t) at z^* is constant in t, and \mathrm{Re}(\lambda) > 0 for all \lambda \in \mathrm{spec}(D_z h(z^*)).

Locally uniformly asymptotically stable equilibria, under this definition, also have the property that they are locally exponentially attracting under the flow. Further, since the linearization around a locally uniformly asymptotically stable equilibrium is time-invariant, we can still invoke converse Lyapunov theorems like those presented in Sastry (1999) when deriving non-asymptotic bounds.

Having defined equilibria and a generalization of LASE for time-varying systems, we now introduce a time-varying version of the continuous-time ODE presented in Section 3 which allows us to remove the assumption that \omega(z) is never an eigenvector of the adjustment matrix with associated eigenvalue -1. The limiting ODE is given by:

 \dot{z} = -h_{TV}(z,t) = -\left(h(z) + g_{TV}(z,t)\right), \quad (8)

where h is as described in Section 3 and g_{TV} can be decomposed as:

 g_{TV}(z,t) = \lambda_1(z)\,u(t),

where \lambda_1 satisfies:

• \lambda_1(z) \geq 0 for all z.

• \lambda_1(z) = 0 if and only if \omega(z) = 0.

• \sup_z \lambda_1(z) < \infty,

and where u satisfies:

• There exists c < \infty such that \|u(t)\| \leq c for all t \geq 0.

• u is not constant in time: D_t u(t) \not\equiv 0.

Thus we require that the time-varying adjustment term be bounded and equal to zero only when \omega(z) = 0. Most importantly, we require that for any z that is not a critical point of \omega, g_{TV} must be changing in time. An example of a g_{TV} that satisfies these requirements is:

 g_{TV}(z,t) = \xi_1\left(1 - e^{-\xi_2\|\omega(z)\|^2}\right)\cos(t)\, u_0, \quad (9)

for constants \xi_1, \xi_2 > 0 and a fixed vector u_0 \neq 0.
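This example term can be sketched directly; the values of ξ1, ξ2, and u0 below are illustrative placeholders, and the function is parametrized by the value of ω(z) rather than z itself.

```python
import numpy as np

def g_tv(omega_z, t, xi1=1.0, xi2=1.0, u0=np.array([1.0, 0.0])):
    # Time-varying adjustment term of Eq. (9), written as a function of
    # omega(z) (the gradient dynamics at z) and time t. The constants
    # xi1, xi2 and the direction u0 are assumed placeholder values.
    return xi1 * (1.0 - np.exp(-xi2 * (omega_z @ omega_z))) * np.cos(t) * u0
```

By construction the term vanishes exactly when ω(z) = 0, is bounded by ξ1·‖u0‖, and is non-constant in t away from critical points.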

These conditions, as the next theorem shows, allow us to guarantee that the only locally uniformly asymptotically stable equilibria of (8) are the differential Nash equilibria of the game.

Under Assumption 1, the continuous-time dynamical system (8) satisfies:

• z^* is a locally uniformly asymptotically stable equilibrium of (8) if and only if z^* is a DNE of the game.

• If z^* is an equilibrium point of (8), then the Jacobian of h_{TV} at z^* is time-invariant and has real eigenvalues.

We first show that:

 h_{TV}(z,t) \equiv 0 \;\; \forall t \geq 0 \iff \omega(z) = 0.

By construction, \omega(z) = 0 implies h_{TV}(z,t) \equiv 0. To show the converse, we assume that there exists a z such that h_{TV}(z,t) \equiv 0 for all t \geq 0 but \omega(z) \neq 0. This implies that:

 -g_{TV}(z,t) = \omega(z) + J^T(z)\left(J^T(z)J(z) + \lambda(z)I\right)^{-1}J^T(z)\omega(z) \quad \forall t \geq 0.

Since the right-hand side is constant in t and \lambda_1(z) \neq 0 (because \omega(z) \neq 0), taking the derivative of both sides with respect to t gives us the following condition on u under our assumption:

 D_t u(t) = 0 \;\; \forall t \geq 0.

By assumption this cannot be true. Thus, we have a contradiction, and h_{TV}(z,t) \equiv 0 implies \omega(z) = 0.

Having shown that the critical points of h_{TV} are the same as those of \omega, we now note that the Jacobian of h_{TV} at critical points must be time-invariant. Following the same development as the proof of Theorem 3, the Jacobian of h_{TV} is given by:

 J_{TV}(z,t) = J(z) + J^T(z)\left(J^T(z)J(z)+\lambda(z)I\right)^{-1}J^T(z)J(z) + D_z g_{TV}(z,t).

Again, by construction, D_z g_{TV}(z,t) = 0 when \omega(z) = 0. The third term therefore disappears at critical points, and the Jacobian is time-invariant. The proof now follows from that of Theorem 3.

We have shown that adding a time-varying term to the original adjusted dynamics allows us to remove the assumption that the adjustment term is never exactly -\omega(z). As in Section 3, we can now construct a two-timescale process that asymptotically tracks (8). We assume that u is a deterministic function of the trajectory of an ODE:

 \dot{\theta} = -h_3(\theta),

with a fixed initial condition \theta_0 such that u(t) = u(\theta(t)). We assume that h_3 is Lipschitz-continuous and that u is continuous and bounded. Note that under our assumptions, u(\theta(t)) is bounded for all t \geq 0.

The form of g_{TV} introduced in (9) can be written as g_{TV}(z,t) = \lambda_1(z)\theta_1(t)u_0, where \theta satisfies the linear dynamical system:

 \dot{\theta} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\theta,

with \theta(0) = (1, 0)^T, so that \theta_1(t) = \cos(t).
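As a sanity check on this representation, a standard RK4 integration (an illustrative choice of integrator) recovers θ₁(t) ≈ cos(t):

```python
import numpy as np

# Linear oscillator dot(theta) = A theta with theta(0) = (1, 0);
# its first coordinate reproduces cos(t), the second sin(t).
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def integrate_theta(t_end, dt=1e-3):
    # Classical fourth-order Runge-Kutta integration of the oscillator.
    theta = np.array([1.0, 0.0])
    for _ in range(int(round(t_end / dt))):
        k1 = A @ theta
        k2 = A @ (theta + 0.5 * dt * k1)
        k3 = A @ (theta + 0.5 * dt * k2)
        k4 = A @ (theta + dt * k3)
        theta = theta + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return theta
```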

Given this setup, the continuous-time dynamics can be written as:

 \dot{\theta} = -h_3(\theta), \qquad \dot{z} = -h_4(z,\theta), \quad (10)

where:

 h_4(z,\theta) = \frac{1}{2}\left(\omega(z) + J^T(z)\left(J^T(z)J(z)+\lambda(z)I\right)^{-1}J^T(z)\omega(z)\right) + \lambda_1(z)u(\theta).

Having made this further assumption on the time-varying term, we now introduce the two-timescale process that asymptotically tracks (10). This process is given by:

 \theta_{n+1} = \theta_n - a_n h_3(\theta_n), \quad z_{n+1} = z_n - a_n \hat{h}_5(z_n, v_n, \theta_n), \quad v_{n+1} = v_n - b_n \hat{h}_6(z_n, v_n), \quad (11)

where

 \mathbb{E}[\hat{h}_5(z,v,\theta)] = h_5(z,v,\theta) := \tfrac{1}{2}\left(\omega(z) + J^T(z)v\right) + \lambda_1(z)u(\theta)
 \mathbb{E}[\hat{h}_6(z,v)] = h_6(z,v) := J^T(z)J(z)v - J^T(z)\omega(z) + \lambda(z)v.

Proceeding as in Section 3, we write \hat{h}_5 = h_5 + M^{(5)} and \hat{h}_6 = h_6 + M^{(6)}, where M^{(5)} and M^{(6)} are martingale difference sequences satisfying Assumption 3. We note that the process \{\theta_n\} is deterministic.
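A deterministic sketch of one iteration of (11), with u(θ) = θ₁u₀ from the oscillator representation above, finite-difference derivatives on the toy function of Section 5, and illustrative placeholder values for λ, λ₁, u₀, and the step sizes:

```python
import numpy as np

def f(x, y):
    return np.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

def omega(z, eps=1e-5):
    x, y = z
    return np.array([(f(x + eps, y) - f(x - eps, y)) / (2 * eps),
                     -(f(x, y + eps) - f(x, y - eps)) / (2 * eps)])

def jac(z, eps=1e-4):
    n = len(z)
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (omega(z + dz) - omega(z - dz)) / (2 * eps)
    return J

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # oscillator generating cos(t)
U0 = np.array([1.0, 0.0])                # assumed fixed direction u_0

def lam1(z, xi1=1.0, xi2=1.0):
    # Damping lambda_1(z): zero iff omega(z) = 0 and bounded by xi1.
    w = omega(z)
    return xi1 * (1.0 - np.exp(-xi2 * (w @ w)))

def tvlss_step(theta, z, v, a=1e-3, b=1e-2, lam=1.0):
    # One deterministic iteration of the two-timescale process (11):
    # h3 drives the oscillator, h5 the slow z-update, h6 the fast v-update.
    J, w = jac(z), omega(z)
    h3 = -A @ theta  # dot(theta) = A theta, so h3(theta) = -A theta
    h5 = 0.5 * (w + J.T @ v) + lam1(z) * theta[0] * U0
    h6 = J.T @ J @ v - J.T @ w + lam * v
    return theta - a * h3, z - a * h5, v - b * h6
```

The small step size for θ keeps the oscillator (whose flow preserves ‖θ‖) close to the unit circle over the horizon of the sketch.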

This two-timescale process gives rise to the time-varying version of local symplectic surgery (TVLSS) outlined in Algorithm 2.