Competitive Gradient Descent

We introduce a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. Our method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Using numerical experiments and rigorous analysis, we provide a detailed comparison to methods based on optimism and consensus and show that our method avoids making any unnecessary changes to the gradient dynamics while achieving exponential (local) convergence for (locally) convex-concave zero sum games. Convergence and stability properties of our method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. In our numerical experiments on non-convex-concave problems, existing methods are prone to divergence and instability due to their sensitivity to interactions among the players, whereas we never observe divergence of our algorithm. The ability to choose larger stepsizes furthermore allows our algorithm to achieve faster convergence, as measured by the number of model evaluations.

1 Introduction

Competitive optimization: Whereas traditional optimization is concerned with a single agent trying to optimize a cost function, competitive optimization extends this problem to the setting of multiple agents each trying to minimize their own cost function, which in general depends on the actions of all agents. The present work deals with the case of two such agents:

min_{x ∈ R^m} f(x, y),    min_{y ∈ R^n} g(x, y)     (1)

for two functions f, g: R^m × R^n → R.
In single-agent optimization, the solution of the problem is the minimizer of the cost function. In competitive optimization, the right definition of a solution is less obvious, but often one is interested in computing Nash or strategic equilibria: pairs of strategies such that no player can decrease their cost by unilaterally changing their strategy. If f and g are not convex, finding a global Nash equilibrium is typically impossible and instead we hope to find a "good" local Nash equilibrium.
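For reference, the local version of this solution concept can be written out formally; the following display is a standard formalization of the definition just given in words (added here for convenience, not quoted from the original text):

\[
  (\bar{x}, \bar{y}) \text{ is a local Nash equilibrium if } \quad
  f(\bar{x}, \bar{y}) \le f(x, \bar{y}) \ \text{ and } \ g(\bar{x}, \bar{y}) \le g(\bar{x}, y)
  \quad \text{for all } (x, y) \text{ in a neighbourhood of } (\bar{x}, \bar{y}).
\]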

The benefits of competition: While competitive optimization problems arise naturally in mathematical economics and game/decision theory (Nisan et al., 2007), they also provide a highly expressive and transparent language to formulate algorithms in a wide range of domains. In optimization (Bertsimas et al., 2011) and statistics (Huber and Ronchetti, 2009) it has long been observed that competitive optimization is a natural way to encode robustness requirements of algorithms. More recently, researchers in machine learning have been using multi-agent optimization to design highly flexible objective functions for reinforcement learning (Liu et al., 2016; Pfau and Vinyals, 2016; Pathak et al., 2017; Wayne and Abbott, 2014; Vezhnevets et al., 2017) and generative models (Goodfellow et al., 2014). We believe that this approach still has a lot of untapped potential, but its full realization depends crucially on the development of efficient and reliable algorithms for the numerical solution of competitive optimization problems.

Gradient descent/ascent and the cycling problem: For differentiable objective functions, the most naive approach to solving (1) is gradient descent ascent (GDA), whereby both players independently change their strategy in the direction of steepest descent of their cost function. Unfortunately, this procedure features oscillatory or divergent behavior even in the simple case of a bilinear game (f(x, y) = x^⊤ y = −g(x, y); see Figure 2). In game-theoretic terms, GDA lets both players choose their new strategy optimally with respect to the last move of the other player. Thus, the cycling behaviour of GDA is not surprising: it is the analogue of "Rock! Paper! Scissors! Rock! Paper! Scissors! Rock! Paper!…" in the eponymous hand game. While gradient descent is a reliable basic workhorse for single-agent optimization, GDA cannot play the same role for competitive optimization. At the moment, the lack of such a workhorse greatly hinders the broader adoption of methods based on competition.

Existing works: Most existing approaches to stabilizing GDA follow one of three lines of attack.
In the special case g = −f, the problem can be written as a minimization problem min_x F(x), where F(x) := max_y f(x, y). For certain structured problems, Gilpin et al. (2007) use techniques from convex optimization (Nesterov, 2005) to minimize the implicitly defined F. For general problems, the two-scale update rules proposed in Goodfellow et al. (2014); Heusel et al. (2017); Metz et al. (2016) can be seen as an attempt to approximate F and its gradients.
In GDA, players pick their next strategy based on the last strategy picked by the other players. Methods based on follow-the-regularized-leader (Shalev-Shwartz and Singer, 2007; Grnarova et al., 2017), fictitious play (Brown, 1951), predictive updates (Yadav et al., 2017), opponent learning awareness (Foerster et al., 2018), and optimism (Rakhlin and Sridharan, 2013; Daskalakis et al., 2017; Mertikopoulos et al., 2019) propose more sophisticated heuristics that the players could use to predict each other's next move. Algorithmically, many of these methods can be considered variations of the extragradient method (Korpelevich, 1977) (see also Facchinei and Pang (2003, Chapter 12)). Finally, some methods directly modify the gradient dynamics, either by promoting convergence through gradient penalties (Mescheder et al., 2017), or by attempting to disentangle convergent potential parts from rotational Hamiltonian parts of the vector field (Balduzzi et al., 2018; Letcher et al., 2019).

Our contributions: Our main conceptual objection to most existing methods is that they lack a clear game-theoretic motivation, but instead rely on the ad-hoc introduction of additional assumptions, modifications, and model parameters.
Their main practical shortcoming is that, in order to avoid divergence, the stepsize has to be chosen inversely proportional to the magnitude of the interaction between the two players (as measured by the mixed derivatives D²_{xy}f and D²_{yx}g).
On the one hand, the small stepsize results in slow convergence. On the other hand, a stepsize small enough to prevent divergence will not be known in advance in most problems. Instead it has to be discovered through tedious trial and error, which is further aggravated by the lack of a good diagnostic for improvement in multi-agent optimization (which is given by the objective function in single agent optimization).
We alleviate the above-mentioned problems by introducing a novel algorithm, competitive gradient descent (CGD), that is obtained as a natural extension of gradient descent to the competitive setting. Recall that in the single-player setting, the gradient descent update is obtained as the optimal solution to a regularized linear approximation of the cost function. In the same spirit, the update of CGD is given by the Nash equilibrium of a regularized bilinear approximation of the underlying game. The use of a bilinear (as opposed to linear) approximation lets the local approximation preserve the competitive nature of the problem, significantly improving stability. We prove (local) convergence results for this algorithm in the case of (locally) convex-concave zero-sum games. We also show that stronger interactions between the two players only improve convergence, without requiring an adaptation of the stepsize. In comparison, the existing methods need to reduce the stepsize to match the increase of the interactions in order to avoid divergence, which we illustrate on a series of polynomial test cases considered in previous works.

We begin our numerical experiments by trying to use a GAN on a bimodal Gaussian mixture model. Even in this simple example, trying five different (constant) stepsizes under RMSProp, the existing methods diverge. The typical solution would be to decay the learning rate. However, even with a constant learning rate, CGD succeeds with all these stepsize choices to approximate the main features of the target distribution. In fact, throughout our experiments we never saw CGD diverge. In order to measure the convergence speed more quantitatively, we next consider a nonconvex matrix estimation problem, measuring computational complexity in terms of the number of gradient computations performed. We observe that all methods show improved speed of convergence for larger stepsizes, with CGD roughly matching the convergence speed of optimistic gradient descent (Daskalakis et al., 2017) at the same stepsize. However, as we increase the stepsize, the other methods quickly start diverging, whereas CGD continues to improve, thus attaining significantly better convergence rates (more than two times as fast as the other methods in the noiseless case, with the ratio increasing for larger and more difficult problems). For small stepsizes or games with weak interactions, on the other hand, CGD automatically invests less computational time per update, thus gracefully transitioning to a cheap correction to GDA at minimal computational overhead. We believe that the robustness of CGD makes it an excellent candidate for the fast and simple training of machine learning systems based on competition, hopefully helping them reach the same level of automatization and ease-of-use that is already standard in minimization-based machine learning.

2 Competitive gradient descent

We propose a novel algorithm, which we call competitive gradient descent (CGD), for the solution of competitive optimization problems of the form (1), where we have access to function evaluations, gradients, and Hessian-vector products of the objective functions. (Here and in the following, unless otherwise mentioned, all derivatives are evaluated at the current iterate (x_k, y_k).)

for k = 0, …, N − 1 do
       x_{k+1} = x_k − η (Id − η² D²_{xy}f D²_{yx}g)⁻¹ (∇_x f − η D²_{xy}f ∇_y g);
       y_{k+1} = y_k − η (Id − η² D²_{yx}g D²_{xy}f)⁻¹ (∇_y g − η D²_{yx}g ∇_x f);

return (x_N, y_N);
Algorithm 1 Competitive Gradient Descent (CGD)

How to linearize a game: To motivate this algorithm, we remind ourselves that gradient descent with stepsize η applied to a function f can be written as

x_{k+1} = argmin_{x ∈ R^m} (x − x_k)^⊤ ∇_x f(x_k) + (1/(2η)) ‖x − x_k‖²     (2)

This models a (single) player solving a local linear approximation of the (minimization) game, subject to a quadratic penalty that expresses her limited confidence in the global accuracy of the model. The natural generalization of this idea to the competitive case should then be given by the two players solving a local approximation of the true game, both subject to a quadratic penalty that expresses their limited confidence in the accuracy of the local approximation.
In order to implement this idea, we need to find the appropriate way to generalize the linear approximation in the single-agent setting to the competitive setting: how to linearize a game?
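As a quick sanity check (this one-line derivation is added for the reader's convenience and is not part of the original text), setting the derivative of the regularized model in (2) to zero recovers the familiar gradient descent step:

\[
  \nabla_x f(x_k) + \tfrac{1}{\eta}\,(x_{k+1} - x_k) = 0
  \quad \Longleftrightarrow \quad
  x_{k+1} = x_k - \eta\, \nabla_x f(x_k).
\]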

Linear or Multilinear: GDA answers the above question by choosing a linear approximation of f and g. This seemingly natural choice has the flaw that linear functions cannot express any interaction between the two players and are thus unable to capture the competitive nature of the underlying problem. From this point of view it is not surprising that the convergent modifications of GDA are, implicitly or explicitly, based on higher-order approximations (see also (Li et al., 2017)). An equally valid generalization of the linear approximation in the single-player setting is to use a bilinear approximation in the two-player setting. Since the bilinear approximation is the lowest-order approximation that can capture some interaction between the two players, we argue that the natural generalization of gradient descent to competitive optimization is not GDA, but rather the update rule (x_{k+1}, y_{k+1}) = (x_k, y_k) + (x, y), where (x, y) is a Nash equilibrium of the following game. (We could alternatively use the joint penalty (‖x‖² + ‖y‖²)/(2η) for both players, without changing the solution.)

min_{x ∈ R^m}  x^⊤ ∇_x f + x^⊤ D²_{xy}f y + y^⊤ ∇_y f + (1/(2η)) x^⊤ x
min_{y ∈ R^n}  y^⊤ ∇_y g + y^⊤ D²_{yx}g x + x^⊤ ∇_x g + (1/(2η)) y^⊤ y     (3)

Indeed, the (unique) Nash equilibrium of the Game (3) can be computed in closed form.

Theorem 2.1.

Among all (possibly randomized) strategies with finite first moment, the only Nash equilibrium of the Game (3) is given by

x = −η (Id − η² D²_{xy}f D²_{yx}g)⁻¹ (∇_x f − η D²_{xy}f ∇_y g)     (4)
y = −η (Id − η² D²_{yx}g D²_{xy}f)⁻¹ (∇_y g − η D²_{yx}g ∇_x f)     (5)

given that the matrix inverses in the above expression exist. (We note that the matrix inverses exist for all but one value of η, and for all η in the case of a zero-sum game.)

Proof.

Let X and Y be randomized strategies. By subtracting and adding their expectations E[X] and E[Y], and taking expectations, we can rewrite the game as

(6)
(7)

Thus, the objective value for both players can always be improved by decreasing the variance while keeping the expectation the same, meaning that the optimal value will always (and only) be achieved by a deterministic strategy. We can then replace the random variables X and Y with deterministic strategies x and y, set the derivative of the first expression with respect to x and of the second expression with respect to y to zero, and solve the resulting system of two equations for the Nash equilibrium (x, y). ∎

According to Theorem 2.1, the Game (3) has exactly one optimal pair of strategies, which is deterministic. Thus, we can use these strategies as an update rule, generalizing the idea of local optimality from the single-agent to the multi-agent setting and obtaining Algorithm 1.
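To make the update concrete, here is a minimal Julia sketch of Algorithm 1 for a zero-sum game (g = −f) with small, explicit Hessian blocks; the bilinear test objective, the parameter values, and all names are illustrative assumptions, not the authors' implementation (which is matrix-free, see Section 4).

using LinearAlgebra

# Zero-sum test game f(x, y) = x' * A * y (so g = -f); all derivatives are explicit here.
A = [1.0 2.0; -0.5 1.0]
grad_x(x, y) = A * y          # ∇_x f
grad_y(x, y) = A' * x         # ∇_y f
Dxy, Dyx = A, A'              # mixed Hessian blocks D²_xy f and D²_yx f

function cgd(x, y; η = 0.2, iters = 100)
    for _ in 1:iters
        gx, gy = grad_x(x, y), grad_y(x, y)
        # Closed-form Nash equilibrium of the regularized bilinear game (zero-sum case):
        Δx = -η * ((I + η^2 * Dxy * Dyx) \ (gx + η * Dxy * gy))
        Δy =  η * ((I + η^2 * Dyx * Dxy) \ (gy - η * Dyx * gx))
        x, y = x + Δx, y + Δy
    end
    return x, y
end

cgd([1.0, 1.0], [1.0, 1.0])   # converges towards the unique equilibrium (0, 0) of this bilinear game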

What I think that they think that I think … that they do: Another game-theoretic interpretation of CGD follows from the observation that its update rule can be written as

(x, y) = − [ Id          η D²_{xy}f ]⁻¹ [ η ∇_x f ]
           [ η D²_{yx}g  Id         ]   [ η ∇_y g ]     (8)

Applying the expansion M⁻¹ = lim_{N→∞} Σ_{n=0}^{N} (Id − M)ⁿ (valid whenever ‖Id − M‖ < 1) to the matrix inverse in the above equation, we observe that the first partial sum (N = 0) corresponds to the optimal strategy if the other player's strategy stays constant (GDA). The second partial sum (N = 1) corresponds to the optimal strategy if the other player thinks that the other player's strategy stays constant (LCGD, see Figure 1). The third partial sum (N = 2) corresponds to the optimal strategy if the other player thinks that the other player thinks that the other player's strategy stays constant, and so forth, until the Nash equilibrium is recovered in the limit. For small enough η, we could use the above series expansion to solve for (x, y), which is known as Richardson iteration and would recover high-order LOLA (Foerster et al., 2018). However, expressing it as a matrix inverse will allow us to use optimal Krylov subspace methods to obtain far more accurate solutions with fewer gradient evaluations.
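The following short Julia sketch illustrates this correspondence numerically: truncating the Neumann series of the block matrix in (8) after N terms yields the GDA update for N = 0, the LCGD/first-order LOLA update for N = 1, and approaches the exact CGD update as N grows. The toy problem and all names are illustrative assumptions, not the authors' code.

using LinearAlgebra

# Partial sums of the Neumann series for the block inverse in (8), zero-sum case g = -f.
function truncated_update(gx, gy, Dxy, Dyx, η, N)
    m, n = length(gx), length(gy)
    B = vcat(hcat(zeros(m, m), η * Dxy),
             hcat(-η * Dyx, zeros(n, n)))   # the block matrix in (8) equals I + B when g = -f
    b = -η * vcat(gx, -gy)                  # right-hand side -(η ∇_x f, η ∇_y g)
    Δ, term = zero(b), copy(b)
    for _ in 0:N                            # Δ = Σ_{k=0}^{N} (-B)^k b ≈ (I + B)⁻¹ b
        Δ += term
        term = -B * term
    end
    return Δ[1:m], Δ[m+1:end]               # (x-update, y-update)
end

A = [1.0 2.0; -0.5 1.0]                     # f(x, y) = x' * A * y, evaluated at x = y = (1, 1)
gx, gy = A * [1.0, 1.0], A' * [1.0, 1.0]
Δx_gda,  _ = truncated_update(gx, gy, A, A', 0.2, 0)    # gradient descent ascent
Δx_lcgd, _ = truncated_update(gx, gy, A, A', 0.2, 1)    # LCGD / first-order LOLA
Δx_cgd,  _ = truncated_update(gx, gy, A, A', 0.2, 50)   # ≈ exact CGD update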

Rigorous results on convergence and local stability: We will now show some basic convergence results for CGD, the proofs of which we defer to the appendix. Our results are restricted to the case of a zero-sum game (f = −g), but we expect that they can be extended to games that are dominated by competition. To simplify notation, we define

(9)

We furthermore define the spectral function .

Theorem 2.2.

If f is two times differentiable with L-Lipschitz continuous mixed Hessian, and the diagonal blocks of its Hessian are bounded in operator norm, we have

In particular, we can deduce the following local stability result

Theorem 2.3.

Let (x*, y*) be a critical point (∇f(x*, y*) = 0), and assume furthermore that D²_{xx}f ⪰ 0 and D²_{yy}f ⪯ 0, with Lipschitz continuous mixed Hessian. Then there exists a neighbourhood U of (x*, y*) such that CGD started in U converges to a point in U at an exponential rate that depends only on the local values of the quantities appearing in Theorem 2.2.

The results on local stability for existing modifications of GDA, including those of Mescheder et al. (2017); Daskalakis et al. (2017); Mertikopoulos et al. (2019) (see also Liang and Stokes (2018)), all require the stepsize to be chosen inversely proportional to an upper bound on the interaction term ‖D²_{xy}f‖, and indeed we will see in our experiments that the existing methods are prone to divergence under strong interactions between the two players (large ‖D²_{xy}f‖). In contrast to these results, our convergence results only improve as the interaction between the players becomes stronger.

3 Consensus, optimism, or competition?

We will now show that many of the convergent modifications of GDA correspond to different subsets of four common ingredients. Consensus optimization (ConOpt) (Mescheder et al., 2017) penalises the players for non-convergence by adding the squared norm of the gradient at the next location to both players' loss functions (here γ > 0 is a hyperparameter). As we see in Figure 1, the resulting gradient field has two additional Hessian corrections. Balduzzi et al. (2018); Letcher et al. (2019) observe that any game can be written as the sum of a potential game (that is easily solved by GDA) and a Hamiltonian game (that is easily solved by ConOpt). Based on this insight, they propose symplectic gradient adjustment (SGA), which (in its simplest form) applies the ConOpt correction only to the skew-symmetric part of the Hessian, thus alleviating the problematic tendency of ConOpt to converge to spurious solutions.


Daskalakis et al. (2017) proposed to modify GDA as

x_{k+1} = x_k − 2η ∇_x f(x_k, y_k) + η ∇_x f(x_{k−1}, y_{k−1})     (10)
y_{k+1} = y_k + 2η ∇_y f(x_k, y_k) − η ∇_y f(x_{k−1}, y_{k−1})     (11)

which we will refer to as optimistic gradient descent ascent (OGDA). By interpreting the differences appearing in the update rule as finite-difference approximations to Hessian-vector products, we see that (to leading order) OGDA corresponds to yet another second-order correction of GDA (see Figure 1). It will also be instructive to compare the algorithms to linearized competitive gradient descent (LCGD), which is obtained by skipping the matrix inverse in CGD (which corresponds to taking only the leading-order term in the limit of small η) and also coincides with first-order LOLA (Foerster et al., 2018). As illustrated in Figure 1, these six algorithms amount to different subsets of the following four terms.

GDA:     Δx = −∇_x f
LCGD:    Δx = −∇_x f − η D²_{xy}f ∇_y f
SGA:     Δx = −∇_x f − γ D²_{xy}f ∇_y f
ConOpt:  Δx = −∇_x f − γ D²_{xy}f ∇_y f − γ D²_{xx}f ∇_x f
OGDA:    Δx ≈ −∇_x f − η D²_{xy}f ∇_y f + η D²_{xx}f ∇_x f
CGD:     Δx = −(Id + η² D²_{xy}f D²_{yx}f)⁻¹ (∇_x f + η D²_{xy}f ∇_y f)
Figure 1: The update rules Δx of the first player for (from top to bottom) GDA, LCGD, SGA, ConOpt, OGDA, and CGD in a zero-sum game (g = −f); each update is applied with stepsize η, i.e. x_{k+1} = x_k + η Δx.
  1. The gradient term −∇_x f, −∇_y g, which corresponds to the most immediate way in which the players can improve their cost.

  2. The competitive term −D²_{xy}f ∇_y f, −D²_{yx}g ∇_x g, which can be interpreted either as anticipating the other player to use the naive (GDA) strategy, or as decreasing the other player's influence (by decreasing their gradient).

  3. The consensus term ± D²_{xx}f ∇_x f, ± D²_{yy}g ∇_y g, which determines whether the players prefer to decrease their gradient (− sign, as in ConOpt) or to increase it (+ sign, as in OGDA). The former corresponds to the players seeking consensus, whereas the latter can be seen as the opposite of consensus.
    (It also corresponds to an approximate Newton method: applying a damped and regularized Newton update to the optimization problem of Player 1 would amount to choosing x_{k+1} = x_k − η(Id + η D²_{xx}f)⁻¹ ∇_x f ≈ x_k − η(∇_x f − η D²_{xx}f ∇_x f) for η‖D²_{xx}f‖ ≪ 1.)

  4. The equilibrium term (Id + η² D²_{xy}f D²_{yx}f)⁻¹, (Id + η² D²_{yx}f D²_{xy}f)⁻¹, which arises from the players solving for the Nash equilibrium. This term lets each player prefer strategies that are less vulnerable to the actions of the other player.
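As a concrete summary of these four terms and of Figure 1, the following Julia sketch assembles the first player's update for each method in a zero-sum game from explicit gradient and Hessian blocks; the toy values, names, and the exact placement of η versus γ follow the presentation above and should be read as an illustration, not as the authors' reference implementation.

using LinearAlgebra

# First-player updates of Figure 1, assembled from the four terms (zero-sum game, g = -f).
# gx, gy are ∇_x f, ∇_y f; Dxx, Dxy, Dyx are the corresponding Hessian blocks of f.
function player1_updates(gx, gy, Dxx, Dxy, Dyx; η = 0.2, γ = 1.0)
    grad        = -gx              # Term 1: gradient
    competitive = -Dxy * gy        # Term 2: competitive
    consensus   = -Dxx * gx        # Term 3: consensus (ConOpt sign; OGDA uses the opposite)
    Dict(
        "GDA"    => η * grad,
        "LCGD"   => η * (grad + η * competitive),
        "SGA"    => η * (grad + γ * competitive),
        "ConOpt" => η * (grad + γ * competitive + γ * consensus),
        "OGDA"   => η * (grad + η * competitive - η * consensus),
        "CGD"    => -η * ((I + η^2 * Dxy * Dyx) \ (gx + η * Dxy * gy))  # Term 4: equilibrium
    )
end

# Example on the bilinear game f(x, y) = α * x[1] * y[1] at x = y = (1,):
α = 3.0
player1_updates([α], [α], zeros(1, 1), fill(α, 1, 1), fill(α, 1, 1))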

Each of these is responsible for a different feature of the corresponding algorithm, which we can illustrate by applying the algorithms to three prototypical test cases considered in previous works.

  • We first consider the bilinear problem f(x, y) = αxy (see Figure 2). It is well known that GDA will fail on this problem for any value of the stepsize η. For α = 1.0, all the other methods converge exponentially towards the equilibrium, with ConOpt and SGA converging at a faster rate due to the stronger gradient correction. If we choose α = 3.0, OGDA, ConOpt, and SGA fail. The former diverges, while the latter two begin to oscillate widely. If we choose α = 6.0, all methods but CGD diverge.

  • In order to explore the effect of the consensus Term 3, we now consider the convex-concave problem f(x, y) = α(x² − y²) (see Figure 3). For α = 1.0, all algorithms converge at an exponential rate, with ConOpt converging the fastest and OGDA the slowest. The consensus-promoting term of ConOpt accelerates convergence, while the competition-promoting term of OGDA slows down the convergence. As we increase α to 3.0, OGDA and ConOpt start failing (diverging), while the remaining algorithms still converge at an exponential rate. Upon increasing α further to 6.0, all algorithms diverge.

  • We further investigate the effect of the consensus Term 3 by considering the concave-convex problem f(x, y) = α(−x² + y²) (see Figure 3). The critical point (0, 0) does not correspond to a Nash equilibrium, since both players are playing their worst possible strategy. Thus it is highly undesirable for an algorithm to converge to this critical point. However, for α = 1.0, ConOpt does converge to (0, 0), which provides an example of the consensus regularization introducing spurious solutions. The other algorithms instead diverge away towards infinity, as would be expected. In particular, we see that SGA is correcting the problematic behavior of ConOpt, while maintaining its better convergence rate in the first example. As we increase α, the radius of attraction of (0, 0) under ConOpt decreases, and thus ConOpt diverges from the chosen starting point as well.

The first experiment shows that the inclusion of the competitive Term 2 is enough to solve the cycling problem in the bilinear case. However, as discussed after Theorem 2.2, the convergence guarantees of the existing methods in the literature break down as the interactions between the players become too strong (for the given stepsize η). The first experiment illustrates that this is not just a lack of theory, but corresponds to an actual failure mode of the existing algorithms. While introducing the competitive term is enough to fix the cycling behaviour of GDA, OGDA and ConOpt (for small enough η) add the additional consensus term to the update rule, with opposite signs.
In the second experiment (where convergence is desired), OGDA converges in a smaller parameter range than GDA and SGA, while only diverging slightly faster in the third experiment (where divergence is desired).
ConOpt, on the other hand, converges faster than GDA in the second experiment for α = 1.0; however, it diverges faster for the remaining values of α and, what is more problematic, it converges to a spurious solution in the third experiment for α = 1.0.
Based on these findings, the consensus term with either sign does not seem to systematically improve the performance of the algorithm, which is why we suggest using only the competitive term (that is, using LCGD/LOLA, SGA, or CGD).


Figure 2: The first 50 iterations of GDA, LCGD, ConOpt, OGDA, and CGD (all with the same fixed parameters η and γ). The objective function is f(x, y) = αxy for, from left to right, α = 1, 3, 6. (Note that ConOpt and SGA coincide on a bilinear problem.)


Figure 3: We measure the (non-)convergence to equilibrium in the separable convex-concave problem (f(x, y) = α(x² − y²), left three plots) and the concave-convex problem (f(x, y) = α(−x² + y²), right three plots), for α = 1, 3, 6. (Color coding given by GDA, SGA, LCGD, CGD, ConOpt, OGDA; the y-axis measures the distance to the critical point (0, 0) and the x-axis the number of iterations.) Note that convergence is desired for the first problem, while divergence is desired for the second problem.

4 Implementation and numerical results

We briefly discuss the implementation of CGD. The Julia code used for our numerical experiments can be found under https://github.com/f-t-s/CGD.
Computing Hessian-vector products: First, our algorithm requires products of the mixed Hessians D²_{xy}f and D²_{yx}g with vectors, which we want to compute using automatic differentiation. As was already observed by Pearlmutter (1994), Hessian-vector products can be computed at minimal overhead over the cost of computing gradients, by combining forward- and reverse-mode automatic differentiation. To this end, the map returning the gradient ∇_x f(x, y) is defined using reverse-mode automatic differentiation. The Hessian-vector product D²_{xy}f v can then be evaluated as the directional derivative of this gradient map with respect to y in direction v, using forward-mode automatic differentiation. Many AD frameworks, like Autograd (https://github.com/HIPS/autograd) and ForwardDiff (https://github.com/JuliaDiff/ForwardDiff.jl, (Revels et al., 2016)) together with ReverseDiff (https://github.com/JuliaDiff/ReverseDiff.jl), support this procedure.
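For illustration, here is a minimal Julia sketch of the directional-derivative identity D²_{xy}f · v = d/dt ∇_x f(x, y + t v)|_{t=0}; for brevity it uses ForwardDiff for both differentiation passes on a hypothetical toy objective, whereas the mixed forward-over-reverse variant described above follows the same pattern and scales better to high-dimensional problems.

using ForwardDiff

f(x, y) = sum(x .* y) + 0.1 * sum(x .^ 2 .* y)    # hypothetical test objective

# Mixed Hessian-vector product D²_xy f * v as the y-directional derivative of ∇_x f.
hvp_xy(x, y, v) = ForwardDiff.derivative(
    t -> ForwardDiff.gradient(x_ -> f(x_, y .+ t .* v), x), 0.0)

x, y, v = [1.0, 2.0], [0.5, -1.0], [1.0, 0.0]
hvp_xy(x, y, v)   # for this toy f, D²_xy f = Diagonal(1 .+ 0.2 .* x), so the result is [1.2, 0.0]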

Matrix inversion for the equilibrium term: Similar to a truncated Newton method (Nocedal and Wright, 2006), we propose to use iterative methods to approximate the inverse-matrix vector products arising in the equilibrium Term 4. We will focus on zero-sum games, where the matrix Id + η² D²_{xy}f (D²_{xy}f)^⊤ is always symmetric positive definite, making the conjugate gradient (CG) algorithm the method of choice. For non-zero-sum games we recommend using GMRES or BiCGSTAB (see for example Saad (2003) for details). We suggest terminating the iterative solver once a given relative decrease of the residual is achieved (‖Mx − b‖ ≤ ε‖b‖ for a small parameter ε, when solving the system Mx = b); in our experiments we use a small fixed value of ε. Given the strategy x of one player, the optimal counter strategy of the other player is y = −η(∇_y g + D²_{yx}g x), which can be found without solving another system of equations. Thus, we recommend, in each update, solving only for the strategy of one of the two players using Equation (4), and then using the optimal counter strategy for the other player. The computational cost can be further improved by using the last round's optimal strategy as a warm start of the inner CG solve. An appealing feature of the above algorithm is that the number of iterations of CG adapts to the difficulty of solving the equilibrium Term 4. If it is easy, we converge rapidly and CGD gracefully reduces to LCGD, at only a small overhead. If it is difficult, we might need many iterations, but correspondingly the problem would be very hard to solve without the preconditioning provided by the equilibrium term.
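The inner solver can be summarized in a few lines; below is a minimal matrix-free conjugate gradient sketch in Julia with a warm start and a relative-residual stopping rule, where the operator is passed as a closure so that it can be backed by Hessian-vector products. All names, the tolerance, and the toy data are illustrative assumptions, not the authors' settings.

using LinearAlgebra

# Matrix-free conjugate gradients for M Δ = b, where applyM(v) returns M v.
# Terminates once the residual has dropped below ε‖b‖; x0 serves as a warm start.
function cg_solve(applyM, b, x0; ε = 1e-6, maxiter = 100)
    x = copy(x0)
    r = b - applyM(x)
    p = copy(r)
    ρ = dot(r, r)
    tol = ε * norm(b)
    for _ in 1:maxiter
        norm(r) ≤ tol && break
        Mp = applyM(p)
        α = ρ / dot(p, Mp)
        x .+= α .* p
        r .-= α .* Mp
        ρ_new = dot(r, r)
        p = r .+ (ρ_new / ρ) .* p
        ρ = ρ_new
    end
    return x
end

# Example: the zero-sum CGD x-update Δx = -η (I + η² Dxy Dxyᵀ)⁻¹ (∇_x f + η Dxy ∇_y f),
# with the matrix applied only through (Hessian-)vector products.
η, Dxy = 0.2, [1.0 2.0; 0.5 1.0]
gx, gy = [1.0, 0.5], [0.3, -0.2]
applyM(v) = v + η^2 * (Dxy * (Dxy' * v))
Δx = -η * cg_solve(applyM, gx + η * Dxy * gy, zeros(2))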

Experiment: Fitting a bimodal distribution: We use a simple GAN to fit a Gaussian mixture model with two modes, in two dimensions (see supplement for details). We apply SGA, ConOpt, OGDA, and CGD for a range of constant stepsizes, together with RMSProp. For each stepsize, CGD produces a reasonable approximation of the input distribution without any mode collapse. In contrast, all other methods diverge after some initial cycling behaviour! Reducing the steplength further did not seem to help, either. While we do not claim that the other methods cannot be made to work with proper hyperparameter tuning, this result substantiates our claim that CGD is significantly more robust than existing methods for competitive optimization. (See the appendix for more details regarding the experiments.)


Figure 4: For all methods, initially the players cycle between the two modes (first column). For all methods but CGD, the dynamics eventually become unstable (middle column). Under CGD, the mass eventually distributes evenly among the two modes (right column). (The arrows show the update of the generator and the colormap encodes the logit output by the discriminator.)

Experiment: Estimating a covariance matrix: To show that CGD is also competitive in terms of computational complexity, we consider the noiseless case of the covariance estimation example used by Daskalakis et al. (2017, Appendix C). We study the trade-off between the number of evaluations of the forward model (thus accounting for the inner loop of CGD) and the residual, and observe that for comparable stepsizes the convergence rate of CGD is similar to that of the other methods. However, since CGD remains convergent for larger stepsizes, it can beat the other methods by more than a factor of two (see appendix for details).


Figure 5: We plot the decay of the residual after a given number of model evaluations, for increasing problem sizes d = 20, 40, and 60. Experiments that are not plotted diverged.

5 Conclusion and outlook

We propose a novel and natural generalization of gradient descent to competitive optimization. Besides its attractive game-theoretic interpretation, the algorithm shows improved robustness properties compared to the existing methods, which we study using a combination of theoretical analysis and computational experiments. We see two particularly interesting directions for future work. First, we would like to further study the practical implementation and performance of CGD, developing it to become a useful tool for practitioners to solve competitive optimization problems. Second, we would like to study extensions of CGD to the setting of more than two players. As hinted in Section 2, a natural candidate would be to simply consider multilinear quadratically regularized local models, but the practical implementation and evaluation of this idea is still open.

Acknowledgments

A. Anandkumar is supported in part by Bren endowed chair, Darpa PAI, and Microsoft, Google and Adobe faculty fellowships. F. Schäfer gratefully acknowledges support by the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning) and by Amazon AWS under the Caltech Amazon Fellows program.

References

Appendix A Proofs of convergence

Proof of Theorem 2.2.

To shorten the expressions below, we set , , , , , , , and . Letting be the update step of CGD and using Taylor expansion, we obtain

By expanding zero to and , we obtain

We now plug the update rule of CGD into and and observe that to obtain

By plugging this into our main computation, we obtain

By positivity of squares, we have

For we have from which we deduce the result. ∎

Theorem 2.3 follows from Theorem 2.2 by relatively standard arguments:

Proof of Theorem 2.3.

Since and the gradient and Hessian of are continuous, there exists a neighbourhood of such that for all possible starting points , we have . Then, by convergence of the geometric series there exists a closed neighbourhood of , such that for we have and thus converges at an exponential rate to a point in . ∎

Appendix B Details regarding the experiments

B.1 Experiment: Estimating a covariance matrix

We consider the problem min_V max_W Σ_{i,j} W_{ij} (Σ̂_{ij} − (V V^⊤)_{ij}), where the Σ̂ are empirical covariance matrices obtained from samples distributed according to N(0, Σ). For our experiments, the matrix Σ is created as Σ = U U^⊤, where the entries of U are distributed i.i.d. standard Gaussian. We consider the algorithms OGDA, SGA, ConOpt, and CGD, and let the stepsizes range over several increasing values. We begin with the deterministic case Σ̂ = Σ, corresponding to the limit of large sample size. We let the dimension d take the values 20, 40, and 60, and evaluate the algorithms according to the trade-off between the number of forward evaluations and the corresponding reduction of the residual, starting from a random initial guess (the same for all algorithms) with i.i.d. uniformly distributed entries. We count the number of "forward passes" per outer iteration as follows.

  • OGDA: 2

  • SGA: 4

  • ConOpt:

  • CGD: 4 + 2 × (number of CG iterations)

The results are summarized in Figure 6. We see consistently that, for the same stepsize, CGD has a convergence rate comparable to that of OGDA. However, as we increase the stepsize, the other methods start diverging, thus allowing CGD to achieve significantly better convergence rates by using larger stepsizes. For larger dimensions (d = 40 and d = 60), OGDA, SGA, and ConOpt become even more unstable, such that OGDA with the smallest stepsize is the only other method that still converges, although at a much slower rate than CGD with larger stepsizes.


Figure 6: The decay of the residual as a function of the number of forward iterations (d = 20, 40, 60, from top to bottom). Note that missing combinations of algorithms and stepsizes correspond to divergent experiments. While the exact behavior of the different methods is subject to some stochasticity, results like the above were typical in our experiments.

We now consider the stochastic setting, where at each iteration a new Σ̂ is obtained as the empirical covariance matrix of a batch of samples drawn from N(0, Σ), for batch sizes of 100, 1000, and 10000.


Figure 7: The decay of the residual as a function of the number of forward iterations in the stochastic case, with batch sizes of 100, 1000, and 10000 (from top to bottom).

In this setting, the stochastic noise very quickly dominates the error, preventing CGD from achieving significantly better approximations than the other algorithms, while the other algorithms decrease the error more rapidly initially. It might be possible to improve the performance of our algorithm by lowering the accuracy of the inner linear system solve, following the intuition that in a noisy environment a very accurate solve is not worth the cost. However, even without such tweaking, it is noticeable that the trajectories of CGD are consistently less noisy and more regular than those of the other algorithms for comparable stepsizes, and CGD is furthermore the only algorithm that does not diverge for any of the stepsizes.

B.2 Experiment: Fitting a bimodal distribution

We use a GAN to fit a Gaussian mixture of two Gaussian random variables in two dimensions, with prescribed means and standard deviation. Generator and discriminator are given by dense neural nets with four hidden layers of equal width that are initialized as orthonormal matrices, with ReLU nonlinearities after each hidden layer. The generator uses 512-variate standard Gaussian noise as input, and both networks use a linear projection as their final layer. At each step, the discriminator is shown 256 real and 256 fake examples. We interpret the output of the discriminator as a logit and use sigmoidal cross-entropy as a loss function. We tried a range of constant stepsizes together with RMSProp and applied SGA, ConOpt, OGDA, and CGD. Note that the RMSProp version of CGD, with diagonal scaling given by the RMSProp matrices, is obtained by replacing the quadratic penalties x^⊤x/(2η) and y^⊤y/(2η) in the local game by their correspondingly rescaled counterparts, and carrying out the remaining derivation as before. This also makes it possible to apply other adaptive methods such as Adam. For all methods, the generator and discriminator are initially chasing each other across the strategy space, producing the typical cycling pattern. When using SGA, ConOpt, or OGDA, however, the algorithm eventually diverges, with the generator either mapping all the mass far away from the modes or collapsing the generating map to zero. Therefore, we also tried decreasing the stepsize, which however did not prevent the divergence. For CGD, after some initial cycles the generator starts splitting the mass and distributing it roughly evenly among the two modes. During our experiments, this configuration appeared to be robust.