Negative Momentum for Improved Game Dynamics

07/12/2018 ∙ by Gauthier Gidel, et al.

Games generalize the optimization paradigm by introducing different objective functions for different optimizing agents, known as players. Generative Adversarial Networks (GANs) are arguably the most popular game formulation in recent machine learning literature. GANs achieve great results in generating realistic natural images; however, they are known to be difficult to train. Training them involves finding a Nash equilibrium, typically performed using gradient descent on the two players' objectives. Game dynamics can induce rotations that slow down convergence to a Nash equilibrium, or prevent it altogether. We provide a theoretical analysis of the game dynamics. Our analysis, supported by experiments, shows that gradient descent with a negative momentum term can improve the convergence properties of some GANs.


1 Introduction

∗ Equal contribution.

Recent advances in machine learning are largely driven by the success of gradient-based optimization methods for the training process. A common learning paradigm is empirical risk minimization, where a (potentially non-convex) objective that depends on the data is minimized. However, some recently introduced approaches require the joint minimization of several objectives. For example, actor-critic methods can be written as a bi-level optimization problem (Pfau and Vinyals, 2016) and generative adversarial networks (GANs) (Goodfellow et al., 2014) use a two-player game formulation.

Games generalize the standard optimization framework by introducing different objective functions for different optimizing agents, known as players. We are commonly interested in finding a local Nash equilibrium: a set of parameters from which no player can (locally and unilaterally) improve its objective function. Optimization in games with differentiable objectives typically proceeds by simultaneous or alternating gradient steps on the players' objectives. Even though the dynamics of gradient-based methods are well understood for minimization problems, new issues appear in multi-player games. For instance, some stable stationary points of the dynamics may not be (local) Nash equilibria (Adolphs et al., 2018).

Motivated by a decreasing trend of momentum values in GAN literature (see Fig. 1), we study the effect of two particular algorithmic choices: (i) the choice between simultaneous and alternating updates, and (ii) the choice of step-size and momentum value. The idea behind our approach is that a momentum term combined with the alternating gradient method can be used to manipulate the natural oscillatory behavior of adversarial games. We summarize our main contributions as follows:

  • We prove in §5 that the alternating gradient method with negative momentum is the only setting within our study parameters (Fig. 2) that converges on a bilinear smooth game. Using a zero or positive momentum value, or performing simultaneous updates, fails to converge in such games.

  • We show in §4 that, for general dynamics, when the eigenvalues of the Jacobian have a large imaginary part, negative momentum can improve the local convergence properties of the gradient method.

  • We confirm the benefits of negative momentum for training GANs with the notoriously ill-behaved saturating loss on both toy settings and real datasets.

Outline.

§2 describes the fundamentals of the analytic setup that we use. §3 provides a formulation for the optimal step-size, and discusses the constraints and intuition behind it. §4 presents our theoretical results and guarantees on negative momentum. §5 studies the properties of alternating and simultaneous methods with negative momentum on a bilinear smooth game. §6 contains experimental results on toy and real datasets. Finally, in §7, we review some of the existing work on smooth game optimization as well as GAN stability and convergence.

Figure 1: Decreasing trend in the value of momentum used for training GANs across time. (The plot tracks momentum values between 0.2 and 1.0 over time for Mirza and Osindero (2014), Denton et al. (2015), Radford et al. (2015), Zhu et al. (2017), Arjovsky et al. (2017), Gulrajani et al. (2017), and Miyato et al. (2018).)
Summary table (Fig. 2, left): for the simultaneous method (Thm. 5), the iterates are unbounded and non-convergent for β > 0, β = 0 and β < 0 alike; for the alternating method, they are conjectured to diverge for β > 0, are bounded but non-convergent for β = 0, and are bounded and convergent for β < 0 (Thm. 6).
Figure 2: Left: Effect of gradient methods on an unconstrained bilinear example: the quantity Δ_t is the distance to the optimum (see formal definition in §5) and β is the momentum value. Right: Graphical intuition of the role of momentum in two steps of simultaneous updates (tan) or alternated updates (olive). Positive momentum (red) drives the iterates outwards whereas negative momentum (blue) pulls the iterates back towards the center, but it is only strong enough for alternated updates.

2 Background

Notation

In this paper, scalars are lower-case letters (e.g., λ), vectors are lower-case bold letters (e.g., ω), matrices are upper-case bold letters (e.g., A) and operators are upper-case letters (e.g., F). The spectrum of a square matrix A is denoted by Sp(A), and its spectral radius is defined as ρ(A) := max{|λ| : λ ∈ Sp(A)}. We respectively note σ_min(A) and σ_max(A) the smallest and the largest positive singular values of A. The identity matrix of ℝ^{n×n} is written I_n. We use ℜ(λ) and ℑ(λ) to respectively denote the real and imaginary parts of a complex number λ. O and Ω stand for the standard asymptotic notations. Finally, all the omitted proofs can be found in §D.

Game theory formulation of GANs

Generative adversarial networks consist of a discriminator D_φ and a generator G_θ. In this game, the discriminator's objective is to tell real from generated examples. The generator's goal is to produce examples that are sufficiently close to real examples to confuse the discriminator.

From a game theory point of view, GAN training is a differentiable two-player game: the discriminator D_φ aims at minimizing its cost function ℒ^{(φ)} and the generator G_θ aims at minimizing its own cost function ℒ^{(θ)}. Using the same formulation as the one in Mescheder et al. (2017) and Gidel et al. (2018), the GAN objective has the following form,

θ* ∈ argmin_{θ∈Θ} ℒ^{(θ)}(θ, φ*),   φ* ∈ argmin_{φ∈Φ} ℒ^{(φ)}(θ*, φ).   (1)

Given such a game setup, GAN training consists of finding a local Nash equilibrium, which is a state (θ*, φ*) in which neither the discriminator nor the generator can improve its respective cost by a small change in its parameters. In order to analyze the dynamics of gradient-based methods near a Nash equilibrium, we look at the gradient vector field,

v(θ, φ) := ( ∇_θ ℒ^{(θ)}(θ, φ),  ∇_φ ℒ^{(φ)}(θ, φ) ),   (2)

and its associated Jacobian ∇v(θ, φ),

∇v(θ, φ) := [[ ∇²_θ ℒ^{(θ)}(θ, φ),  ∇_φ∇_θ ℒ^{(θ)}(θ, φ) ],
             [ ∇_θ∇_φ ℒ^{(φ)}(θ, φ),  ∇²_φ ℒ^{(φ)}(θ, φ) ]].   (3)

Games in which ℒ^{(θ)} = −ℒ^{(φ)} are called zero-sum games, and (1) can be reformulated as a min-max problem. This is the case for the original min-max GAN formulation, but not for the non-saturating loss (Goodfellow et al., 2014) which is commonly used in practice.

For a zero-sum game, we note ℒ := ℒ^{(θ)} = −ℒ^{(φ)}. When the matrices ∇²_θℒ and ∇²_φℒ are zero, the Jacobian is anti-symmetric and has pure imaginary eigenvalues. We call games with pure imaginary eigenvalues purely adversarial games. This is the case in the simple bilinear game ℒ(θ, φ) = θᵀφ. This game can be formulated as a GAN where the true distribution is a Dirac on 0, the generator is a Dirac on θ and the discriminator is linear. This setup was extensively studied in 2D by Gidel et al. (2018).

Conversely, when the off-diagonal blocks ∇_φ∇_θℒ^{(θ)} and ∇_θ∇_φℒ^{(φ)} are zero and the matrices ∇²_θℒ^{(θ)} and ∇²_φℒ^{(φ)} are symmetric and positive definite, the Jacobian is symmetric and has real positive eigenvalues. We call games with real positive eigenvalues purely cooperative games. This is the case, for example, when the objective function is separable, such as ℒ(θ, φ) = ℒ₁(θ) + ℒ₂(φ) where ℒ₁ and ℒ₂ are two convex functions. The optimization can then be reformulated as two separate minimizations of ℒ₁ and ℒ₂ with respect to their respective parameters.

These notions of adversarial and cooperative games can be related to the notions of potential games (Monderer and Shapley, 1996) and Hamiltonian games recently introduced by Balduzzi et al. (2018): a game is a potential game (resp. Hamiltonian game) if its Jacobian is symmetric (resp. antisymmetric). Our definition of cooperative game is a bit more general than the definition of potential game since some non-symmetric matrices may have positive eigenvalues. Similarly, the notion of adversarial game generalizes the Hamiltonian games since some non-antisymmetric matrices may have pure imaginary eigenvalues: for instance, the matrix [[1, −2], [1, −1]] is not antisymmetric, yet its eigenvalues are ±i.

In this work, we are interested in games in between purely adversarial games and purely cooperative ones, i.e., games which have eigenvalues with non-negative real part (cooperative component) and non-zero imaginary part (adversarial component). For Δ ∈ [0, 1], a simple class of such games is parametrized by Δ,

min_θ max_φ  Δ/2 (‖θ‖² − ‖φ‖²) + (1 − Δ) θᵀφ.   (4)

The Jacobian of this game has eigenvalues Δ ± (1 − Δ)i, so the game is purely adversarial for Δ = 0 and purely cooperative for Δ = 1.
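As a concrete illustration, here is a minimal numerical sketch (our own code, assuming the parametrization of Eq. (4) above) that builds the Jacobian of this game and shows how its eigenvalues interpolate between the purely adversarial and purely cooperative extremes:

```python
import numpy as np

def jacobian(delta, d=2):
    """Jacobian of the vector field of Eq. (4): [[D*I, (1-D)*I], [-(1-D)*I, D*I]]."""
    I = np.eye(d)
    return np.block([[delta * I, (1 - delta) * I],
                     [-(1 - delta) * I, delta * I]])

for delta in (0.0, 0.5, 1.0):
    eigs = np.linalg.eigvals(jacobian(delta))
    print(f"Delta={delta}: eigenvalues {np.round(eigs, 3)}")
# Delta=0.0 -> purely imaginary eigenvalues (purely adversarial);
# Delta=1.0 -> purely real eigenvalues (purely cooperative).
```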

Simultaneous Gradient Method.

Let us consider the dynamics of the simultaneous gradient method. It is defined as the repeated application of the operator,

F_η(θ, φ) := (θ, φ) − η v(θ, φ),   (5)

where η is the learning rate. Now, for brevity we write the joint parameters ω := (θ, φ). For t ∈ ℕ, let ω_t be the t-th point of the sequence computed by the gradient method,

ω_t = F_η^{(t)}(ω_0),  i.e.,  ω_{t+1} = ω_t − η v(ω_t).   (6)

Then, if the gradient method converges and its limit point ω* is a fixed point of F_η such that ∇v(ω*) is positive-definite, then ω* is a local Nash equilibrium. Interestingly, some of the stable stationary points of the gradient dynamics may not be Nash equilibria (Adolphs et al., 2018); to the best of our knowledge, there is no first-order method alleviating this issue. In this work, we focus on the local convergence properties near the stationary points of the gradient vector field v. In the following, ω* is a stationary point of the gradient dynamics (i.e., a point such that v(ω*) = 0).
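The sketch below (ours, with illustrative parameter values) implements the operator of Eq. (5) and the iteration of Eq. (6) on the game of Eq. (4), showing convergence when the eigenvalues have positive real part:

```python
import numpy as np

def v(w, delta=0.5):
    """Vector field of Eq. (4) for two 1-D players, w = (theta, phi)."""
    theta, phi = w
    return np.array([delta * theta + (1 - delta) * phi,
                     delta * phi - (1 - delta) * theta])

def F(w, eta=0.1, delta=0.5):
    """One simultaneous gradient step, Eq. (5)."""
    return w - eta * v(w, delta)

w = np.array([1.0, 1.0])
for _ in range(200):
    w = F(w)                      # Eq. (6)
print(np.linalg.norm(w))          # ~1e-4: converges since Re(lambda) = 0.5 > 0
# With delta = 0 (purely adversarial) the same iteration spirals outwards.
```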

3 Tuning the Step-Size

Under certain conditions on a fixed point operator, linear convergence is guaranteed in a neighborhood around a fixed point.

Theorem 1 (Prop. 4.4.1 Bertsekas (1999)).

If the spectral radius ρ_max := ρ(∇F_η(ω*)) < 1, then, for ω_0 in a neighborhood of ω*, the distance of ω_t to the stationary point ω* converges at a linear rate of O((ρ_max + ε)^t), for any ε > 0.

From the definition in (5), we have:

∇F_η(ω) = I_n − η ∇v(ω).   (7)

If the eigenvalues of ∇v(ω*) all have a positive real part, then for small enough η, the eigenvalues of ∇F_η(ω*) are inside a convergence circle of radius ρ_max < 1, as illustrated in Fig. 3. Thm. 1 guarantees the existence of an optimal step-size η* which yields a non-trivial convergence rate ρ_max < 1. Thm. 2 gives analytic bounds on the optimal step-size η*, and lower-bounds the best convergence rate we can expect.

Theorem 2.

If the eigenvalues λ_1, …, λ_m of ∇v(ω*) all have a positive real part, then the best step-size η*, which minimizes the spectral radius of ∇F_η(ω*), is the solution of a (convex) quadratic by parts problem: each eigenvalue contributes the quadratic η ↦ |1 − ηλ_k|² = 1 − 2ηℜ(λ_k) + η²|λ_k|², and the spectral radius is the pointwise maximum of these quadratics. It satisfies,

(8)
(9)
(10)

where the λ_k are sorted such that ℜ(1/λ_1) ≤ ⋯ ≤ ℜ(1/λ_m); the bounds (8)-(10) control the best rate ρ(∇F_{η*}(ω*)) and bracket η* in terms of the quantities ℜ(1/λ_k). Particularly, when a single eigenvalue λ_k is limiting, we are in the case of the top plot of Fig. 3 and η* = ℜ(1/λ_k).

When ∇v(ω*) is positive-definite, the best η* is attained either because of one or because of several limiting eigenvalues. We illustrate and interpret these two cases in Fig. 3. In multivariate convex optimization, the optimal step-size depends on the extreme eigenvalues and their ratio, the condition number. Unfortunately, the notion of condition number does not trivially extend to games, but Thm. 2 seems to indicate that the real parts of the inverses of the eigenvalues play an important role in the dynamics of smooth games. We think that a notion of condition number might be meaningful for such games and we propose an illustrative example to discuss this point in §B. Note that when the eigenvalues are pure positive real numbers belonging to [μ, L], (8) provides the standard bound obtained with a step-size η = 1/L (see §D.2 for details).
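A hedged numerical illustration of this structure (the spectrum below is an arbitrary assumption of ours, chosen only for the demonstration): the best step-size can be found by scanning the pointwise maximum of the per-eigenvalue quadratics.

```python
import numpy as np

lams = np.array([1.0 + 3.0j, 2.0 + 0.5j, 0.5 + 0.0j])   # assumed spectrum, Re > 0
etas = np.linspace(1e-4, 1.0, 10_000)
radii = np.abs(1 - etas[:, None] * lams[None, :]).max(axis=1)  # max_k |1 - eta*lam_k|
best = radii.argmin()
print(f"eta* ~ {etas[best]:.4f},  rho* ~ {radii[best]:.4f}")
# For a single eigenvalue the minimizer would be eta = Re(1/lam); the pointwise
# maximum over several quadratics |1 - eta*lam_k|^2 is what Thm. 2 optimizes.
```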

Figure 3: Eigenvalues of the Jacobian and their trajectories η ↦ 1 − ηλ_k for growing step-sizes. The unit circle is drawn in black, and the red dashed circle has radius equal to the largest eigenvalue magnitude, which is directly related to the convergence rate. Therefore, smaller red circles mean better convergence rates. Top: the red circle is limited by the tangent trajectory line, which means the best convergence rate is limited only by the eigenvalue which will pass furthest from the origin as η grows. Bottom: the red circle is cut (not tangent) by the trajectories at two points. The step-size is optimal because any increase in η will push one eigenvalue out of the red circle, while any decrease in step-size will retract the other eigenvalue out of the red circle, which would worsen the convergence rate in either case. Figure inspired by Mescheder et al. (2017).

Note that the ordering in (9) is well defined because the λ_k are sorted such that ℜ(1/λ_1) ≤ ⋯ ≤ ℜ(1/λ_m). From (8), we can see that if the Jacobian of v has an almost purely imaginary eigenvalue λ, then ℜ(1/λ) is close to 0 and thus the convergence rate of the gradient method may be arbitrarily close to 1. Zhang and Mitliagkas (2017) provide an analysis of the momentum method for quadratics, showing that momentum can actually help to better condition the model. One interesting point from their work is that the best conditioning is achieved when the added momentum makes the Jacobian eigenvalues turn from positive reals into complex conjugate pairs. Our goal is to use momentum to wrangle game dynamics into convergence by manipulating the eigenvalues of the Jacobian.

4 Negative Momentum

As shown in (8), the presence of eigenvalues with large imaginary parts can restrict us to small step-sizes and lead to slow convergence rates. In order to improve convergence, we add a negative momentum term into the update rule. Informally, one can think of negative momentum as friction that can damp oscillations. The new momentum term leads to a modification of the parameter update operator of (5). We use a similar state augmentation as Zhang and Mitliagkas (2017) to form a compound state (ω_t, ω_{t−1}). The update rule (5) turns into the following,

F_{η,β}(ω_t, ω_{t−1}) := (ω_{t+1}, ω_t),   (11)
where ω_{t+1} := ω_t − η v(ω_t) + β (ω_t − ω_{t−1}),   (12)

in which β is the momentum parameter. Therefore, the Jacobian of F_{η,β} has the following form,

∇F_{η,β}(ω_t, ω_{t−1}) = [[ (1 + β) I_n − η ∇v(ω_t),  −β I_n ], [ I_n,  0_n ]].   (13)

Note that for β = 0, we recover the gradient method.
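As a sanity check, the sketch below (ours) forms the one-eigenvalue version of the augmented Jacobian (13) and verifies numerically that its eigenvalues match the closed form (14) stated in Thm. 3 below; the particular values of λ, η and β are arbitrary.

```python
import numpy as np

lam, eta, beta = 1.0 + 2.0j, 0.2, -0.1

# One-eigenvalue version of the augmented Jacobian of Eq. (13).
J = np.array([[1 + beta - eta * lam, -beta],
              [1.0, 0.0]])
numerical = np.sort_complex(np.linalg.eigvals(J))

# Closed form of Eq. (14): mu = (1 - eta*lam + beta +/- sqrt(Delta)) / 2.
disc = np.sqrt((1 - eta * lam + beta) ** 2 - 4 * beta + 0j)
closed = np.sort_complex(np.array([(1 - eta * lam + beta + disc) / 2,
                                   (1 - eta * lam + beta - disc) / 2]))
print(np.allclose(numerical, closed))   # True
```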

Figure 4: Transformation of the eigenvalues by a negative momentum method for a game introduced in (4). Convergence circles for the gradient method are in red, negative momentum in green, and the unit circle in black. Solid convergence circles are optimized over all step-sizes, while dashed circles correspond to a fixed step-size. For that fixed step-size, original eigenvalues are in red and negative momentum eigenvalues are in blue; their trajectories as β sweeps over an interval of negative values are in light colors. Negative momentum helps as the new convergence circle (green) is smaller, due to shifting the original eigenvalues (red dots) towards the origin (right blue dots), while the eigenvalues due to state augmentation (left blue dots) have smaller magnitude and do not influence the convergence rate. Negative momentum allows faster convergence (green circle inside the solid red circle) for a broad range of step-sizes.

In some situations, if β is adjusted properly, negative momentum can improve the convergence rate to a local stationary point by pushing the eigenvalues of its Jacobian towards the origin. In the following theorem, we provide an explicit equation for the eigenvalues of the Jacobian of F_{η,β}.

Theorem 3.

The eigenvalues of ∇F_{η,β}(ω*, ω*) are

μ_±(β, η, λ) := (1 − ηλ + β ± √Δ) / 2,  for λ ∈ Sp(∇v(ω*)),   (14)

where Δ := (1 − ηλ + β)² − 4β and √Δ is the complex square root of Δ with positive real part (if Δ is a negative real number, we set √Δ := i√(−Δ)). Moreover, we have the following Taylor approximations,

μ_+(β, η, λ) = 1 − ηλ − β ηλ/(1 − ηλ) + O(β²),   (15)
μ_−(β, η, λ) = β/(1 − ηλ) + O(β²).   (16)

When |β| is small enough, √Δ is a complex number close to 1 − ηλ. Consequently, μ_+ is close to the original eigenvalue 1 − ηλ of the gradient dynamics, and μ_−, the eigenvalue introduced by the state augmentation, is close to 0. The Taylor approximations (15) and (16) formalize this intuition by providing the first order approximation of both eigenvalues.

In Fig. 4, we illustrate the effects of negative momentum on a game described in (4). Negative momentum shifts the original eigenvalues (trajectories in light red) by pushing them to the left towards the origin (trajectories in light blue).

Since our goal is to minimize the largest magnitude of the eigenvalues of ∇F_{η,β}, which are computed in Thm. 3, we want to understand the effect of β on the eigenvalues with potentially large magnitude. Let λ ∈ Sp(∇v(ω*)); we define the (squared) magnitude that we want to optimize,

ρ²_{η,λ}(β) := max{ |μ_+(β, η, λ)|², |μ_−(β, η, λ)|² }.   (17)

We study the local behavior of ρ²_{η,λ} for small values of β. The following theorem shows that a well suited β < 0 decreases ρ²_{η,λ}, which corresponds to faster convergence.

Theorem 4.

For any λ ∈ Sp(∇v(ω*)) such that ℜ(λ) > 0, the derivative ∂_β ρ²_{η,λ}(β) at β = 0 is positive for every step-size η in an explicit interval around the single-eigenvalue optimal step-size ℜ(1/λ). Consequently, for such step-sizes, a small enough negative momentum strictly decreases ρ²_{η,λ}.

As we have seen previously in Fig. 3 and Thm. 2, only a few eigenvalues slow down the convergence. Thm. 4 is a local result showing that a small negative momentum can improve the magnitude of these limiting eigenvalues in the following cases: when there is only one limiting eigenvalue λ (since in that case the optimal step-size is η* = ℜ(1/λ), which belongs to the interval of Thm. 4), or when there are several limiting eigenvalues and the intersection of their respective intervals is not empty. We point out that we do not provide any guarantees on whether this intersection is empty or not, but note that if the absolute value of the argument of λ is large enough, then (10) guarantees that the optimal step-size belongs to the required interval.

Since our result is local, it does not provide any guarantees for large negative values of β. Nevertheless, we numerically optimized (17) with respect to β and η and found that for any non-imaginary fixed eigenvalue λ, the optimal momentum is negative and the associated optimal step-size is larger than the optimal step-size of the plain gradient method. Another interesting aspect of negative momentum is that it admits larger step-sizes (see Fig. 4 and 5).
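This numerical optimization can be reproduced with a simple grid search; the sketch below (our implementation, with an arbitrarily chosen eigenvalue λ) evaluates (17) through the closed form of Thm. 3 and reports the minimizing pair (η, β):

```python
import numpy as np

def rho2(eta, beta, lam):
    """Squared magnitude of Eq. (17) via the closed form of Thm. 3."""
    disc = np.sqrt((1 - eta * lam + beta) ** 2 - 4 * beta + 0j)
    mu_p = (1 - eta * lam + beta + disc) / 2
    mu_m = (1 - eta * lam + beta - disc) / 2
    return max(abs(mu_p), abs(mu_m)) ** 2

lam = 0.2 + 1.0j                                # non-imaginary: Re(lam) > 0
etas = np.linspace(0.01, 2.0, 300)
betas = np.linspace(-0.9, 0.9, 300)
grid = np.array([[rho2(e, b, lam) for b in betas] for e in etas])
i, j = np.unravel_index(grid.argmin(), grid.shape)
print(f"optimal eta ~ {etas[i]:.3f}, optimal beta ~ {betas[j]:.3f}")
# For eigenvalues with a non-zero real part, the minimizing beta comes out
# negative, matching the observation above.
```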

For a game with purely imaginary eigenvalues, when ηλ is small, Thm. 3 shows that μ_+ ≈ 1 − ηλ(1 + β). Therefore, at the first order, β only has an impact on the imaginary part of μ_+. Consequently, μ_+ cannot be pushed into the unit circle, and the convergence guarantees of Thm. 1 do not apply. In other words, the analysis above provides convergence rates for games without any pure imaginary eigenvalues. It excludes the purely adversarial bilinear example (Δ = 0 in Eq. 4) that is discussed in the next section.

5 Bilinear Smooth Games

In this section we analyze the dynamics of a purely adversarial game described by,

min_{θ∈ℝ^d} max_{φ∈ℝ^p}  θᵀAφ + θᵀb + cᵀφ.   (18)

The first order stationary condition for this game characterizes the solutions as

{ (θ*, φ*) :  Aφ* + b = 0  and  Aᵀθ* + c = 0 }.   (19)

If −b (resp. −c) does not belong to the column space of A (resp. Aᵀ), the game (18) admits no equilibrium. In the following, we assume that an equilibrium does exist for this game. Consequently, there exist θ* and φ* such that Aφ* = −b and Aᵀθ* = −c. Using the translations θ ↦ θ − θ* and φ ↦ φ − φ*, we can assume without loss of generality that b = 0, c = 0, and that (0, 0) is an equilibrium. We provide upper and lower bounds on the squared distance from the known equilibrium,

Δ_t := ‖θ_t − θ*‖²₂ + ‖φ_t − φ*‖²₂,   (20)

where (θ*, φ*) is the projection of (θ_0, φ_0) onto the solution space. We show in §C, Lem. 2 that, for our methods of interest, this projection has a simple formulation that only depends on the initialization (θ_0, φ_0).

We aim to understand the difference between the dynamics of simultaneous steps and alternating steps. Practitioners have been widely using the latter instead of the former when optimizing GANs despite the rich optimization literature on simultaneous methods.

5.1 Simultaneous gradient descent

We define this class of methods with momentum using the following formulas,

θ_{t+1} = θ_t − η A φ_t + β (θ_t − θ_{t−1}),
φ_{t+1} = φ_t + η Aᵀ θ_t + β (φ_t − φ_{t−1}).   (21)

In our simple setting, the update operator of (21) is linear. One way to study the asymptotic properties of the sequence (θ_t, φ_t) is to compute the eigenvalues of this operator. The following proposition characterizes these eigenvalues.

Proposition 1.

The eigenvalues of the update operator of (21) are the roots of the fourth order polynomials:

x ↦ (x² − (1 + β)x + β)² + η²σᵢ²x²,  1 ≤ i ≤ r,   (22)

where σ₁, …, σ_r are the positive singular values of A.

Interestingly, these roots only depend on the products ησᵢ, meaning that jointly re-scaling A and the step-size leaves the eigenvalues, and consequently the asymptotic dynamics of the iterates (θ_t, φ_t), unchanged. The magnitude of the eigenvalues described in (22) characterizes the asymptotic properties of the iterates of the simultaneous method (21). We report the maximum magnitude of these roots for a grid of step-sizes and momentum values in Fig. 7. We observe that it is always larger than 1, which indicates a diverging behavior. The following theorem provides an analytical rate of divergence.
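A small sketch (ours) that evaluates the maximum root magnitude of the quartic (22) numerically, over an arbitrary grid of step-sizes:

```python
import numpy as np

def max_root_sim(eta_sigma, beta):
    """Max root magnitude of (x^2 - (1+beta)x + beta)^2 + (eta*sigma)^2 x^2, Eq. (22)."""
    q = np.array([1.0, -(1.0 + beta), beta])      # x^2 - (1+beta)x + beta
    p = np.polymul(q, q)                          # degree-4 coefficients
    p[2] += eta_sigma ** 2                        # add the degree-2 monomial
    return np.abs(np.roots(p)).max()

for beta in (-0.5, 0.0, 0.5):
    best = min(max_root_sim(es, beta) for es in np.linspace(0.05, 1.5, 50))
    print(f"beta={beta:+.1f}: smallest max-magnitude over step-sizes = {best:.4f}")
# All values come out above 1, in line with the divergence of Thm. 5.
```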

Figure 5: The effect of momentum in a simple min-max bilinear game where the equilibrium is at the origin. (left-a) Simultaneous GD with no momentum. (left-b) Alternating GD with no momentum. (left-c) Alternating GD with a positive momentum value. (left-d) Alternating GD with a negative momentum value. (right) A grid of experiments for alternating GD with different values of momentum (β) and step-sizes (η): while any positive momentum leads to divergence, a small enough negative momentum allows for convergence with large step-sizes. The color in each cell indicates the normalized distance to the equilibrium after 500k iterations, such that 1 corresponds to the initial condition and values larger (smaller) than 1 correspond to divergence (convergence).
Theorem 5.

For any step-size η > 0 and any momentum value β, the iterates (θ_t, φ_t) of the simultaneous method (21) diverge geometrically, i.e., Δ_t = Ω(ρ^t Δ_0) for some ρ > 1.

This theorem states that the iterates of the simultaneous method (21) diverge geometrically for any momentum value. Interestingly, this geometric divergence implies that even a uniform averaging of the iterates (standard in game optimization to ensure convergence (Freund et al., 1999)) cannot alleviate the divergence.

5.2 Alternating gradient descent

Alternating gradient methods take advantage of the fact that the iterates θ_t and φ_t are computed sequentially, plugging the value of θ_{t+1} (instead of θ_t for the simultaneous update rule) into the update of φ_{t+1},

θ_{t+1} = θ_t − η A φ_t + β (θ_t − θ_{t−1}),
φ_{t+1} = φ_t + η Aᵀ θ_{t+1} + β (φ_t − φ_{t−1}).   (23)

This slight change between (21) and (23) significantly shifts the eigenvalues of the Jacobian. We first characterize them with the following proposition.

Proposition 2.

The eigenvalues of the update operator of (23) are the roots of the fourth order polynomials:

x ↦ (x² − (1 + β)x + β)² + η²σᵢ²x³,  1 ≤ i ≤ r,   (24)

where σ₁, …, σ_r are the positive singular values of A.

The same way as in (22), these roots only depend on the products ησᵢ. The only difference is that the monomial with coefficient η²σᵢ² is of degree 2 in (22) and of degree 3 in (24). This difference is major since, for well chosen values of negative momentum, the eigenvalues described in Prop. 2 lie inside the unit disk (see Fig. 7). As a consequence, the iterates of the alternating method with no momentum are bounded, and do converge if we add some well chosen negative momentum:
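The same check as for Prop. 1, now with the degree-3 monomial of (24) (again our own sketch with illustrative values), exhibits this sweet spot:

```python
import numpy as np

def max_root_alt(eta_sigma, beta):
    """Max root magnitude of (x^2 - (1+beta)x + beta)^2 + (eta*sigma)^2 x^3, Eq. (24)."""
    q = np.array([1.0, -(1.0 + beta), beta])
    p = np.polymul(q, q)
    p[1] += eta_sigma ** 2                        # degree-3 monomial this time
    return np.abs(np.roots(p)).max()

print(max_root_alt(1.0, 0.0))    # = 1.0: bounded, non-converging cycle
print(max_root_alt(0.5, -0.5))   # < 1: the negative-momentum sweet spot
print(max_root_alt(0.5, +0.5))   # > 1: consistent with the conjecture in Fig. 2
```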

Theorem 6.

If we set β = 0 and use a small enough step-size, then the iterates of the alternating method (23) remain bounded,

Δ_t = O(Δ_0).   (25)

If we set β = −1/2 and use a small enough step-size η, then there exists ρ < 1 such that for any t, Δ_t = O(ρ^t Δ_0).

Our results from this section, namely Thm. 5 and Thm. 6, are summarized in Fig. 2, and demonstrate how alternating steps can improve the convergence properties of the gradient method for bilinear smooth games. Moreover, combining them with negative momentum surprisingly leads to a linearly convergent method. The conjecture provided in Fig. 2 (divergence of the alternating method with positive momentum) is backed up by the results provided in Fig. 5 and §A.1.

6 Experiments and Discussion

Min-Max Bilinear Game

[Fig. 5]   In our first experiments, we showcase the effect of negative momentum in a bilinear min-max optimization setup (Eq. 4 with Δ = 0). We compare the effect of positive and negative momentum in both cases of alternating and simultaneous gradient steps.

Fashion MNIST and CIFAR-10

[Fig. 6]    In our next set of experiments, we use negative momentum in a GAN setup on CIFAR-10 (Krizhevsky and Hinton, 2009) and Fashion-MNIST (Xiao et al., 2017) with the saturating loss and alternating steps. We use residual networks for both the generator and the discriminator, with no batch normalization. Following the same architecture as Gulrajani et al. (2017), each residual block is made of two convolution layers with ReLU activation functions. Up-sampling and down-sampling layers are respectively used in the generator and discriminator. We experiment with different values of momentum on the discriminator and a constant value of 0.5 for the momentum of the generator. We observe that using a negative value can generally result in samples with higher quality and better inception scores. Intuitively, using negative momentum only on the discriminator slows down the learning process of the discriminator and allows for better flow of the gradient to the generator. Note that we provide an additional experiment on a mixture of Gaussians in §A.2.
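For practitioners, here is a minimal sketch of a heavy-ball update that accepts β < 0. Some off-the-shelf SGD implementations reject negative momentum values, so the recursion of Eqs. (11)-(12) is written out explicitly; all names are ours and the toy usage below is only illustrative.

```python
import numpy as np

class HeavyBall:
    """omega <- omega - eta*grad + beta*(omega - omega_prev); beta may be negative."""
    def __init__(self, params, eta, beta):
        self.eta, self.beta = eta, beta
        self.prev = [p.copy() for p in params]    # stores omega_{t-1}

    def step(self, params, grads):
        for p, g, q in zip(params, grads, self.prev):
            new = p - self.eta * g + self.beta * (p - q)   # Eqs. (11)-(12)
            q[...] = p
            p[...] = new

# Toy usage on min_theta max_phi theta*phi with alternating steps:
theta, phi = np.array([1.0]), np.array([1.0])
opt_theta = HeavyBall([theta], eta=0.5, beta=-0.5)
opt_phi = HeavyBall([phi], eta=0.5, beta=-0.5)
for _ in range(200):
    opt_theta.step([theta], [phi.copy()])    # d/dtheta (theta*phi) = phi
    opt_phi.step([phi], [-theta.copy()])     # ascent on phi: gradient of -theta*phi
print(theta[0], phi[0])                      # both ~0, in line with Thm. 6
```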

Figure 6: Comparison between negative and positive momentum on GANs with saturating loss on CIFAR-10 (left) and on Fashion MNIST (right) using a residual network. For each dataset, a grid of different values of momentum (β) and step-sizes (η) is provided, describing the discriminator's settings, while a constant momentum of 0.5 and a fixed step-size are used for the generator. Each cell in the CIFAR-10 (or Fashion MNIST) grid contains a single configuration in which its color (or its content) indicates the inception score (or a single sample) of the model. For the CIFAR-10 experiments, yellow indicates a higher and blue a lower inception score. Along each row, the best configuration is chosen and more samples from that configuration are presented on the right side of each grid.

7 Related Work

Optimization

From an optimization point of view, a lot of work has been done in the context of understanding momentum and its variants (Polyak, 1964; Qian, 1999; Nesterov, 2013; Sutskever et al., 2013). Some recent studies have emphasized the importance of momentum tuning in deep learning, such as Sutskever et al. (2013), Kingma and Ba (2015), and Zhang and Mitliagkas (2017); however, none of them consider using negative momentum. Among recent work, using robust control theory, Lessard et al. (2016) study optimization procedures and cover a variety of algorithms including momentum methods. Their analysis is global and they establish worst-case bounds for smooth and strongly-convex functions. Mitliagkas et al. (2016) considered negative momentum in the context of asynchronous single-objective minimization. They show that asynchronous-parallel dynamics 'bleed' into optimization updates, introducing momentum-like behavior into SGD. They argue that algorithmic momentum and asynchrony-induced momentum add up to create an effective 'total momentum' value. They conclude that to attain the optimal (positive) effective momentum in an asynchronous system, one would have to reduce algorithmic momentum to small or sometimes negative values. This differs from our work, where we show that for games the optimal effective momentum may be negative. Ghadimi et al. (2015) analyze momentum and provide global convergence properties for functions with Lipschitz-continuous gradients. However, all the results mentioned above are restricted to minimization problems. The purpose of our work is to understand how momentum influences game dynamics, which are intrinsically different from minimization dynamics.

GANs as games

A lot of recent work has attempted to make GAN training easier with new optimization methods. Daskalakis et al. (2018) extrapolate the next value of the gradient using previous history and Gidel et al. (2018) explore averaging and introduce a variant of the extra-gradient algorithm.

Balduzzi et al. (2018) develop new methods to understand the dynamics of general games: they decompose second-order dynamics into two components using Helmholtz decomposition and use the fact that the optimization of Hamiltonian games is well understood. It differs from our work since we do not consider any decomposition of the Jacobian but focus on the manipulation of its eigenvalues. Recently, Liang and Stokes (2018) provide a unifying theory for smooth two-player games for non-asymptotic local convergence. They also provide theory for choosing the right step-size required for convergence.

From another perspective, Odena et al. (2018) show that in a GAN setup, the average conditioning of the Jacobian of the generator becomes ill-conditioned during training. They propose Jacobian clamping to improve the inception score and the Fréchet Inception Distance. Mescheder et al. (2017) provide a discussion on how the eigenvalues of the Jacobian govern the local convergence properties of GANs. They argue that the presence of eigenvalues with zero real part and large imaginary part results in oscillatory behavior, but they do not provide results on the optimal step-size or on the impact of momentum. Nagarajan and Kolter (2017) also analyze the local stability of GANs as an approximated continuous dynamical system. They show that during training of a GAN, the eigenvalues of the Jacobian of the corresponding vector field are pushed away from one along the real axis.

8 Conclusion

In this paper, we show analytically and empirically that alternating updates with negative momentum constitute the only method within our study parameters (Fig. 2) that converges in bilinear smooth games. We study the effects of using negative values of momentum in a GAN setup both theoretically and experimentally. We show that, for a large class of adversarial games, negative momentum may improve the convergence rate of gradient-based methods by shifting the eigenvalues of the Jacobian appropriately into a smaller convergence disk. We found that, in simple yet intuitive examples, using negative momentum makes convergence to the Nash equilibrium easier. Our experiments support the use of negative momentum for saturating losses on mixtures of Gaussians, as well as on other tasks using CIFAR-10 and Fashion MNIST. Altogether, fully stabilizing learning in GANs requires a deep understanding of the underlying highly non-linear dynamics. We believe our work is a step towards a better understanding of these dynamics. We encourage deep learning researchers and practitioners to include negative values of momentum in their hyper-parameter search.

We believe that our results explain the decreasing trend in momentum values used for training GANs in the past few years, reported in Fig. 1. Some of the most successful papers use zero momentum (Arjovsky et al., 2017; Gulrajani et al., 2017) for architectures that would otherwise call for high momentum values in a non-adversarial setting.

Acknowledgments

This research was partially supported by the Canada CIFAR AI Chair Program, the FRQNT nouveaux chercheurs program, 2019-NC-257943, the Canada Excellence Research Chair in “Data Science for Real-time Decision-making”, by the NSERC Discovery Grant RGPIN-2017-06936, a Google Focused Research Award and an IVADO grant. Authors would like to thank NVIDIA corporation for providing the NVIDIA DGX-1 used for this research. Authors are also grateful to Frédéric Bastien, Florian Bordes, Adam Beberg, Cam Moore and Nithya Natesan for their support.


Appendix A Additional Figures

A.1 Maximum magnitude of the eigenvalues of gradient descent with negative momentum on a bilinear objective

In Figure 7, we numerically computed (using the formulas provided in Propositions 1 and 2) the maximum magnitude of the eigenvalues of gradient descent with negative momentum on a bilinear objective, as a function of the step-size η and the momentum β. We can notice that, on one hand, for the simultaneous gradient method, no value of η and β provides a maximum magnitude smaller than 1, causing the algorithm to diverge. On the other hand, for the alternating gradient method there exists a sweet spot where the maximum magnitude of the eigenvalues of the operator is smaller than 1, ensuring that this method converges linearly (since the Jacobian of a bilinear min-max problem is constant).

Figure 7: Contour plot of the maximum magnitude of the roots of the polynomial (22) (left, simultaneous) and (24) (right, alternated) for different values of the step-size η and the momentum β. Note that compared to (22) and (24), we set σ = 1 without loss of generality. On the left, magnitudes are always larger than 1, and equal to 1 only in the limit of a vanishing step-size. On the right, magnitudes are smaller than 1 in a sweet spot of negative momentum and suitable step-size, and greater than 1 elsewhere.

A.2 Mixture of Gaussians

[Fig. 8]   In this set of experiments, we evaluate the effect of using negative momentum for a GAN with the saturating loss and alternating steps. The data in this experiment comes from eight Gaussian distributions which are distributed uniformly around the unit circle. The goal is to force the generator to generate 2-D samples that are coming from all of the 8 distributions. Although this looks like a simple task, many GANs fail to generate diverse samples in this setup, so this experiment shows whether the algorithm prevents mode collapse or not.

Figure 8: The effect of negative momentum for a mixture of 8 Gaussian distributions in a GAN setup. Real data and the results of using SGD with zero momentum on the generator and negative / zero / positive momentum (β) on the discriminator are depicted.

We use a fully connected network with 4 hidden ReLU layers where each layer has 256 hidden units. The latent code of the generator is an 8-dimensional multivariate Gaussian. The model is trained for 100,000 iterations with stochastic gradient descent, using zero momentum on the generator and negative, zero, or positive momentum values on the discriminator. We observe that negative momentum considerably improves the results compared to positive or zero momentum.
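A sketch of the data generator for this experiment (the parameters are ours: the text does not specify the standard deviation of the modes, we pick 0.02 for illustration):

```python
import numpy as np

def sample_8gaussians(n, std=0.02, seed=0):
    """n 2-D points from 8 Gaussians whose means sit uniformly on the unit circle."""
    rng = np.random.default_rng(seed)
    angles = rng.integers(0, 8, size=n) * (2 * np.pi / 8)   # pick one of 8 modes
    centers = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

print(sample_8gaussians(512).shape)   # (512, 2)
```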

Appendix B Discussion on Momentum and Conditioning

In this section, we analyze the effect of the conditioning of the problem on the optimal value of momentum. Consider the following formulation as an extension of the bilinear min-max game discussed in §5 and of Eq. 4,

min_θ max_φ  Δ/2 (‖θ‖² − ‖φ‖²) + (1 − Δ) θᵀDφ,   (26)

where D is a square diagonal positive-definite matrix,

D = diag(d_1, …, d_d),  d_k > 0,   (27)

and its condition number is κ := max_k d_k / min_k d_k. Thus, we can re-write the vector field and the Jacobian as functions of Δ and D,

v(θ, φ) = ( Δθ + (1 − Δ)Dφ,  Δφ − (1 − Δ)Dθ ),   ∇v = [[ ΔI, (1 − Δ)D ], [ −(1 − Δ)D, ΔI ]].   (28)

The corresponding eigenvalues of the Jacobian are,

λ_k^± = Δ ± i (1 − Δ) d_k,  1 ≤ k ≤ d.   (29)

For simplicity, in the following we will note λ_k for λ_k^+; the conjugate eigenvalues λ_k^− lead to the same magnitudes.

Using Thm. 3, the eigenvalues of ∇F_{η,β} are,

μ_±(β, η, λ_k) = (1 − ηλ_k + β ± √Δ_k) / 2,   (30)

where Δ_k := (1 − ηλ_k + β)² − 4β and √Δ_k is the complex square root of Δ_k with positive real part. Hence the spectral radius of ∇F_{η,β} can be explicitly formulated as a function of η and β,

ρ(∇F_{η,β}) = max_k max{ |μ_+(β, η, λ_k)|, |μ_−(β, η, λ_k)| }.   (31)

In Figure 9, we numerically computed the optimal β that minimizes ρ(∇F_{η,β}) as a function of the step-size η, for several values of Δ and of the condition number κ. To balance the game between the adversarial part and the cooperative part, we normalize the matrix D such that the sum of its diagonal elements is fixed. It can be seen that there is a competition between the type of the game (adversarial versus cooperative) and the conditioning of the matrix D. In a more cooperative regime, a larger condition number results in more positive values of the optimal momentum, which is consistent with the intuition that cooperative games are almost minimization problems, where the optimal value for the momentum is known (Polyak, 1964) to be positive. Interestingly, even if the condition number of D is large, when the game is adversarial enough, the optimal value for the momentum is negative. This experimental setting seems to suggest the existence of a multidimensional condition number taking into account the difficulties introduced by the ill conditioning of D as well as by the adversarial component of the game.

Figure 9: Plot of the optimal value of momentum as a function of the step-size η, for different Δ's and condition numbers κ. Blue/white/orange regions correspond to negative/zero/positive values of the optimal momentum, respectively.
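A minimal sketch of this search, assuming the eigenvalue parametrization of Eq. (29) above (the values of Δ, κ, the grids and all names are ours):

```python
import numpy as np

def radius(eta, beta, lams):
    """Spectral radius of Eq. (31) via the closed form of Eq. (30)."""
    disc = np.sqrt((1 - eta * lams + beta) ** 2 - 4 * beta + 0j)
    mus = np.concatenate([(1 - eta * lams + beta + disc) / 2,
                          (1 - eta * lams + beta - disc) / 2])
    return np.abs(mus).max()

delta, kappa, dim = 0.5, 10.0, 5
d = np.linspace(1.0, kappa, dim)
d *= dim / d.sum()                          # normalize the trace, as described above
lams = delta + 1j * (1 - delta) * d         # Eq. (29); conjugates give equal magnitudes

betas = np.linspace(-0.95, 0.95, 381)
for eta in (0.1, 0.5, 1.0):
    rads = [radius(eta, b, lams) for b in betas]
    print(f"eta={eta}: optimal beta ~ {betas[int(np.argmin(rads))]:+.2f}")
```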

Appendix C Lemmas and Definitions

Recall that the spectral radius of a matrix M is the maximum magnitude of its eigenvalues,

ρ(M) := max{ |λ| : λ ∈ Sp(M) }.   (32)

For a symmetric matrix, this is equal to the spectral norm, which is the operator norm induced by the vector 2-norm. However, we are dealing with general matrices, so these two values may differ. The spectral radius is never larger than the spectral norm, but it is not a norm itself, as illustrated by the example below:

ρ([[0, 1], [0, 0]]) = 0  <  1 = ‖[[0, 1], [0, 0]]‖₂,

where we used the fact that the spectral norm is the square root of the largest eigenvalue of MᵀM; since ρ vanishes on this non-zero matrix, it cannot be a norm.

In this section we will introduce three lemmas that we will use in the proofs of §D.

The first lemma is about the determinant of a block matrix.

Lemma 1.

Let A, B, C, D be four matrices such that C and D commute. Then

det([[A, B], [C, D]]) = det(AD − BC),   (33)

where det(M) denotes the determinant of M.

Proof.

See (Zhang, 2006, Section 0.3). ∎
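A quick numerical check of Lemma 1 (our script), using two polynomials in the same random matrix for C and D, which guarantees that they commute:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, B, X = (rng.standard_normal((n, n)) for _ in range(3))
C = X @ X + 2 * X            # C and D are polynomials in the same matrix X,
D = 3 * X + np.eye(n)        # hence C @ D == D @ C as the lemma requires
M = np.block([[A, B], [C, D]])
print(np.allclose(np.linalg.det(M), np.linalg.det(A @ D - B @ C)))   # True
```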

The second lemma is about the iterates of the simultaneous and the alternating methods introduced in §5 for the bilinear game. It shows that we can pick a subspace where the iterates will remain.

Lemma 2.

Let (θ_t, φ_t)_{t≥0} be the updates computed by the simultaneous (resp. alternating) gradient method with momentum (21) (resp. (23)). There exists a couple (θ*, φ*), solution of (18), depending only on the initialization (θ_0, φ_0), such that, for any t,

proj(θ_t, φ_t) = (θ*, φ*),   (34)

where proj denotes the projection onto the solution space of (18).
Proof of Lemma 2.

Let us start with the simultaneous updates (21).

Let A = UΣVᵀ be the SVD of A, where U and V are orthogonal matrices and

Σ = diag(σ_1, …, σ_r, 0, …, 0),   (35)

where r is the rank of A and σ_1, …, σ_r are the (positive) singular values of A. The update rules (21) imply that,

(36)

Consequently, for any t ≥ 0, we have that,

(37)

Since the solutions of (18) verify the following first order conditions:

(38)

one can set (θ*, φ*) as in (37) to be a couple of solutions of (18). By an immediate recurrence, using (36), we have that for any initialization there exists a couple (θ*, φ*) such that, for any t,

(39)

Consequently,

(40)

The proof for the alternated updates (23) is the same, since we only use the fact that the iterates stay in the span of interest. ∎

Lemma 3.

Let M be a square matrix and (ω_t)_{t≥0} a sequence such that ω_{t+1} = Mω_t. Then we have three cases of interest for the spectral radius ρ(M):

  • If ρ(M) < 1 and M is diagonalizable, then ‖ω_t‖ = O(ρ(M)^t) and the sequence converges to 0.

  • If ρ(M) > 1, then there exist initializations ω_0 such that ‖ω_t‖ → ∞.

  • If ρ(M) = 1 and M is diagonalizable, then the sequence (‖ω_t‖)_{t≥0} is bounded.

Proof.

In this proof, we write ‖·‖ for a norm on ℝ^n and treat the three cases separately:

  • If ρ(M) < 1:

    We have, for any t ≥ 0,