# GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.




## Introduction

Generative adversarial networks (GANs) Goodfellow:14nips have achieved outstanding results in generating realistic images Radford:15 ; Ledig:16 ; Isola:17 ; Arjovsky:17 ; Berthelot:17 and producing text Gulrajani:17 . GANs can learn complex generative models for which maximum likelihood or variational approximations are infeasible. Instead of the likelihood, a discriminator network serves as objective for the generative model, that is, the generator. GAN learning is a game between the generator, which constructs synthetic data from random variables, and the discriminator, which separates synthetic data from real world data. The generator's goal is to construct data in such a way that the discriminator cannot tell it apart from real world data. Thus, the discriminator tries to minimize the synthetic-real discrimination error while the generator tries to maximize this error. Since training GANs is a game whose solution is a Nash equilibrium, gradient descent may fail to converge Salimans:16 ; Goodfellow:14nips ; Goodfellow:17tutorial . Only local Nash equilibria are found, because gradient descent is a local optimization method. If there exists a local neighborhood around a point in parameter space where neither the generator nor the discriminator can unilaterally decrease their respective losses, then we call this point a local Nash equilibrium.

To characterize the convergence properties of training general GANs is still an open challenge Goodfellow:14criteria ; Goodfellow:17tutorial . For special GAN variants, convergence can be proved under certain assumptions Lim:17 ; Grnarova:17 ; Tolstikhin:17 . A prerequisite for many convergence proofs is local stability Kushner:03 , which was shown for GANs by Nagarajan and Kolter Nagarajan:17 for a min-max GAN setting. However, Nagarajan and Kolter require for their proof either rather strong and unrealistic assumptions or a restriction to a linear discriminator. Recent convergence proofs for GANs hold for expectations over training samples or for the number of examples going to infinity Li:17mmd ; Mroueh:17fisher ; Liu:17 ; Arora:17 , and thus do not consider mini-batch learning, which leads to a stochastic gradient Wang:17 ; Hjelm:17 ; Mescheder:17 ; Li:17 .

Recently, actor-critic learning has been analyzed using stochastic approximation. Prasad et al. Prasad:15 showed that a two time-scale update rule ensures that training reaches a stationary local Nash equilibrium if the critic learns faster than the actor. Convergence was proved via an ordinary differential equation (ODE), whose stable limit points coincide with stationary local Nash equilibria. We follow the same approach. We prove that GANs converge to a local Nash equilibrium when trained by a two time-scale update rule (TTUR), i.e., when discriminator and generator have separate learning rates. This also leads to better results in experiments. The main premise is that the discriminator converges to a local minimum when the generator is fixed. If the generator changes slowly enough, then the discriminator still converges, since the generator perturbations are small. Besides ensuring convergence, the performance may also improve, since the discriminator must first learn new patterns before they are transferred to the generator. In contrast, a generator that is overly fast drives the discriminator steadily into new regions without capturing its gathered information. In recent GAN implementations, the discriminator often learned faster than the generator. A new objective slowed down the generator to prevent it from overtraining on the current discriminator Salimans:16 . The Wasserstein GAN algorithm uses more update steps for the discriminator than for the generator Arjovsky:17 . We compare TTUR and standard GAN training. The left panel of Fig. 1 shows a stochastic gradient example on CelebA for original GAN training (orig), which often leads to oscillations, and for the TTUR. The right panel shows an example of a 4-node network flow problem of Zhang et al. Zhang:07 . The distance between the actual parameter and its optimum for a one time-scale update rule is shown across iterates. When the upper bounds on the errors are small, the iterates return to a neighborhood of the optimal solution, while for large errors the iterates may diverge (see also Appendix Section A2.3).

Our novel contributions in this paper are:

• The two time-scale update rule (TTUR) for GANs,

• A proof that GANs trained with TTUR converge to a stationary local Nash equilibrium,

• The description of Adam as heavy ball with friction and the resulting second order differential equation,

• A proof that GANs trained with TTUR and Adam converge to a stationary local Nash equilibrium,

• The introduction of the "Fréchet Inception Distance" (FID) to evaluate GANs, which is more consistent than the Inception Score.

## Two Time-Scale Update Rule for GANs

We consider a discriminator D(·; w) with parameter vector w and a generator G(·; θ) with parameter vector θ. Learning is based on a stochastic gradient g̃(θ, w) of the discriminator's loss function L_D and a stochastic gradient h̃(θ, w) of the generator's loss function L_G. The loss functions L_D and L_G can be the original ones as introduced in Goodfellow et al. Goodfellow:14nips , their improved versions Goodfellow:17tutorial , or recently proposed losses for GANs like the Wasserstein GAN Arjovsky:17 . Our setting is not restricted to min-max GANs, but is valid for all other, more general GANs for which the discriminator's loss function L_D is not necessarily related to the generator's loss function L_G. The gradients g̃ and h̃ are stochastic, since they use mini-batches of real world samples and synthetic samples which are randomly chosen. If the true gradients are g(θ, w) and h(θ, w), then we can define g̃ = g + M^(w) and h̃ = h + M^(θ) with random variables M^(w) and M^(θ). Thus, the gradients g̃ and h̃ are stochastic approximations to the true gradients. Consequently, we analyze convergence of GANs by two time-scale stochastic approximation algorithms. For a two time-scale update rule (TTUR), we use the learning rates b(n) and a(n) for the discriminator and the generator update, respectively:

$$ w_{n+1} = w_n + b(n)\left( g(\theta_n, w_n) + M^{(w)}_n \right), \qquad \theta_{n+1} = \theta_n + a(n)\left( h(\theta_n, w_n) + M^{(\theta)}_n \right). \tag{1} $$
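As an illustration, the updates of Eq. (1) can be sketched on a hypothetical toy problem in which the fast variable w tracks the attractor λ(θ) = θ of its ODE while the slow variable θ minimizes a quadratic loss; the loss functions, noise scale, and step-size schedules here are assumptions for illustration only:

```python
import numpy as np

# TTUR updates of Eq. (1) on a hypothetical toy problem (illustration only):
# the fast variable w has the ODE attractor lambda(theta) = theta,
# the slow variable theta minimizes f(theta) = theta^2.
rng = np.random.default_rng(0)
theta, w = 2.0, -1.0
for n in range(1, 20001):
    b = 0.5 / n ** 0.6                 # fast step size b(n) for w
    a = 0.5 / n ** 0.9                 # slow step size a(n) = o(b(n)) for theta
    g = -2.0 * (w - theta) + 0.01 * rng.standard_normal()   # noisy g(theta, w)
    h = -2.0 * theta + 0.01 * rng.standard_normal()         # noisy h(theta, w)
    w = w + b * g
    theta = theta + a * h
# Both iterates settle near the stationary point theta* = 0, w = lambda(theta*) = 0.
```

The schedules satisfy the summability and a(n) = o(b(n)) conditions used in the assumptions below.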

For more details on the following convergence proof and its assumptions see Appendix Section A2.1. To prove convergence of GANs learned by TTUR, we make the following assumptions (each actual assumption is stated first; the text that follows is commentary and explanation):

1. (A1) The gradients h and g are Lipschitz. Consequently, networks with Lipschitz smooth activation functions like ELUs (α = 1) Clevert:16 fulfill the assumption, but not ReLU networks.

2. (A2) ∑_n a(n) = ∞, ∑_n a²(n) < ∞, ∑_n b(n) = ∞, ∑_n b²(n) < ∞, a(n) = o(b(n)).

3. (A3) The stochastic gradient errors {M^(θ)_n} and {M^(w)_n} are martingale difference sequences w.r.t. the increasing σ-field F_n = σ(θ_l, w_l, M^(θ)_l, M^(w)_l, l ⩽ n), n ⩾ 0, with E[ ‖M^(θ)_n‖² | F_n ] ⩽ B₁ and E[ ‖M^(w)_n‖² | F_n ] ⩽ B₂, where B₁ and B₂ are positive deterministic constants. The original Assumption (A3) from Borkar 1997 follows from Lemma 2 in Bertsekas:00 (see also Ramaswamy:16 ). The assumption is fulfilled in the Robbins-Monro setting, where mini-batches are randomly sampled and the gradients are bounded.

4. (A4) For each θ, the ODE ẇ(t) = g(θ, w(t)) has a local asymptotically stable attractor λ(θ) within a domain of attraction such that λ is Lipschitz. The ODE θ̇(t) = h(θ(t), λ(θ(t))) has a local asymptotically stable attractor θ* within a domain of attraction. The discriminator must converge to a minimum for fixed generator parameters and the generator, in turn, must converge to a minimum for this fixed discriminator minimum. Borkar 1997 required unique global asymptotically stable equilibria Borkar:97 . The assumption of global attractors was relaxed to local attractors via Assumption (A6) and Theorem 2.7 in Karmakar & Bhatnagar Karmakar:17 ; for more details see Assumption (A6) in the Appendix Section A2.1.3. Here, the GAN objectives may serve as Lyapunov functions. These assumptions of locally stable ODEs can be ensured by an additional weight decay term in the loss function which increases the eigenvalues of the Hessian. Therefore, problems with a region-wise constant discriminator that has zero second order derivatives are avoided. For further discussion see Appendix Section A2 (C3).

5. (A5) sup_n ‖θ_n‖ < ∞ and sup_n ‖w_n‖ < ∞. Typically ensured by the objective or a weight decay term.

The next theorem has been proved in the seminal paper of Borkar 1997 Borkar:97 .

###### Theorem 1 (Borkar).

If the assumptions (A1)-(A5) are satisfied, then the updates Eq. (1) converge to (θ*, λ(θ*)) a.s.

The solution (θ*, λ(θ*)) is a stationary local Nash equilibrium Prasad:15 , since θ* as well as λ(θ*) are local asymptotically stable attractors with g(θ*, λ(θ*)) = 0 and h(θ*, λ(θ*)) = 0. An alternative approach to the proof of convergence, which uses the Poisson equation for ensuring a solution to the fast update rule, can be found in the Appendix Section A2.1.2. This approach assumes a linear update function in the fast update rule which, however, can be a linear approximation to a nonlinear gradient Konda:02 ; Konda:03 . For the rate of convergence see Appendix Section A2.2, where Section A2.2.1 focuses on linear and Section A2.2.2 on nonlinear updates. For equal time-scales it can only be proven that the updates revisit an environment of the solution infinitely often, which, however, can be very large Zhang:07 ; DiCastro:10 . For more details on the analysis of equal time-scales see Appendix Section A2.3. The main idea of the proof of Borkar Borkar:97 is to use perturbed ODEs according to Hirsch 1989 Hirsch:89 (see also Appendix Section C of Bhatnagar, Prasad, & Prashanth 2013 Bhatnagar:13 ). The proof relies on the fact that there eventually is a time point when the perturbation of the slow update rule is small enough to allow the fast update rule to converge. For experiments with TTUR, we aim at finding learning rates such that the slow update is small enough to allow the fast update to converge. Typically, the slow update is the generator and the fast update the discriminator. We have to adjust the two learning rates such that the generator does not affect discriminator learning in an undesired way and perturb it too much. However, even a larger learning rate for the generator than for the discriminator may ensure that the discriminator has low perturbations. Learning rates cannot be translated directly into perturbation since the perturbation of the discriminator by the generator is different from the perturbation of the generator by the discriminator.

## Adam Follows an HBF ODE and Ensures TTUR Convergence

Figure: Heavy Ball with Friction, where the ball with mass overshoots the local minimum and settles at the flat minimum.

In our experiments, we aim at using Adam stochastic approximation to avoid mode collapsing. GANs suffer from "mode collapsing", where large masses of probability are mapped onto a few modes that cover only small regions. While these regions represent meaningful samples, the variety of the real world data is lost and only few prototype samples are generated. Different methods have been proposed to avoid mode collapsing Che:17 ; Metz:16 . We obviate mode collapsing by using Adam stochastic approximation Kingma:14 . Adam can be described as Heavy Ball with Friction (HBF) (see below), since it averages over past gradients. This averaging corresponds to a velocity that makes the generator resistant to getting pushed into small regions. As an HBF method, Adam typically overshoots small local minima that correspond to mode collapse and can find flat minima which generalize well Hochreiter:97nc1 . The figure above depicts the dynamics of HBF, where the ball settles at a flat minimum. Next, we analyze whether GANs trained with TTUR converge when using Adam. For more details see Appendix Section A3.

We recapitulate the Adam update rule at step n, with learning rate a, exponential averaging factors β₁ for the first and β₂ for the second moment of the gradient g_n = ∇f(θ_{n−1}):

$$ g_n \longleftarrow \nabla f(\theta_{n-1}), \quad m_n \longleftarrow \frac{\beta_1}{1-\beta_1^n}\, m_{n-1} + \frac{1-\beta_1}{1-\beta_1^n}\, g_n, \quad v_n \longleftarrow \frac{\beta_2}{1-\beta_2^n}\, v_{n-1} + \frac{1-\beta_2}{1-\beta_2^n}\, g_n \odot g_n, \quad \theta_n \longleftarrow \theta_{n-1} - a\, m_n / (\sqrt{v_n} + \epsilon) \,, \tag{2} $$

where the following operations are meant componentwise: the product ⊙, the square root √v_n, and the division in the last line. Instead of the learning rate a, we introduce the damping coefficient a(n) with a(n) = a n^{−τ} for τ ∈ (0, 1]. Adam has parameters β₁ for averaging the gradient and β₂, parametrized by a positive α, for averaging the squared gradient. These parameters can be considered as defining a memory for Adam. To characterize β₁ and β₂ in the following, we define the exponential memory r(n) = r and the polynomial memory r(n) = r/∑_{l=1}^{n} a(l) for some positive constant r. The next theorem describes Adam by a differential equation, which in turn allows to apply the idea of perturbed ODEs to TTUR. Consequently, learning GANs with TTUR and Adam converges.
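As a concrete sanity check, the recursion of Eq. (2) corresponds to the familiar Adam step with explicit bias correction (Kingma & Ba); a minimal componentwise sketch, with toy gradient values chosen only for illustration:

```python
import numpy as np

def adam_step(theta, m, v, grad, n, a=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam step with explicit bias correction.
    All operations are componentwise, as in Eq. (2)."""
    m = b1 * m + (1.0 - b1) * grad           # first-moment average of g_n
    v = b2 * v + (1.0 - b2) * grad * grad    # second-moment average of g_n ⊙ g_n
    m_hat = m / (1.0 - b1 ** n)              # bias-corrected first moment
    v_hat = v / (1.0 - b2 ** n)              # bias-corrected second moment
    theta = theta - a * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# On the first step from zero moments the update direction is -a * sign(grad),
# since m_hat = grad and sqrt(v_hat) = |grad| (up to eps).
theta0 = np.array([1.0, -2.0])
theta1, m, v = adam_step(theta0, np.zeros(2), np.zeros(2),
                         np.array([0.5, -0.5]), n=1, a=0.1)
```

The sign-like behavior of the first step illustrates the normalization by the second moment discussed in the proof below.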

###### Theorem 2.

If Adam is used with β₁ = 1 − a(n+1) r(n) and β₂ = 1 − α a(n+1) r(n), and with ∇f as the full gradient of the lower bounded, continuously differentiable objective f, then for stationary second moments of the gradient, Adam follows the differential equation for Heavy Ball with Friction (HBF):

$$ \ddot{\theta}_t + a(t)\, \dot{\theta}_t + \nabla f(\theta_t) = 0 \,. \tag{3} $$

###### Proof.

Gadat et al. derived a discrete and stochastic version of Polyak’s Heavy Ball method Polyak:64 , the Heavy Ball with Friction (HBF) Gadat:16 :

$$ \theta_{n+1} = \theta_n - a(n+1)\, m_n \,, \qquad m_{n+1} = \left(1 - a(n+1)\, r(n)\right) m_n + a(n+1)\, r(n)\left(\nabla f(\theta_n) + M_{n+1}\right). \tag{4} $$

These update rules are the first moment update rules of Adam Kingma:14 . The HBF can be formulated as the differential equation Eq. (3) Gadat:16 . Gadat et al. showed that the update rules Eq. (4) converge for loss functions f with at most quadratic growth and stated that convergence can be proved for gradients ∇f that are Lipschitz Gadat:16 . Convergence has been proved for continuously differentiable f that is quasiconvex (Theorem 3 in Goudou & Munier Goudou:09 ). Convergence has been proved for ∇f that is Lipschitz and f bounded from below (Theorem 3.1 in Attouch et al. Attouch:00 ). Adam normalizes the average m_n by the second moments v_n of the gradient g_n: m_n is componentwise divided by the square root of the components of v_n. We assume that the second moments of g_n are stationary, i.e., v = E[g_n ⊙ g_n]. In this case the normalization can be considered as additional noise since the normalization factor randomly deviates from its mean. In the HBF interpretation the normalization by √v_n corresponds to introducing gravitation. We obtain

$$ v_n = \frac{1-\beta_2}{1-\beta_2^n} \sum_{l=1}^{n} \beta_2^{\,n-l}\, g_l \odot g_l \,, \qquad \Delta v_n = v_n - v = \frac{1-\beta_2}{1-\beta_2^n} \sum_{l=1}^{n} \beta_2^{\,n-l}\, \left(g_l \odot g_l - v\right). \tag{5} $$

For a stationary second moment v and β₂ → 1, we have Δv_n → 0. We use a componentwise linear approximation to Adam's second moment normalization, 1/√(v + Δv_n) ≈ 1/√v − Δv_n/(2 v^{3/2}), where all operations are meant componentwise. For a stationary second moment v, the deviation Δv_n is a martingale difference sequence with a bounded second moment. Therefore the deviation from the normalization by 1/√v can be subsumed into M_{n+1} in the update rules Eq. (4). The factor 1/√v can be componentwise incorporated into the gradient ∇f, which corresponds to rescaling the parameters without changing the minimum. ∎

According to Attouch et al. Attouch:00 the energy, that is, a Lyapunov function, is E(t) = (1/2) |θ̇(t)|² + f(θ(t)) with Ė(t) = −a(t) |θ̇(t)|² ⩽ 0. Since Adam can be expressed as a differential equation and has a Lyapunov function, the idea of perturbed ODEs Borkar:97 ; Hirsch:89 ; Borkar:00 carries over to Adam. Therefore the convergence of Adam with TTUR can be proved via two time-scale stochastic approximation analysis like in Borkar Borkar:97 for stationary second moments of the gradient.
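The HBF dynamics of Eq. (3) and the decay of the energy E can be checked numerically; this sketch integrates the ODE for the hypothetical objective f(θ) = θ²/2 with constant damping (both choices are illustrative assumptions):

```python
import numpy as np

# Semi-implicit Euler integration of the HBF ODE (3) for the hypothetical
# objective f(theta) = theta^2 / 2 with constant damping a(t) = 0.5.
# The energy E = 0.5 * vel^2 + f(theta) acts as a Lyapunov function.
a_damp, dt = 0.5, 0.01
theta, vel = 3.0, 0.0
energy = [0.5 * vel ** 2 + 0.5 * theta ** 2]
for _ in range(5000):
    vel += dt * (-a_damp * vel - theta)   # theta_ddot = -a(t) theta_dot - grad f
    theta += dt * vel
    energy.append(0.5 * vel ** 2 + 0.5 * theta ** 2)
# The trajectory settles at the minimum theta = 0 and the energy decays to 0.
```

The ball first overshoots the minimum (the velocity term) before friction dissipates the energy, which is the intuition behind Adam escaping small sharp minima.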

In the Appendix we further discuss the convergence of two time-scale stochastic approximation algorithms with additive noise, linear update functions depending on Markov chains, nonlinear update functions, and updates depending on controlled Markov processes. Furthermore, the Appendix presents work on the rate of convergence for both linear and nonlinear update rules using similar techniques as the local stability analysis of Nagarajan and Kolter Nagarajan:17 . Finally, we elaborate more on equal time-scale updates, which are investigated for saddle point problems and actor-critic learning.

## Experiments

##### Performance Measure.

Before presenting the experiments, we introduce a quality measure for models learned by GANs. The objective of generative learning is that the model produces data which matches the observed data. Therefore, each distance between the probability of observing real world data and the probability of generating model data can serve as performance measure for generative models. However, defining appropriate performance measures for generative models is difficult Theis:15 . The best known measure is the likelihood, which can be estimated by annealed importance sampling Wu:16 . However, the likelihood heavily depends on the noise assumptions for the real data and can be dominated by single samples Theis:15 . Other approaches like density estimates have drawbacks, too Theis:15 . A well-performing approach to measure the performance of GANs is the "Inception Score" which correlates with human judgment Salimans:16 . Generated samples are fed into an inception model that was trained on ImageNet. Images with meaningful objects are supposed to have low label (output) entropy, that is, they belong to few object classes. On the other hand, the entropy across images should be high, that is, the variance over the images should be large. A drawback of the Inception Score is that the statistics of real world samples are not used and compared to the statistics of synthetic samples. Next, we improve the Inception Score. Let p(.) be the distribution of model samples and p_w(.) the distribution of the samples from the real world. The equality p(.) = p_w(.) holds except for a non-measurable set if and only if ∫ p(x) f(x) dx = ∫ p_w(x) f(x) dx for a basis f(.) spanning the function space in which p(.) and p_w(.) live. These equalities of expectations are used to describe distributions by moments or cumulants, where f(x) are polynomials of the data x. We generalize these polynomials by replacing x by the coding layer of an inception model in order to obtain vision-relevant features. For practical reasons we only consider the first two polynomials, that is, the first two moments: mean and covariance. The Gaussian is the maximum entropy distribution for given mean and covariance, therefore we assume the coding units to follow a multidimensional Gaussian. The difference of two Gaussians (synthetic and real-world images) is measured by the Fréchet distance Frechet:57 , also known as Wasserstein-2 distance Wasserstein:69 . We call the Fréchet distance between the Gaussian with mean m and covariance C obtained from p(.) and the Gaussian (m_w, C_w) obtained from p_w(.) the "Fréchet Inception Distance" (FID), which is given by Dowson:82 :

$$ d^2\left((m, C), (m_w, C_w)\right) = \|m - m_w\|_2^2 + \mathrm{Tr}\left(C + C_w - 2 \left(C\, C_w\right)^{1/2}\right). \tag{6} $$
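Eq. (6) can be implemented directly; a minimal numpy sketch, assuming PSD covariance matrices and computing Tr((C C_w)^{1/2}) from the eigenvalues of the matrix product:

```python
import numpy as np

def fid(m, C, m_w, C_w):
    """Fréchet distance of Eq. (6) between Gaussians (m, C) and (m_w, C_w).
    Tr((C C_w)^(1/2)) is obtained from the eigenvalues of C @ C_w, which are
    real and non-negative when both covariances are PSD."""
    diff = m - m_w
    eigvals = np.linalg.eigvals(C @ C_w)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return diff @ diff + np.trace(C) + np.trace(C_w) - 2.0 * tr_sqrt

# Identical Gaussians have distance 0; shifting the mean of a 1-D unit
# Gaussian by 1 gives distance 1, since the trace terms cancel.
d_same = fid(np.zeros(3), np.eye(3), np.zeros(3), np.eye(3))
d_shift = fid(np.array([0.0]), np.eye(1), np.array([1.0]), np.eye(1))
```

In practice a matrix square root routine (e.g. `scipy.linalg.sqrtm`) is often used instead of the eigenvalue shortcut for better numerical robustness.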

Next we show that the FID is consistent with increasing disturbances and human judgment. Fig. 2 evaluates the FID for Gaussian noise, Gaussian blur, implanted black rectangles, swirled images, salt and pepper noise, and the CelebA dataset contaminated by ImageNet images. The FID captures the disturbance level very well. In the experiments we used the FID to evaluate the performance of GANs. For more details and a comparison between FID and Inception Score see Appendix Section A1, where we show that FID is more consistent with the noise level than the Inception Score.

##### Model Selection and Evaluation.

We compare the two time-scale update rule (TTUR) for GANs with the original GAN training to see whether TTUR improves the convergence speed and performance of GANs. We have selected Adam stochastic optimization to reduce the risk of mode collapsing. The advantage of Adam has been confirmed by MNIST experiments, where Adam indeed considerably reduced the cases for which we observed mode collapsing. Although TTUR ensures that the discriminator converges during learning, practicable learning rates must be found for each experiment. We face a trade-off, since the learning rates should be small enough (e.g. for the generator) to ensure convergence but at the same time should be large enough to allow fast learning. For each of the experiments, the learning rates have been optimized to be large while still ensuring stable training, which is indicated by a decreasing FID or Jensen-Shannon-divergence (JSD). We further fixed the time point for stopping training to the update step when the FID or Jensen-Shannon-divergence of the best models was no longer decreasing. For some models, we observed that the FID diverges or starts to increase at a certain time point. An example of this behaviour is shown in Fig. 4. The performance of generative models is evaluated via the Fréchet Inception Distance (FID) introduced above. For the One Billion Word experiment, the normalized JSD served as performance measure. For computing the FID, we propagated all images from the training dataset through the pretrained Inception-v3 model following the computation of the Inception Score Salimans:16 , however, we use the last pooling layer as coding layer. For this coding layer, we calculated the mean m_w and the covariance matrix C_w. Thus, we approximate the first and second central moment of the function given by the Inception coding layer under the real world distribution. To approximate these moments for the model distribution, we generate 50,000 images, propagate them through the Inception-v3 model, and then compute the mean m and the covariance matrix C. For computational efficiency, we evaluate the FID every 1,000 DCGAN mini-batch updates, every 5,000 WGAN-GP outer iterations for the image experiments, and every 100 outer iterations for the WGAN-GP language model. For the one time-scale updates, a WGAN-GP outer iteration for the image model consists of five discriminator mini-batches, and of ten discriminator mini-batches for the language model, where we follow the original implementation. For TTUR, however, the discriminator is updated only once per iteration. We repeat the training for each single time-scale (orig) and TTUR learning rate eight times for the image datasets and ten times for the language benchmark. In addition to the mean FID training progress, we show the minimum and maximum FID over all runs at each evaluation time-step. For more details, implementations and further results see Appendix Sections A4 and A6.

##### Simple Toy Data.

We first want to demonstrate the difference between a single time-scale update rule and TTUR on a simple toy min/max problem in which a saddle point should be found. The objective in Fig. 3 (left) has a saddle point at the origin and fulfills assumption (A4). The norm of the parameter vector measures its distance to the saddle point. We update the parameters by gradient descent in one variable and gradient ascent in the other, using additive Gaussian noise in order to simulate a stochastic update. The updates should converge to the saddle point, at which the objective value and the parameter norm are zero. In Fig. 3 (right), the first two rows show one time-scale update rules. The large learning rate in the first row diverges and has large fluctuations. The smaller learning rate in the second row converges, but more slowly than the TTUR in the third row, which slows down one of the two updates. The TTUR in the fourth row, with the roles of fast and slow updates reversed, also converges, but more slowly.
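A minimal sketch of such a noisy min/max update; the objective f(x, y) = x²/2 − y²/2 + xy is a hypothetical stand-in for the paper's toy problem (not its exact objective), with its saddle at the origin:

```python
import numpy as np

# Noisy gradient descent in x / ascent in y on the hypothetical objective
# f(x, y) = x^2/2 - y^2/2 + x*y, whose saddle point is the origin.
rng = np.random.default_rng(1)
x, y = 2.0, -2.0
for n in range(1, 50001):
    b = 1.0 / n ** 0.6                 # fast time scale for the descent variable
    a = 1.0 / n ** 0.9                 # slow time scale for the ascent variable
    x -= b * ((x + y) + 0.1 * rng.standard_normal())   # descent in x
    y += a * ((x - y) + 0.1 * rng.standard_normal())   # ascent in y
dist = float(np.hypot(x, y))           # distance to the saddle point
```

Here the fast variable tracks its attractor x = −y, after which the slow ascent variable contracts toward zero, so the iterates approach the saddle point.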

##### DCGAN on Image Data.

We test TTUR for the deep convolutional GAN (DCGAN) Radford:15 on the CelebA, CIFAR-10, SVHN and LSUN Bedrooms datasets. Fig. 4 shows the FID during learning with the original learning method (orig) and with TTUR. The original training method is faster at the beginning, but TTUR eventually achieves better performance. DCGAN trained with TTUR consistently reaches a lower FID than the original method, and for CelebA and LSUN Bedrooms all one time-scale runs diverge. For DCGAN the learning rate of the generator is larger than that of the discriminator, which, however, does not contradict the TTUR theory (see the Appendix Section A5). In Table 1 we report the best FID with TTUR and one time-scale training for optimized numbers of updates and learning rates. TTUR consistently outperforms standard training and is more stable.

##### WGAN-GP on Image Data.

We used the WGAN-GP image model Gulrajani:17 to test TTUR with the CIFAR-10 and LSUN Bedrooms datasets. In contrast to the original code, where the discriminator is trained five times for each generator update, TTUR updates the discriminator only once, therefore we align the training progress with wall-clock time. The learning rate for the original training was optimized to be large while still leading to stable learning. TTUR can use a higher learning rate for the discriminator since TTUR stabilizes learning. Fig. 5 shows the FID during learning with the original learning method and with TTUR. Table 1 shows the best FID with TTUR and one time-scale training for optimized numbers of iterations and learning rates. Again, TTUR reaches lower FIDs than one time-scale training.

##### WGAN-GP on Language Data.

Finally, the One Billion Word Benchmark Chelba:13 serves to evaluate TTUR on WGAN-GP. The character-level generative language model is a 1D convolutional neural network (CNN) which maps a latent vector to a sequence of one-hot character vectors of dimension 32 given by the maximum of a softmax output. The discriminator is also a 1D CNN applied to sequences of one-hot vectors of 32 characters. Since the FID criterion only works for images, we measured the performance by the Jensen-Shannon-divergence (JSD) between the model and the real world distribution, as has been done previously Gulrajani:17 . In contrast to the original code, where the critic is trained ten times for each generator update, TTUR updates the discriminator only once, therefore we align the training progress with wall-clock time. The learning rate for the original training was optimized to be large while still leading to stable learning. TTUR can use a higher learning rate for the discriminator since TTUR stabilizes learning. We report the normalized mean JSD of ten runs for the 4- and 6-gram word evaluation for original training and TTUR training in Fig. 6. In Table 1 we report the best JSD at an optimal time-step, where TTUR outperforms the standard training for both measures. The improvement of TTUR over original training on the 6-gram statistics shows that TTUR enables learning to generate more subtle pseudo-words that better resemble real words.

## Conclusion

For learning GANs, we have introduced the two time-scale update rule (TTUR), which we have proved to converge to a stationary local Nash equilibrium. Then we described Adam stochastic optimization as a heavy ball with friction (HBF) dynamics, which shows that Adam converges and that Adam tends to find flat minima while avoiding small local minima. A second order differential equation describes the learning dynamics of Adam as an HBF system. Via this differential equation, the convergence of GANs trained with TTUR to a stationary local Nash equilibrium can be extended to Adam. Finally, to evaluate GANs, we introduced the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments we have compared GANs trained with TTUR to conventional GAN training with a one time-scale update rule on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark. TTUR outperforms conventional GAN training consistently in all experiments.

## Acknowledgment

This work was supported by NVIDIA Corporation, Bayer AG with Research Agreement 09/2017, Zalando SE with Research Agreement 01/2016, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, IWT research grant IWT150865 (Exaptation), H2020 project grant 671555 (ExCAPE) and FWF grant P 28660-N31.


## A1 Fréchet Inception Distance (FID)

We improve the Inception Score for comparing the results of GANs Salimans:16 . The Inception Score has the disadvantage that it does not use the statistics of real world samples and compare them to the statistics of synthetic samples. Let p(.) be the distribution of model samples and p_w(.) the distribution of the samples from the real world. The equality p(.) = p_w(.) holds except for a non-measurable set if and only if ∫ p(x) f(x) dx = ∫ p_w(x) f(x) dx for a basis f(.) spanning the function space in which p(.) and p_w(.) live. These equalities of expectations are used to describe distributions by moments or cumulants, where f(x) are polynomials of the data x. We replace x by the coding layer of an Inception model in order to obtain vision-relevant features and consider polynomials of the coding unit functions. For practical reasons we only consider the first two polynomials, that is, the first two moments: mean and covariance. The Gaussian is the maximum entropy distribution for given mean and covariance, therefore we assume the coding units to follow a multidimensional Gaussian. The difference of two Gaussians is measured by the Fréchet distance Frechet:57 , also known as Wasserstein-2 distance Wasserstein:69 . The Fréchet distance between the Gaussian with mean m and covariance C obtained from p(.) and the Gaussian (m_w, C_w) obtained from p_w(.) is called the "Fréchet Inception Distance" (FID), which is given by Dowson:82 :

$$ d^2\left((m, C), (m_w, C_w)\right) = \|m - m_w\|_2^2 + \mathrm{Tr}\left(C + C_w - 2 \left(C\, C_w\right)^{1/2}\right). \tag{7} $$

Next we show that the FID is consistent with increasing disturbances and human judgment on the CelebA dataset. We computed the reference statistics (m_w, C_w) on all CelebA images, while for computing (m, C) we used 50,000 randomly selected samples. We considered the following disturbances of the image X:

1. Gaussian noise: We constructed a matrix N with Gaussian noise scaled to [0, 255]. The noisy image is computed as (1 − α) X + α N for disturbance level α. The larger α is, the larger is the noise added to the image, and the larger is the disturbance of the image.

2. Gaussian blur: The image X is convolved with a Gaussian kernel with standard deviation α. The larger α is, the larger is the disturbance of the image, that is, the more the image is smoothed.

3. Black rectangles: Five black rectangles are added to an image X at randomly chosen locations. The rectangles cover parts of the image; their size is determined by the disturbance level α. The larger α is, the larger is the disturbance of the image, that is, the more of the image is covered by black rectangles.

4. Swirl: Parts of the image are transformed as a spiral, that is, as a swirl (whirlpool effect). Consider the coordinate (x, y) in the noisy (swirled) image for which we want to find the color. Towards this end we need the reverse mapping for the swirl transformation, which gives the location (x′, y′) that is mapped to (x, y). We first compute polar coordinates relative to a center (x_0, y_0), given by the angle θ = atan2(y − y_0, x − x_0) and the radius r = √((x − x_0)² + (y − y_0)²). We transform the angle to θ′ by adding a swirl term that is scaled by α and decays exponentially with the radius r relative to the swirl extent ρ. Here α is a parameter for the amount of swirl and ρ indicates the swirl extent in pixels. The original coordinates, where the color for (x, y) can be found, are x′ = x_0 + r cos(θ′) and y′ = y_0 + r sin(θ′). We set (x_0, y_0) to the center of the image and fix the swirl extent ρ. The disturbance level is given by the amount of swirl α. The larger α is, the larger is the disturbance of the image via the amount of swirl.

5. Salt and pepper noise: Some pixels of the image are set to black or white, where black is chosen with 50% probability (the same holds for white). Pixels are randomly chosen for being flipped to white or black, where the ratio of pixels flipped to white or black is given by the noise level α. The larger α is, the larger is the noise added to the image via flipping pixels to white or black, and the larger is the disturbance level.

6. ImageNet contamination: From each of the 1,000 ImageNet classes, 5 images are randomly chosen, which gives 5,000 ImageNet images. The images are ensured to be RGB and to have a minimal size of 256x256. A percentage α of the CelebA images is replaced by ImageNet images. α = 0 means all images are from CelebA, α = 0.25 means that 75% of the images are from CelebA and 25% from ImageNet, etc. The larger α is, the larger is the disturbance of the CelebA dataset by contaminating it with ImageNet images. The larger the disturbance level is, the more the dataset deviates from the reference real world dataset.
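Two of the disturbances above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the original evaluation code; the 8-bit value range [0, 255] and the min-max scaling of the noise matrix are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)  # stand-in image

def gaussian_noise(X, alpha, rng):
    """Disturbance 1: blend the image with a noise matrix scaled to [0, 255]."""
    N = rng.normal(size=X.shape)
    N = (N - N.min()) / (N.max() - N.min()) * 255.0  # scale noise to [0, 255]
    return (1.0 - alpha) * X + alpha * N

def salt_and_pepper(X, alpha, rng):
    """Disturbance 5: flip a fraction alpha of the pixels to black or white."""
    Y = X.copy()
    mask = rng.random(X.shape[:2]) < alpha   # pixels chosen for flipping
    color = rng.random(X.shape[:2]) < 0.5    # 50% black, 50% white
    Y[mask & color] = 0.0
    Y[mask & ~color] = 255.0
    return Y

noisy = gaussian_noise(X, 0.25, rng)
sp = salt_and_pepper(X, 0.1, rng)
```
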

We compare the Inception Score Salimans:16 with the FID. The Inception Score for m samples X_i and K classes y_k is

 exp( (1/m) ∑_{i=1}^m ∑_{k=1}^K p(y_k ∣ X_i) log [ p(y_k ∣ X_i) / p(y_k) ] ) . (8)

The FID is a distance, while the Inception Score is a score. To compare FID and Inception Score, we transform the Inception Score into a distance, which we call the “Inception Distance” (IND). This transformation into a distance is possible since the Inception Score has a maximal value. For zero probability p(y_k ∣ X_i) = 0, we set p(y_k ∣ X_i) log [ p(y_k ∣ X_i) / p(y_k) ] = 0. We can bound the log-term by

 log [ p(y_k ∣ X_i) / p(y_k) ] ⩽ log [ 1 / (1/m) ] = log m . (9)

Using this bound, we obtain an upper bound on the Inception Score:

 exp( (1/m) ∑_{i=1}^m ∑_{k=1}^K p(y_k ∣ X_i) log [ p(y_k ∣ X_i) / p(y_k) ] ) (10)
 ⩽ exp( log m · (1/m) ∑_{i=1}^m ∑_{k=1}^K p(y_k ∣ X_i) ) (11)
 = exp( log m · (1/m) ∑_{i=1}^m 1 ) = m . (12)

The upper bound m is tight and achieved if m ⩽ K, every sample is from a different class, and each sample is classified correctly with probability 1. The IND is computed as “IND = m − Inception Score”, therefore the IND is zero for a perfect subset of the ImageNet with m samples, where each sample stems from a different class. Both distances should therefore increase with increasing disturbance level. In Figure A7 we present the evaluation for each kind of disturbance. The larger the disturbance level is, the larger the FID and IND should be. In Figures A8, A9, A10, and A11 we show examples of images generated with DCGAN trained on CelebA with FIDs 500, 300, 133, 100, 45, 13, and FID 3 achieved with WGAN-GP on CelebA.
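The score in Eq. (8), its upper bound m, and the IND transformation can be checked numerically. A small sketch with a hypothetical predicted-probability matrix P (rows are samples, columns are classes):

```python
import numpy as np

def inception_score(P):
    """Inception Score, Eq. (8); P[i, k] = p(y_k | X_i), rows sum to one."""
    p_y = P.mean(axis=0)  # marginal class probabilities p(y_k)
    with np.errstate(divide="ignore", invalid="ignore"):
        logratio = np.where(P > 0, np.log(P / p_y), 0.0)  # 0 log 0 := 0
    return float(np.exp((P * logratio).sum(axis=1).mean()))

m, K = 10, 10
P_perfect = np.eye(m)                 # each sample in its own class, prob. 1
P_uniform = np.full((m, K), 1.0 / K)  # completely uninformative predictions

score = inception_score(P_perfect)    # attains the upper bound m
ind = m - score                       # "IND = m - Inception Score" -> 0
print(score, ind, inception_score(P_uniform))
```

The uniform-prediction case gives the worst score of 1, hence the largest IND of m − 1.
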

## A2 Two Time-Scale Stochastic Approximation Algorithms

Stochastic approximation algorithms are iterative procedures to find a root or a stationary point (minimum, maximum, saddle point) of a function when only noisy observations of its values or its derivatives are provided. Two time-scale stochastic approximation algorithms are two coupled iterations with different step sizes. For proving convergence of these interwoven iterates it is assumed that one step size is considerably smaller than the other. The slower iterate (the one with the smaller step size) is assumed to be slow enough to allow the fast iterate to converge while being perturbed by the slower one. The perturbations by the slow iterate should be small enough to ensure convergence of the faster one.

The iterates map, at time step n, the fast variable w_n and the slow variable θ_n to their new values:

 θ_{n+1} = θ_n + a(n) ( h(θ_n, w_n, Z^{(θ)}_n) + M^{(θ)}_n ) , (13)
 w_{n+1} = w_n + b(n) ( g(θ_n, w_n, Z^{(w)}_n) + M^{(w)}_n ) . (14)

The iterates use

• h: mapping for the slow iterate Eq. (13),

• g: mapping for the fast iterate Eq. (14),

• a(n): step size for the slow iterate Eq. (13),

• b(n): step size for the fast iterate Eq. (14),

• M^{(θ)}_n: additive random Markov process for the slow iterate Eq. (13),

• M^{(w)}_n: additive random Markov process for the fast iterate Eq. (14),

• Z^{(θ)}_n: random Markov process for the slow iterate Eq. (13),

• Z^{(w)}_n: random Markov process for the fast iterate Eq. (14).
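A toy instance of the iterates Eq. (13) and Eq. (14) illustrates the two time-scale idea. The choices below — h, g, the step-size schedules, and Gaussian noise standing in for M^{(θ)}_n and M^{(w)}_n (no Markov processes Z) — are purely illustrative: the fast iterate tracks w ≈ θ, and the slow iterate then drives θ to 0:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, w = 1.0, -1.0
for n in range(1, 200001):
    a = 1.0 / n             # slow step size a(n), with a(n) = o(b(n))
    b = 1.0 / n ** (2 / 3)  # fast step size b(n)
    theta += a * (-w + 0.01 * rng.normal())       # h(theta, w) = -w
    w += b * ((theta - w) + 0.01 * rng.normal())  # g(theta, w) = theta - w
print(theta, w)  # both approach the equilibrium (0, 0)
```

For fixed θ the fast ODE ẇ = θ − w has the globally stable equilibrium λ(θ) = θ, and the slow ODE θ̇ = −λ(θ) = −θ has the stable equilibrium 0, so this toy problem fits the setting analyzed below.
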

### a2.1 Convergence of Two Time-Scale Stochastic Approximation Algorithms

The first result is from Borkar 1997 Borkar:97 , which was generalized in Konda and Borkar 1999 Konda:99 . Borkar considered the iterates:

 θ_{n+1} = θ_n + a(n) ( h(θ_n, w_n) + M^{(θ)}_n ) , (15)
 w_{n+1} = w_n + b(n) ( g(θ_n, w_n) + M^{(w)}_n ) . (16)
##### Assumptions.

We make the following assumptions:


1. (A1) Assumptions on the update functions: The functions h and g are Lipschitz.

2. (A2) Assumptions on the learning rates:

 ∑_n a(n) = ∞ , ∑_n a²(n) < ∞ , (17)
 ∑_n b(n) = ∞ , ∑_n b²(n) < ∞ , (18)
 a(n) = o(b(n)) . (19)
3. (A3) Assumptions on the noise: For the increasing σ-field

 F_n = σ(θ_l, w_l, M^{(θ)}_l, M^{(w)}_l, l ⩽ n) , n ⩾ 0 ,

the sequences of random variables M^{(θ)}_n and M^{(w)}_n satisfy

 ∑_n a(n) M^{(θ)}_n < ∞ a.s. , (20)
 ∑_n b(n) M^{(w)}_n < ∞ a.s. (21)
4. (A4) Assumption on the existence of a solution of the fast iterate: For each θ, the ODE

 ẇ(t) = g(θ, w(t)) (22)

has a unique global asymptotically stable equilibrium λ(θ) such that the function λ is Lipschitz.

5. (A5) Assumption on the existence of a solution of the slow iterate: The ODE

 θ̇(t) = h(θ(t), λ(θ(t))) (23)

has a unique global asymptotically stable equilibrium θ*.

6. (A6) Assumption of bounded iterates:

 sup_n ∥θ_n∥ < ∞ , (24)
 sup_n ∥w_n∥ < ∞ . (25)
##### Convergence Theorem

The next theorem is from Borkar 1997 Borkar:97 .

###### Theorem 3 (Borkar).

If the assumptions are satisfied, then the iterates Eq. (15) and Eq. (16) converge to (θ*, λ(θ*)) a.s.


1. (C1) According to Lemma 2 in Bertsekas:00 , Assumption (A3) is fulfilled if M^{(θ)}_n is a martingale difference sequence w.r.t. F_n with

 E[∥M^{(θ)}_n∥² ∣ F_n] ⩽ B_1

and M^{(w)}_n is a martingale difference sequence w.r.t. F_n with

 E[∥M^{(w)}_n∥² ∣ F_n] ⩽ B_2 ,

where B_1 and B_2 are positive deterministic constants.

2. (C2) Assumption (A3) holds for mini-batch learning, which is the most frequent case of stochastic gradient descent. The batch gradient is the average of the gradients of all N training samples, while the mini-batch gradient for batch size s is the average of the gradients of s samples whose indexes are randomly and uniformly chosen. The noise is the difference between the mini-batch gradient and the batch gradient. Since the indexes are chosen without knowing past events, we have a martingale difference sequence. For bounded gradients the noise is bounded.

3. (C3) We address assumption (A4) with weight decay in two ways: (I) Weight decay avoids problems with a discriminator that is region-wise constant and, therefore, does not have a locally stable generator. If the generator is perfect, then the discriminator is 0.5 everywhere. For a generator with mode collapse, the discriminator (i) is 1 in regions without generator examples, (ii) is 0 in regions with generator examples only, and (iii) is equal to the local ratio of real world examples in regions with both generator and real world examples. Since the discriminator is locally constant, the generator has gradient zero and cannot improve. The discriminator cannot improve either, since it has minimal error given the current generator. However, without weight decay the Nash equilibrium is not stable, since the second order derivatives are zero, too. (II) Weight decay avoids that the generator is driven to infinity with unbounded weights. For example, a linear discriminator can supply a gradient for the generator outside every bounded region.

4. (C4) The main result used in the proof of the theorem relies on work on perturbations of ODEs according to Hirsch 1989 Hirsch:89 .

5. (C5) Konda and Borkar 1999 Konda:99 generalized the convergence proof to distributed asynchronous update rules.

6. (C6) Tadić relaxed the assumptions for showing convergence Tadic:04a . In particular, the noise assumptions (Assumptions A2 in Tadic:04a ) do not have to be martingale difference sequences and are more general than in Borkar:97 . In another result, the assumption of bounded iterates is not necessary if other assumptions are ensured Tadic:04a . Finally, Tadić considers the case of non-additive noise Tadic:04a . Tadić does not provide proofs for his results; we were not able to find such proofs even in other publications of Tadić.
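The mini-batch noise argument above can be checked empirically: with uniformly chosen indexes, the difference between the mini-batch gradient and the batch gradient has mean zero and a bounded second moment. The per-example gradients below are random stand-ins; the sketch is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s, dim = 1000, 32, 5
per_sample_grads = rng.normal(size=(N, dim))  # stand-in per-example gradients
full_grad = per_sample_grads.mean(axis=0)     # batch gradient

# Mini-batch noise: mini-batch gradient minus batch gradient.
noises = []
for _ in range(5000):
    idx = rng.choice(N, size=s, replace=False)  # uniformly chosen indexes
    noises.append(per_sample_grads[idx].mean(axis=0) - full_grad)
noises = np.array(noises)
print(np.abs(noises.mean(axis=0)).max())  # empirical mean ~ 0
print((noises ** 2).sum(axis=1).mean())   # bounded second moment
```
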

#### a2.1.2 Linear Update, Additive Noise, and Markov Chain

In contrast to the previous subsection, we assume that an additional Markov chain influences the iterates Konda:02 ; Konda:03 . The Markov chain allows applications in reinforcement learning, in particular in the actor-critic setting, where the Markov chain is used to model the environment. The slow iterate is the actor update while the fast iterate is the critic update. For reinforcement learning both the actor and the critic observe the environment, which is driven by the actor's actions. The environment observations are assumed to be a Markov chain. The Markov chain can include eligibility traces, which are modeled as explicit states in order to keep the Markov assumption.

The Markov chain is the sequence of observations of the environment which progresses via transition probabilities. The transitions are not affected by the critic but by the actor.

Konda et al. considered the iterates Konda:02 ; Konda:03 :

 θ_{n+1} = θ_n + a(n) H_n , (26)
 w_{n+1} = w_n + b(n) ( g(Z^{(w)}_n; θ_n) + G(Z^{(w)}_n; θ_n) w_n + M^{(w)}_n w_n ) . (27)

H_n is a random process that drives the changes of θ_n. We assume that θ_n is a slow enough process. We have a linear update rule for the fast iterate using the vector-valued function g and the matrix-valued function G.

##### Assumptions.

We make the following assumptions:


1. Assumptions on the Markov process, that is, the transition kernel: The stochastic process Z^{(w)}_n takes values in a Polish (complete, separable, metric) space Z with the Borel σ-field

 F_n = σ(θ_l, w_l, Z^{(w)}_l, H_l, l ⩽ n) , n ⩾ 0 .

For every measurable set A ⊆ Z and the parametrized transition kernel P(z, A; θ) we have:

 P(Z^{(w)}_{n+1} ∈ A ∣ F_n) = P(Z^{(w)}_{n+1} ∈ A ∣ Z^{(w)}_n; θ_n) = P(Z^{(w)}_n, A; θ_n) . (28)

We define for every measurable function f

 P_θ f(z) := ∫ P(z, dz̄; θ) f(z̄) .
2. Assumptions on the learning rates:

 ∑_n b(n) = ∞ , ∑_n b²(n) < ∞ , (29)
 ∑_n ( a(n) / b(n) )^d < ∞ , (30)

for some d > 0.

3. Assumptions on the noise: The sequence M^{(w)}_n is a matrix-valued F_n-martingale difference with bounded moments:

 E[M^{(w)}_n ∣ F_n] = 0 , (31)
 sup_n E[∥M^{(w)}_n∥^d] < ∞ , ∀ d > 0 . (32)

We assume a slowly changing θ_n; therefore the random process H_n satisfies

 sup_n E[∥H_n∥^d] < ∞ , ∀ d > 0 . (33)
4. Assumption on the existence of a solution of the fast iterate: We assume the existence of a solution to the Poisson equation for the fast iterate. For each θ, there exist functions ḡ(θ), Ḡ(θ), ĝ(z; θ), and Ĝ(z; θ) that satisfy the Poisson equations:

 ĝ(z; θ) = g(z; θ) − ḡ(θ) + (P_θ ĝ(·; θ))(z) , (34)
 Ĝ(z; θ) = G(z; θ) − Ḡ(θ) + (P_θ Ĝ(·; θ))(z) . (35)
5. Assumptions on the update functions and solutions to the Poisson equation:

1. Boundedness of solutions: For some constant C > 0 and for all θ:

 ∥ḡ(θ)∥ ⩽ C , (36)
 ∥Ḡ(θ)∥ ⩽ C . (37)
2. Boundedness in expectation: All moments are bounded. For any d > 0, there exists a constant C_d > 0 such that

 sup_n E[∥ĝ(Z^{(w)}_n; θ)∥^d] ⩽ C_d , (38)
 sup_n E[∥g(Z^{(w)}_n; θ)∥^d] ⩽ C_d , (39)
 sup_n E[∥Ĝ(Z^{(w)}_n; θ)∥^d] ⩽ C_d , (40)
 sup_n E[∥G(Z^{(w)}_n; θ)∥^d] ⩽ C_d . (41)
3. Lipschitz continuity of solutions: For some constant C > 0 and for all θ, θ̄:

 ∥ḡ(θ) − ḡ(θ̄)∥ ⩽ C ∥θ − θ̄∥ , (42)
 ∥Ḡ(θ) − Ḡ(θ̄)∥ ⩽ C ∥θ − θ̄∥ . (43)
4. Lipschitz continuity in expectation: There exists a positive measurable function C(·) on Z such that

 sup_n E[C(Z^{(w)}_n)^d] < ∞ , ∀ d > 0 . (44)

The function C(·) gives the Lipschitz constant for every z:

 ∥(P_θ ĝ(·; θ))(z) − (P_θ̄ ĝ(·; θ̄))(z)∥ ⩽ C(z) ∥θ − θ̄∥ , (45)
 ∥(P_θ Ĝ(·; θ))(z) − (P_θ̄ Ĝ(·; θ̄))(z)∥ ⩽ C(z) ∥θ − θ̄∥ . (46)
5. Uniform positive definiteness: There exists some α > 0 such that for all w and θ:

 w^T Ḡ(θ) w ⩾ α ∥w∥² . (47)
##### Convergence Theorem.

We report Theorem 3.2 (see also Theorem 7 in Konda:03 ) and Theorem 3.13 from Konda:02 :

###### Theorem 4 (Konda & Tsitsiklis).

If the assumptions are satisfied, then the following holds for the iterates Eq. (26) and Eq. (27):

 lim_{n→∞} ∥Ḡ(θ_n) w_n − ḡ(θ_n)∥ = 0 a.s. , (48)
 lim_{n→∞} ∥w_n − Ḡ^{-1}(θ_n) ḡ(θ_n)∥ = 0 . (49)
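The conclusion can be illustrated numerically for a fixed θ, so that ḡ and Ḡ are constant. The sketch below uses an illustrative positive definite Ḡ, additive Gaussian noise in place of the martingale term, and a sign convention for the linear fast update under which w_n tracks Ḡ^{-1} ḡ as in Eq. (49); all concrete values are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(0)
G_bar = np.array([[2.0, 0.3], [0.3, 1.0]])  # positive definite, cf. Eq. (47)
g_bar = np.array([1.0, -1.0])
target = np.linalg.solve(G_bar, g_bar)      # the limit point G_bar^{-1} g_bar

w = np.zeros(2)
for n in range(1, 100001):
    b = 1.0 / n ** (2 / 3)                  # fast step size b(n)
    noise = 0.05 * rng.normal(size=2)       # stand-in for the martingale term
    w += b * (g_bar - G_bar @ w + noise)    # linear fast update, theta held fixed
print(np.linalg.norm(G_bar @ w - g_bar))    # residual of Eq. (48) shrinks to ~ 0
```
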