Generalization and Equilibrium in Generative Adversarial Nets (GANs)

03/02/2017 · Sanjeev Arora et al. · Princeton University, Duke University

We show that training of a generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from the target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires the MIX+GAN protocol, which can be combined with any existing GAN training, and is empirically shown to improve some of them.




1 Introduction

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good cryptographic primitives (Abadi and Andersen, 2016). See the survey by Goodfellow (2016). Various novel architectures and training objectives were introduced to address perceived shortcomings of the original idea, leading to more stable training and more realistic generative models in practice (see Odena et al. (2016); Huang et al. (2017); Radford et al. (2016); Tolstikhin et al. (2017); Salimans et al. (2016); Jiwoong Im et al. (2016); Durugkar et al. (2016) and the references therein).

The goal is to train a generator deep net whose input is a standard Gaussian and whose output is a sample from some distribution $D_{synth}$ on $\mathbb{R}^d$, which has to be close to some target distribution $D_{real}$ (which could be, say, real-life images represented using raw pixels). The training uses samples from $D_{real}$, and together with the generator net also trains a discriminator deep net trying to maximise its ability to distinguish between samples from $D_{real}$ and $D_{synth}$. So long as the discriminator is successful at this task with nonzero probability, its success can be used to generate a feedback (using backpropagation) to the generator, thus improving its distribution $D_{synth}$. Training is continued until the generator wins, meaning that the discriminator can do no better than random guessing when deciding whether or not a particular sample came from $D_{real}$ or $D_{synth}$. This basic iterative framework has been tried with many training objectives; see Section 2.
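The alternating update loop just described can be sketched end to end on a toy one-dimensional problem. Everything below (the shift generator, the logistic discriminator, the target $\mathcal{N}(3,1)$, the learning rate) is an illustrative assumption, not the setup analyzed in this paper:

```python
import numpy as np

# Toy 1-D GAN loop (illustrative only): the target distribution is N(3, 1);
# the "generator" shifts Gaussian noise by a learnable offset u, and the
# "discriminator" is a logistic classifier D(x) = sigmoid(a*x + b).
# We alternate gradient steps, as in the iterative framework above.
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

u = 0.0          # generator parameter (shift)
a, b = 0.0, 0.0  # discriminator parameters
lr = 0.05
for step in range(3000):
    x_real = rng.normal(3.0, 1.0, size=128)
    x_fake = rng.normal(0.0, 1.0, size=128) + u

    # Discriminator ascent on E[log D(real)] + E[log(1 - D(fake))]
    s_r = sigmoid(a * x_real + b)
    s_f = sigmoid(a * x_fake + b)
    a += lr * (np.mean((1 - s_r) * x_real) - np.mean(s_f * x_fake))
    b += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator ascent on E[log D(fake)] (the common "non-saturating" variant)
    s_f = sigmoid(a * x_fake + b)
    u += lr * np.mean((1 - s_f) * a)

print(u)  # the generator's shift should approach the target mean 3
```

With these choices the generator's shift drifts toward the target mean, after which the discriminator's gradient signal decays toward zero — the "random guessing" end state described above.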

Figure 1: Probability density with many peaks and valleys

But it has been unclear what to conclude when the generator wins this game: is $D_{synth}$ close to $D_{real}$ in some metric? One seems to need some extension of generalization theory that would imply such a conclusion. The hurdle is that the distribution $D_{real}$ could be complicated and may have many peaks and valleys; see Figure 1. The number of peaks (modes) may even be exponential in the dimension $d$. (Recall the curse of dimensionality: in $d$ dimensions there are $\exp(d)$ directions whose pairwise angle exceeds, say, $30$ degrees, and each could be the site of a peak.) Whereas the number of samples from $D_{real}$ (and from $D_{synth}$, for that matter) used in the training is a lot fewer, and thus may not reflect most of the peaks and valleys of $D_{real}$.

A standard analysis due to Goodfellow et al. (2014) shows that when the discriminator capacity (= number of parameters) and the number of samples are "large enough", then a win by the generator implies that $D_{synth}$ is very close to $D_{real}$ (see Section 2). But the discussion in the previous paragraph raises the possibility that "sufficiently large" in this analysis may need to be $\exp(d)$.

Another open theoretical issue is whether an equilibrium always exists in this game between generator and discriminator. Just as a zero gradient is a necessary condition for standard optimization to halt, the corresponding necessary condition in a two-player game is an equilibrium. Conceivably some of the instability often observed while training GANs could just arise because of lack of equilibrium. (Recently Arjovsky et al. (2017) suggested that using their Wasserstein objective in practice reduces instability, but we still lack a proof of existence of an equilibrium.) Standard game theory is of no help here because we need a so-called pure equilibrium, and simple counterexamples such as rock/paper/scissors show that it doesn't exist in general. (Such counterexamples are easily turned into toy GAN scenarios with generator and discriminator having finite capacity, where the game lacks a pure equilibrium; see Appendix C.)

1.1 Our Contributions

We formally define generalization for GANs in Section 3 and show that for previously studied notions of distance between distributions, generalization is not guaranteed (Lemma 1). In fact we show that the generator can win even when $D_{real}$ and $D_{synth}$ are arbitrarily far apart in any one of the standard metrics.

However, we can guarantee a weaker notion of generalization by introducing a new metric on distributions, the neural net distance. We show that generalization does happen with a moderate number of training examples (i.e., when the generator wins, the two distributions must be close in neural net distance). However, this weaker metric comes at a cost: it can be near-zero even when the trained and target distributions are very far apart (Section 3.4).

To explore the existence of equilibria we turn in Section 4 to infinite mixtures of generator deep nets. These are clearly vastly more expressive than a single generator net: e.g., a standard result in Bayesian nonparametrics says that every probability density is closely approximable by an infinite mixture of Gaussians (Ghosh et al., 2003). Thus, unsurprisingly, an infinite mixture should win the game. We then prove rigorously that even a finite mixture of fairly reasonable size can closely approximate the performance of the infinite mixture (Theorem 4.2).

This insight also allows us to construct a new architecture for the generator network in which there exists an approximate equilibrium that is pure. (Roughly speaking, an approximate equilibrium is one in which neither of the players can gain much by deviating from their strategies.) This existence proof for an approximate equilibrium unfortunately involves a quadratic blowup in the "size" of the generator (which is still better than the naive exponential blowup one might expect). Improving this is left for future theoretical work. But we propose a heuristic approximation to the mixture idea to introduce a new framework for training that we call MIX+GAN. It can be added on top of any existing GAN training procedure, including those that use divergence objectives. Experiments in Section 6 show that for several previous techniques, MIX+GAN stabilizes the training, and in some cases improves the performance.

2 Preliminaries

Notations. Throughout the paper we use $d$ for the dimension of samples and $p$ for the number of parameters in the generator/discriminator. In Section 3 we use $m$ for the number of samples.

Generators and discriminators. Let $\mathcal{G} = \{G_u, u \in \mathcal{U}\}$ denote the class of generators, where $G_u$ is a function — often a neural network in practice — from $\mathbb{R}^\ell$ to $\mathbb{R}^d$, indexed by $u$, which denotes the parameters of the generator. Here $\mathcal{U}$ denotes the possible range of the parameters, and without loss of generality we assume $\mathcal{U}$ is a subset of the unit ball. (Otherwise we can scale the parameters properly by changing the parameterization.) The generator $G_u$ defines a distribution $D_{G_u}$ as follows: generate $h$ from the $\ell$-dimensional spherical Gaussian distribution and then apply $G_u$ to $h$; the result $G_u(h)$ is a sample from the distribution $D_{G_u}$. We drop the subscript $u$ in $D_{G_u}$ when it is clear from context.

Let $\mathcal{D} = \{D_v, v \in \mathcal{V}\}$ denote the class of discriminators, where $D_v$ is a function from $\mathbb{R}^d$ to $[0,1]$ and $v$ denotes the parameters of $D_v$.

Training the discriminator consists of trying to make it output a high value (preferably $1$) when $x$ is sampled from the distribution $D_{real}$ and a low value (preferably $0$) when $x$ is sampled from the synthetic distribution $D_{G_u}$. Training the generator consists of trying to make its synthetic distribution "similar" to $D_{real}$, in the sense that the discriminator's output tends to be similar on the two distributions.

We assume $\mathcal{G}$ and $\mathcal{D}$ are $L$-Lipschitz with respect to their parameters. That is, for all $u, u' \in \mathcal{U}$ and any input $h$, we have $\|G_u(h) - G_{u'}(h)\| \leq L \|u - u'\|$ (and similarly for $\mathcal{D}$).

Notice, this is distinct from the assumption (which we will also sometimes make) that the functions themselves are Lipschitz: that focuses on the change in function value when we change the input $h$ while keeping the parameters $u$ fixed. (Both Lipschitz parameters can be exponential in the number of layers in the neural net; however, our theorems only depend on the logarithm of the Lipschitz parameters.)

Objective functions. The standard GAN training (Goodfellow et al., 2014) consists of training the parameters $u, v$ so as to optimize an objective function:

$$\min_u \max_v \; \mathbb{E}_{x \sim D_{real}}[\log D_v(x)] + \mathbb{E}_{x \sim D_{G_u}}[\log(1 - D_v(x))]. \qquad (1)$$
Intuitively, this says that the discriminator should give high values to the real samples and low values to the generated examples. The $\log$ function was suggested because of its interpretation as the likelihood, and it also has a nice information-theoretic interpretation described below. However, in practice it can cause problems since $\log x \to -\infty$ as $x \to 0$. The objective still makes intuitive sense if we replace $\log$ by any monotone function $\phi: [0,1] \to \mathbb{R}$, which yields the objective:

$$\min_u \max_v \; \mathbb{E}_{x \sim D_{real}}[\phi(D_v(x))] + \mathbb{E}_{x \sim D_{G_u}}[\phi(1 - D_v(x))]. \qquad (2)$$
We call the function $\phi$ the measuring function. It should be concave so that when $D_{real}$ and $D_{G_u}$ are the same distribution, the best strategy for the discriminator is just to output $1/2$, and the optimal value is $2\phi(1/2)$. In later proofs, we will require $\phi$ to be bounded and Lipschitz. Indeed, in practice training often uses a clipped logarithm $\phi(x) = \log(\delta + (1-\delta)x)$ (which takes values in $[\log \delta, 0]$ and is $\frac{1}{\delta}$-Lipschitz), and the recently proposed Wasserstein GAN (Arjovsky et al., 2017) objective uses $\phi(x) = x$.
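As a quick numerical companion (the clipping parameter $\delta = 0.1$ below is an arbitrary choice for the demo), one can check that a clipped logarithm of this form is bounded and Lipschitz, unlike the raw $\log$:

```python
import math

# Illustrative check of a bounded measuring function of the form
# phi(x) = log(delta + (1 - delta) * x): unlike the raw log, it stays in
# [log(delta), 0], and its slope over [0, 1] never exceeds
# (1 - delta) / delta <= 1 / delta.
delta = 0.1

def phi(x):
    return math.log(delta + (1 - delta) * x)

xs = [i / 1000 for i in range(1001)]
vals = [phi(x) for x in xs]
# numerical bound on the slope over [0, 1] via secants
max_slope = max((vals[i + 1] - vals[i]) * 1000 for i in range(1000))

print(min(vals), max(vals), max_slope)
```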

Training with finite samples. The objective function (2) assumes we have an infinite number of samples from $D_{real}$ to estimate the value $\mathbb{E}_{x \sim D_{real}}[\phi(D_v(x))]$. With finite training examples $x_1, \dots, x_m$, one uses $\frac{1}{m}\sum_{i=1}^{m} \phi(D_v(x_i))$ to estimate this quantity. We call the distribution that gives $\frac{1}{m}$ probability to each of the $x_i$'s the empirical version of the real distribution. Similarly, one can use an empirical version to estimate $\mathbb{E}_{x \sim D_{G_u}}[\phi(1 - D_v(x))]$.

Standard interpretation via distance between distributions. Towards analyzing GANs, researchers have assumed access to an infinite number of examples and that the discriminator is chosen optimally within some large class of functions that contains all possible neural nets. This often allows computing the optimal discriminator analytically and therefore removing the maximum operation from the objective (2), which leads to some interpretation of how and in what sense the resulting distribution $D_{G_u}$ is close to the true distribution $D_{real}$.

Using the original objective function (1), the optimal choice among all possible functions from $\mathbb{R}^d$ to $[0,1]$ is $D(x) = \frac{P_{real}(x)}{P_{real}(x) + P_u(x)}$, as shown in Goodfellow et al. (2014). Here $P_{real}(x)$ is the density of $x$ in the real distribution, and $P_u(x)$ is the density of $x$ in the distribution generated by generator $G_u$. Using this discriminator — though it is computationally infeasible to obtain it — one can show that the minimization problem over the generator corresponds to minimizing the Jensen-Shannon (JS) divergence between the true distribution $D_{real}$ and the generated distribution $D_{G_u}$. Recall that for two distributions $\mu$ and $\nu$, the JS divergence is defined by

$$d_{JS}(\mu, \nu) = \frac{1}{2}\, KL\!\left(\mu \,\Big\|\, \frac{\mu+\nu}{2}\right) + \frac{1}{2}\, KL\!\left(\nu \,\Big\|\, \frac{\mu+\nu}{2}\right).$$

Other measuring functions and choices of discriminator class lead to distance functions between distributions other than JS divergence. Notably, Arjovsky et al. (2017) show that when $\phi(x) = x$ and the discriminator is chosen among all $1$-Lipschitz functions, after maxing out the discriminator the generator is attempting to minimize the Wasserstein distance between $D_{real}$ and $D_{G_u}$. Recall that the Wasserstein distance between $\mu$ and $\nu$ is defined as

$$d_W(\mu, \nu) = \sup_{f : f \text{ is } 1\text{-Lipschitz}} \; \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)].$$
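For concreteness, here are minimal discrete/one-dimensional versions of the two distances just recalled; the reduction of the 1-D Wasserstein distance between equal-size samples to matching sorted samples is a standard special case used only for illustration:

```python
import numpy as np

# Discrete JS divergence for probability vectors, and 1-D Wasserstein
# distance between equal-size samples (which reduces to matching sorted
# samples). Both are illustrative special cases.
def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wasserstein_1d(xs, ys):
    return float(np.mean(np.abs(np.sort(xs) - np.sort(ys))))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(js_divergence(p, p))  # 0 for identical distributions
print(js_divergence(p, q))
print(wasserstein_1d(np.array([0.0, 1.0]), np.array([2.0, 3.0])))  # shift by 2
```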

3 Generalization theory for GANs

The above interpretation of GANs in terms of minimizing distance (such as JS divergence or Wasserstein distance) between the real distribution and the generated distribution relies on two crucial assumptions: (i) a very expressive class of discriminators, such as the set of all bounded discriminators or the set of all $1$-Lipschitz discriminators, and (ii) a very large number of examples to compute/estimate the objective (1) or (2). Neither assumption holds in practice, and we will show next that this greatly affects the generalization ability, a notion we introduce in Section 3.1.

3.1 Definition of Generalization

Our definition is motivated by supervised classification, where training is said to generalize if the training and test errors closely track each other. (Since the purpose of GAN training is to learn a distribution, one could also consider a stronger definition of successful training, as discussed in Section 3.4.)

Let $x_1, \dots, x_m$ be the training examples, and let $\hat{D}_{real}$ denote the uniform distribution over $\{x_1, \dots, x_m\}$. Similarly, let $\hat{D}_G$ be the uniform distribution over a set of examples from the generated distribution $D_G$. In the training of GANs, one implicitly uses the empirical distance $d(\hat{D}_{real}, \hat{D}_G)$ to approximate the population quantity $d(D_{real}, D_G)$. Inspired by the observation that the training objective of GANs and its variants is to minimize some distance (or divergence) between $\hat{D}_{real}$ and $\hat{D}_G$ using finite samples, we define the generalization of GANs as follows:

Definition 1.

Given $\hat{D}_{real}$, an empirical version of the true distribution with $m$ samples, a generated distribution $D_G$ generalizes under the divergence or distance $d(\cdot,\cdot)$ between distributions with generalization error $\epsilon$ if the following holds with high probability (over the choice of $\hat{D}_{real}$):

$$\left| d(D_{real}, D_G) - d(\hat{D}_{real}, \hat{D}_G) \right| \leq \epsilon, \qquad (3)$$

where $\hat{D}_G$ is an empirical version of the generated distribution with a polynomial number of samples (drawn after $D_G$ is fixed).

In words, generalization in GANs means that the population distance between the true and generated distributions is close to the empirical distance between the empirical distributions. Our target is to make the former distance small, whereas the latter is what we can access and minimize in practice. The definition allows only a polynomial number of samples from the generated distribution because the training algorithm should run in polynomial time.
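A toy numerical illustration of this definition, using the deliberately weak, hypothetical distance $d(\mu,\nu) = |\mathbb{E}\mu - \mathbb{E}\nu|$ in place of a GAN objective (all population means and sample sizes below are demo values):

```python
import numpy as np

# Toy illustration of Definition 1 (not the neural net distance): take
# d(mu, nu) = |E[mu] - E[nu]|, a very weak distance for which the empirical
# value tracks the population value with moderately many samples.
rng = np.random.default_rng(1)

m = 10_000
real_pop_mean, gen_pop_mean = 0.0, 0.1
real_hat = rng.normal(real_pop_mean, 1.0, size=m)  # empirical D_real
gen_hat = rng.normal(gen_pop_mean, 1.0, size=m)    # empirical D_G

population_d = abs(real_pop_mean - gen_pop_mean)
empirical_d = abs(float(np.mean(real_hat)) - float(np.mean(gen_hat)))
gap = abs(population_d - empirical_d)
print(gap)  # small generalization error, shrinking like 1/sqrt(m)
```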

We also note that stronger versions of Definition 1 can be considered. For example, as an analog of uniform convergence in supervised learning, we can require (3) to hold for all generators among a class of candidate generators. Indeed, our results in Section 3.3 show that all generators generalize under the neural net distance with a reasonable number of examples.

3.2 JS Divergence and Wasserstein don’t Generalize

As a warm-up, we show that JS divergence and Wasserstein distance don’t generalize with any polynomial number of examples because the population distance (divergence) is not reflected by the empirical distance.

Lemma 1.

Let $\mu$ be the uniform Gaussian distribution $\mathcal{N}(0, \frac{1}{d} I)$ and $\hat{\mu}$ be an empirical version of $\mu$ with $m = \mathrm{poly}(d)$ examples. Then with high probability, $d_{JS}(\mu, \hat{\mu}) = \log 2$ and $d_W(\mu, \hat{\mu}) \geq 1.1$.

There are two consequences of Lemma 1. First, consider the situation where $D_G = D_{real} = \mu$. Then we have $d(D_{real}, D_G) = 0$, but $d(\hat{D}_{real}, \hat{D}_G)$ stays bounded away from $0$ as long as we have only a polynomial number of examples. This violates the generalization definition in equation (3).

Second, consider the case $D_G = \hat{D}_{real}$, that is, $D_G$ memorizes all of the training examples in $\hat{D}_{real}$. In this case, since $D_G$ is a discrete distribution with finite support, with enough (polynomial) examples we effectively have $\hat{D}_G = D_G = \hat{D}_{real}$. Therefore $d(\hat{D}_{real}, \hat{D}_G) = 0$, whereas $d(D_{real}, D_G)$ is large. In other words, with any polynomial number of examples, it is possible to overfit to the training examples using Wasserstein distance. The same argument also applies to JS divergence. See Appendix B.1 for the formal proof. Notice, this result does not contradict the experiments of Arjovsky et al. (2017), since they actually use not the Wasserstein distance but a surrogate distance that does generalize, as we show next.

3.3 Generalization bounds for neural net distance

Which distance measure between $D_{real}$ and $D_G$ is the GAN objective actually minimizing, and can we analyze its generalization performance? Towards answering these questions in full generality (given the multiple GAN objectives) we consider the following general distance measure, which unifies the JS divergence, the Wasserstein distance, and the neural net distance that we define later in this section.

Definition 2 ($\mathcal{F}$-distance).

Let $\mathcal{F}$ be a class of functions from $\mathbb{R}^d$ to $[0,1]$ such that $f \in \mathcal{F}$ implies $1 - f \in \mathcal{F}$. Let $\phi$ be a concave measuring function. Then the $\mathcal{F}$-divergence with respect to $\phi$ between two distributions $\mu$ and $\nu$ supported on $\mathbb{R}^d$ is defined as

$$d_{\mathcal{F},\phi}(\mu, \nu) = \sup_{D \in \mathcal{F}} \; \mathbb{E}_{x \sim \mu}[\phi(D(x))] + \mathbb{E}_{x \sim \nu}[\phi(1 - D(x))] - 2\phi(1/2).$$

When $\phi(t) = t$, we have that $d_{\mathcal{F},\phi}$ is a distance function (technically a pseudometric; this is also known as an integral probability metric (Müller, 1997)), and with slight abuse of notation we write it simply as $d_{\mathcal{F}}(\mu, \nu)$.
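A minimal numerical sketch of this definition with $\phi(t) = t$, using a small hypothetical class of soft-threshold discriminators on $\mathbb{R}$ that is closed under $f \mapsto 1 - f$, as the definition requires:

```python
import numpy as np

# Sketch of the F-distance with phi(t) = t over a tiny hypothetical class
# F of 1-D soft-threshold discriminators closed under f -> 1 - f.
# d_F is the sup over F of E_mu[f] + E_nu[1 - f] - 2*phi(1/2), and
# 2*phi(1/2) = 1 here.
rng = np.random.default_rng(2)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def f_distance(xs, ys, thresholds):
    best = 0.0
    for c in thresholds:
        for f in (lambda x, c=c: sigmoid(x - c),
                  lambda x, c=c: 1 - sigmoid(x - c)):  # closure under 1 - f
            val = float(np.mean(f(xs)) + np.mean(1 - f(ys))) - 1.0
            best = max(best, val)
    return best

near = rng.normal(0.0, 1.0, size=5000)
far = rng.normal(3.0, 1.0, size=5000)
thresholds = np.linspace(-4, 4, 17)
d_same = f_distance(near, near, thresholds)  # ~0: identical samples
d_diff = f_distance(near, far, thresholds)   # large: class separates them
print(d_same, d_diff)
```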

Example 1.

When $\phi(t) = \log t$ and $\mathcal{F}$ is the set of all functions from $\mathbb{R}^d$ to $[0,1]$, the resulting $d_{\mathcal{F},\phi}$ is the same as JS divergence (up to scaling). When $\phi(t) = t$ and $\mathcal{F}$ is the set of all $1$-Lipschitz functions from $\mathbb{R}^d$ to $[0,1]$, then $d_{\mathcal{F}}$ is the Wasserstein distance.

Example 2.

Suppose $\mathcal{F}$ is a set of neural networks and $\phi(t) = \log t$; then the original GAN objective function is equivalent to minimizing $d_{\mathcal{F},\phi}(\hat{D}_{real}, \hat{D}_G)$ over $G$.

Suppose $\mathcal{F}$ is the set of neural networks with clipped weights and $\phi(t) = t$; then the objective function used empirically in Arjovsky et al. (2017) is equivalent to minimizing $d_{\mathcal{F}}(\hat{D}_{real}, \hat{D}_G)$ over $G$.

GAN training uses $\mathcal{F}$ to be a class of neural nets with a bound $p$ on the number of parameters. We then informally refer to $d_{\mathcal{F}}$ as the neural net distance. The next theorem establishes that generalization in the sense of equation (3) does hold for it (with uniform convergence). We assume that the measuring function $\phi$ takes values in $[-\Delta, \Delta]$ and is $L_\phi$-Lipschitz. Further, $\mathcal{F}$ is a class of discriminators that is $L$-Lipschitz with respect to the parameters $v$. As usual, we use $p$ to denote the number of parameters in $v$.

Theorem 3.1.

In the setting of the previous paragraph, let $\mu, \nu$ be two distributions and $\hat{\mu}, \hat{\nu}$ be empirical versions with at least $m$ samples each. There is a universal constant $c$ such that when $m \geq \frac{c p \Delta^2 \log(L L_\phi p / \epsilon)}{\epsilon^2}$, we have with probability at least $1 - \exp(-p)$ over the randomness of $\hat{\mu}$ and $\hat{\nu}$,

$$\left| d_{\mathcal{F},\phi}(\hat{\mu}, \hat{\nu}) - d_{\mathcal{F},\phi}(\mu, \nu) \right| \leq \epsilon.$$

See Appendix B.1 for the proof. The intuition is that there aren’t too many distinct discriminators, and thus given enough samples the expectation over the empirical distribution converges to the expectation over the true distribution for all discriminators.

Theorem 3.1 shows that the neural network divergence (and neural network distance) has much better generalization properties than the Jensen-Shannon divergence or Wasserstein distance. If the GAN successfully minimizes the neural network divergence between the empirical distributions, that is, makes $d_{\mathcal{F},\phi}(\hat{D}_{real}, \hat{D}_G)$ small, then we know the neural network divergence $d_{\mathcal{F},\phi}(D_{real}, D_G)$ between the true distributions is also small. It is possible to change the proof to show that this generalization continues to hold at every iteration of the training, as shown in the following corollary.

Corollary 3.1.

In the setting of Theorem 3.1, suppose $G_1, G_2, \dots, G_K$ (with $\log K \leq p$) are the generators in the $K$ iterations of the training. There is a universal constant $c$ such that when $m \geq \frac{c p \Delta^2 \log(L L_\phi p / \epsilon)}{\epsilon^2}$, with probability at least $1 - \exp(-p)$, for all $t \in [K]$,

$$\left| d_{\mathcal{F},\phi}(\hat{D}_{real}, \hat{D}_{G_t}) - d_{\mathcal{F},\phi}(D_{real}, D_{G_t}) \right| \leq \epsilon.$$

The key observation here is that the objective is separated into two parts, and the generator affects only the term that does not involve the real samples. So even though we do not have fresh examples at each iteration, the generalization bound still holds. The detailed proof appears in Appendix B.1.

3.4 Generalization vs Diversity

Since the final goal of GAN training is to learn a distribution, it is worth understanding that though weak generalization in the sense of Section 3.3 is guaranteed, it comes with a cost. For JS divergence and Wasserstein distance, when the distance between two distributions $\mu, \nu$ is small, it is safe to conclude that the distributions are almost the same. However, the neural net distance $d_{\mathcal{F}}(\mu, \nu)$ can be small even if $\mu, \nu$ are not very close. As a simple corollary of Theorem 3.1, we obtain:

Corollary 3.2 (Low-capacity discriminators cannot detect lack of diversity).

Let $\hat{\mu}$ be the empirical version of a distribution $\mu$ with $m$ samples. There is a universal constant $c$ such that when $m \geq \frac{c p \Delta^2 \log(L L_\phi p / \epsilon)}{\epsilon^2}$, we have that with probability at least $1 - \exp(-p)$,

$$d_{\mathcal{F},\phi}(\mu, \hat{\mu}) \leq \epsilon.$$

That is, the neural network distance for nets with $p$ parameters cannot distinguish between a distribution $\mu$ and a distribution with support of size $\tilde{O}(p/\epsilon^2)$. In fact the proof still works if the discriminator is allowed to take many more samples from $\mu$; the reason they don't help is that its capacity is limited to $p$.

We note that similar results have been shown before in the study of pseudorandomness (Trevisan et al., 2009) and model criticism (Gretton et al., 2012).

4 Expressive power and existence of equilibrium

Section 3 clarified the notion of generalization for GANs: namely, the neural-net divergence between the generated distribution $D_G$ and $D_{real}$ on the empirical samples closely tracks the divergence on the full distribution (i.e., unseen samples). But this doesn't explain why in practice the generator usually "wins", so that the discriminator is unable to do much better than random guessing at the end. In other words, was it sheer luck that so many real-life distributions turned out to be close in neural-net distance to a distribution produced by a fairly compact neural net? This section suggests no luck may be needed.

The explanation starts with a thought experiment. Imagine allowing a much more powerful generator, namely, an infinite mixture of deep nets, each of size $p$. So long as the deep net class is capable of generating simple Gaussians, such mixtures are quite powerful, since a classical result says that an infinite mixture of simple Gaussians can closely approximate any distribution $D_{real}$. Thus an infinite mixture of deep net generators will "win" the GAN game, not only against a discriminator that is a small deep net but also against more powerful discriminators (e.g., any Lipschitz function).

The next stage in the thought experiment is to imagine a much less powerful generator, which is a mix of only a few deep nets, not infinitely many. Simple counterexamples show that now the distribution $D_{synth}$ will not closely approximate an arbitrary $D_{real}$ with respect to natural metrics like $d_{JS}$ or $d_W$. Nevertheless, could the generator still win the GAN game against a deep net of bounded capacity (i.e., the deep net is unable to distinguish $D_{real}$ and $D_{synth}$)? We show it can.

Informal Theorem: If the discriminator is a deep net with $p$ parameters, then a mixture of $\tilde{O}(p/\epsilon^2)$ generator nets can produce a distribution $D_{synth}$ that the discriminator will be unable to distinguish from $D_{real}$ with probability more than $\epsilon$. (Here the $\tilde{O}(\cdot)$ notation hides some nuisance factors.)

This informal theorem is also a component of our result below about the existence of an approximate pure equilibrium. We will first show that a finite mixture of generators can “win” against all discriminators, and then discuss how this mixed generator can be realized as a single generator network that is 1-layer deeper.

4.1 Equilibrium using a Mixture of Generators

For a class of generators $\{G_u, u \in \mathcal{U}\}$ and a class of discriminators $\{D_v, v \in \mathcal{V}\}$, we can define the payoff $F(u,v)$ of the game between generator and discriminator:

$$F(u, v) = \mathbb{E}_{x \sim D_{real}}[\phi(D_v(x))] + \mathbb{E}_{x \sim D_{G_u}}[\phi(1 - D_v(x))]. \qquad (4)$$

Of course, as we discussed in the previous section, in practice these expectations should be taken with respect to the empirical distributions. Our discussion in this section does not depend on the distributions $D_{real}$ and $D_{G_u}$, so we define $F$ this way for simplicity.

The well-known min-max theorem (v. Neumann, 1928) in game theory shows that if both players are allowed to play mixed strategies, then the game has a min-max solution. A mixed strategy for the generator is just a distribution $\mathcal{S}_u$ supported on $\mathcal{U}$, and one for the discriminator is a distribution $\mathcal{S}_v$ supported on $\mathcal{V}$.

Theorem 4.1 (von Neumann).

There exists a value $V$, and a pair of mixed strategies $(\mathcal{S}_u, \mathcal{S}_v)$, such that

$$\forall v,\; F(\mathcal{S}_u, v) \leq V \quad \text{and} \quad \forall u,\; F(u, \mathcal{S}_v) \geq V.$$

Note that this equilibrium involves both parties announcing their strategies at the start, such that neither will have any incentive to change their strategy after studying the opponent's strategy. The payoff is generated by first sampling $u \sim \mathcal{S}_u$, and then generating an example $x \sim D_{G_u}$. Therefore, the mixed generator is just a linear mixture of generators. A mixture of discriminators is more complicated because the objective function need not be linear in the discriminator. However, in the case of our interest, the generator wins, and even a mixture of discriminators cannot effectively distinguish between the generated and real distributions. Therefore we do not consider a mixture of discriminators here.

Of course, this equilibrium involving an infinite mixture makes little sense in practice. We show (as is folklore in game theory (Lipton and Young, 1994)) that we can approximate this min-max solution with a mixture of finitely many generators and discriminators. More precisely, we define an $\epsilon$-approximate equilibrium:
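The sparsification step can be illustrated on the rock/paper/scissors game itself: sampling a moderate number of pure strategies i.i.d. from the (uniform) mixed equilibrium and playing uniformly over the sample yields a strategy with small exploitability (the sample size below is an arbitrary demo choice):

```python
import numpy as np

# Illustration of the folklore sparsification idea (Lipton and Young, 1994):
# in rock/paper/scissors the unique equilibrium is the uniform mixed
# strategy with value 0. Playing the uniform distribution over a moderate
# i.i.d. sample of pure strategies is only slightly exploitable.
rng = np.random.default_rng(0)
# payoff matrix for the row player: rows/cols = rock, paper, scissors
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

k = 200  # number of sampled pure strategies
sample = rng.integers(0, 3, size=k)
mix = np.bincount(sample, minlength=3) / k  # empirical mixed strategy

# exploitability: the column player's best-response payoff against `mix`
exploitability = float(np.max(-(mix @ A)))
print(exploitability)  # small, shrinking like sqrt(1/k)
```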

Definition 3.

A pair of mixed strategies $(\mathcal{S}_u, \mathcal{S}_v)$ is an $\epsilon$-approximate equilibrium if for some value $V$:

$$\forall v,\; F(\mathcal{S}_u, v) \leq V + \epsilon; \qquad \forall u,\; F(u, \mathcal{S}_v) \geq V - \epsilon.$$

If the strategies $\mathcal{S}_u, \mathcal{S}_v$ are pure strategies, then this pair is called an $\epsilon$-approximate pure equilibrium.

Suppose $\phi$ is $L_\phi$-Lipschitz and bounded in $[-\Delta, \Delta]$, and the generators and discriminators are $L$-Lipschitz with respect to their parameters and $L'$-Lipschitz with respect to their inputs. In this setting we can formalize the above Informal Theorem as follows:

Theorem 4.2.

In the setting above, suppose the generator can approximate any point mass (for all points $x$ and any $\epsilon > 0$, there is a generator $u$ such that $\mathbb{E}_{h}[\|G_u(h) - x\|] \leq \epsilon$). There is a universal constant $c$ such that for any $\epsilon$, there exist $T = \frac{c \Delta^2 p \log(L L' L_\phi \cdot p/\epsilon)}{\epsilon^2}$ generators $G_{u_1}, \dots, G_{u_T}$ such that if $\mathcal{S}_u$ is the uniform distribution on $\{u_1, \dots, u_T\}$ and $D$ is a discriminator that outputs only $1/2$, then $(\mathcal{S}_u, D)$ is an $\epsilon$-approximate equilibrium.

The proof uses a standard probabilistic argument and an epsilon-net argument to show that if we sample $T$ generators and discriminators from the infinite mixture, they form an approximate equilibrium with high probability. For the second part, we use the fact that the generator can approximate any point mass, so an infinite mixture of generators can approximate the real distribution and win. Therefore indeed a mixture of $T$ generators can achieve an $\epsilon$-approximate equilibrium.

Note that this theorem works for a wide class of measuring functions (as long as $\phi$ is concave). The generator always wins, and the discriminator's (near) optimal strategy corresponds to random guessing (outputting the constant $1/2$).

4.2 Achieving Pure Equilibrium

Now we give a construction to augment the network structure and achieve an approximate pure equilibrium for the GAN game for generator nets of size $\tilde{O}(\Delta^2 p^2/\epsilon^2)$. This should be interpreted as: if deep nets of size $p$ are capable of generating any point mass, then the GAN game for generator neural networks of size $\tilde{O}(\Delta^2 p^2/\epsilon^2)$ has an approximate equilibrium in which the generator wins. (The theorem is stated for ReLU gates but also holds for standard activations such as sigmoid.)

Theorem 4.3.

Suppose the generator and discriminator are both $k$-layer neural networks ($k \geq 2$) with $p$ parameters, and the last layer uses the ReLU activation function. In the setting of Theorem 4.2, there exists a $(k+1)$-layer neural network generator with $\tilde{O}(\Delta^2 p^2/\epsilon^2)$ parameters, such that there exists an $\epsilon$-approximate pure equilibrium with value $2\phi(1/2)$.

To prove this theorem, we consider the mixture of generators from Theorem 4.2 and show how to fold the mixture into a single, larger $(k+1)$-layer neural network. We sketch the idea; details are in Appendix B.2.

For a mixture of $T$ generators, we construct a single neural network that approximately generates the mixture distribution using the Gaussian input it has. To do that, we pass the input through all $T$ generators. We then show how to implement a "multi-way selector" that will select a uniformly random output from the $T$ generators. The selector involves a simple shallow network that selects a number $i$ from $1$ to $T$ with the appropriate probability and "disables" all the neural nets except the $i$-th one by forwarding an appropriately large negative input.
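The disabling trick can be sketched concretely with toy affine "generators" whose outputs are assumed bounded by $B$ (the bound and the generators below are hypothetical; the paper's actual construction is in Appendix B.2): adding a large negative bias to every non-selected branch before a final ReLU zeroes those branches, so summing the branches recovers exactly the selected generator's output.

```python
import numpy as np

# Sketch of the "multi-way selector" folding idea with toy bounded
# "generators" (hypothetical stand-ins): branch outputs lie in [0, B],
# so adding -2B to every non-selected branch and applying ReLU zeroes
# those branches; summing the branches then yields the selected output.
rng = np.random.default_rng(0)
T, d, B = 5, 4, 10.0
Ws = rng.normal(size=(T, d, d))
cs = rng.normal(size=(T, d))

def generator(i, h):
    # toy generator G_i(h) = W_i h + c_i, clipped into [0, B] to model the
    # boundedness assumption
    return np.clip(Ws[i] @ h + cs[i], 0.0, B)

def mixture_net(h, i):
    """Run all T branches; disable all but branch i with a large negative
    bias before the final ReLU, then sum the branches."""
    outs = np.stack([generator(j, h) for j in range(T)])      # (T, d)
    gates = np.full(T, -2.0 * B)
    gates[i] = 0.0                                            # selected branch
    return np.maximum(outs + gates[:, None], 0.0).sum(0)      # ReLU + sum

h = rng.normal(size=d)
i = int(rng.integers(T))  # selector: a uniformly random branch index
print(np.allclose(mixture_net(h, i), generator(i, h)))
```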

Remark: In practice, GANs use highly structured deep nets, such as convolutional nets. Our current proof of existence of pure equilibrium requires introducing less structured elements in the net, namely, the multiway selectors that implement the mixture within a single net. It is left for future work whether pure equilibria exist for the original structured architectures. In the meantime, in practice we recommend using, even for W-GAN, a mixture of structured nets for GAN training, and it seems to help in our experiments reported below.


5 MIX+GAN protocol

Theorem 4.2 and Theorem 4.3 show that using a mixture of (not too many) generators and discriminators guarantees the existence of an approximate equilibrium. This suggests that using a mixture may lead to more stable training. Our experiments correspond to an older version of this paper, and they are done using a mixture for both generators and discriminators.

Of course, it is impractical to use very large mixtures, so we propose MIX+GAN: use a mixture of $T$ components, where $T$ is as large as allowed by the size of GPU memory (usually a small constant). Namely, we train a mixture of $T$ generators $\{G_{u_i}\}$ and $T$ discriminators $\{D_{v_j}\}$, which share the same network architecture but have their own trainable parameters. Maintaining a mixture means of course maintaining a weight $w_{u_i}$ for the generator $G_{u_i}$, which corresponds to the probability of selecting the output of $G_{u_i}$. These weights are also updated via backpropagation. This heuristic can be combined with existing methods like DCGAN, W-GAN, etc., giving us new training methods MIX+DCGAN, MIX+W-GAN, etc.

We use an exponentiated gradient update (Kivinen and Warmuth, 1997): store the log-probabilities $\{\alpha_{u_i}\}$, and then obtain the weights by applying the soft-max function to them:

$$w_{u_i} = \frac{e^{\alpha_{u_i}}}{\sum_{k=1}^{T} e^{\alpha_{u_k}}}.$$
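A minimal sketch of this parameterization (the gradient values below are hypothetical):

```python
import numpy as np

# Sketch of the weight parameterization described above: store
# log-probabilities alpha_i and recover the mixture weights by soft-max,
# so that plain gradient steps on the alphas keep the weights positive
# and summing to one.
def softmax(alpha):
    e = np.exp(alpha - np.max(alpha))  # subtract max for numerical stability
    return e / e.sum()

alphas = np.zeros(3)   # T = 3 components, initially uniform weights
w0 = softmax(alphas)
print(w0)              # uniform: each weight is 1/3

grad = np.array([0.5, -0.2, -0.3])  # hypothetical gradient w.r.t. the alphas
alphas += 0.1 * grad                # gradient step on the log-probabilities
w = softmax(alphas)
print(w, w.sum())      # still a valid probability vector after the update
```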

Note that our algorithm is maintaining weights on different generators and discriminators. This is very different from the idea of boosting where weights are maintained on samples. AdaGAN (Tolstikhin et al., 2017) uses ideas similar to boosting and maintains weights on training examples.

Given the payoff function $F$, training MIX+GAN boils down to optimizing:

$$\min_{\{u_i\},\{\alpha_{u_i}\}} \; \max_{\{v_j\},\{\alpha_{v_j}\}} \; \mathbb{E}_{i,j}\left[ F(u_i, v_j) \right] = \sum_{i,j} w_{u_i} w_{v_j} F(u_i, v_j).$$

Here the payoff function $F$ is the same as in Equation (4). We use both measuring functions $\phi(x) = \log x$ (for the original GAN) and $\phi(x) = x$ (for WassersteinGAN). In our experiments we alternately update the generators' and discriminators' parameters as well as their corresponding log-probabilities using ADAM (Kingma and Ba, 2015), with a fixed learning rate.

Empirically, it is observed that some components of the mixture tend to collapse and their weights diminish during the training. To encourage full use of the mixture capacity, we add to the training objective an entropy regularizer term that discourages the weights from drifting too far from uniform.

6 Experiments

Figure 2: MNIST Samples. Digits generated from (a) MIX+DCGAN and (b) DCGAN.
Figure 3: CelebA Samples. Faces generated from (a) MIX+DCGAN and (b) DCGAN.
Method | Score
SteinGAN (Wang and Liu, 2016) | 6.35
Improved GAN (Salimans et al., 2016) | 8.09 ± 0.07
AC-GAN (Odena et al., 2016) | 8.25 ± 0.07
S-GAN (best variant in Huang et al. (2017)) | 8.59 ± 0.12
DCGAN (as reported in Wang and Liu (2016)) | 6.58
DCGAN (best variant in Huang et al. (2017)) | 7.16 ± 0.10
DCGAN (5x size) | 7.34 ± 0.07
MIX+DCGAN (Ours) | 7.72 ± 0.09
Wasserstein GAN | 3.82 ± 0.06
MIX+WassersteinGAN (Ours) | 4.04 ± 0.07
Real data | 11.24 ± 0.12
Table 1: Inception Scores on CIFAR-10. Mixture of DCGANs achieves a higher score than any single-component DCGAN does. All models except the WassersteinGAN variants are trained with labels.
Figure 4: MIX+DCGAN v.s. DCGAN Training Curve (Inception Score). MIX+DCGAN is consistently higher than DCGAN.
Figure 5: MIX+WassersteinGAN v.s. WassersteinGAN Training Curve (Wasserstein Objective). MIX+WassersteinGAN is better towards the end but loss drops less smoothly, which needs further investigation.

In this section, we first explore the qualitative benefits of our method on image generation tasks: the MNIST dataset (LeCun et al., 1998) of hand-written digits and the CelebA (Liu et al., 2015) dataset of human faces. Then for more quantitative evaluation we use the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) and the Inception Score introduced in Salimans et al. (2016). MNIST contains 60,000 labeled 28×28-sized images of hand-written digits, CelebA contains over 200K 108×108-sized images of human faces (we crop the center 64×64 pixels for our experiments), and CIFAR-10 has 60,000 labeled 32×32-sized RGB natural images which fall into 10 categories.

To reinforce the point that this technique works out of the box, we performed no extensive hyper-parameter search or tuning; please refer to our code for the experimental setup. (Related code is public online.)

6.1 Qualitative Results


The DCGAN architecture (Radford et al., 2016) uses deep convolutional nets as generators and discriminators. We trained MIX+DCGAN on MNIST and CelebA using the authors’ code as a black box, and compared the visual quality of the generated images to that of DCGAN.

Results on MNIST are shown in Figure 2. In this experiment, the baseline DCGAN consists of a pair of a generator and a discriminator, both 5-layer deconvolutional neural networks conditioned on image labels. Our MIX+DCGAN model consists of a mixture of 3 such DCGANs, so it has 3 generators and 3 discriminators. We observe that our method produces somewhat cleaner digits than the baseline (note the fuzziness in the latter).

Results on the CelebA dataset are shown in Figure 3, using the same architecture as for MNIST, except that the models are no longer conditioned on image labels. Again, our method generates more faithful and more diverse samples than the baseline. Note that one may need to zoom in to fully perceive the difference, since both datasets are rather easy for DCGAN.

In Appendix A, we also show generated digits and faces from each component of MIX+DCGAN.

6.2 Quantitative Results

Now we turn to quantitative measurement using the Inception Score (Salimans et al., 2016). Our method is applied to DCGAN and WassersteinGAN (Arjovsky et al., 2017); throughout, mixtures of 5 generators and 5 discriminators are used. At first sight the comparison of DCGAN vs. MIX+DCGAN seems unfair, because the latter uses 5 times the capacity of the former, with a corresponding penalty in running time per epoch. To address this, we also compare our method with a larger version of DCGAN with roughly the same number of parameters, and we find the former is consistently better than the latter, as detailed below.

To construct MIX+DCGAN, we build on top of the DCGAN trained with the losses proposed by Huang et al. (2017), which is the best variant so far without improved training techniques. The same hyper-parameters are used for a fair comparison; see Huang et al. (2017) for details. Similarly, for MIX+WassersteinGAN, the base GAN is identical to that proposed by Arjovsky et al. (2017), using their hyper-parameter scheme.

For a quantitative comparison, the Inception Score is calculated for each model, using 50,000 freshly generated samples that were not used in training. To sample a single image from our MIX+ models, we first select a generator from the mixture according to their assigned weights, and then draw a sample from the selected generator.
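As a sketch, this two-stage sampling procedure looks like the following; the toy generators and weights below are illustrative stand-ins, not the trained DCGAN components from our code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_generator(shift: float):
    """A stand-in 'generator': maps Gaussian noise to a shifted sample."""
    return lambda: rng.normal(loc=shift, size=4)

# Hypothetical trained mixture: component i has a generator G_i and a weight w_i.
generators = [make_toy_generator(s) for s in (-1.0, 0.0, 1.0)]
weights = np.array([0.2, 0.5, 0.3])  # assumed already normalized

def sample_from_mixture():
    """First pick a component according to the mixing weights,
    then draw a single sample from the selected generator."""
    i = rng.choice(len(generators), p=weights)
    return generators[i]()

samples = [sample_from_mixture() for _ in range(1000)]
```

Each evaluation image is drawn independently in this way; repeating the procedure produces the pool of generated samples used for scoring.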

Table 1 shows the results on the CIFAR-10 dataset. We find that, simply by applying our method to the baseline models, our MIX+ models achieve 7.72 vs. 7.16 on DCGAN, and 4.04 vs. 3.82 on WassersteinGAN. To confirm that the superiority of MIX+ models is not solely due to more parameters, we also tested a DCGAN model with 5 times as many parameters (roughly the same number of parameters as a 5-component MIX+DCGAN), tuned using a grid search over 27 sets of hyper-parameters (learning rates, dropout rates, and regularization weights). It achieves only 7.34 (labeled "5x size" in Table 1), which is lower than that of MIX+DCGAN. It is unclear how to apply MIX+ to S-GANs; we tried mixtures of the upper and bottom generators separately, but this resulted in worse scores. We leave that for future exploration.

Figure 4 shows how the Inception Scores of MIX+DCGAN vs. DCGAN evolve during training. MIX+DCGAN outperforms DCGAN throughout the entire training process, showing that it makes effective use of the additional capacity.

Arjovsky et al. (2017) show that the (approximate) Wasserstein loss, which is a neural network divergence by our definition, is meaningful because it correlates well with the visual quality of generated samples. Figure 5 shows the training dynamics of the neural network divergence of MIX+WassersteinGAN vs. WassersteinGAN, which strongly indicates that MIX+WassersteinGAN achieves a much lower divergence while also improving the visual quality of generated samples.

7 Conclusions

The notion of generalization for GANs has been clarified by introducing a new notion of distance between distributions, the neural net distance, whereas popular distances such as Wasserstein and JS may not generalize. Assuming the visual cortex is also a deep net (or some network of moderate capacity), generalization with respect to this metric is in principle sufficient to make the final samples look realistic to humans, even if the GAN does not actually learn the true distribution.

One issue raised by our analysis is that current GAN objectives cannot even enforce that the synthetic distribution has high diversity (Section 3.4). This is empirically verified in follow-up work (Arora and Zhang, 2017). Furthermore, the issue cannot be fixed by simply providing the discriminator with more training examples. Possibly some other change to the GAN setup is needed.

The paper also made progress on another unexplained issue about GANs, by showing that an approximate pure equilibrium exists for a certain natural training objective (Wasserstein), in which the generator wins the game. No assumption about the target distribution is needed.

Suspecting that a pure equilibrium may not exist for all objectives, we recommend in practice our MIX+GAN protocol using a small mixture of discriminators and generators. Our experiments show it improves the quality of several existing GAN training methods.

Finally, existence of an equilibrium does not imply that a simple algorithm (in this case, backpropagation) would find it easily. Understanding convergence remains widely open.


This work was done in part while the authors were hosted by the Simons Institute. We thank Moritz Hardt, Kunal Talwar, Luca Trevisan, Eric Price, and the referees for useful comments. This research was supported by the NSF, the Office of Naval Research, and the Simons Foundation.


Appendix A Generated Samples from Components of MIX+DCGAN

In Figure 6 and Figure 7 we show generated digits and faces from the different components of MIX+DCGAN.

Figure 6: MNIST Samples. Digits generated from each of the 3 components of MIX+DCGAN
Figure 7: CelebA Samples. Faces generated from each of the 3 components of MIX+DCGAN

Appendix B Omitted Proofs

In this section we give detailed proofs for the theorems in the main document.

B.1 Omitted Proofs for Section 3

We first show that JS divergence and Wasserstein distances can lead to overfitting.

Lemma 2 (Lemma 1 restated).

Let μ be the uniform Gaussian distribution N(0, (1/d)I) and let μ̂ be an empirical version of μ with m examples. Then with high probability we have d_JS(μ, μ̂) = log 2 and W(μ, μ̂) = Ω(1).


Proof. For the Jensen-Shannon divergence, observe that μ is a continuous distribution and μ̂ is discrete; therefore d_JS(μ, μ̂) = log 2.
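For completeness, here is the standard one-line computation behind this step: since μ is continuous and μ̂ is discrete, the two measures are mutually singular, so with f the density of μ, {p_i} the atoms of μ̂, and m = (μ + μ̂)/2 the midpoint measure (whose density on supp(μ) is f/2 and whose atoms are p_i/2),

```latex
\begin{align*}
d_{\mathrm{JS}}(\mu,\hat\mu)
  &= \tfrac12\,\mathrm{KL}\!\left(\mu \,\middle\|\, m\right)
   + \tfrac12\,\mathrm{KL}\!\left(\hat\mu \,\middle\|\, m\right) \\
  &= \tfrac12\int f(x)\log\frac{f(x)}{f(x)/2}\,dx
   + \tfrac12\sum_i p_i\log\frac{p_i}{p_i/2} \\
  &= \tfrac12\log 2 + \tfrac12\log 2 \;=\; \log 2 .
\end{align*}
```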

For the Wasserstein distance, let x_1, …, x_m be the empirical samples (fixed arbitrarily). For x ∼ μ, by standard concentration and union bounds, we have that with high probability

min_{i∈[m]} ‖x − x_i‖ = Ω(1).

Therefore, using the earth-mover interpretation of Wasserstein distance, we know W(μ, μ̂) = Ω(1). ∎
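The earth-mover interpretation invoked here is easiest to see in one dimension, where the optimal transport plan between two equal-size empirical distributions simply matches sorted samples in order. A small illustrative helper (a standard fact, not from the paper's code):

```python
import numpy as np

def w1_empirical_1d(xs, ys) -> float:
    """Earth-mover (W1) distance between two equal-size 1-D empirical
    distributions. In 1-D the optimal coupling pairs the i-th smallest
    sample of one set with the i-th smallest of the other."""
    xs, ys = np.sort(np.asarray(xs, float)), np.sort(np.asarray(ys, float))
    assert xs.shape == ys.shape, "expects equally many samples"
    return float(np.mean(np.abs(xs - ys)))
```

For example, shifting every sample by a constant c shifts the distance by exactly c, and if every point of one set is at least distance 1 from every point of the other, the distance is at least 1, which is the kind of lower bound used above.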

Next we consider sampling for both the generated distribution and the real distribution, and show that neither the JS divergence nor the Wasserstein distance generalizes.

Theorem B.1.

Let μ, ν be uniform Gaussian distributions N(0, (1/d)I). Suppose μ̂, ν̂ are empirical versions of μ, ν with m samples each, where m ≤ exp(cd) for a small enough constant c. Then with probability at least 1 − exp(−Ω(d)) we have

d_JS(μ̂, ν̂) = log 2 and W(μ̂, ν̂) ≥ 1.

Further, let μ̃, ν̃ be the convolutions of μ̂, ν̂ with a Gaussian distribution N(0, σ²I). As long as σ ≤ c′ for a small enough constant c′, we have d_JS(μ̃, ν̃) = Ω(1) with probability at least 1 − exp(−Ω(d)).


Proof. For the Jensen-Shannon divergence, we know that with probability 1 the supports of μ̂ and ν̂ are disjoint; therefore d_JS(μ̂, ν̂) = log 2.

For the Wasserstein distance, note that for two random Gaussian vectors x ∼ μ and y ∼ ν, their difference x − y is also Gaussian, with expected squared norm 2. Therefore we have

Pr[‖x − y‖ ≤ 1] ≤ exp(−Ω(d)).

As a result, since m ≤ exp(cd) for a small enough fixed constant c (c = 0.1 suffices), with probability 1 − exp(−Ω(d)) we can union bound over all the pairwise distances between points in the support of μ̂ and the support of ν̂. With high probability, the closest pair between μ̂ and ν̂ has distance at least 1; therefore the Wasserstein distance satisfies W(μ̂, ν̂) ≥ 1.
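This separation is easy to observe numerically: the minimum over all pairwise distances between two moderate-size samples from N(0, (1/d)I) stays close to √2, far from zero. The following is a self-contained illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1000, 500  # ambient dimension and samples per empirical distribution

# Two independent empirical distributions drawn from N(0, (1/d) I):
# typical vectors have norm ~1 and independent pairs are ~sqrt(2) apart.
mu_hat = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
nu_hat = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))

# All m^2 squared pairwise distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
sq = ((mu_hat**2).sum(1)[:, None] + (nu_hat**2).sum(1)[None, :]
      - 2.0 * mu_hat @ nu_hat.T)
dists = np.sqrt(np.maximum(sq, 0.0))

print(f"min pairwise distance:  {dists.min():.3f}")   # bounded away from 0
print(f"mean pairwise distance: {dists.mean():.3f}")  # concentrates near sqrt(2)
```

Since the closest pair is at distance well above a constant, every transport plan between the two supports must move all of the mass a constant distance, which is exactly the earth-mover lower bound used in the proof.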

Finally we prove that even if we add noise to the two distributions, the JS divergence is still large. For the noised distributions μ̃, ν̃, let f, g be their density functions and let h = (f + g)/2. For a point x, let b(x) be a Bernoulli variable that is 1 with probability f(x)/(f(x) + g(x)). Writing H for the binary entropy function, we can rewrite the JS divergence as

d_JS(μ̃, ν̃) = log 2 − E_{x∼h}[H(b(x))].

Let S be the union of the radius-0.2 balls around the samples in μ̂ and ν̂. Since with high probability all these samples have pairwise distance at least 1, by the form of the Gaussian density function we know that (a) the balls do not intersect; (b) within each ball, one of f(x), g(x) dominates the other by a large constant factor; and (c) the union of these balls takes up at least a constant fraction of the density of h.

Therefore, by (b), for every x ∈ S we know H(b(x)) ≤ log 2 − Ω(1), and combining this with (c) gives d_JS(μ̃, ν̃) = Ω(1). ∎

Next we prove that the neural network distance does generalize, given enough samples. Let us first recall the setting: we assume that the measuring function φ takes values in [−Δ, Δ] and is L_φ-Lipschitz. Further, F = {D_v} is the class of discriminators that are L-Lipschitz with respect to the parameters v. As usual, we use p to denote the number of parameters in v.

Theorem B.2 (Theorem 3.1 restated).

In the setting described in the previous paragraph, let μ, ν be two distributions and μ̂, ν̂ be empirical versions with at least m samples each. There is a universal constant c such that when m ≥ (c p Δ² / ε²) · log(p L L_φ / ε), we have with probability at least 1 − exp(−p) over the randomness of μ̂ and ν̂,

|d_F(μ̂, ν̂) − d_F(μ, ν)| ≤ ε.


Proof. The proof uses concentration bounds. We show that with high probability, for every discriminator D_v,

|E_{x∼μ}[φ(D_v(x))] − E_{x∼μ̂}[φ(D_v(x))]| ≤ ε/2,   (5)

|E_{x∼ν}[φ(D_v(x))] − E_{x∼ν̂}[φ(D_v(x))]| ≤ ε/2.   (6)

If d_F(μ̂, ν̂) ≥ d_F(μ, ν), let D_v be the optimal discriminator for the pair (μ̂, ν̂); we then have

d_F(μ̂, ν̂) − d_F(μ, ν) ≤ |E_{x∼μ̂}[φ(D_v(x))] − E_{x∼μ}[φ(D_v(x))]| + |E_{x∼ν̂}[φ(D_v(x))] − E_{x∼ν}[φ(D_v(x))]| ≤ ε.

The other direction is similar.

Now we prove the claimed bound (5) (the proof of (6) is identical). Let V be a finite set of parameters such that every v is within distance δ of a point in V (a so-called δ-net). Standard constructions give a V satisfying log |V| ≤ O(p log(p/δ)). For every v ∈ V, by the Chernoff bound we know

Pr[ |E_{x∼μ}[φ(D_v(x))] − E_{x∼μ̂}[φ(D_v(x))]| ≥ ε/4 ] ≤ exp(−Ω(ε² m / Δ²)).

Therefore, when m ≥ (c p Δ² / ε²) · log(p L L_φ / ε) for a large enough constant c, we can union bound over all v ∈ V. With high probability (at least 1 − exp(−p)), for all v ∈ V we have |E_{x∼μ}[φ(D_v(x))] − E_{x∼μ̂}[φ(D_v(x))]| ≤ ε/4.

Now, for every v, we can find a v′ ∈ V such that ‖v − v′‖ ≤ δ. Since φ(D_v(x)) is (L L_φ)-Lipschitz in v, taking δ small enough (δ = ε/(8 L L_φ) suffices) gives |E[φ(D_v(x))] − E[φ(D_{v′}(x))]| ≤ ε/8 under any distribution. Therefore

|E_{x∼μ}[φ(D_v(x))] − E_{x∼μ̂}[φ(D_v(x))]| ≤ ε/4 + 2 · ε/8 = ε/2.

This finishes the proof of (5). ∎
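The Chernoff-plus-union-bound step can be illustrated numerically for a single fixed discriminator. Below, a sigmoid of a random linear map is a toy stand-in for a bounded φ(D_v(·)) in [0, 1]; the names and setup are illustrative only, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 50
v = rng.normal(size=d)  # a fixed "discriminator" parameter vector

def phi_D(x: np.ndarray) -> np.ndarray:
    """Toy bounded test function phi(D_v(x)) in [0, 1]: sigmoid(<x, v>)."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Estimate the population mean E_{x~mu}[phi(D_v(x))] from a large sample.
true_mean = phi_D(rng.normal(size=(200_000, d))).mean()

def worst_deviation(m: int, trials: int = 100) -> float:
    """Largest |E_mu[phi(D_v)] - E_muhat[phi(D_v)]| over repeated draws
    of an m-sample empirical distribution muhat."""
    return max(abs(phi_D(rng.normal(size=(m, d))).mean() - true_mean)
               for _ in range(trials))

# Hoeffding-style concentration: the deviation shrinks roughly like 1/sqrt(m),
# which is what lets the union bound over the delta-net go through.
print(worst_deviation(100), worst_deviation(10_000))
```

With 100x more samples the worst observed deviation drops by roughly a factor of 10, matching the 1/√m rate that the sample-complexity bound is built on.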

Finally, we generalize the above theorem to hold for all generators in a family.

Corollary B.1 (Corollary 3.1 restated).

In the setting of Theorem 3.1, suppose ν_1, ν_2, …, ν_k are the distributions of the generators in the k iterations of the training, and assume log k ≤ p. There is a universal constant c such that when m ≥ (c p Δ² / ε²) · log(p L L_φ / ε), with probability at least 1 − exp(−p), for all t ∈ [k],

|d_F(μ̂, ν̂_t) − d_F(μ, ν_t)| ≤ ε.


Proof. This follows from the proof of Theorem 3.1. Note that we have fresh samples for every generator distribution ν_t, so Equation (6) holds for all of them with high probability by a union bound. For the real distribution, notice that Equation (5) does not depend on the generator, so it is also true with high probability. ∎

B.2 Omitted Proofs for Section 4: Expressive Power and Existence of Equilibrium

Mixed Equilibrium

We first show there is a finite mixture of generators and discriminators that approximates the equilibrium of infinite mixtures.

Again we recall the setting: suppose the measuring function φ is L_φ-Lipschitz and bounded in [−Δ, Δ], and the generators and discriminators are L-Lipschitz with respect to their parameters and L′-Lipschitz with respect to their inputs.

Theorem B.3 (Theorem 4.2 restated).

In the setting above, if the generator can approximate any point mass (that is, for every point x and any ε > 0, there is a generator G_u such that E_h[‖G_u(h) − x‖] ≤ ε), there is a universal constant c such that for any ε, there exist T generators G_{u_1}, …, G_{u_T}, with T polynomial in Δ, p, L, L′ and 1/ε. Let S be the uniform distribution on {u_1, …, u_T}, and let D be a discriminator that outputs only 1/2; then (S, D) is an ε-approximate equilibrium.


Proof. We first prove that the value of the game must be equal to 2φ(1/2). For the discriminator, one strategy is to just output 1/2. This strategy has payoff 2φ(1/2) no matter what the generator does, so the value of the game is at least 2φ(1/2).

For the generator, we use the assumption that for any point x and any ε > 0, there is a generator (which we denote by G_x) such that E_h[‖G_x(h) − x‖] ≤ ε. Now for any ε, consider the following mixture of generators: sample x ∼ μ, then use the generator G_x. Let ρ be the distribution generated by this mixture of generators. The Wasserstein distance between ρ and μ is bounded by ε. Since the discriminator is L′-Lipschitz with respect to its input, it cannot distinguish between ρ and μ. In particular we know for any discriminator