1 Introduction
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good cryptographic primitives (Abadi and Andersen, 2016); see the survey by Goodfellow (2016). Various novel architectures and training objectives have been introduced to address perceived shortcomings of the original idea, leading to more stable training and more realistic generative models in practice (see Odena et al. (2016); Huang et al. (2017); Radford et al. (2016); Tolstikhin et al. (2017); Salimans et al. (2016); Jiwoong Im et al. (2016); Durugkar et al. (2016) and the references therein).
The goal is to train a generator deep net whose input is a standard Gaussian, and whose output is a sample from some distribution $\mathcal{D}_{synth}$ on $\mathbb{R}^d$, which has to be close to some target distribution $\mathcal{D}_{real}$ (which could be, say, real-life images represented using raw pixels). The training uses samples from $\mathcal{D}_{real}$, and together with the generator net also trains a discriminator deep net that tries to maximize its ability to distinguish between samples from $\mathcal{D}_{real}$ and $\mathcal{D}_{synth}$. So long as the discriminator succeeds at this task with nonzero probability, its success can be used to generate feedback (using backpropagation) to the generator, thus improving its distribution $\mathcal{D}_{synth}$. Training continues until the generator wins, meaning that the discriminator can do no better than random guessing when deciding whether or not a particular sample came from $\mathcal{D}_{real}$ or $\mathcal{D}_{synth}$. This basic iterative framework has been tried with many training objectives; see Section 2.

But it has been unclear what to conclude when the generator wins this game: is $\mathcal{D}_{synth}$ close to $\mathcal{D}_{real}$ in some metric? One seems to need some extension of generalization theory that would imply such a conclusion. The hurdle is that the distribution $\mathcal{D}_{real}$ could be complicated and may have many peaks and valleys; see Figure 1. The number of peaks (modes) may even be exponential in $d$. (Recall the curse of dimensionality: in $d$ dimensions there are $\exp(d)$ directions whose pairwise angle exceeds, say, $\pi/3$, and each could be the site of a peak.) The number of samples from $\mathcal{D}_{real}$ (and from $\mathcal{D}_{synth}$, for that matter) used in the training is much smaller, and thus may not reflect most of the peaks and valleys of $\mathcal{D}_{real}$.
A standard analysis due to Goodfellow et al. (2014) shows that when the discriminator capacity (= number of parameters) and the number of samples are "large enough", a win by the generator implies that $\mathcal{D}_{synth}$ is very close to $\mathcal{D}_{real}$ (see Section 2). But the discussion in the previous paragraph raises the possibility that "sufficiently large" in this analysis may need to be $\exp(d)$.
Another open theoretical issue is whether an equilibrium always exists in this game between generator and discriminator. Just as a zero gradient is a necessary condition for standard optimization to halt, the corresponding necessary condition in a two-player game is an equilibrium. Conceivably some of the instability often observed while training GANs could arise simply from lack of an equilibrium. (Recently Arjovsky et al. (2017) suggested that using their Wasserstein objective in practice reduces instability, but we still lack a proof of existence of an equilibrium.) Standard game theory is of no help here because we need a so-called pure equilibrium, and simple counterexamples such as rock/paper/scissors show that it doesn't exist in general. (Such counterexamples are easily turned into toy GAN scenarios with generator and discriminator having finite capacity, where the game lacks a pure equilibrium; see Appendix C.)

1.1 Our Contributions
We formally define generalization for GANs in Section 3 and show that for previously studied notions of distance between distributions, generalization is not guaranteed (Lemma 1). In fact we show that the generator can win even when $\mathcal{D}_{real}$ and $\mathcal{D}_{synth}$ are arbitrarily far apart in any of the standard metrics.
However, we can guarantee some weaker notion of generalization by introducing a new metric on distributions, the neural net distance. We show that generalization does happen with moderate number of training examples (i.e., when the generator wins, the two distributions must be close in neural net distance). However, this weaker metric comes at a cost: it can be nearzero even when the trained and target distributions are very far (Section 3.4).
To explore the existence of equilibria we turn in Section 4 to infinite mixtures of generator deep nets. These are clearly vastly more expressive than a single generator net: e.g., a standard result in Bayesian nonparametrics says that every probability density is closely approximable by an infinite mixture of Gaussians (Ghosh et al., 2003). Thus, unsurprisingly, an infinite mixture should win the game. We then prove rigorously that even a finite mixture of fairly reasonable size can closely approximate the performance of the infinite mixture (Theorem 4.2).
This insight also allows us to construct a new architecture for the generator network where there exists an approximate
equilibrium that is pure. (Roughly speaking, an approximate equilibrium is one in which neither of the players can gain much by deviating from their strategies.) This existence proof for an approximate equilibrium unfortunately involves a quadratic blowup in the "size" of the generator (which is still better than the naive exponential blowup one might expect). Improving this is left for future theoretical work. But we propose a heuristic approximation to the mixture idea, which introduces a new framework for training that we call MIX+GAN. It can be added on top of any existing GAN training procedure, including those that use divergence objectives. Experiments in Section 6 show that for several previous techniques, MIX+GAN stabilizes the training, and in some cases improves the performance.

2 Preliminaries
Notations. Throughout the paper we use $d$ for the dimension of samples and $p$ for the number of parameters in the generator/discriminator. In Section 3 we use $m$ for the number of samples.
Generators and discriminators. Let $\mathcal{G} = \{G_u, u \in \mathcal{U}\}$ denote the class of generators, where $G_u$ is a function — often a neural network in practice — from $\mathbb{R}^\ell$ to $\mathbb{R}^d$, indexed by $u$, which denotes the parameters of the generator. Here $\mathcal{U}$ denotes the possible ranges of the parameters, and without loss of generality we assume $\mathcal{U}$ is a subset of the unit ball (otherwise we can rescale the parameters by changing the parameterization). The generator $G_u$ defines a distribution $\mathcal{D}_{G_u}$ as follows: generate $h$ from the $\ell$-dimensional spherical Gaussian distribution and then apply $G_u$ on $h$; the result is a sample of the distribution $\mathcal{D}_{G_u}$. We drop the subscript $u$ in $\mathcal{D}_{G_u}$ when it is clear from context. Let $\mathcal{D} = \{D_v, v \in \mathcal{V}\}$ denote the class of discriminators, where $D_v$ is a function from $\mathbb{R}^d$ to $[0,1]$ and $v$ denotes the parameters of $D_v$.
Training the discriminator consists of trying to make it output a high value (preferably 1) when $x$ is sampled from the distribution $\mathcal{D}_{real}$ and a low value (preferably 0) when $x$ is sampled from the synthetic distribution $\mathcal{D}_{G_u}$. Training the generator consists of trying to make its synthetic distribution "similar" to $\mathcal{D}_{real}$, in the sense that the discriminator's output tends to be similar on the two distributions.
We assume $\mathcal{G}$ and $\mathcal{D}$ are $L$-Lipschitz with respect to their parameters. That is, for all $u, u' \in \mathcal{U}$ and any input $h$, we have $\|G_u(h) - G_{u'}(h)\| \le L \|u - u'\|$ (and similarly for $\mathcal{D}$). Notice, this is distinct from the assumption (which we will also sometimes make) that the functions themselves are Lipschitz: that focuses on the change in function value when we change the input, while keeping the parameters fixed. (Both Lipschitz parameters can be exponential in the number of layers in the neural net; however, our theorems depend only on the log of the Lipschitz parameters.)
Objective functions. The standard GAN training (Goodfellow et al., 2014) consists of training parameters so as to optimize an objective function:

$\min_u \max_v \; \mathbb{E}_{x\sim \mathcal{D}_{real}}[\log D_v(x)] + \mathbb{E}_{x\sim \mathcal{D}_{G_u}}[\log(1 - D_v(x))] \qquad (1)$

Intuitively, this says that the discriminator should give high values to the real samples and low values to the generated samples. The $\log$ function was suggested because of its interpretation as the likelihood, and it also has a nice information-theoretic interpretation described below. However, in practice it can cause problems since $\log x \to -\infty$ as $x \to 0$. The objective still makes intuitive sense if we replace $\log$ by any monotone function $\phi$, which yields the objective:

$\min_u \max_v \; \mathbb{E}_{x\sim \mathcal{D}_{real}}[\phi(D_v(x))] + \mathbb{E}_{x\sim \mathcal{D}_{G_u}}[\phi(1 - D_v(x))] \qquad (2)$
We call the function $\phi$ the measuring function. It should be concave so that when $\mathcal{D}_{real}$ and $\mathcal{D}_{G_u}$ are the same distribution, the best strategy for the discriminator is just to output $1/2$, and the optimal value is $2\phi(1/2)$. In later proofs, we will require $\phi$ to be bounded and Lipschitz. Indeed, in practice training often uses a suitably truncated version of $\phi = \log$ (which takes values in a bounded interval and is Lipschitz), and the recently proposed Wasserstein GAN (Arjovsky et al., 2017) objective uses $\phi(x) = x$.
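To make the role of the measuring function concrete, here is a minimal numerical sketch of the empirical version of objective (2). The helper `gan_objective`, the constant-1/2 discriminator, and the toy Gaussian samples are illustrative stand-ins of our own, not the paper's training code.

```python
import numpy as np

def gan_objective(disc, real_samples, fake_samples, phi):
    """Empirical version of objective (2): the mean of phi(D(x)) over real
    samples plus the mean of phi(1 - D(x)) over generated samples."""
    real_term = np.mean([phi(disc(x)) for x in real_samples])
    fake_term = np.mean([phi(1.0 - disc(x)) for x in fake_samples])
    return real_term + fake_term

# A discriminator that always outputs 1/2, i.e. random guessing.
constant_half = lambda x: 0.5

rng = np.random.default_rng(0)
real = rng.standard_normal((100, 4))   # toy stand-in for real samples
fake = rng.standard_normal((100, 4))   # toy stand-in for generated samples

# With phi = log, the constant-1/2 discriminator attains 2*phi(1/2) = -2 log 2,
# regardless of the samples -- the optimal value when the distributions match.
val = gan_objective(constant_half, real, fake, np.log)
assert abs(val - 2 * np.log(0.5)) < 1e-9
```

Note that the constant-1/2 discriminator scores exactly $2\phi(1/2)$ for any choice of $\phi$, which is why that value appears as the baseline throughout the analysis.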
Training with finite samples. The objective function (2) assumes we have an infinite number of samples from $\mathcal{D}_{real}$ to estimate the value $\mathbb{E}_{x\sim \mathcal{D}_{real}}[\phi(D_v(x))]$. With finite training examples $x_1, \dots, x_m \sim \mathcal{D}_{real}$, one uses $\frac{1}{m}\sum_{i=1}^m \phi(D_v(x_i))$ to estimate this quantity. We call the distribution that gives $1/m$ probability to each of the $x_i$'s the empirical version of the real distribution, denoted $\hat{\mathcal{D}}_{real}$. Similarly, one can use an empirical version $\hat{\mathcal{D}}_{synth}$ to estimate $\mathbb{E}_{x\sim \mathcal{D}_{synth}}[\phi(1 - D_v(x))]$.

Standard interpretation via distance between distributions. Towards analyzing GANs, researchers have assumed access to an infinite number of examples and that the discriminator is chosen optimally within some large class of functions that contains all possible neural nets. This often allows computing the optimal discriminator analytically and therefore removing the maximum operation from the objective (2), which leads to some interpretation of how, and in what sense, the resulting distribution $\mathcal{D}_{G_u}$ is close to the true distribution $\mathcal{D}_{real}$.
Using the original objective function (1), the optimal choice among all possible functions from $\mathbb{R}^d$ to $[0,1]$ is $D(x) = \frac{P_{real}(x)}{P_{real}(x) + P_{G_u}(x)}$, as shown in Goodfellow et al. (2014). Here $P_{real}(x)$ is the density of $x$ in the real distribution, and $P_{G_u}(x)$ is the density of $x$ in the distribution generated by the generator $G_u$. Using this discriminator — though it is computationally infeasible to obtain it — one can show that the minimization problem over the generator corresponds to minimizing the Jensen-Shannon (JS) divergence between the true distribution $\mathcal{D}_{real}$ and the generated distribution $\mathcal{D}_{G_u}$. Recall that for two distributions $\mu$ and $\nu$, the JS divergence is defined by

$d_{JS}(\mu, \nu) = \frac{1}{2}\, KL\!\left(\mu \,\Big\|\, \frac{\mu+\nu}{2}\right) + \frac{1}{2}\, KL\!\left(\nu \,\Big\|\, \frac{\mu+\nu}{2}\right).$
Other measuring functions and choices of discriminator class lead to distance functions between distributions other than JS divergence. Notably, Arjovsky et al. (2017) show that when $\phi(t) = t$, and the discriminator is chosen among all 1-Lipschitz functions, maxing out the discriminator, the generator is attempting to minimize the Wasserstein distance between $\mathcal{D}_{real}$ and $\mathcal{D}_{G_u}$. Recall that the Wasserstein distance between $\mu$ and $\nu$ is defined as

$d_W(\mu, \nu) = \sup_{f:\, \|f\|_{Lip} \le 1} \; \mathbb{E}_{x\sim\mu}[f(x)] - \mathbb{E}_{x\sim\nu}[f(x)].$
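As a small concrete illustration of the Wasserstein distance (here via its optimal-transport form rather than the dual form above), the one-dimensional empirical case can be computed exactly by matching sorted samples. This helper is a hypothetical sketch for intuition, not part of the WGAN algorithm.

```python
import numpy as np

def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.
    In 1-D the optimal coupling simply matches the samples in sorted order."""
    xs, ys = np.sort(xs), np.sort(ys)
    return np.mean(np.abs(xs - ys))

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])   # b is a shifted by exactly 1

# Shifting a distribution by delta moves it Wasserstein distance delta.
assert abs(wasserstein_1d(a, b) - 1.0) < 1e-12
```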
3 Generalization theory for GANs
The above interpretation of GANs in terms of minimizing a distance (such as JS divergence or Wasserstein distance) between the real distribution and the generated distribution relies on two crucial assumptions: (i) a very expressive class of discriminators, such as the set of all bounded discriminators or the set of all 1-Lipschitz discriminators, and (ii) a very large number of examples to compute/estimate the objective (1) or (2). Neither assumption holds in practice, and we will show next that this greatly affects the generalization ability, a notion we introduce in Section 3.1.
3.1 Definition of Generalization
Our definition is motivated from supervised classification, where training is said to generalize if the training and test error closely track each other. (Since the purpose of GANs training is to learn a distribution, one could also consider a stronger definition of successful training, as discussed in Section 3.4.)
Let $x_1, \dots, x_m$ be the training examples, and let $\hat{\mathcal{D}}_{real}$ denote the uniform distribution over $x_1, \dots, x_m$. Similarly, let $\hat{\mathcal{D}}_G$ denote the empirical distribution over a set of examples from the generated distribution $\mathcal{D}_G$. In the training of GANs, one implicitly uses $d(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_G)$ to approximate the quantity $d(\mathcal{D}_{real}, \mathcal{D}_G)$. Inspired by the observation that the training objective of GANs and its variants is to minimize some distance (or divergence) $d(\cdot,\cdot)$ between $\hat{\mathcal{D}}_{real}$ and $\hat{\mathcal{D}}_G$ using finite samples, we define the generalization of GANs as follows:

Definition 1.
Given $\hat{\mathcal{D}}_{real}$, an empirical version of the true distribution with $m$ samples, a generated distribution $\mathcal{D}_G$ generalizes under the divergence or distance between distributions $d(\cdot,\cdot)$ with generalization error $\epsilon$ if the following holds with high probability (over the choice of $\hat{\mathcal{D}}_{real}$):

$\left| d(\mathcal{D}_{real}, \mathcal{D}_G) - d(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_G) \right| \le \epsilon \qquad (3)$

where $\hat{\mathcal{D}}_G$ is an empirical version of the generated distribution with a polynomial number of samples (drawn after $\mathcal{D}_G$ is fixed).
In words, generalization in GANs means that the population distance between the true and generated distributions is close to the empirical distance between the empirical distributions. Our goal is to make the former distance small, whereas the latter is what we can access and minimize in practice. The definition allows only a polynomial number of samples from the generated distribution because the training algorithm should run in polynomial time.
We also note that stronger versions of Definition 1
can be considered. For example, as an analog of uniform convergence in supervised learning, we can require (3) to hold for all generators among a class of candidate generators. Indeed, our results in Section 3.3 show that all generators generalize under the neural net distance with a reasonable number of examples.

3.2 JS Divergence and Wasserstein don't Generalize
As a warmup, we show that JS divergence and Wasserstein distance don't generalize with any polynomial number of examples, because the population distance (divergence) is not reflected by the empirical distance.
Lemma 1.
Let $\mu$ be a spherical Gaussian distribution and $\hat{\mu}$ an empirical version of $\mu$ with $m$ examples. Then with high probability, $d_{JS}(\mu, \hat{\mu}) = \log 2$, and $d_W(\mu, \hat{\mu})$ is bounded below by an absolute constant.
There are two consequences of Lemma 1. First, consider the situation where $\mathcal{D}_G = \mathcal{D}_{real} = \mu$. Then $d(\mathcal{D}_{real}, \mathcal{D}_G) = 0$, but $d(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_G)$ is bounded away from 0 as long as we have a polynomial number of examples. This violates the generalization definition, equation (3).

Second, consider the case $\mathcal{D}_G = \hat{\mathcal{D}}_{real}$, that is, the generator memorizes all of the training examples. In this case, since $\mathcal{D}_G$ is a discrete distribution with finite support, with enough (polynomial) examples, effectively we have $\hat{\mathcal{D}}_G \approx \hat{\mathcal{D}}_{real}$. Therefore $d(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_G) \approx 0$, whereas $d(\mathcal{D}_{real}, \mathcal{D}_G)$ is bounded away from 0 by Lemma 1. In other words, with any polynomial number of examples, it is possible to overfit to the training examples using Wasserstein distance. The same argument also applies to JS divergence. See Appendix B.1 for the formal proof. Notice that this result does not contradict the experiments of Arjovsky et al. (2017), since they actually use not the Wasserstein distance but a surrogate distance that does generalize, as we show next.
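The phenomenon behind Lemma 1 is easy to observe numerically: in high dimension, two independent empirical versions of the same Gaussian have no nearby points, so any transport plan between them is costly. The dimensions, sample size, and bound below are crude illustrative choices of ours, not the lemma's formal constants.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 50
# Two independent empirical versions of the SAME distribution N(0, I_d).
X = rng.standard_normal((m, d))
Y = rng.standard_normal((m, d))

# Any coupling must move every point of X onto some point of Y, so the
# empirical Wasserstein distance is at least the minimum pairwise distance.
dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
lower_bound = dists.min()

# Despite identical populations, the empirical distance is far from 0:
# typical pairwise distances concentrate around sqrt(2d) ~ 14 here.
assert lower_bound > np.sqrt(d) / 2
```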
3.3 Generalization bounds for neural net distance
Which distance measure between $\hat{\mathcal{D}}_{real}$ and $\hat{\mathcal{D}}_G$ is the GAN objective actually minimizing, and can we analyze its generalization performance? Towards answering these questions in full generality (given the multiplicity of GAN objectives), we consider the following general distance measure that unifies JS divergence, Wasserstein distance, and the neural net distance that we define later in this section.
Definition 2 ($\mathcal{F}$-distance).
Let $\mathcal{F}$ be a class of functions from $\mathbb{R}^d$ to $[0,1]$ such that $1 - f \in \mathcal{F}$ if $f \in \mathcal{F}$. Let $\phi$ be a concave measuring function. Then the $\mathcal{F}$-divergence with respect to $\phi$ between two distributions $\mu$ and $\nu$ supported on $\mathbb{R}^d$ is defined as

$d_{\mathcal{F},\phi}(\mu, \nu) = \sup_{D\in\mathcal{F}} \; \mathbb{E}_{x\sim\mu}[\phi(D(x))] + \mathbb{E}_{x\sim\nu}[\phi(1 - D(x))] - 2\phi(1/2).$

When $\phi(t) = t$, we have that $d_{\mathcal{F},\phi}$ is a distance function (technically a pseudometric; this is also known as an integral probability metric (Müller, 1997)), and with slight abuse of notation we write it simply as $d_{\mathcal{F}}(\mu, \nu)$.
Example 1.
When $\phi(t) = \log t$ and $\mathcal{F}$ is the set of all functions from $\mathbb{R}^d$ to $[0,1]$, $d_{\mathcal{F},\phi}$ is the same as JS divergence (up to scaling). When $\phi(t) = t$ and $\mathcal{F}$ is the set of all 1-Lipschitz functions from $\mathbb{R}^d$ to $[0,1]$, $d_{\mathcal{F}}$ is the Wasserstein distance.

Example 2.
Suppose $\mathcal{F}$ is a set of neural networks and $\phi(t) = \log t$; then the original GAN objective function is equivalent to $\min_{G}\, d_{\mathcal{F},\phi}(\hat{\mathcal{D}}_{real}, \mathcal{D}_G)$. Suppose $\mathcal{F}$ is the set of neural networks with weights clipped so that they are Lipschitz, and $\phi(t) = t$; then the objective function used empirically in Arjovsky et al. (2017) is equivalent to $\min_{G}\, d_{\mathcal{F}}(\hat{\mathcal{D}}_{real}, \mathcal{D}_G)$.
GAN training uses $\mathcal{F}$ to be a class of neural nets with a bound $p$ on the number of parameters; we then informally refer to $d_{\mathcal{F}}$ as the neural net distance. The next theorem establishes that generalization in the sense of equation (3) does hold for it (with uniform convergence). We assume that the measuring function $\phi$ takes values in $[-\Delta, \Delta]$ and is $L_\phi$-Lipschitz. Further, $\mathcal{F}$ is a class of discriminators that is $L$-Lipschitz with respect to the parameters $v$. As usual, we use $p$ to denote the number of parameters in $v$.
Theorem 3.1.
In the setting of the previous paragraph, let $\mu, \nu$ be two distributions and $\hat{\mu}, \hat{\nu}$ be empirical versions with at least $m$ samples each. There is a universal constant $c$ such that when $m \ge \frac{c\, p\, \Delta^2 \log(L L_\phi\, p/\epsilon)}{\epsilon^2}$, we have with probability at least $1 - \exp(-p)$ over the randomness of $\hat{\mu}$ and $\hat{\nu}$,

$\left| d_{\mathcal{F},\phi}(\hat{\mu}, \hat{\nu}) - d_{\mathcal{F},\phi}(\mu, \nu) \right| \le \epsilon.$
See Appendix B.1 for the proof. The intuition is that there aren’t too many distinct discriminators, and thus given enough samples the expectation over the empirical distribution converges to the expectation over the true distribution for all discriminators.
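To illustrate how this expectation gap behaves for a small discriminator class, here is a toy Monte-Carlo estimate of the neural-net distance using single-neuron sigmoid discriminators with $\phi(t)=t$. The class, the grid search over directions, and the sample sizes are illustrative assumptions of ours, far simpler than the deep nets used in practice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_distance(X, Y, directions):
    """Estimate d_F(X, Y) for the toy class F = {x -> sigmoid(v.x)} with
    phi(t) = t, maximizing over a grid of parameter vectors v.  Since F is
    closed under f -> 1-f, the supremum equals the largest absolute gap
    between the two sample means of the discriminator output."""
    best = 0.0
    for v in directions:
        gap = sigmoid(X @ v).mean() - sigmoid(Y @ v).mean()
        best = max(best, abs(gap))
    return best

rng = np.random.default_rng(1)
vs = rng.standard_normal((200, 2))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)   # unit-norm parameters

# Two fresh samples of the SAME distribution: the estimated distance is tiny.
same = nn_distance(rng.standard_normal((2000, 2)),
                   rng.standard_normal((2000, 2)), vs)
# A shifted distribution: the estimated distance is clearly larger.
shifted = nn_distance(rng.standard_normal((2000, 2)),
                      rng.standard_normal((2000, 2)) + 3.0, vs)
assert same < shifted
```

With a few thousand samples the same-distribution estimate is already near zero while the shifted one is not, matching the theorem's message that moderately many samples suffice for this low-capacity class.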
Theorem 3.1 shows that the neural net divergence (and neural net distance) has much better generalization properties than JS divergence or Wasserstein distance. If the GAN successfully minimizes the neural net divergence between the empirical distributions, that is, makes $d_{\mathcal{F},\phi}(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_G)$ small, then we know the neural net divergence $d_{\mathcal{F},\phi}(\mathcal{D}_{real}, \mathcal{D}_G)$ between the true distributions is also small. It is possible to change the proof to show that this generalization continues to hold at every iteration of the training, as shown in the following corollary.
Corollary 3.1.
In the setting of Theorem 3.1, suppose $G_1, G_2, \dots, G_K$ ($\log K \le p$) are the generators in the $K$ iterations of the training. There is a universal constant $c$ such that when $m \ge \frac{c\, p\, \Delta^2 \log(L L_\phi\, p/\epsilon)}{\epsilon^2}$, with probability at least $1 - \exp(-p)$, for all $t \in [K]$,

$\left| d_{\mathcal{F},\phi}(\mathcal{D}_{real}, \mathcal{D}_{G_t}) - d_{\mathcal{F},\phi}(\hat{\mathcal{D}}_{real}, \hat{\mathcal{D}}_{G_t}) \right| \le \epsilon.$
The key observation here is that the objective is separated into two parts, and the generator is not directly related to the term involving $\hat{\mathcal{D}}_{real}$. So even though we don't have fresh examples at each iteration, the generalization bound still holds. The detailed proof appears in Appendix B.1.
3.4 Generalization vs Diversity
Since the final goal of GAN training is to learn a distribution, it is worth understanding that though weak generalization in the sense of Section 3.3 is guaranteed, it comes at a cost. For JS divergence and Wasserstein distance, when the distance between two distributions $\mu$ and $\nu$ is small, it is safe to conclude that $\mu$ and $\nu$ are almost the same. However, the neural net distance $d_{\mathcal{F}}(\mu, \nu)$ can be small even if $\mu$ and $\nu$ are not very close. As a simple corollary of Theorem 3.1, we obtain:
Corollary 3.2 (Low-capacity discriminators cannot detect lack of diversity).
Let $\hat{\mu}$ be the empirical version of a distribution $\mu$ with $m$ samples. There is a universal constant $c$ such that when $m \ge \frac{c\, p\, \Delta^2 \log(L L_\phi\, p/\epsilon)}{\epsilon^2}$, we have that with probability at least $1 - \exp(-p)$, $d_{\mathcal{F},\phi}(\mu, \hat{\mu}) \le \epsilon$.

That is, the neural net distance for nets with $p$ parameters cannot distinguish between a distribution $\mu$ and a distribution with support $\tilde{O}(p/\epsilon^2)$. In fact the proof still works if the discriminator is allowed to take many more samples from $\mu$; the reason they don't help is that its capacity is limited to $p$.
4 Expressive power and existence of equilibrium
Section 3 clarified the notion of generalization for GANs: namely, the neural-net divergence between the generated distribution and $\mathcal{D}_{real}$ on the empirical samples closely tracks the divergence on the full distribution (i.e., unseen samples). But this doesn't explain why in practice the generator usually "wins", so that the discriminator is unable to do much better than random guessing at the end. In other words, was it sheer luck that so many real-life distributions turned out to be close in neural-net distance to a distribution produced by a fairly compact neural net? This section suggests no luck may be needed.
The explanation starts with a thought experiment. Imagine allowing a much more powerful generator, namely, an infinite mixture of deep nets, each of size $p$. So long as the deep net class is capable of generating simple Gaussians, such mixtures are quite powerful, since a classical result says that an infinite mixture of simple Gaussians can closely approximate $\mathcal{D}_{real}$. Thus an infinite mixture of deep net generators will "win" the GAN game, not only against a discriminator that is a small deep net but also against more powerful discriminators (e.g., any Lipschitz function).
The next stage in the thought experiment is to imagine a much less powerful generator, which is a mix of only a few deep nets, not infinitely many. Simple counterexamples show that now the distribution $\mathcal{D}$ will not closely approximate an arbitrary $\mathcal{D}_{real}$ with respect to standard metrics. Nevertheless, could the generator still win the GAN game against a deep net of bounded capacity (i.e., leave the deep net unable to distinguish $\mathcal{D}$ and $\mathcal{D}_{real}$)? We show it can.
Informal Theorem: If the discriminator is a deep net with $p$ parameters, then a mixture of $\tilde{O}(p/\epsilon^2)$ generator nets can produce a distribution $\mathcal{D}$ that the discriminator will be unable to distinguish from $\mathcal{D}_{real}$ with probability more than $\epsilon$. (Here the $\tilde{O}$ notation hides some nuisance factors.)
This informal theorem is also a component of our result below about the existence of an approximate pure equilibrium. We will first show that a finite mixture of generators can "win" against all discriminators, and then discuss how this mixed generator can be realized as a single generator network that is one layer deeper.
4.1 Equilibrium using a Mixture of Generators
For a class of generators $\{G_u, u \in \mathcal{U}\}$ and a class of discriminators $\{D_v, v \in \mathcal{V}\}$, we can define the payoff $F(u, v)$ of the game between generator and discriminator:

$F(u, v) = \mathbb{E}_{x\sim\mathcal{D}_{real}}[\phi(D_v(x))] + \mathbb{E}_{x\sim\mathcal{D}_{G_u}}[\phi(1 - D_v(x))] \qquad (4)$

Of course, as we discussed in the previous section, in practice these expectations should be taken with respect to the empirical distributions. Our discussion in this section does not depend on the distributions $\mathcal{D}_{real}$ and $\mathcal{D}_{G_u}$, so we define $F$ this way for simplicity.
The well-known min-max theorem (v. Neumann, 1928) in game theory shows that if both players are allowed to play mixed strategies, then the game has a min-max solution. A mixed strategy for the generator is just a distribution $\mathcal{S}_u$ supported on $\mathcal{U}$, and one for the discriminator is a distribution $\mathcal{S}_v$ supported on $\mathcal{V}$.

Theorem 4.1 (von Neumann).
There exists a value $V$, and a pair of mixed strategies $(\mathcal{S}_u, \mathcal{S}_v)$, such that

$\forall v:\; \mathbb{E}_{u\sim\mathcal{S}_u}[F(u, v)] \le V \quad \text{and} \quad \forall u:\; \mathbb{E}_{v\sim\mathcal{S}_v}[F(u, v)] \ge V.$
Note that this equilibrium involves both parties announcing their strategies at the start, such that neither has any incentive to change its strategy after studying the opponent's strategy. The payoff is realized by first sampling a generator $u \sim \mathcal{S}_u$ and then generating an example $x \sim \mathcal{D}_{G_u}$. Therefore, the mixed generator is just a linear mixture of generators. A mixture of discriminators is more complicated because the objective function need not be linear in the discriminator. However, in the case of interest here, the generator wins and even a mixture of discriminators cannot effectively distinguish between the generated and real distributions. Therefore we do not consider a mixture of discriminators here.
Of course, this equilibrium involving an infinite mixture makes little sense in practice. We show (as is folklore in game theory (Lipton and Young, 1994)) that we can approximate this min-max solution with a mixture of finitely many generators and discriminators. More precisely, we define an approximate equilibrium:
Definition 3.
A pair of mixed strategies $(\mathcal{S}_u, \mathcal{S}_v)$ is an $\epsilon$-approximate equilibrium if, for some value $V$,

$\forall v \in \mathcal{V}:\; \mathbb{E}_{u\sim\mathcal{S}_u}[F(u, v)] \le V + \epsilon; \qquad \forall u \in \mathcal{U}:\; \mathbb{E}_{v\sim\mathcal{S}_v}[F(u, v)] \ge V - \epsilon.$

If the strategies are pure strategies, then this pair is called an $\epsilon$-approximate pure equilibrium.
Suppose $\phi$ is $L_\phi$-Lipschitz and bounded in $[-\Delta, \Delta]$, and the generators and discriminators are $L$-Lipschitz with respect to the parameters and $L'$-Lipschitz with respect to inputs. In this setting we can formalize the above Informal Theorem as follows:
Theorem 4.2.
In the settings above, if the generator can approximate any point mass (for all points $x$ and any $\epsilon > 0$, there is a generator $u$ such that $\mathbb{E}_{h}[\|G_u(h) - x\|] \le \epsilon$), then there is a universal constant $c$ such that for any $\epsilon$, there exists a mixture of $T = \frac{c\, \Delta^2 p \log(L L' L_\phi \cdot p/\epsilon)}{\epsilon^2}$ generators $G_{u_1}, \dots, G_{u_T}$. Let $\mathcal{S}_u$ be the uniform distribution on $\{u_1, \dots, u_T\}$, and let $D$ be the discriminator that outputs only $1/2$; then $(\mathcal{S}_u, D)$ is an $\epsilon$-approximate equilibrium.
The proof uses a standard probabilistic argument and an epsilon-net argument to show that if we sample $T$ generators and discriminators from the infinite mixture, they form an $\epsilon$-approximate equilibrium with high probability. For the second part, we use the fact that the generator can approximate any point mass, so an infinite mixture of generators can approximate the real distribution and win. Therefore a finite mixture of generators can indeed achieve an approximate equilibrium.
Note that this theorem works for a wide class of measuring functions (as long as $\phi$ is concave). The generator always wins, and the discriminator's (near) optimal strategy corresponds to random guessing (outputting the constant $1/2$).
4.2 Achieving Pure Equilibrium
Now we give a construction to augment the network structure and achieve an approximate pure equilibrium for the GAN game with generator nets of quadratically larger size. This should be interpreted as: if deep nets of size $p$ are capable of generating any point mass, then the GAN game for generator neural networks of size $\tilde{O}(\Delta^2 p^2/\epsilon^2)$ has an $\epsilon$-approximate equilibrium in which the generator wins. (The theorem is stated for ReLU gates but also holds for standard activations such as sigmoid.)
Theorem 4.3.
Suppose the generator and discriminator are both $k$-layer neural networks ($k \ge 2$) with $p$ parameters, and the last layer uses the ReLU activation function. In the setting of Theorem 4.2, there exist $(k+1)$-layer neural networks of generators and discriminators with $\tilde{O}(\Delta^2 p^2/\epsilon^2)$ parameters, such that there exists an $\epsilon$-approximate pure equilibrium with value $2\phi(1/2)$.

To prove this theorem, we consider the mixture of generators as in Theorem 4.2, and show how to fold the mixture into a single larger $(k+1)$-layer neural network. We sketch the idea; details are in Appendix B.2.
For a mixture of $T$ generators, we construct a single neural network that approximately generates the mixture distribution using the Gaussian input it has. To do that, we pass the input through all the generators $G_{u_1}, \dots, G_{u_T}$. We then show how to implement a "multi-way selector" that selects a uniformly random output from $G_{u_1}(h), \dots, G_{u_T}(h)$. The selector involves a simple shallow network that selects a number $i$ from 1 to $T$ with the appropriate probability and "disables" all the neural nets except the $i$-th one by forwarding an appropriately large negative input.
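The selector gadget can be sketched numerically: adding a large negative bias before a ReLU zeroes out all but the chosen generator's output. This toy version assumes, as in the theorem's setting, that the generator outputs come from a final ReLU layer and are therefore non-negative; the function and variable names are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def multiway_selector(outputs, choice, big=1e6):
    """Toy selector gadget: "disable" every generator output except the
    chosen one by adding a large negative bias before a ReLU.  Assumes the
    outputs are non-negative (produced by a final ReLU layer), so the
    surviving branch passes through the ReLU unchanged."""
    k = len(outputs)
    mask = np.full(k, -big)
    mask[choice] = 0.0               # only the chosen branch keeps bias 0
    return sum(relu(outputs[i] + mask[i]) for i in range(k))

outs = [np.array([1.0, 2.0]), np.array([3.0, 0.5]), np.array([0.2, 0.7])]
# Selecting index 1 reproduces exactly the second generator's output.
assert np.allclose(multiway_selector(outs, 1), outs[1])
```

In the actual construction the index is chosen by a small random subnetwork with the appropriate probabilities; here we pass it in explicitly for clarity.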
Remark: In practice, GANs use highly structured deep nets, such as convolutional nets. Our current proof of existence of pure equilibrium requires introducing less structured elements in the net, namely, the multiway selectors that implement the mixture within a single net. It is left for future work whether pure equilibria exist for the original structured architectures. In the meantime, in practice we recommend using, even for WGAN, a mixture of structured nets for GAN training, and it seems to help in our experiments reported below.
5 MIX+GANs
Theorems 4.2 and 4.3 show that using a mixture of (not too many) generators and discriminators guarantees the existence of an approximate equilibrium. This suggests that using a mixture may lead to more stable training. Our experiments correspond to an older version of this paper, and they are done using a mixture for both generators and discriminators.
Of course, it is impractical to use very large mixtures, so we propose MIX+GAN: use a mixture of $T$ components, where $T$ is as large as allowed by the size of GPU memory (usually $T \le 5$). Namely, train a mixture of $T$ generators $\{G_{u_i}, i \in [T]\}$ and $T$ discriminators $\{D_{v_i}, i \in [T]\}$ which share the same network architecture but have their own trainable parameters. Maintaining a mixture means of course maintaining a weight $w_{u_i}$ for the generator $G_{u_i}$, which corresponds to the probability of selecting the output of $G_{u_i}$. These weights are also updated via backpropagation. This heuristic can be combined with existing methods like DCGAN, WGAN, etc., giving us new training methods MIX+DCGAN, MIX+WGAN, etc.
We use exponentiated gradient (Kivinen and Warmuth, 1997): store the log-probabilities $\{\alpha_{u_i}, i \in [T]\}$, and then obtain the weights by applying the softmax function on them:

$w_{u_i} = \frac{e^{\alpha_{u_i}}}{\sum_{k=1}^{T} e^{\alpha_{u_k}}}, \quad i \in [T].$
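A minimal sketch of this weight maintenance (the variable names and the hypothetical gradient step are ours; in actual training the gradients come from backpropagating the GAN objective):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Store log-probabilities alpha_i; mixture weights w_i = softmax(alpha)
# automatically stay a valid probability vector under gradient updates.
alpha = np.zeros(3)                  # 3 generators, initially uniform
w = softmax(alpha)
assert np.allclose(w, [1/3, 1/3, 1/3])

grad = np.array([0.5, -0.2, -0.3])   # hypothetical payoff gradient w.r.t. alpha
alpha += 0.1 * grad                  # one gradient step on the log-probabilities
w = softmax(alpha)
assert abs(w.sum() - 1.0) < 1e-12 and w[0] > w[1] > w[2]
```

Updating the log-probabilities (rather than the weights directly) is exactly the exponentiated-gradient trick: the weights remain positive and normalized without any projection step.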
Note that our algorithm is maintaining weights on different generators and discriminators. This is very different from the idea of boosting where weights are maintained on samples. AdaGAN (Tolstikhin et al., 2017) uses ideas similar to boosting and maintains weights on training examples.
Given the payoff function $F$, training MIX+GAN boils down to optimizing

$\min_{\{u_i\}, \{\alpha_{u_i}\}} \; \max_{\{v_j\}, \{\alpha_{v_j}\}} \; \mathbb{E}_{i,j}\left[F(u_i, v_j)\right] = \sum_{i,j} w_{u_i} w_{v_j} F(u_i, v_j).$

Here the payoff function $F$ is the same as in Equation (4). We use both measuring functions $\phi = \log$ (for the original GAN) and $\phi(x) = x$ (for WassersteinGAN). In our experiments we alternately update the generators' and discriminators' parameters, as well as their corresponding log-probabilities, using ADAM (Kingma and Ba, 2015).
Empirically, it is observed that some components of the mixture tend to collapse and their weights diminish during training. To encourage full use of the mixture capacity, we add to the training objective an entropy regularizer term that discourages the weights from drifting too far from uniform.
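One natural regularizer of this kind (a sketch of our own, since only the intent is described in the text above) penalizes low entropy of the weight vector; the penalty is minimized exactly at the uniform mixture.

```python
import numpy as np

def entropy_penalty(weights, eps=1e-12):
    """Negative Shannon entropy, -H(w).  Adding lam * entropy_penalty(w)
    to the training loss discourages the mixture weights from collapsing,
    since -H(w) attains its minimum at the uniform distribution."""
    w = np.asarray(weights, dtype=float)
    return np.sum(w * np.log(w + eps))

uniform = np.full(4, 0.25)
collapsed = np.array([0.97, 0.01, 0.01, 0.01])

# The collapsed mixture pays a strictly larger penalty than the uniform one.
assert entropy_penalty(uniform) < entropy_penalty(collapsed)
```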
6 Experiments
Table 1: Inception Scores on CIFAR-10.

Method                                          Score
SteinGAN (Wang and Liu, 2016)                   6.35
Improved GAN (Salimans et al., 2016)            8.09 ± 0.07
ACGAN (Odena et al., 2016)                      8.25 ± 0.07
SGAN (best variant in (Huang et al., 2017))     8.59 ± 0.12
DCGAN (as reported in Wang and Liu (2016))      6.58
DCGAN (best variant in Huang et al. (2017))     7.16 ± 0.10
DCGAN (5x size)                                 7.34 ± 0.07
MIX+DCGAN (Ours, with 5 components)             7.72 ± 0.09
Wasserstein GAN                                 3.82 ± 0.06
MIX+WassersteinGAN (Ours, with 5 components)    4.04 ± 0.07
Real data                                       11.24 ± 0.12
In this section, we first explore the qualitative benefits of our method on image generation tasks: the MNIST dataset (LeCun et al., 1998) of handwritten digits and the CelebA (Liu et al., 2015) dataset of human faces. Then, for a more quantitative evaluation, we use the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) and the Inception Score introduced in Salimans et al. (2016). MNIST contains 60,000 labeled 28×28-sized images of handwritten digits, CelebA contains over 200K 108×108-sized images of human faces (we crop the center 64×64 pixels for our experiments), and CIFAR-10 has 60,000 labeled 32×32-sized RGB natural images which fall into 10 categories.
To reinforce the point that this technique works out of the box, no extensive hyperparameter search or tuning was necessary. Please refer to our code for the experimental setup. (Related code is public online at https://github.com/PrincetonML/MIXplusGANs.git)
6.1 Qualitative Results
The DCGAN architecture (Radford et al., 2016) uses deep convolutional nets as generators and discriminators. We trained MIX+DCGAN on MNIST and CelebA using the authors’ code as a black box, and compared visual qualities of generated images to those by DCGAN.
Results on MNIST are shown in Figure 2. In this experiment, the baseline DCGAN consists of a pair of a generator and a discriminator, which are 5-layer deconvolutional neural networks conditioned on image labels. Our MIX+DCGAN model consists of a mixture of such DCGANs, containing multiple generators and discriminators. We observe that our method produces somewhat cleaner digits than the baseline (note the fuzziness in the latter).

Results on the CelebA dataset are also shown in Figure 3, using the same architecture as for MNIST, except that the models are no longer conditioned on image labels. Again, our method generates more faithful and more diverse samples than the baseline. Note that one may need to zoom in to fully perceive the difference, since both datasets are rather easy for DCGAN.
In Appendix A, we also show generated digits and faces from each component of MIX+DCGAN.
6.2 Quantitative Results
Now we turn to quantitative measurement using the Inception Score (Salimans et al., 2016). Our method is applied to DCGAN and WassersteinGAN (Arjovsky et al., 2017), and throughout, mixtures of 5 generators and 5 discriminators are used. At first sight the comparison DCGAN vs. MIX+DCGAN seems unfair because the latter uses 5 times the capacity of the former, with a corresponding penalty in running time per epoch. To address this, we also compare our method with larger versions of DCGAN with roughly the same number of parameters, and we found the former is consistently better than the latter, as detailed below.

To construct MIX+DCGAN, we build on top of the DCGAN trained with the losses proposed by Huang et al. (2017), which is the best variant so far without improved training techniques. The same hyperparameters are used for fair comparison; see Huang et al. (2017) for more details. Similarly, for MIX+WassersteinGAN, the base GAN is identical to the one proposed by Arjovsky et al. (2017), using their hyperparameter scheme.
For a quantitative comparison, the Inception Score is calculated for each model, using 50,000 freshly generated samples that are not used in training. To sample a single image from our MIX+ models, we first select a generator from the mixture according to its assigned weight, and then draw a sample from the selected generator.
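The two-step sampling procedure can be sketched as follows. This is a toy stand-in, not the authors' implementation: the "generators" here return tag strings in place of image tensors, and the weights are made up.

```python
import random

# Draw one sample from a mixture: pick generator i with probability
# weights[i], then draw a sample from the chosen generator.
def sample_from_mixture(generators, weights, rng):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    r = rng.random()
    cumulative = 0.0
    for gen, w in zip(generators, weights):
        cumulative += w
        if r <= cumulative:
            return gen()
    return generators[-1]()  # guard against floating-point round-off

# Toy "generators" that return tags instead of images.
gens = [lambda i=i: "sample_from_G%d" % i for i in range(3)]
print(sample_from_mixture(gens, [0.5, 0.3, 0.2], random.Random(0)))
```

Repeating this procedure 50,000 times with independent randomness yields the evaluation set described above.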
Table 1 shows the results on the CIFAR-10 dataset. We find that, simply by applying our method to the baseline models, our MIX+ models achieve 7.72 vs. 7.16 on DCGAN, and 4.04 vs. 3.82 on Wasserstein GAN. To confirm that the superiority of MIX+ models is not solely due to more parameters, we also tested a DCGAN model with 5 times as many parameters (roughly the same number of parameters as a 5-component MIX+DCGAN), tuned using a grid search over 27 sets of hyperparameters (learning rates, dropout rates, and regularization weights). It achieves only 7.34 (labeled as "5x size" in Table 1), which is lower than that of MIX+DCGAN. It is unclear how to apply MIX+ to SGANs; we tried mixtures of the upper and bottom generators separately, which resulted in worse scores. We leave this for future exploration.
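For reference, the Inception Score of Salimans et al. (2016) is exp(E_x[KL(p(y|x) || p(y))]), computed from a classifier's label posteriors on generated samples. Below is a minimal sketch of the formula itself; the `posteriors` arrays are stand-ins for real Inception-network outputs, so this illustrates the metric, not our experimental pipeline.

```python
import numpy as np

# Inception Score: IS = exp( mean_x KL( p(y|x) || p(y) ) ), where
# p(y) is the marginal of the per-sample label posteriors p(y|x).
def inception_score(posteriors, eps=1e-12):
    p_y = posteriors.mean(axis=0)  # marginal label distribution
    kl = (posteriors * (np.log(posteriors + eps)
                        - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

# Confident and diverse predictions score high (10 classes -> 10.0),
# while completely uninformative predictions score 1.0.
confident = np.eye(10)             # one-hot posterior per sample
uniform = np.full((10, 10), 0.1)   # uniform posterior per sample
print(round(inception_score(confident), 2),
      round(inception_score(uniform), 2))  # 10.0 1.0
```

Higher scores thus reward samples that are individually recognizable while collectively covering many classes.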
Figure 5 shows how the Inception Scores of MIX+DCGAN vs. DCGAN evolve during training. MIX+DCGAN outperforms DCGAN throughout the entire training process, showing that it makes effective use of the additional capacity.
Arjovsky et al. (2017) show that the (approximated) Wasserstein loss, which is a neural network divergence by our definition, is meaningful because it correlates well with the visual quality of generated samples. Figure 5 shows the training dynamics of the neural network divergence of MIX+WassersteinGAN vs. WassersteinGAN, which strongly indicates that MIX+WassersteinGAN is capable of achieving a much lower divergence as well as improving the visual quality of generated samples.
7 Conclusions
The notion of generalization for GANs has been clarified by introducing a new notion of distance between distributions, the neural net distance, whereas popular distances such as Wasserstein and JS may not generalize. Assuming the visual cortex is also a deep net (or some network of moderate capacity), generalization with respect to this metric is in principle sufficient to make the final samples look realistic to humans, even if the GAN doesn't actually learn the true distribution.
One issue raised by our analysis is that current GAN objectives cannot even enforce that the synthetic distribution has high diversity (Section 3.4). This is empirically verified in a follow-up work (Arora and Zhang, 2017). Furthermore, the issue cannot be fixed by simply providing the discriminator with more training examples. Possibly some other change to the GAN setup is needed.
The paper also made progress on another unexplained issue about GANs, by showing that an approximate pure equilibrium exists for a certain natural training objective (Wasserstein), in which the generator wins the game. No assumption about the target distribution is needed.
Suspecting that a pure equilibrium may not exist for all objectives, we recommend in practice our MIX+GAN protocol using a small mixture of discriminators and generators. Our experiments show it improves the quality of several existing GAN training methods.
Finally, the existence of an equilibrium does not imply that a simple algorithm (in this case, backpropagation) can find it easily. Understanding convergence remains wide open.
Acknowledgements
This paper was done in part while the authors were hosted by the Simons Institute. We thank Moritz Hardt, Kunal Talwar, Luca Trevisan, Eric Price, and the referees for useful comments. This research was supported by NSF, the Office of Naval Research, and the Simons Foundation.
References
 Abadi and Andersen [2016] Martín Abadi and David G Andersen. Learning to protect communications with adversarial neural cryptography. arXiv preprint arXiv:1610.06918, 2016.
 Arjovsky et al. [2017] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
 Arora and Zhang [2017] Sanjeev Arora and Yi Zhang. Do gans actually learn the distribution? an empirical study. arXiv preprint arXiv:1706.08224, 2017.
 Durugkar et al. [2016] I. Durugkar, I. Gemp, and S. Mahadevan. Generative Multi-Adversarial Networks. ArXiv e-prints, November 2016.
 Ghosh et al. [2003] Jayanta K Ghosh, RVJK Ghosh, and RV Ramamoorthi. Bayesian nonparametrics. Technical report, 2003.
 Goodfellow [2016] Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
 Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
 Gretton et al. [2012] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
 Huang et al. [2017] Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. Stacked generative adversarial networks. In Computer Vision and Pattern Recognition, 2017.
 Jiwoong Im et al. [2016] D. Jiwoong Im, H. Ma, C. Dongjoo Kim, and G. Taylor. Generative Adversarial Parallelization. ArXiv e-prints, December 2016.
 Kingma and Ba [2015] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
 Kivinen and Warmuth [1997] Jyrki Kivinen and Manfred K Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
 Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
 LeCun et al. [1998] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998.
 Lipton and Young [1994] Richard J Lipton and Neal E Young. Simple strategies for large zero-sum games with applications to complexity theory. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pages 734–740. ACM, 1994.
 Liu et al. [2015] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
 Müller [1997] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(02):429–443, 1997.
 Odena et al. [2016] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
 Radford et al. [2016] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
 Salimans et al. [2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, 2016.
 Tolstikhin et al. [2017] Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Schölkopf. Adagan: Boosting generative models. arXiv preprint arXiv:1701.02386, 2017.
 Trevisan et al. [2009] Luca Trevisan, Madhur Tulsiani, and Salil Vadhan. Regularity, boosting, and efficiently simulating every high-entropy distribution. In Computational Complexity, 2009. CCC'09. 24th Annual IEEE Conference on, pages 126–136. IEEE, 2009.
 v. Neumann [1928] J v. Neumann. Zur theorie der gesellschaftsspiele. Mathematische annalen, 100(1):295–320, 1928.
 Wang and Liu [2016] Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized mle for generative adversarial learning. Technical report, 2016.
Appendix A Generated Samples from Components of MIX+DCGAN
In Figure 6 and Figure 7 we show generated digits and faces from different components of MIX+DCGAN.
Appendix B Omitted Proofs
In this section we give detailed proofs for the theorems in the main document.
B.1 Omitted Proofs for Section 3
We first show that the JS divergence and Wasserstein distance can lead to overfitting.
Lemma 2 (Lemma 1 restated).
Let $\mu$ be the uniform Gaussian distribution $\mathcal{N}(0, \frac{1}{d}I)$ and $\hat{\mu}$ be an empirical version of $\mu$ with $m$ examples. Then we have $d_{JS}(\mu, \hat{\mu}) = \log 2$ and, with high probability, $d_W(\mu, \hat{\mu}) \ge 1$.
Proof.
For Jensen-Shannon divergence, observe that $\mu$ is a continuous distribution and $\hat{\mu}$ is discrete, therefore $d_{JS}(\mu, \hat{\mu}) = \log 2$.
For Wasserstein distance, let $x_1, \dots, x_m$ be the empirical samples (fixed arbitrarily). For a fresh sample $x \sim \mu$, by standard concentration and union bounds, we have that with high probability $\min_i \|x - x_i\| \ge 1$.
Therefore, using the earth-mover interpretation of Wasserstein distance, we know $d_W(\mu, \hat{\mu}) \ge 1$. ∎
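The geometry behind this lower bound can be checked numerically. The sketch below is an illustration, not part of the proof: it draws a moderate empirical sample and fresh points from the same $\mathcal{N}(0, \frac{1}{d}I)$-style Gaussian and shows that every fresh point stays far from its nearest empirical neighbor (pairwise distances concentrate near $\sqrt{2}$), so any transport plan must move mass a constant distance. The dimension and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 256, 100, 500
# points from a spherical Gaussian scaled so E||x||^2 = 1 (i.e., N(0, I/d))
support = rng.standard_normal((m, d)) / np.sqrt(d)  # "empirical" atoms
fresh = rng.standard_normal((n, d)) / np.sqrt(d)    # fresh draws from mu

# pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
sq = ((fresh ** 2).sum(1)[:, None] + (support ** 2).sum(1)[None, :]
      - 2.0 * fresh @ support.T)
nearest = np.sqrt(np.maximum(sq, 0.0)).min(axis=1)

# every fresh point is far from the whole empirical support
print(bool(nearest.min() > 0.8))  # True
```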
Next we consider sampling for both the generated distribution and the real distribution, and show that the JS divergence and Wasserstein distance do not generalize.
Theorem B.1.
Let $\mu, \nu$ be uniform Gaussian distributions $\mathcal{N}(0, \frac{1}{d}I)$. Suppose $\hat{\mu}, \hat{\nu}$ are empirical versions of $\mu, \nu$ with $m$ samples each. Then with high probability we have $d_{JS}(\hat{\mu}, \hat{\nu}) = \log 2$ and $d_W(\hat{\mu}, \hat{\nu}) \ge 1$.
Further, let $\tilde{\mu}, \tilde{\nu}$ be the convolutions of $\hat{\mu}, \hat{\nu}$ with a Gaussian distribution $\mathcal{N}(0, \sigma^2 I)$. As long as the noise parameter $\sigma$ is at most a small enough constant $c$, we have $d_{JS}(\tilde{\mu}, \tilde{\nu}) = \Omega(1)$ with high probability.
Proof.
For the Jensen-Shannon divergence, we know that with probability 1 the supports of $\hat{\mu}$ and $\hat{\nu}$ are disjoint, therefore $d_{JS}(\hat{\mu}, \hat{\nu}) = \log 2$.
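The identity used here, that two distributions with disjoint supports have JS divergence exactly log 2, can be verified on a toy discrete example. The helper below is our own (natural-logarithm convention, matching the value log 2 in the proof):

```python
import math

# Jensen-Shannon divergence of two discrete distributions,
# JS(p, q) = (1/2) KL(p || m) + (1/2) KL(q || m) with m = (p + q)/2.
def js_divergence(p, q):
    js = 0.0
    for x in set(p) | set(q):
        px, qx = p.get(x, 0.0), q.get(x, 0.0)
        mx = (px + qx) / 2.0
        if px > 0.0:
            js += 0.5 * px * math.log(px / mx)
        if qx > 0.0:
            js += 0.5 * qx * math.log(qx / mx)
    return js

mu_hat = {"a": 0.5, "b": 0.5}  # two point masses
nu_hat = {"c": 0.5, "d": 0.5}  # support disjoint from mu_hat
print(math.isclose(js_divergence(mu_hat, nu_hat), math.log(2)))  # True
```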
For Wasserstein distance, note that for two random Gaussian vectors $x \sim \mu$ and $y \sim \nu$, their difference $x - y$ is also Gaussian, with expected squared norm 2, and $\|x - y\|$ concentrates around $\sqrt{2}$ with probability $1 - \exp(-\Omega(d))$. As a result, setting the allowed deviation from $\sqrt{2}$ to be a fixed constant (0.1 suffices), we can union bound over all the pairwise distances for points in the support of $\hat{\mu}$ and the support of $\hat{\nu}$. With high probability, the closest pair between $\hat{\mu}$ and $\hat{\nu}$ has distance at least 1, therefore the Wasserstein distance $d_W(\hat{\mu}, \hat{\nu}) \ge 1$.
Finally we prove that even if we add noise to the two distributions, the JS divergence is still large. For distributions $\tilde{\mu}, \tilde{\nu}$, let $f, g$ be their density functions, and let $h = (f + g)/2$. We can rewrite the JS divergence as
$d_{JS}(\tilde{\mu}, \tilde{\nu}) = \frac{1}{2} \int \left( f(x) \log \frac{f(x)}{h(x)} + g(x) \log \frac{g(x)}{h(x)} \right) dx.$
For a point $x$, let $z_x$ be a Bernoulli variable with probability $\frac{f(x)}{f(x) + g(x)}$ of being 1. Note that $\frac{1}{2}\left(f(x)\log\frac{f(x)}{h(x)} + g(x)\log\frac{g(x)}{h(x)}\right) = h(x)\left(\log 2 - H(z_x)\right)$, where $H(z_x)$ is the entropy of $z_x$. Therefore $d_{JS}(\tilde{\mu}, \tilde{\nu}) = \mathbb{E}_{x \sim h}[\log 2 - H(z_x)]$. Let $S$ be the union of radius-0.2 balls around the samples in $\hat{\mu}$ and $\hat{\nu}$. Since with high probability all these samples have pairwise distance at least 1, by the Gaussian density function we know (a) the balls do not intersect; (b) within each ball, one of $f(x), g(x)$ dominates the other, so $H(z_x)$ is bounded away from $\log 2$; (c) the union of these balls takes at least a constant fraction of the density in $h$.
Therefore for every $x \in S$, we know $\log 2 - H(z_x) = \Omega(1)$, and hence $d_{JS}(\tilde{\mu}, \tilde{\nu}) = \Omega(1)$.
∎
Next we prove that the neural network distance does generalize, given enough samples. Let us first recall the setting: we assume that the measuring function $\phi$ takes values in $[-\Delta, \Delta]$ and is $L_\phi$-Lipschitz. Further, $\mathcal{F} = \{D_v : v \in \mathcal{V}\}$ is the class of discriminators that are $L$-Lipschitz with respect to the parameters $v$. As usual, we use $p$ to denote the number of parameters in $v$.
Theorem B.2 (Theorem 3.1 restated).
In the setting described in the previous paragraph, let $\mu, \nu$ be two distributions and $\hat{\mu}, \hat{\nu}$ be empirical versions with at least $m$ samples each. There is a universal constant $c$ such that when $m \ge \frac{c p \Delta^2 \log(L L_\phi p / \epsilon)}{\epsilon^2}$, we have with probability at least $1 - e^{-p}$ over the randomness of $\hat{\mu}$ and $\hat{\nu}$,
$\left| d_{\mathcal{F}, \phi}(\hat{\mu}, \hat{\nu}) - d_{\mathcal{F}, \phi}(\mu, \nu) \right| \le \epsilon.$
Proof.
The proof uses standard concentration bounds. We show that with high probability, for every discriminator $D_v$,
$\left| \mathbb{E}_{x \sim \mu}[\phi(D_v(x))] - \mathbb{E}_{x \sim \hat{\mu}}[\phi(D_v(x))] \right| \le \epsilon/2,$  (5)
$\left| \mathbb{E}_{x \sim \nu}[\phi(1 - D_v(x))] - \mathbb{E}_{x \sim \hat{\nu}}[\phi(1 - D_v(x))] \right| \le \epsilon/2.$  (6)
If both (5) and (6) hold, let $D_{v^*}$ be the optimal discriminator for $d_{\mathcal{F}, \phi}(\mu, \nu)$; using the same discriminator on the empirical distributions, we then have $d_{\mathcal{F}, \phi}(\hat{\mu}, \hat{\nu}) \ge d_{\mathcal{F}, \phi}(\mu, \nu) - \epsilon$.
The other direction is similar.
Now we prove the claimed bound (5) (the proof of (6) is identical). Let $\mathcal{X}$ be a finite set such that every point in the parameter space is within distance $\epsilon/(8 L L_\phi)$ of a point in $\mathcal{X}$ (a so-called $\epsilon$-net). Standard constructions give an $\mathcal{X}$ satisfying $\log |\mathcal{X}| \le O(p \log(L L_\phi p / \epsilon))$. For every $v \in \mathcal{X}$, by the Chernoff bound, the probability that the empirical average of $\phi(D_v(x))$ deviates from its expectation by more than $\epsilon/4$ is at most $\exp(-\Omega(m \epsilon^2 / \Delta^2))$.
Therefore, when $m \ge \frac{c p \Delta^2 \log(L L_\phi p/\epsilon)}{\epsilon^2}$ for a large enough constant $c$, we can union bound over all $v \in \mathcal{X}$: with high probability (at least $1 - e^{-p}$), for all $v \in \mathcal{X}$ the deviation is at most $\epsilon/4$. Since the quantities involved are $L L_\phi$-Lipschitz in the parameters, the bound extends from the net to all discriminators with deviation at most $\epsilon/2$, which establishes (5).
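The concentration step can be illustrated empirically (a sketch, not the proof): for any fixed discriminator with bounded output, the empirical average over $m$ samples deviates from the population mean by roughly $O(1/\sqrt{m})$, in line with a Hoeffding/Chernoff bound. The stand-in values below play the role of $\phi(D_v(x))$ for bounded outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 10_000, 1_000
# stand-in for phi(D_v(x)) on m samples, bounded in [0, 1], true mean 0.5
vals = rng.uniform(0.0, 1.0, size=(trials, m))
deviations = np.abs(vals.mean(axis=1) - 0.5)

# all trials fall well inside a Hoeffding-style O(1/sqrt(m)) envelope
print(bool((deviations < 5.0 / np.sqrt(m)).all()))  # True
```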
Finally, we generalize the above Theorem to hold for all generators in a family.
Corollary B.1 (Corollary 3.1 restated).
In the setting of Theorem 3.1, suppose $G_{u_1}, G_{u_2}, \dots, G_{u_K}$ are the generators in the $K$ iterations of the training, and assume $\log K \le p$. There is a universal constant $c$ such that when $m \ge \frac{c p \Delta^2 \log(L L_\phi p / \epsilon)}{\epsilon^2}$, with probability at least $1 - e^{-p}$, for all $t \in [K]$,
$\left| d_{\mathcal{F}, \phi}(\mu, \nu_t) - d_{\mathcal{F}, \phi}(\hat{\mu}, \hat{\nu}_t) \right| \le \epsilon,$
where $\nu_t$ is the distribution of generator $G_{u_t}$ and $\hat{\nu}_t$ is its empirical version.
Proof.
This follows from the proof of Theorem 3.1. Note that we have fresh samples for every generator distribution, so Equation (6) is true with high probability for each of them, and a union bound over the $K$ generators makes it hold for all. For the real distribution, notice that Equation (5) does not depend on the generator, so it is also true with high probability. ∎
B.2 Omitted Proofs for Section 4: Expressive Power and Existence of Equilibrium
Mixed Equilibrium
We first show there is a finite mixture of generators and discriminators that approximates the equilibrium of infinite mixtures.
Again we recall the setting: suppose $\phi$ is $L_\phi$-Lipschitz and bounded in $[-\Delta, \Delta]$, and the generators and discriminators are $L$-Lipschitz with respect to the parameters and $L'$-Lipschitz with respect to the inputs.
Theorem B.3 (Theorem 4.2 restated).
In the setting above, suppose the generator can approximate any point mass: for all points $x$ and any $\epsilon > 0$, there is a generator $u$ such that $\mathbb{E}_h[\|G_u(h) - x\|] \le \epsilon$. Then there is a universal constant $C$ such that for any $\epsilon > 0$ there exist $T = \frac{C \Delta^2 p \log(L L' L_\phi p / \epsilon)}{\epsilon^2}$ generators $G_{u_1}, \dots, G_{u_T}$ with the following property: if $\mathcal{S}_u$ is the uniform distribution on $\{u_1, \dots, u_T\}$ and $D$ is the discriminator that outputs only $1/2$, then $(\mathcal{S}_u, D)$ is an $\epsilon$-approximate equilibrium.
Proof.
We first prove that the value of the game must equal $2\phi(1/2)$. For the discriminator, one strategy is to just output $1/2$. This strategy has payoff $2\phi(1/2)$ no matter what the generator does, so from the discriminator's side the value of the game is at least $2\phi(1/2)$.
For the generator, we use the assumption that for any point $x$ and any $\epsilon > 0$, there is a generator (which we denote by $G_x$) such that $\mathbb{E}_h[\|G_x(h) - x\|] \le \epsilon$. Now for any target distribution $\mu$, consider the following mixture of generators: sample $x \sim \mu$, then use the generator $G_x$. Let $\nu$ be the distribution generated by this mixture of generators. The Wasserstein distance between $\mu$ and $\nu$ is bounded by $\epsilon$. Since the discriminator is $L'$-Lipschitz with respect to its input and $\phi$ is $L_\phi$-Lipschitz, it cannot distinguish between $\mu$ and $\nu$. In particular, we know that for any discriminator $D_v$,
$\left| \mathbb{E}_{x \sim \mu}[\phi(D_v(x))] - \mathbb{E}_{x \sim \nu}[\phi(D_v(x))] \right| \le L' L_\phi \, \epsilon.$