Generative Adversarial Nets (GANs) are emerging objects of study in machine learning, computer vision, natural language processing, and many other domains. In machine learning, the study of this framework has led to significant advances in adversarial defenses [28, 24] and machine security [4, 24]. In computer vision and natural language processing, GANs have yielded improved performance over standard generative models for images and texts, such as the variational autoencoder and the deep Boltzmann machine. The main technique for achieving this is to play a minimax two-player game between a generator and a discriminator, designed so that the generator tries to confuse the discriminator with its generated content while the discriminator tries to distinguish real images/texts from what the generator creates.
Despite a large number of GAN variants, many fundamental questions remain unresolved. One of the long-standing challenges is designing universal, easy-to-implement architectures that alleviate the instability of GAN training. Ideally, GANs are supposed to solve the minimax optimization problem , but in practice alternating gradient descent methods do not clearly privilege minimax over maximin or vice versa (page 35, ), which may lead to instability in training if there is a large discrepancy between the minimax and maximin objective values. The focus of this work is on improving the stability of this minimax game in the training process of GANs.
To alleviate the issues caused by the large minimax gap, our study is motivated by the zero-sum Stackelberg competition 
in the domain of game theory. In the Stackelberg leadership model, the players of this game are one leader and multiple followers: the leader firm moves first and then the follower firms move sequentially. It is known that the Stackelberg model can be solved to find a subgame perfect Nash equilibrium. We apply this idea of the Stackelberg leadership model to the architecture design of GANs. That is, we design an improved GAN architecture with multiple generators (followers) which team up to play against the discriminator (leader). We therefore name our model Stackelberg GAN. Our theoretical and experimental results establish that GANs with a multi-generator architecture have a smaller minimax gap and enjoy more stable training.
Our Contributions. This paper tackles the problem of instability during the GAN training procedure with both theoretical and experimental results. We study this problem by new architecture design.
We propose the Stackelberg GAN framework of multiple generators in the GAN architecture. Our framework is general since it can be applied to all variants of GANs, e.g., vanilla GAN, Wasserstein GAN, etc. It is built upon the idea of jointly optimizing an ensemble of GAN losses w.r.t. all pairs of discriminator and generator.
Differences from prior work. Although the idea of having multiple generators in the GAN architecture is not totally new, e.g., MIX+GAN , MGAN , MAD-GAN , and GMAN , there are key differences between Stackelberg GAN and prior work. a) In MGAN  and MAD-GAN , various generators are combined as a mixture of probabilistic models under the assumption that the generators and discriminator have infinite capacity. They also require that the generators share common network parameters. In contrast, the Stackelberg GAN model allows various sampling schemes beyond the mixture model, e.g., each generator samples a fixed but unequal number of data points independently. Furthermore, each generator has free parameters, and we make no assumption on model capacity in our analysis. This is an important research question, as raised by . b) In MIX+GAN , the losses are ensembled with learned weights and an extra regularization term which discourages the weights from drifting too far from uniform. We find this unnecessary because the expressive power of each generator already allows implicit scaling of each generator. In the Stackelberg GAN, we apply equal weights to all generators and obtain improved guarantees. c) In GMAN , there are multiple discriminators, but it is unclear in theory why the multi-discriminator architecture works well. In this paper, we provide formal guarantees for our model.
We prove that the minimax duality gap shrinks as the number of generators increases (see Theorem 1 and Corollary 2). Unlike previous work, our result makes no assumption on the expressive power of the generators and discriminator, but instead depends on their non-convexity. With an extra condition on the expressive power of the generators, we show that Stackelberg GAN is able to achieve an -approximate equilibrium with  generators (see Theorem 3). This improves over the best-known result in , which requires  generators. At the core of our techniques is a novel application of the Shapley-Folkman lemma to the generic minimax problem; in the literature, the shrunken duality gap was known to occur only when the objective function is restricted to the Lagrangian function of a constrained optimization problem [29, 5]. This yields tighter bounds than the covering-number argument of . We also note that MIX+GAN is a heuristic model which does not exactly match the theoretical analysis in , while this paper provides formal guarantees for the exact Stackelberg GAN model.
We empirically study the performance of Stackelberg GAN on various synthetic and real datasets. We observe that, surprisingly, without any human assignment each generator automatically learns a balanced number of modes without any mode being dropped (see Figure 1). Compared with other multi-generator GANs of the same network capacity, our experiments show that Stackelberg GAN achieves a Fréchet Inception Distance of  on the CIFAR-10 dataset while prior results achieve  (smaller is better), an improvement of .
2 Stackelberg GAN
Before proceeding, we define some notations and formalize our model setup in this section.
We will use bold lower-case letters to represent vectors and lower-case letters to represent scalars. Specifically, we denote by  the parameter vector of the discriminator and by  the parameter vector of the generator. Let
be the output probability of the discriminator given input , and let  represent the generated vector given random input . For any function , we denote by  the conjugate function of . Let  be the convex closure of , defined as the function whose epigraph is the closed convex hull of that of . We define . We will use  to represent the number of generators.
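For reference, the standard convex-analysis definitions behind this notation (assuming the usual conventions, since the symbols are elided above) are the conjugate of a function $f$ and its biconjugate:

```latex
f^{*}(\mathbf{y}) \;\triangleq\; \sup_{\mathbf{x}}\,\bigl\{\langle \mathbf{x},\mathbf{y}\rangle - f(\mathbf{x})\bigr\},
\qquad
\widehat{f} \;\triangleq\; (f^{*})^{*}.
```

By the Fenchel–Moreau theorem, the biconjugate $\widehat{f}$ is the largest closed convex function lying below $f$, i.e., the function whose epigraph is the closed convex hull of the epigraph of $f$, matching the convex-closure definition above.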
2.1 Model Setup
Preliminaries. The key ingredient in the standard GAN is to play a zero-sum two-player
game between a discriminator and a generator — which are often parametrized by deep neural networks in practice — such that the goal of the generator is to map random noise to plausible images/texts and the discriminator aims to distinguish the real images/texts from what the generator creates.
For all parameter implementations  and  of the generator and discriminator, respectively, denote by  the payoff value
where  is some concave, increasing function. Hereby,  is the distribution of true images/texts and  is a noise distribution such as a Gaussian or uniform distribution. The standard GAN thus solves the following saddle point problem:
For different choices of function , problem (1) leads to various variants of GAN. For example, when , problem (1) is the classic GAN; when , it reduces to the Wasserstein GAN. We refer interested readers to the paper of  for more variants of GANs.
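To make the role of the function concrete, the sketch below evaluates a Monte-Carlo estimate of a GAN payoff for two choices of the concave increasing function. The payoff form `E[f(D(x))] + E[f(1 - D(G(z)))]` and the specific choices (`log` for the vanilla GAN, identity for a WGAN-style loss) are our assumptions, since the exact formulas in problem (1) are elided in the text:

```python
import math

def payoff(f, D, real_batch, fake_batch):
    """Monte-Carlo estimate of an assumed GAN payoff
    E_x[f(D(x))] + E_z[f(1 - D(G(z)))] for a concave increasing f."""
    real_term = sum(f(D(x)) for x in real_batch) / len(real_batch)
    fake_term = sum(f(1.0 - D(x)) for x in fake_batch) / len(fake_batch)
    return real_term + fake_term

# f = log recovers the vanilla GAN objective; f = identity a WGAN-style one.
f_vanilla = math.log
f_wgan = lambda t: t

D = lambda x: 1.0 / (1.0 + math.exp(-x))   # toy logistic discriminator
real = [1.0, 2.0, 1.5]                     # stand-ins for real samples
fake = [-1.0, -0.5]                        # stand-ins for generated samples
v1 = payoff(f_vanilla, D, real, fake)
v2 = payoff(f_wgan, D, real, fake)
```

Swapping `f` is the only change needed to move between GAN variants; the discriminator and sampling code stay identical.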
Stackelberg GAN. Our model of Stackelberg GAN is inspired by the Stackelberg competition in the domain of game theory. Instead of playing a two-player game as in the standard GAN, in Stackelberg GAN there are multiple players of two types — one discriminator and several generators. One can make an analogy between the discriminator (generators) in the Stackelberg GAN and the leader (followers) in the Stackelberg competition.
Stackelberg GAN is a general framework which can be built on top of all variants of standard GANs. The objective function is simply an ensemble of losses w.r.t. all possible pairs of generators and the discriminator: . It is therefore very easy to implement. The Stackelberg GAN thus solves the following saddle point problem:
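The ensembled objective can be sketched in a few lines. Here `phi` stands in for the per-pair payoff from problem (1); the equal weighting follows the text, while the toy quadratic payoff and the choice to average rather than sum (a pure scaling choice) are our assumptions:

```python
def stackelberg_loss(phi, gen_params, disc_param):
    """Equal-weight ensemble of per-pair payoffs phi(gamma_i, theta),
    all sharing a single discriminator parameter theta (a sketch)."""
    return sum(phi(g, disc_param) for g in gen_params) / len(gen_params)

# Hypothetical toy payoff standing in for the GAN objective in Eq. (1):
phi = lambda gamma, theta: (gamma - theta) ** 2
loss = stackelberg_loss(phi, gen_params=[0.0, 1.0, 2.0], disc_param=1.0)
```

In a real implementation each `gamma` would be a generator network's parameters and the discriminator would ascend while the generators jointly descend on this shared loss.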
We term  the minimax (duality) gap. We note that there are key differences between the naïve ensembling model and ours. In the naïve ensembling model, one trains multiple GAN models independently and averages their outputs. In contrast, our Stackelberg GAN shares a single discriminator across the various generators and thus requires joint training. Figure 2 shows the architecture of our Stackelberg GAN.
How to generate samples from Stackelberg GAN? In the Stackelberg GAN, we expect each generator to learn only a few modes. In order to generate a sample that may come from any mode, we use a mixture model. In particular, we draw a uniformly random integer from  to  and use the -th generator to obtain a new sample. Note that this procedure is independent of the training procedure.
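The sampling procedure above can be sketched as follows. The two toy generators, each covering one well-separated "mode", are hypothetical stand-ins for trained networks:

```python
import random

def sample_mixture(generators, noise):
    """Pick one of the generators uniformly at random, then transform a
    fresh noise draw -- independent of how the generators were trained."""
    g = random.choice(generators)
    return g(noise())

# Hypothetical generators, each covering one well-separated mode:
gens = [lambda z: z + 10.0, lambda z: z - 10.0]
xs = [sample_mixture(gens, lambda: random.gauss(0.0, 1.0)) for _ in range(200)]
```

Over many draws both modes appear, since each generator is selected with equal probability.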
3 Analysis of Stackelberg GAN
In this section, we develop our theoretical contributions and compare our results with the prior work.
3.1 Minimax Duality Gap
We begin with studying the minimax gap of Stackelberg GAN. Our main results show that the minimax gap shrinks as the number of generators increases.
To proceed, denote by , where the conjugate operation is w.r.t. the second argument of . We clarify here that the subscript in  indicates that the function is derived from the -th generator. The argument of  should depend on , so we denote it by . Intuitively,  serves as an approximate convexification of  w.r.t. the second argument due to the conjugate operation. Denote by  the convex closure of :
 represents the convex relaxation of  because the epigraph of  is exactly the convex hull of the epigraph of  by the definition of . Let
where  and  is the convex closure of  w.r.t. argument . Therefore,  measures the non-convexity of the objective function w.r.t. argument . For example, it equals  if and only if  is concave and closed w.r.t. the discriminator parameter .
We have the following guarantees on the minimax gap of Stackelberg GAN.
Let and . Denote by the number of parameters of discriminator, i.e., . Suppose that is continuous and is compact and convex. Then the duality gap can be bounded by
provided that the number of generators .
Theorem 1 makes a mild assumption on the continuity of the loss and no assumption on the model capacity of the discriminator and generators. The analysis instead depends on their non-convexity as parametrized by deep neural networks. In particular,  measures the divergence between the function value of  and its convex relaxation ; when  is convex w.r.t. argument ,  is exactly . The constant  is the maximal divergence among all generators, which does not grow as  increases. This is because  measures the divergence of only one generator, and when each generator has, for example, the same architecture, we have . The terms  and  behave similarly, and we have the following straightforward corollary about the minimax duality gap of Stackelberg GAN.
Under the settings of Theorem 1, when is concave and closed w.r.t. discriminator parameter and the number of generators , we have .
3.2 Existence of Approximate Equilibrium
The results of Theorem 1 and Corollary 2 are independent of the model capacity of the generators and discriminator. When we make assumptions on the expressive power of the generators as in , we have the following guarantee on the existence of an -approximate equilibrium.
Under the settings of Theorem 1, suppose that for any , there exists a generator such that . Let the discriminator and generators be -Lipschitz w.r.t. inputs and parameters, and let be -Lipschitz. Then for any , there exist generators and a discriminator such that for some value ,
Related Work. While many efforts have been devoted to empirically investigating the performance of multi-generator GANs, little is known about how many generators are needed to achieve certain equilibrium guarantees. Probably the most relevant prior work to Theorem 3 is that of . In particular,  showed that there exist  generators and one discriminator such that an -approximate equilibrium can be achieved, provided that for all  and any , there exists a generator  such that . Hereby,  is a global upper bound on the function , i.e., . In comparison, Theorem 3 improves over this result in two respects: a) the assumption on the expressive power of generators in  implies our condition , so our assumption is weaker; b) the required number of generators in Theorem 3 is as small as . We note that  by the definition of . Therefore, Theorem 3 requires far fewer generators than .
4 Architecture, Capacity and Mode Collapse/Dropping
In this section, we empirically investigate the effect of network architecture and capacity on the mode collapse/dropping issues for various multi-generator architecture designs. Hereby, mode dropping refers to the phenomenon that generative models simply ignore some hard-to-represent modes of the real distribution, and mode collapse means that some modes of the real distribution are "averaged" by generative models. For GANs, it is widely believed that both issues are caused by the large gap between the minimax and maximin objective function values (see page 35, ).
Our experiments verify that network capacity (changes of width and depth) is not crucial for resolving the mode collapse issue, though it can alleviate mode dropping to a certain extent. Instead, the choice of generator architecture plays a key role. To visualize this finding, we test the performance of varying GAN architectures on a synthetic mixture of Gaussians with 8 modes and standard deviation 0.01. We observe the following phenomena:
Naïvely increasing the capacity of a one-generator architecture does not alleviate mode collapse. Although naïvely increasing the capacity of a one-generator architecture alleviates the mode dropping issue, its effect on the more challenging mode collapse issue is not obvious (see Figure 3). The multi-generator architecture in the Stackelberg GAN, in contrast, effectively alleviates the mode collapse issue.
Stackelberg GAN outperforms multi-branch models. We compare the performance of the multi-branch GAN and the Stackelberg GAN with objective functions:
Generators tend to learn a balanced number of modes when they have the same capacity. We observe that, for a varying number of generators, each generator in the Stackelberg GAN tends to learn an equal number of modes when the modes are symmetric and every generator has the same capacity (see Figure 5).
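The synthetic benchmark used above can be reproduced with a standard mixture of eight Gaussians whose means sit evenly on a circle. The text specifies 8 modes and standard deviation 0.01; the circle radius is our assumption:

```python
import math
import random

def eight_gaussians(n, radius=2.0, std=0.01):
    """Sample n points from a mixture of 8 Gaussians placed evenly on a
    circle (radius is an assumed value; the 0.01 std follows the text)."""
    data = []
    for _ in range(n):
        k = random.randrange(8)                       # pick a mode uniformly
        cx = radius * math.cos(2.0 * math.pi * k / 8)
        cy = radius * math.sin(2.0 * math.pi * k / 8)
        data.append((random.gauss(cx, std), random.gauss(cy, std)))
    return data

pts = eight_gaussians(1000)
```

Because the modes are tight (std 0.01 versus inter-mode distances of order 1), a single generator must either collapse modes together or drop some entirely, which is what makes this dataset a useful diagnostic.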
5 Experiments
In this section, we verify our theoretical contributions through experimental validation.
5.1 MNIST Dataset
We first show that Stackelberg GAN generates more diverse images on the MNIST dataset  than the classic GAN. We follow the standard preprocessing step in which each pixel is normalized by subtracting 0.5 and dividing by . The detailed network setups of the discriminator and generators are in Table 4.
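This preprocessing is a one-liner; the divisor is elided in the text, so the value 0.5 below (mapping pixels from [0, 1] to [-1, 1], the common convention) is our assumption:

```python
def normalize(pixels):
    """Map pixel intensities in [0, 1] to [-1, 1] by subtracting 0.5 and
    dividing by 0.5 (the divisor is an assumed common convention)."""
    return [(p - 0.5) / 0.5 for p in pixels]

out = normalize([0.0, 0.5, 1.0])
```

The [-1, 1] range matches a generator whose output layer uses a tanh activation, as in DCGAN-style architectures.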
Figure 6 shows the diversity of digits generated by Stackelberg GAN with varying numbers of generators. When there is only one generator, the digits are not very diverse, with many "1"s and far fewer "2"s. As the number of generators increases, the images tend to be more diverse. In particular, for the -generator Stackelberg GAN, each generator is associated with one or two digits and no digit is missed.
5.2 Fashion-MNIST Dataset
We also observe better performance by the Stackelberg GAN on the Fashion-MNIST dataset. Fashion-MNIST is a dataset consisting of 60,000 examples, each a grayscale image associated with a label from 10 classes. We follow the standard preprocessing step in which each pixel is normalized by subtracting 0.5 and dividing by . We specify the detailed network setups of the discriminator and generators in Table 4.
Figure 7 shows the diversity of fashion items generated by Stackelberg GAN with varying numbers of generators. When there is only one generator, the generated images are not very diverse, with no "bags" being found. However, as the number of generators increases, the generated images tend to be more diverse. In particular, for the -generator Stackelberg GAN, each generator is associated with one class and no class is missed.
5.3 CIFAR-10 Dataset
We then implement Stackelberg GAN on the CIFAR-10 dataset. CIFAR-10 includes 60,000 32×32 training images, which fall into 10 classes. The architecture of the generators and discriminator follows the DCGAN design of . We train models with 5, 10, and 20 fixed-size generators. The results show that the model with 10 generators performs best. We also train 10-generator models where each generator has 2, 3, or 4 convolution layers. We find that the generator with 2 convolution layers, the shallowest one, performs best. So we report results obtained from the model with 10 generators of 2 convolution layers each. Figure 7(a) shows the samples produced by different generators. The samples are randomly drawn rather than cherry-picked, to demonstrate the quality of images generated by our model.
For quantitative evaluation, we use Inception score and Fréchet Inception Distance (FID) to measure the difference between images generated by models and real images.
Results of Inception Score. The Inception score measures the quality of a generated image and correlates well with human judgment . We report the Inception score obtained by our Stackelberg GAN and other baseline methods in Table 1. For a fair comparison, we only consider baseline models which are completely unsupervised and do not need any label information. Instead of directly using the Inception scores reported in the original papers, we replicate the experiment of MGAN using the code, architectures, and parameters reported in the original papers, and evaluate the scores based on the new experimental results. Table 1 shows that our model achieves a score of 7.62 on the CIFAR-10 dataset, which outperforms the state-of-the-art models. For fairness, we configure our Stackelberg GAN with the same capacity as MGAN, that is, the two models have comparable numbers of total parameters. Even when the capacity of our Stackelberg GAN is as small as DCGAN's, our model improves over DCGAN significantly.
Results of Fréchet Inception Distance. We then evaluate the performance of the models on the CIFAR-10 dataset using the Fréchet Inception Distance (FID), which better captures the similarity between generated and real images . As Table 1 shows, under the same capacity as DCGAN, our model reduces the FID by . Meanwhile, under the same capacity as MGAN, our model reduces the FID by . This improvement further indicates that our Stackelberg GAN with multiple lightweight generators helps improve the quality of the generated images.
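FID compares Gaussian fits to Inception-v3 features of real and generated images. For intuition, the sketch below computes the Fréchet distance between two one-dimensional Gaussians, where the general matrix formula reduces to a closed form; this is a toy illustration, not the full Inception-feature pipeline:

```python
def fid_1d(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + (sigma1 - sigma2)^2.
    The full FID applies the multivariate form
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    to Inception-v3 feature statistics."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

d = fid_1d(0.0, 1.0, 1.0, 2.0)
```

The distance is zero exactly when the two fitted Gaussians coincide, which is why lower FID indicates generated statistics closer to the real data.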
| Model | Inception Score | Fréchet Inception Distance |
| --- | --- | --- |
| Ours (capacity as DCGAN) |  | 29.88 |
| MAD-GAN (our run, capacity as MGAN) |  | 34.10 |
| MGAN (our run) |  | 31.34 |
| Ours (capacity as MGAN) |  | 26.76 |
5.4 Tiny ImageNet Dataset
We also evaluate the performance of Stackelberg GAN on the Tiny ImageNet dataset. Tiny ImageNet is a large image dataset where each image is labelled to indicate the class of the object inside it. We resize the images down to  following the procedure described in . Figure 7(b) shows randomly picked samples generated by the -generator Stackelberg GAN. Each row has samples generated from one generator. Since the types of some images in Tiny ImageNet are also included in CIFAR-10, we order the rows in a similar way to Figure 7(a).
In this work, we tackle the problem of instability during the GAN training procedure, which is caused by the large gap between the minimax and maximin objective values. The core of our technique is a multi-generator architecture. We show that the minimax gap shrinks to  as the number of generators increases, with rate , when the maximization problem w.r.t. the discriminator is concave. This improves over the best-known results of . Experiments verify the effectiveness of our proposed methods.
Acknowledgements. Part of this work was done while H.Z. and S.X. were summer interns at Petuum Inc. We thank Maria-Florina Balcan, Yingyu Liang, and David P. Woodruff for their useful discussions.
-  Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
-  Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In International Conference on Machine Learning, pages 224–232, 2017.
-  Sanjeev Arora, Andrej Risteski, and Yi Zhang. Do GANs learn the distribution? some theory and empirics. In International Conference on Learning Representations, 2018.
-  Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
-  Maria-Florina Balcan, Yingyu Liang, David P Woodruff, and Hongyang Zhang. Matrix completion and related problems via strong duality. In Innovations in Theoretical Computer Science, pages 5:1–5:22, 2018.
-  David Berthelot, Thomas Schumm, and Luke Metz. BEGAN: boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
-  D Bertsekas. Min common/max crossing duality: A geometric view of conjugacy in convex optimization. Lab. for Information and Decision Systems, MIT, Tech. Rep. Report LIDS-P-2796, 2009.
-  Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv preprint arXiv:1707.08819, 2017.
-  Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. In International Conference on Learning Representations, 2017.
-  Ishan Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016.
-  Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip HS Torr, and Puneet K Dokania. Multi-agent diverse generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 8513–8521, 2018.
-  Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
-  Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Phung. MGAN: Training generative adversarial nets with multiple generators. In International Conference on Learning Representations, 2018.
-  Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
-  Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  Tu Nguyen, Trung Le, Hung Vu, and Dinh Phung. Dual discriminator generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2670–2680, 2017.
-  Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271–279, 2016.
-  Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 693–700, 2010.
-  Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
-  Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations, 2018.
-  Arunesh Sinha, Fei Fang, Bo An, Christopher Kiekintveld, and Milind Tambe. Stackelberg security games: Looking beyond a decade of success. In International Joint Conferences on Artificial Intelligence, pages 5494–5501, 2018.
-  Ross M Starr. Quasi-equilibria in markets with non-convex preferences. Econometrica: Journal of the Econometric Society, pages 25–38, 1969.
-  Ruohan Wang, Antoine Cully, Hyung Jin Chang, and Yiannis Demiris. MAGAN: Margin adaptation for generative adversarial networks. arXiv preprint arXiv:1704.03817, 2017.
-  Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018.
-  Hongyang Zhang, Junru Shao, and Ruslan Salakhutdinov. Deep neural networks with multi-branch architectures are less non-convex. arXiv preprint arXiv:1806.01845, 2018.
Appendix A Supplementary Experiments
Appendix B Proofs of Main Results
Theorem 1 (restated). Let and . Denote by the number of parameters of discriminator, i.e., . Suppose that is continuous and is compact and convex. Then the duality gap can be bounded by
provided that the number of generators .
The inequality  follows from weak duality. Thus it suffices to prove the other side of the inequality. All notations in this section are defined in Section 3.1.
We first show that
We have the following lemma.
By the definition of , we have . Since  is the convex closure of the function  (by the weak duality theorem), we have . We now show that . Note that , where
So we have
By Lemma 4, it suffices to show . We have the following lemma.
Under the assumption in Theorem 1,
We note that
where . Therefore,
Consider the subset of :
Define the vector summation
Since is continuous and is compact, the set
is compact. So , , , and , are all compact sets. According to the definition of and the standard duality argument , we have
We are going to apply the following Shapley-Folkman lemma.
Lemma 6 (Shapley-Folkman, ).
Let be a collection of subsets of . Then for every , there is a subset of size at most such that
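To build intuition for the lemma — at most d of the summands need to come from the convex hulls rather than the sets themselves — consider the toy case where every set is {0, 1} in dimension d = 1 (our illustrative choice, not the sets used in the proof). Any point of conv(S_1 + … + S_n) = [0, n] can be written as a sum of n terms with at most one term fractional:

```python
import math

def decompose(x, n):
    """Write x in [0, n] as a sum of n terms, each in conv({0, 1}) = [0, 1],
    with at most one term outside {0, 1} -- illustrating Shapley-Folkman
    for d = 1."""
    whole = min(int(math.floor(x)), n)
    parts = [1.0] * whole
    if x > whole:
        parts.append(x - whole)          # the single fractional summand
    parts += [0.0] * (n - len(parts))
    return parts

parts = decompose(3.7, 6)
```

The count of fractional summands (here at most one) is bounded by the dimension d, not by the number of sets n, which is exactly the mechanism that makes the duality gap shrink as the number of generators grows.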
Applying the above Shapley-Folkman lemma to the set , we have that there are a subset of size and vectors
Representing elements of the convex hull of  via the Carathéodory theorem, we have that for each , there are vectors  and scalars  such that
Recall that we define
and . We have for ,
Therefore, we have
as desired. ∎
When is concave and closed w.r.t. discriminator parameter , we have . Thus, and . ∎