Disconnected Manifold Learning for Generative Adversarial Networks

06/03/2018 · Mahyar Khayatkhoei, et al. · Rutgers University, Verisk Analytics

Real images often lie on a union of disjoint manifolds rather than one globally connected manifold, and this can cause several difficulties for the training of common Generative Adversarial Networks (GANs). In this work, we first show that single generator GANs are unable to correctly model a distribution supported on a disconnected manifold, and investigate how sample quality, mode collapse and local convergence are affected by this. Next, we show how using a collection of generators can address this problem, providing new insights into the success of such multi-generator GANs. Finally, we explain the serious issues caused by considering a fixed prior over the collection of generators and propose a novel approach for learning the prior and inferring the necessary number of generators without any supervision. Our proposed modifications can be applied on top of any other GAN model to enable learning of distributions supported on disconnected manifolds. We conduct several experiments to illustrate the aforementioned shortcoming of GANs, its consequences in practice, and the effectiveness of our proposed modifications in alleviating these issues.


1 Introduction

Consider two natural images, a picture of a bird and a picture of a cat for example: can we continuously transform the bird into the cat without ever generating a picture that is neither bird nor cat? In other words, is there a continuous transformation between the two that never leaves the manifold of "real looking" images? It is often the case that real world data falls on a union of several disjoint manifolds and such a transformation does not exist, i.e. the real data distribution is supported on a disconnected manifold, and an effective generative model needs to be able to learn such manifolds.

Generative Adversarial Networks (GANs) Goodfellow et al. (2014) model the problem of finding the unknown distribution of real data as a two player game where one player, called the discriminator, tries to perfectly separate real data from the data generated by a second player, called the generator, while the second player tries to generate data that can perfectly fool the first player. Under certain conditions, Goodfellow et al. (2014) proved that this process will result in a generator that generates data from the real data distribution, hence finding the unknown distribution implicitly. However, later works uncovered several shortcomings of the original formulation, mostly due to violation of one or several of its assumptions in practice Arjovsky and Bottou (2017); Arjovsky et al. (2017); Metz et al. (2016); Srivastava et al. (2017). Most notably, the proof only holds when optimizing in the function space of the generator and discriminator (and not in the parameter space) Goodfellow et al. (2014); the Jensen-Shannon divergence is maxed out when the generated and real data distributions have disjoint supports, resulting in vanishing or unstable gradients Arjovsky and Bottou (2017); and finally there is the mode dropping problem, where the generator fails to correctly capture all the modes of the data distribution, for which, to the best of our knowledge, there is no definitive explanation yet.

One major assumption for the convergence of GANs is that the generator and discriminator both have unlimited capacity Goodfellow et al. (2014); Arjovsky et al. (2017); Srivastava et al. (2017); Hoang et al. (2018), and modeling them with neural networks is then justified through the Universal Approximation Theorem. However, we should note that this theorem is only valid for continuous functions. Moreover, neural networks are far from universal approximators in practice. In fact, we often explicitly restrict neural networks through various regularizers to stabilize training and enhance generalization. Therefore, when the generator and discriminator are modeled by stably trained, regularized neural networks, they may no longer enjoy the convergence promised by the theory.

In this work, we focus on learning distributions with disconnected support, and show how limitations of neural networks in modeling discontinuous functions can cause difficulties in learning such distributions with GANs. We study why these difficulties arise, what consequences they have in practice, and how one can address them by using a collection of generators, providing new insights into the recent success of multi-generator models. However, while all such models consider the number of generators and the prior over them as fixed hyperparameters Arora et al. (2017); Hoang et al. (2018); Ghosh et al. (2017), we propose a novel prior learning approach and show its necessity in effectively learning a distribution with disconnected support. We would like to stress that we are not trying to achieve state of the art performance in our experiments in the present work; rather, we try to illustrate an important limitation of common GAN models and the effectiveness of our proposed modifications. We summarize the contributions of this work below:

  • We identify a shortcoming of GANs in modeling distributions with disconnected support, and investigate its consequences, namely mode dropping, worse sample quality, and worse local convergence (Section 2).

  • We illustrate how using a collection of generators can solve this shortcoming, providing new insights into the success of multi-generator GAN models in practice (Section 3).

  • We show that choosing the number of generators and the probability of selecting them are important factors in correctly learning a distribution with disconnected support, and propose a novel prior learning approach to address these factors (Section 3.1).

  • Our proposed model can effectively learn distributions with disconnected supports and infer the number of necessary disjoint components through prior learning. Instead of one large neural network as the generator, it uses several smaller neural networks, making it more suitable for parallel learning and less prone to bad weight initialization. Moreover, it can be easily integrated with any GAN model to enjoy their benefits as well (Section 5).

(a) Suboptimal Continuous
(b) Optimal
Figure 1: Illustrative example of a continuous generator $G$ with a uniform prior over $z$, trying to capture real data coming from $p_r$, a distribution supported on a union of two disjoint manifolds. (a) shows an example of what a stable neural network is capable of learning for $G$ (a continuous and smooth function); (b) shows an optimal generator. Note that since $z$ is uniformly sampled, the continuous $G$ of (a) necessarily generates off-manifold samples due to its continuity.

2 Difficulties of Learning Disconnected Manifolds

A GAN as proposed by Goodfellow et al. (2014), and most of its successors (e.g. Arjovsky et al. (2017); Gulrajani et al. (2017)), learn a continuous generator $G: \mathcal{Z} \to \mathcal{X}$, which receives samples from some prior $p_z$ as input and generates real data as output. The prior $p_z$ is often a standard multivariate normal distribution or a bounded uniform distribution. This means that $p_z$ is supported on a globally connected subspace of $\mathcal{Z}$. Since a continuous function always keeps the connectedness of space intact Kelley (2017), the probability distribution induced by $G$ is also supported on a globally connected space. Thus $G$, a continuous function by design, can not correctly model a union of disjoint manifolds in $\mathcal{X}$. We highlight this fact in Figure 1 using an illustrative example where the support of real data is a union of two disjoint manifolds. We will look at some consequences of this shortcoming in the next part of this section. For the remainder of this paper, we assume the real data is supported on a manifold $\mathcal{M}$ which is a union of $n_r$ disjoint globally connected manifolds, each denoted by $\mathcal{M}_i$; we refer to each $\mathcal{M}_i$ as a submanifold (note that we are overloading the topological definition of submanifolds in favor of brevity):

$$\mathcal{M} = \bigcup_{i=1}^{n_r} \mathcal{M}_i$$
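This connectedness argument can also be checked numerically: pushing a fine grid of latent samples through any small feed-forward network yields outputs whose consecutive gaps shrink as the grid is refined, so the generated support cannot contain the kind of jump needed to separate two submanifolds. The sketch below illustrates this; the network size and grid resolution are arbitrary choices for illustration, not an architecture from this paper.

```python
import torch
import torch.nn as nn

# A small continuous generator mapping a 1D latent code to 2D outputs.
g = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, 2))

# Fine grid over the connected latent support [0, 1].
z = torch.linspace(0.0, 1.0, steps=10_000).unsqueeze(1)
with torch.no_grad():
    x = g(z)  # image of a connected set under a continuous map

# Largest jump between the images of neighboring latent points.
max_gap = (x[1:] - x[:-1]).norm(dim=1).max().item()
print(f"largest consecutive gap: {max_gap:.4f}")
# Refining the grid drives this gap toward zero: the generated support traces a
# connected curve and cannot equal two well-separated submanifolds.
```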

Sample Quality. Since the GAN's generator tries to cover all submanifolds of real data with a single globally connected manifold, it will inevitably generate off-real-manifold samples. Note that to avoid off-manifold regions, one would have to push the generator to learn a higher frequency function, the learning of which is explicitly avoided by stable training procedures and means of regularization. Therefore a GAN model under stable training will, in addition to real-looking samples, also generate low-quality off-real-manifold samples. See Figure 2 for an example of this problem.

Mode Dropping. In this work, we use the term mode dropping to refer to the situation where one or several submanifolds of real data are not completely covered by the support of the generated distribution. Note that mode collapse is a special case of this definition where all but a small part of a single submanifold are dropped. When the generator can only learn a distribution with globally connected support, it has to learn a cover of the real data submanifolds; in other words, the generator can not reduce the probability density of the off-real-manifold space beyond a certain value. However, the generator can try to minimize the volume of the off-real-manifold space in order to minimize the probability of generating samples there. For example, see how in Figure 2(b) the learned globally connected manifold has minimum off-real-manifold volume; it does not, for instance, learn a cover that crosses the center (the same manifold is learned in 5 different runs). So, in learning the cover, there is a trade off between covering all real data submanifolds and minimizing the volume of the off-real-manifold space in the cover. This trade off means that the generator may sacrifice certain submanifolds, entirely or partially, in favor of learning a cover with less off-real-manifold volume, hence mode dropping.

Local Convergence. Nagarajan and Kolter (2017) recently proved that the training of GANs is locally convergent when the generated and real data distributions are equal near the equilibrium point, and Mescheder et al. (2018) showed the necessity of this condition on a prototypical example. Therefore when the generator can not learn the correct support of the real data distribution, as is the case in our discussion, the resulting equilibrium may not be locally convergent. In practice, this means the generator's support keeps oscillating near the data manifold.

(a) Real Data
(b) WGAN-GP
(c) DMWGAN
(d) DMWGAN-PL
Figure 2: Comparing Wasserstein GAN (WGAN) and its Disconnected Manifold versions with and without prior learning (DMWGAN-PL, DMWGAN) on the disjoint line segments dataset when 10 generators are used. Different colors indicate samples from different generators. Notice how WGAN-GP fails to capture the disconnected manifold of real data, learning a globally connected cover instead, and thus generating off-real-manifold samples. DMWGAN also fails due to an incorrect number of generators. In contrast, DMWGAN-PL is able to infer the necessary number of disjoint components without any supervision and learn the correct disconnected manifold of real data. Each figure shows 10K samples from the respective model. We train each model 5 times; the results shown are consistent across different runs.
(a) WGAN-GP
(b) DMWGAN
(c) DMWGAN-PL
Figure 3: Comparing WGAN-GP, DMWGAN and DMWGAN-PL convergence on the unbalanced disjoint line segments dataset when 10 generators are used. The real data is the same line segments as in Figure 2, except the top right line segment has higher probability. Different colors indicate samples from different generators. Notice how DMWGAN-PL (c) has driven the contribution of the redundant generators to zero without any supervision. Each figure shows 10K samples from the respective model. We train each model 5 times; the results shown are consistent across different runs.

3 Disconnected Manifold Learning

There are two ways to achieve disconnectedness in the generated distribution: making the support of the prior disconnected, or making the generator discontinuous. The former requires deciding how to make the latent space disconnected, for example by adding discrete dimensions Chen et al. (2016), or by using a mixture of Gaussians Gurumurthy et al. (2017). The latter can be achieved by introducing a collection of independent neural networks as the generator. In this work, we investigate the latter solution since it is more suitable for parallel optimization and can be more robust to bad initialization.

We first introduce a set of $n_g$ generators $\{G_c\}_{c=1}^{n_g}$ instead of a single one, independently constructed on a uniform prior over the shared latent space $\mathcal{Z}$. Each generator can therefore potentially learn a separate connected manifold. However, we need to encourage these generators to each focus on a different submanifold of the real data, otherwise they may all learn a cover of the submanifolds and experience the same issues as a single generator GAN. Intuitively, we want the samples generated by each generator to be unique to that generator; in other words, each sample should be a perfect indicator of which generator it came from. Naturally, we can achieve this by maximizing the mutual information $I(c; x)$, where $c$ is the generator id and $x$ is the generated sample. As suggested by Chen et al. (2016), we can implement this by maximizing a lower bound on the mutual information between generator ids and generated samples:

$$I(c; x) = H(c) - H(c \mid x) = \mathbb{E}_{x \sim p_g}\big[\mathrm{KL}\big(p(\cdot \mid x)\,\|\,q(\cdot \mid x)\big)\big] + \mathbb{E}_{x \sim p_g}\big[\mathbb{E}_{c' \sim p(c \mid x)}[\log q(c' \mid x)]\big] + H(c)$$
$$\ge \mathbb{E}_{x \sim p_g}\big[\mathbb{E}_{c' \sim p(c \mid x)}[\log q(c' \mid x)]\big] + H(c) = \mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[\log q(c \mid x)\big] + H(c)$$

where $q(c \mid x)$ is the distribution approximating the true posterior $p(c \mid x)$, $p_g(x \mid c)$ is induced by each generator $G_c$, $\mathrm{KL}$ is the Kullback-Leibler divergence, and the last equality is a consequence of Lemma 5.1 in Chen et al. (2016). Therefore, by modeling $q(c \mid x)$ with a neural network $Q$, the encoder network, maximizing this lower bound boils down to minimizing a cross entropy loss:

$$L_Q = -\,\mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[\log Q(c \mid x)\big] \qquad (1)$$
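In implementation terms, Eq. 1 is a standard cross entropy between the encoder's prediction and the id of the generator that actually produced each sample. A minimal sketch, assuming an encoder that returns logits over generator ids (an interface we assume here, not code from the paper):

```python
import torch
import torch.nn.functional as F

def mutual_info_loss(q_logits: torch.Tensor, gen_ids: torch.Tensor) -> torch.Tensor:
    """Cross-entropy surrogate for Eq. 1: q_logits are the encoder's unnormalized
    scores over generator ids for a batch of generated samples, gen_ids are the
    ids of the generators that produced those samples."""
    return F.cross_entropy(q_logits, gen_ids)

# Example: 32 generated samples, 10 generators.
q_logits = torch.randn(32, 10)                 # Q(x) for a batch of fakes (placeholder)
gen_ids = torch.randint(0, 10, (32,))          # which generator produced each sample
loss_q = mutual_info_loss(q_logits, gen_ids)   # minimized by Q and by the generators
```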

Utilizing the Wasserstein GAN Arjovsky et al. (2017) objectives, the discriminator (critic) and the generators maximize the following, where $D$ is the critic function and $\lambda$ weights the mutual information term:

$$\max_{D}\; \mathbb{E}_{x \sim p_r}\big[D(x)\big] - \mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[D(x)\big] \qquad (2)$$
$$\max_{G_1, \dots, G_{n_g},\, Q}\; \mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[D(x)\big] - \lambda\, L_Q \qquad (3)$$

We call this model Disconnected Manifold Learning WGAN (DMWGAN) in our experiments. We can similarly apply our modifications to the original GAN Goodfellow et al. (2014) to construct DMGAN. We add the single-sided version of the gradient penalty regularizer Gulrajani et al. (2017) to the discriminator/critic objectives of both models and all baselines. See Appendix A for details of our algorithm and the DMGAN objectives. See Appendix F for more details and experiments on the importance of the mutual information term.
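For concreteness, a condensed sketch of these losses and of the sampling step is given below. The module interfaces, the mutual information weight `lambda_mi`, and the prior handling are illustrative assumptions rather than the authors' implementation, and the gradient penalty of Appendix A is omitted here.

```python
import torch
import torch.nn.functional as F

def critic_loss(critic, real, fake):
    # WGAN critic maximizes D(real) - D(fake); return the negative for minimization.
    return -(critic(real).mean() - critic(fake).mean())

def generator_loss(critic, q_encoder, fake, gen_ids, lambda_mi=1.0):
    # Generators maximize D(fake) plus the MI lower bound (Eqs. 1 and 3).
    adv = -critic(fake).mean()
    mi = F.cross_entropy(q_encoder(fake), gen_ids)
    return adv + lambda_mi * mi

def sample_fakes(generators, prior, batch_size, z_dim):
    # Sample generator ids from the prior over generators, then latents, then fakes.
    gen_ids = torch.multinomial(prior, batch_size, replacement=True)
    z = torch.randn(batch_size, z_dim)
    fake = torch.stack([generators[c](z[i]) for i, c in enumerate(gen_ids.tolist())])
    return fake, gen_ids
```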

The original convergence theorems of Goodfellow et al. (2014) and Arjovsky et al. (2017) hold for the proposed DM versions respectively, because all our modifications concern the internal structure of the generator, and can be absorbed into the unlimited capacity assumption. More concretely, all generators together can be viewed as one unified generator where $p(c)$ becomes the mixture probability over the generators, and the mutual information term can be considered a constraint on the generator function space incorporated using a Lagrange multiplier. While most multi-generator models consider $p(c)$ to be a uniform distribution over generators, this naive choice of prior can cause certain difficulties in learning a disconnected support. We will discuss this point, and also introduce and motivate the metrics we use for evaluation, in the next two subsections.

3.1 Learning the Generator’s Prior

In practice, we can not assume that the true number of submanifolds in the real data is known a priori. So let us consider two cases regarding the number of generators $n_g$ compared to the true number of submanifolds $n_r$, under a fixed uniform prior $p(c)$. If $n_g < n_r$, then some generators have to cover several submanifolds of the real data, thus partially experiencing the same issues discussed in Section 2. If $n_g > n_r$, then some generators have to share one real submanifold, and since we are forcing the generators to maintain disjoint supports, this results in partially covered real submanifolds, causing mode dropping. See Figures 2(c) and 3(b) for examples of this issue. Note that an effective solution to the latter problem reduces the former problem to a trade off: the more generators, the better the cover. We can address the latter problem by learning the prior such that it vanishes the contribution of redundant generators. Even when $n_g = n_r$, what if the distribution of data over the submanifolds is not uniform? Since we are forcing each generator to learn a different submanifold, a uniform prior over the generators would result in a suboptimal distribution. This issue further shows the necessity of learning the prior over generators.

We are interested in finding the best prior $p(c)$ over generators. Notice that $Q(c \mid x)$ is implicitly learning the probability of $x$ belonging to each generator $G_c$, hence $Q$ is approximating the true posterior $p(c \mid x)$. We can take an EM approach to learning the prior: the expected value of $Q(c \mid x)$ over the real data distribution gives us an approximation of $p(c)$ (E step), which we can use to train the DMGAN model (M step). Instead of using this empirical average to set $p(c)$ directly, we learn it with a model $r_\psi(c)$, a softmax function over parameters corresponding to each generator. This enables us to control the learning of the prior, the advantage of which we will discuss shortly. We train $r_\psi$ by minimizing the cross entropy as follows:

$$\mathcal{L}_r = H\!\Big(\mathbb{E}_{x \sim p_r}\big[Q(\cdot \mid x)\big],\; r_\psi\Big)$$

where $H(\cdot, \cdot)$ is the cross entropy between the model distribution $r_\psi$ and the true posterior, which we approximate by the expected value of $Q(c \mid x)$ over real data. However, learning the prior from the start, when the generators are still mostly random, may prevent most generators from learning by vanishing their probability too early. To avoid this problem, we add an entropy regularizer and decay its weight with time to gradually shift the prior away from the uniform distribution. Thus the final loss for training $r_\psi$ becomes:

$$\mathcal{L}_r = H\!\Big(\mathbb{E}_{x \sim p_r}\big[Q(\cdot \mid x)\big],\; r_\psi\Big) - \beta\, e^{-\gamma t}\, H\big(r_\psi\big) \qquad (4)$$

where $H(r_\psi)$ is the entropy of the model distribution, $\beta$ is the regularizer weight, $\gamma$ is the decay rate, and $t$ is the training timestep. The model is not very sensitive to $\beta$ and $\gamma$; any combination that ensures a smooth transition away from the uniform distribution is valid. We call this augmented model Disconnected Manifold Learning GAN with Prior Learning (DMGAN-PL) in our experiments. See Figures 2 and 3 for examples showing the advantage of learning the prior.
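The prior update can thus be written as a cross entropy against the encoder's averaged posterior plus a decaying entropy bonus. A small sketch under assumed names (`prior_logits` are the learnable parameters of $r_\psi$, `q_probs` are $Q$'s posteriors on a real batch, `ent_weight` and `decay` stand in for $\beta$ and $\gamma$):

```python
import math
import torch

def prior_loss(prior_logits, q_probs, t, ent_weight=1.0, decay=1e-4):
    """Sketch of Eq. 4: cross entropy between the learned prior r and the posterior
    approximated by averaging Q(c|x) over a real batch, minus a decaying entropy
    bonus that keeps the prior near uniform early in training."""
    r = torch.softmax(prior_logits, dim=0)                    # r_psi(c), shape (n_g,)
    target = q_probs.detach().mean(dim=0)                     # approx. E_{x~p_r}[Q(c|x)]
    cross_entropy = -(target * torch.log(r + 1e-8)).sum()
    entropy = -(r * torch.log(r + 1e-8)).sum()                # H(r_psi)
    return cross_entropy - ent_weight * math.exp(-decay * t) * entropy

# Usage sketch: prior_logits = torch.zeros(n_g, requires_grad=True), updated with Adam.
```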

3.2 Choice of Metrics

We require metrics that can assess inter-mode variation, intra-mode variation and sample quality. The common metric, the Inception Score Salimans et al. (2016), has several drawbacks Barratt and Sharma (2018); Lucic et al. (2017): most notably, it is indifferent to intra-class variation and favors generators that achieve a close-to-uniform distribution over the classes of data. Instead, we consider more direct metrics, together with the FID score Heusel et al. (2017) for natural images.

For inter-mode variation, we use the Jensen-Shannon divergence (JSD) between the class distributions of a pre-trained classifier over real data and over the generator's data. This directly tells us how well the distribution over classes is captured. JSD is preferable to KL because it is bounded and symmetric. For intra-mode variation, we define the mean square geodesic distance (MSD): the average squared geodesic distance between pairs of samples classified into each class. To compute the geodesic distance, the Euclidean distance is used in a small neighborhood of each sample to construct the Isomap graph Yang (2002), over which a shortest-path distance is calculated. This shortest-path distance is an approximation to the geodesic distance on the true image manifold Tenenbaum et al. (2000). Note that the average squared distance, for the Euclidean distance, is equal to twice the trace of the covariance matrix, i.e. the sum of the eigenvalues of the covariance matrix, and can therefore serve as an indicator of the variance within each class:

$$\mathbb{E}_{x, x'}\big[\|x - x'\|^2\big] = 2\,\mathrm{tr}\big(\mathrm{Cov}(x)\big) = 2 \sum_i \lambda_i$$

In our experiments, we choose the smallest $k$ for which the constructed k-nearest-neighbor graph (Isomap) is connected, in order to have a better approximation of the geodesic distance.
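The MSD metric can be approximated with a k-nearest-neighbor graph and graph shortest paths. A sketch using scikit-learn and SciPy, where the feature space, the k search range, and the sample sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path, connected_components

def mean_square_geodesic_distance(x, k_start=5, k_max=50):
    """x: (n, d) array of samples from one class. Uses the smallest k giving a
    connected kNN graph, then averages squared shortest-path (geodesic) distances."""
    for k in range(k_start, k_max + 1):
        graph = kneighbors_graph(x, n_neighbors=k, mode="distance")
        graph = graph.maximum(graph.T)                       # symmetrize the Isomap graph
        n_comp, _ = connected_components(graph, directed=False)
        if n_comp == 1:
            break
    geo = shortest_path(graph, method="D", directed=False)   # Dijkstra over edge lengths
    return float(np.mean(geo ** 2))

# Example: msd = mean_square_geodesic_distance(features_of_class_c)
```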

Another concept we would like to evaluate is sample quality. Given a pretrained classifier with small test error, samples that are classified with high confidence can reasonably be considered good-quality samples. We plot the ratio of samples classified with confidence greater than a threshold, versus the confidence threshold, as a measure of sample quality: the more off-real-manifold samples there are, the lower the resulting curve. Note that this plot is exclusively indicative of sample quality and should be considered in conjunction with the aforementioned metrics.
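The sample-quality curve is simply the fraction of generated samples whose maximum class probability under the pretrained classifier exceeds each threshold; a brief sketch, assuming the classifier exposes softmax probabilities:

```python
import numpy as np

def confidence_curve(class_probs, thresholds=np.linspace(0.5, 1.0, 51)):
    """class_probs: (n_samples, n_classes) softmax outputs of the pretrained
    classifier on generated samples. Returns the ratio of samples whose top
    confidence exceeds each threshold."""
    top_conf = class_probs.max(axis=1)
    return np.array([(top_conf > t).mean() for t in thresholds])
```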

What if the generative model memorizes the dataset that it is trained on? Such a model would score perfectly on all our metrics, while providing no generalization at all. First, note that a single generator GAN model can not memorize the dataset because it can not learn a distribution supported on disjoint components, as discussed in Section 2. Second, while our modifications introduce disconnectedness to GANs, the number of generators we use is on the order of the number of data submanifolds, which is several orders of magnitude smaller than common dataset sizes. Note that if we were to assign one unique point of the latent space to each dataset sample, then a neural network could learn to memorize the dataset by mapping each selected latent point to its corresponding real sample (we would have introduced disjoint components in the latent space in this case); however, this is not how GANs are modeled. Therefore, the memorization issue is not a concern for common GANs or our proposed models (note that this argument addresses the very narrow case of dataset memorization, not over-fitting in general).

4 Related Works

Several recent works have directly targeted the mode collapse problem by introducing a network that is trained to map the data back into the latent space, so that it can provide a learning signal if the generated data has collapsed. ALI Dumoulin et al. (2016) and BiGAN Donahue et al. (2016) consider pairs of data and corresponding latent variables, and construct their discriminator to distinguish such pairs for real and generated data. VEEGAN Srivastava et al. (2017) uses the same discriminator, but also adds an explicit reconstruction loss in the latent space. The main advantage of these models is to prevent loss of information by the generator (mapping several latent codes to a single output). However, in the case of distributions with disconnected support, these models do not provide much advantage over common GANs and suffer from the same issues we discussed in Section 2 due to having a single generator.

Another set of recent works has proposed using multiple generators in GANs in order to improve their convergence. MIX+GAN Arora et al. (2017) proposes using a collection of generators, based on the well-known advantage of learning a mixed strategy over a pure strategy in game theory. MGAN Hoang et al. (2018) similarly uses a collection of generators in order to model a mixture distribution, and trains them together with a classifier over generator ids to encourage each generator to capture a different component of the real mixture distribution. MAD-GAN Ghosh et al. (2017) also uses $k$ generators, together with a $(k+1)$-class discriminator which is trained to correctly classify samples from each generator and true data (hence also acting as a classifier), in order to increase the diversity of generated images. While these models provide reasons why multiple generators can model mixture distributions and achieve more diversity, they do not address why single generator GANs fail to do so. In this work, we explain that it is the disconnectedness of the support that single generator GANs are unable to learn, not the fact that real data comes from a mixture distribution. Moreover, all of these works use a fixed number of generators and do not have any prior learning, which can cause serious problems in learning distributions with disconnected support, as we discussed in Section 3.1 (see Figures 2(c) and 3(b) for examples of this issue).

Finally, several works have targeted the problem of learning the correct manifold of data. MDGAN Che et al. (2016) uses a two-step approach to closely capture the manifold of real data: it first approximates the data manifold by learning a transformation from encoded real images into real-looking images, and then trains a single generator GAN to generate images similar to the transformed encoded images of the previous step. However, MDGAN can not model distributions with disconnected support. InfoGAN Chen et al. (2016) introduces auxiliary dimensions in the latent space and maximizes the mutual information between these extra dimensions and the generated images in order to learn disentangled representations in the latent space. DeLiGAN Gurumurthy et al. (2017) uses a fixed mixture of Gaussians as its latent prior, and does not have any mechanism to encourage diversity. While InfoGAN and DeLiGAN can generate disconnected manifolds, they both assume a fixed number of discrete components equal to the number of underlying classes and have no prior learning over these components, thus suffering from the issues discussed in Section 3.1. Also, neither of these works discusses the inability of single generator GANs to learn disconnected manifolds and its consequences.

Model JSD MNIST JSD Face-Bed FID Face-Bed
WGAN-GP
MIX+GAN
DMWGAN
DMWGAN-PL
Table 1: Inter-class variation measured by Jensen-Shannon divergence (JSD) with the true class distribution for the MNIST and Face-Bed datasets, and FID score for Face-Bed (smaller is better). We run each model 5 times with random initialization, and report average values with one standard deviation intervals.

(a) Intra-class variation MNIST
(b) Sample quality MNIST
(c) Sample quality Face-Bed
Figure 4: (a) Shows intra-class variation in MNIST. Bars show the mean square geodesic distance (MSD) within each class of the dataset. On average, DMWGAN-PL outperforms WGAN-GP in capturing intra-class variation, as measured by MSD, with larger significance on certain classes. (b) Shows the sample quality in the MNIST experiment. (c) Shows sample quality in the Face-Bed experiment. Notice how DMWGAN-PL outperforms other models due to fewer off-real-manifold samples. We run each model 5 times with random initialization, and report average values with one standard deviation intervals in both figures. 10K samples are used for metric evaluations.
(a) WGAN-GP
(b) DMWGAN
(c) DMWGAN-PL
Figure 5: Samples randomly generated by GAN models trained on Face-Bed dataset. Notice how WGAN-GP generates combined face-bedroom images (red boxes) in addition to faces and bedrooms, due to learning a connected cover of the real data support. DMWGAN does not generate such samples, however it generates completely off manifold samples (red boxes) due to having redundant generators and a fixed prior. DMWGAN-PL is able to correctly learn the disconnected support of real data. The samples and trained models are not cherry picked.

5 Experiments

In this section we present several experiments to investigate the issues and the proposed solutions discussed in Sections 2 and 3, respectively. The same network architecture is used for the discriminator and generator networks of all models under comparison, except that we use fewer filters in each layer of the multi-generator models compared to the single generator models, to control for the effect of model complexity. In all experiments, we train each model for a total of 200 epochs with a five-to-one update ratio between discriminator and generator. $Q$, the encoder network, is built on top of the discriminator's last hidden layer and is trained simultaneously with the generators. Each data batch is constructed by first sampling 32 generator ids according to the prior over generators, and then sampling each selected generator using the latent prior. See Appendix B for details of our networks and the hyperparameters.

Disjoint line segments. This dataset is constructed by sampling data uniformly over four disjoint line segments, to achieve a distribution supported on a union of disjoint low-dimensional manifolds. See Figure 2 for the results of experiments on this dataset. In Figure 3, an unbalanced version of this dataset is used, where 0.7 probability is placed on the top right line segment, and the other segments have 0.1 probability each. The generator and discriminator are both MLPs with two hidden layers, and 10 generators are used for the multi-generator models. We choose WGAN-GP as the state-of-the-art GAN model in these experiments (we observed similar or worse convergence with other flavors of single generator GANs). MGAN achieves results similar to DMWGAN. A sketch of how such a toy dataset can be sampled follows; the segment endpoints are illustrative, since the paper only specifies four disjoint segments with uniform (or 0.7/0.1/0.1/0.1) probabilities over them.
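```python
import numpy as np

def sample_disjoint_segments(n, probs=(0.25, 0.25, 0.25, 0.25), seed=0):
    """Sample n points uniformly from four disjoint 2D line segments,
    choosing a segment according to probs."""
    rng = np.random.default_rng(seed)
    # Hypothetical endpoints for four well-separated segments.
    segments = np.array([[[-1.0, -1.0], [-0.5, -1.0]],
                         [[ 0.5, -1.0], [ 1.0, -1.0]],
                         [[-1.0,  1.0], [-0.5,  1.0]],
                         [[ 0.5,  1.0], [ 1.0,  1.0]]])
    idx = rng.choice(4, size=n, p=probs)
    t = rng.random((n, 1))
    start, end = segments[idx, 0], segments[idx, 1]
    return start + t * (end - start)

# Balanced dataset (Figure 2) and an unbalanced one (Figure 3):
balanced = sample_disjoint_segments(10_000)
unbalanced = sample_disjoint_segments(10_000, probs=(0.7, 0.1, 0.1, 0.1))
```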

MNIST dataset. MNIST LeCun (1998) is particularly suitable since samples with different class labels can be reasonably interpreted as lying on disjoint manifolds (with minor exceptions for visually similar digits). The generator and discriminator are DCGAN-like networks Radford et al. (2015) with three convolution layers. Figure 4 shows the mean square geodesic distance (MSD) and Table 1 reports the corresponding divergences in order to compare inter-mode variation. 20 generators are used for the multi-generator models. See Appendix C for experiments using the modified GAN objective. The results demonstrate the advantage of adding our proposed modifications to both GAN and WGAN. See Appendix D for qualitative results.

Face-Bed dataset. We combine 20K face images from the CelebA dataset Liu et al. (2015) and 20K bedroom images from the LSUN Bedrooms dataset Yu et al. (2015) to construct a natural image dataset supported on a disconnected manifold. We center crop and resize all images to the same resolution. 5 generators are used for the multi-generator models. Figures 4(c) and 5 and Table 1 show the results of this experiment. See Appendix E for more qualitative results.

(a)
(b)
(c)
(d)
Figure 6: DMWGAN-PL prior learning during training on MNIST with 20 generators (a, b) and on Face-Bed with 5 generators (c, d). (a, c) show samples from the top generators, i.e. those with the largest learned prior. (b, d) show the probability of selecting each generator during training; each color denotes a different generator. The color identifying each generator in (b) and the border color of each image in (a) correspond, and similarly for (d) and (c). Notice how prior learning has correctly learned the probability of selecting each generator and dropped redundant generators without any supervision.

6 Conclusion and Future Works

In this work we showed why single generator GANs can not correctly learn distributions supported on disconnected manifolds, what consequences this shortcoming has in practice, and how multi-generator GANs can effectively address these issues. Moreover, we showed the importance of learning a prior over the generators rather than using a fixed prior in multi-generator models. However, it is important to highlight that throughout this work we assumed the disconnectedness of the real data support. Verifying this assumption on major datasets, and studying the topological properties of these datasets in general, are interesting directions for future work. Extending the prior learning to other settings, such as learning a prior over the shape of the latent space, and investigating the effects of adding diversity to the discriminator as well as the generators, also remain exciting paths for future research.

Acknowledgement

This work was supported by Verisk Analytics and NSF-USA award number 1409683.

References

Appendix A Algorithm

1: Require: prior $p_z$ over $z$, batch size $m$, number of discriminator updates $n_d$, number of generators $n_g$, weight coefficients $\lambda$ and $\beta$, decay rate $\gamma$, timestep $t = 0$
2: repeat
3:     for $n_d$ steps do
4:         Sample $\{x_i\}_{i=1}^m$, a batch from real data
5:         Sample $\{z_i\}_{i=1}^m$, a batch from the prior $p_z$
6:         Sample $\{c_i\}_{i=1}^m$, a batch from the generators' prior $r_\psi(c)$
7:         Generate $\{\tilde{x}_i = G_{c_i}(z_i)\}_{i=1}^m$ using the selected generators
8:         Compute the critic objective (Eq. 2) with the gradient penalty (Eq. 7)
9:         Adam update to maximize this objective wrt. the critic parameters
10:     end for
11:     Sample a fresh batch from real data
12:     Sample a fresh batch from the prior $p_z$
13:     Sample a fresh batch of generator ids from $r_\psi(c)$
14:     Generate a fresh batch using the selected generators
15:     for each generator $G_c$ do
16:         Compute the generator objective (Eq. 3)
17:         Adam update to maximize this objective wrt. the parameters of $G_c$
18:     end for
19:     Compute the cross entropy loss (Eq. 1)
20:     Adam update to minimize it wrt. the encoder $Q$
21:     Compute the prior loss (Eq. 4)
22:     Adam update to minimize it wrt. the prior parameters $\psi$
23:     $t \leftarrow t + 1$
24: until convergence.
Algorithm 1 Disconnected Manifold Learning WGAN with Prior Learning (DMWGAN-PL). Replace the critic and generator objectives according to Eq. 5 and Eq. 6 in lines 8 and 16 for the modified GAN version (DMGAN-PL).

Utilizing the modified GAN objectives, the discriminator and the generators maximize the following:

$$\max_{D}\; \mathbb{E}_{x \sim p_r}\big[\log D(x)\big] + \mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[\log\big(1 - D(x)\big)\big] \qquad (5)$$
$$\max_{G_1, \dots, G_{n_g},\, Q}\; \mathbb{E}_{c \sim p(c),\, x \sim p_g(x \mid c)}\big[\log D(x)\big] - \lambda\, L_Q \qquad (6)$$

where $p_g(x \mid c)$ is the distribution induced by the $c$-th generator, modeled by a neural network $G_c$, and $D$ is the discriminator function. We add the single-sided version of the gradient penalty regularizer with a weight $\lambda_{GP}$ to the discriminator/critic objectives of both versions of DMGAN and all our baselines:

$$-\,\lambda_{GP}\; \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\Big[\max\big(0,\, \|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\big)^2\Big] \qquad (7)$$

where $p_{\hat{x}}$ is induced by uniformly sampling from the line connecting a sample from $p_r$ and a sample from $p_g$.
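A sketch of the single-sided gradient penalty of Eq. 7 for a PyTorch critic; the interpolation and the default weight follow the standard WGAN-GP recipe and should be read as an assumed re-implementation rather than the authors' code:

```python
import torch

def one_sided_gradient_penalty(critic, real, fake, weight=10.0):
    """Penalize only gradient norms exceeding 1, evaluated on points sampled
    uniformly on lines between real and fake samples."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(dim=1)
    return weight * torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()
```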

Appendix B Network Architecture

Operation Kernel Strides Feature Maps BN Activation
G(z): 100
Fully Connected - ReLU
Nearest Up Sample -
Convolution - ReLU
Nearest Up Sample -
Convolution - ReLU
Nearest Up Sample -
Convolution - Tanh
D(x)
Convolution - Leaky ReLU
Convolution - Leaky ReLU
Convolution - Leaky ReLU
Fully Connected - Sigmoid
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 2: Single Generator Models MNIST
Operation Kernel Strides Feature Maps BN Activation Shared?
G(z): 100 N
Fully Connected - ReLU N
Nearest Up Sample - N
Convolution - ReLU N
Nearest Up Sample - N
Convolution - ReLU N
Nearest Up Sample - N
Convolution - Tanh N
Q(x), D(x)
Convolution - Leaky ReLU Y
Convolution - Leaky ReLU Y
Convolution - Leaky ReLU Y
D Fully Connected - Sigmoid N
Q Convolution Y Leaky ReLU N
Q Fully Connected - Softmax N
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 3: Multiple Generator Models MNIST
Operation Kernel Strides Feature Maps BN Activation
G(z): 100
Fully Connected - ReLU
Nearest Up Sample -
Convolution - ReLU
Nearest Up Sample -
Convolution - ReLU
Nearest Up Sample -
Convolution - Tanh
D(x)
Convolution - Leaky ReLU
Convolution - Leaky ReLU
Convolution - Leaky ReLU
Fully Connected - Sigmoid
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 4: Single Generator Models Face-Bed
Operation Kernel Strides Feature Maps BN Activation Shared?
G(z): 100 N
Fully Connected - ReLU N
Nearest Up Sample - N
Convolution - ReLU N
Nearest Up Sample - N
Convolution - ReLU N
Nearest Up Sample - N
Convolution - Tanh N
Q(x), D(x)
Convolution - Leaky ReLU Y
Convolution - Leaky ReLU Y
Convolution - Leaky ReLU Y
D Fully Connected - Sigmoid N
Q Convolution Y Leaky ReLU N
Q Fully Connected - Softmax N
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 5: Multiple Generator Models Face-Bed
Operation Kernel Strides Feature Maps BN Activation
G(z): 100
Fully Connected 128 - ReLU
Fully Connected 64 - ReLU
Fully Connected 2 -
D(x) 2
Fully Connected 64 - Leaky ReLU
Fully Connected 128 - Leaky ReLU
Fully Connected 1 - Sigmoid
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 6: Single Generator Models Disjoint Line Segments
Operation Kernel Strides Feature Maps BN Activation Shared?
G(z): 100 N
Fully Connected 32 - ReLU N
Fully Connected 16 - ReLU N
Fully Connected 2 - ReLU N
Q(x), D(x) 2
Fully Connected 64 - Leaky ReLU Y
Fully Connected 128 - Leaky ReLU Y
D Fully Connected 1 - Sigmoid N
Q Fully Connected 128 Y Sigmoid N
Q Fully Connected - Softmax N
Batch size 32
Leaky ReLU slope 0.2
Gradient Penalty weight 10
Learning Rate 0.0002
Optimizer Adam
Weight Bias init Xavier, 0
Table 7: Multiple Generator Models Disjoint Line Segments

The pre-trained classifier used for the metrics is the ALL-CNN-B model from Springenberg et al. [2014], trained to high test accuracy on MNIST and on Face-Bed.

Appendix C DMGAN on MNIST dataset

Model KL Reverse KL JSD
Real
GAN-GP
DMGAN-PL
Table 8: Inter-class variation in MNIST under the modified GAN objective with gradient penalty Gulrajani et al. [2017]. We show the Kullback-Leibler divergence KL($p_l$ || $p_c$), its reverse KL($p_c$ || $p_l$), and the Jensen-Shannon divergence JSD($p_l$ || $p_c$) for true data and each model, where $p_l$ and $p_c$ are the distributions of images over classes using ground truth labels and a pretrained classifier, respectively. Each row corresponds to the $p_c$ retrieved from the respective model. The results show that DMGAN-PL captures the inter-class variation better than GAN. We run each model 5 times with random initialization, and report average divergences with one standard deviation intervals.
(a) Intra-class variation
(b) Sample quality
Figure 7: MNIST experiment under GAN modified objective with gradient penalty. (a) Shows intra-class variation. Bars show the mean square distance (MSD) within each class of the dataset for real data, DMGAN-PL model, and GAN model, using the pretrained classifier to determine classes. On average, DMGAN-PL outperforms GAN in capturing intra class variation, as measured by MSD, with larger significance on certain classes. (b) Shows the ratio of samples classified with high confidence by the pretrained classifier, a measure of sample quality. We run each model 5 times with random initialization, and report average values with one standard deviation intervals in both figures. 10K samples are used for metric evaluations.

Appendix D MNIST qualitative results

(a) Real Data
(b) DMWGAN-PL
(c) WGAN-GP
Figure 8: Samples randomly generated from (a) MNIST dataset, (b) DMWGAN-PL model, and (c) WGAN-GP model. Notice the better variation in DMWGAN-PL and the off manifold samples in WGAN-GP (in 6th column, the 2nd and 4th row position for example). The samples and trained models are not cherry picked.

Appendix E Face-Bed qualitative results

(a) Real Data
(b) WGAN-GP
(c) MIX+GAN
(d) DMWGAN (MGAN)
(e) DMWGAN-PL
Figure 9: Samples randomly generated from each model. Notice how models without prior learning generate off-real-manifold images, that is, they generate samples that are combinations of bedrooms and faces (hence neither face nor bedroom), in addition to correct face and bedroom images. The samples and trained models are not cherry picked.
(a) MIX+GAN
(b) DMWGAN (MGAN)
(c) DMWGAN-PL
Figure 10: Samples randomly generated from the generators of each model (each column corresponds to a different generator). Notice how MIX+GAN and DMWGAN both have good generators together with very low quality generators due to sharing the two real image manifolds among all their 5 generators (they sample uniformly from their generators). However, DMWGAN-PL has effectively dropped the training of redundant generators, focusing only on the two necessary ones, without any supervision (it selects samples from only the last two). Samples and models are not cherry picked.

Appendix F Importance of Mutual Information

As discussed in Section 3, maximizing the mutual information (MI) between generated samples and the generator ids helps prevent separate generators from learning the same submanifolds of data and experiencing the same issues as a single generator model. It is important to note that even without the MI term in the objective, the generators are still "able" to learn the disconnected support correctly. However, since the optimization is non-convex, using the MI term to explicitly encourage disjoint supports for separate generators can help avoid undesirable local minima. We show the importance of the MI term in practice by removing it from the generator objective of DMWGAN-PL; we call this variant DMWGAN-PL-MI0.

(a)
(b)
(c)
(d)
Figure 11: (a, b) show DMWGAN-PL-MI0 (without MI) at 30K and 500K training iterations, respectively. (c, d) show the same for DMWGAN-PL (with MI). See how the MI term encourages generators to learn disjoint supports, leading to learning the correct disconnected manifold.
(a) Intra-class variation
(b) Sample quality
Figure 12: MNIST experiment showing the effect of the mutual information term in DMWGAN-PL. (a) Shows intra-class variation. Bars show the mean square distance (MSD) within each class of the dataset for real data, the DMWGAN-PL model, and the DMWGAN-PL-MI0 model, using the pretrained classifier to determine classes. (b) Shows the ratio of samples classified with high confidence by the pretrained classifier, a measure of sample quality. We run each model 5 times with random initialization, and report average values with one standard deviation intervals in both figures. 10K samples are used for metric evaluations.
Model JSD MNIST JSD Face-Bed FID Face-Bed
DMWGAN-PL-MI0
DMWGAN-PL
Table 9: Inter-class variation measured by Jensen Shannon Divergence (JSD) with true class distribution for MNIST and Face-Bedroom dataset, and FID score for Face-Bedroom (smaller is better). We run each model 5 times with random initialization, and report average values with one standard deviation interval