alphaGAN
A PyTorch implementation of alphaGAN
Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used as a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.
Generative adversarial networks (GANs) (Goodfellow et al., 2014) are one of the dominant approaches for learning generative models in contemporary machine learning research, and provide a flexible algorithm for learning in latent variable models. Directed latent variable models describe a data generating process in which a source of noise is transformed into a plausible data sample using a non-linear function, and GANs drive learning by discriminating observed data from model-generated data. GANs allow for training on large datasets, are fast to simulate from, and, when trained on image data, produce visually compelling sample images. But this flexibility comes with instabilities in optimization that lead to the problem of mode collapse, in which generated data does not reflect the diversity of the underlying data distribution. A large class of GAN variants that aim to address this problem are auto-encoder-based GANs (AE-GANs), which use an auto-encoder to encourage the model to better represent
all the data it is trained with, thus discouraging mode collapse.

Auto-encoders have been successfully used to improve GAN training. For example, plug and play generative networks (PPGNs) (Nguyen et al., 2016) produce state-of-the-art samples by optimizing an objective that combines an auto-encoder loss, a GAN loss, and a classification loss defined using a pre-trained classifier. AE-GANs can be broadly classified into three approaches: (1) those using an auto-encoder as the discriminator, such as energy-based GANs and boundary-equilibrium GANs (Berthelot et al., 2017); (2) those using a denoising auto-encoder to derive an auxiliary loss for the generator, such as denoising feature matching GANs (Warde-Farley and Bengio, 2017); and (3) those combining ideas from VAEs and GANs. For example, the variational auto-encoder GAN (VAE-GAN) (Larsen et al., 2016) adds an adversarial loss to the variational evidence lower bound objective. More recent GAN variants, such as mode-regularized GANs (MRGANs) (Che et al., 2016) and adversarial generator encoders (AGE) (Ulyanov et al., 2017), also use a separate encoder in order to stabilize GAN training. Such variants are interesting because they reveal connections to VAEs, but the principles underlying the fusion of auto-encoders and GANs remain unclear.

In this paper, we develop a principled approach for hybrid AE-GANs. By exploiting the hierarchical structure of the latent variable model learned by GANs, we show how another popular approach for learning latent variable models, variational auto-encoders (VAEs), can be combined with GANs. This approach is advantageous since it allows us to overcome the limitations of each method. Whereas VAEs often produce blurry images when trained on image data, they do not suffer from the mode collapse experienced by GANs. GANs make few distributional assumptions about the model, whereas VAEs allow for inference of the latent variables, which is useful for representation learning, visualization and explanation. The approach we develop combines the best of these two worlds, provides a unified objective for learning, is purely unsupervised, requires no pre-training or external classifiers, and can easily be extended to other generative modeling tasks.
We begin by reviewing the tools for dealing with intractable generative models that we take from both GANs and VAEs in section 2, and then make the following contributions:
We show that variational inference applies equally well to GANs, and that discriminators can be used for variational inference with implicit posterior approximations.
Likelihood-based and likelihood-free models can be combined when learning generative models. In the likelihood-free setting, we develop variational inference with synthetic likelihoods, which allows us to learn such models.
We develop a principled objective function for auto-encoding GANs (α-GAN), and describe the considerations needed to make it work in practice. (We use the Greek prefix α for α-GAN, as AE-GAN and most other Latin prefixes seem to have been taken: https://deephunt.in/theganzoo79597dc8c347.)
Evaluation is one of the major challenges in GAN research, and we use a battery of evaluation measures to carefully assess the performance of our approach, comparing to DCGAN, Wasserstein GAN and adversarial generator encoders (AGE). We emphasize the continuing challenge of evaluation in implicit generative models and show that our model performs well on these measures.
Latent Variable Models: Latent variable models describe a stochastic process by which modeled data is assumed to be generated (and thereby a process by which synthetic data can be simulated from the model distribution). In their simplest form, an unobserved latent quantity z gives rise to a conditional distribution in the ambient space of the observed data, p_θ(x|z). In several recently proposed model families, p_θ(x|z) is specified via a generator (or decoder) G_θ(z), a non-linear function with parameters θ. In this work we consider models with a standard Gaussian prior p(z) = N(0, I), unless otherwise specified.
In implicit latent variable models, or likelihood-free models, we do not make any further assumptions about the data generating process and set the observation likelihood to a point mass, p_θ(x|z) = δ(x − G_θ(z)); this is the model class considered in many simulation-based models, and especially in generative adversarial networks (GANs) (Goodfellow et al., 2014). In prescribed latent variable models we make a further assumption of observation noise, and any likelihood function that is appropriate to the data can be used.
In both implicit and prescribed models (such as GANs and VAEs, respectively), an important quantity that describes the quality of the model is the marginal likelihood p_θ(x) = ∫ p_θ(x|z) p(z) dz, in which the latent variables have been integrated over. We learn about the parameters of the model by minimizing a divergence between the marginal likelihood p_θ(x) and the true data distribution p*(x), such as the KL divergence KL[p*(x) ∥ p_θ(x)]. But in both types of models the marginal likelihood is intractable, requiring us to find ways to overcome this intractability in order to learn the model parameters.
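To make the role of the marginal likelihood concrete, the following sketch (a toy one-dimensional linear-Gaussian model of our own choosing, not from the paper) estimates p_θ(x) = ∫ p_θ(x|z) p(z) dz by Monte Carlo over the prior and checks it against the closed-form marginal that this toy model happens to admit:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5          # observation noise variance (toy choice)
x = 1.3               # a single observed data point

# Monte Carlo estimate of p(x) = E_{p(z)}[p(x|z)] with z ~ N(0, 1), x|z ~ N(z, sigma2)
z = rng.standard_normal(200_000)
px_mc = np.mean(np.exp(-(x - z) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2))

# Analytic marginal for this toy model: x ~ N(0, 1 + sigma2)
var = 1.0 + sigma2
px_true = np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

assert abs(px_mc - px_true) < 1e-2
```

For deep generators this integral has no closed form and naive Monte Carlo is hopeless in high dimensions, which is precisely the intractability the rest of the section works around.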
Generative Adversarial Networks: One way to overcome the intractability of the marginal likelihood is to never compute it, and instead to learn about the model parameters using a tool that gives us indirect information about it. Generative adversarial networks (GANs) (Goodfellow et al., 2014) do this by learning a suitably powerful discriminator that learns to distinguish samples from the true distribution p*(x) and the model p_θ(x). The ability of the discriminator (or lack thereof) to distinguish between real and generated data is the learning signal that drives the optimization of the model parameters: when this discriminator is unable to distinguish between real and simulated data, we have learned all we can about the observed data. This is a principle of learning known under various names, including adversarial training (Goodfellow et al., 2014),
estimation-by-comparison (Gutmann and Hyvärinen, 2012; Gutmann et al., 2014), and unsupervised-as-supervised learning (Hastie et al., 2013).

Let y denote a binary label, with y = 1 corresponding to data samples from the real data distribution p*(x) and y = 0 to simulated data from p_θ(x), and let D_φ(x) be a discriminator that gives the probability that an input x is from the real distribution, with discriminator parameters φ. At any time point, we update the discriminator by drawing samples from the real data and from the model and minimizing the binary cross entropy (1). The generator parameters θ are then updated by maximizing the probability that samples from p_θ(x) are classified as real; Goodfellow et al. (2014) suggest the alternative loss (2), which provides stronger gradients. The optimization is then an alternating minimization w.r.t. φ and θ:

min_φ  −E_{p*(x)}[log D_φ(x)] − E_{p(z)}[log(1 − D_φ(G_θ(z)))]   (1)

min_θ  −E_{p(z)}[log D_φ(G_θ(z))]   (2)
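The practical difference between maximizing the probability of being classified real via log(1 − D) and the alternative loss (2) can be read off the gradients of the two losses with respect to the discriminator logit. This small self-contained check (our own illustration, not code from the paper) shows the original loss saturating exactly where the generator most needs signal:

```python
import numpy as np

def sigmoid(l):
    return 1.0 / (1.0 + np.exp(-l))

# Discriminator logit on a generated sample; a very negative logit means
# D confidently rejects the sample, as happens early in training.
logit = -5.0
d = sigmoid(logit)

# Gradient of the "saturating" generator loss  log(1 - D)  w.r.t. the logit:
grad_saturating = -d                  # d/dl log(1 - sigmoid(l)) = -sigmoid(l)
# Gradient of the "non-saturating" loss  -log D  from (2) w.r.t. the logit:
grad_nonsaturating = -(1.0 - d)       # d/dl [-log sigmoid(l)] = sigmoid(l) - 1

# When D easily rejects fakes, the saturating loss gives almost no signal,
# while the non-saturating loss still provides a strong gradient.
assert abs(grad_saturating) < 1e-2
assert abs(grad_nonsaturating) > 0.9
```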
GANs are especially interesting as a way of learning in latent variable models, since they do not require inference of the latent variables z, and are applicable to both implicit and prescribed models. GANs are based on an underlying principle of density ratio estimation (Goodfellow et al., 2014; Goodfellow, 2014; Uehara et al., 2016; Mohamed and Lakshminarayanan, 2016) and thus provide us with an important tool for overcoming intractable distributions.
The Density Ratio Trick: By introducing the labels y = 1 for real data and y = 0 for simulated data in GANs, we re-express the data and model distributions in conditional form, i.e. p*(x) = p(x|y = 1) for the true distribution, and p_θ(x) = p(x|y = 0) for the model. The density ratio between the true distribution and model distribution can be computed using these conditional distributions as:

p*(x) / p_θ(x) = p(x|y = 1) / p(x|y = 0) = p(y = 1|x) / p(y = 0|x) = D_φ(x) / (1 − D_φ(x))   (3)

where we used Bayes' rule in the second-to-last step and assumed that the marginal class probabilities are equal, i.e. p(y = 1) = p(y = 0). This tells us that whenever we wish to compute a density ratio, we can simply draw samples from the two distributions and implement a binary classifier of the two sets of samples. By using the density ratio, GANs account for the intractability of the marginal likelihood by looking only at its relative behavior with respect to the true distribution. This trick only requires samples from the two distributions and never access to their analytical forms, making it particularly well-suited for dealing with implicit distributions or likelihood-free models. Since we are required to build a classifier, we can use all the knowledge we have about building state-of-the-art classifiers. This trick is widespread (Goodfellow et al., 2014; Goodfellow, 2014; Makhzani et al., 2015; Mescheder et al., 2017; Karaletsos, 2016; Huszár, 2017; Tran et al., 2017). While class probability estimation is among the most popular approaches, the density ratio can also be computed in several other ways, including divergence minimization and density-ratio matching (Sugiyama et al., 2012; Mohamed and Lakshminarayanan, 2016).
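A minimal sketch of the density ratio trick, using two one-dimensional Gaussians (our own toy choice) for which the true log-ratio is known in closed form: a logistic-regression "discriminator" trained to separate the two sample sets recovers the log-ratio through its logit, as (3) predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x_real = rng.normal(1.0, 1.0, n)   # stand-in for the "true" distribution p*(x) = N(1, 1)
x_fake = rng.normal(0.0, 1.0, n)   # stand-in for the "model" distribution p_theta(x) = N(0, 1)

# Logistic regression: label y=1 for real, y=0 for fake; logit(x) = w*x + b.
X = np.concatenate([x_real, x_fake])
y = np.concatenate([np.ones(n), np.zeros(n)])
w, b = 0.0, 0.0
for _ in range(2000):                       # plain gradient descent on the BCE loss
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.5 * np.mean((p - y) * X)
    b -= 0.5 * np.mean(p - y)

# By (3), the logit estimates log p*(x)/p_theta(x), which here is analytically x - 0.5.
assert abs(w - 1.0) < 0.1 and abs(b + 0.5) < 0.1
```

Nothing in the training loop ever evaluates either density; only samples are used, which is exactly what makes the trick applicable to likelihood-free models.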
Variational Inference: A second approach for dealing with intractable likelihoods is to approximate them. There are several ways to approximate the marginal likelihood, but one of the most popular is to derive a lower bound on it by transforming the marginal likelihood into an expectation over a new variational distribution q_η(z|x), whose variational parameters η can be optimized to ensure that a tight bound can be found. The bound obtained is the popular variational lower bound L(θ, η):

L(θ, η) = E_{q_η(z|x)}[log p_θ(x|z)] − KL[q_η(z|x) ∥ p(z)]   (4)
Variational auto-encoders (VAEs) (Rezende et al., 2014; Kingma and Welling, 2013) provide one way of implementing variational inference, in which the variational distribution q_η(z|x) is represented as an encoder, and the variational and model parameters are jointly optimized using the pathwise stochastic gradient estimator (also known as the reparameterization trick) (Fu, 2006; Kingma and Welling, 2013; Rezende et al., 2014). The variational lower bound (4) is a description applicable to both implicit and prescribed models, and gives us a further tool for dealing with intractable distributions: introduce an encoder to invert the generative process and optimize a lower bound on the marginal likelihood.
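The bound in (4) and the reparameterization trick can be checked on a toy model with a tractable marginal (a one-dimensional linear-Gaussian model of our own choosing): the Monte Carlo ELBO is tight when q matches the true posterior and strictly smaller otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.0   # observed data point
# Toy model: z ~ N(0,1), x|z ~ N(z,1); marginal p(x) = N(0,2), posterior p(z|x) = N(x/2, 1/2).
log_px = -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / (2 * 2.0)

def elbo(m, s, n=200_000):
    """Monte Carlo estimate of (4) with reparameterized q(z|x) = N(m, s^2)."""
    eps = rng.standard_normal(n)
    z = m + s * eps                                    # reparameterization trick
    log_lik = -0.5 * np.log(2 * np.pi) - (x - z) ** 2 / 2   # log p(x|z)
    kl = 0.5 * (m ** 2 + s ** 2 - 1.0) - np.log(s)          # KL[N(m,s^2) || N(0,1)]
    return log_lik.mean() - kl

tight = elbo(x / 2, np.sqrt(0.5))   # q equals the true posterior -> bound is tight
loose = elbo(0.0, 1.0)              # a worse q (the prior) -> strictly smaller ELBO
assert abs(tight - log_px) < 1e-2
assert loose < tight
```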
Synthetic Likelihoods: When the likelihood function p_θ(x|z) is unknown, the variational lower bound (4) cannot directly be used for learning. One further tool with which to overcome this is to replace the likelihood with a substitute, or synthetic likelihood. The original formulation of the synthetic likelihood (Wood, 2010) is based on a Gaussian assumption, but we use the term here to mean any general substitute for the likelihood that maintains its asymptotic properties. The synthetic likelihood form we use here was proposed by Dutta et al. (2016) for approximate Bayesian computation (ABC). The idea is to introduce a synthetic likelihood into the likelihood term of (4) by dividing and multiplying by the true data distribution p*(x):

E_{q_η(z|x)}[log p_θ(x|z)] = E_{q_η(z|x)}[log (p_θ(x|z) / p*(x))] + log p*(x)   (5)
The first term in (5) contains the synthetic likelihood ratio p_θ(x|z) / p*(x). Any estimate of this ratio is an estimate of the likelihood, since they are proportional (and the normalizing constant is independent of θ). Wherever an intractable likelihood appears, we can instead use this ratio. The synthetic likelihood can be estimated using the density ratio trick, by training a discriminator to distinguish between samples from the marginal p*(x) and the conditional p_θ(x|z), where z is drawn from q_η(z|x). The second term in (5) is independent of θ and can be ignored for optimization purposes.
GANs and VAEs have given us useful tools for learning and inference in generative models, and we now use these tools to build new hybrid inference methods. The VAE forms our generic starting point, and we will gradually transform it to be more GAN-like.
Implicit Variational Distributions: The major task in variational inference is the choice of the variational distribution q_η(z|x). Common approaches, such as mean-field variational inference, assume simple distributions like a Gaussian, but we would like to avoid such a restrictive choice. If we treat this distribution as implicit (we do not know its density, but are able to generate from it), then we can use the density ratio trick to replace the KL-divergence term in (4):

KL[q_η(z|x) ∥ p(z)] = E_{q_η(z|x)}[log (q_η(z|x) / p(z))] ≈ E_{q_η(z|x)}[log (C_ω(z) / (1 − C_ω(z)))]   (6)
Here C_ω(z) is a latent classifier, with parameters ω, that discriminates between latent variables produced by the encoder network and variables sampled from a standard Gaussian distribution, giving the probability that z was produced by the encoder. For optimization, the expectation in (6) is evaluated by Monte Carlo integration. Replacing the KL-divergence with a discriminator was first proposed by Makhzani et al. (2015), and a similar idea was used by Mescheder et al. (2017) for adversarial variational Bayes.

Likelihood Choice: If we make an explicit choice of likelihood in the model, then we can substitute our chosen likelihood into (4). We choose a zero-mean Laplace distribution, which corresponds to using a variational auto-encoder with an ℓ1 reconstruction loss; this is a highly popular choice, used in many related auto-encoder GAN variants, such as AGE, BEGAN, CycleGAN and PPGN (Ulyanov et al., 2017; Berthelot et al., 2017; Nguyen et al., 2016; Zhu et al., 2017).
In GANs the effective likelihood is unknown and intractable. We can again use our tools for intractable inference by replacing the intractable likelihood with its synthetic substitute. Using the synthetic likelihood (5) introduces a new synthetic-likelihood classifier D_φ that discriminates between data sampled from the conditional and marginal distributions of the model. The reconstruction term in (4) can thus be either:

E_{q_η(z|x)}[log p_θ(x|z)]    or    E_{q_η(z|x)}[log (D_φ(x̂) / (1 − D_φ(x̂)))],  where x̂ = G_θ(z)   (7)
These two choices have different behaviors. Using the synthetic discriminator-based likelihood means that the model can use the adversarial game to learn the data distribution, although it may still be subject to mode collapse. This is where an explicit choice of likelihood can be used to ensure that we assign mass to all parts of the output support and prevent collapse. When forming a final loss, we can make use of a weighted sum of the two to get the benefits of both types of behavior.
Hybrid Loss Functions: A hybrid objective function that combines all these choices is:

L(θ, η) = E_{p*(x)} E_{q_η(z|x)} [ −λ log p_θ(x|z) − log (D_φ(x̂) / (1 − D_φ(x̂))) + log (C_ω(z) / (1 − C_ω(z))) ]   (8)

where x̂ = G_θ(z), and the first term is the ℓ1 reconstruction arising from the Laplace likelihood choice, weighted by λ. We are required to build four networks: the classifier D_φ is trained to discriminate between reconstructions from an auto-encoder and real data points; a second classifier C_ω is trained to discriminate between latent samples produced by the encoder and samples from a standard Gaussian; we must implement the deep generative model G_θ, and also the encoder network q_η(z|x), which can be implemented using any type of deep network. The density-ratio estimators D_φ and C_ω can be trained using any loss for density ratio estimation described in section 2, hence their loss functions are not shown in (8). We refer to training using (8) as α-GAN. Our algorithm alternates between updates of the parameters of the generator θ, encoder η, synthetic likelihood discriminator φ, and the latent code discriminator ω; see algorithm 1.
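A structural sketch of the four networks and the alternating losses, with linear maps standing in for deep networks. The dimensions, the weight LAM, and the convention that the latent discriminator labels prior samples as "real" are illustrative assumptions of ours, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Z, LAM = 4, 2, 1.0   # data dim, latent dim, reconstruction weight (assumed values)

# Stub "networks": linear maps standing in for deep nets, for illustration only.
theta = rng.normal(size=(D_Z, D_X)) * 0.1     # generator G_theta
eta   = rng.normal(size=(D_X, D_Z)) * 0.1     # encoder q_eta
phi   = rng.normal(size=D_X) * 0.1            # data discriminator D_phi
omega = rng.normal(size=D_Z) * 0.1            # latent code discriminator C_omega

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def alpha_gan_losses(x, eps=1e-8):
    z_hat = x @ eta                            # encode
    x_rec = z_hat @ theta                      # reconstruct
    z = rng.standard_normal((len(x), D_Z))     # prior samples
    x_gen = z @ theta                          # generate
    d_rec, d_gen, d_real = sigmoid(x_rec @ phi), sigmoid(x_gen @ phi), sigmoid(x @ phi)
    c_hat, c_prior = sigmoid(z_hat @ omega), sigmoid(z @ omega)
    # Generator/encoder loss: l1 reconstruction plus non-saturating ratio terms,
    # with both reconstructions and samples passed to the data discriminator.
    loss_gen = (LAM * np.abs(x - x_rec).mean()
                - np.log(d_rec + eps).mean() - np.log(d_gen + eps).mean()
                - np.log(c_hat + eps).mean())
    # Data discriminator: real data vs. reconstructions and samples (both "fake").
    loss_d = -(np.log(d_real + eps).mean()
               + np.log(1 - d_rec + eps).mean() + np.log(1 - d_gen + eps).mean())
    # Latent discriminator: prior samples as "real", encoder codes as "fake" (assumed).
    loss_c = -(np.log(c_prior + eps).mean() + np.log(1 - c_hat + eps).mean())
    return loss_gen, loss_d, loss_c

losses = alpha_gan_losses(rng.standard_normal((64, D_X)))
assert all(np.isfinite(l) for l in losses)
```

In a real implementation, each of the three losses would be minimized in turn with respect to its own parameters (θ and η for loss_gen, φ for loss_d, ω for loss_c), as in algorithm 1.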
Improved Techniques: Equation (8) provides a principled starting point for optimization based on losses obtained by combining insights from VAEs and GANs. To improve the stability of optimization and the speed of learning, we make two modifications. Firstly, following the insights of Mohamed and Lakshminarayanan (2016), we consider the reverse KL loss formulation for both the latent discriminator and the synthetic likelihood discriminator: we replace log(1 − D(·)) with −log D(·) while training the generator, as this provides non-saturating gradients. The minimization over the generator parameters becomes:
min_θ E_{p*(x)} E_{q_η(z|x)} [ λ ‖x − G_θ(z)‖₁ − log D_φ(G_θ(z)) ]   (9)
which shows that we are using the GAN updates for the generator, with the addition of a reconstruction term that discourages mode collapse, as G_θ needs to be able to reconstruct every input x.
Secondly, we found that passing samples G_θ(z), z ∼ p(z) to the discriminator as fake samples, in addition to the reconstructions, helps improve performance. One way to justify the use of samples is to apply Jensen's inequality, log p_θ(x) = log E_{p(z)}[p_θ(x|z)] ≥ E_{p(z)}[log p_θ(x|z)], and then replace the likelihood with a synthetic likelihood, as done for reconstructions. Instead of training two separate discriminators, we train a single discriminator which treats samples and reconstructions as fake, and real data as real.
Figure 1 summarizes our architecture and the architectures we compare with in the experimental section. Hybrids of VAEs and GANs can be classified by whether the density ratio trick is applied only to the likelihood, only to the prior approximation, or to both. Table 1 reveals the connections to related approaches (see also (Huszár, 2017, Table 1)). DCGAN (Radford et al., 2015) and WGAN-GP (Gulrajani et al., 2017) are pure GAN variants; they do not use an auto-encoder loss, nor do they do inference. WGAN-GP shares the attributes of DCGAN, except that it uses a critic that approximates the Wasserstein distance (Arjovsky et al., 2017) instead of a density ratio estimator. AGE uses an approximation of the KL term; however, it does not use a synthetic likelihood, but instead uses observed likelihoods (reconstruction losses) for both latent codes and data. The adversarial component of AGE arises from the opposing goals of the encoder and decoder: the encoder tries to compress data into codes drawn from the prior, while compressing samples into codes which do not match the prior; at the same time, the decoder wants to generate samples that, when encoded, produce codes which match the prior distribution. The VAE uses the observation likelihood and an analytic KL term; however, it tends to produce blurry images, hence we do not consider it here. To address the blurriness issue, VAE-GAN changes the VAE loss function by replacing the observed likelihood on pixels with an adversarial loss together with a reconstruction metric in discriminator feature space. Unlike our work, VAE-GAN still uses the analytical KL loss to minimize the distance between the prior and the posterior of the latents, and does not discuss the connection to density ratio estimation.
Similar to VAE-GAN, Dosovitskiy and Brox (2016) replace the observed likelihood term in the variational lower bound with a weighted sum of a feature matching loss (here the features matched are those of a pre-trained classifier) and an adversarial loss, but instead of using the analytical KL, they use a numerical approximation. We explore the same approximation (also used by AGE) in Section D in the Appendix. By not using a pre-trained classifier or a feature matching loss, α-GAN is trained end-to-end, completely unsupervised, and maximizes a lower bound on the true data likelihood.
ALI (Dumoulin et al., 2016) and BiGAN (Donahue et al., 2016) perform inference by creating an adversarial game between the encoder and decoder via a discriminator that operates on (x, z) space. The discriminator learns to distinguish between input-output pairs of the encoder (where x is a sample from the data distribution and z is a sample from the conditional posterior q_η(z|x)) and of the decoder (where z is a sample from the latent prior and x is a sample from the conditional p_θ(x|z)). Unlike α-GAN, their approach operates on the joint distribution, without exploiting the structure of the model. CycleGAN (Zhu et al., 2017) was proposed for image-to-image translation, but applying the underlying cycle consistency principle to image-to-code translation reveals an interesting connection with α-GAN. This method has become popular for image-to-image translation, with similar approaches having been proposed (Kim et al., 2017). Recall that in x space, we use both a pointwise reconstruction term and a loss to match the distributions of x and x̂. In z space, α-GAN only matches the distributions of the codes and the prior; adding a pointwise code reconstruction loss would make it similar to CycleGAN. We note, however, that the CycleGAN authors used the least squares GAN loss, while the traditional GAN loss needs to be used to obtain the variational lower bound in (4).

In mode-regularized GANs (MRGANs) (Che et al., 2016), the generator is part of an auto-encoder, hence it learns how to produce reconstructions from the posterior over latent codes and also independently learns how to produce samples from codes drawn from the prior over latents. MRGANs employ two discriminators, one to distinguish between data and reconstructions and one to distinguish between data and samples. As described in Section 3, in α-GAN we also pass both samples and reconstructions through the discriminator (which learns to distinguish between them and data). However, we only need one discriminator, as we explicitly match the latent prior and the latent posterior given by the model using the KL term in (4), which encourages the distributions of reconstructions and samples to be similar.
Algorithm     | Likelihood: observer | Likelihood: ratio estimator ("synthetic") | Prior: KL (analytic) | Prior: KL (approximate) | Prior: ratio estimator
------------- | -------------------- | ----------------------------------------- | -------------------- | ----------------------- | ----------------------
VAE           | ✓                    |                                           | ✓                    |                         |
DCGAN         |                      | ✓                                         |                      |                         |
VAE-GAN       | ✓                    | *                                         | ✓                    |                         |
AGE           | ✓                    |                                           |                      | ✓                       |
α-GAN (ours)  | ✓                    | ✓                                         |                      |                         | ✓
Evaluating generative models is challenging (Theis et al., 2015). In particular, evaluating GANs is difficult due to the lack of likelihood. Multiple proxy metrics have been proposed, and we explore some of them in this work and assess their strengths and weaknesses in the experiments section.
Inception score: The inception score was proposed by Salimans et al. (2016) and has been widely adopted since. The inception score uses a pre-trained neural network classifier to capture two desirable properties of generated samples: they should be highly classifiable and diverse with respect to class labels. It does so by computing the average of the KL divergences between the conditional label distributions of samples (expected to have low entropy for easily classifiable samples) and the marginal label distribution obtained from all the samples (expected to have high entropy if all classes are equally represented in the set of samples). As the name suggests, the classifier network used to compute the inception score was originally an Inception network (Szegedy et al., 2016) trained on the ImageNet dataset. For comparison to previous work, we report scores using this network. However, when reporting CIFAR10 results we also report metrics obtained using a VGG-style convolutional neural network trained on the same dataset, which obtained a 5.5% error rate (see section H.5 in the appendix for details on this network).

Multi-scale structural similarity (MS-SSIM): The inception score fails to capture mode collapse inside a class: the inception score of a model that generates the same image for a class and that of a model able to capture diversity inside a class are the same. To address this issue, Odena et al. (2016) assess the similarity between class-conditional generated samples using MS-SSIM (Wang et al., 2003), an image similarity metric that has been shown to correlate well with human judgement. MS-SSIM ranges between 0.0 (low similarity) and 1.0 (high similarity). By computing the average pairwise MS-SSIM score between images in a given set, we can determine how similar the images are, and in particular we can compare with the similarity obtained on a reference set (the training set, for example). Since our models are not class conditional, we only used MS-SSIM to evaluate models on CelebA (Liu et al., 2015), a dataset of faces, since the variability of the data there is smaller. For datasets with very distinct labels, MS-SSIM would not give us a good metric, since there will be high variability between classes. We report the sample diversity score as 1 − MS-SSIM. The reported results on this metric need to be seen relative to the diversity obtained on the input dataset: too much diversity can mean failure to capture the data distribution. To illustrate this, we computed the diversity on images from the input dataset to which we add normal noise, and it is higher than the diversity of the original data. We report this value as another baseline for this metric.

Independent Wasserstein critic: Danihelka et al. (2017) proposed training an independent Wasserstein GAN critic to distinguish between held-out validation data and generated samples. (Danihelka et al. (2017) used the original WGAN (Arjovsky et al., 2017), whereas we use the improved WGAN-GP proposed in (Gulrajani et al., 2017).) This metric measures both overfitting and mode collapse: if the generator memorizes the training set, the critic trained on validation data will be able to distinguish between samples and data; if mode collapse occurs, the critic will have an easy task distinguishing between data and samples. The Wasserstein distance does not saturate when the two distributions do not overlap (Arjovsky et al., 2017), and the magnitude of the distance represents how easy it is for the critic to distinguish between data and samples. To be consistent with the other metrics, we report the negative of the Wasserstein distance between the test set and the generator, hence higher values are better. Since the critic is trained independently and for evaluation only, and thus does not affect the training of the generator, this evaluation technique can be used irrespective of the training criteria used (Danihelka et al., 2017).
To ensure that the independent critic does not overfit to the validation data, we only start training it halfway through the training of our model, and we examine the learning curves during training (see Appendix E in the supplementary material for learning curves).
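The inception score described above reduces to a short computation once a classifier's class probabilities are in hand. This sketch (our own, with a made-up 4-class classifier output) shows that confident, diverse samples score higher than a mode-collapsed set:

```python
import numpy as np

def inception_score(probs):
    """probs: (n_samples, n_classes) class probabilities from a pretrained classifier."""
    p_y = probs.mean(axis=0)                                     # marginal label distribution
    kl = np.sum(probs * (np.log(probs) - np.log(p_y)), axis=1)   # KL(p(y|x) || p(y)) per sample
    return float(np.exp(kl.mean()))                              # exp of the average KL

# Diverse, confident samples: each sample is near-certain, all classes covered.
confident = np.eye(4) * 0.96 + 0.01     # rows sum to 1: one class at 0.97, rest at 0.01
good = np.tile(confident, (25, 1))
# Collapsed samples: every sample gets the same prediction.
bad = np.tile(confident[0], (100, 1))

assert inception_score(good) > inception_score(bad)
assert abs(inception_score(bad) - 1.0) < 1e-6   # identical predictions give the minimum score
```

The metric's blind spot discussed above is visible here too: any set with identical per-sample distributions that still covers all classes equally would score well, regardless of within-class diversity.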
To better understand the importance of auto-encoder based methods in the GAN landscape, we implemented and compared the proposed α-GAN with another hybrid model, AGE, as well as pure GAN variants such as DCGAN and WGAN-GP, across three datasets: ColorMNIST (Metz et al., 2016), CelebA (Liu et al., 2015) and CIFAR10 (Krizhevsky, 2009). We complement the visual inspection of samples with a battery of numerical tests using the metrics above, to gain insight both into the models and into the metrics themselves. For a comprehensive analysis, we report both the best values obtained by each algorithm and the quartiles obtained by each hyperparameter sweep for each model, to assess sensitivity to hyperparameters. On all metrics, we report box plots over all the hyperparameters we considered, with the best 10 jobs indicated by black circles (for inception scores and the independent Wasserstein critic, higher is better; for sample diversity, the best reported jobs are those with the smallest distance from the reference value computed on the test set). To the best of our knowledge, we are the first to do such an analysis of the GAN landscape.
For details of the training procedure used in all our experiments, including the hyperparameter sweeps, we refer to Appendix H in the supplementary material. Note that the models considered here are all unconditional and do not make use of label information, hence it is not appropriate to compare our results with those obtained using conditional GANs (Odena et al., 2016) and semisupervised GANs (Salimans et al., 2016).
Results on ColorMNIST: We compare the values of the independent Wasserstein critic in Figure 2(a), where higher values are better. On this metric, most of the hyperparameter settings tried achieve a higher value than the best DCGAN results. This is supported by the generated samples shown in Figure 3. However, WGAN-GP produces the samples rated best by this metric.
Results on CelebA: The CelebA dataset consists of images of celebrity faces. We show samples from the four models in Figure 4. We also compare the models using the independent Wasserstein critic in Figure 2(b) and the sample diversity score in Figure 5(a). α-GAN is competitive with WGAN-GP and AGE, but has a wider spread than the WGAN-GP model, which produces the best results.
Unlike WGAN and DCGAN, α-GAN and AGE have the advantage of being able to reconstruct inputs. Appendix C shows that α-GAN produces better reconstructions than AGE.
Results on CIFAR10: We show samples from the various models in Figure 6. We evaluate α-GAN using the independent critic, shown in Figure 2(c), where WGAN-GP is the best performing model. We also compare the ImageNet-based inception score in Figure 5(b) and the CIFAR10-based inception score in Figure 5(c). While our model produces the best result on the ImageNet-based inception score, it has a wide spread on the CIFAR10 inception score, where WGAN-GP both performs best and has less hyperparameter spread. This shows that the two metrics differ widely, and that evaluating CIFAR10 samples using the ImageNet-based inception score can lead to erroneous conclusions. To understand the importance of the model used to evaluate the inception score, we looked at the relationship between the inception score measured with the Inception net trained on ImageNet (introduced by Salimans et al. (2016)) and with the VGG-style net trained on CIFAR10, the same dataset on which we train the generative models. We observed that 15% of the jobs in a hyperparameter sweep were ranked in the top 50% by the ImageNet inception score while being ranked in the bottom 50% by the CIFAR10 inception score. Hence, using the inception score of a classifier trained on a different dataset than the one the generative model is trained on can be misleading when ranking models.
The best reported ImageNet-based Inception score on CIFAR10 for unsupervised models is that of DFM-GAN (Warde-Farley and Bengio, 2017), who also report a score for ALI (Dumoulin et al., 2016); however, these models are trained with different architectures and may not be directly comparable.
Experimental insights: Irrespective of the algorithm used, we found that two factors can contribute significantly to the quality of the results:
The network architectures. We noticed that the most decisive factor lies in the architectures chosen for the discriminator and the generator. We found that, given enough capacity, DCGAN (which uses the traditional GAN loss (Goodfellow et al., 2014)) can be very robust, and does not suffer from obvious mode collapse on the datasets we tried. All reported models are sensitive to changes in the architectures, with minor changes resulting in catastrophic mode collapse, regardless of other hyperparameters.
The number of updates performed by the individual components of the model. For DCGAN, we update the generator twice for each discriminator update, following https://github.com/carpedm20/DCGAN-tensorflow; we found this stabilizes training and produces significantly better samples, contrary to GAN theory, which suggests training the discriminator multiple times per generator step instead. Our findings are also consistent with the updates performed by the AGE model, where the generator is updated multiple times for each encoder update. Similarly, for α-GAN, we update the encoder (which can be seen as the latent code generator) and the generator twice for each discriminator and code discriminator update. On the other hand, for WGAN-GP, we update the discriminator 5 times for each generator update, following Arjovsky et al. (2017); Gulrajani et al. (2017).
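The update ratios above can be captured in a small scheduling helper. A minimal sketch (our illustration with placeholder component names, not code from any of the cited repositories):

```python
def training_schedule(n_steps, gen_updates=2, disc_updates=1):
    """Return a flat sequence of update targets: for each outer step,
    `disc_updates` discriminator updates followed by `gen_updates`
    generator updates. The defaults reflect the DCGAN setting described
    in the text; WGAN-GP would use gen_updates=1, disc_updates=5."""
    schedule = []
    for _ in range(n_steps):
        schedule += ["discriminator"] * disc_updates
        schedule += ["generator"] * gen_updates
    return schedule
```

In a real training loop, each entry of the schedule would trigger one optimizer step on the corresponding component while the others are held fixed.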
While the independent Wasserstein critic does not directly measure sample diversity, we notice a high correlation between its estimate of the negative Wasserstein distance and sample similarity (see Appendix G). Note however that the measures are not perfectly correlated, and if used to rank the best performing jobs in a hyperparameter sweep they give different results.
In this paper we have combined the variational lower bound on the data likelihood with the density ratio trick, allowing us to better understand the connection between variational autoencoders and generative adversarial networks. From the newly introduced lower bound on the likelihood we derived a new training criterion for generative models, named α-GAN. α-GAN combines an adversarial loss with a data reconstruction loss. This can be seen in two ways: from the VAE perspective, it can address the blurriness of samples via the (learned) adversarial loss; from the GAN perspective, it can address mode collapse by grounding the generator using a perceptual similarity metric on the data, namely the reconstruction loss. In a quest to understand how α-GAN compares to other GAN models (including autoencoder-based ones), we deployed a set of metrics on three datasets and also compared samples visually. While the picture of evaluating GANs is far from complete, we show that the metrics employed are complementary and assess different failure modes of GANs (mode collapse, overfitting to the training data, and poor learning of the data distribution).
The prospect of marrying the two approaches (VAEs and GANs) comes with multiple benefits: autoencoder-based methods can reconstruct data and can thus be used for inpainting (Nguyen et al., 2016; Xie et al., 2012); having an inference network allows our model to be used for representation learning (Bengio et al., 2013), where we can learn disentangled representations by choosing an appropriate latent prior. We thus believe VAE-GAN hybrids such as α-GAN can be used in unsupervised, supervised and reinforcement learning settings, which opens up directions for future work.
Acknowledgements. We thank Ivo Danihelka and Chris Burgess for helpful feedback and discussions.
The overall training procedure is summarized in Algorithm 1.
We show reconstructions obtained using α-GAN and AGE for the CelebA dataset in Figure 8 and for CIFAR10 in Figure 9.
We have shown that we can estimate the KL term in (4) using the density ratio trick. In the case of a normal prior, another way to estimate the KL divergence on a minibatch of latent codes, each of dimension $d$, with per-dimension sample mean and standard deviation denoted by $m_j$ and $s_j$ respectively, is:

(18)  $\widehat{\mathrm{KL}} = \sum_{j=1}^{d} \left( \frac{m_j^2 + s_j^2}{2} - \log s_j - \frac{1}{2} \right)$

This approximation was also used by Ulyanov et al. [2017] in AGE.
In order to understand how the two ways of estimating the KL term compare, we replaced the code discriminator in α-GAN with the KL approximation in (18). We then compared the results both by visual inspection (see CelebA and CIFAR10 samples in Figure 10) and by evaluating how well the prior was matched. In order to be able to use the same hyperparameters for different latent sizes, we divide the approximation in (18) by the latent size. To understand the effects of the two methods on the resulting autoencoder codes, we plot the means (Figure 11) and the covariance matrix (Figure 12) obtained from a set of saved latent codes. Assessing the statistics of the final codes of models trained with each approach shows that the two ways of enforcing the prior have different side effects: the latent codes obtained using the code discriminator are decorrelated, while the ones obtained using the empirical KL are entangled. This is expected, since correlation between latent dimensions is not modeled by (18), whereas the code discriminator can pick up that highly correlated codes are not from the same distribution as the prior. While the code discriminator achieves better disentangling, the means obtained using the empirical KL are closer to 0, the mean of the prior distribution for each latent. We leave investigating these effects, and combining the two approaches, for future work.
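This empirical KL estimate can be sketched in a few lines of numpy (the helper name, the small variance floor, and the `normalize` flag are our additions; `normalize=True` applies the division by the latent size described above):

```python
import numpy as np

def empirical_kl(codes, normalize=True):
    """Minibatch KL estimate between the empirical per-dimension code
    statistics and a standard normal prior, as in (18).
    codes: array of shape (batch, latent_dim)."""
    m = codes.mean(axis=0)           # per-dimension sample mean
    s = codes.std(axis=0) + 1e-8     # per-dimension sample std (ddof=0), floored
    kl = np.sum((m ** 2 + s ** 2) / 2.0 - np.log(s) - 0.5)
    return kl / codes.shape[1] if normalize else kl
```

As the text notes, this estimate only sees per-dimension statistics, so correlated codes with unit marginal moments incur no penalty, unlike with a code discriminator.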
To ensure that the independent Wasserstein critic does not overfit to the validation data during training, we monitor the difference in performance between training and test (see Figure 13).
Figure 14 shows the best samples on CelebA according to different metrics.
We assess the correlation between sample quality and how good a model is according to an independent Wasserstein critic in Figure 15.
For all our models, we kept a fixed learning rate throughout training. We note the difference with AGE, where the authors decayed the learning rate and changed the loss coefficients during training^4. The exact learning rate sweeps are defined in Table 2. We used the Adam optimizer [Kingma and Ba, 2014] and a batch size of 64 for all our experiments. We used batch normalization [Ioffe and Szegedy, 2015] for all our experiments. We trained all ColorMNIST models for 100000 iterations, and CelebA and CIFAR10 models for 200000 iterations. ^4 As per advice found here: https://github.com/DmitryUlyanov/AGE/
Network  DCGAN  WGAN-GP  α-GAN  AGE
Generator/Encoder
Discriminator
Code discriminator
We used the following sweeps for the models which have combined losses with different coefficients (for all our baselines, we took the sweep ranges from the original papers):
WGAN-GP
The gradient penalty of the discriminator loss function: 10.
AGE
Data reconstruction loss for the encoder: sweep over 100, 500, 1000, 2000.
Code reconstruction loss for the generator: 10.
α-GAN
Data reconstruction loss for the encoder: sweep over 1, 5, 10, 50.
Data reconstruction loss for the generator: sweep over 1, 5, 10, 50.
Adversarial loss for the generator (coming from the data discriminator): 1.0.
Adversarial loss for the encoder (coming from the code discriminator): 1.0.
For AGE, we used the ℓ1 loss as the data reconstruction loss, and the cosine distance for the code reconstruction loss. For α-GAN, we used the ℓ1 loss as the data reconstruction loss and the traditional GAN loss for the data and code discriminators.
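A minimal sketch of how the loss coefficients from the sweeps above combine for one component (function names are ours; the adversarial term is a placeholder standing in for a discriminator output, and the default weight of 10 is just one point of the {1, 5, 10, 50} sweep):

```python
import numpy as np

def l1_loss(x, x_rec):
    """Mean absolute error, used as the data reconstruction loss."""
    return float(np.mean(np.abs(x - x_rec)))

def encoder_loss(x, x_rec, code_adv_term, rec_weight=10.0, adv_weight=1.0):
    """Weighted sum of reconstruction and adversarial terms:
    rec_weight was swept over {1, 5, 10, 50}; adv_weight was fixed at 1.0."""
    return rec_weight * l1_loss(x, x_rec) + adv_weight * code_adv_term
```

The generator loss is assembled analogously, with its own swept reconstruction coefficient and a fixed adversarial coefficient of 1.0.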
We use a normal prior for all models, apart from AGE [Ulyanov et al., 2017], which uses a uniform distribution on the unit ball as its prior; we thus project the output of the encoder onto the unit ball.
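The projection of encoder outputs onto the unit ball can be sketched as follows (our illustration; codes outside the ball are rescaled to unit norm, codes inside it are left unchanged):

```python
import numpy as np

def project_to_unit_ball(z):
    """Project each row of z (shape: batch x latent_dim) onto the unit ball."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return z / np.maximum(norms, 1.0)  # shrink only rows with norm > 1
```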
For all our baselines, we used the same discriminator and generator architectures, and we controlled the number of latents for a fair comparison. For AGE we used the encoder architecture suggested by the authors^5, which is very similar to the DCGAN discriminator architecture. For α-GAN, the encoder is always a convolutional network formed by transposing the generator (we do not use any activation function after the encoder). All discriminators and the AGE encoder use leaky ReLU units with a slope of 0.2, and all generators use ReLUs. For all our experiments using α-GAN, the code discriminator is a 3-layer MLP with 750 hidden units per layer. We did not tune the size of this network: since the prior latent distributions are similar (multivariate normals) across datasets, we postulate that the architecture of the code discriminator matters less than that of the data discriminator, which has to change with the complexity of the data distribution from dataset to dataset. However, one could improve on our results by carefully tuning this architecture too. For all our models trained on ColorMNIST, we swept over the latent sizes 10, 50 and 75. Tables 3 and 4 describe the discriminator and generator architectures respectively. ^5 Code at: https://github.com/DmitryUlyanov/AGE/
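The code discriminator described above (3 layers, 750 hidden units each) can be sketched as a plain numpy forward pass; the initialization scale and the choice of leaky ReLU activations inside this MLP are our assumptions, not specified in the text:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x >= 0, x, slope * x)

def init_code_discriminator(latent_dim, hidden=750, seed=0):
    """Initialize a 3-hidden-layer MLP with a scalar output."""
    rng = np.random.default_rng(seed)
    sizes = [latent_dim, hidden, hidden, hidden, 1]
    return [(rng.normal(0.0, 0.02, (n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def code_discriminator(z, params):
    """Forward pass on a batch of codes z (shape: batch x latent_dim);
    returns unnormalized logits used as density ratio estimates."""
    h = z
    for w, b in params[:-1]:
        h = leaky_relu(h @ w + b)
    w, b = params[-1]
    return h @ w + b
```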
Operation  Kernel  Strides  Feature maps 

Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Linear adv  N/A  N/A  
Linear class  N/A  N/A 
Operation  Kernel  Strides  Feature maps 

Linear  N/A  N/A  
Transposed Convolution  
Transposed Convolution  
Transposed Convolution 
The discriminator and generator architectures used for CelebA and CIFAR10 were the same as the ones used by Gulrajani et al. [2017] for WGAN-GP.^6 Note that the WGAN-GP paper reports Inception scores computed with a different architecture, using 101-ResNet blocks. ^6 Code at: https://github.com/martinarjovsky/WassersteinGAN/blob/master/models/dcgan.py
We used a VGG-style [Simonyan and Zisserman, 2014] convnet trained on CIFAR10 as the classifier network used to report the Inception score in Section 5. The architecture is described in Table 5. We use batch normalization after each convolutional layer. The data is rescaled, and during training the input images are randomly cropped. We used a momentum optimizer with the learning rate starting at 0.1 and decayed by a factor of 0.1 at steps 40000 and 60000, with momentum set to 0.9, together with an ℓ2 regularization penalty. The network was trained for 80000 steps, using a batch size of 256 (8 synchronous workers, each with a batch size of 32). The resulting network achieves a test error of 5.5% on the official CIFAR10 test set.
Operation  Kernel  Strides  Feature maps 

Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Convolution  
Average pooling  N/A  N/A  N/A 
Linear class  N/A  N/A 