1 Introduction
Deep generative models define distributions over a set of variables organized in multiple layers. Early forms of such models date back to work on hierarchical Bayesian models (Neal, 1992)
and neural network models such as Helmholtz machines
(Dayan et al., 1995), originally studied in the context of unsupervised learning, latent space modeling, and related problems. Such models are usually trained via an EM-style framework, using either a variational inference
(Jordan et al., 1999) or a data augmentation (Tanner and Wong, 1987) algorithm. Of particular relevance to this paper is the classic wake-sleep algorithm by Hinton et al. (1995) for training Helmholtz machines, as it explored the idea of minimizing a pair of KL divergences in opposite directions between the posterior and its approximation. In recent years there has been a resurgence of interest in deep generative modeling. The emerging approaches, including Variational Autoencoders (VAEs) (Kingma and Welling, 2013), Generative Adversarial Networks (GANs) (Goodfellow et al., 2014)
, Generative Moment Matching Networks (GMMNs)
(Li et al., 2015; Dziugaite et al., 2015), autoregressive neural networks (Larochelle and Murray, 2011; Oord et al., 2016), and so forth, have led to impressive results in a myriad of applications, such as image and text generation
(Radford et al., 2015; Hu et al., 2017; van den Oord et al., 2016), disentangled representation learning
(Chen et al., 2016; Kulkarni et al., 2015), and semi-supervised learning
(Salimans et al., 2016; Kingma et al., 2014). The deep generative model literature has largely viewed these approaches as distinct model training paradigms. For instance, GANs aim to achieve an equilibrium between a generator and a discriminator, while VAEs are devoted to maximizing a variational lower bound of the data log-likelihood. A rich array of theoretical analyses and model extensions has been developed independently for GANs (Arjovsky and Bottou, 2017; Arora et al., 2017; Salimans et al., 2016; Nowozin et al., 2016) and VAEs (Burda et al., 2015; Chen et al., 2017; Hu et al., 2017), respectively. A few works attempt to combine the two objectives in a single model for improved inference and sample generation (Mescheder et al., 2017; Larsen et al., 2015; Makhzani et al., 2015; Sønderby et al., 2017). Despite the significant progress specific to each method, it remains unclear how these apparently divergent approaches connect to each other in a principled way.
In this paper, we present a new formulation of GANs and VAEs that connects them under a unified view, and links them back to the classic wake-sleep algorithm. We show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions in opposite directions, and extending the sleep and wake phases, respectively, for generative model learning. More specifically, we develop a reformulation of GANs that interprets the generation of samples as performing posterior inference, leading to an objective that resembles variational inference as in VAEs. As a counterpart, VAEs in our interpretation contain a degenerated adversarial mechanism that blocks out generated samples and only allows real examples for model training.
The proposed interpretation provides a useful tool to analyze the broad class of recent GAN- and VAE-based algorithms, enabling what is perhaps a more principled and unified view of the landscape of generative modeling. For instance, one can easily extend our formulation to subsume InfoGAN (Chen et al., 2016)
that additionally infers hidden representations of examples, VAE/GAN joint models
(Larsen et al., 2015; Che et al., 2017a) that offer improved generation and reduced mode missing, and adversarial domain adaptation (ADA) (Ganin et al., 2016; Purushotham et al., 2017), which is traditionally framed in the discriminative setting. The close parallels between GANs and VAEs further ease transferring techniques that were originally developed for improving one class of models to in turn benefit the other class. We provide two examples in this spirit: 1) Drawing inspiration from the importance weighted VAE (IWAE) (Burda et al., 2015), we straightforwardly derive the importance weighted GAN (IWGAN), which maximizes a tighter lower bound on the marginal likelihood than the vanilla GAN. 2) Motivated by the GAN adversarial game, we activate the originally degenerated discriminator in VAEs, resulting in a full-fledged model that adaptively leverages both real and fake examples for learning. Empirical results show that the techniques imported from the other class are generally applicable to the base model and its variants, yielding consistently better performance.
2 Related Work
There has been a surge of research interest in deep generative models in recent years, with remarkable progress made in understanding several classes of algorithms. The wake-sleep algorithm (Hinton et al., 1995) is one of the earliest general approaches for learning deep generative models. The algorithm incorporates a separate inference model for posterior approximation, and aims at maximizing a variational lower bound of the data log-likelihood, or equivalently, minimizing the KL divergence between the approximate posterior and the true posterior. However, besides the wake phase that minimizes the KL divergence w.r.t. the generative model, a sleep phase is introduced for tractability that instead minimizes the reversed KL divergence w.r.t. the inference model. Recent approaches such as NVIL (Mnih and Gregor, 2014) and VAEs (Kingma and Welling, 2013)
are developed to maximize the variational lower bound w.r.t. both the generative and inference models jointly. To reduce the variance of stochastic gradient estimates, VAEs leverage reparametrized gradients. Much work has been done along the line of improving VAEs.
Burda et al. (2015) develop importance weighted VAEs to obtain a tighter lower bound. As VAEs do not involve a sleep-phase-like procedure, the model cannot leverage samples from the generative model for training. Hu et al. (2017) combine VAEs with an extended sleep procedure that exploits generated samples for learning. Another emerging family of deep generative models is Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), in which a discriminator is trained to distinguish between real and generated samples while the generator is trained to confuse the discriminator. The adversarial approach can alternatively be motivated from the perspectives of approximate Bayesian computation (Gutmann et al., 2014) and density ratio estimation (Mohamed and Lakshminarayanan, 2016)
. The original objective of the generator is to minimize the log probability of the discriminator correctly recognizing a generated sample as fake. This is equivalent to
minimizing a lower bound on the Jensen-Shannon divergence (JSD) between the generator and data distributions (Goodfellow et al., 2014; Nowozin et al., 2016; Huszár; Li, 2016). Moreover, this objective suffers from vanishing gradients when the discriminator is strong. In practice, therefore, another objective is commonly used, which maximizes the log probability of the discriminator recognizing a generated sample as real (Goodfellow et al., 2014; Arjovsky and Bottou, 2017). The second objective has the same optimal solution as the original one. We base our analysis of GANs on the second objective, as it is widely used in practice yet little theoretical analysis has been devoted to it. Numerous extensions of GANs have been developed, including combinations with VAEs for improved generation (Larsen et al., 2015; Makhzani et al., 2015; Che et al., 2017a), and generalizations of the objective to minimize other divergence criteria beyond JSD (Nowozin et al., 2016; Sønderby et al., 2017). The adversarial principle has gone beyond the generation setting and been applied to other contexts such as domain adaptation (Ganin et al., 2016; Purushotham et al., 2017), and Bayesian inference
(Mescheder et al., 2017; Tran et al., 2017; Huszár, 2017; Rosca et al., 2017), which uses implicit variational distributions in VAEs and leverages the adversarial approach for optimization. This paper starts from the basic models of GANs and VAEs, and develops a general formulation that reveals underlying connections between different classes of approaches, including many of the above variants, yielding a unified view of the broad set of deep generative modeling approaches. This paper considerably extends the conference version (Hu et al., 2018) by generalizing the unified framework to a broader set of GAN and VAE variants, providing a more complete and consistent view of the various models and algorithms, adding more discussion of the symmetric view of generation and inference, and reorganizing the presentation to make the theory development clearer.
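The vanishing-gradient contrast between the two generator objectives discussed above can be seen numerically. The sketch below (with hypothetical helper names) compares the magnitude of the gradient signal each objective supplies as a function of the discriminator's output d on a fake sample:

```python
def saturating_grad(d):
    # Derivative w.r.t. d of log(1 - d), the original (minimax)
    # generator objective: the signal flattens out as d -> 0.
    return -1.0 / (1.0 - d)

def nonsaturating_grad(d):
    # Derivative w.r.t. d of -log(d), the objective used in practice:
    # the signal grows as the discriminator confidently rejects fakes.
    return -1.0 / d

# A strong discriminator assigns fake samples d close to 0.
for d in (0.5, 0.1, 0.01):
    print(f"d={d}: |saturating|={abs(saturating_grad(d)):.2f}, "
          f"|non-saturating|={abs(nonsaturating_grad(d)):.2f}")
```

When the discriminator is strong (d near 0), the original objective provides a bounded gradient while the non-saturating objective's gradient grows without bound, which is why the latter trains more reliably early on.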
3 Bridging the Gap
The structures of GANs and VAEs are at first glance quite different from each other. VAEs are based on the variational inference approach, and include an explicit inference model that reverses the generative process defined by the generative model. In contrast, in the traditional view, GANs lack an inference model, but instead have a discriminator that judges generated samples. In this paper, a key idea to bridge the gap is to interpret the generation of samples in GANs as performing inference, and the discrimination as a generative process that produces real/fake labels. The resulting new formulation reveals the connections of GANs to traditional variational inference. The reversed generation-inference interpretations between GANs and VAEs also expose their correspondence to the two learning phases in the classic wake-sleep algorithm.
For ease of presentation and to establish a systematic notation for the paper, we start with a new interpretation of Adversarial Domain Adaptation (ADA) (Ganin et al., 2016), the application of the adversarial approach in the domain adaptation context. We then show that GANs are a special case of ADA, followed by a series of analyses linking GANs, VAEs, and their variants in our formulation.
3.1 Adversarial Domain Adaptation (ADA)
Given two domains, one source domain with labeled data and one target domain without labels, ADA aims to transfer prediction knowledge learned from the source domain to the target domain, by learning domaininvariant features (Ganin et al., 2016; Qin et al., 2017; Purushotham et al., 2017). That is, it learns a feature extractor whose output cannot be distinguished by a discriminator between the source and target domains.
We first review the conventional formulation of ADA. Figure 1(a) illustrates the computation flow. Let be a data example either in the source or target domain, and the domain indicator with indicating the target domain and the source domain. The domainspecific data distributions can then be denoted as a conditional distribution . The feature extractor parameterized with maps data to feature . To enforce domain invariance of feature , a discriminator is learned. Specifically, outputs the probability that comes from the source domain. The discriminator is trained to maximize the binary classification accuracy of recognizing the domains:
(1) 
The feature extractor is then trained to fool the discriminator:
(2) 
We omit the additional loss on that improves the accuracy of the original classification problem based on sourcedomain features (Ganin et al., 2016).
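As a concrete sketch of the two objectives in Eqs.(1)-(2), the following (with hypothetical helper names, and batches of discriminator outputs standing in for expectations) computes each objective from the discriminator's predicted source-domain probabilities:

```python
import math

def discriminator_objective(d_src, d_tgt):
    # Eq.(1)-style objective: the discriminator maximizes the
    # log-probability of correctly labeling the domain of each feature.
    # d_src / d_tgt hold D's predicted source-domain probabilities for
    # features extracted from source- and target-domain data.
    src = sum(math.log(d) for d in d_src) / len(d_src)
    tgt = sum(math.log(1.0 - d) for d in d_tgt) / len(d_tgt)
    return src + tgt

def extractor_objective(d_tgt):
    # Eq.(2)-style objective: the feature extractor is trained to fool
    # the discriminator, pushing target features toward being
    # classified as source-domain.
    return sum(math.log(d) for d in d_tgt) / len(d_tgt)
```

A fully fooled discriminator (d = 0.5 everywhere) leaves the extractor objective at log 0.5, the point where the two domains are indistinguishable in feature space.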
With the background of conventional formulation, we now frame our new interpretation of ADA. The data distribution and deterministic transformation together form an implicit distribution over , denoted as :
(3) 
The distribution is intractable for evaluating likelihood but easy to sample from. Let be the distribution of the domain indicator , e.g., a uniform distribution as in Eqs.(1)(2). The discriminator defines a conditional distribution . Let be the reversed distribution over domains. The objectives of ADA can then be rewritten as (omitting the constant scale factor ):
(4)
(5) 
Note that is encapsulated in the implicit distribution (Eq.3). The only difference of the objectives for from is the replacement of with . This is where the adversarial mechanism comes about. We defer deeper interpretation of the new objectives to the next subsection.
3.2 Generative Adversarial Networks (GANs)
GANs (Goodfellow et al., 2014) can be seen as a special case of ADA. Taking image generation for example, intuitively, we want to transfer the properties of real image (source domain) to generated image (target domain), making them indistinguishable to the discriminator. Figure 1(b) shows the conventional view of GANs.
Formally, now denotes a real example or a generated sample, is the respective latent code. For the generated sample domain (), the implicit distribution is defined by the prior of and the generator (Eq.3), which is also denoted as in the literature. For the real example domain (), the code space and generator are degenerated, and we are directly presented with a fixed distribution , which is just the real data distribution . Note that is also an implicit distribution and allows efficient empirical sampling. In summary, the conditional distribution over is constructed as
(6) 
Here, free parameters are only associated with of the generated sample domain, while is constant. As in ADA, discriminator is simultaneously trained to infer the probability that comes from the real data domain. That is, .
With the established correspondence between GANs and ADA, we can see that the objectives of GANs are precisely expressed as Eqs.(4)(5). To make this clearer, we recover the classical form by unfolding over and plugging in conventional notations. For instance, the objective of the generative parameters in Eq.(5) is translated into
(7) 
where is uniform and results in the constant scale factor . As noted in section 2, we focus on the unsaturated objective for the generator (Goodfellow et al., 2014), as it is commonly used in practice yet still lacks systematic analysis.
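The conditional construction in Eq.(6) can be sketched as follows (a minimal illustration with hypothetical function names): for the generated-sample domain the code is drawn from the prior and pushed through the generator, while for the real domain samples come directly from the empirical data distribution:

```python
import random

def sample_x_given_y(y, generator, sample_prior, real_data):
    # Implicit conditional over x as in Eq.(6): y = 0 selects the
    # generated-sample domain (z ~ prior, x = G(z)); y = 1 selects the
    # real domain, where the code space is degenerated and x is drawn
    # from the (empirical) data distribution.
    if y == 0:
        z = sample_prior()
        return generator(z)
    return random.choice(real_data)

# Toy check with a linear "generator" and a one-point dataset.
fake = sample_x_given_y(0, lambda z: 2.0 * z, lambda: 1.5, [7.0])
real = sample_x_given_y(1, lambda z: 2.0 * z, lambda: 1.5, [7.0])
print(fake, real)  # 3.0 7.0
```

Note that only the y = 0 branch carries free parameters (the generator), matching the text: the real-domain branch is a fixed distribution that merely supports sampling.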
3.2.1 New Interpretation of GANs
Let us take a closer look into the form of Eqs.(4)(5). If we treat as a visible variable while as latent (as in ADA), Eq.(4) closely resembles the M-step in a variational EM (Beal and Ghahramani, 2003) learning procedure. That is, we are “reconstructing” the real/fake indicator with the “generative distribution” , conditioning on inferred from the “variational distribution” . Similarly, Eq.(5) is analogous to the E-step with the “generative distribution” now being , except that the KL divergence regularization between the “variational distribution” and some “prior” over , i.e., , is missing. We take this view and further reveal the connections between GANs and variational learning in the following.
Schematic graphical model representation
Before going a step further, we first illustrate such generative and inference processes in GANs in Figure 1(c). We have introduced new visual elements to augment the conventional graphical model representation, for example, hollow arrows for expressing implicit distributions, and blue arrows for adversarial objectives. We find that such a graphical representation can precisely express various deep generative models in our new perspective, and makes the connections between them clearer. We will see more such graphical representations later.
We continue to analyze the objective for (Eq.5). Let be the state of the parameters from the last iteration. At current iteration, a natural idea is to treat the marginal distribution over at as the “prior”:
(8) 
As above, in Eq.(5) can be interpreted as the “likelihood” function in variational EM. We can then construct the “posterior”:
(9) 
We have the following results in terms of the gradient w.r.t :
Lemma 1
Let be the uniform distribution. The update of at has
(10) 
where and are the Kullback-Leibler and Jensen-Shannon divergences, respectively.
Proofs are provided in the supplements (section A). The result offers new insights into the GAN generative model learning:

Resemblance to variational inference. As discussed above, we see as the latent variable and the variational distribution that approximates the true “posterior” . Therefore, optimizing the generator is equivalent to minimizing the KL divergence between the inference distribution and the posterior (a standard form of variational inference), minus a JSD between the distributions and . The interpretation also helps to reveal the connections between GANs and VAEs, as discussed later.

The JSD term. The negative JSD term is due to the introduction of the prior . This term pushes away from , which acts oppositely to the KLD term. However, we show in the supplementary materials that the JSD term is upper bounded by the KL term (section B). Thus, if the KL term is sufficiently minimized, the magnitude of the JSD also decreases. Note that this does not mean the JSD term is insignificant or negligible; rather, any conclusions drawn from Eq.(10) should take the JSD term into account.

Training dynamics. The component with in the KL divergence term is
(11)
which is a constant. The active component associated with the parameter to optimize is with , i.e.,
(12)
On the other hand, by definition, is a mixture of and , and thus the posterior is also a mixture of and with mixing weights induced from the discriminator . Thus, minimizing the KL divergence in effect drives to a mixture of and . Since is fixed, gets closer to . Figure 2 illustrates the training dynamics schematically.

Explanation of the missing mode issue. JSD is a symmetric divergence measure while KL is asymmetric. The missing mode behavior widely observed in GANs (Metz et al., 2017; Che et al., 2017a) is thus explained by the asymmetry of the KL, which tends to concentrate on large modes of and ignore smaller ones. See Figure 2 for an illustration. Concentration on a few large modes also helps GANs generate sharp and realistic samples.

Optimality assumption of the discriminator. Previous theoretical works have typically assumed (near) optimal discriminator (Goodfellow et al., 2014; Arjovsky and Bottou, 2017):
(13)
which can be unwarranted in practice due to the limited expressiveness of the discriminator (Arora et al., 2017). In contrast, our result does not rely on the optimality assumption. Indeed, our result is a generalization of the previous theorem in (Arjovsky and Bottou, 2017), which is recovered by plugging Eq.(13) into Eq.(10):
(14)
which gives simplified explanations of the training dynamics and the missing mode issue, but only when the discriminator meets certain optimality criteria. Our generalized result enables understanding of broader situations. For instance, when the discriminator gives uniform guesses, or when that is indistinguishable by the discriminator, the gradients of the KL and JSD terms in Eq.(10) cancel out, which stops the generator learning.
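The mode-seeking behavior of the KL direction minimized by GANs, versus the mode-covering behavior of the reversed direction (relevant to VAEs, as discussed later), can be demonstrated with a small numerical experiment. The sketch below (an illustration, not the paper's derivation) fits a single Gaussian q to a bimodal target p by grid search over its mean, under each KL direction:

```python
import math

STEP = 0.05
XS = [i * STEP for i in range(-200, 201)]  # grid on [-10, 10]

def gauss(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

# Bimodal target: a large mode at -2 and a smaller mode at +2.
P = [0.7 * gauss(x, -2.0, 0.5) + 0.3 * gauss(x, 2.0, 0.5) for x in XS]

def kl(a, b):
    # Discretized KL(a || b); tiny densities are skipped for stability.
    return sum(ai * math.log(ai / bi) * STEP
               for ai, bi in zip(a, b) if ai > 1e-12 and bi > 1e-300)

def best_mean(direction):
    # Grid-search the mean of q = N(mu, 0.5^2) under one KL direction.
    best, best_mu = float("inf"), None
    for i in range(-40, 41):
        mu = i / 10.0
        Q = [gauss(x, mu, 0.5) for x in XS]
        d = kl(Q, P) if direction == "q||p" else kl(P, Q)
        if d < best:
            best, best_mu = d, mu
    return best_mu

print(best_mean("q||p"))  # mode-seeking: locks onto the dominant mode at -2
print(best_mean("p||q"))  # mode-covering: settles between the modes
```

Minimizing KL(q || p) places q squarely on the dominant mode and ignores the smaller one (the missing mode behavior), while minimizing KL(p || q) pulls q toward the weighted mean of both modes, covering low-density regions in between.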
3.2.2 InfoGAN
Chen et al. (2016) developed InfoGAN, which additionally recovers the code given sample . This can be straightforwardly formulated in our framework by introducing an extra conditional parameterized by . As discussed above, GANs assume a degenerated code space for real examples, so is defined to be fixed, without free parameters to learn, and is only associated with the case. Further, as in Figure 1(c), is treated as a visible variable. Thus augments the generative process, leading to a full likelihood . InfoGAN is then recovered by extending Eqs.(4)(5) as follows:
(15) 
Again, note that is encapsulated in the implicit distribution . The model is expressed as the schematic graphical model in Figure 1(d).
The resulting augmented posterior is . The result in the form of Lemma 1 still holds:
(16) 
3.2.3 Adversarial Autoencoder (AAE) and CycleGAN
The new formulation is also generally applicable to other GAN-related variants, such as Adversarial Autoencoder (AAE) (Makhzani et al., 2015), Predictability Minimization (Schmidhuber, 1992), and CycleGAN (Zhu et al., 2017).
Specifically, AAE is recovered by simply swapping the code variable and the data variable of InfoGAN in the graphical model, as shown in Figure 3(c). In other words, AAE is precisely an InfoGAN that treats the code as a latent variable to be adversarially regularized, and the data/generation variable as a visible variable. To make this clearer, in the supplementary we demonstrate how the schematic graphical model of Figure 3(c) can directly be translated into the mathematical formula of AAE (Makhzani et al., 2015). Predictability Minimization (PM) (Schmidhuber, 1992) resembles AAE and is also discussed in the supplementary materials.
InfoGAN and AAE are thus a symmetric pair that exchanges the data and code spaces. Further, instead of considering and as data and code spaces respectively, if we use both and to model data spaces of two modalities, and combine the objectives of InfoGAN and AAE as a joint model, we recover the CycleGAN model (Zhu et al., 2017), which performs transformation between the two modalities. In particular, the objectives of AAE (Eq.35 in the supplementary) are precisely the objectives that train CycleGAN to translate into , and the objectives of InfoGAN (Eq.15) are used to train the reversed translation from to .
3.3 Variational Autoencoders (VAEs)
We next explore the second family of deep generative modeling, namely, the VAEs (Kingma and Welling, 2013). The resemblance of GAN learning to variational inference (Lemma 1) suggests strong relations between VAEs and GANs. We build a correspondence between them, and show that VAEs involve minimizing a KLD in the opposite direction to that of GANs, with a degenerated adversarial discriminator.
The conventional definition of VAEs is written as:
(17) 
where is the generative model, the inference model, and the prior. The parameters to learn are intentionally denoted with the notations of the corresponding modules in GANs. VAEs appear to differ greatly from GANs, as they use only real examples and lack an adversarial mechanism.
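The two terms of the conventional VAE objective in Eq.(17), the expected reconstruction log-likelihood and the KL regularizer toward the prior, can be sketched for a one-dimensional Gaussian encoder with a standard normal prior (hypothetical helper names; reparameterized sampling as used in VAEs):

```python
import math
import random

def elbo(x, enc_mu, enc_logvar, log_px_given_z, n_samples=100):
    # Monte-Carlo sketch of the VAE objective (cf. Eq. 17) for a 1-D
    # latent: E_q[log p(x|z)] - KL(q(z|x) || p(z)), with
    # q(z|x) = N(enc_mu, exp(enc_logvar)) and prior p(z) = N(0, 1).
    sigma = math.exp(0.5 * enc_logvar)
    recon = 0.0
    for _ in range(n_samples):
        z = enc_mu + sigma * random.gauss(0.0, 1.0)  # reparameterization
        recon += log_px_given_z(x, z)
    recon /= n_samples
    # Closed-form KL between N(mu, sigma^2) and N(0, 1).
    kl = 0.5 * (enc_mu ** 2 + sigma ** 2 - enc_logvar - 1.0)
    return recon - kl
```

When the encoder exactly matches the prior (enc_mu = 0, enc_logvar = 0), the KL term vanishes and the objective reduces to the expected reconstruction term alone.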
To connect VAEs to GANs, we assume a perfect discriminator that always predicts with probability given real examples, and given generated samples. Again, for notational simplicity, let be the reversed distribution. Precisely as for GANs, in our formulation the code space for real examples is degenerated, and we are presented with the real data distribution directly over . The composite conditional distribution of is thus constructed as:
(18) 
We can see the distribution differs slightly from its GAN counterpart in Eq.(6) and additionally accounts for the uncertainty of generating given . In analogy to InfoGAN, we have the conditional over , namely, , in which is constant due to the degenerated code space for , and is associated with the free parameter . Finally, we extend the prior over to define such that and is again irrelevant.
We are now ready to reformulate the VAE objective in Eq.(17):
Lemma 2
Let . The VAE objective in Eq.(17) is equivalent to (omitting the constant scale factor ):
(19) 
We provide the proof in the supplementary materials (section D).
Figure 3(b) shows the schematic graphical model of the new interpretation of VAEs, where the only difference from InfoGAN is swapping the solid-line arrows (generative process) and the dashed-line arrows (inference). That is, InfoGAN and VAEs form a symmetric pair in the sense of exchanging the generative and inference processes.
Given a fake sample from , the reversed perfect discriminator always predicts with probability , and the loss on fake samples therefore degenerates to a constant (irrelevant to the free parameters). This blocks fake samples from contributing to model learning.
Components | ADA | GANs / InfoGAN | VAEs
-----------|-----|----------------|-----
 | features | data/generations | data/generations
 | domain indicator | real/fake indicator | real/fake indicator (degenerated)
 | data examples | code vector | code vector
 | feature distr. | [I] generator, Eq.6 | [G] generator, Eq.18
 | discriminator | [G] discriminator | [I] discriminator (degenerated)
 | — | [G] infer net (InfoGAN) | [I] infer net
KLD to min | same as GANs | |
3.4 Connecting the Two Families of GANs and VAEs
Table 1 summarizes the correspondence between the various methods. Lemmas 1 and 2 have revealed that both GANs and VAEs involve minimizing a KLD of respective inference and posterior distributions. Specifically, GANs involve minimizing the while VAEs the . This exposes new connections between the two model classes in multiple aspects, each of which in turn leads to a set of existing research or can inspire new research directions:


As discussed in Lemma 1, GANs now also relate to the variational inference algorithm, as do VAEs, revealing a unified statistical view of the two classes. Moreover, the new perspective naturally enables many extensions of VAEs and the vanilla variational inference algorithm to be transferred to GANs. We show an example in the next section.

The generator parameters are placed in opposite directions in the two KLDs. The asymmetry of the KLD leads to distinct model behaviors. For instance, as discussed in Lemma 1, GANs are able to generate sharp images but tend to collapse to one or a few modes of the data (i.e., mode missing). In contrast, the KLD of VAEs tends to drive the generator to cover all modes of the data distribution, including small-density regions (i.e., mode covering), which tends to result in blurred samples. Such opposite behaviors naturally inspire combining the two objectives to remedy the asymmetry of each KLD, as discussed below.

VAEs within our formulation also include an adversarial mechanism as in GANs. The discriminator is perfect and degenerated, preventing generated samples from helping with learning. This inspires activating the adversary to allow learning from generated samples. We present one simple possible way in the next section.

GANs and VAEs have inverted latent-visible treatments of and , since we interpret sample generation in GANs as posterior inference. Such inverted treatments relate strongly to the symmetry of the sleep and wake phases in the wake-sleep algorithm, as presented shortly. In section 6, we provide a more general discussion of a symmetric view of generation and inference.
3.4.1 VAE/GAN Joint Models
Previous work has explored combining VAEs and GANs into joint models. As above, this can be naturally motivated by the opposite asymmetric behaviors of the KLDs that the two algorithms respectively optimize. Specifically, Larsen et al. (2015); Pu et al. (2017) improve the sharpness of VAE-generated images by adding the GAN objective, which forces the generative model to focus on meaningful data modes. Conversely, augmenting GANs with the VAE objective helps address the mode missing problem, as studied in (Che et al., 2017a).
3.4.2 Implicit Variational Inference
Another recent line of research extends VAEs by using an implicit model as the variational distribution (Mescheder et al., 2017; Tran et al., 2017; Huszár, 2017; Rosca et al., 2017). The idea naturally matches GANs under the unified view. In particular, in Eq.(10), the “variational distribution”
of GANs is also an implicit model. Such an implicit variational distribution does not assume a particular distribution family (e.g., Gaussian distributions) and thus provides improved flexibility. Compared to GANs, the implicit variational inference in VAEs additionally forces the variational distribution to be close to a prior distribution. This is usually achieved by minimizing an adversarial loss between the two distributions, as in AAE (section 3.2.3).
3.5 Connecting to the Wake-Sleep Algorithm (WS)
The wake-sleep algorithm (Hinton et al., 1995) was proposed for learning deep generative models such as Helmholtz machines (Dayan et al., 1995). WS consists of a wake phase and a sleep phase, which optimize the generative model and the inference model, respectively. We follow the above notations, and introduce new notations to denote general latent variables and to denote general parameters. The wake-sleep algorithm is thus written as:
(20) 
Briefly, the wake phase updates the generator parameters by fitting to the real data and the hidden code inferred by the inference model . The sleep phase, on the other hand, updates the parameters based on generated samples from the generator. Hu et al. (2017) have briefly discussed the relations between the WS, VAE, and GAN algorithms. We formalize the discussion below.
The relations between WS and VAEs are clear from previous discussions (Bornschein and Bengio, 2014; Kingma and Welling, 2013). Indeed, WS was originally proposed to minimize the variational lower bound, as in VAEs (Eq.17), with the sleep phase approximation (Hinton et al., 1995). Alternatively, VAEs can be seen as extending the wake phase. Specifically, if we let be and be , the wake phase objective recovers VAEs (Eq.17) in terms of generator optimization (i.e., optimizing ). Therefore, we can see VAEs as generalizing the wake phase by also optimizing the inference model , with additional prior regularization on the code .
On the other hand, GANs resemble the sleep phase. To make this clearer, let be and be . This results in a sleep phase objective identical to that of optimizing the discriminator in Eq.(4), which is to reconstruct given sample . We can thus view GANs as generalizing the sleep phase by also optimizing the generative model to reconstruct the reversed . InfoGAN (Eq.15) further extends the objective to also reconstruct the code .
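The alternation of the two phases can be sketched with a toy linear-Gaussian pair of models (an illustration of the algorithm's structure under assumed toy models, not the paper's notation): the wake phase refits the generator on real data paired with codes inferred by the inference model, and the sleep phase refits the inference model on "dreamed" samples from the generator:

```python
import random

def wake_sleep_step(a, b, data, n_dreams=500):
    # Toy wake-sleep step for generator p(x|z) = N(a*z, 1) with prior
    # z ~ N(0, 1), and inference model q(z|x) = N(b*x, 1).
    # Wake phase: infer codes for real data with q, refit a by least squares.
    pairs = [(x, b * x + random.gauss(0.0, 1.0)) for x in data]
    a = sum(x * z for x, z in pairs) / sum(z * z for _, z in pairs)
    # Sleep phase: dream (z, x) pairs from the generator, refit b.
    dreams = []
    for _ in range(n_dreams):
        z = random.gauss(0.0, 1.0)
        dreams.append((z, a * z + random.gauss(0.0, 1.0)))
    b = sum(z * x for z, x in dreams) / sum(x * x for _, x in dreams)
    return a, b
```

Each phase performs a simple regression on data the other model produced, mirroring the text: the wake phase trains the generative direction on inferred codes, and the sleep phase trains the inference direction on generated samples.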
3.6 The Relation Graph
We have presented a unified view that connects GANs and VAEs to the classic variational EM and wake-sleep algorithms, and subsumes a broad set of their variants and extensions. Figure 4 depicts the essential relations between the various deep generative models and algorithms under our unified perspective. The generality of the proposed formulation offers a unified statistical insight into the broad landscape of deep generative modeling.
4 Applications: Transferring Techniques
The new formulations above not only reveal the connections underlying a broad set of existing approaches, but also facilitate exchanging ideas and transferring techniques across the different families of models and algorithms. For instance, existing enhancements of VAEs can straightforwardly be applied to improve GANs, and vice versa. This section gives two examples. Here we only outline the main intuitions and the resulting models, and provide the details in the supplementary materials.
4.1 Importance Weighted GANs (IWGAN)
Burda et al. (2015) proposed the importance weighted autoencoder (IWAE), which maximizes a tighter lower bound on the marginal likelihood. Within our framework it is straightforward to develop importance weighted GANs by copying the derivations of IWAE side by side, with little adaptation. Specifically, the variational inference interpretation in Lemma 1 suggests that GANs can be viewed as maximizing a lower bound of the marginal likelihood on (putting aside the negative JSD term):
(21) 
Following (Burda et al., 2015), we can derive a tighter lower bound through a sample importance weighting estimate of the marginal likelihood. With the necessary approximations for tractability, optimizing the tighter lower bound results in the following update rule for generator learning:
(22) 
As in GANs, only (i.e., generated samples) is effective for learning the parameters . Compared to the vanilla GAN update (Eq.(10)), the only difference here is the additional importance weight , which is the normalization of over the samples. Intuitively, the algorithm assigns higher weights to samples that are more realistic and fool the discriminator better, consistent with IWAE, which places more emphasis on code states providing better reconstructions. Hjelm et al. (2017); Che et al. (2017b) developed a similar sample weighting scheme for generator training, though their generator of discrete data depends on an explicit conditional likelihood. In practice, the samples correspond to a sample minibatch in the standard GAN update. Thus the only computational cost added by the importance weighting method is evaluating the weight for each sample, which is negligible. The discriminator is trained in the same way as in standard GANs.
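A minimal sketch of the minibatch weighting follows. The exact unnormalized weight is as in Eq.(22); here the discriminator's realness ratio d/(1-d) is used as an illustrative stand-in, which preserves the intended ordering (samples that fool the discriminator better receive larger weights):

```python
def importance_weights(d_scores):
    # Normalized importance weights over a minibatch of generated
    # samples; d_scores are the discriminator's probabilities that each
    # sample is real. Samples that fool the discriminator better receive
    # larger weights, mirroring IWAE's emphasis on better codes.
    raw = [d / (1.0 - d) for d in d_scores]
    total = sum(raw)
    return [r / total for r in raw]

print(importance_weights([0.2, 0.5, 0.8]))
```

Each generated sample's gradient contribution is then scaled by its weight before the generator update; the weights sum to one, so the overall gradient magnitude stays comparable to the unweighted update.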
4.2 Adversary Activated VAEs (AAVAE)
By Lemma 2, VAEs include a degenerate discriminator which blocks out generated samples from contributing to model learning. We enable adaptive incorporation of fake samples by activating the adversarial mechanism. Specifically, we replace the perfect discriminator in VAEs with a learnable discriminator network, resulting in an adapted objective of Eq.(19):
(23) 
As detailed in the supplementary material, the discriminator is trained in the same way as in GANs.
The activated discriminator enables an effective data selection mechanism. First, AAVAE uses not only real examples but also generated samples for training. Each generated sample is weighted by the inverted discriminator output, i.e., the probability the discriminator assigns to the sample being real, so that only samples that resemble real data and successfully fool the discriminator are incorporated into training. This is consistent with the importance weighting strategy in IWGAN. Second, real examples are also weighted by the discriminator output. A real example receiving a large weight is easily recognized by the discriminator, which means the example is hard to simulate from the generator. That is, AAVAE places more emphasis on harder examples.
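A minimal sketch of this data selection mechanism (an illustrative assumption; the exact objective is in the supplement): both real and generated samples contribute to the per-batch loss in proportion to the probability the discriminator assigns to them being real.

```python
import numpy as np

def aavae_weighted_loss(per_sample_losses, disc_real_probs):
    """Discriminator-weighted average of per-sample VAE losses.

    Generated samples with high disc_real_probs (convincing fakes) and
    real examples the discriminator confidently recognizes both receive
    large weights, so training emphasizes realistic fakes and
    hard-to-simulate real examples. A sketch, not the paper's exact
    objective.
    """
    w = np.asarray(disc_real_probs, dtype=np.float64)
    losses = np.asarray(per_sample_losses, dtype=np.float64)
    return float((w * losses).sum() / w.sum())
```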
Left panel: inception scores of GAN vs. IWGAN.

             GAN        IWGAN
  MNIST      8.34±.03   8.45±.04
  SVHN       5.18±.03   5.34±.03
  CIFAR-10   7.86±.05   7.89±.04

Middle panel: accuracy of conditional generation, CGAN vs. IWCGAN.

             CGAN        IWCGAN
  MNIST      0.985±.002  0.987±.002
  SVHN       0.797±.005  0.798±.006

Right panel: classification accuracy of SVAE vs. AASVAE.

             SVAE     AASVAE
  1%         0.9412   0.9425
  10%        0.9768   0.9797
Test-set variational lower bounds with varying training data sizes:

  Train Data Size   VAE      AAVAE    CVAE     AACVAE   SVAE     AASVAE
  1%                122.89   122.15   125.44   122.88   108.22   107.61
  10%               104.49   103.05   102.63   101.63   99.44    98.81
  100%              92.53    92.42    93.16    92.75    —        —
5 Experiments
We conduct preliminary experiments to demonstrate the generality and effectiveness of the importance weighting (IW) and adversarial activating (AA) techniques. In this paper we do not aim at achieving state-of-the-art performance, but leave that for future work. In particular, we show that the IW and AA extensions improve standard GANs and VAEs, as well as several of their variants, respectively. We present the results here and provide details of the experimental setups in the supplements.
5.1 Importance Weighted GANs
We extend both vanilla GANs and class-conditional GANs (CGAN) with the IW method. The base GAN model is implemented with the DCGAN architecture and hyperparameter settings (Radford et al., 2015). Hyperparameters are not tuned for the IW extensions. We use MNIST, SVHN, and CIFAR-10 for evaluation. For vanilla GANs and their IW extension, we measure inception scores (Salimans et al., 2016) on the generated samples. For CGANs we evaluate the accuracy of conditional generation (Hu et al., 2017) with a pretrained classifier. Please see the supplements for more details.
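For reference, the inception score of Salimans et al. (2016) is exp(E_x[KL(p(y|x) || p(y))]), computed from a pretrained classifier's predictive distributions. The sketch below (plain NumPy, with hypothetical inputs) illustrates the computation.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """exp(E_x[KL(p(y|x) || p(y))]) from an (N, C) array of per-sample
    class probabilities produced by a pretrained classifier."""
    probs = np.asarray(probs, dtype=np.float64)
    marginal = probs.mean(axis=0)  # p(y) = E_x[p(y|x)]
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

Uniform predictions give a score of 1, while confident and diverse predictions over C classes approach the maximum value C.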
Table 2, left panel, shows the inception scores of GANs and IWGAN, and the middle panel gives the classification accuracy of CGAN and its IW extension. We report the averaged results ± one standard deviation over 5 runs. The IW strategy gives consistent improvements over the base models.
5.2 Adversary Activated VAEs
We apply the AA method to vanilla VAEs, class-conditional VAEs (CVAE), and semi-supervised VAEs (SVAE) (Kingma et al., 2014), respectively. We evaluate on the MNIST data. We measure the variational lower bound on the test set, with varying numbers of real training examples. For each batch of real examples, the AA-extended models generate an equal number of fake samples for training.
6 Discussions: Symmetric View of Generation and Inference
Our new interpretations of GANs and VAEs have revealed strong connections between them. One of the key ideas in our formulation is to interpret sample generation in GANs as performing posterior inference. This section provides a more general discussion of this point.
Traditional modeling approaches usually distinguish clearly between latent and visible variables and treat them in very different ways. One of the key ideas in our formulation is that it is not necessary to draw a clear boundary between the two types of variables (or between generation and inference); instead, treating them as a symmetric pair helps with modeling and understanding (Figure 5). For instance, we treat the generation space in GANs as latent, which immediately reveals the connection between GANs and adversarial domain adaptation, and provides a variational inference interpretation of generation. A second example is the classic wake-sleep algorithm, where the wake phase reconstructs visibles conditioned on latents, while the sleep phase reconstructs latents conditioned on visibles (i.e., generated samples). Hence, visible and latent variables are treated in a completely symmetric manner.
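The wake-sleep symmetry can be seen in a toy linear instantiation (an illustrative sketch with made-up dimensions, not the paper's model): the wake phase fits the generator to reconstruct visibles from inferred latents, and the sleep phase fits the recognizer to reconstruct latents from dreamed visibles.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [-0.5, 1.0]])  # ground-truth map from latents to visibles
W = 0.3 * rng.standard_normal((2, 2))    # generator: x ~ W z
V = 0.3 * rng.standard_normal((2, 2))    # recognizer: z ~ V x
lr = 0.05

def wake_recon_error(W, V, x):
    """Reconstruction error of visibles routed through the inferred latents."""
    return float(((x - (x @ V.T) @ W.T) ** 2).mean())

x_eval = rng.standard_normal((256, 2)) @ A.T
err_before = wake_recon_error(W, V, x_eval)
for _ in range(1000):
    # Wake phase: observe real data, infer latents, improve the generator.
    x = rng.standard_normal((64, 2)) @ A.T
    z = x @ V.T
    W += lr * (x - z @ W.T).T @ z / len(x)
    # Sleep phase: dream from the prior, improve the recognizer.
    z = rng.standard_normal((64, 2))
    x_gen = z @ W.T
    V += lr * (z - x_gen @ V.T).T @ x_gen / len(z)
err_after = wake_recon_error(W, V, x_eval)
```

Each phase is an ordinary regression in one direction; only the roles of "target" and "input" are swapped, which is exactly the symmetry discussed above.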
The (empirical) data distributions over visible variables are usually implicit, i.e., easy to sample from but intractable for likelihood evaluation. In contrast, the prior distributions over latent variables are usually defined as explicit distributions amenable to likelihood evaluation. Fortunately, the adversarial approach in GANs, and other techniques such as density ratio estimation (Mohamed and Lakshminarayanan, 2016) and approximate Bayesian computation (Beaumont et al., 2002), provide useful tools to bridge the gap. For instance, implicit generative models such as GANs require only simulation of the generative process without explicit likelihood evaluation, hence the prior distributions over latent variables are used in the same way as the empirical data distributions, namely, for simulating samples. For explicit likelihood-based models, the adversarial autoencoder (AAE) leverages the adversarial approach to allow implicit prior distributions over the latent space. Likewise, the implicit variational inference methods (Section 3.4.2) do not require explicit distributions as priors. In these methods, an adversarial loss replaces the intractable KL divergence loss between the variational distributions and the priors. In sum, with new tools like the adversarial loss, prior distributions over latent variables can be treated as implicit distributions, in precisely the same way as the empirical data distribution.
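The density-ratio idea can be made concrete with a tiny example (an illustrative setup with assumed numbers): a logistic-regression classifier trained to distinguish samples of p from samples of q recovers log p(x)/q(x) as its logit, turning a comparison of two implicit, sample-only distributions into a tractable estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
xp = rng.normal(1.0, 1.0, 4000)   # samples from p = N(+1, 1)
xq = rng.normal(-1.0, 1.0, 4000)  # samples from q = N(-1, 1)
x = np.concatenate([xp, xq])
y = np.concatenate([np.ones(4000), np.zeros(4000)])  # label 1 = "drawn from p"

a, b = 0.0, 0.0  # classifier D(x) = sigmoid(a*x + b)
lr = 0.1
for _ in range(2000):  # full-batch gradient descent on the logistic loss
    d = 1.0 / (1.0 + np.exp(-(a * x + b)))
    a -= lr * np.mean((d - y) * x)
    b -= lr * np.mean(d - y)

def log_ratio(t):
    """Estimated log p(t)/q(t): the optimal classifier's logit. For these
    two unit-variance Gaussians the true log-ratio is exactly 2t."""
    return a * t + b
```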
The second difference between the visible and latent variables involves the complexity of the two spaces. The visible space is usually complex, while the latent space tends (or is designed) to be simpler. The complexity difference guides us to choose appropriate tools (e.g., adversarial loss vs. maximum likelihood loss) to minimize the distance between the distributions being learned and their targets. For instance, VAEs and the adversarial autoencoder both regularize the model by minimizing the distance between the variational posterior and the prior, though VAEs choose a KL divergence loss while AAE selects an adversarial loss.
We can further extend the symmetric treatment of the visible/latent pair to the data/label pair, leading to a unified view of the generative and discriminative paradigms for unsupervised and semi-supervised learning. Specifically, conditional generative models create (data, label) pairs by generating data given labels. These pairs can be used for classifier training (Hu et al., 2017; Odena et al., 2017). In parallel, discriminative approaches such as knowledge distillation (Hinton et al., 2015; Hu et al., 2016a, b) create (data, label) pairs by generating labels given data. With the symmetric view of the data and label spaces, and neural-network-based black-box mappings between spaces, we can see that the two approaches are essentially the same.
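The parallel can be sketched concretely (a toy with made-up data and a hypothetical linear teacher): the teacher "generates labels given data" as soft targets, and a student trained on those (data, label) pairs recovers the teacher, just as a classifier can be trained on (data, label) pairs produced by a conditional generator.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
w_teacher = np.array([2.0, -1.0, 0.5])      # hypothetical teacher weights
t = 1.0 / (1.0 + np.exp(-(X @ w_teacher)))  # teacher's soft labels p(y=1|x)

w = np.zeros(3)  # student weights
lr = 0.5
for _ in range(2000):  # cross-entropy against the teacher's soft labels
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - t) / len(X)
```

Because the soft targets are realizable by the student's model class here, minimizing the soft-label cross-entropy drives the student to the teacher exactly; with hard labels the student would only recover the teacher's decision boundary.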
References
 Arjovsky and Bottou (2017) M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
 Arora et al. (2017) S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). arXiv preprint arXiv:1703.00573, 2017.
 Beal and Ghahramani (2003) M. J. Beal and Z. Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. Bayesian Statistics, 7:453–464, 2003.
 Beaumont et al. (2002) M. A. Beaumont, W. Zhang, and D. J. Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035, 2002.
 Bornschein and Bengio (2014) J. Bornschein and Y. Bengio. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751, 2014.
 Burda et al. (2015) Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
 Che et al. (2017a) T. Che, Y. Li, A. P. Jacob, Y. Bengio, and W. Li. Mode regularized generative adversarial networks. ICLR, 2017a.
 Che et al. (2017b) T. Che, Y. Li, R. Zhang, R. D. Hjelm, W. Li, Y. Song, and Y. Bengio. Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983, 2017b.
 Chen et al. (2016) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
 Chen et al. (2017) X. Chen, D. P. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Schulman, I. Sutskever, and P. Abbeel. Variational lossy autoencoder. ICLR, 2017.
 Dayan et al. (1995) P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995.
 Dziugaite et al. (2015) G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906, 2015.
 Ganin et al. (2016) Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016.
 Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 Gutmann et al. (2014) M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Statistical inference of intractable generative models via classification. arXiv preprint arXiv:1407.4981, 2014.
 Hinton et al. (2015) G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
 Hinton et al. (1995) G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158, 1995.
 Hjelm et al. (2017) R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431, 2017.
 Hu et al. (2016a) Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing. Harnessing deep neural networks with logic rules. In ACL, 2016a.
 Hu et al. (2016b) Z. Hu, Z. Yang, R. Salakhutdinov, and E. P. Xing. Deep neural networks with massive learned knowledge. In EMNLP, 2016b.
 Hu et al. (2017) Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing. Toward controlled generation of text. In ICML, 2017.
 Hu et al. (2018) Z. Hu, Z. Yang, R. Salakhutdinov, and E. P. Xing. On unifying deep generative models. In ICLR, 2018.
 Huszár F. Huszár. InfoGAN: using the variational bound on mutual information (twice). Blog post.
 Huszár (2017) F. Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
 Jordan et al. (1999) M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
 Kingma and Welling (2013) D. P. Kingma and M. Welling. Autoencoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kingma et al. (2014) D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semisupervised learning with deep generative models. In NIPS, pages 3581–3589, 2014.
 Kulkarni et al. (2015) T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, pages 2539–2547, 2015.
 Larochelle and Murray (2011) H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
 Larsen et al. (2015) A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
 Li (2016) Y. Li. GANs, mutual information, and possibly algorithm selection? Blogpost, 2016.
 Li et al. (2015) Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
 Makhzani et al. (2015) A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
 Mescheder et al. (2017) L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
 Metz et al. (2017) L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. ICLR, 2017.
 Mnih and Gregor (2014) A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
 Mohamed and Lakshminarayanan (2016) S. Mohamed and B. Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 Neal (1992) R. M. Neal. Connectionist learning of belief networks. Artificial intelligence, 56(1):71–113, 1992.
 Nowozin et al. (2016) S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, pages 271–279, 2016.
 Odena et al. (2017) A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. ICML, 2017.
 Oord et al. (2016) A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 Pu et al. (2017) Y. Pu, L. Chen, S. Dai, W. Wang, C. Li, and L. Carin. Symmetric variational autoencoder and connections to adversarial learning. arXiv preprint arXiv:1709.01846, 2017.
 Purushotham et al. (2017) S. Purushotham, W. Carvalho, T. Nilanon, and Y. Liu. Variational recurrent adversarial deep domain adaptation. In ICLR, 2017.
 Qin et al. (2017) L. Qin, Z. Zhang, H. Zhao, Z. Hu, and E. P. Xing. Adversarial connective-exploiting networks for implicit discourse relation classification. arXiv preprint arXiv:1704.00217, 2017.
 Radford et al. (2015) A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 Rosca et al. (2017) M. Rosca, B. Lakshminarayanan, D. WardeFarley, and S. Mohamed. Variational approaches for autoencoding generative adversarial networks. arXiv preprint arXiv:1706.04987, 2017.
 Salimans et al. (2016) T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, pages 2226–2234, 2016.
 Schmidhuber (1992) J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 1992.
 Sønderby et al. (2017) C. K. Sønderby, J. Caballero, L. Theis, W. Shi, and F. Huszár. Amortised MAP inference for image super-resolution. ICLR, 2017.
 Tanner and Wong (1987) M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation. JASA, 82(398):528–540, 1987.
 Tran et al. (2017) D. Tran, R. Ranganath, and D. M. Blei. Deep and hierarchical implicit models. arXiv preprint arXiv:1702.08896, 2017.
 van den Oord et al. (2016) A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
 Zhu et al. (2017) J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
Appendix A Proof of Lemma 1
Appendix B Proof of JSD Upper Bound in Lemma 1
We show that, in Lemma 1 (Eq.10), the JSD term is upper bounded by the KL term, i.e.,
(30) 
Appendix C Schematic Graphical Models and AAE/PM/CycleGAN
Adversarial Autoencoder (AAE) (Makhzani et al., 2015) can be obtained by swapping the code variable and the data variable of InfoGAN in the graphical model, as shown in Figure 6. To see this, we directly write down the objectives represented by the graphical model in the right panel, and show that they are precisely the original AAE objectives proposed in (Makhzani et al., 2015). We present detailed derivations, which also serve as an example of how one can translate a graphical model representation into mathematical formulations. Readers can proceed similarly with the schematic graphical models of GANs, InfoGAN, VAEs, and many other relevant variants, and conveniently write down the respective objectives.
We stick to the notational convention in the paper that the parameter θ is associated with the distribution over x, the parameter η with the distribution over z, and the parameter φ with the distribution over y. Besides, we use p to denote the distributions over x, and q to denote the distributions over z and y.
From the graphical model, the inference process (dashed-line arrows) involves an implicit distribution in which the encoder is encapsulated. As in the formulations of GANs (Eq.4 in the paper) and VAEs (Eq.13 in the paper), one distribution indicates the real distribution we want to approximate, while the other indicates the approximate distribution with parameters to learn. So we have
(33) 
where, since z is the hidden code, its prefixed distribution is the prior distribution over z (see Section 6 of the paper for a detailed discussion of prior distributions over hidden variables and empirical distributions over visible variables), and the space of the remaining variable is degenerated. Here the implicit distribution is such that
(34) 
where the encoder is a deterministic transformation, parameterized by the encoder parameters, that maps data to code. Note that since the data is a visible variable, its prefixed distribution is the empirical data distribution.
On the other hand, the generative process (solid-line arrows) involves the generative distributions (with the roles of the two variables swapped). As the relevant space is degenerated given the conditioning variable, that distribution is fixed without parameters to learn, and the parameters are associated only with the remaining distribution.
With the above components, we maximize the log-likelihood of the generative distributions conditioned on the variable inferred by the encoder. Adding the prior distributions, the objectives are then written as