On Unifying Deep Generative Models

06/02/2017 ∙ by Zhiting Hu, et al. ∙ Petuum, Inc. and Carnegie Mellon University

Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), two powerful frameworks for deep generative model learning, have largely been considered distinct paradigms and studied independently. This paper establishes formal connections between deep generative modeling approaches through a new formulation of GANs and VAEs. We show that GANs and VAEs essentially minimize KL divergences of respective posterior and inference distributions in opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool for analyzing a diverse set of existing model variants, and enables exchanging ideas across research lines in a principled way. For example, we transfer the importance weighting method from the VAE literature to improve GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Quantitative experiments show the generality and effectiveness of the imported extensions.


1 Introduction

Deep generative models define distributions over a set of variables organized in multiple layers. Early forms of such models date back to work on hierarchical Bayesian models (Neal, 1992) and neural network models such as Helmholtz machines (Dayan et al., 1995), originally studied in the context of unsupervised learning, latent space modeling, etc. Such models are usually trained via an EM-style framework, using either a variational inference (Jordan et al., 1999) or a data augmentation (Tanner and Wong, 1987) algorithm. Of particular relevance to this paper is the classic wake-sleep algorithm proposed by Hinton et al. (1995) for training Helmholtz machines, as it explored the idea of minimizing a pair of KL divergences in opposite directions between the posterior and its approximation.

In recent years there has been a resurgence of interest in deep generative modeling. The emerging approaches, including Variational Autoencoders (VAEs) (Kingma and Welling, 2013), Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Generative Moment Matching Networks (GMMNs) (Li et al., 2015; Dziugaite et al., 2015), auto-regressive neural networks (Larochelle and Murray, 2011; Oord et al., 2016), and so forth, have led to impressive results in a myriad of applications, such as image and text generation (Radford et al., 2015; Hu et al., 2017; van den Oord et al., 2016), disentangled representation learning (Chen et al., 2016; Kulkarni et al., 2015), and semi-supervised learning (Salimans et al., 2016; Kingma et al., 2014).

The deep generative model literature has largely viewed these approaches as distinct model training paradigms. For instance, GANs aim to achieve an equilibrium between a generator and a discriminator, while VAEs are devoted to maximizing a variational lower bound of the data log-likelihood. A rich array of theoretical analyses and model extensions have been developed independently for GANs (Arjovsky and Bottou, 2017; Arora et al., 2017; Salimans et al., 2016; Nowozin et al., 2016) and for VAEs (Burda et al., 2015; Chen et al., 2017; Hu et al., 2017). A few works attempt to combine the two objectives in a single model for improved inference and sample generation (Mescheder et al., 2017; Larsen et al., 2015; Makhzani et al., 2015; Sønderby et al., 2017). Despite the significant progress specific to each method, it remains unclear how these apparently divergent approaches connect to each other in a principled way.

In this paper, we present a new formulation of GANs and VAEs that connects them under a unified view, and links them back to the classic wake-sleep algorithm. We show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions in opposite directions, extending the sleep and wake phases, respectively, for generative model learning. More specifically, we develop a reformulation of GANs that interprets the generation of samples as performing posterior inference, leading to an objective that resembles variational inference as in VAEs. As a counterpart, VAEs in our interpretation contain a degenerated adversarial mechanism that blocks out generated samples and only allows real examples for model training.

The proposed interpretation provides a useful tool for analyzing the broad class of recent GAN- and VAE-based algorithms, enabling perhaps a more principled and unified view of the landscape of generative modeling. For instance, one can easily extend our formulation to subsume InfoGAN (Chen et al., 2016), which additionally infers hidden representations of examples; VAE/GAN joint models (Larsen et al., 2015; Che et al., 2017a), which offer improved generation and reduced mode missing; and adversarial domain adaptation (ADA) (Ganin et al., 2016; Purushotham et al., 2017), which is traditionally framed in the discriminative setting.

The close parallelism between GANs and VAEs further eases transferring techniques originally developed for one class of models to benefit the other. We provide two examples in this spirit: 1) Drawing inspiration from the importance weighted VAE (IWAE) (Burda et al., 2015), we straightforwardly derive an importance weighted GAN (IWGAN) that maximizes a tighter lower bound on the marginal likelihood than the vanilla GAN. 2) Motivated by the GAN adversarial game, we activate the originally degenerated discriminator in VAEs, resulting in a full-fledged model that adaptively leverages both real and fake examples for learning. Empirical results show that the techniques imported from the other class are generally applicable to the base model and its variants, yielding consistently better performance.
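The importance weighting idea behind point 1) can be illustrated numerically. Below is a minimal NumPy sketch (our illustration, not the paper's implementation) on a toy conjugate model where p(z) = N(0,1), p(x|z) = N(z,1), so the exact marginal is p(x) = N(0,2). With a deliberately mismatched proposal q(z|x) = N(0,1), the k-sample importance weighted bound L_k = E[log (1/k) Σ_i w_i] tightens monotonically toward log p(x) as k grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var)

def iw_bound(x, k, n_outer=20000):
    """Monte-Carlo estimate of the k-sample importance weighted bound
    L_k = E_{z_1..k ~ q}[ log (1/k) sum_i p(x, z_i) / q(z_i|x) ].
    Toy model: p(z)=N(0,1), p(x|z)=N(z,1); mismatched proposal q(z|x)=N(0,1)."""
    z = rng.normal(0.0, 1.0, size=(n_outer, k))       # z_i ~ q(z|x)
    log_w = (log_normal(x, z, 1.0)                    # log p(x|z)
             + log_normal(z, 0.0, 1.0)                # + log p(z)
             - log_normal(z, 0.0, 1.0))               # - log q(z|x)
    # Numerically stable log of the average weight per outer sample.
    m = log_w.max(axis=1, keepdims=True)
    log_avg = m.squeeze(1) + np.log(np.exp(log_w - m).mean(axis=1))
    return log_avg.mean()

x = 1.0
log_px = log_normal(x, 0.0, 2.0)       # exact marginal: p(x) = N(0, 2)
l1, l10 = iw_bound(x, 1), iw_bound(x, 10)
print(l1, l10, log_px)                 # L_1 <= L_10 <= log p(x)
```

With k = 1 the bound reduces to the standard variational lower bound; increasing k closes most of the gap to the true log-likelihood, which is the property IWGAN inherits in our derivation.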

2 Related Work

There has been a surge of research interest in deep generative models in recent years, with remarkable progress made in understanding several classes of algorithms. The wake-sleep algorithm (Hinton et al., 1995) is one of the earliest general approaches for learning deep generative models. The algorithm incorporates a separate inference model for posterior approximation, and aims at maximizing a variational lower bound of the data log-likelihood, or equivalently, minimizing the KL divergence between the approximate posterior and the true posterior. However, besides the wake phase that minimizes the KL divergence w.r.t. the generative model, a sleep phase is introduced for tractability that instead minimizes the reversed KL divergence w.r.t. the inference model. Recent approaches such as NVIL (Mnih and Gregor, 2014) and VAEs (Kingma and Welling, 2013) maximize the variational lower bound w.r.t. both the generative and inference models jointly. To reduce the variance of stochastic gradient estimates, VAEs leverage reparametrized gradients. Much work has been done on improving VAEs.

Burda et al. (2015) develop importance weighted VAEs to obtain a tighter lower bound. As VAEs do not involve a sleep-phase-like procedure, the model cannot leverage samples from the generative model for training. Hu et al. (2017) combine VAEs with an extended sleep procedure that exploits generated samples for learning.

Another emerging family of deep generative models is Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), in which a discriminator is trained to distinguish between real and generated samples while the generator is trained to confuse the discriminator. The adversarial approach can alternatively be motivated from the perspectives of approximate Bayesian computation (Gutmann et al., 2014) and density ratio estimation (Mohamed and Lakshminarayanan, 2016). The original objective of the generator is to minimize the log probability of the discriminator correctly recognizing a generated sample as fake. This is equivalent to minimizing a lower bound on the Jensen-Shannon divergence (JSD) between the generator and data distributions (Goodfellow et al., 2014; Nowozin et al., 2016; Huszár; Li, 2016). However, this objective suffers from vanishing gradients when the discriminator is strong. Thus in practice another objective is commonly used, which maximizes the log probability of the discriminator recognizing a generated sample as real (Goodfellow et al., 2014; Arjovsky and Bottou, 2017). The second objective has the same optimal solution as the original one. We base our analysis of GANs on the second objective, as it is widely used in practice yet little theoretical analysis has been done on it. Numerous extensions of GANs have been developed, including combinations with VAEs for improved generation (Larsen et al., 2015; Makhzani et al., 2015; Che et al., 2017a), and generalizations of the objective to minimize other f-divergence criteria beyond JSD (Nowozin et al., 2016; Sønderby et al., 2017). The adversarial principle has gone beyond the generation setting and been applied to other contexts such as domain adaptation (Ganin et al., 2016; Purushotham et al., 2017) and Bayesian inference (Mescheder et al., 2017; Tran et al., 2017; Huszár, 2017; Rosca et al., 2017), which uses implicit variational distributions in VAEs and leverages the adversarial approach for optimization. This paper starts from the basic models of GANs and VAEs, and develops a general formulation that reveals the underlying connections between different classes of approaches, including many of the above variants, yielding a unified view of the broad set of deep generative models.

This paper considerably extends the conference version (Hu et al., 2018) by generalizing the unified framework to a broader set of GAN- and VAE-variants, providing a more complete and consistent view of the various models and algorithms, adding more discussion of the symmetric view of generation and inference, and re-organizing the presentation to make the theory development clearer.

3 Bridging the Gap

The structures of GANs and VAEs are at first glance quite different from each other. VAEs are based on the variational inference approach, and include an explicit inference model that reverses the generative process defined by the generative model. In contrast, in the traditional view, GANs lack an inference model and instead have a discriminator that judges generated samples. In this paper, a key idea to bridge the gap is to interpret the generation of samples in GANs as performing inference, and the discrimination as a generative process that produces real/fake labels. The resulting new formulation reveals the connections of GANs to traditional variational inference. The reversed generation-inference interpretations between GANs and VAEs also expose their correspondence to the two learning phases of the classic wake-sleep algorithm.

For ease of presentation, and to establish a systematic notation for the paper, we start with a new interpretation of Adversarial Domain Adaptation (ADA) (Ganin et al., 2016), the application of the adversarial approach in the domain adaptation context. We then show that GANs are a special case of ADA, followed by a series of analyses linking GANs, VAEs, and their variants in our formulation.

3.1 Adversarial Domain Adaptation (ADA)

Given two domains, one source domain with labeled data and one target domain without labels, ADA aims to transfer prediction knowledge learned from the source domain to the target domain, by learning domain-invariant features (Ganin et al., 2016; Qin et al., 2017; Purushotham et al., 2017). That is, it learns a feature extractor whose output cannot be distinguished by a discriminator between the source and target domains.

We first review the conventional formulation of ADA. Figure 1(a) illustrates the computation flow. Let z be a data example either in the source or target domain, and y ∈ {0, 1} the domain indicator, with y = 0 indicating the target domain and y = 1 the source domain. The domain-specific data distributions can then be denoted as a conditional distribution p(z|y). The feature extractor G_θ parameterized with θ maps data z to feature x = G_θ(z). To enforce domain invariance of feature x, a discriminator D_φ is learned. Specifically, D_φ(x) = q_φ(y = 1|x) outputs the probability that x comes from the source domain. The discriminator is trained to maximize the binary classification accuracy of recognizing the domains:

max_φ L_φ = E_{x = G_θ(z), z ∼ p(z|y), y ∼ p(y)} [ log q_φ(y|x) ]    (1)

The feature extractor is then trained to fool the discriminator:

max_θ L_θ = E_{x = G_θ(z), z ∼ p(z|y), y ∼ p(y)} [ log q_φ(1 − y|x) ]    (2)

We omit the additional loss on θ that improves the accuracy of the original classification problem based on source-domain features (Ganin et al., 2016).
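To make the two objectives concrete, here is a minimal NumPy sketch (our illustration, not the paper's code) of Eqs.(1)-(2) with a fixed one-dimensional logistic discriminator. The discriminator objective scores the true domain labels; the feature-extractor objective is identical except that the labels are flipped:

```python
import numpy as np

def discriminator(x, w=2.0, b=0.0):
    """Toy logistic discriminator: q_phi(y=1|x) = sigmoid(w*x + b)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def objective_phi(x, y):
    """Eq.(1): the discriminator maximizes E[log q_phi(y|x)]."""
    q1 = discriminator(x)
    return np.mean(np.where(y == 1, np.log(q1), np.log(1 - q1)))

def objective_theta(x, y):
    """Eq.(2): the feature extractor maximizes E[log q_phi(1-y|x)],
    i.e., the same expression with the domain labels flipped."""
    q1 = discriminator(x)
    return np.mean(np.where(y == 1, np.log(1 - q1), np.log(q1)))

rng = np.random.default_rng(0)
# Hypothetical 1-D features from the source (y=1) and target (y=0) domains.
x = np.concatenate([rng.normal(+1.0, 1.0, 1000), rng.normal(-1.0, 1.0, 1000)])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
print(objective_phi(x, y), objective_theta(x, y))
```

Because the discriminator here separates the two domains, the true-label objective of Eq.(1) is larger than the flipped-label objective of Eq.(2); training θ pushes the features until the two coincide.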

With the background of the conventional formulation, we now frame our new interpretation of ADA. The data distribution p(z|y) and the deterministic transformation G_θ together form an implicit distribution over x, denoted as p_θ(x|y):

x ∼ p_θ(x|y)  ⟺  x = G_θ(z), z ∼ p(z|y)    (3)

The distribution p_θ(x|y) is intractable for likelihood evaluation but easy to sample from. Let p(y) be the prior distribution of the domain indicator y, e.g., a uniform distribution as in Eqs.(1)-(2). The discriminator defines a conditional distribution q_φ(y|x) = D_φ(x). Let q^r_φ(y|x) = q_φ(1 − y|x) be the reversed distribution over domains. The objectives of ADA can then be rewritten as (omitting the constant scale factor 1/2):

max_φ L_φ = E_{p_θ(x|y) p(y)} [ log q_φ(y|x) ]    (4)
max_θ L_θ = E_{p_θ(x|y) p(y)} [ log q^r_φ(y|x) ]    (5)

Note that θ is encapsulated in the implicit distribution p_θ(x|y) (Eq.3). The only difference between the objective for θ and that for φ is the replacement of q_φ(y|x) with q^r_φ(y|x). This is where the adversarial mechanism comes about. We defer deeper interpretation of the new objectives to the next subsection.

Figure 1: (a) Conventional view of ADA. To make direct correspondence to GANs, we use z to denote the data and x the feature. Subscripts src and tgt denote the source and target domains, respectively. (b) Conventional view of GANs. The code space of the real data domain is degenerated. (c) Schematic graphical model of both ADA and GANs (Eqs.4-5). Arrows with solid lines denote the generative process; arrows with dashed lines denote inference; hollow arrows denote deterministic transformations leading to implicit distributions; and blue arrows denote the adversarial mechanism that involves a conditional distribution and its reverse, e.g., q_φ(y|x) and q^r_φ(y|x) (denoted as q^{(r)}_φ(y|x) for short). Note that in GANs we have interpreted x as a latent variable and y as visible.

3.2 Generative Adversarial Networks (GANs)

GANs (Goodfellow et al., 2014) can be seen as a special case of ADA. Taking image generation for example, intuitively, we want to transfer the properties of real image (source domain) to generated image (target domain), making them indistinguishable to the discriminator. Figure 1(b) shows the conventional view of GANs.

Formally, x now denotes a real example or a generated sample, and z is the respective latent code. For the generated sample domain (y = 0), the implicit distribution p_θ(x|y = 0) is defined by the prior of z and the generator G_θ(z) (Eq.3), which is also denoted as p_{g_θ}(x) in the literature. For the real example domain (y = 1), the code space and generator are degenerated, and we are directly presented with a fixed distribution p(x|y = 1), which is just the real data distribution p_data(x). Note that p_data(x) is also an implicit distribution that allows efficient empirical sampling. In summary, the conditional distribution over x is constructed as

p_θ(x|y) = p_{g_θ}(x) if y = 0;  p_θ(x|y) = p_data(x) if y = 1    (6)

Here, free parameters θ are only associated with p_{g_θ}(x) of the generated sample domain, while p_data(x) is constant. As in ADA, the discriminator D_φ is simultaneously trained to infer the probability that x comes from the real data domain. That is, q_φ(y = 1|x) = D_φ(x).

With the established correspondence between GANs and ADA, we can see that the objectives of GANs are precisely expressed as Eqs.(4)-(5). To make this clearer, we recover the classical form by unfolding over y and plugging in conventional notations. For instance, the objective of the generative parameters θ in Eq.(5) is translated into

max_θ L_θ = E_{p_{g_θ}(x)} [ log q_φ(y = 1|x) ] + E_{p_data(x)} [ log q_φ(y = 0|x) ]    (7)

where p(y) is uniform and results in the constant scale factor 1/2. As noted in section 2, we focus on the unsaturated objective for the generator (Goodfellow et al., 2014), as it is commonly used in practice yet still lacks systematic analysis.
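The gradient argument for preferring the unsaturated objective can be checked directly. A small sketch (ours), with the discriminator output parameterized through a logit a as D = σ(a): the minimax generator loss log(1 − D) has derivative −σ(a) w.r.t. the logit, which vanishes as D → 0 (a strong discriminator confidently rejecting fakes), whereas the unsaturated loss log D has derivative 1 − σ(a), which stays close to 1 in exactly that regime:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def saturating_grad(logit):
    """d/d logit of log(1 - D) with D = sigmoid(logit):
    equals -sigmoid(logit), which vanishes as D -> 0."""
    return -sigmoid(logit)

def nonsaturating_grad(logit):
    """d/d logit of log D with D = sigmoid(logit):
    equals 1 - sigmoid(logit), which stays large as D -> 0."""
    return 1.0 - sigmoid(logit)

logit = -5.0  # discriminator is confident the sample is fake (D ≈ 0.007)
print(saturating_grad(logit), nonsaturating_grad(logit))
```

At this operating point the minimax gradient is nearly zero while the unsaturated gradient is near one, which is why the second objective is the practical default despite sharing the same optimum.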

3.2.1 New Interpretation of GANs

Let us take a closer look at the form of Eqs.(4)-(5). If we treat y as a visible variable and x as latent (as in ADA), Eq.(4) closely resembles the M-step in a variational EM (Beal and Ghahramani, 2003) learning procedure. That is, we are “reconstructing” the real/fake indicator y with the “generative distribution” q_φ(y|x), conditioning on x inferred from the “variational distribution” p_θ(x|y). Similarly, Eq.(5) is analogous to the E-step, with the “generative distribution” now being q^r_φ(y|x), except that the KL divergence regularization between the “variational distribution” p_θ(x|y) and some “prior” over x is missing. We take this view and further reveal the connections between GANs and variational learning in the following.

Schematic graphical model representation

Before going a step further, we first illustrate such generative and inference processes in GANs in Figure 1(c). We have introduced new visual elements to augment the conventional graphical model representation, for example, hollow arrows for expressing implicit distributions, and blue arrows for adversarial objectives. We find that such a graphical representation can precisely express various deep generative models in our new perspective, and makes the connections between them clearer. We will see more such graphical representations later.

We continue to analyze the objective for θ (Eq.5). Let θ_0 be the state of the parameters from the last iteration. At the current iteration, a natural idea is to treat the marginal distribution over x at θ_0 as the “prior”:

p_{θ_0}(x) = E_{p(y)} [ p_{θ_0}(x|y) ]    (8)

As above, q^r_φ(y|x) in Eq.(5) can be interpreted as the “likelihood” function in variational EM. We can then construct the “posterior”:

q^r(x|y) ∝ q^r_φ(y|x) p_{θ_0}(x)    (9)

We have the following result in terms of the gradient w.r.t. θ:

Lemma 1

Let p(y) be the uniform distribution. The update of θ at θ = θ_0 has

∇_θ ( −E_{p_θ(x|y) p(y)} [ log q^r_φ(y|x) ] ) |_{θ = θ_0}
    = ∇_θ ( E_{p(y)} [ KL( p_θ(x|y) ‖ q^r(x|y) ) ] − JSD( p_θ(x|y = 0) ‖ p_θ(x|y = 1) ) ) |_{θ = θ_0}    (10)

where KL(·‖·) and JSD(·‖·) are the Kullback-Leibler and Jensen-Shannon divergences, respectively.

Proofs are provided in the supplements (section A). The result offers new insights into the GAN generative model learning:

Figure 2: One optimization step of the parameter θ through Eq.(10) at point θ_0. The posterior q^r(x|y) is a mixture of p_{θ_0}(x|y = 0) (blue) and p_data(x) (red in the left panel) with the mixing weights induced from the discriminator. Minimizing the KLD drives p_θ(x|y = 0) towards the respective mixture (green), resulting in a new state where the updated p_θ(x|y = 0) (red in the right panel) gets closer to p_data(x). Due to the asymmetry of the KLD, the updated distribution misses the smaller mode of the mixture, which is a mode of p_data(x).


  • Resemblance to variational inference. As discussed above, we see x as the latent variable and p_θ(x|y) as the variational distribution that approximates the true “posterior” q^r(x|y). Therefore, optimizing the generator is equivalent to minimizing the KL divergence between the inference distribution and the posterior (a standard form of variational inference), minus a JSD between the distributions p_{g_θ}(x) and p_data(x). The interpretation also helps to reveal the connections between GANs and VAEs, as discussed later.

  • The JSD term. The negative JSD term is due to the introduction of the prior p_{θ_0}(x). This term pushes p_{g_θ}(x) away from p_data(x), acting oppositely to the KLD term. However, we show in the supplementary (section B) that the JSD term is upper bounded by the KL term. Thus, if the KL term is sufficiently minimized, the magnitude of the JSD also decreases. Note that this does not mean the JSD is insignificant or negligible; rather, any conclusions drawn from Eq.(10) should take the JSD term into account.

  • Training dynamics. The component with y = 1 in the KL divergence term is

    KL( p_θ(x|y = 1) ‖ q^r(x|y = 1) ) = KL( p_data(x) ‖ q^r(x|y = 1) ),    (11)

    which is a constant w.r.t. θ. The active component associated with the parameter θ to optimize is the one with y = 0, i.e.,

    KL( p_θ(x|y = 0) ‖ q^r(x|y = 0) ) = KL( p_{g_θ}(x) ‖ q^r(x|y = 0) ).    (12)

    On the other hand, by definition, p_{θ_0}(x) is a mixture of p_{θ_0}(x|y = 0) and p_{θ_0}(x|y = 1), and thus the posterior q^r(x|y = 0) is also a mixture of p_{θ_0}(x|y = 0) and p_data(x), with mixing weights induced from the discriminator. Thus, minimizing the KL divergence in effect drives p_{g_θ}(x) to a mixture of p_{θ_0}(x|y = 0) and p_data(x). Since p_data(x) is fixed, p_{g_θ}(x) gets closer to p_data(x). Figure 2 illustrates the training dynamics schematically.

  • Explanation of the missing mode issue. JSD is a symmetric divergence measure while KL is asymmetric. The missing mode behavior widely observed in GANs (Metz et al., 2017; Che et al., 2017a) is thus explained by the asymmetry of the KL, which tends to concentrate the generator distribution on large modes of the target and ignore smaller ones. See Figure 2 for an illustration. Concentration on a few large modes also helps GANs generate sharp and realistic samples.

  • Optimality assumption of the discriminator. Previous theoretical work has typically assumed a (near) optimal discriminator (Goodfellow et al., 2014; Arjovsky and Bottou, 2017):

    q^*_φ(y = 1|x) = p_data(x) / ( p_data(x) + p_{g_{θ_0}}(x) ),    (13)

    which can be unwarranted in practice due to the limited expressiveness of the discriminator (Arora et al., 2017). In contrast, our result does not rely on optimality assumptions. Indeed, our result is a generalization of the previous theorem in (Arjovsky and Bottou, 2017), which is recovered by plugging Eq.(13) into Eq.(10):

    ∇_θ ( −E_{p_θ(x|y) p(y)} [ log q^{r*}_φ(y|x) ] ) |_{θ = θ_0} = ∇_θ ( KL( p_{g_θ}(x) ‖ p_data(x) ) − 2 JSD( p_{g_θ}(x) ‖ p_data(x) ) ) |_{θ = θ_0},    (14)

    which gives simplified explanations of the training dynamics and the missing mode issue, but only when the discriminator meets certain optimality criteria. Our generalized result enables understanding of broader situations. For instance, when the discriminator gives uniform guesses, or when p_{g_θ}(x) = p_data(x) is indistinguishable by the discriminator, the gradients of the KL and JSD terms in Eq.(10) cancel out, which stops the generator learning.
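Both the optimal discriminator of Eq.(13) and the mode-missing argument can be checked numerically on a discrete toy support. The sketch below (our illustration, with hypothetical two-point distributions) computes the optimal discriminator and shows the KL asymmetry: dropping a small data mode is cheap under KL(p_g ‖ p_data), the direction the generator effectively minimizes, but expensive under the reverse direction:

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence KL(p || q)."""
    return float(np.sum(p * np.log(p / q)))

def optimal_discriminator(p_data, p_g):
    """Eq.(13): q*(y=1|x) = p_data(x) / (p_data(x) + p_g(x))."""
    return p_data / (p_data + p_g)

# Two-point support: a large and a small data mode (hypothetical numbers).
p_data = np.array([0.95, 0.05])
p_drop = np.array([1.0 - 1e-6, 1e-6])   # generator that drops the small mode

# When generator and data coincide, the optimal discriminator is uniform,
# and per Eq.(10) the generator gradient stalls.
d_star = optimal_discriminator(p_data, p_data)
print(d_star)

# Mode dropping: cheap in the GAN direction, costly in the reverse one.
print(kl(p_drop, p_data), kl(p_data, p_drop))
```

The first KL is small (≈0.05 nats here) while the reverse exceeds it by roughly an order of magnitude, matching the intuition that the asymmetric KL tolerates missed modes while rewarding concentration on large ones.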

Figure 3: (a) Schematic graphical model of InfoGAN (Eq.15), which, compared to GANs, adds a conditional generative process of code z with distribution q_η(z|x, y). (b) VAEs (Eq.19), which are obtained by swapping the generative and inference processes of InfoGAN, i.e., in terms of the schematic graphical model, swapping the solid-line arrows (generative process) and dashed-line arrows (inference) of (a). (c) Adversarial Autoencoder (AAE), which is obtained by swapping the data x and code z in InfoGAN.

3.2.2 InfoGAN

Chen et al. (2016) developed InfoGAN, which additionally recovers the code z given sample x. This can straightforwardly be formulated in our framework by introducing an extra conditional q_η(z|x, y) parameterized by η. As discussed above, GANs assume a degenerated code space for real examples, thus q_η(z|x, y = 1) is defined to be fixed without free parameters to learn, and η is only associated with the y = 0 case. Further, as in Figure 1(c), z is treated as a visible variable. Thus q_η(z|x, y) augments the generative process, leading to a full likelihood q_η(z|x, y) q_φ(y|x). InfoGAN is then recovered by extending Eqs.(4)-(5) as follows:

max_φ L_φ = E_{p_θ(x, z|y) p(y)} [ log q_φ(y|x) ]
max_{θ,η} L_{θ,η} = E_{p_θ(x, z|y) p(y)} [ log q^r_φ(y|x) q_η(z|x, y) ]    (15)

Again, note that θ is encapsulated in the implicit distribution p_θ(x, z|y). The model is expressed as the schematic graphical model in Figure 3(a).

The resulting z-augmented posterior is q^r(x, z|y) ∝ q^r_φ(y|x) q_η(z|x, y) p_{θ_0}(x). The result in the form of Lemma 1 still holds:

∇_θ ( −E_{p_θ(x, z|y) p(y)} [ log q^r_φ(y|x) q_η(z|x, y) ] ) |_{θ = θ_0}
    = ∇_θ ( E_{p(y)} [ KL( p_θ(x, z|y) ‖ q^r(x, z|y) ) ] − JSD( p_θ(x, z|y = 0) ‖ p_θ(x, z|y = 1) ) ) |_{θ = θ_0}    (16)

3.2.3 Adversarial Autoencoder (AAE) and CycleGAN

The new formulation is also generally applicable to other GAN-related variants, such as the Adversarial Autoencoder (AAE) (Makhzani et al., 2015), Predictability Minimization (Schmidhuber, 1992), and CycleGAN (Zhu et al., 2017).

Specifically, AAE is recovered by simply swapping the code variable z and the data variable x of InfoGAN in the graphical model, as shown in Figure 3(c). In other words, AAE is precisely an InfoGAN that treats the code z as a latent variable to be adversarially regularized, and the data/generation variable x as visible. To make this clearer, in the supplementary we demonstrate how the schematic graphical model of Figure 3(c) can be directly translated into the mathematical formulation of AAE (Makhzani et al., 2015). Predictability Minimization (PM) (Schmidhuber, 1992) resembles AAE and is also discussed in the supplementary materials.

InfoGAN and AAE thus form a symmetric pair that exchanges the data and code spaces. Further, instead of considering x and z as data and code spaces respectively, if we use both x and z to model data spaces of two modalities, and combine the objectives of InfoGAN and AAE into a joint model, we recover the CycleGAN model (Zhu et al., 2017), which performs transformations between the two modalities. In particular, the objectives of AAE (Eq.35 in the supplementary) are precisely the objectives that train the CycleGAN model to translate x into z, and the objectives of InfoGAN (Eq.15) are used to train the reversed translation from z to x.

3.3 Variational Autoencoders (VAEs)

We next explore the second family of deep generative models, namely VAEs (Kingma and Welling, 2013). The resemblance of GAN learning to variational inference (Lemma 1) suggests strong relations between VAEs and GANs. We build a correspondence between them, and show that VAEs involve minimizing a KLD in the opposite direction to that of GANs, with a degenerated adversarial discriminator.

The conventional definition of VAEs is written as:

max_{θ,η} L^{vae}_{θ,η} = E_{p_data(x)} [ E_{q_η(z|x)} [ log p_θ(x|z) ] − KL( q_η(z|x) ‖ p(z) ) ]    (17)

where p_θ(x|z) is the generative model, q_η(z|x) the inference model, and p(z) the prior over z. The parameters to learn are intentionally denoted with the notations of the corresponding modules in GANs. VAEs appear to differ from GANs greatly, as they use only real examples and lack an adversarial mechanism.
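Eq.(17) can be made concrete on a toy conjugate model where the exact marginal likelihood is known. The sketch below (ours, with a deliberately mismatched inference model alongside the exact posterior) estimates the variational lower bound by Monte Carlo and confirms that it lower-bounds log p(x), with equality exactly when the inference model matches the true posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var)

def elbo(x, q_mean, q_var, n=50000):
    """Monte-Carlo ELBO for p(z)=N(0,1), p(x|z)=N(z,1), q(z|x)=N(q_mean, q_var):
    E_q[ log p(x|z) + log p(z) - log q(z|x) ]."""
    z = rng.normal(q_mean, np.sqrt(q_var), size=n)
    return np.mean(log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)
                   - log_normal(z, q_mean, q_var))

x = 1.0
log_px = log_normal(x, 0.0, 2.0)           # exact marginal: p(x) = N(0, 2)
tight = elbo(x, q_mean=x / 2, q_var=0.5)   # exact posterior N(x/2, 1/2)
loose = elbo(x, q_mean=0.0, q_var=1.0)     # mismatched inference model
print(loose, tight, log_px)
```

With the exact posterior the integrand is constant and the bound is tight; with the mismatched q the gap equals KL(q(z|x) ‖ p(z|x)), which is exactly the quantity VAE training drives down.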

To connect VAEs to GANs, we assume a perfect discriminator q_*(y|x) that always predicts y = 1 with probability 1 given real examples, and y = 0 given generated samples. Again, for notational simplicity, let q^r_*(y|x) = q_*(1 − y|x) be the reversed distribution. Precisely as in GANs, in our formulation the code space for real examples is degenerated, and we are presented with the real data distribution p_data(x) directly over x. The composite conditional distribution of x is thus constructed as:

p_θ(x|z, y) = p_θ(x|z) if y = 0;  p_θ(x|z, y) = p_data(x) if y = 1    (18)

We can see that the distribution differs slightly from its GAN counterpart in Eq.(6), as it additionally accounts for the uncertainty of generating x given z. In analogue to InfoGAN, we have the conditional over z, namely q_η(z|x, y), in which q_η(z|x, y = 1) is constant due to the degenerated code space for y = 1, and q_η(z|x, y = 0) is associated with the free parameter η. Finally, we extend the prior over z to define p(z|y) such that p(z|y = 0) = p(z) and p(z|y = 1) is again irrelevant.

We are now ready to reformulate the VAE objective in Eq.(17):

Lemma 2

Let p_θ(z, y|x) ∝ p_θ(x|z, y) p(z|y) p(y). The VAE objective in Eq.(17) is equivalent to (omitting the constant scale factor 1/2):

max_{θ,η} L^{vae}_{θ,η} = E_{p_{θ_0}(x)} [ E_{q_η(z|x, y) q^r_*(y|x)} [ log p_θ(x|z, y) ] − KL( q_η(z|x, y) q^r_*(y|x) ‖ p(z|y) p(y) ) ]    (19)

We provide the proof in the supplementary materials (section D).

Figure 3(b) shows the schematic graphical model of the new interpretation of VAEs, where the only difference from InfoGAN is that the solid-line arrows (generative process) and dashed-line arrows (inference) are swapped. That is, InfoGAN and VAEs form a symmetric pair in the sense of exchanging the generative and inference processes.

Given a fake sample x from p_{θ_0}(x|y = 0), the reversed perfect discriminator always predicts y = 1 with probability 1, and the loss on fake samples therefore degenerates to a constant (irrelevant to the free parameters). This blocks fake samples from contributing to model learning.

Components      | ADA                | GANs / InfoGAN           | VAEs
x               | features           | data/generations         | data/generations
y               | domain indicator   | real/fake indicator      | real/fake indicator (degenerated)
z               | data examples      | code vector              | code vector
p_θ(x|y)        | [I] feature distr. | [I] generator, Eq.6      | [G] generator, Eq.18
q_φ(y|x)        | [G] discriminator  | [G] discriminator        | [I] discriminator (degenerated)
q_η(z|x,y)      | —                  | [G] infer net (InfoGAN)  | [I] infer net
KLD to minimize | same as GANs       | KL(p_θ(x|y) ‖ q^r(x|y))  | KL(q_η(z|x,y)q^r(y|x) ‖ p_θ(z,y|x))

Table 1: Correspondence between different approaches in the proposed formulation. The label “[G]” in bold indicates that the respective component is involved in the generative process within our interpretation, while “[I]” indicates the inference process. This is also expressed in the schematic graphical models in Figure 1.

3.4 Connecting the Two Families of GANs and VAEs

Table 1 summarizes the correspondence between the various methods. Lemma 1 and Lemma 2 have revealed that both GANs and VAEs involve minimizing a KLD between respective inference and posterior distributions. Specifically, GANs involve minimizing KL(p_θ(x|y) ‖ q^r(x|y)) while VAEs minimize KL(q_η(z|x, y) q^r_*(y|x) ‖ p_θ(z, y|x)). This exposes new connections between the two model classes in multiple aspects, each of which in turn leads to a set of existing research or can inspire new research directions:

  1. As discussed in Lemma 1, GANs now also relate to the variational inference algorithm as VAEs do, revealing a unified statistical view of the two classes. Moreover, the new perspective naturally enables many extensions of VAEs and the vanilla variational inference algorithm to be transferred to GANs. We show an example in the next section.

  2. The generator parameters θ are placed in opposite directions in the two KLDs. The asymmetry of the KLD leads to distinct model behaviors. For instance, as discussed in Lemma 1, GANs are able to generate sharp images but tend to collapse to one or a few modes of the data (i.e., mode missing). In contrast, the KLD of VAEs tends to drive the generator to cover all modes of the data distribution as well as small-density regions (i.e., mode covering), which tends to result in blurred samples. Such opposite behaviors naturally inspire combining the two objectives to remedy the asymmetry of each KLD, as discussed below.

  3. VAEs within our formulation also include an adversarial mechanism as in GANs. The discriminator is perfect and degenerated, preventing generated samples from helping with learning. This inspires activating the adversary to allow learning from samples. We present a simple possible way in the next section.

  4. GANs and VAEs have inverted latent-visible treatments of (x, z) and y, since we interpret sample generation in GANs as posterior inference. Such inverted treatments strongly relate to the symmetry of the sleep and wake phases in the wake-sleep algorithm, as presented shortly. In section 6, we provide a more general discussion of a symmetric view of generation and inference.

3.4.1 VAE/GAN Joint Models

Previous work has explored combinations of VAEs and GANs into joint models. As above, this can be naturally motivated by the opposite asymmetric behaviors of the KLDs that the two algorithms optimize. Specifically, Larsen et al. (2015); Pu et al. (2017) improve the sharpness of VAE-generated images by adding the GAN objective, which forces the generative model to focus on meaningful data modes. Conversely, augmenting GANs with the VAE objective helps address the mode missing problem, as studied in (Che et al., 2017a).

3.4.2 Implicit Variational Inference

Another recent line of research extends VAEs by using an implicit model as the variational distribution (Mescheder et al., 2017; Tran et al., 2017; Huszár, 2017; Rosca et al., 2017). The idea naturally matches GANs under the unified view. In particular, in Eq.(10), the “variational distribution” p_θ(x|y) of GANs is also an implicit model. Such an implicit variational distribution does not assume a particular distribution family (e.g., Gaussian distributions) and thus provides improved flexibility. Compared to GANs, implicit variational inference in VAEs additionally forces the variational distribution to be close to a prior distribution. This is usually achieved by minimizing an adversarial loss between the two distributions, as in AAE (section 3.2.3).

3.5 Connecting to Wake Sleep Algorithm (WS)

The wake-sleep algorithm (WS) (Hinton et al., 1995) was proposed for learning deep generative models such as Helmholtz machines (Dayan et al., 1995). WS consists of a wake phase and a sleep phase, which optimize the generative model and the inference model, respectively. We follow the above notations, and introduce new notations h to denote general latent variables and λ to denote general parameters. The wake-sleep algorithm is written as:

Wake:  max_θ E_{q_λ(h|x) p_data(x)} [ log p_θ(x, h) ]
Sleep: max_λ E_{p_θ(x, h)} [ log q_λ(h|x) ]    (20)

Briefly, the wake phase updates the generator parameters θ by fitting p_θ to the real data x and the hidden code h inferred by the inference model q_λ(h|x). On the other hand, the sleep phase updates the parameters λ based on generated samples (“dreams”) from the generator. Hu et al. (2017) have briefly discussed the relations between the WS, VAE, and GAN algorithms. We formalize the discussion below.

The relations between WS and VAEs are clear from previous discussions (Bornschein and Bengio, 2014; Kingma and Welling, 2013). Indeed, WS was originally proposed to minimize the variational lower bound as in VAEs (Eq.17) with the sleep phase approximation (Hinton et al., 1995). Alternatively, VAEs can be seen as extending the wake phase. Specifically, if we let h be z and λ be η, the wake phase objective recovers VAEs (Eq.17) in terms of generator optimization (i.e., optimizing θ). We can therefore see VAEs as generalizing the wake phase by also optimizing the inference model q_η, with additional prior regularization on the code z.
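For concreteness, the standard single-sample variational lower bound (ELBO) — written here in its textbook form rather than reproducing the paper's Eq.17 — makes this decomposition explicit:

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\eta;x)
  = \underbrace{\mathbb{E}_{q_\eta(z|x)}\big[\log p_\theta(x|z)\big]}_{\text{wake-phase reconstruction}}
  \;-\; \underbrace{\mathrm{KL}\big(q_\eta(z|x)\,\big\|\,p(z)\big)}_{\text{prior regularization}}
```

The first term is exactly the wake-phase objective with h = z and λ = η; the KL term is the prior regularization on the code that the wake phase lacks.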

On the other hand, GANs resemble the sleep phase. To make this clearer, let h be y and λ be φ. This results in a sleep phase objective identical to that of optimizing the discriminator in Eq.(4), namely reconstructing y given a sample x. We can thus view GANs as generalizing the sleep phase by also optimizing the generative model to reconstruct the reversed y. InfoGAN (Eq.15) further extends the objective to also reconstruct the code z.

3.6 The Relation Graph

We have presented the unified view that connects GANs and VAEs to the classic variational EM and wake-sleep algorithms, and subsumes a broad set of their variants and extensions. Figure 4 depicts the essential relations between the various deep generative models and algorithms under our unified perspective. The generality of the proposed formulation offers a unified statistical insight into the broad landscape of deep generative modeling.

One of the key ideas in our formulation is to treat the sample generation in GANs as performing posterior inference. Treating inference and generation as a symmetric pair leads to the triangular relation in blue in Figure 4. We provide more discussion of the symmetric treatment in section 6.

Figure 4: Relations between the deep generative models and algorithms discussed in the paper. The triangular relation in blue highlights the backbone of the unified framework. IW-GAN and AA-VAE are two extensions inspired by the connections between GANs and VAEs (section 4).

4 Applications: Transferring Techniques

The new formulations above not only reveal the connections underlying a broad set of existing approaches, but also facilitate exchanging ideas and transferring techniques across the different families of models and algorithms. For instance, existing enhancements to VAEs can straightforwardly be applied to improve GANs, and vice versa. This section gives two examples. Here we only outline the main intuitions and resulting models, and provide the details in the supplementary materials.

4.1 Importance Weighted GANs (IWGAN)

Burda et al. (2015) proposed the importance weighted autoencoder (IWAE), which maximizes a tighter lower bound on the marginal likelihood. Within our framework it is straightforward to develop importance weighted GANs by copying the derivations of IWAE side by side, with only minor adaptations. Specifically, the variational inference interpretation in Lemma 1 suggests GANs can be viewed as maximizing a lower bound of the marginal likelihood on y (putting aside the negative JSD term):

(21)

Following (Burda et al., 2015), we can derive a tighter lower bound through a k-sample importance weighting estimate of the marginal likelihood. With necessary approximations for tractability, optimizing this tighter lower bound results in the following update rule for generator learning:

(22)

As in vanilla GANs, only the generated samples are effective for learning the generator parameters θ. Compared to the vanilla GAN update (Eq.(10)), the only difference here is the additional importance weight, which is the discriminator score normalized over the k samples. Intuitively, the algorithm assigns higher weights to samples that are more realistic and fool the discriminator better, consistent with IWAE, which places more emphasis on code states that provide better reconstructions. Hjelm et al. (2017); Che et al. (2017b) developed a similar sample weighting scheme for generator training, though their generator of discrete data depends on an explicit conditional likelihood. In practice, the k samples correspond to a sample minibatch in the standard GAN update. Thus the only computational cost added by the importance weighting method is evaluating the weight for each sample, which is negligible. The discriminator is trained in the same way as in standard GANs.
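As a minimal sketch of this weighting step (our own illustrative code; the function names and the exact normalization are assumptions, not the paper's implementation), the importance weights can be obtained by normalizing discriminator scores over a minibatch of generated samples and then combining the per-sample generator gradients:

```python
import numpy as np

def importance_weights(d_scores):
    """Normalize discriminator scores D(x_i) over a minibatch of k
    generated samples, so more realistic samples get larger weights."""
    d_scores = np.asarray(d_scores, dtype=float)
    return d_scores / d_scores.sum()

def weighted_generator_grad(d_scores, per_sample_grads):
    """Combine per-sample generator gradients with importance weights,
    mimicking the IWGAN-style update: grad = sum_i w_i * grad_i."""
    w = importance_weights(d_scores)
    return sum(wi * g for wi, g in zip(w, per_sample_grads))

# Toy usage: three generated samples; the second fools the discriminator best
# and therefore receives the largest weight.
scores = [0.2, 0.7, 0.1]
w = importance_weights(scores)
```

Since the weights are computed from discriminator outputs that the standard update already evaluates, the extra cost is a single normalization per minibatch.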

4.2 Adversary Activated VAEs (AAVAE)

By Lemma 2, VAEs include a degenerate discriminator that blocks generated samples from contributing to model learning. We enable adaptive incorporation of fake samples by activating the adversarial mechanism. Specifically, we replace the perfect discriminator in VAEs with a discriminator network parameterized with φ, resulting in an adapted objective of Eq.(19):

(23)

As detailed in the supplementary material, the discriminator is trained in the same way as in GANs.

The activated discriminator enables an effective data selection mechanism. First, AAVAE uses not only real examples but also generated samples for training. Each sample is weighted by the inverted discriminator output, so that only those samples that resemble real data and successfully fool the discriminator are incorporated for training. This is consistent with the importance weighting strategy in IWGAN. Second, real examples are also weighted by the discriminator. An example receiving a large weight is easily recognized by the discriminator, which means the example is hard for the generator to simulate. That is, AAVAE places more emphasis on harder examples.
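As an illustrative sketch of this weighting scheme (our own code; the function name and the use of the discriminator's "real" probability as the weight are assumptions made for clarity), the per-sample weights can be applied to the VAE reconstruction losses of both real and generated samples:

```python
import numpy as np

def aavae_weighted_loss(recon_real, d_real, recon_fake, d_fake):
    """Adversary-activated weighting of per-sample VAE reconstruction losses.

    recon_real / recon_fake: per-sample reconstruction losses for real
    examples and generated samples.
    d_real / d_fake: discriminator probabilities that each sample is real.
    Every sample, real or generated, is weighted by the probability the
    discriminator assigns to the 'real' label, so fake samples that fool
    the discriminator (and real samples it recognizes confidently)
    contribute more to training.
    """
    losses = np.concatenate([np.asarray(recon_real, float),
                             np.asarray(recon_fake, float)])
    weights = np.concatenate([np.asarray(d_real, float),
                              np.asarray(d_fake, float)])
    return float((weights * losses).sum() / weights.sum())
```

Under this scheme a generated sample with a high "real" probability influences the objective far more than one the discriminator confidently rejects.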

          GAN        IWGAN
MNIST     8.34±.03   8.45±.04
SVHN      5.18±.03   5.34±.03
CIFAR10   7.86±.05   7.89±.04

          CGAN        IWCGAN
MNIST     0.985±.002  0.987±.002
SVHN      0.797±.005  0.798±.006

          SVAE        AASVAE
1%        0.9412      0.9425
10%       0.9768      0.9797

Table 2: Left: Inception scores of GANs and the importance weighted extension. Middle: Classification accuracy of the generations by conditional GANs and the IW extension. Right: Classification accuracy of semi-supervised VAEs and the AA extension on the MNIST test set, with 1% and 10% real labeled training data.
Train Data Size   VAE       AA-VAE    CVAE      AA-CVAE   SVAE      AA-SVAE
1%                -122.89   -122.15   -125.44   -122.88   -108.22   -107.61
10%               -104.49   -103.05   -102.63   -101.63    -99.44    -98.81
100%               -92.53    -92.42    -93.16    -92.75        --        --
Table 3: Variational lower bounds on the MNIST test set, trained on 1%, 10%, and 100% of the training data, respectively. In the semi-supervised VAE (SVAE) setting, the remaining training data are used for unsupervised training.

5 Experiments

We conduct preliminary experiments to demonstrate the generality and effectiveness of the importance weighting (IW) and adversarial activating (AA) techniques. In this paper we do not aim at achieving state-of-the-art performance, but leave that for future work. In particular, we show that the IW and AA extensions improve standard GANs and VAEs, as well as several of their variants. We present the results here, and provide details of the experimental setups in the supplements.

5.1 Importance Weighted GANs

We extend both vanilla GANs and class-conditional GANs (CGANs) with the IW method. The base GAN model is implemented with the DCGAN architecture and hyperparameter settings (Radford et al., 2015). Hyperparameters are not tuned for the IW extensions. We use MNIST, SVHN, and CIFAR10 for evaluation. For vanilla GANs and the IW extension, we measure inception scores (Salimans et al., 2016) on the generated samples. For CGANs we evaluate the accuracy of conditional generation (Hu et al., 2017) with a pre-trained classifier. Please see the supplements for more details.

Table 2, left panel, shows the inception scores of GANs and IW-GAN, and the middle panel gives the classification accuracy of CGAN and its IW extension. We report the averaged results ± one standard deviation over 5 runs. The IW strategy gives consistent improvements over the base models.

5.2 Adversary Activated VAEs

We apply the AA method to vanilla VAEs, class-conditional VAEs (CVAE), and semi-supervised VAEs (SVAE) (Kingma et al., 2014), respectively. We evaluate on the MNIST data, measuring the variational lower bound on the test set with varying numbers of real training examples. For each batch of real examples, the AA-extended models generate an equal number of fake samples for training.

Table 3 shows the results of activating the adversarial mechanism in VAEs. Generally, larger improvements are obtained with smaller sets of real training data. Table 2, right panel, shows the improved accuracy of AA-SVAE over the base semi-supervised VAE.

6 Discussions: Symmetric View of Generation and Inference

Figure 5: Symmetric view of generation and inference. There is little difference between the two processes in terms of formulation: with implicit distribution modeling, both processes need only perform simulation through black-box neural transformations between the latent and visible spaces.

Our new interpretations of GANs and VAEs have revealed strong connections between them. One of the key ideas in our formulation is to interpret sample generation in GANs as performing posterior inference. This section provides a more general discussion of this point.

Traditional modeling approaches usually distinguish clearly between latent and visible variables and treat them in very different ways. One of the key ideas in our formulation is that it is not necessary to draw a clear boundary between the two types of variables (or between generation and inference); instead, treating them as a symmetric pair helps with modeling and understanding (Figure 5). For instance, we treat the generation space in GANs as latent, which immediately reveals the connection between GANs and adversarial domain adaptation, and provides a variational inference interpretation of generation. A second example is the classic wake-sleep algorithm, where the wake phase reconstructs visibles conditioned on latents, while the sleep phase reconstructs latents conditioned on visibles (i.e., generated samples). Hence, visible and latent variables are treated in a completely symmetric manner.

The (empirical) data distributions over visible variables are usually implicit, i.e., easy to sample from but intractable for evaluating likelihood. In contrast, the prior distributions over latent variables are usually defined as explicit distributions, amenable to likelihood evaluation. Fortunately, the adversarial approach in GANs, and other techniques such as density ratio estimation (Mohamed and Lakshminarayanan, 2016) and approximate Bayesian computation (Beaumont et al., 2002), provide useful tools to bridge the gap. For instance, implicit generative models such as GANs require only simulation of the generative process without explicit likelihood evaluation, hence the prior distributions over latent variables are used in the same way as the empirical data distributions, namely, by simulating samples. For explicit likelihood-based models, the adversarial autoencoder (AAE) leverages the adversarial approach to allow implicit prior distributions over the latent space. Likewise, the implicit variational inference methods (section 3.4.2) do not require explicit distributions as priors. In these methods, an adversarial loss replaces the intractable KL divergence loss between the variational distributions and the priors. In sum, with new tools like the adversarial loss, prior distributions over latent variables can be (used as) implicit distributions, precisely as the empirical data distribution is.
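The density-ratio idea mentioned above can be illustrated numerically (a self-contained sketch of the general technique, not code from any cited work): the Bayes-optimal discriminator between samples from q and p satisfies D*(x) = q(x) / (q(x) + p(x)), so its logit recovers the log density ratio log q(x) − log p(x), which in turn yields a Monte Carlo estimate of the otherwise intractable KL(q || p):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

rng = np.random.default_rng(0)
mu_q, mu_p, sigma = 1.0, 0.0, 1.0

# Samples from the "variational" distribution q = N(1, 1).
xs = rng.normal(mu_q, sigma, 100_000)

# The Bayes-optimal discriminator's logit equals the log density ratio,
# which here we evaluate analytically instead of training a classifier.
logit = gauss_logpdf(xs, mu_q, sigma) - gauss_logpdf(xs, mu_p, sigma)

kl_est = logit.mean()                            # E_q[log q(x) - log p(x)]
kl_true = (mu_q - mu_p) ** 2 / (2 * sigma**2)    # closed form for Gaussians
```

In practice the logit comes from a trained discriminator rather than the analytic densities, which is exactly how an adversarial loss can stand in for a KL term against an implicit prior.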

The second difference between the visible and latent variables involves the complexity of the two spaces. The visible space is usually complex, while the latent space tends (or is designed) to be simpler. This complexity difference guides us in choosing appropriate tools (e.g., adversarial loss vs. maximum likelihood loss) to minimize the distance between the distributions to learn and their targets. For instance, VAEs and the adversarial autoencoder both regularize the model by minimizing the distance between the variational posterior and the prior, though VAEs choose a KL divergence loss while AAE selects an adversarial loss.

We can further extend the symmetric treatment of the visible/latent (x, z) pair to the data/label (x, y) pair, leading to a unified view of the generative and discriminative paradigms for unsupervised and semi-supervised learning. Specifically, conditional generative models create (data, label) pairs by generating data x given a label y. These pairs can be used for classifier training (Hu et al., 2017; Odena et al., 2017). In parallel, discriminative approaches such as knowledge distillation (Hinton et al., 2015; Hu et al., 2016a, b) create (data, label) pairs by generating a label y given data x. With the symmetric view of the x- and y-spaces, and neural network-based black-box mappings between the spaces, we can see that the two approaches are essentially the same.

References

Appendix A Proof of Lemma 1

Proof.
(24)

where

(25)

Note that , and . Let . Eq.(25) can be simplified as:

(26)

On the other hand,

(27)

Note that

(28)

Taking derivatives of Eq.(26) w.r.t. θ at θ0, we get

(29)

Taking derivatives of both sides of Eq.(24) w.r.t. θ at θ0, and plugging in the last equation of Eq.(29), we obtain the desired result. ∎

Appendix B Proof of JSD Upper Bound in Lemma 1

We show that, in Lemma 1 (Eq.10), the JSD term is upper bounded by the KL term, i.e.,

(30)
Proof.

From Eq.(24), we have

(31)

From Eq.(26) and Eq.(27), we have

(32)

Eq.(31) and Eq.(32) lead to Eq.(30). ∎

Figure 6: Left: Graphical model of InfoGAN. Right: Graphical model of Adversarial Autoencoder (AAE), which is obtained by swapping data and code in InfoGAN.

Appendix C Schematic Graphical Models and AAE/PM/CycleGAN

The adversarial autoencoder (AAE) (Makhzani et al., 2015) can be obtained by swapping the code variable and the data variable of InfoGAN in the graphical model, as shown in Figure 6. To see this, we directly write down the objectives represented by the graphical model in the right panel, and show that they are precisely the original AAE objectives proposed in (Makhzani et al., 2015). We present detailed derivations, which also serve as an example of how one can translate a graphical model representation into mathematical formulations. Readers can proceed similarly with the schematic graphical models of GANs, InfoGAN, VAEs, and many other relevant variants, and conveniently write down the respective objectives.

We stick to the notational convention in the paper that the parameter θ is associated with the distribution over x, the parameter η with the distribution over z, and the parameter φ with the distribution over y. Besides, we use p to denote the distributions over x, and q the distributions over z and y.

From the graphical model, the inference process (dashed-line arrows) involves an implicit distribution (in which the transformation parameters are encapsulated). As in the formulations of GANs (Eq.4 in the paper) and VAEs (Eq.13 in the paper), one distribution indicates the real distribution we want to approximate, and the other the approximate distribution with parameters to learn. So we have

(33)

where, as z is the hidden code, its pre-fixed distribution is the prior distribution over z (see section 6 of the paper for a detailed discussion of the prior distributions of hidden variables and the empirical distribution of visible variables), and the space of y is degenerated. Here the inference distribution is implicit, such that

(34)

where the deterministic transformation, parameterized with η, maps data x to code z. Note that, as x is a visible variable, the pre-fixed distribution over x is the empirical data distribution.

On the other hand, the generative process (solid-line arrows) involves (here means we will swap between and ). As the space of is degenerated given , thus is fixed without parameters to learn, and is only associated to .

With the above components, we maximize the log likelihood of the generative distributions, conditioned on the variable inferred by the inference process. Adding the prior distributions, the objectives are then written as: