Symmetric Variational Autoencoder and Connections to Adversarial Learning

09/06/2017, by Liqun Chen et al., Duke University

A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.







1 Introduction

Generative models that are descriptive of data have been widely employed in statistics and machine learning. Factor models (FMs) represent one commonly used generative model (Tipping and Bishop, 1999), and mixtures of FMs have been employed to account for more-general data distributions (Ghahramani and Hinton, 1997). These models typically have latent variables (e.g., factor scores) that are inferred given observed data; the latent variables are often used for a downstream goal, such as classification (Carvalho et al., 2008). After training, such models are useful for inference tasks given subsequent observed data. However, when one draws from such models, by drawing latent variables from the prior and pushing them through the model to synthesize data, the synthetic data typically do not appear to be realistic. This suggests that while these models may be useful for analyzing observed data in terms of inferred latent variables, they are also capable of describing a large set of data that do not appear to be real.

The generative adversarial network (GAN) (Goodfellow et al., 2014) represents a significant recent advance toward development of generative models that are capable of synthesizing realistic data. Such models also employ latent variables, drawn from a simple distribution analogous to the aforementioned prior, and these random variables are fed through a (deep) neural network. The neural network acts as a functional transformation of the original random variables, yielding a model capable of representing sophisticated distributions. Adversarial learning discourages the network from yielding synthetic data that are unrealistic, from the perspective of a learned neural-network-based classifier. However, GANs are notoriously difficult to train, and multiple generalizations and techniques have been developed to improve learning performance (Salimans et al., 2016), for example Wasserstein GAN (WGAN) (Arjovsky and Bottou, 2017; Arjovsky et al., 2017) and energy-based GAN (EB-GAN) (Zhao et al., 2017).

While the original GAN and variants were capable of synthesizing highly realistic data (e.g., images), the models lacked the ability to infer the latent variables given observed data. This limitation has been mitigated recently by methods like adversarially learned inference (ALI) (Dumoulin et al., 2017), and related approaches. However, ALI appears to be inadequate from the standpoint of inference, in that, given observed data and associated inferred latent variables, the subsequently synthesized data often do not look particularly close to the original data.

The variational autoencoder (VAE) (Kingma and Welling, 2014) is a class of generative models that precedes GAN. VAE learning is based on optimizing a variational lower bound, connected to inferring an approximate posterior distribution on latent variables; such learning is typically not performed in an adversarial manner. VAEs have been demonstrated to be effective models for inferring latent variables, in that the reconstructed data do typically look like the original data, albeit in a blurry manner (Dumoulin et al., 2017). The form of the VAE has been generalized recently, in terms of the adversarial variational Bayes (AVB) framework (Mescheder et al., 2016). This model yields general forms of encoders and decoders, but it is based on the original variational Bayesian (VB) formulation. The original VB framework yields a lower bound on the log likelihood of the observed data, and therefore model learning is connected to maximum-likelihood (ML) approaches. From the perspective of designing generative models, it has been recognized recently that ML-based learning has limitations (Arjovsky and Bottou, 2017): such learning tends to yield models that match observed data, but also have a high probability of generating unrealistic synthetic data.

The original VAE employs the Kullback-Leibler divergence to constitute the variational lower bound. As is well known, the KL divergence is asymmetric. We demonstrate that this asymmetry encourages design of decoders (generators) that often yield unrealistic synthetic data when the latent variables are drawn from the prior. From a different but related perspective, the encoder infers latent variables (across all training data) that only encompass a subset of the prior. As demonstrated below, these limitations of the encoder and decoder within conventional VAE learning are intertwined.

We consequently propose a new symmetric VAE (sVAE), based on a symmetric form of the KL divergence and associated variational bound. The proposed sVAE is learned using an approach related to that employed in the AVB (Mescheder et al., 2016), but in a new manner connected to the symmetric variational bound. Analysis of the sVAE demonstrates that it has close connections to ALI (Dumoulin et al., 2017), WGAN (Arjovsky et al., 2017) and to the original GAN (Goodfellow et al., 2014) framework; in fact, ALI is recovered exactly, as a special case of the proposed sVAE. This provides a new and explicit linkage between the VAE (after it is made symmetric) and a wide class of adversarially trained generative models. Additionally, with this insight, we are able to ameliorate many of the aforementioned limitations of ALI, from the perspective of data reconstruction. In addition to analyzing properties of the sVAE, we demonstrate excellent performance on an extensive set of experiments.

2 Review of Variational Autoencoder

2.1 Background

Assume observed data samples x_i ~ q(x), i = 1, ..., N, where q(x) is the true and unknown distribution we wish to approximate. Consider p_θ(x|z), a model with parameters θ and latent code z. With prior p(z) on the codes, the modeled generative process is z ~ p(z), x ~ p_θ(x|z). We may marginalize out the latent codes, and hence the model is p_θ(x) = ∫ p_θ(x|z) p(z) dz. To learn θ, we typically seek to maximize the expected log likelihood E_{q(x)}[log p_θ(x)], where one typically invokes the approximation E_{q(x)}[log p_θ(x)] ≈ (1/N) Σ_i log p_θ(x_i), assuming iid observed samples {x_i}.

It is typically intractable to evaluate p_θ(x) directly, as ∫ p_θ(x|z) p(z) dz generally doesn't have a closed form. Consequently, a typical approach is to consider a model q_φ(z|x) for the posterior of the latent code z given observed x, characterized by parameters φ. Distribution q_φ(z|x) is often termed an encoder, and p_θ(x|z) is a decoder (Kingma and Welling, 2014); both are here stochastic, vis-à-vis their deterministic counterparts associated with a traditional autoencoder (Vincent et al., 2010). Consider the variational expression

L_x(θ, φ) = E_{q(x)} E_{q_φ(z|x)} [ log( p_θ(x|z) p(z) / q_φ(z|x) ) ]   (1)

In practice the expectation wrt q(x) is evaluated via sampling, assuming observed samples {x_i}. One typically must also utilize sampling from q_φ(z|x) to evaluate the corresponding expectation in (1). Learning is effected as (θ̂, φ̂) = argmax_{θ,φ} L_x(θ, φ), and a model so learned is termed a variational autoencoder (VAE) (Kingma and Welling, 2014).

It is well known that E_{q(x)}[log p_θ(x)] ≥ L_x(θ, φ). Alternatively, the variational expression may be represented as

L_x(θ, φ) = −KL( q_φ(x,z) || p_θ(x,z) ) + const   (2)

where q_φ(x,z) = q(x) q_φ(z|x), p_θ(x,z) = p(z) p_θ(x|z), and the constant −h(q(x)), with h(·) the differential entropy, is independent of (θ, φ). One may readily show that

KL( q_φ(x,z) || p_θ(x,z) ) = KL( q(x) || p_θ(x) ) + E_{q(x)}[ KL( q_φ(z|x) || p_θ(z|x) ) ]   (3)
 = KL( q_φ(z) || p(z) ) + E_{q_φ(z)}[ KL( q_φ(x|z) || p_θ(x|z) ) ]   (4)

where q_φ(z) = E_{q(x)}[q_φ(z|x)]. To maximize L_x(θ, φ), we seek minimization of KL(q_φ(x,z)||p_θ(x,z)). Hence, from (3) the goal is to align p_θ(x) with q(x), while from (4) the goal is to align q_φ(z) with p(z). The other terms seek to match the respective conditional distributions. All of these conditions are implied by minimizing KL(q_φ(x,z)||p_θ(x,z)). However, the KL divergence is asymmetric, which yields limitations wrt the learned model.
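The asymmetry of the KL divergence is easy to see numerically. The following sketch (illustrative, not from the paper; the helper name and the example distributions are our own) compares the two orderings of the KL divergence between univariate Gaussians using the standard closed-form expression.

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# A narrow distribution against a wide one: the two orderings differ sharply.
kl_qp = kl_gauss(0.0, 1.0, 0.0, 3.0)  # narrow q relative to wide p
kl_pq = kl_gauss(0.0, 3.0, 0.0, 1.0)  # wide p relative to narrow q
print(kl_qp, kl_pq)
```

The direction that evaluates the wide distribution under the narrow one is much larger, because it heavily penalizes mass placed where the narrow distribution is nearly zero; this is the asymmetry exploited in the discussion that follows.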

2.2 Limitations of the VAE

The support S_q of a distribution q is defined as the member of the set {S : ∫_S q(x) dx = 1 − ε} with minimum size |S| = ∫_S dx. We are typically interested in ε → 0+. For notational convenience we write S_q, with the understanding that ε is small. We also define S̄_q as the largest set for which q(x) = 0 for all x ∈ S̄_q, and hence S̄_q is (in the limit) the complement of S_q. For simplicity of exposition, we assume S_q and S̄_q are unique; the meaning of the subsequent analysis is unaffected by this assumption.

Consider E_{q(x)}[log p_θ(x)], which from (2) and (3) we seek to make large when learning θ. The following discussion borrows insights from (Arjovsky et al., 2017), although that analysis was different, in that it was not placed within the context of the VAE. Since the expectation is over q(x), only x ∈ S_q contribute. If p_θ(x) → 0 for x ∈ S_q, there is a strong (negative) penalty introduced by log p_θ(x), and therefore maximization of E_{q(x)}[log p_θ(x)] encourages S_q ⊆ S_{p_θ}. By contrast, there is not a substantial penalty to S_{p_θ} also covering regions outside S_q.

Summarizing these conditions, the goal of maximizing E_{q(x)}[log p_θ(x)] encourages S_q ⊆ S_{p_θ}. This implies that p_θ(x) can synthesize all x that may be drawn from q(x), but additionally there is (often) high probability that p_θ(x) will synthesize x that will not be drawn from q(x).

Similarly, the term −KL(q_φ(z)||p(z)) in (4) encourages S_{q_φ(z)} ⊆ S_{p(z)}, and the commensurate goal of increasing the differential entropy h(q_φ(z)) encourages that S_{q_φ(z)} be as large as possible.

Hence, the goals of large E_{q(x)}[log p_θ(x)] and small KL(q_φ(z)||p(z)) are saying the same thing, from different perspectives: (i) seeking S_{p_θ} ⊇ S_q implies that there is a high probability that x drawn from p_θ(x) will be different from those drawn from q(x), and (ii) S_{q_φ(z)} ⊆ S_{p(z)} implies that z drawn from p(z) are likely to be different from those drawn from q_φ(z), with p_θ(x|z) responsible for the x that are inconsistent with q(x). These properties are summarized in Fig. 1.

Figure 1: Characteristics of the encoder and decoder of the conventional VAE, for which the supports of the distributions satisfy S_q ⊆ S_{p_θ} and S_{q_φ(z)} ⊆ S_{p(z)}, implying that the generative model has a high probability of generating unrealistic draws.
Figure 2: Characteristics of the new VAE expression, L_z(θ, φ).

Considering the remaining terms in (3) and (4), and using similar logic on the conditional-matching terms, from (3) the model encourages S_{q_φ(z|x)} ⊆ S_{p_θ(z|x)}, and from (4) it also encourages S_{q_φ(x|z)} ⊆ S_{p_θ(x|z)}. The corresponding differential entropies encourage that these supports be as large as possible. Since S_{q_φ(z|x)} ⊆ S_{p_θ(z|x)}, it is anticipated that q_φ(z|x) will under-estimate the variance of p_θ(z|x), as is common with the variational approximation to the posterior (Blei et al., 2017).
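The severity of the one-sided penalty can be made concrete with a small Monte Carlo sketch (illustrative, not from the paper; the distributions and sample size are our own choices): evaluating the expected log-density of a model whose support fails to cover the data distribution yields an unbounded negative penalty, whereas the model spreading mass outside the data support costs nothing under this objective.

```python
import math
import random

random.seed(0)

def log_p_uniform(x, a=-1.0, b=1.0):
    """Log-density of Uniform(a, b); -inf outside its support."""
    return -math.log(b - a) if a <= x <= b else float("-inf")

# Draw from q = N(0, 1); many samples fall outside [-1, 1], the support of p.
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
avg_log_p = sum(log_p_uniform(x) for x in xs) / len(xs)
print(avg_log_p)  # -inf: the objective forbids a model that misses any of S_q
```

Nothing analogous happens in the reverse situation: a model covering far more than the data support incurs only a bounded normalization cost, which is exactly the asymmetry described above.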

3 Refined VAE: Imposition of Symmetry

3.1 Symmetric KL divergence

Consider the new variational expression

L_z(θ, φ) = E_{p(z)} E_{p_θ(x|z)} [ log( q(x) q_φ(z|x) / p_θ(x|z) ) ]   (5)

which may equivalently be written

L_z(θ, φ) = −KL( p_θ(x,z) || q_φ(x,z) ) + const   (6)

where the constant −h(p(z)) is independent of (θ, φ). Using logic analogous to that applied to L_x(θ, φ), maximization of L_z(θ, φ) encourages the distribution supports reflected in Fig. 2.

Defining L_xz(θ, φ) = L_x(θ, φ) + L_z(θ, φ), we have

L_xz(θ, φ) = −KL_s( q_φ(x,z) || p_θ(x,z) ) + const   (7)

where the constant is independent of (θ, φ), and the symmetric KL divergence is KL_s(q||p) = KL(q||p) + KL(p||q). Maximization of L_xz(θ, φ) seeks minimization of KL_s(q_φ(x,z)||p_θ(x,z)), which simultaneously imposes the conditions summarized in Figs. 1 and 2.

One may show that

KL_s( q_φ(x,z) || p_θ(x,z) ) = KL( q_φ(z) || p(z) ) + KL( p(z) || q_φ(z) ) + E_{q_φ(z)}[ KL( q_φ(x|z) || p_θ(x|z) ) ] + E_{p(z)}[ KL( p_θ(x|z) || q_φ(x|z) ) ]   (8)
 = KL( q(x) || p_θ(x) ) + KL( p_θ(x) || q(x) ) + E_{q(x)}[ KL( q_φ(z|x) || p_θ(z|x) ) ] + E_{p_θ(x)}[ KL( p_θ(z|x) || q_φ(z|x) ) ]   (9)

Considering the representation in (9), the goal of small KL_s(q_φ(x,z)||p_θ(x,z)) encourages S_q ⊆ S_{p_θ} and S_{p_θ} ⊆ S_q, and hence that S_{p_θ} = S_q. Further, since KL(p_θ(x)||q(x)) equals the cross-entropy between p_θ(x) and q(x) minus h(p_θ(x)), maximization of L_xz(θ, φ) seeks to minimize the cross-entropy between p_θ(x) and q(x), encouraging a complete matching of the distributions p_θ(x) and q(x), not just shared support. From (8), a match is simultaneously encouraged between q_φ(z) and p(z). Further, the respective conditional distributions are also encouraged to match.
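Continuing the earlier Gaussian sketch (again illustrative, not from the paper; helper names are our own), the symmetric KL simply sums the two directions, so unlike either direction alone it penalizes both a model that over-spreads and one that under-covers:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) )."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def sym_kl_gauss(mu1, s1, mu2, s2):
    """Symmetric KL: KL(q||p) + KL(p||q); symmetric in its arguments."""
    return kl_gauss(mu1, s1, mu2, s2) + kl_gauss(mu2, s2, mu1, s1)

# Same value regardless of argument order, and zero iff the distributions match.
print(sym_kl_gauss(0.0, 1.0, 2.0, 1.0), sym_kl_gauss(2.0, 1.0, 0.0, 1.0))
```

Because neither term of the sum can be driven to zero without full agreement of the two distributions, minimizing the symmetric KL encourages the complete matching described above rather than one-sided support inclusion.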

3.2 Adversarial solution

Assuming fixed (θ, φ), and using logic analogous to Proposition 1 in (Mescheder et al., 2016), we consider

ψ* = argmax_ψ { E_{q_φ(x,z)}[ log σ(d_ψ(x,z)) ] + E_{p_θ(x,z)}[ log(1 − σ(d_ψ(x,z))) ] }   (10)

where σ(·) is the logistic sigmoid. The scalar function d_ψ(x,z) is represented by a deep neural network with parameters ψ, and network inputs (x, z). For fixed (θ, φ), the parameters ψ* that maximize (10) yield

d_{ψ*}(x,z) = log q_φ(x,z) − log p_θ(x,z)   (11)

and hence

KL_s( q_φ(x,z) || p_θ(x,z) ) = E_{q_φ(x,z)}[ d_{ψ*}(x,z) ] − E_{p_θ(x,z)}[ d_{ψ*}(x,z) ]

Hence, to optimize (θ, φ) we consider the cost function

L_xz(θ, φ; ψ) = E_{p_θ(x,z)}[ d_ψ(x,z) ] − E_{q_φ(x,z)}[ d_ψ(x,z) ]   (14)

Assuming (11) holds, we have

L_xz(θ, φ; ψ*) = −KL_s( q_φ(x,z) || p_θ(x,z) )

and the goal is to achieve q_φ(x,z) = p_θ(x,z) through joint optimization of (θ, φ, ψ). Model learning consists of alternating between (10) and (14), maximizing (10) wrt ψ with (θ, φ) fixed, and maximizing (14) wrt (θ, φ) with ψ fixed.

The expectations in (10) and (14) are approximated by averaging over samples, and therefore to implement this solution we need only be able to sample from q_φ(x,z) and p_θ(x,z), and we do not require explicit forms for these distributions. For example, a draw from q_φ(z|x) may be constituted as z = g_φ(x, ε), where g_φ is implemented as a neural network with parameters φ and ε ~ N(0, I).
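The key claim, that the optimal binary classifier recovers the log-ratio of the two sampled distributions, can be checked with a small sketch (illustrative, not the authors' implementation; the 1-D Gaussians, sample sizes, and learning rate are our own choices). For two unit-variance Gaussians at ±1 the true log-ratio is linear, log q(x) − log p(x) = 2x, so a trained logistic classifier's logit should recover slope 2 and intercept 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from "q" = N(+1, 1) labeled 1, and "p" = N(-1, 1) labeled 0.
n = 20000
x = np.concatenate([rng.normal(+1.0, 1.0, n), rng.normal(-1.0, 1.0, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic classifier with logit d(x) = w*x + b, full-batch gradient descent
# on the binary cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((prob - y) * x)
    b -= lr * np.mean(prob - y)

# The learned logit approximates log q(x) - log p(x) = 2x, i.e. w ~ 2, b ~ 0.
print(w, b)
```

In the sVAE the same principle is applied to joint samples (x, z) from the encoder and decoder paths, with the classifier logit serving directly as the statistic d_ψ in (14).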

3.3 Interpretation in terms of LRT statistic

In (10) a classifier is designed to distinguish between samples (x, z) drawn from q_φ(x,z) and from p_θ(x,z). Implicit in that expression is that there is equal probability that either of these distributions is selected for drawing (x, z). Under this assumption, given observed (x, z), the probability of it being drawn from q_φ(x,z) is q_φ(x,z)/[q_φ(x,z) + p_θ(x,z)], and the probability of it being drawn from p_θ(x,z) is p_θ(x,z)/[q_φ(x,z) + p_θ(x,z)] (Goodfellow et al., 2014). Since the denominator is shared by these probabilities, and assuming the function d_{ψ*}(x,z) is known, an observed (x, z) is inferred as being drawn from the underlying distributions as

q_φ(x,z) if d_{ψ*}(x,z) > 0, and p_θ(x,z) if d_{ψ*}(x,z) < 0.

This is the well-known likelihood ratio test (LRT) (Trees, 2001), and is reflected by (11). We have therefore derived a learning procedure based on the log-LRT, as reflected in (14). The solution is "adversarial," in the sense that when optimizing (θ, φ) the objective in (14) seeks to "fool" the LRT test statistic, while for fixed (θ, φ) maximization of (10) wrt ψ corresponds to updating the LRT. This adversarial solution comes as a natural consequence of symmetrizing the traditional VAE learning procedure.
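The link between the classifier probability and the log-LRT is just the sigmoid identity σ(log q − log p) = q/(q + p), which the following sketch verifies (the density values are arbitrary illustrative numbers):

```python
import math

def sigmoid(d):
    return 1.0 / (1.0 + math.exp(-d))

# For any positive density values q and p at a point (x, z):
q, p = 0.3, 0.05                 # illustrative density values
d = math.log(q) - math.log(p)    # log-likelihood ratio statistic
print(sigmoid(d), q / (q + p))   # identical: classifier prob = q / (q + p)
```

The decision rule above then corresponds to thresholding the classifier probability at 1/2, since σ(d) > 1/2 exactly when d > 0.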

4 Connections to Prior Work

4.1 Adversarially Learned Inference

The adversarially learned inference (ALI) (Dumoulin et al., 2017) framework seeks to learn both an encoder and decoder, like the approach proposed above, and is based on optimizing

min_{θ,φ} max_ψ { E_{q_φ(x,z)}[ log σ(d_ψ(x,z)) ] + E_{p_θ(x,z)}[ log(1 − σ(d_ψ(x,z))) ] }   (18)

This has similarities to the proposed approach, in that the inner maximization over ψ is identical to our maximization of (10) wrt ψ. However, in the proposed approach, rather than directly then optimizing wrt (θ, φ), as in (18), the result of this inner maximization is used to define d_{ψ*}(x,z) ≈ log q_φ(x,z) − log p_θ(x,z), which is then employed in (14) to subsequently optimize over (θ, φ).

Note that log σ(d) is a monotonically increasing function of d, and therefore we may replace (14) with

E_{p_θ(x,z)}[ log σ(d_ψ(x,z)) ] + E_{q_φ(x,z)}[ log(1 − σ(d_ψ(x,z))) ]   (19)

and note that log(1 − σ(d)) = log σ(−d) is monotonically decreasing in d. Maximizing (19) wrt (θ, φ) with ψ fixed corresponds to the minimization wrt (θ, φ) reflected in (18). Hence, the proposed approach is exactly ALI, if in (14) we replace d_ψ(x,z) with log σ(d_ψ(x,z)) and −d_ψ(x,z) with log(1 − σ(d_ψ(x,z))).

4.2 Original GAN

The proposed approach assumed both a decoder p_θ(x|z) and an encoder q_φ(z|x), and we considered the symmetric KL_s(q_φ(x,z)||p_θ(x,z)). We now simplify the model for the case in which we only have a decoder, and the synthesized data are drawn with z ~ p(z), x ~ p_θ(x|z), and we wish to learn p_θ(x|z) such that data synthesized in this manner match observed data drawn from q(x). Consider the symmetric objective

−KL_s( q(x) || p_θ(x) ) = E_{p_θ(x)}[ d_{ψ*}(x) ] − E_{q(x)}[ d_{ψ*}(x) ]   (20)

where for fixed θ

d_{ψ*}(x) = log q(x) − log p_θ(x)   (21)

We consider a simplified form of (10), specifically

E_{q(x)}[ log σ(d_ψ(x)) ] + E_{p_θ(x)}[ log(1 − σ(d_ψ(x))) ]   (22)

which we seek to maximize wrt ψ with fixed θ, with optimal solution as in (21). We optimize θ seeking to maximize (20), as

L(θ; ψ) = E_{p_θ(x)}[ d_ψ(x) ] + C   (23)

with C = −E_{q(x)}[ d_ψ(x) ] independent of the update parameter θ. We observe that in seeking to maximize (23), parameters θ are updated so as to "fool" the log-LRT d_ψ(x). Learning consists of iteratively updating ψ by maximizing (22) and updating θ by maximizing (23).

Recall that log σ(d) is a monotonically increasing function of d, and therefore we may replace (23) with

E_{p_θ(x)}[ log σ(d_ψ(x)) ]   (24)

Using the same logic as discussed above in the context of ALI, maximizing (24) wrt θ may be replaced by minimization, by transforming log σ(d_ψ(x)) → log(1 − σ(d_ψ(x))). With this simple modification, minimizing the modified (24) wrt θ and maximizing (22) wrt ψ, we exactly recover the original GAN (Goodfellow et al., 2014), for the special (but common) case of a sigmoidal discriminator.

4.3 Wasserstein GAN

The Wasserstein GAN (WGAN) (Arjovsky et al., 2017) setup is represented as

min_θ max_ψ { E_{q(x)}[ f_ψ(x) ] − E_{p_θ(x)}[ f_ψ(x) ] }   (25)

where f_ψ(x) must be a 1-Lipschitz function. Typically f_ψ(x) is represented by a neural network with parameters ψ, with parameter clipping or regularization on the weights (to constrain the amplitude of f_ψ(x)). Note that WGAN is closely related to (23), but in WGAN f_ψ(x) doesn't make an explicit connection to the underlying likelihood ratio, as in (21).

To our knowledge, this is the first paper to consider symmetric variational learning, introducing L_z(θ, φ), from which we have made explicit connections to previously developed adversarial-learning methods. Previous efforts have been made to match q_φ(z) to p(z), which is a consequence of the proposed symmetric VAE (sVAE). For example, (Makhzani et al., 2016) introduced a modification to the original VAE formulation, but it loses the connection to the variational lower bound (Mescheder et al., 2016).

4.4 Amelioration of vanishing gradients

As discussed in (Arjovsky et al., 2017), a key distinction between the WGAN framework in (25) and the original GAN (Goodfellow et al., 2014) is that the latter uses a binary discriminator to distinguish real and synthesized data; the f_ψ(x) in WGAN is a 1-Lipschitz function, rather than an explicit discriminator. A challenge with GAN is that as the discriminator gets better at distinguishing real and synthetic data, the gradients it provides to the generator vanish, and learning is undermined. The WGAN was designed to ameliorate this problem (Arjovsky et al., 2017).

From the discussion in Section 4.1, we note that the key distinction between the proposed sVAE and ALI is that the latter uses a binary discriminator to distinguish pairs (x, z) manifested via the generator from those manifested via the encoder. By contrast, the sVAE uses a log-LRT, rather than a binary classifier, with the log-LRT inferred in an adversarial manner. ALI is therefore undermined by vanishing gradients as the binary discriminator gets better, and this is avoided by sVAE. The sVAE brings the same intuition associated with WGAN (addressing vanishing gradients) to a generalized VAE framework, with a generator and an encoder; WGAN only considers a generator. Further, as discussed in Section 4.3, unlike WGAN, which requires gradient clipping or other forms of regularization to approximate 1-Lipschitz functions, in the proposed sVAE the function d_ψ(x,z) arises naturally from the symmetrized VAE and we do not require imposition of Lipschitz conditions. As discussed in Section 6, this simplification has yielded robustness in implementation.
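The vanishing-gradient contrast can be made concrete with a sketch (illustrative, not from the paper): for a sample the discriminator scores with a very negative d, the saturating binary-discriminator term log(1 − σ(d)) has derivative −σ(d), which collapses to zero, while the raw log-LRT statistic d has constant derivative 1.

```python
import math

def sigmoid(d):
    return 1.0 / (1.0 + math.exp(-d))

def grad_saturating(d):
    """d/dd of log(1 - sigmoid(d)): vanishes as the discriminator grows confident."""
    return -sigmoid(d)

def grad_lrt(d):
    """d/dd of the raw statistic d: constant, never vanishes."""
    return 1.0

for d in (-1.0, -5.0, -10.0):
    print(d, grad_saturating(d), grad_lrt(d))
```

As d becomes more negative the saturating gradient decays exponentially toward zero, so the generator stops receiving a useful learning signal; the raw statistic does not suffer this.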

5 Model Augmentation

A significant limitation of the original ALI setup is an inability to accurately reconstruct observed data via the process x → z ~ q_φ(z|x) → x̂ ~ p_θ(x|z) (Dumoulin et al., 2017). With the proposed sVAE, which is intimately connected to ALI, we may readily address this shortcoming. The variational expressions discussed above may be written as L_x(θ, φ) = E_{q_φ(x,z)}[log p_θ(x|z)] − E_{q(x)}[KL(q_φ(z|x)||p(z))] and L_z(θ, φ) = E_{p_θ(x,z)}[log q_φ(z|x)] − E_{p(z)}[KL(p_θ(x|z)||q(x))]. In both of these expressions, the first term to the right of the equality enforces model fit, and the second term penalizes the posterior distribution for individual data samples for being dissimilar from the prior (i.e., penalizes q_φ(z|x) for being dissimilar from p(z), and likewise wrt p_θ(x|z) and q(x)). The proposed sVAE encourages the cumulative distributions q_φ(z) and p_θ(x) to match p(z) and q(x), respectively. By simultaneously encouraging more peaked q_φ(z|x) and p_θ(x|z), we anticipate better "cycle consistency" (Zhu et al., 2017) and hence more accurate reconstructions.

To encourage q_φ(z|x) that are more peaked in the space of z for individual x, and also to consider more peaked p_θ(x|z), we may augment the variational expressions as

L_x(θ, φ; λ) = L_x(θ, φ) + λ E_{q_φ(x,z)}[ log p_θ(x|z) ]   (26)
L_z(θ, φ; λ) = L_z(θ, φ) + λ E_{p_θ(x,z)}[ log q_φ(z|x) ]   (27)

where λ ≥ 0. For λ = 0 the original variational expressions are retained, and for λ > 0, q_φ(z|x) and p_θ(x|z) are allowed to diverge more from p(z) and q(x), respectively, while placing more emphasis on the data-fit terms. Defining L_xz(θ, φ; λ) = L_x(θ, φ; λ) + L_z(θ, φ; λ), we have

L_xz(θ, φ; λ) = L_xz(θ, φ) + λ ( E_{q_φ(x,z)}[ log p_θ(x|z) ] + E_{p_θ(x,z)}[ log q_φ(z|x) ] )   (28)
Model learning is the same as discussed in Sec. 3.2, with the modification that (14) is replaced by

E_{p_θ(x,z)}[ d_ψ(x,z) ] − E_{q_φ(x,z)}[ d_ψ(x,z) ] + λ ( E_{q_φ(x,z)}[ log p_θ(x|z) ] + E_{p_θ(x,z)}[ log q_φ(z|x) ] )   (29)

A disadvantage of this approach is that it requires explicit forms for p_θ(x|z) and q_φ(z|x), while the setup in Sec. 3.2 only requires the ability to sample from these distributions.

We can now make a connection to additional related work, particularly (Pu et al., 2017), which considered a similar setup to (26) and (27) for a special case. While (Pu et al., 2017) had a similar idea of using a symmetrized VAE, they did not provide the theoretical justification presented in Section 3. Further, and more importantly, the way in which learning was performed in (Pu et al., 2017) is distinct from that applied here, in that (Pu et al., 2017) required an additional adversarial-learning step, increasing implementation complexity. Consequently, (Pu et al., 2017) did not use adversarial learning to approximate the log-LRT, and therefore could not make the explicit connections to ALI and WGAN that were made in Sections 4.1 and 4.3, respectively.

6 Experiments

In addition to evaluating our model on a toy dataset, we consider MNIST, CelebA and CIFAR-10, for both reconstruction and generation tasks. As done for ALI with cross-entropy regularization (ALICE) (Li et al., 2017), we also add the augmentation term (λ > 0, as discussed in Sec. 5) to sVAE as a regularizer, and denote the new model sVAE-r. More specifically, we show results for two models: (i) sVAE: the model developed in Sec. 3, optimizing (10) wrt ψ and (14) wrt (θ, φ); (ii) sVAE-r: sVAE with the regularization term, optimizing (10) wrt ψ and (29) wrt (θ, φ). The quantitative evaluation is based on the mean squared error (MSE) of reconstructions, log-likelihood calculated via annealed importance sampling (AIS) (Wu et al., 2016), and inception score (IS) (Salimans et al., 2016).

All parameters are initialized with Xavier initialization (Glorot and Bengio, 2010) and optimized using Adam (Kingma and Ba, 2015) with a learning rate of 0.0001. No dataset-specific tuning or regularization, other than dropout (Srivastava et al., 2014), is performed. The architectures for the encoder, decoder and discriminator are detailed in the Appendix. All experiments were run on a single NVIDIA TITAN X GPU.
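The initialization referenced above can be sketched generically (a dependency-free sketch of Xavier/Glorot uniform initialization, not the authors' code; the layer sizes are illustrative):

```python
import math
import random

random.seed(0)

def xavier_uniform(fan_in, fan_out):
    """Glorot & Bengio (2010) uniform init: U(-a, a), a = sqrt(6 / (fan_in + fan_out))."""
    a = math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-a, a) for _ in range(fan_out)] for _ in range(fan_in)]

# Illustrative fully connected layer: 784 inputs to 1024 hidden units.
W = xavier_uniform(784, 1024)
limit = math.sqrt(6.0 / (784 + 1024))
print(len(W), len(W[0]), limit)
```

The bound on the uniform range keeps the variance of activations roughly constant across layers at initialization, which is the motivation given by Glorot and Bengio (2010).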

Figure 3: sVAE results on the toy dataset. Top: inception score for ALICE and sVAE-r with different values of λ. Bottom: mean squared error (MSE).

6.1 Toy Data

In order to show the robustness and stability of our model, we test sVAE and sVAE-r on a toy dataset designed in the same manner as the one in ALICE (Li et al., 2017). In this dataset, the true data distribution q(x) is a two-dimensional Gaussian mixture model with five components, and the prior on the latent code z is a standard Gaussian distribution. To perform the test, we consider different values of λ for both sVAE-r and ALICE. For each λ, experiments with different choices of architecture and hyper-parameters are conducted. In all experiments, we use mean squared error (MSE) and inception score (IS) to evaluate the performance of the two models. Figure 3 shows the histogram results for each model. As can be seen, both ALICE and sVAE-r are able to reconstruct samples as λ increases, while sVAE-r provides a better overall inception score.

Figure 4: sVAE results on MNIST. (a) and (b) are sample images generated by sVAE and sVAE-r, respectively. (c) shows images reconstructed by sVAE-r: in each block, column one is the ground truth and column two is the reconstruction. Note that λ is fixed for sVAE-r.

6.2 MNIST

The results of image generation and reconstruction for sVAE, as applied to the MNIST dataset, are shown in Figure 4. By adding the regularization term, sVAE overcomes the limitation of image reconstruction in ALI. The log-likelihood of sVAE shown in Table 1 is calculated using the annealed importance sampling method on the binarized MNIST dataset, as proposed in (Wu et al., 2016). Note that in order to compare model performance on binarized data, the output of the decoder is treated as a Bernoulli distribution instead of the Gaussian form used in the original paper. Our model achieves -79.26 nats, outperforming normalizing flows (-85.1 nats) while also being competitive with the state-of-the-art result (-79.2 nats). In addition, sVAE is able to provide compelling generated images, outperforming GAN (Goodfellow et al., 2014) and WGAN-GP (Gulrajani et al., 2017) based on inception scores.

Model  log-likelihood (nats)  IS
NF (k=80) (Rezende et al., 2015)  -85.1  -
PixelRNN (Oord et al., 2016)  -79.2  -
AVB (Mescheder et al., 2016)  -79.5  -
ASVAE (Pu et al., 2017)  -81.14  -
GAN (Goodfellow et al., 2014)  -114.25  8.34
WGAN-GP (Gulrajani et al., 2017)  -79.92  8.45
DCGAN (Radford et al., 2016)  -79.47  8.93
sVAE (ours)  -80.42  8.81
sVAE-r (ours)  -79.26  9.12
Table 1: Quantitative results on MNIST. The log-likelihood is calculated using AIS; the GAN result is as reported in (Hu et al., 2017).
Figure 5: CelebA generation results. Left block: sVAE-r generation. Right block: ALICE generation. Different values of λ from left to right in each block.
Figure 6: CelebA reconstruction results. Left column: the ground truth. Middle block: sVAE-r reconstruction. Right block: ALICE reconstruction. Different values of λ from left to right in each block.

6.3 CelebA

We evaluate sVAE on the CelebA dataset and compare the results with ALI. In these experiments we note that for high-dimensional data like CelebA, ALICE (Li et al., 2017) shows a trade-off between reconstruction and generation, while sVAE-r does not have this issue. If the regularization term is not included in ALI, the reconstructed images do not match the original images. On the other hand, when the regularization term is added, ALI is capable of reconstructing images but the generated images are flawed. In comparison, sVAE-r does well in both generation and reconstruction for different values of λ. The results for both sVAE and ALI are shown in Figures 5 and 6.

Generally speaking, adding the augmentation term as shown in (28) should encourage more peaked q_φ(z|x) and p_θ(x|z). Nevertheless, ALICE fails in the inference process and performs more like an autoencoder, because its discriminator becomes too sensitive to the regularization term. By using the symmetric KL objective (14) as the cost function, we are able to alleviate this issue, which makes sVAE-r a more stable model than ALICE. This is because sVAE updates the generator using the discriminator output before the sigmoid, avoiding the saturating non-linearity applied to the discriminator output.

Figure 7: sVAE-r and ALICE CIFAR quantitative evaluation with different values of λ. Left: IS for generation; right: MSE for reconstruction. Each result is the average of multiple runs.

6.4 CIFAR-10

The trade-off of ALICE (Li et al., 2017) mentioned in Sec. 6.3 is also manifested in the results for the CIFAR-10 dataset. In Figure 7, we show quantitative results, in terms of inception score and mean squared error, for sVAE-r and ALICE with different values of λ. As can be seen, both models are able to reconstruct images as λ increases. However, when λ becomes large, we observe a decrease in the inception score of ALICE, indicating that the model fails to generate realistic images.

Model  IS
ALI (Dumoulin et al., 2017)  5.34 ± .05
DCGAN (Radford et al., 2016)  6.16 ± .07
ASVAE (Pu et al., 2017)  6.89 ± .05
WGAN-GP (Gulrajani et al., 2017)  6.56 ± .05
WGAN-GP ResNet (Gulrajani et al., 2017)  7.86 ± .07
sVAE (ours)  6.76 ± .046
sVAE-r (ours)  6.96 ± .066
Table 2: Unsupervised inception scores on CIFAR-10.

The CIFAR-10 dataset is also used to evaluate the generation ability of our model. The quantitative results, i.e., the inception scores, are listed in Table 2. Our model shows improved performance on image generation compared to ALI and DCGAN. Note that sVAE also achieves a result comparable to that of WGAN-GP (Gulrajani et al., 2017). This can be interpreted via the similarity between (23) and (25), as summarized in Sec. 4. The generated images are shown in Figure 8. More results are in the Appendix.

(a) sVAE CIFAR unsupervised generation.
(b) sVAE-r CIFAR unsupervised generation.
(c) sVAE-r CIFAR unsupervised reconstruction: the first two rows are original images, and the last two rows are the reconstructions.
Figure 8: sVAE CIFAR results on image generation and reconstruction.

7 Conclusions

We present the symmetric variational autoencoder (sVAE), a novel framework which can match the joint distribution of data and latent code using the symmetric Kullback-Leibler divergence. The experimental results show the advantages of the sVAE: it not only overcomes the missing-mode problem (Hu et al., 2017), but is also very stable to train. Given its excellent performance in image generation and reconstruction, we will apply the sVAE to semi-supervised learning and conditional generation tasks in future work. Moreover, because the latent code z can be treated as data from a different domain, i.e., images (Zhu et al., 2017; Kim et al., 2017) or text (Gan et al., 2017), we can also apply the sVAE to domain-transfer tasks.


Appendix A Model Architectures

Encoder X to z Decoder z to X Discriminator
Input Gray Image Input latent code z Input two Gray Image

conv. 16 ReLU, stride 2, BN

MLP output 1024, BN conv. 32 ReLU, stride 2, BN
conv. 32 ReLU, stride 2, BN MLP output 3136, BN conv. 64 ReLU, stride 2, BN
MLP output 784, BN deconv. 64 ReLU, stride 2, BN conv. 128 ReLU, stride 2, BN
input z through MLP output 1024, ReLU
MLP output dim of z deconv. 1 ReLU, stride 2, sigmoid MLP output 1
Table 3: Architecture of the models for sVAE-r on MNIST. BN denotes batch normalization.

Encoder X to z Decoder z to X Discriminator
Input Image X concat with noise Input z concat with noise Input X
conv. 32 lReLU, stride 2, BN concat random noise conv. 64 ReLU, stride 2, BN
conv. 64 lReLU, stride 2, BN MLP output 1024, lReLU, BN conv. 128 ReLU, stride 2, BN
conv. 128 lReLU, stride 2, BN MLP output 8192, lReLU, BN conv. 256 ReLU, stride 2, BN
conv. 256 lReLU, stride 2, BN conv. 512 ReLU, stride 2, BN
conv. 512 lReLU, stride 2, BN deconv. 256 lReLU, stride 2, BN Input z through MLP, output 2046, ReLU
MLP output 512, lReLU deconv. 128 lReLU, stride 2, BN concat two features from X and z
MLP output dim of z, tanh deconv. 64 lReLU, stride 2, BN
deconv. 3 tanh, stride 2, BN MLP output 1
Table 4: Architecture of the models for sVAE on CelebA. BN denotes batch normalization. lReLU denotes Leaky ReLU.
Encoder X to z Decoder z to X Discriminator
Input Image X concat with noise Input z Input X
conv. 32 lReLU, stride 2, BN concat random noise conv. 64 ReLU, stride 2, BN
conv. 64 lReLU, stride 2, BN conv. 128 ReLU, stride 2, BN
conv. 128 lReLU, stride 2, BN MLP output 8192, lReLU, BN conv. 256 ReLU, stride 2, BN
conv. 256 lReLU, stride 2, BN conv. 512 ReLU, stride 2, BN, avg pooling
deconv. 256 ReLU, stride 2, BN Input z through MLP, output 512, ReLU
MLP output 512, lReLU deconv. 128 ReLU, stride 2, BN concat two features from X and z
MLP output dim of z, tanh deconv. 3 tanh, stride 2 MLP output 1
Table 5: Architecture of the models for sVAE-r on CIFAR. BN denotes batch normalization. lReLU denotes Leaky ReLU.

Appendix B More Results

B.1 CIFAR-10 results

Figure 9: sVAE CIFAR unsupervised generation results.
Figure 10: sVAE CIFAR unsupervised reconstruction. The first two rows are original images, and the last two rows are the reconstructions.
Figure 11: sVAE-r CIFAR unsupervised reconstruction. The first two rows are original images, and the last two rows are the reconstructions.

B.2 CelebA results

Figure 12: sVAE-r CelebA generation results with different values of λ.
Figure 13: ALICE CelebA generation results with different values of λ.
Figure 14: ALICE CelebA reconstructions with different values of λ.
Figure 15: ALICE CelebA reconstructions with different values of λ.