Density estimation using Real NVP

Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio (May 27, 2016)

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.


1 Introduction

The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities for which labeled data are impractical or impossible to obtain.

One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction-related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super-resolution [9].

As data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture their complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data.

This model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model.

2 Related work

Figure 1: Real NVP learns an invertible, stable mapping between a data distribution and a latent distribution (typically a Gaussian). Here we show a mapping that has been learned on a toy 2-d dataset. The function f maps samples x from the data distribution in the upper left into approximate samples z from the latent distribution, in the upper right. This corresponds to exact inference of the latent state given the data. The inverse function, f^{-1}, maps samples z from the latent distribution in the lower right into approximate samples x from the data distribution in the lower left. This corresponds to exact generation of samples from the model. The transformation of grid lines in X and Z space is additionally illustrated for both f and f^{-1}.

Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7].

Directed graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49] allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network, which maps Gaussian latent variables z to samples x, and a matched approximate inference network that maps samples x to a semantically meaningful latent representation z, by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34].

Such approximations can be avoided altogether by abstaining from using latent variables. Auto-regressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions, simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long-short term memory [26], and residual networks [25, 24] in order to learn state-of-the-art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency. For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning.

Generative Adversarial Networks (GANs) [21] on the other hand can train any differentiable generative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistic-looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.

Training such a generative network g that maps a latent variable z ∼ p_Z to a sample x ∼ p_X does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if g is bijective, it can be trained through maximum likelihood using the change of variable formula:

p_X(x) = p_Z(z) \left| \det\left( \frac{\partial g(z)}{\partial z} \right) \right|^{-1}    (1)

This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], Gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as a tractable instance of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use.

3 Model definition

In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed-form reconstruction cost such as squared error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.

3.1 Change of variable formula

Given an observed data variable x \in X, a simple prior probability distribution p_Z on a latent variable z \in Z, and a bijection f : X \to Z (with g = f^{-1}), the change of variable formula defines a model distribution on X by

p_X(x) = p_Z\big(f(x)\big) \left| \det\left( \frac{\partial f(x)}{\partial x^T} \right) \right|    (2)
\log\big(p_X(x)\big) = \log\Big(p_Z\big(f(x)\big)\Big) + \log\left( \left| \det\left( \frac{\partial f(x)}{\partial x^T} \right) \right| \right)    (3)

where \frac{\partial f(x)}{\partial x^T} is the Jacobian of f at x.

Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample z ∼ p_Z is drawn in the latent space, and its inverse image x = f^{-1}(z) = g(z) generates a sample in the original space. Computing the density on a point x is accomplished by computing the density of its image f(x) and multiplying by the associated Jacobian determinant |det(∂f(x)/∂x^T)|. See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model.
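As a toy illustration of Equation 3 (ours, not part of the original paper), the sketch below evaluates the exact log-likelihood of scalar data under a hand-built bijection f(x) = tanh(x) with a standard Gaussian prior; the function names are ours.

```python
import numpy as np

def forward(x):
    # A simple invertible map f(x) = tanh(x).
    return np.tanh(x)

def log_det_jacobian(x):
    # log |d tanh(x)/dx| = log(1 - tanh(x)^2)
    return np.log1p(-np.tanh(x) ** 2)

def log_likelihood(x):
    # Equation 3: log p_X(x) = log p_Z(f(x)) + log |det df/dx|
    z = forward(x)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard Gaussian prior p_Z
    return log_pz + log_det_jacobian(x)

print(log_likelihood(np.array([0.0, 0.5, -1.2])))
```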

3.2 Coupling layers

(a) Forward propagation
(b) Inverse propagation
Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part of the input vector, conditioned on the remaining part of the input vector. Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions s and t, significantly increases the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost.

Computing the Jacobian of functions with high-dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This combined with the restriction to bijective functions makes Equation 2 appear impractical for modeling arbitrary distributions.

As shown however in [17], by careful design of the function f, a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms.

We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a D-dimensional input x and d < D, the output y of an affine coupling layer follows the equations

y_{1:d} = x_{1:d}    (4)
y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d})    (5)

where s and t stand for scale and translation, and are functions from \mathbb{R}^d \mapsto \mathbb{R}^{D-d}, and \odot is the Hadamard product or element-wise product (see Figure 2(a)).

3.3 Properties

The Jacobian of this transformation is

\frac{\partial y}{\partial x^T} = \begin{bmatrix} \mathbb{I}_d & 0 \\ \frac{\partial y_{d+1:D}}{\partial x_{1:d}^T} & \operatorname{diag}\big(\exp[s(x_{1:d})]\big) \end{bmatrix}    (6)

where \operatorname{diag}\big(\exp[s(x_{1:d})]\big) is the diagonal matrix whose diagonal elements correspond to the vector \exp[s(x_{1:d})]. Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as \exp\big[\sum_j s(x_{1:d})_j\big]. Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of s or t, those functions can be arbitrarily complex. We will make them deep convolutional neural networks. Note that the hidden layers of s and t can have more features than their input and output layers.

Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation (see Figure 2(b)),

x_{1:d} = y_{1:d}    (7)
x_{d+1:D} = \big( y_{d+1:D} - t(y_{1:d}) \big) \odot \exp\big( -s(y_{1:d}) \big)    (8)

meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of s or t, so these functions can be arbitrarily complex and difficult to invert.
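To make Equations 4-8 concrete, here is a minimal NumPy sketch (an illustration, not the paper's code); s and t are stand-in random affine maps rather than the deep convolutional networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
Ws = 0.1 * rng.standard_normal((d, D - d))
Wt = 0.1 * rng.standard_normal((d, D - d))
s = lambda x1: x1 @ Ws  # scale function s: R^d -> R^(D-d)
t = lambda x1: x1 @ Wt  # translation function t: R^d -> R^(D-d)

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    y2 = x2 * np.exp(s(x1)) + t(x1)      # Equations 4-5
    log_det = np.sum(s(x1))              # log|det| of the triangular Jacobian (Eq. 6)
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y):
    y1, y2 = y[:d], y[d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))   # Equations 7-8
    return np.concatenate([y1, x2])

x = rng.standard_normal(D)
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)  # inversion is exact
```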

3.4 Masked convolution

Partitioning can be implemented using a binary mask b, and using the functional form for y,

y = b \odot x + (1 - b) \odot \Big( x \odot \exp\big(s(b \odot x)\big) + t(b \odot x) \Big)    (9)

We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask b is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both s(\cdot) and t(\cdot) are rectified convolutional networks.

Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the 4×4×1 tensor (on the left) into a 2×2×4 tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward.
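For concreteness, one way to construct the two binary masks b of Equation 9 for an h×w×c image tensor (an illustrative sketch; the exact layout conventions are assumptions on our part):

```python
import numpy as np

def checkerboard_mask(h, w, c):
    # 1 where the sum of the spatial coordinates is odd, 0 otherwise,
    # broadcast across all c channels.
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    spatial = (((ii + jj) % 2) == 1).astype(np.float32)
    return np.repeat(spatial[:, :, None], c, axis=2)

def channel_mask(h, w, c):
    # 1 for the first half of the channel dimensions, 0 for the second half.
    b = np.zeros((h, w, c), dtype=np.float32)
    b[:, :, : c // 2] = 1.0
    return b

print(checkerboard_mask(4, 4, 1)[:, :, 0])  # 4x4 checkerboard pattern
print(channel_mask(2, 2, 4)[0, 0])          # first two of four channels selected
```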

3.5 Combining coupling layers

Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)).

The Jacobian determinant of the resulting function remains tractable, relying on the fact that

\frac{\partial (f_b \circ f_a)}{\partial x_a^T}(x_a) = \frac{\partial f_a}{\partial x_a^T}(x_a) \cdot \frac{\partial f_b}{\partial x_b^T}\big(x_b = f_a(x_a)\big)    (10)
\det(A \cdot B) = \det(A)\,\det(B)    (11)

Similarly, its inverse can be computed easily as

(f_b \circ f_a)^{-1} = f_a^{-1} \circ f_b^{-1}    (12)
(a) In this alternating pattern, units which remain identical in one transformation are modified in the next.
(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation.
Figure 4: Composition schemes for affine coupling layers.
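The sketch below (ours, not the paper's code) composes mask-based coupling layers (Equation 9) with alternating masks, so that components left unchanged by one layer are updated by the next; per-layer log-determinants simply add (Equations 10-11) and the inverse applies the layers in reverse order (Equation 12).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
masks = [(np.arange(D) % 2 == (i % 2)).astype(np.float64) for i in range(4)]  # alternating partitions
params = [(0.1 * rng.standard_normal((D, D)),   # toy weights standing in for the s network
           0.1 * rng.standard_normal((D, D)))   # and the t network
          for _ in masks]

def forward(x):
    log_det = 0.0
    for b, (Ws, Wt) in zip(masks, params):
        s = (b * x) @ Ws * (1 - b)                 # s and t only see the masked part b*x
        t = (b * x) @ Wt * (1 - b)
        x = b * x + (1 - b) * (x * np.exp(s) + t)  # Equation 9
        log_det += s.sum()                         # log-dets of the stack add up (Eqs. 10-11)
    return x, log_det

def inverse(y):
    for b, (Ws, Wt) in zip(reversed(masks), reversed(params)):  # reverse order (Eq. 12)
        s = (b * y) @ Ws * (1 - b)
        t = (b * y) @ Wt * (1 - b)
        y = b * y + (1 - b) * (y - t) * np.exp(-s)
    return y

x = rng.standard_normal(D)
z, _ = forward(x)
assert np.allclose(inverse(z), x)
```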

3.6 Multi-scale architecture

We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape 2×2×c, then reshapes them into subsquares of shape 1×1×4c. The squeezing operation transforms an s×s×c tensor into an (s/2)×(s/2)×4c tensor (see Figure 3), effectively trading spatial size for number of channels.
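A NumPy sketch of the squeezing operation (ours, with a channels-last layout assumed):

```python
import numpy as np

def squeeze(x):
    # (H, W, C) -> (H//2, W//2, 4*C): fold each 2x2 spatial neighbourhood into channels.
    h, w, c = x.shape
    x = x.reshape(h // 2, 2, w // 2, 2, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // 2, w // 2, 4 * c)

def unsqueeze(y):
    # Inverse of squeeze: (H, W, 4*C) -> (2*H, 2*W, C).
    h, w, c4 = y.shape
    y = y.reshape(h, w, 2, 2, c4 // 4).transpose(0, 2, 1, 3, 4)
    return y.reshape(2 * h, 2 * w, c4 // 4)

x = np.arange(4 * 4 * 1).reshape(4, 4, 1)
assert squeeze(x).shape == (2, 2, 4)             # the 4x4x1 -> 2x2x4 case of Figure 3
assert np.array_equal(unsqueeze(squeeze(x)), x)  # squeezing is invertible
```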

At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks.

Propagating a D-dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)),

h^{(0)} = x    (13)
(z^{(i+1)}, h^{(i+1)}) = f^{(i+1)}(h^{(i)})    (14)
z^{(L)} = f^{(L)}(h^{(L-1)})    (15)
z = (z^{(1)}, \dots, z^{(L)})    (16)

In our experiments, we use this operation for i < L. The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing f^{(i)} (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in s and t is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16).
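The recursion of Equations 13-16 can be sketched as follows (hypothetical helper names; each scale function stands in for the coupling-squeeze-coupling block at one scale and is assumed to return its output together with its log-determinant):

```python
import numpy as np

def multi_scale_forward(x, scale_fns):
    """Equations 13-16: at every scale but the last, factor out half of the
    dimensions as z^(i) and keep transforming the rest; concatenate all z at the end."""
    zs, log_det, h = [], 0.0, x
    for i, f in enumerate(scale_fns):
        h, ld = f(h)                        # coupling-squeeze-coupling at scale i (Eq. 14)
        log_det += ld
        if i < len(scale_fns) - 1:
            half = h.shape[-1] // 2
            zs.append(h[..., :half])        # factored-out (Gaussianized) variables
            h = h[..., half:]               # remaining variables keep flowing
    zs.append(h)                            # final scale (Eq. 15)
    return np.concatenate([z.reshape(-1) for z in zs]), log_det  # Eq. 16

identity_scale = lambda h: (h, 0.0)         # trivial stand-in for a scale function
z, _ = multi_scale_forward(np.ones((2, 2, 8)), [identity_scale] * 3)
assert z.size == 2 * 2 * 8                  # all dimensions are preserved
```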

As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features as shown in Appendix D.

Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also significantly reduces the amount of computation and memory used by the model, allowing us to train larger models.

3.7 Batch normalization

To further improve the propagation of training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in s and t. As described in Appendix E, we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches.

We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics \tilde{\mu} and \tilde{\sigma}^2, the rescaling function

x \mapsto \frac{x - \tilde{\mu}}{\sqrt{\tilde{\sigma}^2 + \epsilon}}    (17)

has a Jacobian determinant

\left( \prod_i \big( \tilde{\sigma}_i^2 + \epsilon \big) \right)^{-\frac{1}{2}}    (18)

This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65].

We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.
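As a small illustration of Equations 17-18 (ours, not the paper's code), the rescaling and the per-sample log-determinant it contributes:

```python
import numpy as np

def batch_norm_rescale(x, eps=1e-5):
    # x: (batch, dim). Equation 17: rescale with the batch statistics mu, sigma^2.
    mu, var = x.mean(axis=0), x.var(axis=0)
    y = (x - mu) / np.sqrt(var + eps)
    # Equation 18: each sample contributes log|det| = -0.5 * sum_i log(sigma_i^2 + eps).
    log_det = -0.5 * np.sum(np.log(var + eps))
    return y, log_det

y, log_det = batch_norm_rescale(np.random.default_rng(0).standard_normal((16, 4)))
```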

4 Experiments

4.1 Procedure

The algorithm described in Equation 2 shows how to learn distributions on an unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in [0, 256]^D after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of logit(α + (1 − α) ⊙ x/256), where α is picked here as 0.05. We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to include horizontal flips of the training examples.
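A sketch of this preprocessing as we read it (not the paper's code); the log-determinant of the map is what gets accounted for when reporting log-likelihoods and bits per dimension:

```python
import numpy as np

ALPHA = 0.05  # boundary constant alpha, as stated above

def preprocess(x):
    # x: pixel values in [0, 256). Map to logit(alpha + (1 - alpha) * x / 256).
    p = ALPHA + (1 - ALPHA) * x / 256.0
    y = np.log(p) - np.log1p(-p)  # logit
    # log|det| of the map: sum over dimensions of log of d logit(p)/dx.
    log_det = np.sum(np.log(1 - ALPHA) - np.log(256.0) - np.log(p) - np.log1p(-p))
    return y, log_det

y, log_det = preprocess(np.array([0.0, 127.0, 255.0]))
```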

We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], and CelebFaces Attributes (CelebA) [41]. More specifically, we train on the 32×32 and 64×64 downsampled versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the images so that the smallest side is 96 pixels and take random crops of 64×64. For CelebA, we use the same procedure as in [38]: we take an approximately central crop of 148×148 then resize it to 64×64.

We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions s, we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function t has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a 4×4×c tensor. For datasets of images of size 32×32, we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size 64×64. We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an L2 regularization on the weight scale parameters with coefficient 5·10^{-5}.

We set the prior p_Z to be an isotropic unit norm Gaussian. However, any distribution could be used for p_Z, including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.

4.2 Results

Dataset                  PixelRNN [46]   Real NVP      Conv DRAW [22]   IAF-VAE [34]
CIFAR-10                 3.00            3.49          < 3.59           < 3.28
Imagenet (32x32)         3.86 (3.83)     4.28 (4.26)   < 4.40 (4.35)
Imagenet (64x64)         3.63 (3.57)     3.98 (3.75)   < 4.10 (4.04)
LSUN (bedroom)                           2.72 (2.70)
LSUN (tower)                             2.81 (2.78)
LSUN (church outdoor)                    3.08 (2.94)
CelebA                                   3.02 (2.97)
Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parentheses for reference).

We show in Table 1 that the number of bits per dimension, while not improving over the Pixel RNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.

Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet (32×32), Imagenet (64×64), CelebA, LSUN (bedroom).

We show in Figure 5 samples generated from the model, with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity over sample quality in a limited capacity setting. As a result, our model sometimes outputs highly improbable samples, as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed-form reconstruction cost like an L2 norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows.

Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet, LSUN (tower), LSUN (bedroom).

We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples z_{(1)}, z_{(2)}, z_{(3)}, z_{(4)}, parametrized by two parameters φ and φ′ by

z = \cos(\phi)\big( \cos(\phi')\, z_{(1)} + \sin(\phi')\, z_{(2)} \big) + \sin(\phi)\big( \cos(\phi')\, z_{(3)} + \sin(\phi')\, z_{(4)} \big)    (19)

We project the resulting manifold back into the data space by computing g(z). Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).
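A sketch of the interpolation in Equation 19 (illustration only; z1, ..., z4 are the latent codes f(x) of four examples, and the grid of angles is an assumption on our part):

```python
import numpy as np

def manifold_grid(z1, z2, z3, z4, n=8):
    # Equation 19: a 2-D manifold in latent space parametrized by phi and phi'.
    phis = np.linspace(0.0, np.pi / 2, n)   # assumed range for the two angles
    grid = []
    for phi in phis:
        row = []
        for phip in phis:
            z = (np.cos(phi) * (np.cos(phip) * z1 + np.sin(phip) * z2)
                 + np.sin(phi) * (np.cos(phip) * z3 + np.sin(phip) * z4))
            row.append(z)                   # each z would be decoded with g to get an image
        grid.append(row)
    return np.array(grid)                   # shape: (n, n, latent_dim)

zs = np.random.default_rng(0).standard_normal((4, 32))
print(manifold_grid(*zs).shape)             # (8, 8, 32)
```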

5 Discussion and conclusion

In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative models achieves competitive performance, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual network architectures [60].

This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows however a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work.

Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. Moreover, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper.

The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous Q-learning [23], or find representations where local linear Gaussian approximations are more appropriate [67].

6 Acknowledgments

The authors thank the developers of TensorFlow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aäron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper.

References

Appendix A Samples


Figure 7: Samples from a model trained on Imagenet.
Figure 8: Samples from a model trained on CelebA.
Figure 9: Samples from a model trained on LSUN (bedroom category).
Figure 10: Samples from a model trained on LSUN (church outdoor category).
Figure 11: Samples from a model trained on LSUN (tower category).

Appendix B Manifold


Figure 12: Manifold from a model trained on Imagenet. Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ and the y-axis to φ′.
Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ and the y-axis to φ′.
Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ and the y-axis to φ′.
Figure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ and the y-axis to φ′.
Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ and the y-axis to φ′.

Appendix C Extrapolation

Inspired by the texture generation work of [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images twice or ten times as large as those present in the dataset. As we can observe in the following figures, our model seems to successfully create a "texture" representation of the dataset while maintaining spatial smoothness throughout the image. Our convolutional architecture is only aware of the position of the considered pixel through edge effects in convolutions, therefore our model is similar to a stationary process. This also explains why these samples are more consistent in LSUN, where the training data was obtained using random crops.

(a)
(b)
Figure 17: Samples generated at twice or ten times the training set image size, from a model trained on Imagenet.
(a)
(b)
Figure 18: Samples generated at twice or ten times the training set image size, from a model trained on CelebA.
(a)
(b)
Figure 19: Samples generated at twice or ten times the training set image size, from a model trained on LSUN (bedroom category).
(a)
(b)
Figure 20: Samples generated at twice or ten times the training set image size, from a model trained on LSUN (church outdoor category).
(a)
(b)
Figure 21: Samples generated at twice or ten times the training set image size, from a model trained on LSUN (tower category).

Appendix D Latent variables semantic

As in [22], we further try to grasp the semantics of the latent variables learned at each layer by doing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard Gaussian, increasing the highest level affected by this resampling. As we can see in the following figures, the semantics of our latent space seem to operate at a graphical level rather than at the level of higher concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this limitation.

Figure 22: Conceptual compression from a model trained on Imagenet. The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right.
Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right.
Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right.
Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right.
Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right.

Appendix E Batch normalization

We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics \tilde{\mu}_t, \tilde{\sigma}_t^2 and the current batch statistics \hat{\mu}_t, \hat{\sigma}_t^2,

\tilde{\mu}_{t+1} = \rho \tilde{\mu}_t + (1 - \rho)\hat{\mu}_t    (20)
\tilde{\sigma}_{t+1}^2 = \rho \tilde{\sigma}_t^2 + (1 - \rho)\hat{\sigma}_t^2    (21)

where \rho is the momentum. When using \tilde{\mu}_{t+1} and \tilde{\sigma}_{t+1}^2, we only propagate gradient through the current batch statistics \hat{\mu}_t and \hat{\sigma}_t^2. We observe that using this lag helps the model train with very small minibatches.
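A minimal sketch of this running-average variant (our reading, in NumPy; the momentum value is an assumption):

```python
import numpy as np

class RunningBatchNorm:
    """Normalize with moving-average statistics updated from each batch (Eqs. 20-21)."""
    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.mu = np.zeros(dim)
        self.var = np.ones(dim)
        self.rho = momentum
        self.eps = eps

    def __call__(self, x):
        # Current batch statistics (the only part gradients would flow through).
        batch_mu, batch_var = x.mean(axis=0), x.var(axis=0)
        # Equations 20-21: update the moving averages with momentum rho.
        self.mu = self.rho * self.mu + (1 - self.rho) * batch_mu
        self.var = self.rho * self.var + (1 - self.rho) * batch_var
        y = (x - self.mu) / np.sqrt(self.var + self.eps)
        log_det = -0.5 * np.sum(np.log(self.var + self.eps))  # per-sample log|det|
        return y, log_det

bn = RunningBatchNorm(dim=4)
y, log_det = bn(np.random.default_rng(0).standard_normal((8, 4)))
```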

We used batch normalization with a moving average for our results on CIFAR-10.

Appendix F Attribute change

Additionally, we exploit the attribute information y in CelebA to build a conditional model, i.e. the invertible function f from image to latent variable uses the labels in y to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images x with their original attributes y and decode them using a new set of attributes y′, built by shuffling the original attributes inside the batch. We obtain the new images x′ = g(f(x; y); y′).

We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, like position and background.

Figure 27: Examples from the CelebA dataset.
Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig 27, including position and background.