Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations

05/24/2017 · Diane Bouchacourt et al. · University of Oxford, Microsoft

We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control. Specifically, we want to use minimal supervision to learn a latent representation that reflects the semantics behind a specific grouping of the data, where within a group the samples share a common factor of variation. For example, consider a collection of face images grouped by identity. We wish to anchor the semantics of the grouping into a relevant and disentangled representation that we can easily exploit. However, existing deep probabilistic models often assume that the observations are independent and identically distributed. We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference. Quantitative and qualitative evaluations show that the ML-VAE model (i) learns a semantically meaningful disentanglement of grouped data, (ii) enables manipulation of the latent representation, and (iii) generalises to unseen groups.


1 Introduction

Representation learning refers to the task of learning a representation of the data that can be easily exploited, see Bengio et al. (2013). In this work, our goal is to build a model that disentangles the data into separate salient factors of variation and easily applies to a variety of tasks and different types of observations. There are multiple difficulties towards this goal. First, the representative power of the learned representation depends on the information one wishes to extract from the data. Second, the multiple factors of variation impact the observations in a complex and correlated manner. Finally, we have access to very little, if any, supervision over these different factors. If there is no specific meaning to embed in the desired representation, the infomax principle, described in Linsker (1988), states that an optimal representation is one of bounded entropy that retains as much information about the data as possible. However, we are interested in learning a semantically meaningful disentanglement of interesting latent factors. How can we anchor semantics in high-dimensional representations?

We propose group-level supervision: observations are organised in groups, where within a group the observations share a common but unknown value for one of the factors of variation. For example, take images of circles and stars, with possible colors green, yellow and blue. A possible grouping organises the images by shape (circle or star). Grouped observations allow us to anchor the semantics of the data (shape and color) into the learned representation. Grouped observations are a form of weak supervision that is inexpensive to collect: in the above shape example, we do not even need to know that shape is the factor of variation that defines the grouping.

Deep probabilistic generative models learn expressive representations of a given set of observations. Among them, Kingma and Welling (2014); Rezende et al. (2014) proposed the very successful Variational Autoencoder (VAE). In the VAE model, a network (the encoder) encodes an observation into its latent representation (or latent code), and a generative network (the decoder) decodes an observation from a latent code. The VAE model performs amortised inference: the observations parametrise the posterior distribution of the latent code, and all observations share a single set of parameters to learn. This allows efficient test-time inference. However, the VAE model assumes that the observations are independent and identically distributed (i.i.d.). In the case of grouped observations, this assumption no longer holds. Considering the toy example of objects grouped by shape, the VAE model considers and processes each observation independently, as shown in Figure 1(a). The VAE model takes no advantage of the knowledge of the grouping.
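As a concrete illustration of amortised inference, the sketch below implements a minimal VAE in PyTorch. The fully connected architecture and all sizes are our own illustrative assumptions, not those of any model in this paper.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: a single set of encoder parameters is shared by all
    observations (amortised inference)."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar
```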

How can we build a probabilistic model that easily incorporates this grouping information and learns the corresponding relevant representation? We could enforce equal representations within groups in a graphical model, using stochastic variational inference (SVI) for approximate posterior inference, see Hoffman et al. (2013). However, such a model paired with SVI cannot take advantage of efficient amortised inference. As a result, SVI requires more passes over the training data and expensive test-time inference. Our proposed model retains the advantages of amortised inference while using the grouping information in a simple yet flexible manner.

(a) Original VAE assumes i.i.d. observations.
(b) ML-VAE accumulates evidence.
(c) ML-VAE generalises to unseen shapes and colors and allows control on the latent code.
Figure 1: In (a), the VAE of Kingma and Welling (2014); Rezende et al. (2014) assumes i.i.d. observations. In comparison, (b) and (c) show our ML-VAE working at the group level. In (b) and (c), the upper part of the latent code is color and the lower part is shape. Black shapes show the ML-VAE accumulating evidence on the shape from the two grey shapes. E is the Encoder, D is the Decoder, G is the grouping operation. Best viewed in color.


We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model that learns a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level. Without loss of generality we assume that there are two latent factors, style and content. The content is common to a group, while the style can differ within the group. We emphasise that our approach is general in that there can be more than two factors. Moreover, for the same set of observations, multiple groupings are possible along different factors of variation. To use grouped observations, the ML-VAE uses a grouping operation that separates the latent representation into two parts, style and content, and ensures that samples in the same group share the same content. This in turn makes the encoder learn a semantically meaningful disentanglement. This process is shown in Figure 1(b). For illustrative purposes, the upper part of the latent code represents the style (color) and the lower part the content (shape: circle or star). In Figure 1(b), after being encoded, the two circles share the same shape in the lower part of the latent code (corresponding to content). The variations within the group (style), in this case color, get naturally encoded in the upper part. Moreover, while the ML-VAE handles the case of a single sample in a group, if there are multiple samples in a group the grouping operation increases the certainty on the content. This is shown in Figure 1(b), where black circles show that the model has accumulated evidence of the content (circle) from the two disentangled codes (grey circles). The grouping operation does not need to know that the data are grouped by shape, nor what shape and color represent; the only supervision is the organisation of the data in groups. At test time, the ML-VAE generalises to unseen realisations of the factors of variation, for example the purple triangle in Figure 1(c). Using the disentangled representation, we can control the latent code and perform operations such as swapping part of the latent representation to generate new observations, as shown in Figure 1(c). To sum up, our contributions are as follows.


  • We propose the ML-VAE model to learn disentangled representations from group-level supervision;

  • we extend amortised inference to the case of non-i.i.d. observations;

  • we demonstrate experimentally that the ML-VAE model learns a semantically meaningful disentanglement of grouped data;

  • we demonstrate manipulation of the latent representation and generalisation to unseen groups.

2 Related Work

Research has actively focused on the development of deep probabilistic models that learn to represent the distribution of the data. Such models parametrise the learned representation by a neural network. We distinguish between two types of deep probabilistic models. Implicit probabilistic models stochastically map an input random noise to a sample of the modelled distribution. Examples of implicit models include Generative Adversarial Networks (GANs), developed by Goodfellow et al. (2014), and kernel-based models, see Li et al. (2015); Dziugaite et al. (2015); Bouchacourt et al. (2016). The second type of model employs an explicit model distribution and builds on variational inference to learn its parameters. This is the case of the Variational Autoencoder (VAE) proposed by Kingma and Welling (2014); Rezende et al. (2014). Both types of model have been extended to the representation learning framework, where the goal is to learn a representation that can be effectively employed. In the unsupervised setting, the InfoGAN model of Chen et al. (2016) adapts GANs to the learning of an interpretable representation with the use of mutual information theory, and Wang and Gupta (2016) use two sequentially connected GANs. The β-VAE model of Higgins et al. (2017) encourages the VAE model to optimally use its capacity by up-weighting the Kullback-Leibler term in the VAE objective, which favors the learning of a meaningful representation. Abbasnejad et al. (2016) use an infinite mixture as the variational approximation to improve performance on semi-supervised tasks. Contrary to our setting, these unsupervised models do not anchor a specific meaning into the disentanglement. In the semi-supervised setting, i.e. when an output label is partly available, Siddharth et al. (2017) learn a disentangled representation by introducing an auxiliary variable. While related to our work, this model defines a semi-supervised factor of variation; in the example of multi-class classification, it would not generalise to unseen classes. We define our model in the grouping supervision setting, and can therefore handle unseen classes at test time.

The VAE model has also been extended to the learning of representations that are invariant to a certain source of variation. In this context, Alemi et al. (2017) build a meaningful representation by using the Information Bottleneck (IB) principle, presented by Tishby et al. (1999). The Variational Fair Autoencoder presented by Louizos et al. (2016) encourages independence between the latent representation and a sensitive factor with the use of a Maximum Mean Discrepancy (MMD) based regulariser, while Edwards and Storkey (2015) use adversarial training. Chen et al. (2017) control which part of the data gets encoded by the encoder and employ an autoregressive architecture to model the part that is not encoded. While related to our work, these models require supervision on the source of variation to which the representation should be invariant. In the specific case of learning interpretable representations of images, Kulkarni et al. (2015) train an autoencoder with minibatches in which only one latent factor changes. Finally, Mathieu et al. (2016) learn a representation invariant to a certain source of variation by combining autoencoders trained in an adversarial manner.

Multiple works perform image-to-image translation between two unpaired image collections using GAN-based architectures, see Zhu et al. (2017); Kim et al. (2017); Yi et al. (2017); Fu et al. (2017); Taigman et al. (2017); Shrivastava et al. (2017); Bousmalis et al. (2016), while Liu et al. (2017) employ a combination of VAEs and GANs. Interestingly, all these models require a form of weak supervision that is similar to ours: we can think of the two unpaired image collections as two groups of observed data sharing an image type (painting versus photograph, for example). Our work differs from theirs in that we generalise to any type of data and any number of groups; it is unclear how to extend the cited models to the setting of more than two groups and other types of data. Also, we do not employ multiple GAN models but a single VAE-type model. While not directly related to our work, Murali et al. (2017) perform computer program synthesis using grouped user-supplied example programs, and Allamanis et al. (2017) learn continuous semantic representations of mathematical and logical expressions. Finally, we mention the concurrent recent work of Donahue et al. (2017), which disentangles the latent space of GANs.

3 Model

3.1 Amortised Inference with the Variational Autoencoder (VAE) Model

We denote the observations $X = (X_1, \dots, X_N)$. In the probabilistic model framework, we assume that the observations $X$ are generated by $Z$, the unobserved (latent) variables. The goal is to infer the values of the latent variables that generated the observations, that is, to calculate the posterior distribution over the latent variables, $p_\theta(Z \mid X)$, which is often intractable. The original VAE model proposed by Kingma and Welling (2014); Rezende et al. (2014) approximates the intractable posterior with a variational approximation $q_\phi(Z \mid X)$, where $\phi$ are the variational parameters. Contrary to Stochastic Variational Inference (SVI), the VAE model performs amortised variational inference: the observations parametrise the posterior distribution of the latent code, and all observations share a single set of parameters $\phi$. This allows efficient test-time inference. Figure 2 shows the SVI and VAE graphical models; we highlight in red that the SVI model does not take advantage of amortised inference.

(a) SVI graphical model.

(b) VAE graphical model.
Figure 2: VAE Kingma and Welling (2014); Rezende et al. (2014) and SVI Hoffman et al. (2013) graphical models. Solid lines denote the generative model, dashed lines denote the variational approximation.

3.2 The ML-VAE for Grouped Observations

We now assume that the observations are organised in a set $\mathcal{G}$ of distinct groups, with a factor of variation that is shared among all observations within a group. The grouping forms a partition of $\{X_1, \dots, X_N\}$: each group $G$ is a subset of arbitrary size, disjoint from all other groups. Without loss of generality, we separate the latent representation into two latent variables, style $S$ and content $C$. The content is the factor of variation along which the groups are formed. In this context, referred to as the grouped observations case, the latent representation has a single content latent variable $C_G$ per group $G$. SVI can easily be adapted by enforcing that all observations within a group share a single content latent variable while the style remains untied, see Figure 3(a). However, employing SVI requires iterative test-time inference since it does not perform amortised inference. Experimentally, it also requires more passes over the training data, as we show in the supplementary material. The VAE model assumes that the observations are i.i.d., and therefore takes no advantage of the grouping. The question is then: how do we perform amortised inference with non-i.i.d., grouped observations? To tackle this deficiency we propose the Multi-Level VAE (ML-VAE).

We denote by $X_G$ the observations corresponding to group $G$. We explicitly model each $X_i$ in $X_G$ to have its own independent latent representation $S_i$ for the style, while $C_G$ is a single latent variable shared among the group for the content. The variational approximation factorises as $q_\phi(C_G, S_G \mid X_G) = q_{\phi_c}(C_G \mid X_G)\, q_{\phi_s}(S_G \mid X_G)$, where $\phi_c$ and $\phi_s$ are the variational parameters for content and style respectively. We assume that the styles are independent within a group, so $q_{\phi_s}(S_G \mid X_G) = \prod_{i \in G} q_{\phi_s}(S_i \mid X_i)$ also factorises. Finally, given style and content, the likelihood decomposes over the samples, $p_\theta(X_G \mid C_G, S_G) = \prod_{i \in G} p_\theta(X_i \mid C_G, S_i)$. This results in the graphical model shown in Figure 3(b).

(a) SVI for grouped observations.

(b) Our ML-VAE.
Figure 3: SVI Hoffman et al. (2013) and our ML-VAE graphical models. Solid lines denote the generative model, dashed lines denote the variational approximation.
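To make this factorisation concrete, below is a minimal sketch (PyTorch, with assumed names and sizes) of an ML-VAE-style encoder: each observation is encoded independently into style parameters and content parameters, and the per-sample content parameters are later fused into the single group posterior $q_{\phi_c}(C_G \mid X_G)$ by the grouping operation of Section 3.3.

```python
import torch
import torch.nn as nn

class MLVAEEncoder(nn.Module):
    """Encodes every observation in a group independently into style
    posterior parameters and content posterior parameters; the per-sample
    content parameters are fused into q(C_G | X_G) later."""
    def __init__(self, x_dim=784, s_dim=8, c_dim=8, h_dim=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.style_mu = nn.Linear(h_dim, s_dim)
        self.style_logvar = nn.Linear(h_dim, s_dim)
        self.content_mu = nn.Linear(h_dim, c_dim)
        self.content_logvar = nn.Linear(h_dim, c_dim)

    def forward(self, x_group):  # x_group: (group_size, x_dim)
        h = self.body(x_group)
        style = (self.style_mu(h), self.style_logvar(h))      # one per sample
        content = (self.content_mu(h), self.content_logvar(h))  # fused later
        return style, content
```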

We do not assume i.i.d. observations, but independence at the level of the grouped observations. The average marginal log-likelihood therefore decomposes over the groups of observations,

$$\frac{1}{|\mathcal{G}|} \log p_\theta(X) = \frac{1}{|\mathcal{G}|} \sum_{G \in \mathcal{G}} \log p_\theta(X_G). \qquad (1)$$

For each group, we can rewrite the marginal log-likelihood as the sum of the group Evidence Lower Bound $\mathcal{L}(G; \theta, \phi)$ and the Kullback-Leibler divergence between the true posterior $p_\theta(C_G, S_G \mid X_G)$ and the variational approximation $q_\phi(C_G, S_G \mid X_G)$. Since this Kullback-Leibler divergence is non-negative, the first term, $\mathcal{L}(G; \theta, \phi)$, is a lower bound on the marginal log-likelihood,

$$\log p_\theta(X_G) = \mathcal{L}(G; \theta, \phi) + D_{\mathrm{KL}}\big(q_\phi(C_G, S_G \mid X_G) \,\|\, p_\theta(C_G, S_G \mid X_G)\big) \geq \mathcal{L}(G; \theta, \phi). \qquad (2)$$

The Evidence Lower Bound $\mathcal{L}(G; \theta, \phi)$ for a group is

$$\mathcal{L}(G; \theta, \phi) = \mathbb{E}_{q_\phi(C_G, S_G \mid X_G)}\big[\log p_\theta(X_G \mid C_G, S_G)\big] - D_{\mathrm{KL}}\big(q_\phi(C_G, S_G \mid X_G) \,\|\, p(C_G, S_G)\big). \qquad (3)$$

We define the average group ELBO over the dataset, $\bar{\mathcal{L}} = \frac{1}{|\mathcal{G}|} \sum_{G \in \mathcal{G}} \mathcal{L}(G; \theta, \phi)$, and we maximise $\bar{\mathcal{L}}$. It is a lower bound on the average marginal log-likelihood of Equation (1) because each group Evidence Lower Bound $\mathcal{L}(G; \theta, \phi)$ is a lower bound on $\log p_\theta(X_G)$; therefore,

$$\frac{1}{|\mathcal{G}|} \log p_\theta(X) \geq \frac{1}{|\mathcal{G}|} \sum_{G \in \mathcal{G}} \mathcal{L}(G; \theta, \phi). \qquad (4)$$

In comparison, the original VAE model maximises the average ELBO over individual samples. In practice, we build an estimate of $\bar{\mathcal{L}}$ using minibatches of groups $\mathcal{B} \subseteq \mathcal{G}$,

$$\hat{\mathcal{L}} = \frac{1}{|\mathcal{B}|} \sum_{G \in \mathcal{B}} \mathcal{L}(G; \theta, \phi). \qquad (5)$$

If we take each group $G$ in its entirety, this is an unbiased estimate. When the group sizes are too large, for efficiency, we subsample within each group, and the estimate is biased. We discuss the bias in the supplementary material. The resulting algorithm is shown in Algorithm 1.

for each epoch do
    Sample a minibatch of groups $\mathcal{B} \subseteq \mathcal{G}$
    for each group $G \in \mathcal{B}$ do
        for each sample $X_i \in G$ do
            Encode $X_i$ into style $q_{\phi_s}(S_i \mid X_i)$ and content $q_{\phi_c}(C_G \mid X_i)$
        end for
        Construct $q_{\phi_c}(C_G \mid X_G)$ from the per-sample $q_{\phi_c}(C_G \mid X_i)$ (grouping operation, Section 3.3)
        for each sample $X_i \in G$ do
            Sample $S_i \sim q_{\phi_s}(S_i \mid X_i)$ and $C_G \sim q_{\phi_c}(C_G \mid X_G)$
            Decode $(S_i, C_G)$ to obtain the reconstruction of $X_i$
        end for
    end for
    Update $\theta$, $\phi_s$, $\phi_c$ by taking a gradient step on Equation (5)
end for
Algorithm 1: ML-VAE training algorithm.
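The sketch below illustrates one training step of Algorithm 1 on a single group. It assumes an encoder/decoder shaped like the encoder sketch of Section 3.2, standard Normal priors on style and content, and a Gaussian likelihood (hence the squared-error reconstruction term); the grouping operation anticipates Equation (7). This is a hedged illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def kl_std_normal(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over all dimensions.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

def group_step(encoder, decoder, x_group):
    # x_group: (group_size, x_dim); encoder returns per-sample style and
    # content posterior parameters, as in the Section 3.2 sketch.
    (s_mu, s_logvar), (c_mu, c_logvar) = encoder(x_group)
    # Grouping operation: product of Normal densities, Equation (7).
    prec = torch.exp(-c_logvar)               # per-sample precisions
    g_var = 1.0 / prec.sum(dim=0)             # group content variance
    g_mu = g_var * (c_mu * prec).sum(dim=0)   # group content mean
    # Reparameterised samples: one style per sample, one shared content.
    s = s_mu + torch.exp(0.5 * s_logvar) * torch.randn_like(s_mu)
    c = g_mu + torch.sqrt(g_var) * torch.randn_like(g_mu)
    c_rep = c.expand(x_group.size(0), -1)     # share the content in the group
    x_rec = decoder(torch.cat([s, c_rep], dim=1))
    # Negative group ELBO, Equation (3): reconstruction plus KL terms.
    rec = F.mse_loss(x_rec, x_group, reduction="sum")
    return rec + kl_std_normal(s_mu, s_logvar) + kl_std_normal(g_mu, torch.log(g_var))
```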

For each group $G$, the grouping step of Algorithm 1 builds the group content distribution $q_{\phi_c}(C_G \mid X_G)$ by accumulating information from the result of encoding each sample in $G$. The question is: how can we accumulate this information in a relevant manner to compute the group content distribution?

3.3 Accumulating Group Evidence using a Product of Normal densities

Our idea is to build the variational approximation $q_{\phi_c}(C_G \mid X_G)$ of the single group content variable from the encodings of the grouped observations $X_G$. While any distribution could be employed, we focus on using a product of Normal density functions. Other possibilities, such as a mixture of density functions, are discussed in the supplementary material.

We construct the probability density function of the latent variable $C_G$ taking the value $c$ by multiplying $|G|$ Normal density functions, each of them evaluating the probability of $c$ given $X_i$,

$$q_{\phi_c}(C_G = c \mid X_G) \propto \prod_{i \in G} q_{\phi_c}(C_G = c \mid X_i), \qquad (6)$$

where we assume each $q_{\phi_c}(C_G = c \mid X_i)$ to be a Normal distribution $\mathcal{N}(\mu_i, \sigma_i^2)$. Murphy (2007) shows that the product of two Gaussian densities is proportional to a Gaussian density. Similarly, in the supplementary material we show that $q_{\phi_c}(C_G \mid X_G)$ is the density function of a Normal distribution with mean $\mu_G$ and variance $\sigma_G^2$ given by

$$\mu_G = \sigma_G^2 \sum_{i \in G} \mu_i \sigma_i^{-2}, \qquad \sigma_G^2 = \Big( \sum_{i \in G} \sigma_i^{-2} \Big)^{-1}. \qquad (7)$$

It is interesting to note that the variance of the resulting Normal distribution, $\sigma_G^2$, is the inverse of the sum of the group observations' inverse variances $\sigma_i^{-2}$. Therefore, we expect that increasing the number of observations in a group decreases the variance of the resulting distribution. This is what we refer to as "accumulating evidence". We empirically investigate this effect in Section 4. Since the resulting distribution is a Normal distribution, the Kullback-Leibler term involving $q_{\phi_c}(C_G \mid X_G)$ in Equation (3) can be evaluated in closed form. We also assume a Normal distribution for the style approximation $q_{\phi_s}(S_i \mid X_i)$.
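A small numeric check of Equation (7) makes this accumulation of evidence visible: the fused variance shrinks monotonically as observations are added to the group. The means and variances below are arbitrary illustrative values.

```python
import numpy as np

def product_of_normals(mus, variances):
    """Mean and variance of the Normal proportional to the product of
    the N(mu_i, var_i) densities, as in Equation (7)."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mu = var * (np.asarray(mus, dtype=float) * precisions).sum()
    return mu, var

mus = [0.9, 1.1, 1.0, 1.2]
variances = [0.5, 0.4, 0.6, 0.5]
for n in range(1, 5):
    mu, var = product_of_normals(mus[:n], variances[:n])
    print(f"group size {n}: mean {mu:.3f}, variance {var:.3f}")
# The printed variance decreases monotonically with the group size.
```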

4 Experiments

We evaluate the ML-VAE on images; other forms of data are possible and we leave them for future work. In all experiments we use the product of Normal densities presented in Section 3.3 to construct the content latent representation. Our goal with the experiments is twofold. First, we want to evaluate the ability of the ML-VAE to learn a semantically meaningful disentangled representation. Second, we want to explore the impact of "accumulating evidence" as described in Section 3.3. Indeed, when we encode test images two strategies are possible: strategy (i) disregards the grouping information of the test samples, i.e. each test image forms its own group; strategy (ii) uses the grouping information of the test samples, i.e. takes multiple test images per identity to construct the content latent representation.

MNIST dataset.

We evaluate the ML-VAE on MNIST Lecun et al. (1998). We consider the data grouped by digit label, i.e. the content latent code $C$ should encode the digit label. We randomly separate the 60,000 training examples into training and validation sets, and use the standard MNIST test set. For both the encoder and decoder, we use a simple architecture of linear layers (detailed in the supplementary material).

MS-Celeb-1M dataset.

Next, we evaluate the ML-VAE on the face-aligned version of the MS-Celeb-1M dataset Guo et al. (2016). The dataset was constructed by retrieving approximately 100 images per celebrity from popular search engines, and noise has not been removed from the dataset. For each query, we consider the top ten results (note that there were multiple queries per celebrity, so some identities have more than ten images). This creates a dataset of entities and their associated images, and we group the data by identity. Importantly, we randomly separate the dataset into disjoint sets of identities for the training, validation and testing datasets; this way we evaluate the ability of the ML-VAE to generalise to unseen groups (unseen identities) at test time. The encoder and decoder network architectures, composed of convolutional, deconvolutional and linear layers, are detailed in the supplementary material. We resize the images to fit the network architecture.

Qualitative Evaluation.

As explained in Mathieu et al. (2016), there is no standard benchmark dataset or metric to evaluate a model on its disentanglement performance. Therefore, similarly to Mathieu et al. (2016), we perform qualitative and quantitative evaluations. We qualitatively assess the relevance of the learned representation by performing operations on the latent space. First, we perform swapping: we encode test images, draw a sample per image from its style and content latent representations, and swap the style between images. Second, we perform interpolation: we encode a pair of test images, draw one sample from each image's style and content latent codes, and linearly interpolate between the style and content samples. We present the results of swapping and interpolation with evidence accumulated from the other images in the group (strategy (ii)). Results without accumulated evidence (strategy (i)) are also convincing and available in the supplementary material. We also perform generation: for a given test identity, we build the content latent code by accumulating images of this identity, then take the mean of the resulting content distribution and generate images with styles sampled from the prior. Finally, in order to explore the benefits of taking into account the grouping information, for a given test identity we reconstruct all images of this identity using both strategies and show the resulting images.
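For concreteness, a sketch of the swapping operation is given below; `decode` and the posterior means are assumed to come from a trained ML-VAE, and all names are our own.

```python
import torch

def swap_grid(style_mu, content_mu, decode):
    # style_mu: (n, s_dim) and content_mu: (n, c_dim) posterior means from a
    # trained model; decode maps concatenated [style, content] codes to images.
    n = style_mu.size(0)
    rows = []
    for i in range(n):
        s_i = style_mu[i].expand(n, -1)                       # fix style i
        rows.append(decode(torch.cat([s_i, content_mu], 1)))  # vary content
    return torch.stack(rows)  # rows: fixed style; columns: fixed content
```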

(a) MNIST, test dataset.
(b) MS-Celeb-1M, test dataset.
Figure 4: Swapping. The first row and first column are test data samples (green boxes), the second row and column are reconstructed samples (blue boxes), and the rest are swapped reconstructed samples (red boxes). Each row is a fixed style and each column is a fixed content. Best viewed in color on screen.
(a) Generation: the green-boxed images are all the test data images for this identity. On the right, styles are sampled from the prior while using the mean of the grouped images' content latent code.
(b) Interpolation: from top left to bottom right, rows correspond to a fixed style with interpolation on the content, and columns correspond to a fixed content with interpolation on the style.
Figure 5: Left: Generation. Right: Interpolation. Best viewed in color on screen.
(a) The four digits are of the same label. (b) The four images are of the same person.
(c) Quantitative evaluation. For clarity, on MNIST we show results up to a fixed group size, as values stay stationary for larger group sizes (full results in the supplementary material).
Figure 6: Accumulating evidence (acc. ev.). Left column: test data samples; middle column: reconstructed samples without acc. ev.; right column: reconstructed samples with acc. ev. from the four images. In (a), the ML-VAE corrects inference (wrong digit label in first row, second column) with acc. ev. (correct digit label in first row, third column). In (b), where images of the same identity are taken at different ages, the ML-VAE benefits from group information: the facial traits with acc. ev. (third column) are more constant than without acc. ev. (second column). Best viewed in color on screen.

Figure 4 shows the swapping procedure, where the first row and the first column show the test data samples input to the ML-VAE, and the second row and column are reconstructed samples. Each row is a fixed style and each column is a fixed content. We see that the ML-VAE disentangles the factors of variation of the data in a relevant manner. In the case of MS-Celeb-1M, the model encodes the factor of variation that grouped the data, that is, the identity, into the facial traits, which remain constant when we change the style, and encodes the remaining factors (background color and face orientation, for example) into the style. The ML-VAE learns this meaningful disentanglement without knowing that the images are grouped by identity; the only supervision is the organisation of the data into groups. Figure 5 shows interpolation and generation. We see that our model covers the manifold of the data, and that style and content are disentangled. In Figures 6(a) and 6(b), we reconstruct images of the same group with and without taking into account the grouping information. We see that the ML-VAE handles cases where there is no group information at test time, and benefits from accumulating evidence when it is available.

Quantitative Evaluation.

In order to quantitatively evaluate the disentanglement power of the ML-VAE, we use the style latent code $S$ and content latent code $C$ as features for a classification task. The quality of the disentanglement is high if the content $C$ is informative about the class while the style $S$ is not. In the case of MNIST the class is the digit label, and for MS-Celeb-1M the class is the identity. We emphasise that in the case of MS-Celeb-1M the test images are all of unseen classes (unseen identities) at training. We learn to classify the test images with a neural network classifier composed of two linear layers, once using $C$ and once using $S$ as input features. Again we explore the benefits of accumulating evidence: while we construct the variational approximation on the content latent code by accumulating a fixed number of images per class for training the classifier, we accumulate only $k$ images per class at test time, where $k = 1$ corresponds to no group information. As $k$ increases, we expect the performance of the classifier trained on $C$ to improve as the features become more informative, and the performance using the features $S$ to remain constant. We compare to the original VAE model, where we also accumulate evidence by applying the product of Normal densities to the VAE latent code for samples of the same class. The results are shown in Figure 6(c). The ML-VAE content latent code is as informative about the class as the original VAE latent code, both in terms of classification accuracy and conditional entropy. The ML-VAE also provides a relevant disentanglement, as the style remains uninformative about the class. Details on the choice of $k$ and this experiment are in the supplementary material.
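The sketch below shows the kind of probe classifier such an evaluation can use, under our own assumptions about layer sizes and optimiser; it is run once with content codes and once with style codes as `feats`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def disentanglement_probe(feats, labels, n_classes, epochs=100):
    # Train a two-linear-layer classifier on latent features and return its
    # training accuracy. Informative content codes should score high, while
    # well-disentangled style codes should stay near chance level.
    clf = nn.Sequential(nn.Linear(feats.size(1), 256), nn.ReLU(),
                        nn.Linear(256, n_classes))
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clf(feats), labels).backward()
        opt.step()
    with torch.no_grad():
        return (clf(feats).argmax(dim=1) == labels).float().mean().item()
```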

5 Discussion

We proposed the Multi-Level VAE model for learning a meaningful disentanglement from a set of grouped observations. The ML-VAE model handles an arbitrary number of groups of observations, which need not be the same at training and test time. We proposed different methods for incorporating the semantics embedded in the grouping. Experimental evaluations show the relevance of our method, as the ML-VAE learns a semantically meaningful disentanglement, generalises to unseen groups, and enables control of the latent representation. For future work, we wish to apply the ML-VAE to text data.

References