1 Introduction
Representation learning refers to the task of learning a representation of the data that can be easily exploited, see Bengio et al. (2013). In this work, our goal is to build a model that disentangles the data into separate salient factors of variation and that easily applies to a variety of tasks and different types of observations. There are multiple difficulties towards this goal. First, the representative power of the learned representation depends on the information one wishes to extract from the data. Second, the multiple factors of variation impact the observations in a complex and correlated manner. Finally, we have access to very little, if any, supervision over these different factors. If there is no specific meaning to embed in the desired representation, the infomax principle, described in Linsker (1988), states that an optimal representation is one of bounded entropy that retains as much information about the data as possible. However, we are interested in learning a semantically meaningful disentanglement of interesting latent factors. How can we anchor semantics in high-dimensional representations?
We propose group-level supervision: observations are organised in groups, where within a group the observations share a common but unknown value for one of the factors of variation. For example, take images of circles and stars, each coloured green, yellow or blue. A possible grouping organises the images by shape (circle or star). Grouped observations allow us to anchor the semantics of the data (shape and color) into the learned representation. Grouped observations are a form of weak supervision that is inexpensive to collect: in the above shape example, we do not even need to know that shape is the factor of variation that defines the grouping.
Deep probabilistic generative models learn expressive representations of a given set of observations. Among them, Kingma and Welling (2014) and Rezende et al. (2014) proposed the very successful Variational Autoencoder (VAE). In the VAE model, a network (the encoder) encodes an observation into its latent representation (or latent code) and a generative network (the decoder) decodes an observation from a latent code. The VAE model performs amortised inference: the observations parametrise the posterior distribution of the latent code, and all observations share a single set of parameters to learn. This allows efficient test-time inference. However, the VAE model assumes that the observations are independent and identically distributed (i.i.d.). In the case of grouped observations, this assumption no longer holds. Considering the toy example of objects grouped by shape, the VAE model considers and processes each observation independently, as shown in Figure 0(a). The VAE model takes no advantage of the knowledge of the grouping.
How can we build a probabilistic model that easily incorporates this grouping information and learns the corresponding relevant representation? We could enforce equal representations within groups in a graphical model, using stochastic variational inference (SVI) for approximate posterior inference (Hoffman et al., 2013). However, such a model paired with SVI cannot take advantage of efficient amortised inference. As a result, SVI requires more passes over the training data and expensive test-time inference. Our proposed model retains the advantages of amortised inference while using the grouping information in a simple yet flexible manner.
We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model that learns a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and at the observation level. Without loss of generality, we assume that there are two latent factors, style and content: the content is common to a group, while the style can differ within the group. We emphasise that our approach is general in that there can be more than two factors. Moreover, for the same set of observations, multiple groupings are possible along different factors of variation. To use grouped observations, the ML-VAE applies a grouping operation that separates the latent representation into two parts, style and content, such that samples in the same group have the same content. This in turn makes the encoder learn a semantically meaningful disentanglement. This process is shown in Figure 0(b). For illustrative purposes, the upper part of the latent code represents the style (color) and the lower part the content (shape: circle or star).
In Figure 0(b), after being encoded the two circles share the same shape in the lower part of the latent code (corresponding to content). The variations within the group (style), in this case color, get naturally encoded in the upper part. Moreover, while the ML-VAE handles the case of a single sample in a group, if there are multiple samples in a group the grouping operation increases the certainty on the content. This is shown in Figure 0(b), where black circles show that the model has accumulated evidence of the content (circle) from the two disentangled codes (grey circles). The grouping operation does not need to know that the data are grouped by shape, nor what shape and color represent; the only supervision is the organisation of the data in groups. At test-time, the ML-VAE generalises to unseen realisations of the factors of variation, for example the purple triangle in Figure 0(c). Using the disentangled representation, we can control the latent code and perform operations such as swapping part of the latent representation to generate new observations, as shown in Figure 0(c). To sum up, our contributions are as follows.
- We propose the ML-VAE model to learn disentangled representations from group-level supervision;
- we extend amortised inference to the case of non-i.i.d. observations;
- we demonstrate experimentally that the ML-VAE model learns a semantically meaningful disentanglement of grouped data;
- we demonstrate manipulation of the latent representation and generalisation to unseen groups.
2 Related Work
Research has actively focused on the development of deep probabilistic models that learn to represent the distribution of the data. Such models parametrise the learned representation by a neural network. We distinguish between two types of deep probabilistic models. Implicit probabilistic models stochastically map an input random noise to a sample of the modelled distribution. Examples of implicit models include Generative Adversarial Networks (GANs), developed by Goodfellow et al. (2014), and kernel-based models, see Li et al. (2015); Dziugaite et al. (2015); Bouchacourt et al. (2016). The second type of model employs an explicit model distribution and builds on variational inference to learn its parameters. This is the case of the Variational Autoencoder (VAE) proposed by Kingma and Welling (2014); Rezende et al. (2014). Both types of model have been extended to the representation learning framework, where the goal is to learn a representation that can be effectively employed. In the unsupervised setting, the InfoGAN model of Chen et al. (2016) adapts GANs to the learning of an interpretable representation with the use of mutual information theory, and Wang and Gupta (2016) use two sequentially connected GANs. The beta-VAE model of Higgins et al. (2017) encourages the VAE to optimally use its capacity by up-weighting the Kullback-Leibler term in the VAE objective, which favors the learning of a meaningful representation. Abbasnejad et al. (2016) use an infinite mixture as the variational approximation to improve performance on semi-supervised tasks. Contrary to our setting, these unsupervised models do not anchor a specific meaning into the disentanglement. In the semi-supervised setting, i.e. when an output label is partly available, Siddharth et al. (2017) learn a disentangled representation by introducing an auxiliary variable. While related to our work, this model defines a semi-supervised factor of variation; in the example of multi-class classification, it would not generalise to unseen classes. We define our model in the grouping supervision setting, therefore we can handle unseen classes at testing. The VAE model has also been extended to the learning of representations that are invariant to a certain source of variation. In this context, Alemi et al. (2017) build a meaningful representation by using the Information Bottleneck (IB) principle, presented by Tishby et al. (1999).
The Variational Fair Autoencoder presented by Louizos et al. (2016) encourages independence between the latent representation and a sensitive factor with the use of a Maximum Mean Discrepancy (MMD) based regulariser, while Edwards and Storkey (2015) use adversarial training. Finally, Chen et al. (2017) control which part of the data gets encoded by the encoder and employ an autoregressive architecture to model the part that is not encoded. While related to our work, these models require supervision on the source of variation to be invariant to. In the specific case of learning interpretable representations of images, Kulkarni et al. (2015) train an autoencoder with mini-batches in which only one latent factor changes. Finally, Mathieu et al. (2016) learn a representation invariant to a certain source of data by combining autoencoders trained in an adversarial manner.
Multiple works perform image-to-image translation between two unpaired image collections using GAN-based architectures, see Zhu et al. (2017); Kim et al. (2017); Yi et al. (2017); Fu et al. (2017); Taigman et al. (2017); Shrivastava et al. (2017); Bousmalis et al. (2016), while Liu et al. (2017) employ a combination of VAEs and GANs. Interestingly, all these models require a form of weak supervision that is similar to our setting. We can think of the two unpaired image collections as two groups of observed data sharing an image type (painting versus photograph, for example). Our work differs from theirs as we generalise to any type of data and number of groups; it is unclear how to extend the cited models to the setting of more than two groups and other types of data. Also, we do not employ multiple GAN models but a single VAE-type model. While not directly related to our work, Murali et al. (2017) perform computer program synthesis using grouped user-supplied example programs, and Allamanis et al. (2017) learn continuous semantic representations of mathematical and logical expressions. Finally, we mention the concurrent recent work of Donahue et al. (2017), which disentangles the latent space of GANs.
3 Model
3.1 Amortised Inference with the Variational Autoencoder (VAE) Model
We denote by X = {x_1, ..., x_N} the set of observations. In the probabilistic model framework, we assume that the observations are generated by z, the unobserved (latent) variables. The goal is to infer the values of the latent variables that generated the observations, that is, to calculate the posterior distribution over the latent variables, which is often intractable. The original VAE model proposed by Kingma and Welling (2014); Rezende et al. (2014) approximates the intractable posterior with the use of a variational approximation q_φ(z|x), where φ are the variational parameters. Contrary to Stochastic Variational Inference (SVI), the VAE model performs amortised variational inference, that is, the observations parametrise the posterior distribution of the latent code, and all observations share the single set of parameters φ. This allows efficient test-time inference. Figure 2 shows the SVI and VAE graphical models; we highlight in red that the SVI model does not take advantage of amortised inference.
3.2 The ML-VAE for Grouped Observations
We now assume that the observations are organised in a set of G distinct groups, with a factor of variation that is shared among all observations within a group. The grouping forms a partition of the dataset, i.e. each group is a subset of the observations of arbitrary size, disjoint from all other groups. Without loss of generality, we separate the latent representation into two latent variables, style and content. The content is the factor of variation along which the groups are formed. In this context, referred to as the grouped observations case, the latent representation has a single content latent variable per group. SVI can easily be adapted by enforcing that all observations within a group share a single content latent variable while the style remains untied, see Figure 2(a). However, employing SVI requires iterative test-time inference since it does not perform amortised inference. Experimentally, it also requires more passes on the training data, as we show in the supplementary material. The VAE model assumes that the observations are i.i.d., therefore it does not take advantage of the grouping. The question is thus how to perform amortised inference in the context of non-i.i.d., grouped observations. In order to tackle this deficiency we propose the Multi-Level VAE (ML-VAE).
We denote by X_i the observations corresponding to group i, and by x_{i,j} the j-th observation in X_i. We explicitly model each x_{i,j} as having its own independent latent representation s_{i,j} for the style, while c_i is a unique latent variable shared among the group for the content. The variational approximation factorises over style and content, with φ_c and φ_s the variational parameters for content and style respectively. We assume that the styles are independent within a group, so the style term also factorises over the group's observations. Finally, given style and content, the likelihood decomposes over the samples. This results in the graphical model shown in Figure 2(b).
We do not assume i.i.d. observations, but independence at the level of the groups of observations. The average marginal log-likelihood therefore decomposes over the G groups of observations,

\frac{1}{G} \log p_\theta(X) = \frac{1}{G} \sum_{i=1}^{G} \log p_\theta(X_i).   (1)
For each group, we can rewrite the marginal log-likelihood as the sum of the group Evidence Lower Bound (ELBO) and the Kullback-Leibler divergence between the true posterior and the variational approximation. Since this Kullback-Leibler divergence is always positive, the first term, the group ELBO, is a lower bound on the marginal log-likelihood,

\log p_\theta(X_i) \geq \mathcal{L}_{\mathrm{group}}(X_i; \theta, \phi_s, \phi_c).   (2)
The group ELBO for a group X_i of N_i observations is

\mathcal{L}_{\mathrm{group}}(X_i; \theta, \phi_s, \phi_c) = \mathbb{E}_{q_{\phi_c}(c_i | X_i)} \Big[ \sum_{j=1}^{N_i} \mathbb{E}_{q_{\phi_s}(s_{i,j} | x_{i,j})} \log p_\theta(x_{i,j} | c_i, s_{i,j}) \Big] - \sum_{j=1}^{N_i} \mathrm{KL}\big( q_{\phi_s}(s_{i,j} | x_{i,j}) \,\|\, p(s) \big) - \mathrm{KL}\big( q_{\phi_c}(c_i | X_i) \,\|\, p(c) \big).   (3)
We define the average group ELBO over the dataset, and we maximise it. It is a lower bound on the average marginal log-likelihood because each group Evidence Lower Bound is a lower bound on \log p_\theta(X_i), therefore

\frac{1}{G} \log p_\theta(X) \geq \frac{1}{G} \sum_{i=1}^{G} \mathcal{L}_{\mathrm{group}}(X_i; \theta, \phi_s, \phi_c).   (4)
In comparison, the original VAE model maximises the average ELBO over individual samples. In practice, we build an estimate of the objective using mini-batches of groups,

\frac{1}{G} \sum_{i=1}^{G} \mathcal{L}_{\mathrm{group}}(X_i; \theta, \phi_s, \phi_c) \approx \frac{1}{|B|} \sum_{i \in B} \mathcal{L}_{\mathrm{group}}(X_i; \theta, \phi_s, \phi_c),   (5)

where B is a mini-batch of groups.
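In code, the group mini-batch construction behind this estimate can be sketched as follows; this is an illustrative helper under our own naming (`make_group_minibatches`, `max_group_size`), not the paper's implementation.

```python
import random

def make_group_minibatches(groups, batch_size, max_group_size=None, seed=0):
    """Yield mini-batches of whole groups for the grouped-observations
    objective. `groups` maps a group id to its list of observations.
    If a group exceeds `max_group_size`, its observations are subsampled,
    which makes the resulting estimate of the objective biased."""
    rng = random.Random(seed)
    ids = list(groups)
    rng.shuffle(ids)
    for start in range(0, len(ids), batch_size):
        batch = []
        for gid in ids[start:start + batch_size]:
            obs = list(groups[gid])
            if max_group_size is not None and len(obs) > max_group_size:
                obs = rng.sample(obs, max_group_size)  # subsample: biased
            batch.append((gid, obs))
        yield batch

# Toy usage: three groups of different sizes, two groups per mini-batch,
# at most two observations kept per group.
toy = {"circle": [1, 2, 3, 4], "star": [5, 6], "square": [7]}
batches = list(make_group_minibatches(toy, batch_size=2, max_group_size=2))
```

Each mini-batch then contributes one group ELBO term per group it contains; taking groups in their entirety (`max_group_size=None`) keeps the estimate unbiased.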
If we take each group X_i in its entirety, this is an unbiased estimate. When the group sizes are too large, for efficiency, we subsample within groups and this estimate is biased. We discuss the bias in the supplementary material. The resulting algorithm is shown in Algorithm 1.
3.3 Accumulating Group Evidence using a Product of Normal densities
Our idea is to build the variational approximation of the single group content variable c_i from the encodings of the grouped observations X_i. While any distribution could be employed, we focus on using a product of Normal density functions. Other possibilities, such as a mixture of density functions, are discussed in the supplementary material.
We construct the probability density function of the group content latent variable c_i taking the value c by multiplying Normal density functions, each of them evaluating the probability of c given one observation x_{i,j} of the group,

q_{\phi_c}(c_i = c \mid X_i) \propto \prod_{j=1}^{N_i} \mathcal{N}(c; \mu_j, \sigma_j^2),   (6)

where we assume each per-observation factor to be a Normal distribution \mathcal{N}(\mu_j, \sigma_j^2). Murphy (2007) shows that the product of two Gaussians is a Gaussian. Similarly, in the supplementary material we show that Equation (6) is the density function of a Normal distribution of mean \mu and variance \sigma^2 given by

\frac{1}{\sigma^2} = \sum_{j=1}^{N_i} \frac{1}{\sigma_j^2}, \qquad \mu = \sigma^2 \sum_{j=1}^{N_i} \frac{\mu_j}{\sigma_j^2}.   (7)
It is interesting to note that the variance of the resulting Normal distribution, σ², is inversely proportional to the sum of the group observations' inverse variances. Therefore, we expect that by increasing the number of observations in a group, the variance of the resulting distribution decreases. This is what we refer to as “accumulating evidence”. We empirically investigate this effect in Section 4. Since the resulting distribution is a Normal distribution, the Kullback-Leibler divergence term on the content latent variable can be evaluated in closed-form. We also assume a Normal distribution for the style variational approximation.
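As a sanity check on the accumulation rule above, the product-of-Normals update can be written in a few lines; a minimal sketch assuming scalar (per-dimension) Gaussian encodings, with names of our own choosing.

```python
def product_of_normals(mus, variances):
    """Combine per-observation Normal encodings N(mu_j, var_j) of the group
    content into a single Normal: precisions (inverse variances) add up,
    and the mean is the precision-weighted average."""
    assert len(mus) == len(variances) and len(mus) > 0
    precision = sum(1.0 / v for v in variances)
    var = 1.0 / precision
    mu = var * sum(m / v for m, v in zip(mus, variances))
    return mu, var

# Two unit-variance encodings of the same content: the means average
# and the variance halves -- evidence accumulates.
mu, var = product_of_normals([0.0, 2.0], [1.0, 1.0])
# mu == 1.0, var == 0.5
```

Adding a third unit-variance encoding shrinks the variance further (to 1/3), matching the intuition that more observations in a group increase the certainty on the content.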
4 Experiments
We evaluate the ML-VAE on images; other forms of data are possible and we leave these for future work. In all experiments, we use the Product of Normal method presented in Section 3.3 to construct the content latent representation. Our goal with the experiments is twofold. First, we want to evaluate the ability of the ML-VAE to learn a semantically meaningful disentangled representation. Second, we want to explore the impact of “accumulating evidence” described in Section 3.3. Indeed, when we encode test images two strategies are possible: the first strategy disregards the grouping information of the test samples, i.e. each test image forms its own group; the second strategy considers the grouping information of the test samples, i.e. takes multiple test images per identity to construct the content latent representation.
MNIST dataset.
We evaluate the ML-VAE on MNIST (Lecun et al., 1998). We consider the data grouped by digit label, i.e. the content latent code should encode the digit label. We randomly separate the training examples into training and validation samples, and use the standard MNIST testing set. For both the encoder and the decoder, we use a simple architecture of linear layers (detailed in the supplementary material).
MS-Celeb-1M dataset.
Next, we evaluate the ML-VAE on the face-aligned version of the MS-Celeb-1M dataset (Guo et al., 2016). The dataset was constructed by retrieving approximately images per celebrity from popular search engines, and noise has not been removed from the dataset. For each query, we consider the top ten results (note that there were multiple queries per celebrity, therefore some identities have more than images). This creates a dataset of entities for a total of images, and we group the data by identity. Importantly, we randomly separate the dataset into disjoint sets of identities for the training, validation and testing datasets. This way, we evaluate the ability of the ML-VAE to generalise to unseen groups (unseen identities) at test-time. The training dataset consists of identities (total images), the validation dataset consists of identities (total images) and the testing dataset consists of identities (total images). The encoder and decoder network architectures, composed of either convolutional or deconvolutional and linear layers, are detailed in the supplementary material. We resize the images to pixels to fit the network architecture.
Qualitative Evaluation.
As explained in Mathieu et al. (2016), there is no standard benchmark dataset or metric to evaluate a model on its disentanglement performance. Therefore, similarly to Mathieu et al. (2016), we perform qualitative and quantitative evaluations. We qualitatively assess the relevance of the learned representation by performing operations on the latent space. First, we perform swapping: we encode test images, draw a sample per image from its style and content latent representations, and swap the style between images. Second, we perform interpolation: we encode a pair of test images, draw one sample from each image's style and content latent codes, and linearly interpolate between the style and content samples. We present the results of swapping and interpolation with accumulated evidence from the other images in the group (the second strategy). Results without accumulated evidence (the first strategy) are also convincing and available in the supplementary material. We also perform generation: for a given test identity, we build the content latent code by accumulating images of this identity, then take the mean of the resulting content distribution and generate images with styles sampled from the prior. Finally, in order to explore the benefits of taking into account the grouping information, for a given test identity, we reconstruct all images of this identity using both strategies and show the resulting images. Figure 4 shows the swapping procedure, where the first row and the first column show the test data samples input to the ML-VAE, and the second row and column are reconstructed samples. Each row is a fixed style and each column is a fixed content. We see that the ML-VAE disentangles the factors of variation of the data in a relevant manner. In the case of MS-Celeb-1M, we see that the model encodes the factor of variation that grouped the data, that is the identity, into the facial traits, which remain constant when we change the style, and encodes the style into the remaining factors (background color and face orientation, for example). The ML-VAE learns this meaningful disentanglement without knowing that the images are grouped by identity, given only the organisation of the data into groups. Figure 5 shows interpolation and generation. We see that our model covers the manifold of the data, and that style and content are disentangled. In Figures 5(a) and 5(b), we reconstruct images of the same group with and without taking into account the grouping information. We see that the ML-VAE handles cases where there is no group information at test-time, and benefits from accumulating evidence if available.
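The swapping grid described above (fixed style per row, fixed content per column) amounts to decoding every (style, content) pair; a minimal sketch with a hypothetical `decode` function standing in for the trained decoder.

```python
def swap_grid(styles, contents, decode):
    """Decode every combination of a style sample (row) and a content
    sample (column), as in the swapping figure."""
    return [[decode(s, c) for c in contents] for s in styles]

# Toy stand-in decoder: just pairs the codes so the grid structure is visible.
decode = lambda s, c: (s, c)
grid = swap_grid(["red", "blue"], ["circle", "star"], decode)
# grid[0] == [("red", "circle"), ("red", "star")]
```

With a real decoder, each row of the grid keeps one image's style while the content varies across columns, which is exactly the layout of Figure 4.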
Quantitative Evaluation.
In order to quantitatively evaluate the disentanglement power of the ML-VAE, we use the style latent code and the content latent code as features for a classification task. The quality of the disentanglement is high if the content is informative about the class while the style is not. In the case of MNIST the class is the digit label, and for MS-Celeb-1M the class is the identity. We emphasise that in the case of MS-Celeb-1M, the test images are all of unseen classes (unseen identities) at training. We learn to classify the test images with a neural network classifier composed of two linear layers of hidden units each, once using the content and once using the style as input features. Again, we explore the benefits of accumulating evidence: while we construct the variational approximation of the content latent code by accumulating images per class for training the classifier, we accumulate only images per class at test time, where corresponds to no group information. As the number of accumulated test images increases, we expect the performance of the classifier trained on content features to improve, as the features become more informative, and the performance using style features to remain constant. We compare to the original VAE model, where we also accumulate evidence by using the Product of Normal method on the VAE latent code for samples of the same class. The results are shown in Figure 5(c). The ML-VAE content latent code is as informative about the class as the original VAE latent code, both in terms of classification accuracy and conditional entropy. The ML-VAE also provides relevant disentanglement, as the style remains uninformative about the class. Details on these choices and this experiment are in the supplementary material.
5 Discussion
We proposed the Multi-Level VAE model for learning a meaningful disentanglement from a set of grouped observations. The ML-VAE model handles an arbitrary number of groups of observations, which need not be the same at training and testing. We proposed different methods for incorporating the semantics embedded in the grouping. Experimental evaluation shows the relevance of our method, as the ML-VAE learns a semantically meaningful disentanglement, generalises to unseen groups and enables control over the latent representation. For future work, we wish to apply the ML-VAE to text data.
References
 Abbasnejad et al. [2016] Ehsan Abbasnejad, Anthony R. Dick, and Anton van den Hengel. Infinite variational autoencoder for semi-supervised learning. arXiv preprint arXiv:1611.07800, 2016.
 Alemi et al. [2017] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. ICLR, 2017.
 Allamanis et al. [2017] Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, and Charles Sutton. Learning continuous semantic representations of symbolic expressions. arXiv preprint 1611.01423, 2017.
 Bengio et al. [2013] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, August 2013. ISSN 01628828.
 Bouchacourt et al. [2016] Diane Bouchacourt, Pawan Kumar Mudigonda, and Sebastian Nowozin. DISCO nets : Dissimilarity coefficients networks. NIPS, 2016.
 Bousmalis et al. [2016] Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. arXiv preprint arXiv:1612.05424, 2016.
 Chen et al. [2016] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. NIPS, 2016.
 Chen et al. [2017] Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. ICLR, 2017.
 Donahue et al. [2017] Chris Donahue, Akshay Balsubramani, Julian McAuley, and Zachary C. Lipton. Semantically decomposing the latent spaces of generative adversarial networks. arXiv preprint 1705.07904, 2017.
 Dziugaite et al. [2015] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. UAI, 2015.
 Edwards and Storkey [2015] Harrison Edwards and Amos J. Storkey. Censoring representations with an adversary. CoRR, 2015.
 Fu et al. [2017] T.-C. Fu, Y.-C. Liu, W.-C. Chiu, S.-D. Wang, and Y.-C. F. Wang. Learning cross-domain disentangled deep representation with supervision from a single domain. arXiv preprint arXiv:1705.01314, 2017.
 Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. NIPS, 2014.

 Guo et al. [2016] Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. MS-Celeb-1M: A dataset and benchmark for large scale face recognition. ECCV, 2016.
 Higgins et al. [2017] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017.
 Hoffman et al. [2013] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. JMLR, 2013.
 Kim et al. [2017] T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
 Kingma and Welling [2014] Diederik P. Kingma and Max Welling. AutoEncoding Variational Bayes. ICLR, 2014.
 Kulkarni et al. [2015] Tejas D Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B Tenenbaum. Deep convolutional inverse graphics network. NIPS, 2015.
 Lecun et al. [1998] Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pages 2278–2324, 1998.

 Li et al. [2015] Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. ICML, 2015.
 Linsker [1988] Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
 Liu et al. [2017] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
 Louizos et al. [2016] Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. The variational fair autoencoder. ICLR, 2016.
 Mathieu et al. [2016] Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. NIPS, 2016.
 Murali et al. [2017] Vijayaraghavan Murali, Swarat Chaudhuri, and Chris Jermaine. Bayesian sketch learning for program synthesis. arXiv preprint arXiv:1703.05698v2, 2017.

 Murphy [2007] Kevin P. Murphy. Conjugate Bayesian analysis of the Gaussian distribution. Technical report, 2007.
 Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
 Shrivastava et al. [2017] Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb. Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828, 2017.
 Siddharth et al. [2017] N. Siddharth, Brooks Paige, Alban Desmaison, Frank Wood, and Philip Torr. Learning disentangled representations in deep generative models. Submitted to ICLR, 2017.
 Taigman et al. [2017] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. ICLR, 2017.
 Tishby et al. [1999] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. 37th annual Allerton Conference on Communication, Control and Computing, 1999.
 Wang and Gupta [2016] Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. ECCV, 2016.
 Yi et al. [2017] Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised dual learning for image-to-image translation. arXiv preprint arXiv:1704.02510, 2017.
 Zhu et al. [2017] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.