1 Introduction
Natural images are the result of a generative process involving a large number of factors of variation. For instance, the appearance of a face is determined by the interaction between many latent variables, including pose, illumination, identity, and expression. Given that the interaction between these underlying explanatory factors is very complex, inverting the generative process is extremely challenging.
From this perspective, learning disentangled representations, where different high-level generative factors are independently encoded, can be considered one of the most relevant problems in computer vision (Bengio et al., 2013). For instance, such representations simplify complex classification tasks, since features correlated with image labels can be easily identified. Another example is conditional image generation (van den Oord et al., 2016; Yan et al., 2016), where disentangled representations make it possible to manipulate high-level attributes in synthesized images.
Motivation:
By coupling deep learning with variational inference, variational autoencoders (VAEs) (Kingma & Welling, 2014) have emerged as a powerful latent variable model able to learn abstract data representations. However, VAEs are typically trained in an unsupervised manner and therefore lack a mechanism to impose specific high-level semantics on the latent space. To address this limitation, different semi-supervised variants have been proposed (Kingma et al., 2014; Narayanaswamy et al., 2017). These approaches, however, require the latent factors to be explicitly labelled in a training set. These annotations provide supervision to the model and make it possible to disentangle the labelled variables from the remaining generative factors. The main drawback of this strategy is that it may require a significant annotation effort. For instance, if we are interested in disentangling facial gesture information from face images, we need to annotate samples according to different expression classes. While this is feasible for a reduced number of basic gestures, natural expressions depend on a combination of a large number of facial muscle activations with their corresponding intensities (Ekman & Rosenberg, 1997). It is therefore impractical to label all these factors, even in a small subset of training images. In this context, our main motivation is to explore a novel learning setting that allows disentangling specific factors of variation while minimizing the required annotation effort.

Contributions: We introduce reference-based disentangling, a learning setting in which, given a training set of unlabelled images, the goal is to learn a representation where a specific set of generative factors is disentangled from the rest. For this purpose, the only supervision comes in the form of an auxiliary reference set containing images where the factors of interest are constant (see Fig. 1). Unlike in a semi-supervised scenario, explicit labels are not available for the factors of interest during training.
In contrast, reference-based disentangling is a weakly-supervised task, where the reference set only provides implicit information about the generative factors that we aim to disentangle. Note that a collection of reference images is generally easier to obtain than explicit labels of target factors. For example, it is more feasible to collect a set of faces with a neutral expression than to annotate images across a large range of expression classes or attributes.
The main contributions of our paper are summarized as follows: (1) We propose reference-based variational autoencoders (Rb-VAEs). Different from unsupervised VAEs, our model is able to impose high-level semantics on the latent variables by exploiting the weak supervision provided by the reference set; (2) We identify critical limitations of the standard VAE objective when used to train our model. To address this problem, we propose an alternative training procedure based on recently introduced ideas in the context of variational inference and adversarial learning; (3) By learning disentangled representations with minimal supervision, we show how our framework naturally addresses tasks such as feature learning, conditional image generation, and attribute transfer.
2 Related Work
Deep Generative Models have been extensively explored to model visual and other types of data. Variational autoencoders (VAEs) (Kingma & Welling, 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014) have emerged as two of the most effective frameworks. VAEs maximize the variational evidence lower bound to learn an encoder network that maps images to an approximation of the posterior distribution over latent variables, together with a decoder network that produces the conditional distribution over images given the latent variables. GANs are also composed of two differentiable networks: the generator network synthesizes images from latent variables, similar to the VAE decoder, while the discriminator's goal is to separate real training images from synthetic images sampled from the generator. During training, GANs employ an adversarial learning procedure that simultaneously optimizes the discriminator and generator parameters. Even though GANs have been shown to generate more realistic samples than VAEs, they lack an inference mechanism to map images to their corresponding latent variables. To address this drawback, there have been several attempts to combine ideas from VAEs and GANs (Larsen et al., 2015; Dumoulin et al., 2017; Donahue et al., 2017). Interestingly, it has been shown that adversarial learning can be used to minimize the variational objective function of VAEs (Makhzani et al., 2016; Huszár, 2017). Inspired by this observation, various methods such as adversarial variational Bayes (Mescheder et al., 2017), α-GAN (Rosca et al., 2017), and the symmetric VAE (sVAE) (Pu et al., 2018) have incorporated adversarial learning into the VAE framework.
Different from this prior work, our Rb-VAE model is a deep generative model specifically designed to solve the reference-based disentangling problem. During training, adversarial learning is used to minimize a variational objective function inspired by the one employed in sVAE (Pu et al., 2018). Although sVAE was originally motivated by the limitations of the maximum-likelihood criterion used in unsupervised VAEs, we show that its variational formulation offers specific advantages in our context.
Learning Disentangled Representations is a long-standing problem in machine learning and computer vision (Bengio et al., 2013). In the literature, we can distinguish three main paradigms to address it: unsupervised, supervised, and weakly-supervised learning. Unsupervised models are trained without specific information about the generative factors of interest (Desjardins et al., 2012; Chen et al., 2016). The most common approach consists of imposing different constraints on the latent representation. For instance, unsupervised VAEs typically define the prior over the latent variables as a fully-factorized Gaussian distribution. Given that high-level generative factors are often independent, this prior encourages their disentanglement along different dimensions of the latent representation. Based on this observation, different approaches such as
β-VAE (Higgins et al., 2017), DIP-VAE (Kumar et al., 2018), FactorVAE (Kim & Mnih, 2018) or β-TCVAE (Chen et al., 2018) have explored more sophisticated regularization mechanisms over the distribution of inferred latent variables. Although unsupervised approaches are able to identify simple explanatory components, they do not allow latent variables to model specific high-level factors.

A straightforward approach to overcome this limitation is a fully-supervised strategy, in which models are learned from a training set where the factors of interest are explicitly labelled. Following this paradigm, we can find different semi-supervised (Kingma et al., 2014; Narayanaswamy et al., 2017) and conditional (Yan et al., 2016; Pu et al., 2016) variants of autoencoders. In spite of the effectiveness of supervised approaches in different applications, obtaining explicit labels is not feasible in scenarios where we aim to disentangle a large number of factors or where annotation is difficult. An intermediate solution between unsupervised and fully-supervised methods are weakly-supervised approaches, where only implicit information about the factors of variation is provided during training. Several works have explored this strategy using different forms of weak supervision, such as temporal coherence in sequential data (Hsu et al., 2017; Denton et al., 2017; Villegas et al., 2017), pairs of aligned images obtained from different domains (Gonzalez-Garcia et al., 2018), or knowledge about the rendering process in computer graphics (Yang et al., 2015; Kulkarni et al., 2015).
Different from previous works relying on other forms of weak supervision, our method addresses the reference-based disentangling problem. In this scenario, the challenge is to exploit the implicit information provided by a training set of images where the generative factors of interest are constant. Related to this setting, recent approaches have considered exploiting pairing information for images known to share the same generative factors (Mathieu et al., 2016b; Donahue et al., 2018; Feng et al., 2018; Bouchacourt et al., 2018). However, the amount of supervision required by these methods is larger than that available in reference-based disentangling: we only know that the reference images are generated with the same constant target factors, and no information is available about which unlabelled samples share the same target factors.
3 Preliminaries: Variational Autoencoders
Variational autoencoders (VAEs) are generative models defining a joint distribution p_θ(x, z) = p_θ(x|z) p(z), where x is an observation, e.g. an image, and z is a latent variable with a simple prior p(z), e.g. a Gaussian with zero mean and identity covariance matrix. Moreover, p_θ(x|z) is typically modeled as a factored Gaussian, whose mean and diagonal covariance matrix are given by a function of z implemented by a generator neural network. Given a training set of samples from an unknown data distribution p_d(x), VAEs learn the optimal parameters θ by defining a variational distribution q_φ(z|x). Note that q_φ(z|x) approximates the intractable posterior p_θ(z|x) and is defined as another factored Gaussian, whose mean and diagonal covariance matrix are given as the output of an encoder or inference network with parameters φ. The generator and the encoder are optimized by solving:

min_{θ,φ}  E_{p_d(x)} [ D_KL( q_φ(z|x) || p(z) ) − E_{q_φ(z|x)} [ log p_θ(x|z) ] ],
which is equivalent to the minimization of the KL divergence between p_d(x) q_φ(z|x) and p_θ(x, z). The first term can be interpreted as a regularization mechanism encouraging the distribution q_φ(z|x) to be similar to the prior p(z). The second term is known as the reconstruction error, measuring the negative log-likelihood of a generated sample x given its latent variables z.
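As an illustration, the two terms of this objective can be written down for a diagonal-Gaussian encoder. The snippet below is a minimal numpy sketch, not the implementation used in the paper; it assumes a fixed-variance Gaussian decoder, so that the reconstruction term reduces to a squared error up to an additive constant:

```python
import numpy as np

def vae_loss_terms(mu, logvar, x, x_recon):
    """Per-sample VAE objective terms for a diagonal-Gaussian encoder
    q(z|x) = N(mu, diag(exp(logvar))) and a unit-Gaussian prior p(z).
    With a fixed-variance Gaussian decoder, -log p(x|z) reduces to a
    squared error up to an additive constant."""
    # KL( q(z|x) || N(0, I) ) in closed form
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # Reconstruction error (negative log-likelihood up to a constant)
    recon = 0.5 * np.sum((x - x_recon)**2)
    return kl, recon

def reparametrize(mu, logvar, rng):
    """Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
    which lets gradients flow through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

When the approximate posterior matches the prior and the reconstruction is perfect, both terms vanish, which is the sanity check used below.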
Optimization can be carried out using stochastic gradient descent (SGD), where p_d(x) is approximated by the training set. The reparametrization trick (Rezende et al., 2014) is employed to enable gradient backpropagation through samples from q_φ(z|x).

4 Reference-based Disentangling
Consider a training set of unlabelled images x (e.g. human faces) sampled from a distribution p(x|y=0). Our goal is to learn a latent variable model defining a joint distribution over x and latent variables e and z. Whereas e is expected to encode information about a set of generative factors of interest, e.g. facial expressions, z should model the remaining factors of variation underlying the images, e.g. pose, illumination, identity, etc. From now on, we refer to e and z as the "target" and "common" factors, respectively. In order to disentangle them, we are provided with an additional set of reference images sampled from p(x|y=1), representing a distribution over x where the target factors are constant, e.g. neutral faces. Given the unlabelled and reference sets, we define an auxiliary binary variable y indicating whether an image has been sampled from the unlabelled or the reference distribution, i.e. y = 0 and y = 1, respectively. In reference-based disentangling, we aim to exploit the weak supervision provided by the reference set in order to effectively disentangle the target factors e from the common factors z.

4.1 Reference-based Variational Autoencoders
In this section, we present reference-based variational autoencoders (Rb-VAEs). The Rb-VAE is a deep latent variable model defining a joint distribution:
p_θ(x, z, e, y) = p_θ(x | z, e) p(e | y) p(z) p(y),   (1)
where the conditional dependencies are designed to address the reference-based disentangling problem, see Fig. 2(a). We define p_θ(x|z, e) as a Laplace distribution with fixed scale parameter λ, whose mean is given by G_θ(z, e), where G_θ is the generator network mapping a pair of latent variables (z, e) to an image. We use a Laplace distribution instead of the Gaussian usually employed in VAEs because its negative log-likelihood is equivalent to the ℓ1 loss, which encourages sharper image reconstructions with better visual quality (Mathieu et al., 2016a).
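As a concrete illustration of this choice, the Laplace negative log-likelihood reduces to an ℓ1 reconstruction loss scaled by 1/λ, up to an additive constant. A minimal numpy sketch (the function name and the fixed scale `lam` are illustrative, not the paper's code):

```python
import numpy as np

def laplace_nll(x, x_mean, lam=1.0):
    """Negative log-likelihood of x under a pixelwise Laplace distribution
    with mean x_mean and fixed scale lam. Up to the additive constant
    x.size * log(2 * lam), this is the L1 reconstruction loss scaled by 1/lam."""
    return np.sum(np.abs(x - x_mean)) / lam + x.size * np.log(2.0 * lam)
```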
To reflect the assumption of constant target factors across reference images, we define the conditional distribution over e given y = 1 as a delta peak centered on a learned vector e_r, i.e. p(e|y=1) = δ(e − e_r). In contrast, for y = 0, the conditional distribution is set to a unit Gaussian, p(e|y=0) = N(e; 0, I), as in standard VAEs. Contrary to the target factors e, the prior over the common factors z is the same for reference and unlabelled images, and taken to be a unit Gaussian p(z) = N(z; 0, I). Finally, we assume a uniform prior over y, i.e. p(y=0) = p(y=1) = 1/2.

4.2 Conventional Variational Learning
Following the standard VAE framework discussed in Sec. 3, we define a variational distribution q_φ(z, e|x, y) and learn the model parameters by minimizing the KL divergence between p(x, y) q_φ(z, e|x, y) and p_θ(x, z, e, y):
min_{θ,φ}  D_KL( p(x, y) q_φ(z, e | x, y) || p_θ(x, z, e, y) ).   (2)
Note that the conditionals q_φ(e|x, y) and q_φ(z|x) provide a factored approximation of the intractable posterior, allowing us to infer the target and common factors e and z given the image x, see Fig. 2(b). Given a reference image, i.e. with y = 1, the target factors are known to be equal to the reference value e_r. On the other hand, given a non-reference image, i.e. with y = 0, we define the approximate posterior q_φ(e|x, y=0) as a conditional Gaussian, whose mean and diagonal covariance matrix are given by non-linear functions of x. Similarly, we use an additional network to model q_φ(z|x).
Optimization. In Appendix A.1 we show that the minimization of Eq. (2) can be expressed as
min_{θ,φ}  E_{p(x|y=0)} [ D_KL( q_φ(z, e|x) || p(z) p(e|y=0) ) ] − E_{p(x|y=0)} E_{q_φ(z,e|x)} [ log p_θ(x|z, e) ]
         + E_{p(x|y=1)} [ D_KL( q_φ(z|x) || p(z) ) ] − E_{p(x|y=1)} E_{q_φ(z|x)} [ log p_θ(x|z, e_r) ],   (3)
where the second and fourth terms of the expression correspond to the reconstruction errors for unlabelled and reference images, respectively. Note that, for reference images, no inference over the target factors is needed. Instead, the generator reconstructs them using the learned parameter e_r. As in standard VAEs, the remaining terms consist of KL divergences between the approximate posteriors and the priors over the latent variables. The minimization problem defined in Eq. (3) can be solved using SGD and the reparametrization trick in order to backpropagate the gradient when sampling from q_φ(z|x) and q_φ(e|x).
4.3 Symmetric Variational Learning
The main limitation of the variational objective defined in Eq. (3) is that it does not guarantee that common and target factors are effectively disentangled in z and e, respectively. To understand this phenomenon, it is necessary to analyze the role of the conditional distribution p(e|y) in Rb-VAEs. By defining p(e|y=1) as a delta function, the model is forced to encode into z all the generative factors of reference images, given that they must be reconstructed via p_θ(x|z, e_r) with constant e_r. Therefore, p(e|y) implicitly encourages z to encode the factors common to reference and unlabelled samples. However, this mechanism does not prevent the target factors from also being encoded in z. More formally, given that p_θ(x|z, e) is expressive enough, the minimization of Eq. (3) does not prevent a degenerate solution p_θ(x|z, e) = p_θ(x|z), where the latent variables inferred by q_φ(e|x) are ignored by the generator.
To address this limitation, we propose to optimize an alternative variational expression inspired by unsupervised symmetric VAEs (Pu et al., 2018). Specifically, we add the reverse KL divergence between p(x, y) q_φ(z, e|x, y) and p_θ(x, z, e, y) to the objective of the minimization problem:
min_{θ,φ}  D_KL( p(x, y) q_φ(z, e|x, y) || p_θ(x, z, e, y) ) + D_KL( p_θ(x, z, e, y) || p(x, y) q_φ(z, e|x, y) ).   (4)
In order to understand why this additional term mitigates the degenerate solution p_θ(x|z, e) = p_θ(x|z), it is necessary to observe that its minimization is equivalent to:
min_{θ,φ}  D_KL( p_θ(x|y=0) || p(x|y=0) ) + D_KL( p_θ(x|y=1) || p(x|y=1) )
         − E_{p_θ(x,z,e|y=0)} [ log q_φ(z, e|x) ] − E_{p_θ(x,z|y=1)} [ log q_φ(z|x) ],   (5)
see Appendix A.1 for details. Note that the two KL divergences encourage images generated using z and e to be similar to samples from the real distributions p(x|y=0) and p(x|y=1). On the other hand, the remaining terms correspond to reconstruction errors over latent variables inferred from generated images. As a consequence, minimizing these errors encourages the generator to take the latent variables e into account, since the latter must be reconstructed via q_φ(z, e|x). In conclusion, the minimization of the reversed KL avoids the degenerate solution ignoring e.
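The asymmetry exploited here can be illustrated numerically: the two directions of the KL divergence penalize different mismatches, which is why adding the reverse term rules out solutions that the forward term leaves unpunished. A small, purely illustrative numpy example with 1-D Gaussians:

```python
import numpy as np

def gauss_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for 1-D Gaussians, in closed form."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1)**2) / var1 - 1.0)

# A narrow distribution against a broad one: the two directions of the
# divergence disagree, so the symmetric sum constrains the model in ways
# the forward term alone does not.
fwd = gauss_kl(0.0, 1.0, 0.0, 4.0)   # KL( N(0,1) || N(0,4) )
rev = gauss_kl(0.0, 4.0, 0.0, 1.0)   # KL( N(0,4) || N(0,1) )
```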
Optimization via Adversarial Learning. Given the introduction of the reverse KL divergence, the learning procedure described in Sec. 4.2 cannot be directly applied to the minimization of Eq. (4). However, note that we can express the symmetric objective as:

min_{θ,φ}  E_{p(x|y=0) q_φ(z,e|x)} [ r_0(x, z, e) ] − E_{p_θ(x,z,e|y=0)} [ r_0(x, z, e) ]
         + E_{p(x|y=1) q_φ(z|x)} [ r_1(x, z) ] − E_{p_θ(x,z|y=1)} [ r_1(x, z) ],   (6)

where r_0(x, z, e) corresponds to the log-density ratio between the distributions p(x|y=0) q_φ(z, e|x) and p_θ(x, z, e|y=0). Similarly, r_1(x, z) defines an analogous expression for p(x|y=1) q_φ(z|x) and p_θ(x, z|y=1). See Appendix A.1 for a detailed derivation.

Taking into account the previous definitions, SGD optimization can be employed to learn the model parameters. Concretely, we can evaluate r_0 and r_1 and backpropagate the gradients w.r.t. the parameters θ and φ by using the reparametrization trick over samples from the encoder distributions and the priors. The main challenge of this strategy is that r_0 and r_1 cannot be computed explicitly. However, the log-density ratio between two distributions can be estimated by using logistic regression (Bickel et al., 2009). In particular, we define an auxiliary parametric function D_ψ1(x, z) and learn its parameters by solving:

max_{ψ1}  E_{p(x|y=1) q_φ(z|x)} [ log σ( D_ψ1(x, z) ) ] + E_{p_θ(x,z|y=1)} [ log( 1 − σ( D_ψ1(x, z) ) ) ],   (7)

where σ refers to the sigmoid function. Similarly, r_0 is approximated with an additional function D_ψ0(x, z, e). This approach is analogous to unsupervised adversarial methods such as ALI (Dumoulin et al., 2017), where the function D_ψ1 acts as a discriminator trying to distinguish whether pairs of reference images and latent variables have been generated by p(x|y=1) q_φ(z|x) or by p_θ(x, z|y=1). However, in our case we have an additional discriminator D_ψ0 operating over unlabelled images and their corresponding latent variables z and e (see Fig. 3(a-b)). To conclude, it is also interesting to observe that the discriminator D_ψ1 implicitly encourages the latent variables z to encode only information about the common factors: samples generated from p_θ(x|z, e_r) are forced to be similar to reference images, so z cannot contain information about the target factors, which must instead be encoded into e.

Using the previous definitions, we employ an adversarial procedure where the model parameters (θ, φ) and the discriminator parameters (ψ0, ψ1) are simultaneously optimized by minimizing Eq. (6) and maximizing Eq. (7), respectively. The algorithm used to process one batch during SGD is shown in Appendix A.2. In Rb-VAEs, the discriminators D_ψ0 and D_ψ1 are also implemented as deep convolutional networks.
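The density-ratio trick underlying Eq. (7) can be demonstrated in isolation: the logit of a logistic-regression classifier trained to separate samples from two distributions estimates their log-density ratio (Bickel et al., 2009). A self-contained numpy sketch with two 1-D Gaussians, where the true log ratio log p(x)/q(x) = 2x is known and linear, so plain logistic regression can recover it; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from two distributions whose log-density ratio we want:
# p = N(1, 1) (label 1) and q = N(-1, 1) (label 0).
xp = rng.normal(1.0, 1.0, 2000)
xq = rng.normal(-1.0, 1.0, 2000)
x = np.concatenate([xp, xq])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

# Logistic regression by plain gradient descent; the learned logit
# w * x + b is an estimate of the log-density ratio, which for these
# Gaussians is exactly 2x (so ideally w -> 2, b -> 0).
w, b = 0.0, 0.0
for _ in range(2000):
    logits = w * x + b
    p = 1.0 / (1.0 + np.exp(-logits))
    grad_w = np.mean((p - y) * x)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b
```

In the model, the same principle is applied in high dimensions with convolutional discriminators playing the role of the classifier.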
Explicit Log-likelihood Maximization. As shown in Eqs. (3) and (5), the minimization of the symmetric KL divergence encourages low reconstruction errors for images and inferred latent variables. However, with the proposed adversarial learning procedure, the minimization of these terms becomes implicit. As shown by Dumoulin et al. (2017) and Donahue et al. (2017), this can cause original samples to differ substantially from their corresponding reconstructions. To address this drawback, we follow a strategy similar to Pu et al. (2018) and Li et al. (2017) and explicitly add the reconstruction terms to the learning objective, minimizing them together with Eq. (6), see Fig. 3(c-f). In preliminary experiments, we found the explicit addition of these reconstruction terms during training to be important to achieve low reconstruction errors and to increase the stability of adversarial training.
5 Experiments
5.1 Datasets
To validate our approach and to compare to existing work, we consider two different problems.
Digit Style Disentangling. The goal is to model style variations from handwritten digits. We consider the digit style as a set of three different properties: scale, width and color. In order to address this task from a referencebased perspective, we use half of the original training images in the MNIST dataset (LeCun et al., 1998)
as our reference distribution (30k examples). The unlabelled set is synthetically generated by applying different transformations to the remaining half of the images: (1) simulating stroke-width variations with a dilation of a given filter size; (2) colorizing digits by multiplying the RGB components of the pixels in an image by a random 3D vector; (3) varying size by downscaling the image by a given factor. We randomly transform each image twice to obtain a total of 60k unlabelled images. See Appendix A.3 for more details.

Facial Expression Disentangling. We address the disentangling of facial expressions by using a reference set of neutral faces. As unlabelled images we use a subset of the AffectNet dataset (Mollahosseini et al., 2017), which contains a large quantity of facial images. This database is especially challenging since the faces were collected "in the wild" and exhibit a large variety of natural expressions. A subset of the images is annotated with one of seven facial expressions: happiness, sadness, surprise, fear, disgust, anger, and contempt. We use these labels only for quantitative evaluation. Given that many neutral images in the original database were not correctly annotated, we collected a separate reference set, see Appendix A.3. The unlabelled and reference sets consist of 150k and 10k images, respectively.
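The synthetic digit transformations described for MNIST can be sketched as follows; this is a simplified numpy illustration, where the filter size, pooling, and padding choices are ours and not necessarily those used to build the dataset:

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale dilation with a k x k square filter (thickens strokes)."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(k):
        for j in range(k):
            out = np.maximum(out, p[i:i + img.shape[0], j:j + img.shape[1]])
    return out

def colorize(img, rgb):
    """Turn a grayscale digit into a colored one by scaling each channel."""
    return img[..., None] * np.asarray(rgb)[None, None, :]

def downscale(img, factor=2):
    """Crude size reduction by average pooling, then zero-padding back to
    the original resolution so all images keep the same shape (assumes a
    square input)."""
    h, w = img.shape
    small = img[:h - h % factor, :w - w % factor]
    small = small.reshape(h // factor, factor, w // factor, factor).mean((1, 3))
    out = np.zeros_like(img)
    off = (h - small.shape[0]) // 2
    out[off:off + small.shape[0], off:off + small.shape[1]] = small
    return out
```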
5.2 Baselines and Implementation Details
We evaluate two variants of our proposed method: Rb-VAE, trained using the standard variational objective (Sec. 4.2), and sRb-VAE, learned by minimizing the symmetric KL divergence (Sec. 4.3). To demonstrate the advantages of exploiting the weak supervision provided by reference images, we compare both methods with various state-of-the-art unsupervised approaches based on the VAE framework: β-VAE (Higgins et al., 2017), β-TCVAE (Chen et al., 2018), sVAE (Pu et al., 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2018). Note that β-VAE, DIP-VAE and β-TCVAE have been specifically proposed for learning disentangled representations, showing better performance than other unsupervised methods such as InfoGAN (Chen et al., 2016). On the other hand, sVAE is trained using a variational objective similar to that of sRb-VAE, and can therefore be considered an unsupervised version of our method. We also evaluate vanilla VAEs (Kingma & Welling, 2014).
As discussed in Sec. 2, there are no existing approaches in the literature that directly address reference-based disentangling. In order to evaluate an alternative weakly-supervised baseline exploiting the reference set, we have implemented the method of Mathieu et al. (2016b) and adapted it to our context. Concretely, we have modified its learning algorithm to use only the pairing information from reference images, removing the reconstruction losses for pairs of unlabelled samples, as such information is not available in reference-based disentangling.
| Method | Happ | Sad | Sur | Fear | Disg | Ang | Compt | Avg. | R | G | B | Scale | Width | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VAE | .554 | .279 | .383 | .357 | .256 | .415 | .439 | .383 | .099 | .104 | .101 | [.034] | .085 | .085 |
| DIP-VAE-I | .561 | .269 | [.401] | .367 | .258 | .397 | .463 | .388 | [.055] | .064 | .063 | .038 | .100 | .064 |
| DIP-VAE-II | .548 | .245 | [.401] | [.389] | .268 | .391 | .463 | .386 | .077 | .069 | .076 | .035 | .098 | .071 |
| β-VAE | .581 | .283 | .373 | .323 | .250 | .415 | .467 | .384 | .093 | .099 | .094 | .039 | .089 | .083 |
| sVAE | .583 | .251 | .389 | .349 | .260 | .391 | .469 | .384 | .094 | .092 | .084 | .036 | .104 | .082 |
| β-TCVAE | .563 | .277 | .393 | .349 | .256 | [.427] | .467 | .390 | .098 | .100 | .099 | [.034] | [.084] | .083 |
| Mathieu et al. | .567 | .388 | .312 | .330 | .295 | .353 | [.512] | .395 | .116 | .116 | .114 | .039 | .104 | .098 |
| Rb-VAE | .536 | .393 | .379 | .311 | .320 | .383 | .421 | .392 | .065 | .069 | .062 | .061 | .095 | .070 |
| sRb-VAE | [.587] | [.405] | .387 | .327 | [.344] | .425 | .483 | [.422] | .057 | [.053] | [.055] | .038 | .095 | [.060] |

Table 1: Prediction of target factors from learned representations. We report accuracy for AffectNet (first eight columns) and mean absolute error for MNIST (last six columns). Best result per column shown in brackets.
The different components of our method are implemented as deep neural networks. For this purpose, we use conv-deconv architectures, as is standard in the VAE and GAN literature. Specifically, we employ the main building blocks used by Karras et al. (2018), where the generator is implemented as a sequence of convolutions, LeakyReLU non-linearities, and nearest-neighbour upsampling operations. The encoder and discriminators follow a similar architecture, using average pooling for downsampling. See Appendix A.4 for more details. For a fair comparison, we have developed our own implementation of all the evaluated methods, so that they use the same network architectures and hyper-parameters. During optimization, we use the Adam optimizer (Kingma & Ba, 2015) and a batch size of 36 images. For MNIST and AffectNet, the models are trained for 30 and 20 epochs, respectively. The number of latent variables in the encoders is the same for all experiments and models, as is the scale parameter λ of the Laplace distribution.

5.3 Quantitative Evaluation: Feature Learning
A common strategy to evaluate the quality of learned representations is to measure the amount of information that they convey about the underlying generative factors. In our setting, we are interested in modelling the target factors that are constant in the reference distribution.
Experimental Setup. Following an evaluation protocol similar to that of Mathieu et al. (2016b), we use the learned representations as feature vectors and train a low-capacity model to estimate the target factors involved in each problem. Concretely, on the MNIST dataset we employ a set of linear regressors predicting the scale, width and color parameters of each digit. To predict the different expression classes in the AffectNet dataset, we use a linear classifier. For methods using the reference set, we use the inferred latent variables e as features, since they are expected to encode the information regarding the target factors. For unsupervised models we use all the latent variables. For evaluation, we split each dataset into three subsets. The first is used to learn each generative model. The second is used to train the regressors or classifier. Finally, the third is used to evaluate the predictions in terms of mean absolute error and per-class accuracy for the MNIST and AffectNet datasets, respectively. For MNIST, the second and third subsets (5k images each) were randomly generated from the original MNIST test set using the procedure described in Sec. 5.1. For AffectNet, we randomly select 500 images for each of the seven expressions from the original dataset, yielding 3,500 images per fold.

It is worth mentioning that some recent works (Kumar et al., 2018; Chen et al., 2018) have proposed alternative criteria to evaluate disentanglement. However, these metrics are specifically designed to measure how well a single dimension of the learned representation corresponds to a single ground-truth label. This one-to-one mapping assumption is not appropriate for real scenarios where we want to model high-level generative factors. For instance, it is unrealistic to expect a single dimension of the latent vector to convey all the information about a complex label such as facial expression.
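The prediction step of this protocol can be sketched as follows: fit a linear regressor from latent features to a ground-truth factor on one split and report the mean absolute error on a held-out split. A minimal numpy implementation (function and variable names are illustrative):

```python
import numpy as np

def evaluate_features(feats_train, y_train, feats_test, y_test):
    """Fit a linear regressor (least squares with a bias term) from latent
    features to a ground-truth factor, then report the mean absolute error
    of its predictions on held-out data."""
    A = np.hstack([feats_train, np.ones((len(feats_train), 1))])
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_test = np.hstack([feats_test, np.ones((len(feats_test), 1))])
    pred = A_test @ w
    return np.mean(np.abs(pred - y_test))
```

If the features linearly encode the factor, the error approaches zero; features from which the factor cannot be decoded yield a large error, which is the signal exploited in Table 1.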
Results and discussion. Table 1 shows the results obtained by the different baselines and the proposed Rb-VAE and sRb-VAE. For DIP-VAE, β-VAE and β-TCVAE we tested several values of the regularization parameter and report the best results. Note that the unsupervised approach DIP-VAE-I achieves better average results than Rb-VAE on MNIST. Moreover, on AffectNet, β-TCVAE achieves comparable or better performance in several cases. This may seem counter-intuitive because, unlike Rb-VAE, DIP-VAE-I is trained without the weak supervision provided by the reference images. However, it confirms our hypothesis that the learning objective of Rb-VAE does not explicitly encourage the disentanglement of target and common factors. In contrast, we can see that in most cases sRb-VAE obtains comparable or better results than the rest of the methods. Moreover, it achieves the best average performance on both datasets. This demonstrates that the information provided by the reference distribution is effectively exploited by the symmetric KL objective used to train sRb-VAE. Additionally, note that the better performance of our model compared to unsupervised methods is informative: the latter must encode all the generative factors into a single feature vector. As a consequence, the target factors are entangled with the rest, and the ground-truth labels are difficult to predict. In contrast, the representation learned by our model is more effective because non-relevant factors are effectively removed, i.e. encoded into z.
In order to further validate this conclusion, we have followed the same evaluation protocol for Rb-VAE and sRb-VAE but using the latent variables z as features. The average performance obtained by Rb-VAE is .349 and .195 for AffectNet and MNIST, respectively, while sRb-VAE achieves .335 and .189. For both methods these results are significantly worse than those obtained using e as the representation in Table 1. This shows that the latent variable z mainly models the factors common to reference and unlabelled images. The qualitative results presented in the next section confirm this. To conclude, note that sRb-VAE also obtains better performance than Mathieu et al. (2016b) on both datasets. So even though this method also uses reference images during training, sRb-VAE is shown to better exploit the weak supervision available in reference-based disentangling.
5.4 Qualitative Evaluation
In contrast to unsupervised methods, our model uses reference images to split target and common factors into two different subsets of latent variables. This directly enables tasks such as conditional image synthesis and attribute transfer. In this section, we illustrate potential applications of our model in these settings.
Conditional Image Synthesis. The goal is to transform real images by modifying only the target factors. For instance, given a face image of an individual, we aim to generate images of the same subject exhibiting different facial expressions. For this purpose, we use our model to infer the common factors z. Then, we sample a vector e from the prior p(e|y=0) and use the generator network to obtain a new image from z and e. In Fig. 4 we show examples of samples generated by Rb-VAE and sRb-VAE following this procedure. As we can observe, sRb-VAE generates more convincing results than its non-symmetric counterpart. On the AffectNet database, the amount of variability in Rb-VAE samples is quite low. In contrast, sRb-VAE is able to generate more diverse expressions involving eye, mouth and eyebrow movements. Looking at the MNIST samples, we can draw similar conclusions. Whereas both methods generate transformations related to digit color, Rb-VAE does not model scale variations in e, while sRb-VAE does. This observation is coherent with the results reported in Tab. 1, where Rb-VAE offers a poor estimation of the scale.
Visual Attribute Transfer. Here we transfer target factors between a pair of images A and B. For example, given two samples from the MNIST dataset, the goal is to generate a new image with the digit of A rendered in the style of B. With our model, this is easily achieved by synthesizing a new image from the common factors z inferred from A and the target factors e inferred from B. Fig. 5 shows images generated by sRb-VAE and Rb-VAE in this scenario. We can draw conclusions similar to those of the previous experiment: Rb-VAE is not able to swap target factors related to digit scale on MNIST, unlike sRb-VAE, which better models this type of variation. On the AffectNet images, both methods keep most of the information regarding the identity of the subject, but again Rb-VAE leads to weaker expression changes than sRb-VAE.
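Conceptually, attribute transfer reduces to crossing the inferred latent variables of the two images. A minimal sketch, where the encoders and generator passed as arguments are stand-ins for the trained networks of the model:

```python
import numpy as np

def attribute_transfer(img_a, img_b, encode_z, encode_e, generate):
    """Transfer target factors (e.g. expression or style) from B to A:
    keep the common factors z of A and combine them with the target
    factors e of B. encode_z, encode_e and generate are stand-ins for
    the trained inference and generator networks."""
    z_a = encode_z(img_a)   # common factors of A (identity, pose, ...)
    e_b = encode_e(img_b)   # target factors of B (expression / style)
    return generate(z_a, e_b)
```

The test below uses toy encoders that simply slice a vector, to check that the output combines A's common part with B's target part.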
These qualitative results demonstrate that the standard variational objective of the VAE is suboptimal for training our model, and that the symmetric KL divergence objective used in sRbVAE leads to a better disentanglement of the common and target factors. Additional results are shown in Appendix A.5.
6 Conclusions
In this paper we have introduced the reference-based disentangling problem and proposed reference-based variational autoencoders to address it. We have shown that the standard variational learning objective used to train VAEs can lead to degenerate solutions when applied in our setting, and proposed an alternative training strategy that exploits adversarial learning. Comparing the proposed model with previous state-of-the-art approaches, we have shown its ability to learn disentangled representations from minimal supervision, and its application to tasks such as feature learning, conditional image generation and attribute transfer.
References
 Bengio et al. (2013) Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. PAMI, 2013.
 Bickel et al. (2009) Bickel, S., Brückner, M., and Scheffer, T. Discriminative learning for differing training and test distributions. In ICML, 2009.

 Bouchacourt et al. (2018) Bouchacourt, D., Tomioka, R., and Nowozin, S. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In AAAI, 2018.
 Chen et al. (2018) Chen, T. Q., Li, X., Grosse, R., and Duvenaud, D. Isolating sources of disentanglement in variational autoencoders. NeurIPS, 2018.
 Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
 Denton et al. (2017) Denton, E. L. et al. Unsupervised learning of disentangled representations from video. In NIPS, 2017.
 Desjardins et al. (2012) Desjardins, G., Courville, A., and Bengio, Y. Disentangling factors of variation via generative entangling. ICML, 2012.
 Donahue et al. (2018) Donahue, C., Lipton, Z. C., Balsubramani, A., and McAuley, J. Semantically decomposing the latent spaces of generative adversarial networks. ICLR, 2018.
 Donahue et al. (2017) Donahue, J., Krähenbühl, P., and Darrell, T. Adversarial feature learning. ICLR, 2017.
 Dumoulin et al. (2017) Dumoulin, V., Belghazi, I., Poole, B., Mastropietro, O., Lamb, A., Arjovsky, M., and Courville, A. Adversarially learned inference. ICLR, 2017.
 Ekman & Rosenberg (1997) Ekman, P. and Rosenberg, E. L. What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). Oxford University Press, USA, 1997.
 Feng et al. (2018) Feng, Z., Wang, X., Ke, C., Zeng, A.-X., Tao, D., and Song, M. Dual swap disentangling. In NeurIPS, pp. 5898–5908, 2018.
 Gonzalez-Garcia et al. (2018) Gonzalez-Garcia, A., van de Weijer, J., and Bengio, Y. Image-to-image translation for cross-domain disentanglement. NeurIPS, 2018.
 Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In NIPS, 2014.
 Higgins et al. (2017) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017.
 Hsu et al. (2017) Hsu, W.-N., Zhang, Y., and Glass, J. Unsupervised learning of disentangled and interpretable representations from sequential data. In NIPS, 2017.
 Huszár (2017) Huszár, F. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
 Karras et al. (2018) Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of gans for improved quality, stability, and variation. ICLR, 2018.
 Kim & Mnih (2018) Kim, H. and Mnih, A. Disentangling by factorising. ICML, 2018.
 Kingma & Ba (2015) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. ICLR, 2015.
 Kingma & Welling (2014) Kingma, D. and Welling, M. Autoencoding variational Bayes. ICLR, 2014.
 Kingma et al. (2014) Kingma, D., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In NIPS, 2014.
 Kulkarni et al. (2015) Kulkarni, T. D., Whitney, W. F., Kohli, P., and Tenenbaum, J. Deep convolutional inverse graphics network. In NIPS, 2015.
 Kumar et al. (2018) Kumar, A., Sattigeri, P., and Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. ICLR, 2018.
 Larsen et al. (2015) Larsen, A. B. L., Sønderby, S. K., Larochelle, H., and Winther, O. Autoencoding beyond pixels using a learned similarity metric. ICML, 2015.
 LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
 Li et al. (2017) Li, C., Liu, H., Chen, C., Pu, Y., Chen, L., Henao, R., and Carin, L. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017.
 Makhzani et al. (2016) Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., and Frey, B. Adversarial autoencoders. ICLR, 2016.
 Mathieu et al. (2016a) Mathieu, M., Couprie, C., and LeCun, Y. Deep multi-scale video prediction beyond mean square error. ICLR, 2016a.
 Mathieu et al. (2016b) Mathieu, M. F., Zhao, J. J., Zhao, J., Ramesh, A., Sprechmann, P., and LeCun, Y. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016b.
 Mescheder et al. (2017) Mescheder, L., Nowozin, S., and Geiger, A. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. ICML, 2017.
 Mollahosseini et al. (2017) Mollahosseini, A., Hasani, B., and Mahoor, M. H. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 2017.
 Narayanaswamy et al. (2017) Narayanaswamy, S., Paige, T. B., van de Meent, J.-W., Desmaison, A., Goodman, N., Kohli, P., Wood, F., and Torr, P. Learning disentangled representations with semi-supervised deep generative models. In NIPS, 2017.
 Pu et al. (2016) Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., and Carin, L. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
 Pu et al. (2018) Pu, Y., Chen, L., Dai, S., Wang, W., Li, C., and Carin, L. Symmetric variational autoencoder and connections to adversarial learning. AISTATS, 2018.

 Rezende et al. (2014) Rezende, D., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
 Rosca et al. (2017) Rosca, M., Lakshminarayanan, B., Warde-Farley, D., and Mohamed, S. Variational approaches for autoencoding generative adversarial networks. arXiv preprint arXiv:1706.04987, 2017.
 van den Oord et al. (2016) van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
 Villegas et al. (2017) Villegas, R., Yang, J., Hong, S., Lin, X., and Lee, H. Decomposing motion and content for natural video sequence prediction. ICLR, 2017.
 Xiong & De la Torre (2013) Xiong, X. and De la Torre, F. Supervised descent method and its applications to face alignment. In CVPR, 2013.
 Yan et al. (2016) Yan, X., Yang, J., Sohn, K., and Lee, H. Attribute2image: Conditional image generation from visual attributes. In ECCV. Springer, 2016.
 Yang et al. (2015) Yang, J., Reed, S. E., Yang, M.-H., and Lee, H. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In NIPS, 2015.
Appendix A
A.1 Mathematical derivations
Equivalence with Eq. (3):
(8)  
(9)  
(10)  
We use and to denote the entropies of the reference and unlabelled distributions and , respectively. Note that they can be ignored during the minimization, since they are constant w.r.t. the parameters and . For the second equality, we have used the definitions , and assumed . Moreover, we have exploited the fact that and are defined as delta functions and, therefore, . For the sake of brevity, we denote and .
Equivalence with the expression in Eq. (5):
(11)  
(12)  
(13)  
(14) 
We have used the same definitions and assumptions discussed above. Moreover, we denote by and the entropies of the priors and . Again, these terms can be ignored when optimizing w.r.t. the parameters and .
Equivalence between the minimization of the symmetric KL divergence in Eq. (4) and the expression in Eq. (6):
(15)  
(16)  
(17)  
(18) 
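As a reminder, the identity underlying this derivation is the standard decomposition of the symmetric KL divergence into the two directed divergences (here stated generically for distributions $p$ and $q$; in Eq. (4) these are the model and data distributions):

```latex
D_{\mathrm{SKL}}(p, q)
  = D_{\mathrm{KL}}(p \,\|\, q) + D_{\mathrm{KL}}(q \,\|\, p)
  = \mathbb{E}_{p(x)}\!\left[\log \tfrac{p(x)}{q(x)}\right]
  + \mathbb{E}_{q(x)}\!\left[\log \tfrac{q(x)}{p(x)}\right].
```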
A.2 Pseudocode for the adversarial learning procedure
Algorithm 1 shows pseudocode for the adversarial learning algorithm described in Sec. 4.3 of the paper.
A.3 Datasets
Examples of reference and unlabelled images for MNIST and AffectNet are shown in Fig. 6. In the following, we provide more details about the datasets used.
A.3.1 MNIST
We use a slightly modified version of the MNIST images: the size is increased to pixels and an edge-detection procedure is applied to keep only the boundaries of the digits. We obtain the samples in the unlabelled dataset by applying the following transformations to the MNIST images:

Width: Generate a random integer in the range using a uniform distribution. Apply a dilation operation to the image using a square kernel with pixel size equal to the generated number.

Color: Generate a random 3D vector using a uniform distribution. Normalize the resulting vector as . Multiply the RGB components of all the pixels in the image by .

Size: Generate a random number in the range using a uniform distribution. Downscale the image by a factor equal to the generated number. Apply zero-padding to the resulting image in order to recover the original resolution.
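The three transformations can be sketched in plain NumPy as follows. This is a minimal illustration: the kernel size and scale factor are left as parameters, since their exact sampling ranges are not reproduced here, and the naive sliding-max dilation is only meant to show the operation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dilate(img, k):
    """Grey dilation of a 2D image with a k x k square kernel (naive sliding max)."""
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def colorize(img):
    """Multiply a grayscale image by a random unit-norm RGB vector."""
    c = rng.uniform(size=3)
    c = c / np.linalg.norm(c)
    return img[..., None] * c          # (H, W) -> (H, W, 3)

def rescale(img, factor):
    """Nearest-neighbour downscale by `factor`, then zero-pad back to size."""
    h, w = img.shape
    nh, nw = int(h * factor), int(w * factor)
    ys = (np.arange(nh) / factor).astype(int)
    xs = (np.arange(nw) / factor).astype(int)
    small = img[np.ix_(ys, xs)]
    out = np.zeros_like(img)
    y0, x0 = (h - nh) // 2, (w - nw) // 2
    out[y0:y0 + nh, x0:x0 + nw] = small
    return out
```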
A.3.2 AffectNet
Reference Set Collection. We collected a reference set of face images with neutral expression. We applied specific queries to image search engines in order to obtain a large number of faces. Then, five different annotators filtered them to keep only images showing a neutral expression. The motivation for this data collection was that we found many of the neutral annotations in the AffectNet dataset (Mollahosseini et al., 2017) to be inaccurate. As detailed in the original paper, the inter-observer agreement is significantly lower for neutral images. In contrast, in our reference set, each image was annotated as “neutral” / “non-neutral” by two different annotators. In order to ensure a higher label quality than AffectNet, only the images where both annotators agreed were added to the reference set.
Preprocessing. In order to remove 2D affine transformations such as scaling or in-plane rotations, we apply an alignment process to the face images. We localize facial landmarks using the approach of Xiong & De la Torre (2013). Then, we apply Procrustes analysis to find an affine transformation aligning the detected landmarks with a mean shape. Finally, we apply the transformation to the image and crop it. The resulting image is then resized to a resolution of pixels.
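The Procrustes step can be sketched as follows. This is a generic least-squares similarity-transform fit (scale, rotation, translation) via SVD for 2D landmarks, not the exact implementation used for the paper:

```python
import numpy as np

def procrustes_align(landmarks, mean_shape):
    """Fit the similarity transform x -> s * x @ R + t mapping
    `landmarks` (N x 2) onto `mean_shape` (N x 2) in least squares."""
    mu_l = landmarks.mean(axis=0)
    mu_m = mean_shape.mean(axis=0)
    L = landmarks - mu_l
    M = mean_shape - mu_m
    # Optimal rotation via SVD of the cross-covariance (Kabsch method),
    # with the sign correction that excludes reflections.
    U, S, Vt = np.linalg.svd(L.T @ M)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    # Optimal isotropic scale and translation given R.
    s = (S * np.diag(D)).sum() / (L ** 2).sum()
    t = mu_m - s * mu_l @ R
    return s, R, t
```

With the fitted parameters, every landmark (and, by extension, the image) is mapped by `s * x @ R + t` before cropping.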
A.4 Network architectures
Fig. 7 illustrates the network architectures used in our experiments. CN refers to the pixelwise normalization described in Karras et al. (2018). FC denotes a fully-connected layer. For the Leaky ReLU non-linearities, we have used a slope of . Given that we normalize the images to the range , we use a hyperbolic tangent function as the last layer of the generator. For the discriminator , we use the same architecture shown for , but removing the input corresponding to . For the Adam optimizer (Kingma & Ba, 2015), we used and . Note that the described architectures and hyperparameters follow standard settings used in most previous GAN/VAE works.
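As a reference for the CN blocks, pixelwise feature normalization as introduced by Karras et al. (2018) rescales the feature vector at each spatial position to approximately unit norm. A minimal NumPy version:

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """Pixelwise feature normalization (Karras et al., 2018).

    x has shape (batch, channels, height, width); each spatial position's
    channel vector is divided by the root of its mean squared activation.
    """
    return x / np.sqrt((x ** 2).mean(axis=1, keepdims=True) + eps)
```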
In preliminary experiments, we found that the discriminator in sRbVAE can start to ignore the inputs corresponding to the latent variables, focusing only on real and generated images. In order to mitigate this problem during training, we found it effective to randomly set to zero the inputs of the last fully-connected layer corresponding to either the latent variables or the image features. Note that this strategy is only used for sRbVAE and sVAE in our experiments, and it is not necessary for the other evaluated baselines: these two methods are the only ones employing discriminators that receive both images and features as input. We set the dropout probability to . We found that this default value worked well for both methods on all the datasets, and no specific fine-tuning of this hyperparameter was necessary to mitigate the described phenomenon.
A.5 Additional Results
Figures 8 and 9 show additional qualitative results for conditional image generation and visual attribute transfer, in the same spirit as the figures in Section 5.4. In order to provide more results for the conditional image generation task, we also provide two videos in the supplementary material. These videos contain animations generated by interpolating over the latent space corresponding to the target factors (results are shown for the MNIST and AffectNet datasets). In Fig. 10, we also show additional images generated by sRbVAE trained on the AffectNet dataset. Different from the previous cases, these images have been generated by injecting random noise into the generator (for both latent-variable subsets). Note that different target factors generate similar expressions in images generated from different common factors. The additional results further support the conclusions drawn in the main paper.