Y-Autoencoders: disentangling latent representations via sequential-encoding

07/25/2019 ∙ by Massimiliano Patacchiola, et al. ∙ Snap Inc.

In the last few years there have been important advancements in generative models, with the two dominant approaches being Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). However, standard Autoencoders (AEs) and closely related structures have remained popular because they are easy to train and adapt to different tasks. An interesting question is whether we can achieve state-of-the-art performance with AEs while retaining their good properties. We propose an answer to this question by introducing a new model called Y-Autoencoder (Y-AE). The structure and training procedure of a Y-AE enclose a representation into an implicit and an explicit part. The implicit part is similar to the output of an autoencoder, and the explicit part is strongly correlated with labels in the training set. The two parts are separated in the latent space by splitting the output of the encoder into two paths (forming a Y shape) before decoding and re-encoding. We then impose a number of losses, such as a reconstruction loss and a loss on the dependence between the implicit and explicit parts. Additionally, the projection onto the explicit manifold is monitored by a predictor that is embedded in the encoder and trained end-to-end with no adversarial losses. We provide significant experimental results on various domains, such as separation of style and content, image-to-image translation, and inverse graphics.




1 Introduction

Figure 1: The structure and functioning of a Y-AE at training time. The reconstruction is divided into two branches having shared parameters: left (actual explicit content) and right (random explicit content). At test time the two branches are merged, resulting in a standard autoencoder.

In this article we present a new training procedure for conditional autoencoders (cAE) [3, 2] that allows a standard cAE to obtain remarkable results in multiple conditional tasks. We call the resulting model Y-Autoencoder (Y-AE), where the letter Y is a reference to the particular branching structure used at training time. Y-AEs generally represent explicit information via discrete latent units, and implicit information via continuous units.

Consider the family of generative models where an input x is conditioned on some paired explicit information y, such that we can encode the input through a function E(x) and then decode via D(z, y) to generate new samples x̃. The explicit information y can be any additional information about the inputs, such as labels, tags, or group assignments. We may replace E and D by universal approximators, such as neural networks. Estimating the parameters of such universal approximators describes the problem of fitting a cAE.
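As a toy illustration of this setup (the dimensions and the linear encoder/decoder below are our own stand-ins, not the paper's networks), the encode-then-decode pipeline of a cAE can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only).
X_DIM, Z_DIM, Y_DIM = 16, 4, 3

# Linear maps standing in for the neural encoder E and decoder D.
W_enc = rng.standard_normal((X_DIM, Z_DIM))
W_dec = rng.standard_normal((Z_DIM + Y_DIM, X_DIM))

def encode(x):
    """E: map an input x to a latent code z."""
    return x @ W_enc

def decode(z, y):
    """D: reconstruct from the code z conditioned on the explicit label y."""
    return np.concatenate([z, y]) @ W_dec

x = rng.standard_normal(X_DIM)
y = np.eye(Y_DIM)[1]              # one-hot explicit information
x_tilde = decode(encode(x), y)    # conditional reconstruction
assert x_tilde.shape == (X_DIM,)
```

In practice E and D are deep convolutional networks; the linear maps here only illustrate the data flow.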

While cAEs have met with some success, they often struggle to disentangle the latent representation. In other words, the fitting procedure often ignores y, since there is no effective regularization to enforce an effect. For this reason, recent work has mainly tackled conditional generation through Variational Autoencoders (VAEs) [15] and Generative Adversarial Networks (GANs) [10]. The former rely on a probabilistic approach that can capture relevant properties of the input space and constrain them inside a latent distribution via variational inference. The latter are based on a zero-sum learning rule that simultaneously optimizes a generator and a discriminator until convergence. Both VAEs and GANs can be conditioned on the explicit information y to generate new samples. Recent work in this direction has explored facial attribute generation [6], natural image descriptions [8], people in clothing [17], and video frame prediction [19].

However, both VAEs and GANs suffer from a variety of problems. GANs are notoriously difficult to train, and may suffer from mode collapse when the state space is implicitly multimodal [10]. VAEs rarely include discrete units due to the inability to apply backpropagation through those layers (discrete sampling operations create discontinuities, giving the objective function zero gradient with respect to the weights), preventing the use of categorical variables to represent discrete properties of the input space. Both have difficulty exploiting rich prior problem structure, and must attempt to discover this structure by themselves, leaving valuable domain knowledge unused.

The Y-AE provides a flexible method to train standard cAEs that avoids these drawbacks, and can compete with specialized approaches. As there is no structural change, the Y-AE simply becomes a cAE at test time, and it is possible to assign values to the discrete units in the explicit layer whilst keeping the implicit information unchanged. It is important to notice that for a Y-AE the definition of implicit and explicit is very broad. The explicit information can either be the label assigned to each element of the dataset, or a weak label that just identifies a group assignment.

The contribution of this article can be summarized in the following points:

  1. Our core contribution is a new deep architecture called Y-AE, a conditional generative model that can effectively disentangle implicit and explicit information.

  2. We define a new training procedure based on the sequential-encoding of the reconstruction, which exploits weight sharing to enforce latent disentanglement. This procedure is generic enough to be used in other contexts or merged with other methods.

  3. We perform quantitative and qualitative experiments to verify the effectiveness of Y-AEs and the possibility of using them in a large variety of domains with minimal adjustments.

  4. We provide the open-source code to reproduce the experiments: https://github.com/mpatacchiola/Y-AE

1.1 Previous work

Autoencoders. Deep convolutional inverse graphics networks (DC-IGNs) [16] use a graphics code layer to disentangle specific information (e.g. pose, light). DC-IGNs are bounded by the need to organize the data into two sets of mini-batches, the first corresponding to changes in only a single extrinsic variable, the second to changes in only the intrinsic properties. In contrast, Y-AEs do not require such a rigid constraint. Deforming Autoencoders [24] disentangle shape from appearance through the use of a deformable template: a spatial deformation warps the texture to the observed image coordinates.

VAEs. Conditional VAEs have been used in [26] to produce a disentangled latent space, allowing for model generation and attribute manipulation. A variant of VAEs, named beta-VAE [12], has shown state-of-the-art results in disentangled factor learning through the adoption of a hyperparameter β that balances latent capacity and independence constraints. A cycle-consistent VAE was proposed in [11]. This VAE is based on the minimization of the changes applied in a forward and reverse transform, given a pair of inputs.

Adversarial. Adversarial autoencoders (adversarial-AEs) [21] achieve disentanglement by matching the aggregated posterior of the hidden code vector with an arbitrary prior distribution, using a discriminator trained with an adversarial loss. In [13] the disentangled representation is learned through mixing and unmixing feature chunks in latent space with the supervision of an adversarial loss. The authors also use a form of sequential encoding that has some similarities with the one we propose. However, the key difference is that in a Y-AE part of the latent information is explicit and controllable, whereas in [13] it is not.

GANs. A conditional form of GAN has been introduced in [22]; it is constructed such that the explicit information is passed to both the generator and the discriminator. [5] propose InfoGAN, an information-theoretic extension of GANs, with the aim of performing unsupervised disentanglement. [27] used a type of GAN named CycleGAN to concurrently optimize two mapping functions and two discriminators through an adversarial loss. Differently from CycleGANs, Y-AEs rely on a single network, and consistency is ensured in the latent space rather than in the image space.

2 Description of the method

2.1 Notation

We define an autoencoder as a neural network consisting of two parts: encoder and decoder. The encoder is a function E performing a non-linear mapping from an input x to a latent representation z = E(x). We refer to this latent representation as the code. The decoder is a function D performing a non-linear mapping from the latent representation to a reconstruction x̃ = D(z). Encoder and decoder are parametrized by θ_E and θ_D respectively; these are omitted in the rest of the article to keep the notation uncluttered. Parameters are adjusted during an online training phase via stochastic gradient descent, minimizing the mean squared error between the input x and the reconstruction x̃ on a random mini-batch of samples. The network is designed such that dim(z) < dim(x), forming a bottleneck. This ensures that if dim(z) is small, z must be a compressed version of x.

This article focuses on the particular case where we have access to some label information y, and have divided the latent representation into two parts z = (z_i, z_e), where the i stands for implicit and the e stands for explicit. The distinction between the two is that the explicit information z_e should be approximately equal to y, whereas the implicit information z_i should be independent of it. We denote a decoder which takes a separable hidden state as input by D(z_i, z_e). The label y may take the form of a one-hot vector (in a classification setting) or a vector of real values (in a regression).
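This split can be sketched as follows (the 32/10 sizes anticipate the MNIST setting used later; the code is illustrative only):

```python
import numpy as np

Z_IMPLICIT, Z_EXPLICIT = 32, 10   # sizes used later for MNIST

def split_code(z):
    """Divide the code z into its implicit (z_i) and explicit (z_e) parts."""
    return z[:Z_IMPLICIT], z[Z_IMPLICIT:]

z = np.random.default_rng(0).random(Z_IMPLICIT + Z_EXPLICIT)
z_i, z_e = split_code(z)
assert z_i.shape == (32,) and z_e.shape == (10,)
```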

2.2 Overview

The encoding phase of a Y-AE is identical to that of a standard cAE, but the reconstruction is quite different, as it is divided into two branches, left and right. These two branches share the same weights, similarly to a siamese network [4]. Thus, the Y-AE requires no more parameters than its cAE counterpart (see Figure 1).

The implicit information z_i produced by E is given as input to the two branches, whereas the explicit information z_e is discarded. Instead of z_e, the left branch takes as input the actual label y and the right branch takes as input a random label ŷ. The decoding phase produces two reconstructions x̃_l = D(z_i, y) and x̃_r = D(z_i, ŷ), where the subscript l specifies the left branch, and the subscript r the right branch. At this point we have two images which have identical implicit representations but different explicit ones.

The encoding stage is then applied to the two reconstructions (a process we call sequential-encoding), producing two latent representations (ẑ_i,l, ẑ_e,l) and (ẑ_i,r, ẑ_e,r). The sequential encoding is used to verify that the implicit representation is not altered when only the explicit representation changes. In addition, the right branch ensures that the explicit information is not also hidden in the implicit part, since it must be able to propagate through (see Figure 2). It is important to notice that the sequential-encoding is only applied at training time, as shown in Figure 1. The second encoding stage concludes the forward pass. The backward pass is based on the simultaneous optimization of multiple loss functions and is described in the next section.
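A minimal sketch of the training-time forward pass, assuming toy linear stand-ins for the shared encoder and decoder (names and sizes are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
X_DIM, ZI_DIM, Y_DIM = 16, 4, 3          # toy sizes

W_enc = rng.standard_normal((X_DIM, ZI_DIM + Y_DIM))
W_dec = rng.standard_normal((ZI_DIM + Y_DIM, X_DIM))

def encode(x):
    z = x @ W_enc
    return z[:ZI_DIM], z[ZI_DIM:]        # implicit z_i, explicit z_e

def decode(z_i, y):
    return np.concatenate([z_i, y]) @ W_dec

def y_ae_forward(x, y, y_rand):
    """Training-time forward pass; both branches reuse W_enc and W_dec."""
    z_i, _ = encode(x)                   # inferred explicit part is discarded
    x_left = decode(z_i, y)              # left branch: actual label y
    x_right = decode(z_i, y_rand)        # right branch: random label y_rand
    # Sequential encoding: re-encode both reconstructions.
    return x_left, encode(x_left), encode(x_right)

x = rng.standard_normal(X_DIM)
y, y_rand = np.eye(Y_DIM)[0], np.eye(Y_DIM)[2]
x_left, (zi_l, ze_l), (zi_r, ze_r) = y_ae_forward(x, y, y_rand)
assert zi_l.shape == zi_r.shape == (ZI_DIM,)
```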

Figure 2: Graphical representation of sequential-encoding. (a) In the first encoding phase the input vector (yellow square) is encoded in two manifolds: explicit (green square) and implicit (red square). In the second encoding phase, the two branches have different purposes. (b) In the left branch the content is not manipulated and the latent representation has to be consistent with the first phase. (c) In the right branch the explicit content is manipulated meaning that it should change in accordance with the manipulation, while the implicit code should stay the same.

2.3 Definition of the loss function

The loss function consists of four separate components.

Firstly, in the left branch of a Y-AE the label y is assigned explicitly, replacing the z_e component inferred by the encoder. This is done to avoid instability in the preliminary learning phase, when the classifier predictions are still inaccurate. To ensure appropriate reconstructions, we penalize deviations between x and x̃_l using the standard least-squared-error reconstruction loss:

    L_rec = || x - x̃_l ||^2

where x̃_l = D(z_i, y).

Secondly, we include a computationally cheap cross-entropy loss penalty between z_e and y:

    L_cls = H(y, z_e)

This is done because this particular part of the encoder output can be considered as the output of a predictor, identifying which type of explicit content is present in the input x.

This predictor-like aspect is made direct use of in the right branch of the Y-AE, where it is deployed to verify the consistency of the relation ẑ_e,r ≈ ŷ. This is ensured using a third, cross-entropy loss:

    L_exp = H(ŷ, ẑ_e,r)
Finally, on the left branch a sequential-encoding is also performed. The implicit vector ẑ_i,l can be compared with its right counterpart ẑ_i,r. Since the implicit information has not been manipulated, it should be consistent across the two branches. This constraint can be added as a Euclidean distance penalty:

    L_imp = || ẑ_i,l - ẑ_i,r ||^2
The losses defined above are then integrated in the global loss function

    L = L_rec + L_cls + λ_e L_exp + λ_i L_imp

where the relative contribution of the explicit and implicit losses can be controlled by altering λ_e and λ_i respectively. Note that the reconstruction and classification losses have not been given similar weightings, since the first is the main reconstruction objective, and the second only acts in support of the explicit loss (which is already accounted for). An ablation study of the effect of altering λ_e and λ_i is presented in the experimental section (Section 3.2).
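Assuming squared-error norms for the reconstruction and implicit penalties (our reading of the losses above; the helper names are our own), the global loss can be sketched as:

```python
import numpy as np

def cross_entropy(target, pred, eps=1e-9):
    """Toy cross-entropy between a target distribution and a prediction."""
    return -np.sum(target * np.log(pred + eps))

def y_ae_loss(x, x_left, y, z_e, y_rand, ze_right, zi_left, zi_right,
              lam_e=1.0, lam_i=1.0):
    l_rec = np.sum((x - x_left) ** 2)          # reconstruction, left branch
    l_cls = cross_entropy(y, z_e)              # predictor on the input
    l_exp = cross_entropy(y_rand, ze_right)    # explicit consistency, right
    l_imp = np.sum((zi_left - zi_right) ** 2)  # implicit consistency
    return l_rec + l_cls + lam_e * l_exp + lam_i * l_imp

# With perfect reconstruction and matching implicit codes, only the
# two cross-entropy terms remain:
x = np.zeros(4)
y = np.array([1.0, 0.0])
y_rand = np.array([0.0, 1.0])
probs = np.array([0.5, 0.5])
loss = y_ae_loss(x, x, y, probs, y_rand, probs, np.ones(3), np.ones(3))
```

Setting lam_e or lam_i to zero reproduces the ablation conditions studied in Section 3.2.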

3 Experiments

In order to demonstrate the efficacy of the Y-AE training scheme, we use a straightforward autoencoder architecture. In Section 3.2 we remove various parts of the Y-AE structure to show that they are all necessary; in Section 3.3 we compare the Y-AE training method to some simpler baseline training methods; and in Section 3.4 we evaluate the Y-AE structure on a variety of different tasks in a qualitative manner to show its applicability to a variety of domains.

3.1 Implementation

The encoders used in these experiments are based on the principle of simultaneously halving the spatial domain whilst doubling the channel domain, as successfully used by [16] (the opposite is done in the decoding phase). Each network module is made of three consecutive operations: convolution (or transpose convolution in the decoder), batch normalization, and leaky-ReLU. Four such modules have been used for the smaller inputs, increasing to six for the larger ones. Reduction (or augmentation) is performed via stride-2 convolution (or transpose convolution). No pooling operations have been used at any stage. The input images have been normalized to continuous values. The sigmoid activation function has been used in the implicit portion of the code, and softmax in the explicit part. All the other units use a leaky-ReLU. The parameters have been initialized following the Glorot uniform initialization scheme [9]. To stabilize the training in the first iterations, we initialized the parameters of the input to the implicit layer's sigmoid activation function by randomly sampling from a Gaussian distribution, with the bias set to a negative value such that the sigmoid is initially saturated toward zero. All the models have been trained using the Adam optimizer [14]. The models have been implemented in Python using the Tensorflow library, and trained on a cluster of NVIDIA GPUs of the following families: TITAN-X, K-80, and GTX-1060. A detailed description of the network structures and hyperparameters is reported in the supplementary material.
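The halve-spatial/double-channel principle amounts to the following shape progression (the starting shape and module count below are illustrative, not taken from the paper):

```python
def encoder_shapes(height, width, channels, n_modules):
    """Shape progression through stride-2 modules: each module halves
    the spatial size and doubles the channel count."""
    shapes = [(height, width, channels)]
    for _ in range(n_modules):
        height, width, channels = height // 2, width // 2, channels * 2
        shapes.append((height, width, channels))
    return shapes

# e.g. a hypothetical 32x32x16 input through four modules:
assert encoder_shapes(32, 32, 16, 4)[-1] == (2, 2, 256)
```

The decoder simply runs this progression in reverse with transpose convolutions.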

Figure 3: Ablation study for a Y-AE trained on the MNIST dataset. Top: plots of the explicit (red), implicit (blue) and reconstruction (green) losses over training (100 epochs, 3 runs). Note that the plots have the same scale. Bottom: three random samples from the last iteration; for each group we report the input (left digit), reconstruction (central digit), and reconstruction with random content (right digit). (a) Explicit and implicit losses are not minimized; note that the digit content cannot be changed. (b) Only the explicit loss is minimized; changing the content alters the style. (c) Only the implicit loss is minimized; changing the content works very poorly. (d) Both explicit and implicit losses are minimized; the content can be changed without altering the style.

3.2 Ablation study

In this experimental section we compare against different ablations of the full loss (Equation 5), to provide a deeper understanding of the results presented in Section 3.4. To do this we vary the mixing coefficients λ_e and λ_i, which regulate the weight of the explicit (Equation 3) and implicit (Equation 4) losses, systematically setting each to 0 or 1. As such, we either minimize neither of the losses (λ_e = 0, λ_i = 0), only the explicit loss (λ_e = 1, λ_i = 0), only the implicit loss (λ_e = 0, λ_i = 1), or both (λ_e = 1, λ_i = 1).

The focus of this section is a widely used benchmark: the Modified National Institute of Standards and Technology (MNIST) dataset [18]. This dataset is composed of a training set of 60,000 greyscale handwritten digit images and a test set of 10,000 images. We aim at separating implicit information, such as the style of the digits (orientation, stroke, size, etc.), from explicit information represented by the digit value (from 0 to 9). We used a Y-AE with 32 units in the implicit portion of the code and 10 units in the explicit portion. We trained the model for 100 epochs using the Adam optimizer with no weight decay. The results are the average of three runs for each condition.

Table 1: Comparison of the average loss value (three runs) on the 10,000 images of the MNIST test set for all the ablation conditions. L is the global loss. L_rec is the reconstruction loss (MSE between x and x̃). L_cls is the classification loss. L_exp is the explicit loss. L_imp is the implicit loss.

An overview of the results is reported in Figure 3 and the average loss on the test set in Table 1. In the first condition (λ_e = 0, λ_i = 0; Figure 3-a) there are no constraints on the two losses and the network reconstructs well on the test set. This is achieved by exploiting the implicit portion of the code and ignoring the information carried by the explicit part (note the large disparity in implicit and explicit loss). The triplet of samples in the first column shows that the explicit information has been ignored, as each group displays three almost identical digits, indicating that the two branches of the Y-AE produce identical outputs. In the second condition (λ_e = 1, λ_i = 0; Figure 3-b) only the explicit loss is regularized. This forces the output of the right branch to take account of the explicit information. However, as there is no regularization on the implicit portion of the code, the network learns to use it to carry the explicit information. The samples produced by the right branch show that the explicit content has been kept but the style partially corrupted. This condition is further investigated in Section 3.3. The third case (λ_e = 0, λ_i = 1; Figure 3-c) only regularizes the implicit loss. The explicit loss rapidly diverges, indicating that the reconstruction on the right branch does not resemble the digit it ought to. The samples confirm this, showing the right reconstructions to be more similar to the inputs than to the random content. Finally, the fourth and last condition (λ_e = 1, λ_i = 1; Figure 3-d) is the complete loss function with all the components being minimized. Both explicit and implicit losses rapidly converge toward zero, whereas the reconstruction loss steadily decreases on the test set. The samples produced clearly show that the style of the input (left digit) is kept and the content changed (right digit).

An overall comparison between all the conditions shows that the implicit loss (blue curve) acts as a regularizer, with the effect of inhibiting the reconstruction on the left branch. This is an expected result, since the implicit loss limits the capacity of the code and ensures that only the high-level information about the style is considered. A qualitative analysis of the samples shows that only the use of both explicit and implicit losses (Figure 3-d) guarantees the disentanglement of style and content, supporting our hypothesis about the functioning of the Y-AE. In particular it is evident how the high-level style information has been correctly codified, with the generated samples incorporating the orientation, stroke, and size of the inputs.

3.3 Comparison against baselines

In this section we compare the proposed method against different baselines on the MNIST dataset. In all cases, we encode the input image, change the explicit information (i.e. the digit value), then decode to produce an image. We use a pre-trained classifier to test whether the generated images have the right appearance. Also, since changing ŷ should change the digits, we test the similarity of the generated digits against the original digits using the MSE (which should be high) and the perceptual structural similarity measure, SSIM [25], which should be low.

We train the autoencoder models using the same set-up described in Section 3.2. The evaluation has been performed by encoding all the inputs in the test set, extracting the implicit code z_i, and randomly sampling (without replacement) six of the ten possible contents ŷ; z_i and ŷ were then used to produce the reconstruction x̃. This procedure generated a dataset of input-label tuples (x̃, ŷ), six times larger than the original test set. For the evaluation classifier, we train an ensemble of five LeNet [18] classifiers on the original dataset.
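A sketch of this evaluation protocol (the six contents sampled per input are inferred from the stated six-fold size of the generated dataset; the encoder/decoder below are toy stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
N_CONTENTS = 10      # MNIST digit classes
K_SAMPLES = 6        # contents sampled per input (inferred from the
                     # six-fold size of the generated dataset)

def build_eval_set(test_inputs, encode, decode):
    """Encode each test input, sample K contents without replacement,
    and decode each (implicit code, content) pair."""
    dataset = []
    for x in test_inputs:
        z_i = encode(x)
        for c in rng.choice(N_CONTENTS, size=K_SAMPLES, replace=False):
            y_hat = np.eye(N_CONTENTS)[c]
            dataset.append((decode(z_i, y_hat), int(c)))
    return dataset

# Toy stand-ins for a trained encoder/decoder:
data = build_eval_set([np.ones(8)] * 5,
                      encode=lambda x: x[:4],
                      decode=lambda z, y: np.concatenate([z, y]))
assert len(data) == 5 * K_SAMPLES
```

Each generated tuple is then scored by the classifier ensemble and compared against the original input with SSIM and MSE.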

Method Accuracy (%) SSIM MSE
cAE + regularizer
adversarial-AE [21]
cVAE [15]
beta-VAE [12]
Y-AE + ablation [our]
Y-AE [our]
Table 2: Comparison of different methods for multiple metrics. Accuracy of the explicit reconstruction (percentage), measured through an independent ensemble of classifiers trained on the MNIST dataset. Internal SSIM and MSE between the input sample and all the 10 reconstructions . Notice that optimal internal metrics should have low SSIM and high MSE meaning that the samples are different from the input. The results are the average of three runs. Best results highlighted in bold.

Figure 4: Samples produced at test time by different methods given the same inputs (leftmost column). Not cherry picked. The Y-AE is able to use both implicit (style) and explicit (digit value) information. As expected, the Y-AE performance is affected by the removal of the implicit loss, showing the importance of both losses. The cAE and the adversarial-AE discard the explicit information, losing the ability to change the content. The VAE has a poor style transfer, especially when the implicit information is pushed closer to the Gaussian prior through β.

Conditional Autoencoders (cAEs). A cAE defines a conditional dependence on explicit information such that the reconstruction is conditioned on both the input x and the label y. This can be considered the main baseline, since a cAE has the same structure as a Y-AE but only relies on a standard MSE loss (see Equation 1) to minimize the distance between the inputs and the reconstructions.

cAE + regularization. We applied a series of regularizers on cAEs to push their performance. Strong regularization may enforce disentanglement of style and content by limiting the amount of information codified in the latent space. We drastically reduced the number of epochs from 100 to 20 and the number of implicit units from 32 to 16, and we applied weight decay. None of these regularizers was used in the Y-AE.

Adversarial-AE. We performed a comparison against an adversarial-AE [21]. As adversarial discriminator we used a multi-layer perceptron with 512 hidden units and leaky-ReLU. It was necessary to apply strong regularization in order to obtain decent results (8 units as code, 20 epochs, weight decay).

VAE. We also compared the method against a conditional VAE (cVAE) [15] and a beta-VAE [12].

Y-AE + ablation. To check whether the Y-AE accuracy is just a result of the fact that it has been trained with a predictor in the loop, we tested against the ablated version of the model with λ_e = 1 and λ_i = 0. This condition produces samples with consistent content, but the style can be partially corrupted (Section 3.2). We expect the accuracy to be lower than that of the Y-AE trained with the complete loss function, because the samples have lower quality in comparison.

The quantitative results are reported in Table 2, and the qualitative results in Figure 4. The accuracy of the Y-AEs is higher than that of most other methods, meaning that the samples carry the right content. As a result we observe that the SSIM is low and the MSE high, as expected when style and content are well separated. Conversely, the accuracy of standard cAEs is close to chance level, because the model produces the same digit (Figure 4-cAE) and ignores the content information ŷ, meaning that only about one in ten of the produced samples is correct. Interestingly, the accuracy of Y-AEs with ablation is fairly high but inferior to the standard counterpart, with the samples showing stylistic artifacts caused by the ablation of the implicit loss. Strong regularization increases the performance of cAEs, but the results are still far from both standard and ablated Y-AEs. The performance of the cVAE and beta-VAE is lower in SSIM and MSE when compared to the Y-AE, with the beta-VAE being slightly better in terms of accuracy. The digits produced by the beta-VAE are clear but the style does not significantly change with the inputs (Figure 4-beta-VAE). This is due to the pressure imposed by β on the Kullback-Leibler divergence, which moves the latent space closer to the Gaussian prior, resulting in low expressivity. In conclusion, the qualitative analysis of the samples (Figure 4) shows that Y-AEs are superior to the other methods on the problem at hand (see Section 3.4 for additional samples).

3.4 Cross-domain evaluation

The evaluation of the method has been done in three ways in order to verify the performance on a wide set of problems. The first test is disentanglement of style and content, the second is pose generation (inverse graphics), and the third is unpaired image-to-image translation.

Disentanglement of style and content. This experiment shows how a Y-AE can be used to disentangle style and content. This is shown through two widely used datasets: the MNIST [18] and the Street View House Number (SVHN) [23] datasets. In this task, the implicit information is the style (orientation, height, width, etc.) and the explicit information is the content (digit value). The mixing coefficients λ_e and λ_i were set separately for the MNIST and SVHN datasets. We report some of the generated samples in Figure 5. On MNIST the implicit units have captured the most important underlying properties, such as orientation, size, and stroke. Similarly, on the SVHN dataset the model has been able to retain the explicit information and codify the salient properties (digit style, background and foreground colours) in the implicit portion of the code.

Figure 5: Samples produced at test time by a Y-AE trained on the MNIST and SVHN datasets. The first column is the input, other columns are the reconstructions given all possible content values. The Y-AE is able to keep the style (orientation, stroke, height, width, etc) and change the content.

Figure 6: Samples produced by a Y-AE trained on the chairs dataset. Given the implicit encoding obtained from an input chair type (leftmost column), it is possible to generate a 360-degree rotation by activating the corresponding discrete units in the explicit portion of the code and then decoding. The network has never seen these chairs before at any orientation. For reasons of space, each row shows only 16 of the 31 generated poses.

Figure 7: Unpaired image-to-image translation (male↔female). Samples produced by a Y-AE trained on unpaired images of celebrities. Top: given the input image (left column), it is possible to produce a reconstruction (central column) or to change the sex (right column) by manipulating the two units in the explicit portion of the code. Bottom: given a reconstruction (left column) and the reconstruction with switched sex (central column), we estimated the mean structural similarity between the two (right column), highlighting in grey the areas with major changes.

Pose generation (inverse graphics). Pose generation consists in producing a complete sequence of poses given a single frame of the sequence. This task is particularly challenging because relevant details of the object may be occluded in the input frame, and the network has to make a conjecture about any missing component. We tested the Y-AE on the 3D chairs dataset [1]. This dataset contains 1393 rendered models of chairs. Each model has two sequences of 31 frames representing a 360-degree rotation around the vertical axis. Following a procedure similar to that reported in [16], we randomly selected 100 models and used them as the test set. We preprocessed the images in a similar way: first we removed 50 pixels from each border, then we resized the resulting greyscale images using bicubic interpolation. The explicit representation has been encoded in the Y-AE using 31 units, one for each discrete pose. The implicit information has been encoded with 481 units, used to codify the properties of the chair model. Results are shown in Figure 6. Even though the network has never seen the test models before, at any orientation, it is able to generalize effectively and produce a full 360-degree rotation.
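Generating a full rotation then amounts to cycling the one-hot explicit code while the implicit chair code stays fixed (the decoder below is a toy stand-in, not the trained network):

```python
import numpy as np

N_POSES = 31    # one discrete explicit unit per pose

def full_rotation(z_i, decode):
    """Generate all poses by cycling the one-hot explicit code while
    keeping the implicit chair code z_i fixed."""
    return [decode(z_i, np.eye(N_POSES)[p]) for p in range(N_POSES)]

# Toy decoder stand-in:
frames = full_rotation(np.zeros(4), lambda z, y: np.concatenate([z, y]))
assert len(frames) == N_POSES
```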

Figure 8: Samples produced by a Y-AE trained on image-to-image translation of a facial attribute (eyeglasses). Each group of images is divided into three columns: the left is the original image, the central is the reconstruction, and the right is the reconstruction with a different attribute. Note that the reconstructions are more blurred compared to other samples because of the use of an L2 norm.

Unpaired image-to-image translation. The aim of this series of experiments is to verify how Y-AEs behave when the explicit information is only provided by a weak label, namely the group assignment. In unpaired training there are two separate sets A and B of inputs and targets that do not overlap, meaning that samples belonging to A are not present in B and vice versa. The goal is to translate an image from one set to the other. Here we focus on three particular types of unpaired translation problem: male↔female, glasses↔no-glasses, and segmented↔natural. For the male↔female and glasses↔no-glasses tasks we used the CelebA dataset [20], a database with more than 200K celebrity images, each with 40 attribute annotations. The images cover large pose variations and have rich annotations (gender, eyeglasses, moustache, face shape, etc.). In the male↔female task we used an L1 penalty on the reconstruction, which generally gives sharper results. In the glasses↔no-glasses task we used instead an L2 reconstruction loss, so as to compare the quality of the samples under both losses. We used networks with two units in the explicit portion of the code, one per group. For the segmented↔natural translation problem we used a subset of the Cityscapes dataset [7]. Cityscapes is based on stereo video sequences recorded in the streets of 50 different cities, and it includes both natural and semantically segmented images. To have unpaired samples, we randomly removed one image of each pair from the dataset, so as to have half natural and half segmented images. This is a fairly limited amount of data, but we considered it an additional test of the performance of the method when data are scarce. As regularization we simply reduced the number of implicit units in the code.
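The unpaired split of Cityscapes can be sketched as follows (a hypothetical helper, not the paper's preprocessing code):

```python
import random

def make_unpaired(pairs, seed=0):
    """Randomly keep one image of each (natural, segmented) pair,
    so no scene appears in both domains."""
    rng = random.Random(seed)
    natural, segmented = [], []
    for nat, seg in pairs:
        if rng.random() < 0.5:
            natural.append(nat)
        else:
            segmented.append(seg)
    return natural, segmented

# Toy scene identifiers standing in for image files:
nat, seg = make_unpaired([(f"n{i}", f"s{i}") for i in range(100)])
assert len(nat) + len(seg) == 100
# No scene index appears in both resulting sets:
assert not {n[1:] for n in nat} & {s[1:] for s in seg}
```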

Figure 9: Unpaired image-to-image translation (segmented↔natural). Samples produced by a Y-AE trained on unpaired images of segmented and natural cityscapes. Given the input image (left column), it is possible to produce a reconstruction (central column) or to change the reconstruction type (right column) by manipulating the two explicit units in the code. The passage from one domain to the other is not always consistent, possibly indicating overfitting problems.

Results obtained on the CelebA dataset are reported in Figure 7 and Figure 8. In the male↔female task, the transition from one sex to the other looks robust in most of the samples (Figure 7-top). To understand which attributes have been codified in the implicit portion of the code, we computed the SSIM [25] between the two reconstructions. Figure 7-bottom reports the greyscale maps based on this metric. The SSIM shows that the most intense changes are localized in the eye region, with a peak on eyebrows and eyelashes. Minor adjustments are applied around the mouth (lips and beard), forehead (wrinkles), cheekbones, ears, and hairline. It is important to note that the model found these differences without any specific supervision, only through the weak labels identifying the groups. In the glasses↔no-glasses task the samples look more blurred because of the use of the ℓ2 loss, but also in this case the transition is robust. The Cityscapes translation task (Figure 9) proved extremely challenging. The model was able to successfully separate the two domains and identify the major factors of variation (sky, road, cars, trees, etc.). However, minor details such as road signs and vehicle type were discarded. We suspect this is due to the large difference between the two domains, the small number of images, and the highly lossy nature of the natural→segmented translation. Further work is required to overcome the difficulties presented in this setting.
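The greyscale change maps can be reproduced with a windowed SSIM [25] computed between the two reconstructions. The sketch below is a minimal numpy implementation: the window size and stabilizing constants follow common defaults rather than values stated in the paper, and the explicit loops are for clarity, not speed.

```python
import numpy as np

def ssim_map(a, b, win=7, c1=(0.01 ** 2), c2=(0.03 ** 2)):
    """Local SSIM between two greyscale images with values in [0, 1]."""
    h, w = a.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            pa = a[i:i + win, j:j + win]
            pb = b[i:i + win, j:j + win]
            mu_a, mu_b = pa.mean(), pb.mean()
            va, vb = pa.var(), pb.var()
            cov = ((pa - mu_a) * (pb - mu_b)).mean()
            out[i, j] = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
                        ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
    return out  # values near 1 mean "unchanged"; low values mark edited regions
```

Visualizing 1 - ssim_map then highlights exactly the regions (eyes, mouth, hairline) where the two reconstructions disagree, which is how the maps in Figure 7-bottom localize the translated attributes.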

4 Discussion and conclusions

In this article we presented a new deep model, the Y-AE, which disentangles implicit and explicit information in the latent space without using variational methods or adversarial losses. The method splits the reconstruction into two branches (with shared weights) and performs a sequential encoding, with an implicit and an explicit loss ensuring the consistency of the representations. We showed through a wide experimental evaluation that the method is effective and that its performance is superior to similar methods.
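As a rough illustration of this training procedure, the toy sketch below traces the two branches and the sequential encoding through a linear encoder/decoder and computes the three losses. The layer sizes, random weights, and exact loss formulations are hypothetical stand-ins for the convolutional networks, embedded predictor, and loss weightings of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
D, ZI, ZE = 8, 4, 2                       # toy sizes: input, implicit, explicit
We = rng.normal(size=(D, ZI + ZE)) * 0.1  # shared (linear) encoder weights
Wd = rng.normal(size=(ZI + ZE, D)) * 0.1  # shared (linear) decoder weights

def encode(x):
    z = x @ We
    return z[:, :ZI], z[:, ZI:]           # split the code: implicit, explicit

def decode(zi, ze):
    return np.concatenate([zi, ze], axis=1) @ Wd

x = rng.normal(size=(5, D))               # a batch of toy inputs
zi, ze = encode(x)
ze_rand = rng.normal(size=ze.shape)       # random explicit content (right branch)

x_left = decode(zi, ze)                   # left branch: actual explicit content
x_right = decode(zi, ze_rand)             # right branch: random explicit content

zi_left, _ = encode(x_left)               # sequential encoding: re-encode
zi_right, ze_right = encode(x_right)      # both reconstructions

rec_loss = np.mean(np.abs(x_left - x))         # reconstruction loss
imp_loss = np.mean((zi_left - zi_right) ** 2)  # implicit consistency
exp_loss = np.mean((ze_right - ze_rand) ** 2)  # explicit consistency (stand-in
                                               # for the embedded predictor)
loss = rec_loss + imp_loss + exp_loss
```

At test time only the left branch is used, so the trained model collapses to a standard autoencoder whose explicit units can be manipulated directly.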

Future work should mainly focus on applying the principles of Y-AEs to GANs and VAEs. For instance, by encoding the implicit information as a Gaussian distribution it would be possible to integrate Y-AEs and VAEs in a unified framework, obtaining the best of both worlds.


  • [1] M. Aubry, D. Maturana, A. A. Efros, B. C. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3762–3769, 2014.
  • [2] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural networks, 2(1):53–58, 1989.
  • [3] D. H. Ballard. Modular learning in neural networks. In AAAI, pages 279–284, 1987.
  • [4] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. In Advances in neural information processing systems, pages 737–744, 1994.
  • [5] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pages 2172–2180, 2016.
  • [6] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8789–8797, 2018.
  • [7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
  • [8] B. Dai, S. Fidler, R. Urtasun, and D. Lin. Towards diverse and natural image descriptions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, pages 2970–2979, 2017.
  • [9] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [11] A. Harsh Jha, S. Anand, M. Singh, and V. Veeravasarapu. Disentangling factors of variation with cycle-consistent variational auto-encoders. In Proceedings of the European Conference on Computer Vision (ECCV), pages 805–820, 2018.
  • [12] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, volume 3, 2017.
  • [13] Q. Hu, A. Szabó, T. Portenier, P. Favaro, and M. Zwicker. Disentangling factors of variation by mixing them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3399–3407, 2018.
  • [14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [15] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [16] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In Advances in neural information processing systems, pages 2539–2547, 2015.
  • [17] C. Lassner, G. Pons-Moll, and P. V. Gehler. A generative model of people in clothing. In Proceedings of the IEEE International Conference on Computer Vision, pages 853–862, 2017.
  • [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [19] X. Liang, L. Lee, W. Dai, and E. P. Xing. Dual motion gan for future-flow embedded video prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1744–1752, 2017.
  • [20] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
  • [21] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
  • [22] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • [23] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, page 5, 2011.
  • [24] Z. Shu, M. Sahasrabudhe, R. Alp Guler, D. Samaras, N. Paragios, and I. Kokkinos. Deforming autoencoders: Unsupervised disentangling of shape and appearance. In Proceedings of the European Conference on Computer Vision (ECCV), pages 650–665, 2018.
  • [25] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
  • [26] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016.
  • [27] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint, 2017.