Semi-Latent GAN: Learning to generate and modify facial images from attributes

04/07/2017 ∙ by Weidong Yin, et al. ∙ Fudan University

Generating and manipulating human facial images using high-level attribute controls are important and interesting problems. Models proposed in previous work can solve one of these two problems (generation or manipulation), but not both coherently. This paper proposes a novel model that learns to both generate and modify facial images from high-level semantic attributes. Our key idea is to formulate a Semi-Latent Facial Attribute Space (SL-FAS) to systematically learn the relationship between user-defined and latent attributes, as well as between those attributes and RGB imagery. As part of this newly formulated space, we propose a new model --- SL-GAN --- a specific form of Generative Adversarial Network. Finally, we present an iterative training algorithm for SL-GAN. Experiments on the recent CelebA and CASIA-WebFace datasets validate the effectiveness of our proposed framework. We will also make data, pre-trained models and code available.







1 Introduction

Analysis of faces is important for biometrics, non-verbal communication, and affective computing. Approaches that perform face detection [3, 21], face recognition [1, 33, 41], landmark estimation [2, 44], face verification [36, 37] and action coding have received significant attention over the past 20+ years in computer vision. However, the important problem of generating (or modifying) facial images based on high-level intuitive descriptions remains largely unexplored. For example, it would be highly desirable to generate a realistic facial composite based on an eyewitness' high-level attribute description (e.g., young, male, brown hair, pale skin) of a suspect. Further, modifying facial attributes of a given person can help to inform a criminal investigation by visualizing how a suspect may change certain aspects of their appearance to avoid capture. In more innocuous use cases, modifying facial attributes may help a person visualize what he or she may look like with a different hair color, style, makeup and so on.

In this paper, we are interested in two related tasks: (i) generating facial images based on high-level attribute descriptions and (ii) modifying facial images based on high-level attributes. The difference between the two tasks is important: for generation, one is interested in sampling an image from a distribution of facial images that contain user-specified attributes; for modification, one wants an image of a pre-specified subject with certain attributes changed. In both cases, one must ensure that the resulting image is of high visual quality; in the modification case, however, there is the additional constraint that the identity of the person must be maintained. Intuitively, solving these two tasks requires a generative model that models the semantic (attribute) space of faces and is able to decouple identity-specific and identity-independent aspects of the generative process.

Inspired by [10], we formulate the Semi-Latent Facial Attribute Space (SL-FAS), a composition of two attribute subspaces: user-defined and latent. Each dimension of the user-defined attribute subspace corresponds to one human-labeled, interpretable attribute. The latent attribute subspace is learned in a data-driven manner and captures a compact hidden structure of facial images.

The two subspaces are coupled, making learning of SL-FAS challenging. Recently, an attribute-conditioned deep variational auto-encoder framework was proposed [42] that can learn latent factors (i.e., attributes) of data and generate images given those attributes. In [42], however, only the latent factors are learned and the user-defined attributes are given as input. As a result, the model cannot capture the distribution of user-defined attributes for a given image, and thus cannot modify an image using semantic attributes. Inspired by InfoGAN [4], we propose a network that jointly models the subspaces of user-defined and latent attributes.

In this paper, to jointly learn the SL-FAS, we propose a Semi-Latent Generative Adversarial Network (SL-GAN) framework composed of three main components: (i) an encoder-decoder network, (ii) a GAN, and (iii) a recognition network. In the encoder-decoder network, the encoder projects facial images into SL-FAS and the decoder reconstructs images by decoding attribute vectors in SL-FAS; thus the decoder can be used as a generator that produces an image given an attribute vector. The GAN plays the generator-discriminator min-max game to ensure the generated images are of good quality, by ensuring that generated images cannot be discriminated from real ones. The recognition network is the key ingredient for jointly learning user-defined and latent attributes from data; in particular, it is introduced to maximize the mutual information between generated images and attributes in SL-FAS. Figure 1 gives examples of generating and modifying facial attributes. As shown in the first and third rows of Figure 1 (b), our SL-GAN can modify the attributes of facial images even against a very noisy background.

Contributions. (1) To the best of our knowledge, there is no previous work that can perform both generation and modification of facial images using visual attributes; our framework uses only high-level semantic attributes to modify facial images. (2) Our SL-GAN can systematically learn user-defined and latent attributes from data by formulating a semi-latent facial attribute space. (3) A novel recognition network is proposed to jointly learn the user-defined and latent attributes from data. (4) Last but not least, we propose an iterative training algorithm to train SL-GAN to solve the two related yet different tasks at the same time.

2 Related Work

Attribute Learning. Attribute-centric semantic representations have been widely investigated in multi-task [31] and transfer learning [19]. Most early works [19] assumed a space of user-defined, namable properties as attributes. User-defined facial attributes [5, 9, 30, 40, 45, 18] have also been explored. Such attributes, however, are hard and prohibitively expensive to specify, due to the manual effort required in defining the attribute set and annotating images with it. To this end, latent attributes [10] have been explored for mining attributes directly from data. It is important to note that user-defined and latent attributes are complementary and can be used and learned simultaneously, forming a semi-latent attribute space. Our SL-GAN model is a form of semi-latent attribute space specifically defined for generation and modification of facial images.

Deep Generative Image Modeling. Algorithmic generation of realistic images has been a focus of computer vision for some time. Early attempts along these lines date back to 2006 with Deep Belief Networks (DBNs) [13]. DBNs were successful for small image patch generation, but failed to generalize to larger images and more holistic structures. Recent models that address these challenges include auto-regressive models [16, 38, 39], Variational Auto-Encoders (VAEs) [20, 35, 42], Generative Adversarial Networks (GANs) [4, 6, 7, 8, 11, 14, 17, 24, 27, 28, 29], and the Deep Recurrent Attention Writer (DRAW) [12]. InfoGAN [4] and StackGAN [14] utilized a recognition network to model latent attributes, while our SL-GAN extends the recognition network to jointly model user-defined and latent attributes; thus our framework can both generate and modify facial images using attributes.

Semantic Image Generation. More recently, there has been a focus on generating images conditioned on semantic information (e.g., text, pose or attributes). Reed et al. [28] studied the problem of automatic synthesis of realistic images from text using deep convolutional GANs. Yan et al. [42] proposed an attribute-conditioned deep variational auto-encoder framework that enables image generation from visual attributes. Mathieu et al. [23] learned the hidden factors within a set of observations in a conditional generative model. However, these frameworks can only generate images, rather than modify an existing image based on attributes or other forms of semantic information.

Image Editing and Synthesis. Our work is also related to previous work on image editing [15, 26, 34, 46]. The formulation in [47] also enables image editing using a GAN, where user strokes are used to specify the attributes being changed; in contrast, our SL-GAN does not need user strokes as input. Very recently, two technical reports [26, 34] also enable modifying the attributes of facial images. [34] proposed GAN-based image transformation networks for face attribute manipulation; in [34], one trained model can only modify one type of attribute, whereas our SL-GAN can learn to generate or modify many attributes simultaneously. In fact, our model on the CelebA dataset can generate or modify different facial attributes all at once.

3 Semi-Latent GAN (SL-GAN)

3.1 Background

Figure 2: Overview of our SL-GAN.

GAN [11] aims to learn to discriminate real data samples from generated samples, training the generator network $G$ to fool the discriminator network $D$. GAN is optimized using the following objective,

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

where $p_{data}(x)$ is the distribution of real data and $p(z)$ is a zero-mean Gaussian distribution $\mathcal{N}(0, I)$. The parameters of $G$ and $D$ are updated iteratively in training. The loss functions for the generator and discriminator are $\mathcal{L}_{G} = -\mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$ and $\mathcal{L}_{D} = -\mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] - \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]$, respectively. To generate an image, the generator draws a sample $z$ from the prior (a.k.a. noise) distribution $p(z)$ and then transforms that sample using the generator network, i.e., $\hat{x} = G(z)$.
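As an illustration, the two GAN losses above can be sketched numerically; the function names and probability values here are hypothetical, and a real implementation would compute the discriminator's outputs with a neural network:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """L_D = -E[log D(x)] - E[log(1 - D(G(z)))], from arrays of the
    discriminator's probability outputs on real and generated batches."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss L_G = -E[log D(G(z))]."""
    return -np.mean(np.log(d_fake))

# A confident discriminator (real ~ 1, fake ~ 0) has a low loss, while the
# generator's loss is high, pushing G toward more convincing samples.
d_real = np.array([0.9, 0.95])
d_fake = np.array([0.1, 0.05])
```

Training alternates between decreasing $\mathcal{L}_{D}$ and decreasing $\mathcal{L}_{G}$, which is the min-max game described above.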

InfoGAN [4] decomposes the input noise of GAN into a latent representation $c$ and incompressible noise $z$ by maximizing the mutual information $I(c; G(z, c))$, which ensures no information about the latent representation is lost during generation. The mutual information term can be lower-bounded via the recognition loss,

$$\mathcal{L}_{recog} = -\mathbb{E}_{c \sim p(c),\, x \sim G(z, c)}\left[\log Q(c \mid x)\right]$$

where $Q(c \mid x)$ is an approximation of the posterior $P(c \mid x)$. The parameters of the generator can thus be updated with $\mathcal{L}_{G} + \mathcal{L}_{recog}$. InfoGAN can learn disentangled, interpretable and meaningful representations in a completely unsupervised manner.
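A minimal sketch of the recognition loss, assuming a categorical latent code and a softmax output $Q(c \mid x)$ (all names and values are illustrative, not the paper's implementation):

```python
import numpy as np

def recognition_loss(q_probs, c_true):
    """Categorical cross-entropy -E[log Q(c|x)]: a variational lower bound
    on the mutual information I(c; G(z, c)), up to the constant H(c)."""
    return -np.mean(np.log(q_probs[np.arange(len(c_true)), c_true]))

# Q's predicted distribution over 3 latent codes for 2 generated images,
# and the codes that were actually fed to the generator.
q_probs = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
c_true = np.array([0, 1])
loss = recognition_loss(q_probs, c_true)
```

Minimizing this loss forces the generated image to retain enough information to recover the code that produced it.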

VAEGAN [20] combines a VAE with GAN and replaces the element-wise reconstruction error with a feature-wise error in the discriminator's feature space. Specifically, it encodes a data sample $x$ to the latent representation $z$, $z \sim \mathrm{Enc}(x) = q(z \mid x)$, and decodes $z$ back to data space, $\tilde{x} \sim \mathrm{Dec}(z) = p(x \mid z)$. The regularization prior loss is $\mathcal{L}_{prior} = D_{KL}\left(q(z \mid x) \,\|\, p(z)\right)$, where $q(z \mid x)$ is the approximation to the true posterior $p(z \mid x)$. The reconstruction error is

$$\mathcal{L}^{Dis_{l}}_{llike} = -\mathbb{E}_{q(z \mid x)}\left[\log p\left(Dis_{l}(x) \mid z\right)\right]$$

where $Dis_{l}(x)$ is the hidden representation of the $l$-th layer of the discriminator. This loss thus maximizes the expected log likelihood of the data representation at the $l$-th layer of the discriminator. The loss function of VAEGAN is

$$\mathcal{L} = \mathcal{L}_{prior} + \mathcal{L}^{Dis_{l}}_{llike} + \mathcal{L}_{GAN}.$$

However, the latent representation is learned in a totally unsupervised way; there is no way to explicitly control attributes over the data or modify facial images using attributes.
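The two VAE-side terms have simple closed forms for a diagonal-Gaussian encoder; a small sketch (illustrative only, with the discriminator features assumed given):

```python
import numpy as np

def kl_prior(mu, log_var):
    """L_prior = D_KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder,
    in closed form: 0.5 * sum(mu^2 + exp(log_var) - log_var - 1)."""
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)

def feature_recon_loss(feat_real, feat_recon):
    """Feature-wise reconstruction error: squared distance between the
    discriminator features Dis_l(x) of the input and its reconstruction
    (a Gaussian log-likelihood up to an additive constant)."""
    return 0.5 * np.sum((feat_real - feat_recon) ** 2)
```

The KL term is exactly zero when $q(z \mid x)$ matches the prior $\mathcal{N}(0, I)$, i.e., when $\mu = 0$ and $\log \sigma^2 = 0$.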

CVAE [42, 35] is the conditional VAE. An independent attribute variable $y$ is introduced to control the generating process of $x$ by sampling from $p(x \mid y, z)$, where $z \sim p(z)$. The encoder and decoder networks of CVAE are thus $q(z \mid x, y)$ and $p(x \mid y, z)$. Nevertheless, $y$ is still sampled from the data, rather than being directly optimized and learned from the data as in our SL-GAN; thus CVAE cannot be used to modify attributes the way the proposed SL-GAN can.
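The conditional sampling in CVAE relies on the reparameterization trick; a toy sketch, where the "encoder" is a stand-in function rather than a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, y):
    """Stand-in for a CVAE encoder q(z|x, y): returns (mu, log_var).
    A real encoder would be a neural network conditioned on both x and y."""
    h = np.concatenate([x, y])
    return h[:2] * 0.1, np.full(2, -1.0)

def sample_z(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu, log_var = encode(np.array([1.0, 2.0]), np.array([0.0, 1.0]))
z = sample_z(mu, log_var)  # a conditional latent sample
```

The decoder would then draw $x \sim p(x \mid y, z)$ given this $z$ and the chosen attributes $y$.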

3.2 Semi-Latent Facial Attribute Space

The input noise of GAN can be further decomposed into two parts: (1) user-defined attributes $y$, the manually annotated attributes of each image $x$, i.e., $y \sim q(y \mid x)$; and (2) latent attributes $z$, the attributes that should be mined from the data in a data-driven manner, i.e., $z \sim q(z \mid x)$. (Note that the latent attributes also include the incompressible noise, which is not explicitly modeled due to its limited impact on our framework.)

Mathematically, $y$ and $z$ can be either univariate or multivariate, and these attributes are mutually independent, i.e., $p(y, z) = p(y)\,p(z)$. Each dimension of $y$ is tied to one type of facial attribute annotated in real images, so our SL-GAN trains $y$ in a supervised manner; in contrast, each dimension of $z$ is trained in a totally unsupervised way. We define the semi-latent facial attribute space as the combination of the user-defined attributes $y$ and the latent attributes $z$.

With the decomposed input noise, the generator now takes the form $G(y, z)$. Directly learning from the input data can lead to a trivial solution in which the generator ignores the latent attributes, i.e., $P_G(x \mid z) = P_G(x)$. To prevent this, we maximize the mutual information between the attributes and the generator distribution $G(y, z)$, which can be simplified to minimizing the recognition losses for the attributes $y$ and $z$.

It is important to jointly learn the attributes $y$ and $z$, and to make sure that $z$ represents un-modeled aspects of the input facial images rather than re-discovering $y$. "Re-discovering" here means that some dimensions of $z$ would have very similar distributions to the distribution of $y$ over the input images, i.e., the same patterns in $y$ would be repeatedly discovered among the latent attributes.
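A point in SL-FAS can be sketched as the concatenation of a supervised attribute vector and a latent Gaussian vector; the dimensionalities here are assumptions (17 matches the CelebA attributes used in the experiments, 64 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

N_USER = 17    # user-defined binary attributes (17 CelebA attributes are used later)
N_LATENT = 64  # latent attribute dimensionality; this number is an assumption

def sample_attribute_vector(y=None):
    """Sample a point in the semi-latent facial attribute space: y holds the
    supervised, human-interpretable attributes; z is drawn from the Gaussian
    prior and learned without supervision. By construction the two parts are
    sampled independently, i.e., p(y, z) = p(y) p(z)."""
    if y is None:
        y = rng.integers(0, 2, N_USER).astype(float)  # stand-in for p_data(y)
    z = rng.standard_normal(N_LATENT)                 # z ~ N(0, I)
    return np.concatenate([y, z])

v = sample_attribute_vector()
```

The recognition losses below are what tie each half of this vector back to the generated image.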

3.3 Semi-Latent GAN

Our SL-GAN is illustrated in Fig. 2; it is composed of three parts, namely, an encoder-decoder network, a GAN, and a recognition network. The user-defined and latent attributes are encoded and decoded by the encoder-decoder network. The recognition network helps learn the SL-FAS from data. In our SL-GAN, the recognition network and the discriminator share the same network structure but have different softmax layers at the last layer. The loss functions of the generator and discriminator are thus

$$\mathcal{L}_{G} = \mathcal{L}_{GAN_G} + \mathcal{L}_{recog\text{-}z} + \mathcal{L}^{G}_{recog\text{-}y}, \qquad \mathcal{L}_{D} = \mathcal{L}_{GAN_D} + \mathcal{L}_{recog\text{-}z} + \mathcal{L}^{D}_{recog\text{-}y}$$

where $\mathcal{L}_{recog\text{-}z}$ is the recognition loss on $z$. For the recognition loss on $y$, we use $\mathcal{L}^{D}_{recog\text{-}y}$ and $\mathcal{L}^{G}_{recog\text{-}y}$ as the losses for the discriminator and generator, respectively. We also define the decoder loss as $\mathcal{L}_{Dec} = \mathcal{L}_{rec} + \mathcal{L}_{G}$, and the encoder loss as $\mathcal{L}_{Enc} = \mathcal{L}_{rec} + \mathcal{L}_{prior}$.

Encoder loss is the sum of the reconstruction error of the variational auto-encoder and a prior regularization term over the latent distribution $q(z \mid x)$; it is thus defined as $\mathcal{L}_{Enc} = \mathcal{L}_{rec} + \mathcal{L}_{prior}$, where $\mathcal{L}_{prior} = D_{KL}\left(q(z \mid x) \,\|\, p(z)\right)$ measures the KL-divergence between the approximate posterior and the prior $p(z)$. The reconstruction loss $\mathcal{L}_{rec}$ measures the loss of reconstructing generated images by sampling the attributes in SL-FAS. Here, $q(z \mid x)$ is an approximation of the true posterior $p(z \mid x)$ parameterized by a neural network, e.g., the encoder.

The recognition loss on $z$ is trained on both generated and real data, and aims to predict the values of the latent attributes. Suppose the latent code $z$ is sampled from the distribution $q(z \mid x)$ given by the encoder network. The update steps of the generator and discriminator use the same recognition loss on $z$, which is a sum of four terms: the first measures the error of predicting $z$ on real data, and the remaining three measure prediction errors on generated data. Here, $Q(z \mid x)$ is an approximation of the corresponding posterior distribution; $q(z \mid x)$ is the distribution of $z$ given $x$ parameterized by the encoder network; $p_{data}(x)$ is the data distribution of $x$ on real data; $q(y \mid x)$ is the distribution of $y$ given $x$ on the real data; $p_{data}(y)$ is the data distribution of $y$ on the real data; $p(z)$ is the prior distribution of $z$, for which we use the Gaussian $\mathcal{N}(0, I)$; and $p(x \mid y, z)$ is the distribution of $x$ given $y$ and $z$, parameterized by the decoder network.

The recognition loss on $y$ can, in principle, be trained on both real and generated data. In the training stage of the discriminator $D$, however, we use the ground-truth attribute annotations for the user-defined attributes without using generated data. The reason is that the quality of generated data relies on how well the generator $G$ is trained, which is in turn trained from the real data; we would therefore observe a phenomenon of "semantic drift" if the data were produced by a poorly trained generator. To that end, only the manually labeled attributes are used for updating $D$, and the loss for updating $D$ is

$$\mathcal{L}^{D}_{recog\text{-}y} = -\mathbb{E}_{x, y \sim p_{data}(x, y)}\left[\log Q(y \mid x)\right].$$

The generator corresponds to the decoder network, and thus only generated data can be used to compute the generator's recognition loss on $y$,

$$\mathcal{L}^{G}_{recog\text{-}y} = -\mathbb{E}_{y \sim p_{data}(y),\, z \sim p(z),\, x \sim G(y, z)}\left[\log Q(y \mid x)\right].$$
Note that each term in Eqs. (5) and (7) could intrinsically be weighted by a coefficient; we omit these coefficients for ease of notation.

3.4 The training algorithms

Our SL-GAN aims at solving generation and modification of facial images via attributes. The key ingredient is to learn a disentangled representation of data-driven and user-defined attributes, unified in our SL-GAN framework. Because SL-GAN is composed of three key components and targets two related yet very different problems (generation and modification of facial images), the conventional way of training a GAN does not work here. We therefore propose a new training algorithm for SL-GAN. Specifically, one iteration of our algorithm performs the following three stages.

Learning facial image reconstruction. This stage mainly updates the encoder-decoder network and learns to reconstruct an image given its corresponding user-defined attributes, with the following steps:

  • Sample a batch of images $x$ and their attributes $y$;

  • Update the SL-GAN by iteratively minimizing the encoder loss $\mathcal{L}_{Enc}$, the decoder loss $\mathcal{L}_{Dec}$ and the discriminator loss $\mathcal{L}_{D}$.

Learning to modify the facial image. We sample an image $x$ and an attribute $y$ from all the data; this stage trains the SL-GAN to modify the image $x$ by supplying $y$. Note that the image $x$ does not necessarily have the attribute $y$. Another important question here is how to keep the identity of the sampled image unchanged when modifying it; two strategies are employed: first, $z$ is sampled from $q(z \mid x)$ parameterized by the encoder network, which is not updated in this sub-step; second, our SL-GAN minimizes the reconstruction loss, essentially to guarantee that the person's identity is preserved. This step thus encourages the generator to learn a disentangled representation of $y$ and $z$.

  • Learning to modify the attributes: sample a batch of images $x$ and attributes $y$, with $z \sim q(z \mid x)$;

  • Update the SL-GAN by iteratively minimizing the decoder loss $\mathcal{L}_{Dec}$ and the discriminator loss $\mathcal{L}_{D}$.

Learning to generate facial images. We sample $z$ from its prior distribution and $y$ from the distribution of the data:

  • Sample a batch of latent vectors $z \sim \mathcal{N}(0, I)$ and attribute vectors $y \sim p_{data}(y)$;

  • Update the SL-GAN by iteratively minimizing the decoder loss $\mathcal{L}_{Dec}$ and the discriminator loss $\mathcal{L}_{D}$.
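The three training stages above can be summarized as a loop skeleton; the update functions are stubs standing in for single gradient steps on the corresponding losses:

```python
# Skeleton of one SL-GAN training iteration. `update` is a stub standing in
# for one gradient step on the named loss (L_Enc, L_Dec or L_D).
def update(loss_name, log):
    log.append(loss_name)

def train_iteration(log):
    # Stage 1: reconstruction -- sample (x, y) pairs; update Enc, Dec, D.
    for step in ("enc", "dec", "disc"):
        update("recon_" + step, log)
    # Stage 2: modification -- pair an image with an attribute it may lack;
    # z comes from the (frozen) encoder so identity is kept; update Dec, D.
    for step in ("dec", "disc"):
        update("modify_" + step, log)
    # Stage 3: generation -- z from the prior, y from the data; update Dec, D.
    for step in ("dec", "disc"):
        update("gen_" + step, log)

log = []
train_iteration(log)  # one iteration runs all three stages in order
```

Note that only the reconstruction stage touches the encoder; the modification and generation stages update just the decoder and discriminator, mirroring the bullet lists above.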

Once the network is trained, we can solve both the generation and modification tasks. In particular:

  • Generating new facial images with any attributes. This is achieved by sampling $z$ from $\mathcal{N}(0, I)$ and setting $y$ to any desired attributes; the generator then produces the image $\hat{x} = G(y, z)$.

  • Modifying existing images with any attributes. Given an image $x$ and desired attributes $y$, we sample $z \sim q(z \mid x)$; the modified image is then generated as $\hat{x} = G(y, z)$.
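These two inference modes can be sketched as a tiny API; the generator and encoder below are stand-in functions, not the trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(y, z):
    """Stand-in for the trained decoder/generator G(y, z); a real model
    would produce an RGB image, here we just return a deterministic array."""
    return np.outer(np.concatenate([y, z]), np.ones(4))

def encoder_z(x):
    """Stand-in for the encoder posterior q(z|x) (mean of the latent code)."""
    return x.mean(axis=1)[:3]

def generate(y):
    """Generation: z from the Gaussian prior, y set to desired attributes."""
    return generator(y, rng.standard_normal(3))

def modify(x, y_new):
    """Modification: keep identity via z ~ q(z|x), swap in new attributes."""
    return generator(y_new, encoder_z(x))
```

The only difference between the two calls is where $z$ comes from: the prior for generation, the encoder for modification.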

4 Experiments

Figure 3: Attribute errors of user-defined attributes on the CelebA and CASIA-WebFace datasets. Lower values are better. Attribute names are listed on the X-axis.

4.1 Experimental settings

Figure 4: Inception scores on the two datasets. Higher values are better.

Datasets. We conduct experiments on two datasets. The CelebA dataset [22] contains approximately 200k images of about 10k identities. Each image is annotated with five landmarks (the two eyes, the nose tip, and the two mouth corners) and binary labels of 40 attributes. Since the distribution of the binary attribute annotations is very unbalanced, we select 17 attributes with relatively balanced annotations for our experiments. We use the standard split for our model: the first 160k images are used for training, 20k images for validation, and the remaining 20k for testing. The CASIA-WebFace dataset [43] is currently the largest publicly released dataset for face verification and identification tasks. It contains about 10k celebrities and roughly 500k face images crawled from the web. CASIA-WebFace does not have any facial attribute labels, so we use the 17 attributes of the CelebA dataset [22] to train a facial attribute classifier, i.e., the MOON model [30]. The trained model is then used to predict facial attributes on the CASIA-WebFace images, and the predicted results are used as the facial attribute annotations. We will release these annotations, trained models and code upon acceptance.


Evaluation metrics. We employ different evaluation metrics. (1) For the generation task, we use the inception score and attribute errors. The inception score [32] measures whether varied images are generated and whether the generated images contain meaningful objects. In addition, inspired by the attribute similarity in [42], we propose the attribute error: we use the MOON attribute model [30] to predict the user-defined attributes of generated images, and the predicted attributes are compared against the specified attributes by mean squared error. (2) For the modification task, we employ a user study for evaluation.
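The attribute error can be sketched as follows; the classifier outputs are assumed to be given (in the paper they come from the MOON model):

```python
import numpy as np

def attribute_error(pred_attrs, target_attrs):
    """Attribute error: mean squared error between the attributes predicted
    on a generated image (e.g., by an attribute classifier) and the
    attributes that were specified at generation time."""
    pred = np.asarray(pred_attrs, dtype=float)
    target = np.asarray(target_attrs, dtype=float)
    return float(np.mean((pred - target) ** 2))

# Perfect agreement gives zero error; each flipped binary attribute adds 1/n.
```

In practice this is averaged over a batch of generated images, per attribute, to produce the per-attribute bars in Fig. 3.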

Implementation details. Training converges in 9-11 hours on the CelebA dataset on a GeForce GTX 1080; our model needs around 2GB of GPU memory. Input images are resized to a fixed resolution. The methods we compare against have code available on the web, which we use to directly compare with our results.

Figure 5: Qualitative results of the generation task.

4.2 Generation by user-defined attributes

Competitors. We compare against various open-source methods on this task, including VAE-GAN [20], AC-GAN [25], and Attrib2img [42]. Attrib2img is an enhanced version of CVAE. All methods are trained with the same settings and have the same number of dimensions for the attribute representation. For a fair comparison, each method trains only one model on all 17 user-defined attributes.

Attribute errors on the CelebA and CASIA-WebFace datasets are illustrated in Fig. 3. Since VAEGAN does not model the attributes, our model can only be compared against the AC-GAN and Attrib2img methods. On most of the 17 attributes, our method has lower attribute errors than the other methods, indicating the efficacy of our SL-GAN at learning user-defined attributes. The relatively higher attribute errors of Attrib2img are largely due to the fact that it is based on CVAE, where user-defined attributes are only given as model input rather than explicitly learned as in our SL-GAN and AC-GAN. The advantage of our SL-GAN over AC-GAN is in part due to our generator modeling the feature-wise error encoded in Eq. (3).

Inception scores on the two datasets are shown in Fig. 4. We compare inception scores in both the generated and reconstructed image settings. The difference between generated and reconstructed images lies in how the attribute vectors are obtained: the attribute vectors of reconstructed images are computed by the encoder network, while those of generated images are either sampled from the Gaussian prior (for $z$) or pre-defined (for $y$).

As an objective evaluation metric, the inception score, first proposed in [32], was found to correlate well with human evaluation of the visual quality of samples; higher inception scores thus reflect relatively better visual quality. (1) In the generated image setting, we use the same type of attribute vector to generate facial images for VAEGAN, AC-GAN, and Attrib2img. On both datasets, our SL-GAN has higher inception scores than all the other methods, thanks to our training algorithm more efficiently and explicitly learning user-defined and latent attributes in SL-FAS. These results indicate that our generated images in general have better visual quality than those generated by the other methods. We note that AC-GAN has relatively lower inception scores since it does not model the feature-wise error as our SL-GAN does. (2) In the reconstructed image setting, AC-GAN is not compared since it does not have an encoder-decoder structure with which to reconstruct the input image. On both datasets, our SL-GAN again outperforms the other baselines, which suggests that the visual quality of our reconstructed images is better than that of the other competitors.
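For reference, the inception score can be computed from classifier posteriors as follows (a two-class toy example; the real metric uses the 1000-way posteriors of a pretrained Inception network):

```python
import numpy as np

def inception_score(probs):
    """Inception score exp(E_x[KL(p(y|x) || p(y))]) computed from the class
    posteriors p(y|x) over a set of generated images. Confident per-image
    posteriors plus a diverse marginal p(y) yield a high score."""
    probs = np.asarray(probs, dtype=float)
    p_y = probs.mean(axis=0)  # marginal class distribution p(y)
    kl = np.sum(probs * (np.log(probs) - np.log(p_y)), axis=1)
    return float(np.exp(kl.mean()))

# Sharp and varied posteriors score higher than uniform ones.
sharp = np.array([[0.99, 0.01], [0.01, 0.99]])
flat = np.array([[0.5, 0.5], [0.5, 0.5]])
```

The uniform case gives the minimum score of 1.0 (zero KL divergence), which is why blurry, indistinct samples are penalized.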

Figure 6: Qualitative results of comparing different modification methods. The red box indicates that the attribute modified is inverse to the attribute of each row.
Metric      Saliency  Quality  Identity  Guess
Attrib2img  3.02      4.01     4.43
icGAN       4.10      3.83     4.30
SL-GAN      4.37      4.20     4.45

Table 1: User study of modification by user-defined attributes (five-point scale; higher is better). The "Guess" results are reported as the accuracy of guessing.

Qualitative results. Figure 5 gives qualitative examples of the images generated by VAEGAN, AC-GAN, Attrib2img and SL-GAN, as well as the ground-truth images. The same attribute vectors are used for all methods, and the generated images are compared against the ground-truth image annotated with the corresponding attribute vector. As we can see, Attrib2img successfully generates images with rich and clear facial details but very blurred hair styles (this is also consistent with the example figures in [42], which show blurred hair-style details). In contrast, the images generated by AC-GAN model the details of both faces and hair styles, but have lower visual quality than those of our SL-GAN, which generates a more consistent and natural style of faces and hair.

Figure 7: Results of modifying attributes on the CelebA dataset. Each column indicates modification by adding one type of attribute to the image, while a red box means the image is modified to not have that attribute.

4.3 Modification by user-defined attributes

Competitors. We compare various open-source methods on this task, including Attrib2img [42] and icGAN [26]. Note that Attrib2img cannot directly modify the attributes of images; instead, we use it for "attribute-conditioned image progression", which interpolates attributes by gradually changing values along an attribute dimension. We use the same settings for all experiments.

User-study experiments. We design a user study to compare Attrib2img, icGAN and our SL-GAN. Specifically, ten students with no knowledge of our project are invited to participate. Given one image, we employ each method to modify the same attribute and obtain the modified images; the sampled images and generated results are then shown to the participants. We ask the participants to compare each generated image with the original image, rating their judgement on a five-point scale from least to most likely according to the following metrics: (1) Saliency: how salient the modified attributes are in the image. (2) Quality: the overall quality of the generated image. (3) Identity: whether the generated image and the original image show the same person. (4) Guess: we also introduce a guessing game as a fourth metric; given a modified image and the original image, we ask participants to guess which attributes have been modified from four candidate choices, among which only one is correct (chance = 25%).

Results. The user-study results are compared in Tab. 1. Our SL-GAN outperforms the other methods on all metrics. These results suggest that our SL-GAN can not only saliently modify the given image attribute, but also maintain the overall quality of the generated image and keep the same identity as the original input. To better understand the differences between methods, we qualitatively compare the three methods in Fig. 6; as observed from the figure, our results have better overall visual quality and more saliently modified attributes.

Qualitative modification examples. We further show qualitative visualizations of modifying user-defined attributes in Fig. 7 (please refer to the supplementary material for larger figures). All attributes are trained with only one SL-GAN model. Our model can not only change very detailed local attributes such as "rosy cheeks", "arched eyebrows" and "bags under eyes", but also modify global attributes such as "male", "pale skin" and "smiling". Furthermore, our method is also able to change hair styles and hair color; such hair details are usually not captured by Attrib2img.

5 Conclusion

In this paper, we introduce a semi-latent facial attribute space to jointly learn user-defined and latent attributes from facial images. To learn such a space, we propose a unified framework, SL-GAN, which for the first time can learn to both generate and modify facial image attributes. Our model is compared against state-of-the-art methods and achieves better performance.


  • [1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE TPAMI, 2006.
  • [2] X. P. Burgos-Artizzu, P. Perona, and P. Dollár. Robust face landmark estimation under occlusion. In ICCV, 2013.
  • [3] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun. Joint cascade face detection and alignment. In ECCV, 2014.
  • [4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In ICML, 2016.
  • [5] A. Datta, R. Feris, and D. Vaquero. Hierarchical ranking of facial attributes. In IEEE FG, 2011.
  • [6] E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015.
  • [7] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
  • [8] I. Durugkar, I. Gemp, and S. Mahadevan. Generative multi-adversarial networks. In ICLR, 2017.
  • [9] M. Ehrlich, T. J. Shields, T. Almaev, and M. R. Amer. Facial attributes classification using multi-task representation learning. In CVPR Workshops, 2016.
  • [10] Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Learning multi-modal latent attributes. IEEE TPAMI, 2013.
  • [11] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  • [12] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
  • [13] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
  • [14] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial networks. In CVPR, 2017.
  • [15] I. Kemelmacher-Shlizerman, S. Suwajanakorn, and S. M. Seitz. Illumination-aware age progression. In CVPR, 2014.
  • [16] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. In ICML, 2014.
  • [17] T. Kulkarni, W. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, 2015.
  • [18] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009.
  • [19] C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE TPAMI, 2013.
  • [20] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.
  • [21] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In CVPR, 2015.
  • [22] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, pages 3730–3738, 2015.
  • [23] M. Mathieu, J. Zhao, P. Sprechmann, A. Ramesh, and Y. LeCun. Disentangling factors of variation in deep representations using adversarial training. In NIPS, 2016.
  • [24] A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv, 2016.
  • [25] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv, 2016.
  • [26] G. Perarnau, J. van de Weijer, B. Raducanu, and J. M. Álvarez. Invertible conditional GANs for image editing. In NIPS Workshop on Adversarial Training, 2016.
  • [27] S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In NIPS, 2016.
  • [28] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text-to-image synthesis. In ICML, 2016.
  • [29] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold interaction. In ICML, 2014.
  • [30] E. M. Rudd, M. Gunther, and T. E. Boult. MOON: A mixed objective optimization network for the recognition of facial attributes. In ECCV, 2016.
  • [31] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
  • [32] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
  • [33] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015.
  • [34] W. Shen and R. Liu. Learning residual images for face attribute manipulation. arXiv, 2017.
  • [35] K. Sohn, X. Yan, and H. Lee. Learning structured output representation using deep conditional generative models. In NIPS, 2016.
  • [36] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
  • [37] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014.
  • [38] L. Theis and M. Bethge. Generative image modeling using spatial lstms. In NIPS, 2015.
  • [39] A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
  • [40] J. Wang, Y. Cheng, and R. Feris. Walk and learn: Facial attribute representation learning from egocentric video and contextual data. CVPR, 2016.
  • [41] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016.
  • [42] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2Image: Conditional image generation from visual attributes. In ECCV, 2016.
  • [43] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv, 2014.
  • [44] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV. 2014.
  • [45] Y. Zhong, J. Sullivan, and H. Li. Face attribute prediction using off-the-shelf cnn features. In IEEE ICB, 2016.
  • [46] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In ECCV, 2016.
  • [47] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016.