Facial attributes are descriptions or labels that can be given to a face to describe its appearance [kumar_ttributes]. In the biometrics community, attributes are also referred to as soft biometrics [softbio]. Various methods have been developed in the literature for predicting facial attributes from images [DeepAtt, kumar2008facetracer, zhang2014panda]. For instance, Kumar et al. [kumar2008facetracer] proposed a facial part-based method for attribute prediction. Zhang et al. [zhang2014panda] proposed a method which combines part-based models and deep learning for learning attributes. Similarly, Liu et al. [DeepAtt] proposed a convolutional neural network (CNN) based approach which combines two CNNs, one for localizing the face region and one for extracting high-level features from the localized region, to predict attributes.
While several methods have been proposed in the literature for inferring attributes from images, the inverse problem of synthesizing faces from their corresponding attributes is a relatively unexplored problem (see Figure 1). Visual description-based face synthesis has many applications in law enforcement and entertainment. For example, visual attributes are commonly used in law enforcement to assist in identifying suspects involved in a crime when no facial image of the suspect is available at the crime scene. This is commonly done by constructing a composite or forensic sketch of the person based on the visual attributes.
Reconstructing an image from attributes or text descriptions is an extremely challenging problem. Several recent works have attempted to solve it using recently introduced CNN-based generative models such as the conditional variational autoencoder (CVAE) [sohn2015learning] and the generative adversarial network (GAN) [goodfellow2014generative]. For instance, Yan et al. [yan2016attribute2image] proposed a CVAE-based method for attribute-conditioned image generation. In a different approach, Reed et al. [reed2016generative] proposed a GAN-based method for synthesizing images from detailed text descriptions. Similarly, Zhang et al. [zhang2016stackgan] proposed a stacked GAN method for synthesizing photo-realistic images from text.
In contrast to the above-mentioned methods, we propose a different approach to the problem of face image reconstruction from attributes. Rather than directly reconstructing a face from attributes, we first synthesize a sketch image corresponding to the attributes and then reconstruct the face image from the synthesized sketch. Our approach is motivated by the way forensic sketch artists render composite sketches of an unknown subject from a number of individually described parts and attributes.
In particular, the proposed framework consists of three stages (see Figure 2). In the first stage, we adapt a CVAE-based framework to generate a sketch image from visual attributes. The sketch images generated in the first stage are often of poor quality. Hence, in the second stage, we enhance the sketch images using a GAN-based framework in which the generator sub-network leverages the advantages of the UNet [ronneberger2015u] and DenseNet [huang2017densely] architectures, inspired by [jegou2017one]. Finally, in the third stage, we reconstruct a color face image from the enhanced sketch image with the help of attributes using another GAN-based framework. The Stage 3 formulation is motivated by the disentangled representation learning framework proposed in [disentangled]. In particular, the attribute information is fused with the latent representation vector to learn a disentangled representation. Once the three-stage network is trained, one can synthesize sketches and face images by inputting visual attributes along with noise, as shown in Figure 3.
To summarize, this paper makes the following contributions:
We formulate the attribute-to-face reconstruction problem as a stage-wise learning problem (i.e. attribute-to-sketch, sketch-to-sketch, sketch-to-face).
A novel attribute-preserving dense UNet-based generator architecture, called AUDeNet, is proposed which incorporates the encoded texture attributes and the coarse sketches from stage 1 to generate sharper sketches.
A new sketch-to-face synthesis generator is proposed which reconstructs the face image from the sketch image using attributes. This generator is based on a new UNet structure, preserves the attributes in the reconstructed image, and improves the overall image quality.
We use a combination of the L1 loss, adversarial loss, and perceptual loss [johnson2016perceptual] in different stages for the purpose of image synthesis.
Extensive experiments are conducted to demonstrate the effectiveness of the proposed image synthesis method. Furthermore, an ablation study is conducted to demonstrate the improvements obtained by different stages of our framework.
The rest of the paper is organized as follows. In Section 2, we review related work. Details of the proposed attribute-to-face image synthesis method are given in Section 3. Experimental results are presented in Section 4, and finally, Section 5 concludes the paper with a brief summary.
Code is available at
2 Background and Related Work
Recent advances in deep learning have led to the development of various deep generative models for image synthesis and image-to-image translation [larochelle2011neural, kingma2013auto, goodfellow2014generative, rezende2014stochastic, radford2015unsupervised, sohn2015learning, larsen2015autoencoding, denton2015deep, dosovitskiy2017learning, salimans2016improved, metz2016unrolled, arjovsky2017towards, che2016mode, gauthier2014conditional, odena2016conditional]. Among them, variational autoencoders (VAEs) [kingma2013auto, rezende2014stochastic], GANs [goodfellow2014generative, radford2015unsupervised, salimans2016improved], and autoregressive models [larochelle2011neural] are the most widely used approaches.
2.1 Conditional VAE (CVAE)
VAEs are powerful generative models that use deep networks to describe the distribution of observed and latent variables. A VAE consists of two networks: one encodes a data sample to a latent representation, and the other decodes the latent representation back to data space. The VAE regularizes the encoder by imposing a prior over the latent distribution. The conditional VAE (CVAE) [sohn2015learning, yan2016attribute2image] is an extension of the VAE that models latent variables and data, both conditioned on side information such as a part or label of the image. The CVAE is trained by maximizing the variational lower bound
\[
\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[\log p_\theta(y \mid x, z)\right] - D_{KL}\!\left(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\right),
\]
where $x$, $y$ and $z$ are the input, output and latent variables, respectively, and $\theta$ and $\phi$ are the decoder and encoder parameters. Here, $p_\theta(z \mid x)$ is assumed to be an isotropic Gaussian distribution, and $p_\theta(y \mid x, z)$ and $q_\phi(z \mid x, y)$ are multivariate Gaussian distributions.
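As a concrete reference, the bound decomposes into a reconstruction term and a KL regularizer. The following NumPy sketch is illustrative only (the function names and the unit-variance Gaussian decoder, which turns the reconstruction term into a squared error, are our assumptions, not the authors' code):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def neg_cvae_lower_bound(x, x_recon, mu, logvar):
    """Negative variational lower bound: reconstruction error plus KL term.
    Assumes a Gaussian decoder with fixed unit variance, so the expected
    log-likelihood reduces to a squared-error reconstruction term."""
    recon = 0.5 * np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, logvar)
```

Minimizing this quantity over the encoder outputs `mu`, `logvar` and the decoder reconstruction `x_recon` is equivalent to maximizing the lower bound above.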
2.2 Conditional GAN
GANs [goodfellow2014generative] are another class of generative models that synthesize realistic images by effectively learning the distribution of training images. The goal of a GAN is to train a generator, $G$, to produce samples from the training distribution such that the synthesized samples are indistinguishable from the actual distribution by the discriminator, $D$. The conditional GAN is a variant in which the generator is conditioned on additional variables such as discrete labels, text or images. The objective function of a conditional GAN is defined as follows:
\[
\min_G \max_D \; \mathbb{E}_{x, y \sim p_{data}(x, y)}\!\left[\log D(x, y)\right] + \mathbb{E}_{x \sim p_{data}(x),\, z \sim p_z(z)}\!\left[\log\!\left(1 - D(x, G(x, z))\right)\right],
\]
where $z$ is the input noise, $y$ the output image, and $x$ the observed conditioning image. Real pairs $(x, y)$ sampled from the data distribution should be classified as real by the discriminator $D$, while the generator $G$ attempts to fool $D$ with the fake samples $G(x, z)$ it generates.
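The two sides of this minimax game can be sketched as separate losses on the discriminator's outputs. Below is a minimal NumPy illustration (our own naming; the non-saturating generator loss shown here is a common practical substitute for the saturating form in the objective above):

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator maximizes log D(x,y) + log(1 - D(x, G(x,z)));
    we minimize the negative of that quantity."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: fool D by maximizing log D(x, G(x,z))."""
    return -np.mean(np.log(d_fake))
```

Here `d_real` and `d_fake` are the discriminator's probabilities on real pairs and on generated samples; alternating minimization of these two losses implements the adversarial game.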
Recently, several variants based on this game-theoretic approach have been proposed for image synthesis and image-to-image translation tasks. Isola et al. [isola2016image] applied conditional GANs [mirza2014conditional] to several tasks such as labels to street scenes, labels to facades, and image colorization. In another variant, Zhu et al. [zhu2017unpaired] proposed CycleGAN, which learns image-to-image translation in an unsupervised fashion. Berthelot et al. [berthelot2017began] proposed a relatively more stable method for training autoencoder-based GANs, paired with a loss inspired by the Wasserstein distance [arjovsky2017wasserstein]. Reed et al. [reed2016generative] proposed a conditional GAN network to generate reasonable images conditioned on text descriptions. Zhang et al. [zhang2016stackgan] proposed a two-stage stacked GAN method which achieves state-of-the-art image synthesis results. Recently, Bao et al. [bao2017cvae] proposed a fine-grained image generation method based on a combination of CVAEs and GANs. Yan et al. [yan2016attribute2image] proposed a CVAE method using a disentangled representation of the latent and the original data distributions to achieve impressive attribute-to-image synthesis results.
Note that the approach we take in this paper differs from the above-mentioned methods in that we make use of an intermediate representation (i.e., a sketch) for the problem of image synthesis from attributes, whereas the other methods attempt to reconstruct the image directly from attributes. The method closest to our approach is StackGAN [zhang2016stackgan], in which the original image synthesis problem is broken into more manageable sub-problems through a primitive shape and color refinement process. Another important difference is that [zhang2016stackgan] was specifically designed for text-to-image translation, while our approach targets attribute-to-face image reconstruction. Furthermore, as will be shown later, our approach produces much better face reconstructions than [zhang2016stackgan].
3 Proposed Method
In this section, we provide details of the proposed Attribute2Sketch2Face method for image reconstruction from attributes. It consists of three stages: attribute-to-sketch (A2S), sketch-to-sketch (S2S), and sketch-to-face (S2F) (see Figure 2). Note that the training phase of our method requires ground-truth attributes and the corresponding sketch and face images. Furthermore, the attributes are divided into two separate groups, one corresponding to texture and the other corresponding to color. Since sketches contain no color information, we use only texture attributes in the A2S and S2S stages, as indicated in Figure 2.
3.1 Stage 1: Attribute-to-Sketch (A2S)
In the A2S stage, we adapt the CVAE architecture from [yan2016attribute2image]. Figure 4 gives an overview of the Stage 1 network architecture. Given a texture attribute vector $a$, a noise vector $n$, and a ground-truth sketch $s$, we aim to learn a model $p_\theta(s \mid a, z)$ which can model the distribution of $s$ and generate sketches $\hat{s}$. Here, $p_\theta$ denotes the decoder with parameter $\theta$, and $q_\phi$ denotes the encoder with parameter $\phi$. In this approach, the objective is to find the best parameter $\theta$ which maximizes the log-likelihood $\log p_\theta(s \mid a)$. In the conditional VAE, the objective is to maximize the following variational lower bound:
\[
\log p_\theta(s \mid a) \;\ge\; \mathbb{E}_{q_\phi(z \mid s, a)}\!\left[\log p_\theta(s \mid a, z)\right] - D_{KL}\!\left(q_\phi(z \mid s, a)\,\|\,p(z)\right),
\]
where $p(z)$ is an isotropic multivariate Gaussian distribution and $q_\phi(z \mid s, a)$ and $p_\theta(s \mid a, z)$ are two multivariate Gaussian distributions. The purpose of this bound is to approximate the true conditional probability $p_\theta(s \mid a)$, with an approximation error that shrinks as the bound is maximized. A second encoder, $q_\psi$, takes the noise and attribute vectors as input. The overall loss function of the A2S stage is as follows:
\[
\mathcal{L}_{A2S} = D_{KL}\!\left(q_\phi(z \mid s, a)\,\|\,p(z)\right) + D_{KL}\!\left(q_\psi(z \mid n, a)\,\|\,p(z)\right) - \mathbb{E}_{q_\phi}\!\left[\log p_\theta(s \mid a, z)\right].
\]
The first two terms, the KL divergences, are regularization terms that enforce the latent variables inferred by $q_\phi$ and $q_\psi$ to both match the prior normal distribution $p(z) = \mathcal{N}(0, I)$.
The first encoder network has two modules: one encoding the input sketch (in blue) and the other encoding the texture attribute (in yellow). The encoding module for the sketch consists of the following components: CONV5(64) - CONV5(128) - CONV3(256) - CONV3(512) - CONV4(1024), where CONVk(N) denotes an N-channel convolutional layer with a kernel of size k x k. In particular, CONV5(64) and CONV5(128) each consist of a convolutional layer followed by ReLU and a 2-stride max-pooling layer. The next two layers, CONV3(256) and CONV3(512), each consist of a convolutional layer followed by batch normalization and a ReLU layer. The final CONV4(1024) layer consists of a convolutional layer with a kernel of size 4 x 4 and a 1024-channel output. The other encoding module, for the attribute, is a fully-connected network with a 256-dimensional output followed by 1D batch normalization and ReLU layers.
The second encoder, which takes the noise and attributes as input, also contains the attribute-encoding module (shown in yellow) together with an encoding module for the noise (shown in purple). The noise-encoding module consists of one fully-connected layer with a 1024-dimensional output along with 1D batch normalization and ReLU layers. For the decoder (shown in green), we first concatenate the encoded attributes with the encoded image/noise and apply the reparameterization trick as in [kingma2013auto]. The mixed latent vector is then reshaped into feature maps, after which four UpsampleBlocks, each consisting of a 2D nearest-neighbor upsampling layer followed by a convolutional layer, batch normalization and ReLU layers, are applied.
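The sketch-encoding branch described above can be sketched in PyTorch as follows. The layer widths follow the text, but the strides, paddings, and the 64x64 input resolution are our assumptions, chosen so that the final 4x4 convolution yields a 1024-dimensional vector:

```python
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """CONV5(64)-CONV5(128)-CONV3(256)-CONV3(512)-CONV4(1024) as described in
    the text. Stride/padding choices are assumptions for a 1-channel 64x64 input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),              # 64 -> 32
            nn.Conv2d(64, 128, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),            # 32 -> 16
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.BatchNorm2d(512), nn.ReLU(),  # 8 -> 4
            nn.Conv2d(512, 1024, 4),                                                  # 4 -> 1
        )

    def forward(self, s):
        return self.net(s).flatten(1)  # (N, 1024) latent code
```

The attribute branch (a fully-connected layer to 256 dimensions) would be concatenated with this output before the reparameterization step.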
3.2 Stage 2: Sketch-to-Sketch (S2S)
As shown in Figure 5, sketch reconstructions from Stage 1 are often of poor quality. Hence, we propose a conditional GAN-based framework to generate sharper sketch images from these blurry images. As shown in Figure 6, the proposed network consists of a generator sub-network $G_2$ (based on the UNet [ronneberger2015u] and DenseNet [huang2017densely] architectures) conditioned on the encoded attribute vector from the A2S stage, and a patch-based discriminator sub-network $D_2$. $G_2$ takes blurry sketch images as input and attempts to generate sharper sketch images, while $D_2$ attempts to distinguish between real and generated images. The two sub-networks are trained iteratively.
3.2.1 Generator (G2)
Deeper networks are known to better capture high-level concepts; however, the vanishing gradient problem affects both the convergence rate and the quality of convergence. Several architectures have been developed to overcome this issue, among which UNet [ronneberger2015u] and DenseNet [huang2017densely] are of particular interest. While UNet incorporates longer skip connections to preserve low-level features, DenseNet employs short-range connections within micro-blocks, resulting in maximum information flow between layers in addition to an efficient network. Motivated by these two methods, we propose AUDeNet for the generator sub-network $G_2$, in which the UNet architecture is seamlessly integrated into the DenseNet network in order to leverage the advantages of both methods. This novel combination enables more efficient learning and improved convergence quality. Furthermore, in order to generate attribute-preserving reconstructions, we concatenate the latent attribute vector from A2S with the latent vector from the encoder, as shown in Figure 6.
A set of 3 dense-blocks (along with transition blocks) is stacked in the front, followed by a set of 5 dense-block layers (with transition blocks). The initial dense-blocks are each composed of 6 bottleneck layers. For efficient training and better convergence, symmetric skip connections are added to the generator sub-network, similar to [mao2016image]. The number of channels for each layer is as follows: C(64) - M(64) - D(256) - T(128) - D(512) - T(256) - D(1024) - T(512) - D(1024) - DT(256) - D(512) - DT(128) - D(256) - DT(64) - D(64) - D(32) - D(32) - DT(16) - C(3), where C(K) is a set of K-channel convolutional layers followed by batch normalization and ReLU activation, M is a max-pooling layer, D(K) is a dense-block layer with K-channel output, and T(K) is a transition layer with K-channel output for downsampling. DT(K) is similar to T(K), except that it uses a transposed convolutional layer instead of a convolutional layer, for upsampling.
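The dense-block building block can be illustrated as follows. This is a sketch, not the exact AUDeNet code: the growth rate of 32 and the 4x bottleneck width are assumptions, chosen so that a 6-layer block starting from 64 channels yields the D(256) output that follows C(64)-M(64) in the listing above:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """DenseNet-style bottleneck: 1x1 then 3x3 convolution; the output is
    concatenated with the input (short-range dense connection)."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(),
            nn.Conv2d(in_ch, 4 * growth, 1, bias=False),
            nn.BatchNorm2d(4 * growth), nn.ReLU(),
            nn.Conv2d(4 * growth, growth, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)

def dense_block(in_ch, growth, n_layers):
    """Stack bottlenecks; channel count grows by `growth` per layer."""
    layers, ch = [], in_ch
    for _ in range(n_layers):
        layers.append(Bottleneck(ch, growth))
        ch += growth
    return nn.Sequential(*layers), ch
```

With these assumed hyperparameters, `dense_block(64, 32, 6)` produces 64 + 6 * 32 = 256 output channels. The UNet-style long skips in AUDeNet would additionally concatenate encoder feature maps into the mirrored decoder stages.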
3.2.2 Discriminator (D2)
Motivated by [isola2016image], a patch-based discriminator $D_2$ is used, trained iteratively along with $G_2$. The primary goal of $D_2$ is to learn to discriminate between real and synthesized samples. This information is backpropagated into $G_2$ so that it generates samples that are as realistic as possible. Additionally, the patch-based discriminator ensures the preservation of high-frequency details, which are usually lost when only the L1 loss is used. All the convolutional layers in $D_2$ use the same filter size.
3.2.3 Objective function
The network parameters for the S2S stage are learned by minimizing the following objective function:
\[
\mathcal{L}_{S2S} = \mathcal{L}_A + \lambda_{L1}\,\mathcal{L}_{L1} + \lambda_P\,\mathcal{L}_P,
\]
where $\mathcal{L}_A$ is the adversarial loss, $\mathcal{L}_{L1}$ is the loss based on the $L_1$-norm between the synthesized image and the target, $\mathcal{L}_P$ is the perceptual loss, and $\lambda_{L1}$ and $\lambda_P$ are weights. The adversarial loss is based primarily on the discriminator sub-network $D_2$. Given a set of synthesized sketch images $\{\hat{s}_i\}$, the entropy loss from $D_2$ that is used to learn the parameters of $G_2$ is defined as
\[
\mathcal{L}_A = -\sum_i \log D_2(\hat{s}_i).
\]
The $L_1$ loss measures the reconstruction error between the synthesized sketch image $\hat{s}$ and the corresponding target sketch $s$, and is defined as
\[
\mathcal{L}_{L1} = \|\hat{s} - s\|_1.
\]
Finally, the perceptual loss [johnson2016perceptual] measures the distance between high-level features extracted from a pre-trained CNN and is defined as
\[
\mathcal{L}_P = \|\Phi_l(s) - \Phi_l(\hat{s})\|_2^2,
\]
where $s$ and $\hat{s}$ indicate the target and synthesized images, respectively, and $\Phi_l$ denotes a particular layer of the VGG-16 network. In our work, the output of the conv1-2 layer of a pre-trained VGG-16 network [simonyan2014very] is used as the feature representation. Note that the coarse sketches from Stage 1, along with the corresponding target sketches, are used to train the network.
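Putting the three terms together, a hedged PyTorch sketch of the S2S objective is shown below. The `feat_net` argument stands in for the frozen VGG-16 conv1-2 feature extractor used in the paper, and the lambda weights are placeholders rather than the paper's values:

```python
import torch
import torch.nn.functional as F

def s2s_loss(d_fake, fake, target, feat_net, lam_l1=100.0, lam_p=1.0):
    """Combined S2S objective: adversarial + L1 + perceptual terms.
    `d_fake` holds the patch discriminator's probabilities on the fakes;
    `feat_net` is any frozen feature extractor (VGG-16 conv1-2 in the paper)."""
    adv = -torch.log(d_fake + 1e-8).mean()        # fool the patch discriminator
    l1 = F.l1_loss(fake, target)                  # pixel-level reconstruction
    with torch.no_grad():
        feat_t = feat_net(target)                 # target features, no gradient
    perc = F.mse_loss(feat_net(fake), feat_t)     # distance in feature space
    return adv + lam_l1 * l1 + lam_p * perc
```

In practice one would alternate this generator update with a standard discriminator update on real and synthesized sketch patches.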
3.3 Stage 3: Sketch-to-Face (S2F)
The objective of Stage 3 is to reconstruct a color face image from the sketch image generated from the S2S stage. We propose a GAN-based framework for this problem where we make use of another UNet-based architecture for the generator sub-network. In particular, the visual attribute vector is combined with the latent representation to produce attribute-preserved image reconstructions. Figure 7 gives an overview of the proposed network architecture for S2F.
3.3.1 Generator (G3)
The Stage 3 generator consists of five convolutional layers and five transposed convolutional layers. The number of channels for each layer is as follows: C(64) - C(128) - C(256) - C(512) - C(512) - R(512) - DC(512) - DC(256) - DC(128) - DC(64) - DC(1), where C(K) is a set of K-channel convolutional layers followed by batch normalization and leaky ReLU activation, and DC(K) denotes a set of K-channel transposed convolutional layers along with ReLU and batch normalization layers. R(K) is a two-layer ResNet block, as in StackGAN [zhang2016stackgan], used to fuse the attribute vector with the UNet latent vector. Note that unlike Stages 1 and 2, the attribute vector here consists of both texture and color attributes.
3.3.2 Discriminator (D3)
Similar to $D_2$, a patch-based discriminator $D_3$, consisting of 4 downsampling blocks, is used and is trained iteratively along with $G_3$.
3.3.3 Objective function
The network parameters for the S2F stage are learned by minimizing an objective of the same form as the S2S objective. In particular, a combination of the $L_1$ loss, adversarial loss and perceptual loss is used. As before, the perceptual loss is measured using the deep feature representations from the conv1-2 layer of a pre-trained VGG-16 network [simonyan2014very]. We use the enhanced sketch from the previous stage along with the target face image to train this network.
Figure 3 shows the testing phase of the proposed method. Attribute and noise vectors are first passed through the encoder/decoder structure corresponding to the A2S stage. The encoded texture attribute vector along with the generated sketch from the A2S stage are fed into an AUDeNet-based generator (G2) to produce a sharper sketch image. Finally, a UNet-based attribute-conditioned generator (G3) corresponding to the S2F stage is used to reconstruct a high-quality face image from the sketch image generated from the S2S stage. In other words, our method takes noise and attribute vectors as input and generates high-quality face images via sketch images.
4 Experimental Results
In this section, the experimental settings and evaluation of the proposed method are discussed in detail. Results are compared with several state-of-the-art generative models: CVAE [sohn2015learning] adapted from [yan2016attribute2image], text2img [reed2016generative] and StackGAN [zhang2016stackgan]. In addition, we compare the performance of our method with a baseline, attr2face, in which we attempt to recover the image directly from attributes without going through the intermediate sketch stage. The entire network in Figure 2
is trained stage-by-stage using PyTorch (https://github.com/pytorch/pytorch).
We conduct experiments using three publicly available datasets: CelebA [liu2015faceattributes], deep funneled LFW [Huang2012a] and CUHK [wang2009face]. The CelebA database contains about 202,599 face images of 10,177 different identities, with 40 binary attributes for each face image. The deep funneled LFW database contains about 13,233 images of 5,749 different identities, with 40 binary attributes for each face image taken from the LFWA dataset [liu2015faceattributes]. The CUFS dataset [wang2009face] consists of 88 real sketch-photo pairs for training and 100 real sketch-photo pairs for testing. For each face image in the CUHK dataset, the corresponding sketch image was drawn by an artist viewing the photo. Note that training our network requires original face images and the corresponding sketch images, as well as the corresponding list of visual attributes. The CelebA and deep funneled LFW datasets provide both the original images and the corresponding attributes, while the CUHK dataset consists of face-sketch image pairs. To generate the missing sketch images in the CelebA and deep funneled LFW datasets, we use a pencil-sketch synthesis method (http://www.askaswiss.com/2016/01/how-to-create-pencil-sketch-opencv-python.html) to generate sketch images from the face images. The missing attributes in the CUHK dataset were manually labeled. Figure 8(a) shows some sample generated sketch images from the CelebA and deep funneled LFW datasets. Figure 8(b) shows examples of synthetic sketches, real sketches and real face images from CUHK.
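The referenced pencil-sketch tutorial follows the common grayscale, invert, blur, color-dodge pipeline. A NumPy-only sketch of that pipeline is shown below (the tutorial itself uses OpenCV; the blur width and epsilon here are illustrative choices):

```python
import numpy as np

def _blur(img, sigma=3.0):
    """Separable Gaussian blur implemented with 1D convolutions."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)

def pencil_sketch(gray):
    """gray: float array in [0, 1]. Invert, blur, then color-dodge blend."""
    blurred_inverse = _blur(1.0 - gray)
    # color dodge: flat regions map to white, edges stay dark
    return np.clip(gray / np.maximum(1.0 - blurred_inverse, 1e-6), 0.0, 1.0)
```

Flat regions of the input map to white while intensity edges survive as dark strokes, which is what gives the result its sketch-like appearance.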
The MTCNN method [zhang2016joint] was used to detect and crop faces from the original images, and the detected faces were rescaled to a fixed size. Since many attributes from the original list of 40 were not significantly informative, we selected the 23 attributes most useful for our problem. The selected attributes were further divided into 17 texture and 6 color attributes, as shown in Table 1. During the experiments, the texture attributes were used for generating sketches in the A2S and S2S stages, while all 23 attributes were used for generating high-quality face images in the final S2F stage.
4.3 Ablation Study
In this section, we perform an ablation study to demonstrate the effects of different modules in the proposed method. The following three configurations are evaluated.
Omit attributes while enhancing the sketch images generated from the A2S stage. This will show the significance of using attributes while enhancing the sketch images in the S2S stage.
Remove the second stage of sketch image enhancement from the entire pipeline. In other words, reconstruct the face image directly from the blurry sketch generated from A2S without enhancement. This will clearly show the significance of the S2S stage.
Remove the attribute concatenation from the final S2F stage. This will show the significance of using attributes in the final stage of sketch-to-image generation.
Results corresponding to the above three configurations are shown in Figure 9. Results for the first experiment are shown in Figure 9(b), where the first, second and third columns show the outputs from the S2S stage of our method, reconstructions without the use of attributes in S2S, and the reference sketch, respectively. From this figure, we clearly see that the attribute-conditioned generator produces sketches that are much better than those enhanced directly without conditioning on the attributes.
Results corresponding to the third experiment are shown in Figure 9(a), where the first, second and third columns show the reconstruction results from our method, reconstructions without using attributes in S2F, and reference images, respectively. As can be seen from this figure, the absence of attributes in the final stage results in reconstructions with wrong face features such as gender and hair. When attributes are used along with the sketch from S2S, the produced results have attributes that are very close to the ones corresponding to the original images. This can be clearly seen by comparing the first and last columns in Figure 9(a).
In the final experiment, we omit the second (S2S) stage from our pipeline and attempt to reconstruct the image from attributes in a two-stage procedure. In other words, the sketch images generated from the A2S stage are directly fed into the S2F stage. Results are shown in Figure 9(c) and (d). In both figures, the first, middle and last columns show reconstructions from our method, reconstructions without the second stage, and reference images, respectively. As can be seen from these figures, omission of the S2S stage produces images of poor quality (see Figure 9(d)). The enhancement of sketches in Stage 2 not only produces sharper results, but also results with the correct attributes (see Figure 9(c)).
4.4 CelebA Dataset Results
The CelebA dataset [liu2015faceattributes] consists of 162,770 training samples, 19,867 validation samples and 19,962 test samples. After preprocessing and combining the training and validation sets, we obtain 182,468 samples, which we use to train our three-stage network. The number of test samples remains the same after preprocessing. During training, we used a batch size of 128. The ADAM algorithm [adam_opt] with a learning rate of 0.0002 is used. This initial learning rate is kept for the first 10 epochs; for the next 10 epochs, it is dropped after every epoch by 1/10 (i.e., 1/decay_epoch with decay_epoch = 10). The total training time was about 20 hours on a single Titan X GPU.
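The text is ambiguous as to whether the decay is multiplicative or linear; the pix2pix-style linear reading (drop by 1/decay_epoch of the initial rate each epoch, reaching zero) is sketched below as an assumption. The resulting function could be plugged into a scheduler such as PyTorch's `LambdaLR`:

```python
def lr_at_epoch(epoch, base_lr=2e-4, n_const=10, n_decay=10):
    """Learning rate schedule: constant for the first `n_const` epochs, then
    decayed after every epoch by 1/n_decay of the base rate (linear to zero).
    The linear interpretation is an assumption; the paper's wording also
    admits a multiplicative reading."""
    if epoch < n_const:
        return base_lr
    steps = min(epoch - n_const + 1, n_decay)
    return base_lr * (1.0 - steps / n_decay)
```

For example, the rate stays at 2e-4 through epoch 9 and reaches zero after epoch 19, matching the 10 + 10 epoch budget described above.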
Sample image reconstruction results corresponding to different methods from the CelebA test set are shown in Figure 10. As can be seen from this figure, text2img and StackGAN methods are able to provide attribute-preserved reconstructions, but the synthesized face images are distorted and contain many artifacts. The CVAE method is able to reconstruct the images without distortions but they are blurry. Also, some of the attributes are difficult to see in the reconstructions from the CVAE method. For example, hair color is hard to see in the reconstructed images. The attr2face baseline provides reasonable reconstructions but images are distorted. In comparison to these methods, the proposed method, as shown in (c), provides the best attribute-preserved reconstructions. This can be seen by comparing the attributes of images in (i) with (c). To show the improvements obtained from different stages of our method, we also show the results from Stage 1 and Stage 2 in (a) and (b), respectively.
4.5 LFWA Dataset Results
Images in the LFWA dataset come from the LFW dataset [Huang2012a, LFWTech], and the corresponding attributes come from [liu2015faceattributes]. This dataset contains the same 40 binary attributes as the CelebA dataset. After preprocessing, the training and testing subsets contain 6,263 and 6,880 samples, respectively. The learning strategy for the ADAM method is the same as for the CelebA dataset, except that the initial learning rate is kept for the first 20 epochs and is then dropped after every epoch by 1/20 (i.e., 1/decay_epoch with decay_epoch = 20).
Sample results corresponding to the different methods on the LFWA dataset are shown in Figure 11. As can be seen from the results, the CVAE method produces reconstructions which are blurry and distorted. Attribute-conditioned GAN-based approaches such as text2img and StackGAN produce poor-quality results with many distortions. The attr2face baseline and the proposed method show better reconstructions than the other methods. By comparing the reconstructions from our method in (c) with the images in (i), we see that the proposed method is able to reconstruct high-quality attribute-preserved face images. Again, the outputs from Stage 1 and Stage 2 of our method are shown in (a) and (b), respectively.
4.6 CUHK Dataset Results
Instead of using composed sketches as was done for the experiments on the CelebA and LFWA datasets, in this section we evaluate our algorithm using real sketches and photos from the CUFS dataset [wang2009face]. The CUFS dataset is relatively small; after preprocessing and data augmentation, such as flipping and rotation, we obtained 264 samples for training and 300 samples for testing. A batch size of 8 was used while training our network. The other settings are kept the same as for the CelebA dataset. Since this dataset does not come with attribute annotations, we manually annotated the 23 attributes on this dataset.
Results corresponding to different methods are shown in Figure 12. We obtain similar results as we did in the CelebA and LFWA datasets. The text2img, StackGAN, and attr2face methods generate images with some visual artifacts, while the CVAE method produces blurry results. In contrast, our method produces the best results and generates photo-realistic and attribute-preserved face reconstructions.
4.7 Face Synthesis
In this section, we show the image synthesis capability of our network by manipulating the input attribute and noise vectors. Note that the testing phase of our network takes an attribute vector and noise as inputs and produces a face reconstruction as the output. In the first set of experiments, we keep the random noise vector fixed and vary the weight of a particular attribute from -1 to 1. The corresponding results on the CelebA dataset are shown in Figure 13. From this figure, we can see that when we give higher weights to a certain attribute, the corresponding appearance changes. For example, one can synthesize an image with a different gender by changing the weights corresponding to the gender attribute, as shown in Figure 13(a). Each row shows the progression of the gender change as the attribute weight is varied from -1 to 1 as described above. Similarly, figures (b), (c) and (d) show the synthesis results when a neutral face is transformed into a smiling face, skin tones are changed to a pale skin tone, and hair colors are changed to black, respectively. It is interesting to see that when attribute weights other than the gender attribute are changed, the identity of the person does not change; only the attributes change.
In the second set of experiments, we keep the input attribute vector frozen but change the noise vector by inputting different realizations of the noise. Sample results corresponding to this experiment are shown in Figure 14(a) and (b) using the CelebA and LFWA datasets, respectively. Each column shows how the output changes as we change the noise vector; different subjects are shown in different rows. It is interesting to note that, as we change the noise vector, the attributes stay the same while the identity changes. This can be clearly seen by comparing the reconstructions in each row.
4.8 Quantitative Results
In addition to the qualitative results presented in Figures 10, 11 and 12, we present quantitative comparisons based on the Inception Score salimans2016improved and Attribute -norm. The inception scores are used to evaluate the realism and diversity of the generated samples and has been used before to evaluate the performance of deep generative methods bao2017cvae , zhang2016stackgan . Attribute -norm is used to compare the quality of attributes corresponding to different images. We extract the attributes from the synthesized images as well as the reference image using the MOON attribute prediction method rudd2016moon . Once the attributes are extracted, we simply take the -norm of the difference between the attributes as follows
where $a_{ref}$ and $a_{syn}$ are the 23 attributes extracted from the reference image and the synthesized image, respectively. Note that higher values of the Inception Score and lower values of the Attribute measure imply better performance. The quantitative results corresponding to different methods on the CelebA, LFW and CUHK datasets are shown in Table 2. Results are evaluated on the test splits of the corresponding datasets, and the average performance along with the standard deviation is reported in Table 2.
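As a concrete sketch, both metrics above reduce to a few lines of numpy. The MOON predictor outputs are replaced here by placeholder random vectors, and the Inception Score operates on class probabilities that would normally come from an Inception classifier applied to the generated images; neither placeholder is part of the paper's actual pipeline.

```python
import numpy as np

def attribute_distance(attr_ref, attr_syn):
    # L2-norm of the difference between reference and synthesized attribute vectors.
    attr_ref = np.asarray(attr_ref, dtype=float)
    attr_syn = np.asarray(attr_syn, dtype=float)
    assert attr_ref.shape == attr_syn.shape == (23,)
    return float(np.linalg.norm(attr_ref - attr_syn))

def inception_score(probs, eps=1e-12):
    # probs: (N, C) class probabilities for N generated images.
    # IS = exp( E_x [ KL( p(y|x) || p(y) ) ] )
    probs = np.asarray(probs, dtype=float)
    p_y = probs.mean(axis=0, keepdims=True)           # marginal class distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(1)
a_ref = rng.uniform(-1, 1, 23)  # placeholder for MOON output on the reference image
a_syn = rng.uniform(-1, 1, 23)  # placeholder for MOON output on a synthesized image
print(attribute_distance(a_ref, a_syn))
print(attribute_distance(a_ref, a_ref))  # 0.0 for a perfect attribute match
```

A sanity check on the Inception Score: identical uniform predictions for every sample give a score of 1 (no diversity in the marginal beyond each conditional), while confident, evenly spread one-hot predictions over C classes give a score of C.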
As can be seen from this table, the proposed Attribute2Sketch2Face method produces the highest Inception Scores, implying that the images generated by our method are more realistic than the ones generated by other methods. Furthermore, our method produces the lowest Attribute scores. This implies that our method is able to generate attribute-preserved images better than the other compared methods. This can be clearly seen by comparing the images synthesized by different methods in Figures 10, 11 and 12.
5 Conclusion
We presented a novel deep generative framework for reconstructing face images from visual attributes. Our method makes use of an intermediate representation to generate photo-realistic images. The training part of our method consists of three stages: A2S, S2S and S2F. The A2S stage is based on the CVAE model, while the S2S and S2F stages are based on GANs. Novel UNet-based generators are proposed for the S2S and S2F stages. Various experiments on three publicly available datasets show the significance of the proposed three-stage synthesis framework. In addition, an ablation study was conducted to show the importance of different components of our network. These experiments showed that the proposed method is able to generate high-quality images and achieves significant improvements over the state-of-the-art methods.
Acknowledgements. This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
- (1) Arjovsky, M., Bottou, L.: Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862 (2017)
- (2) Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein gan. arXiv preprint arXiv:1701.07875 (2017)
- (3) Bao, J., Chen, D., Wen, F., Li, H., Hua, G.: Cvae-gan: Fine-grained image generation through asymmetric training. arXiv preprint arXiv:1703.10155 (2017)
- (4) Berthelot, D., Schumm, T., Metz, L.: Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717 (2017)
- (5) Che, T., Li, Y., Jacob, A.P., Bengio, Y., Li, W.: Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136 (2016)
- (6) Dantcheva, A., Elia, P., Ross, A.: What else does your biometric data reveal? a survey on soft biometrics. IEEE Transactions on Information Forensics and Security 11(3), 441–467 (2016)
- (7) Denton, E.L., Chintala, S., Fergus, R., et al.: Deep generative image models using a Laplacian pyramid of adversarial networks. In: Advances in neural information processing systems, pp. 1486–1494 (2015)
- (8) Dosovitskiy, A., Springenberg, J.T., Tatarchenko, M., Brox, T.: Learning to generate chairs, tables and cars with convolutional networks. IEEE transactions on pattern analysis and machine intelligence 39(4), 692–705 (2017)
- (9) Gauthier, J.: Conditional generative adversarial nets for convolutional face generation
- (10) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems, pp. 2672–2680 (2014)
- (11) Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
- (12) Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 (2016)
- (13) Huang, G.B., Mattar, M., Lee, H., Learned-Miller, E.: Learning to align from scratch. In: NIPS (2012)
- (14) Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Tech. Rep. 07-49, University of Massachusetts, Amherst (2007)
- (15) Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004 (2016)
- (16) Jégou, S., Drozdzal, M., Vazquez, D., Romero, A., Bengio, Y.: The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In: Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pp. 1175–1183. IEEE (2017)
- (17) Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, pp. 694–711. Springer (2016)
- (18) Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- (19) Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
- (20) Kumar, N., Belhumeur, P., Nayar, S.: Facetracer: A search engine for large collections of images with faces. In: European conference on computer vision, pp. 340–353. Springer (2008)
- (21) Kumar, N., Berg, A., Belhumeur, P.N., Nayar, S.: Describable visual attributes for face verification and image search. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(10), 1962–1977 (2011)
- (22) Larochelle, H., Murray, I.: The neural autoregressive distribution estimator. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 29–37 (2011)
- (23) Larsen, A.B.L., Sønderby, S.K., Larochelle, H., Winther, O.: Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300 (2015)
- (24) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3730–3738 (2015)
- (25) Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV) (2015)
- (26) Mao, X.J., Shen, C., Yang, Y.B.: Image denoising using very deep fully convolutional encoder-decoder networks with symmetric skip connections. arXiv preprint (2016)
- (27) Metz, L., Poole, B., Pfau, D., Sohl-Dickstein, J.: Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163 (2016)
- (28) Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
- (29) Odena, A., Olah, C., Shlens, J.: Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585 (2016)
- (30) Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
- (31) Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H.: Generative adversarial text to image synthesis. In: M.F. Balcan, K.Q. Weinberger (eds.) Proceedings of The 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 48, pp. 1060–1069. New York, New York, USA (2016)
- (32) Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 (2014)
- (33) Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Springer (2015)
- (34) Rudd, E.M., Günther, M., Boult, T.E.: Moon: A mixed objective optimization network for the recognition of facial attributes. In: European Conference on Computer Vision, pp. 19–35. Springer (2016)
- (35) Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. In: Advances in Neural Information Processing Systems, pp. 2234–2242 (2016)
- (36) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
- (37) Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. In: Advances in Neural Information Processing Systems, pp. 3483–3491 (2015)
- (38) Tran, L., Yin, X., Liu, X.: Disentangled representation learning gan for pose-invariant face recognition. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI (2017)
- (39) Wang, X., Tang, X.: Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(11), 1955–1967 (2009)
- (40) Yan, X., Yang, J., Sohn, K., Lee, H.: Attribute2image: Conditional image generation from visual attributes. In: European Conference on Computer Vision, pp. 776–791. Springer (2016)
- (41) Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., Metaxas, D.: Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242 (2016)
- (42) Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23(10), 1499–1503 (2016)
- (43) Zhang, N., Paluri, M., Ranzato, M., Darrell, T., Bourdev, L.: Panda: Pose aligned networks for deep attribute modeling. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1637–1644 (2014)
- (44) Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593 (2017)