Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model

04/10/2018 ∙ by Baris Gecer, et al. ∙ University of Surrey ∙ Imperial College London

We propose a novel end-to-end semi-supervised adversarial framework to generate photorealistic face images of new identities with wide ranges of expressions, poses, and illuminations conditioned by a 3D morphable model. Previous adversarial style-transfer methods either supervise their networks with a large volume of paired data or use unpaired data with a highly under-constrained two-way generative framework in an unsupervised fashion. We introduce pair-wise adversarial supervision to constrain two-way domain adaptation with a small number of paired real and synthetic images for training, along with a large volume of unpaired data. Extensive qualitative and quantitative experiments are performed to validate our idea. The generated face images of new identities contain pose, lighting and expression diversity, and qualitative results show that they are highly constrained by the synthetic input image while adding photorealism and retaining identity information. We combine the face images generated by the proposed method with a real dataset to train face recognition algorithms, and evaluate the model on two challenging datasets: LFW and IJB-A. We observe that the generated images from our framework consistently improve over the performance of a deep face recognition network trained with the Oxford VGG Face dataset, and achieve results comparable to the state of the art.







1 Introduction

Deep learning has brought great improvements to the performance of several computer vision tasks [ren2015faster, he2017mask, geccer2016detection, dong2016image, dosovitskiy2015flownet, yuan2017bighand2], including face recognition [parkhi2015deep, schroff2015facenet, xiong2015conditional, liu2017sphereface, xiong2016convolutional], in recent years. This was mainly thanks to the availability of large-scale datasets. Yet performance is often limited by the volume and variation of training examples: larger and wider datasets usually improve the generalization and overall performance of the model [schroff2015facenet, bansal2017s].

The process of collecting and annotating training examples for every specific computer vision task is laborious and non-trivial. To overcome this challenge, additional synthetic training examples can be used alongside limited real training examples. Some recent works, such as 3D face reconstruction [richardson20163d], gaze estimation [zhang2015appearance, wood2016learning], and human pose, shape and motion estimation [varol2017learning], use additional synthetic images generated from 3D models to train deep networks. One can generate synthetic face images using a 3D morphable model (3DMM) [blanz1999morphable] by manipulating identity, expression, illumination, and pose parameters. However, the resulting images are not photorealistic enough to be suitable for in-the-wild face recognition tasks. This is because the 3DMM compresses the information of real face scans, and the graphical engine that models illumination and surfaces is not perfectly accurate. Thus, the main challenge in using synthetic data obtained from a 3DMM is the discrepancy in the nature and quality of synthetic and real images, which poses a domain adaptation problem [patel2015visual]. Recently, adversarial training methods [shrivastava2016learning, sixt2016rendergan, costa2017towards] have become popular to mitigate such challenges.
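To make the 3DMM sampling idea concrete, here is a minimal toy sketch in NumPy of drawing a new identity from a PCA face model. The model dimensions, variable names and the orthonormalized toy basis are our own illustration, not the paper's actual LSFM pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 3DMM: a mean shape plus an orthonormal PCA basis.
# Real models use tens of thousands of vertices and ~150 components.
n_vertices, n_components = 100, 10
mean_shape = rng.normal(size=3 * n_vertices)
basis = np.linalg.qr(rng.normal(size=(3 * n_vertices, n_components)))[0]
stddevs = np.linspace(1.0, 0.1, n_components)  # per-component standard deviations

def sample_identity():
    """Draw one new identity by sampling PCA coefficients from the model's Gaussian prior."""
    alpha = rng.normal(size=n_components) * stddevs
    return mean_shape + basis @ alpha

face = sample_identity()
print(face.shape)  # (300,)
```

Expression, lighting and pose parameters would be sampled analogously and passed to a renderer to produce the synthetic image.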

Figure 1: Our approach aims to synthesize photorealistic images conditioned by a given synthetic image from a 3DMM. It regularizes cycle consistency [zhu2017unpaired] by introducing an additional adversarial game between the two generator networks in an unsupervised fashion. Thus, the under-constrained cycle loss is supervised to produce a correct matching between the two domains with the help of a limited number of paired data. We also encourage the generator to preserve face identity through a set-based supervision on a pretrained classification network.

Generative Adversarial Networks (GANs), introduced by Goodfellow et al. [goodfellow2016nips], and their variants [radford2015unsupervised, karras2017progressive, berthelot2017began, dumoulin2016adversarially] are quite successful at generating realistic images. However, in practice, GANs are likely to get stuck in mode collapse for large-scale image generation. They are also unable to produce images that are 3D-coherent and globally consistent [goodfellow2016nips]. To overcome these drawbacks, we propose a semi-supervised adversarial learning framework to synthesize photorealistic face images of new identities with the rich data variation supplied by a 3DMM. We address these shortcomings by exciting a generator network with synthetic images sampled from a 3DMM and transforming them into the photorealistic domain using adversarial training as a bridge. Unlike most existing works, which excite their generators with a noise vector [radford2015unsupervised, berthelot2017began], we feed our generator network with synthetic face images. Such a strong constraint naturally helps avoid the mode collapse problem, one of the main challenges faced by current GAN methods. Fig. 1 shows a general overview of the proposed method, which we discuss in more detail in Sec. 3.

In this paper, we address the challenge of generating photorealistic face images from 3DMM-rendered faces of different identities with arbitrary poses, expressions, and illuminations. We formulate this as a domain adaptation problem, i.e. aligning the 3DMM-rendered face domain with the realistic face domain. One of the previous works closest to ours, [isola2016image], addresses the style transfer problem between a pair of domains with a classical conditional GAN. The major bottleneck of this method is that it requires a large number of paired examples from both domains, which are hard to collect. CycleGAN [zhu2017unpaired], another recent method close to our work, proposes a two-way GAN framework for unsupervised image-to-image translation. However, the cycle consistency loss proposed in their method is satisfied as long as the transitivity of the two mapping networks is maintained; the resulting mapping is therefore not guaranteed to produce the intended transformation. To overcome the drawbacks of these methods [isola2016image, zhu2017unpaired], we propose to use a small amount of paired data to train an inverse mapping network as a matching-aware discriminator. In the proposed method, the inverse mapping network plays the role of both generator and discriminator. To the best of our knowledge, this is the first attempt at adversarial semi-supervised style translation for an application with such limited paired data.

Adding realism to synthetic face images while preserving their identity information is a challenging problem. Although the synthetic input images, i.e. 3DMM-rendered faces, contain distinct face identities, the distinction between them vanishes under non-linear transformations while the discriminator encourages realism. To tackle this problem, prior works either employ a separate pre-trained network [yin2017towards] or embed identity labels into the discriminator [tran2017disentangled]. Unlike existing works, which focus on generating new images of existing identities, we are interested in generating multiple images of new identities. Therefore, such techniques are not directly applicable to our problem. To address this challenge, we propose to use set-based center [wen2016discriminative] and pushing loss functions on top of a pre-trained face embedding network, keeping track of the moving average of the embeddings of generated images belonging to the same identity (i.e. centroids). In this way, identity preservation adapts to the changing feature space during the training of the generator network, unlike a softmax layer, which converges very quickly at the beginning of training, before meaningful images are generated.

Our contributions can be summarized as follows:

  • We propose a novel end-to-end adversarial training framework to generate photorealistic face images of new identities constrained by synthetic 3DMM images, with identity, pose, illumination and expression diversity. The resulting synthetic face images are visually plausible and can be used to boost face recognition as additional training data, or for other graphics purposes.

  • We propose a novel semi-supervised adversarial style transfer approach that trains an inverse mapping network as a discriminator with paired synthetic-real images.

  • We employ a novel set-based loss function to preserve consistency among unknown identities during GAN training.

2 Related Works

In this section, we discuss the prior art closely related to the proposed method.

Domain Adaptation.

As stated in the introduction, our problem of generating photorealistic face images from 3DMM-rendered faces can be seen as a domain adaptation problem. A straightforward adaptation approach is to align the distributions at the feature level by simply adding a loss measuring the mismatch, either through second-order moments [sun2015subspace] or with adversarial losses [tzeng2015simultaneous, tzeng2017adversarial, ganin2016domain].

Recently, pixel-level domain adaptation has become popular due to practical breakthroughs on Kullback-Leibler divergence minimization [goodfellow2014generative, goodfellow2016nips, radford2015unsupervised], namely GANs, which optimize a generative and a discriminative network through a minimax game. It has been applied to a wide range of applications including fashion clothing [lassner2017generative], person-specific avatar creation [wolf2017unsupervised], text-to-image synthesis [zhang2016stackgan], face frontalization [yin2017towards], and retinal image synthesis [costa2017towards].

Pixel-level domain adaptation can be done in a supervised manner simply by conditioning the discriminator network [isola2016image], or directly the output of the generator [chen2017photographic], on the expected output, when there is enough paired data from both domains. Note, however, that collecting a large number of paired training examples is expensive and often requires expert knowledge. [reed2016generative] proposes a text-to-image synthesis GAN with a matching-aware discriminator: the discriminator is optimized for image-text matching, besides requiring realism, with an additional mismatched text-image pair.

For cases where paired data is not available, many approaches take an unsupervised route, such as pixel-level consistency between the input and output of the generator network [bousmalis2016unsupervised, shrivastava2016learning], an encoder architecture shared by both domains [bousmalis2016domain], or adaptive instance normalization [huang2017arbitrary]. An interesting approach is two-way translation between domains with two distinct generator and discriminator networks, where the two mappings are constrained to be inverses of each other, with either a ResNet [zhu2017unpaired] or an encoder-decoder network [liu2017unsupervised] as the generator.

Synthetic Training Data Generation.

The use of synthetic data as additional training data has been shown to be helpful, even for graphically rendered images, in many applications such as 3D face reconstruction [richardson20163d], gaze estimation [zhang2015appearance, wood2016learning], and human pose, shape and motion estimation [varol2017learning]. Despite the availability of an almost infinite number of synthetic images, those approaches are limited by the domain difference from in-the-wild images.

Many existing works utilize adversarial domain adaptation to translate images into the photorealistic domain so that they are more useful as training data. [zheng2017unlabeled] generates many unlabeled samples to improve person re-identification in a semi-supervised fashion. RenderGAN [sixt2016rendergan] proposes a sophisticated approach to refine graphically rendered synthetic images of tagged bees, to be used as training data for a bee-tag decoding application. WaterGAN [li2017watergan] synthesizes realistic underwater images by explicitly modeling camera parameters and environmental effects, to be used as training data for a color correction task. Some studies deform existing images with a 3D model to augment datasets [masi2016we] without adversarial learning.

One of the recent works, simGAN [shrivastava2016learning], generates realistic synthetic data to improve eye gaze and hand pose estimation. It optimizes pixel-level correspondence between the input and output of the generator network to preserve the content of the synthetic image. This is in fact a limited solution, since the pixel-consistency loss encourages the generated images to remain similar to the synthetic input images and thus partially contradicts the adversarial realism loss. Instead, we employ an inverse translation network similar to CycleGAN [zhu2017unpaired] with additional pair-wise supervision to preserve the initial condition without hurting realism. This inverse network also behaves as a discriminator to the forward mapping network, using paired real data to avoid possible biased translation.

Identity Preservation.

To preserve the identity/category of synthesized images, some recent works such as [chen2016infogan, tran2017disentangled] keep categorical/identity information in the discriminator network as an additional task. Others employ a separate, usually pre-trained, classification network [lu2017conditional, yin2017towards]. In both cases, the categories/identities are known beforehand and fixed in number, so it is trivial to include such supervision in a GAN framework by training the classifier with real data. However, such a setup is not feasible in our case, as images of the new identities to be generated are not available to pre-train a classification network (see Section 3.3 for further discussion).

To address this limitation of existing methods, we employ a combination of set-based supervision approaches that keep unknown identities distinct in a pre-trained embedding space. We keep track of moving averages of same-identity features with the momentum-like centroid update rule of center loss [wen2016discriminative], and penalize distant same-identity samples and close different-identity samples with a simplified variant of magnet loss [rippel2015metric], without its sampling process and with only one cluster per identity.

3 Adversarial Identity Generation

In this section, we describe the proposed method in detail; Fig. 1 shows its schematic diagram. The synthetic image set is formed by a graphical engine from randomly sampled 3DMM, pose and lighting parameters. These images are then translated into the photorealistic domain by a forward generator network and mapped back to the synthetic domain by an inverse network so that the input is retained. The two domain translations are supervised by their respective discriminator networks, with an additional adversarial game between the forward and inverse networks acting as generator and discriminator respectively. During training, the identities produced by the 3DMM are preserved with a set-based loss on a pre-trained embedding network. In the following sub-sections, we further describe these components, i.e. domain adaptation, the real-synthetic pair discriminator, and identity preservation.

3.1 Unsupervised Domain Adaptation

Given a 3D morphable model (3DMM) [blanz1999morphable], we synthesize face images of new identities sampled from its Principal Components Analysis (PCA) coefficient space, with random variation of expression, lighting and pose. Similar to [zhu2017unpaired], a synthetic input image x_s is mapped to the photorealistic domain by a residual network G and mapped back to the synthetic domain by a 3DMM fitting network F, completing the forward cycle only. To preserve cycle consistency, the resulting image is encouraged to be the same as the input by a pixel-level loss:

L_cyc = || F(G(x_s)) - x_s ||_1
In order to encourage the resulting images G(x_s) and F(x_r) to have distributions similar to the real and synthetic domains respectively, the refiner networks are supervised by discriminator networks D_R and D_S with images of the respective domains. The discriminator networks are formed as autoencoders, as in the boundary equilibrium GAN (BEGAN) architecture [berthelot2017began], in which the generator and discriminator networks are trained by the following adversarial formulation:

L_D = L(x) - k_t L(G(x_s)),    L_G = L(G(x_s))

where L(v) = |v - D(v)| is the autoencoder reconstruction loss and, for each training step, we update the balancing term k_t with k_{t+1} = k_t + lambda_k (gamma L(x) - L(G(x_s))). As suggested by [berthelot2017began], this term helps balance the generator and discriminator and stabilizes the training.
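The BEGAN balancing rule can be sketched in a few lines. The function name and the default hyperparameters gamma and lambda_k below follow the BEGAN paper's conventions; no values are stated here, so they are assumptions.

```python
def began_update(k_t, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """One BEGAN step: losses for D and G, plus the balancing-term update
    k_{t+1} = k_t + lambda_k * (gamma * L(x) - L(G(z)))."""
    d_loss = loss_real - k_t * loss_fake   # discriminator objective
    g_loss = loss_fake                     # generator objective
    k_next = k_t + lambda_k * (gamma * loss_real - loss_fake)
    k_next = min(max(k_next, 0.0), 1.0)    # k stays in [0, 1]
    return d_loss, g_loss, k_next
```

When the generator improves (loss_fake drops), k grows, putting more discriminator emphasis on the fake images and keeping the two networks in equilibrium.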

3.2 Adversarial Pair Matching

(a) DC-GAN[radford2015unsupervised]
(b) BEGAN [berthelot2017began]
(c) Ours
(d) GAN-CLS [reed2016generative]
Figure 2: Comparison of our pair matching method with related work. (a) In the traditional GAN approach, the discriminator, designed as a classification network, aligns the distributions of real and synthetic images. (b) BEGAN [berthelot2017began] and many others showed that aligning error distributions offers more stable training and better results. (c) We utilize this autoencoder approach to align the distribution of pairs, encouraging the generated image to be a correct transformation to the realistic domain through a game between real and synthetic pairs. (d) An alternative to our method is to introduce wrongly labeled generated images to the discriminator to teach pair-wise matching; [reed2016generative] used such an approach for text-to-image synthesis.

Cycle consistency loss ensures the bijective transitivity of the two mapping functions, meaning a generated image should be transformable back to its input. Convolutional networks, however, are highly under-constrained and free to make unintended changes as long as cycle consistency is satisfied. Therefore, without additional supervision, the mapping is not guaranteed to preserve the shape, texture, expression, pose and lighting attributes of the face image in either direction. This problem is often addressed by pixel-level penalization between the input and output of the networks [zhu2017unpaired, shrivastava2016learning], which is sub-optimal for domain adaptation as it encourages the output to stay in the input domain.

To overcome this issue, we propose an additional pair-wise adversarial loss that assigns the inverse mapping network F an additional role as a pair-wise discriminator supervising the forward network G. Given a set of paired synthetic and real images (x_s, x_r), the discriminator loss is computed in the BEGAN formulation as follows:

L_pair^D = L(x_s, x_r) - k_t L(x_s, G(x_s)),    L_pair^G = L(x_s, G(x_s))

where L(a, b) denotes the autoencoder reconstruction error of the pair (a, b). While F is itself a generator network with its own separate discriminator, we use it as a third, pair-matching discriminator that supervises G by means of the distribution of paired correspondences between real and synthetic images. Thus, while the cycle loss optimizes for bijective correspondence, we expect the resulting pairs (x_s, G(x_s)) to have a correlation distribution similar to that of the paired training data. Fig. 2 shows the relation of this scheme to previous related work, and a comparison to an alternative, the matching-aware discriminator with paired inputs for text-to-image synthesis suggested by [reed2016generative]. Notice how the BEGAN autoencoder architecture is utilized to align the distribution of pairs of synthetic and real images with that of pairs of synthetic and generated images.
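A rough sketch of how such a pair game could be wired up, under our reading of the formulation above; the pair representation (simple concatenation) and all function names are our assumptions, not the paper's implementation.

```python
import numpy as np

def recon_error(pair, autoencoder):
    """Autoencoder reconstruction error L(v) = mean |v - D(v)|, as in BEGAN."""
    return np.mean(np.abs(pair - autoencoder(pair)))

def pair_discriminator_losses(x_syn, x_real, x_gen, autoencoder, k_t):
    """BEGAN-style pair game: the pair discriminator reconstructs true
    (synthetic, real) pairs well and (synthetic, generated) pairs poorly,
    pushing the generator toward correct synthetic-to-real correspondences."""
    true_pair = np.concatenate([x_syn, x_real])
    fake_pair = np.concatenate([x_syn, x_gen])
    d_loss = recon_error(true_pair, autoencoder) - k_t * recon_error(fake_pair, autoencoder)
    g_loss = recon_error(fake_pair, autoencoder)
    return d_loss, g_loss
```

The generator minimizes g_loss while the pair discriminator minimizes d_loss, with k_t updated by the usual BEGAN balancing rule.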

3.3 Identity Preservation

Although identity information is provided by the 3DMM shape and texture parameters, it may be lost to some extent under a non-linear transformation. Some studies [yin2017towards, tran2017disentangled] address this issue by employing identity labels of known subjects as additional supervision, either with a pre-trained classification network or within the discriminator network. However, we intend to generate images of new identities sampled from the 3DMM parameter space, whose photorealistic images simply do not exist yet. Furthermore, training a new softmax layer and the rest of the framework simultaneously becomes a chicken-and-egg problem and results in failed training.

In order to preserve identity in the changing image space, we propose to adopt a set-based approach over a pre-trained face embedding network. We import the idea of pulling same-identity samples together while pushing close samples of different identities apart in the embedding space, so that same-identity images stay gathered and distinct from other identities regardless of image quality during training. At the embedding layer of a pre-trained network, the generator is supervised by a combination of the center loss [wen2016discriminative] and a pushing loss [gecer2017learning], a simplified version of the magnet loss [rippel2015metric] formulation, which for a given mini-batch M reads:

L_id = (1/|M|) sum_{i in M} max(0, alpha - log [ exp(-||e_i - c_{y_i}||^2 / (2 sigma^2)) / sum_{j != y_i} exp(-||e_i - c_j||^2 / (2 sigma^2)) ])

where e_i is the embedding of the i-th generated image, y_i stands for its identity label provided by 3DMM sampling, and c_j is the centroid of identity j. The margin term alpha is set to 1, and the variance is computed by

sigma^2 = (1/(|M|-1)) sum_{i in M} ||e_i - c_{y_i}||^2
While the quality of the images improves during training, their projections in the embedding space shift. In order to adapt to those changes, we update the identity centroids c_j with a momentum-like rule whenever new images of identity j become available. Following [wen2016discriminative], the moving average of a centroid is updated as c_j <- c_j - eta * delta_c_j, with

delta_c_j = sum_{i in M} d(y_i = j)(c_j - e_i) / (1 + sum_{i in M} d(y_i = j))

where d(condition) = 1 if the condition is satisfied and 0 otherwise. The centroids are initialized with zeros; after a few iterations they converge to the embedding centers, and then continue updating to adapt to the changes caused by the simultaneous training of the generator. Fig. 3 shows the quality of 9 images of 3 identities over training iterations. Notice how the images after convergence differ from those at the beginning of training, to which a softmax layer might converge, subsequently failing to supervise the images generated in later iterations.
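The centroid update above can be sketched as follows; the array shapes and the momentum value eta are illustrative, not values reported in the paper.

```python
import numpy as np

def update_centroids(centroids, embeddings, labels, eta=0.5):
    """Momentum-like centroid update from center loss: for each identity j,
    c_j <- c_j - eta * sum_i 1[y_i = j](c_j - e_i) / (1 + sum_i 1[y_i = j])."""
    centroids = centroids.copy()
    for j in np.unique(labels):
        mask = labels == j
        delta = np.sum(centroids[j] - embeddings[mask], axis=0) / (1 + mask.sum())
        centroids[j] -= eta * delta
    return centroids
```

Because each update moves a centroid only a fraction of the way toward the batch mean of its identity, the centroids track the drifting embedding space without oscillating.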

Figure 3: Quality of 9 images of 3 identities (one per row) during training. The background plot shows the error of the proposed identity preservation layer over the iterations. Notice the changes in the level of fine detail on the faces, which is the main motivation for using set-based identity preservation.

Full Objective

Overall, the framework is optimized by the following simultaneous updates:

L_G = L_GAN(G, D_R) + lambda_1 L_cyc + lambda_2 L_id + lambda_3 L_pair^G
L_F = L_GAN(F, D_S) + lambda_1 L_cyc + lambda_4 L_pair^D

where the lambda parameters balance the contribution of the different modules. The selection of these parameters is discussed in the next section.

4 Implementation Details

Network Architecture:

For the two generator networks, we use a shallow ResNet architecture as in [johnson2016perceptual], which provides a smooth transition without changing the global structure thanks to its limited capacity of only 3 residual blocks. In order to benefit fully from the 3DMM images, we also add skip connections to the forward network. Additionally, we add dropout layers after each block in the forward pass with a 0.9 keep rate, in order to introduce some of the noise that could be caused by uncontrolled environmental changes.
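As an illustration of why a shallow residual generator stays close to its input, here is a toy residual block in NumPy; the dense map standing in for a convolution, the dimensions, and the training flag are our simplification of the architecture described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, weight, keep_rate=0.9, train=True):
    """One toy residual block: y = x + dropout(relu(W x)). With only three
    such blocks, the residual branch has limited capacity, so the output
    stays close to the (3DMM) input carried by the identity skip connection."""
    h = np.maximum(weight @ x, 0.0)  # conv replaced by a dense map for brevity
    if train:
        # inverted dropout with a 0.9 keep rate, as in the text
        h *= rng.binomial(1, keep_rate, size=h.shape) / keep_rate
    return x + h

def generator(x, weights):
    for w in weights:
        x = residual_block(x, w, train=False)
    return x
```

With zero residual weights the generator is exactly the identity, which makes the "smooth transition" behavior easy to see.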

We construct the discriminator networks as autoencoders trained by boundary equilibrium adversarial learning with the Wasserstein distance, as proposed by [berthelot2017began]. The classification network is a shallow FaceNet architecture [schroff2015facenet]; more specifically, we use the NN4 network with an input size of 96×96, for which we randomly crop, rotate and flip the generated images, which are 108×108 pixels.


Our framework needs a large amount of real and synthetic face images. For real face images, we use the CASIA-WebFace dataset [yi2014learning], which consists of ~500K face images of ~10K individuals.

Please recall that the proposed method trains the inverse network as a discriminator with a small number of paired examples of real and synthetic images. For that, we use a combination of the 300W-3D and AFLW2000-3D datasets as our paired training set [zhu2016face], which consists of 5K real images with their corresponding 3DMM parameter annotations. We render synthetic images from those latent parameters and pair them with the matching real images. This dataset is relatively small compared to those used by fully supervised transformation GANs (e.g. the Amazon Handbag dataset used by [isola2016image] contains 137K bag images).

We randomly sample 500K face images of 10K identities as our synthetic dataset, using the Large Scale Face Model (LSFM) [booth20163d] and the FaceWarehouse model for expressions [cao2014facewarehouse]. While the shape and texture parameters of new identities are sampled from the Gaussian distribution of the original model, the expression, lighting and pose parameters are sampled from the same Gaussian distribution as the synthetic samples of 300W-3D and AFLW2000-3D. For our experiments, we align the faces using MTCNN [zhang2016joint] and centre-crop them to 108×108 pixels.
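The parameter sampling strategy above amounts to fitting a Gaussian to the paired set's annotations and drawing new pose/expression codes from it, while identity coefficients follow the model prior. The sketch below uses made-up toy parameters rather than real 300W-3D annotations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annotated parameters from the paired set (e.g. two pose angles
# per image); real annotations would come from 300W-3D / AFLW2000-3D fits.
dataset_params = rng.normal(loc=[0.0, 5.0], scale=[15.0, 10.0], size=(5000, 2))

# Fit a Gaussian to the dataset's parameters and sample new codes from it.
mu, sigma = dataset_params.mean(axis=0), dataset_params.std(axis=0)
new_params = rng.normal(mu, sigma, size=(10, 2))
print(new_params.shape)  # (10, 2)
```

Sampling from the empirical distribution keeps the synthetic set's pose and expression statistics matched to those of the paired data, which matters for the pair discriminator.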

Training Details:

We train all components of our framework together from scratch, except the classification network, which is pre-trained on a subset of the Oxford VGG Face dataset [parkhi2015deep]. The whole framework takes about 70 hours to converge on an Nvidia GTX 1080TI GPU for 248K iterations with a batch size of 16. We train with the ADAM solver [kingma2014adam] and halve the learning rate after the 128Kth, 192Kth, 224Kth, 240Kth, 244Kth, 246Kth and 247Kth iterations.

Each lambda in the full objective is a balancing factor that controls the contribution of its term; we choose these weights to balance realism, cycle-consistency, identity preservation and the supervision by the paired data. We also add an identity loss, as suggested by [zhu2017unpaired], to regularize the training with its own balancing term. During the training, we keep track of moving averages of the network parameters, which are used to generate the final images.
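Keeping moving averages of the network parameters can be sketched as a standard exponential moving average; the decay value below is our assumption, not one reported in the paper.

```python
def ema_update(shadow, params, decay=0.999):
    """Exponential moving average of network parameters; image generation then
    uses the averaged (shadow) weights rather than the raw training weights."""
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]
```

Averaged weights smooth out the oscillations of adversarial training and typically yield cleaner samples than the latest raw weights.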

As a side note, in our experiments we observed that it is beneficial to keep the non-adversarial signals weak to avoid mode collapse. We also observed that the approach of keeping a history of refined images proposed by [bousmalis2016unsupervised] breaks adversarial training in our case, due to the autoencoder discriminators.

5 Results and Discussions

Figure 4: Random samples from the GANFaces dataset. Each row belongs to the same identity. Notice the variation in pose, expression and lighting.

In this section, we show qualitative and quantitative results of the proposed framework. We also discuss the contribution of each module with an ablation study in the supplementary materials. For the experiments, we generate 500,000 images of 10,000 different identities with variations in expression, lighting and pose. We name this synthetic dataset GANFaces; see Fig. 4 for random samples. The dataset, training code, pre-trained models and face recognition experiments can be viewed at

5.1 Visually Plausible 3DMM Generation

One of the main goals of this work is to generate face images guided by the attributes of synthetic input images, i.e. shape, expression, lighting, and pose. Fig. 5 shows that our model is capable of generating photorealistic images preserving the attributes conditioned by the synthetic input images. In the figure, the top row shows variations of pose and expression on input synthetic faces, and the left column shows input synthetic faces of different identities; the rest of the images are generated by our model conditioned on the corresponding attributes from the top row and left column. The conditioned attributes are clearly preserved in the generated images, and fine-grained attributes such as the shapes of the chin, nose and eyes are also retained. In the case of extreme poses, the generated images become less sharp, as the CASIA-WebFace dataset, which we used to train the discriminator network, lacks a sufficient number of examples with extreme poses.


Figure 5: Images generated by the proposed approach, conditioned with identity variation along the vertical axis, normalized and mouth-open expressions in the left and right blocks, and pose variation along the horizontal axis. Images in this figure are not included in the training.

5.2 The Added Realism and Identity Preservation

In order to show that synthetic images are effectively transformed to the realistic domain while preserving identities, we perform a face verification experiment on the GANFaces dataset. We take a pre-trained face recognition CNN, namely the FaceNet NN4 architecture [schroff2015facenet] trained on CASIA-WebFace [yi2014learning], to compute features of the face images; its verification performance on LFW shows that the model is well optimized for in-the-wild face verification. We create 1000 similar (belonging to the same identity) and 1000 dissimilar (belonging to different identities) face image pairs from GANFaces. Similarly, we generate the same number of similar and dissimilar face image pairs from the VGG face dataset [parkhi2015deep] and from the synthetic 3DMM-rendered faces dataset. Fig. 6 shows histograms of the Euclidean distances between similar and dissimilar images measured in the embedding space for the three datasets. The added realism and the preservation of identities in GANFaces can be seen by comparing its distribution to that of the 3DMM synthetic dataset: as the images become more realistic, they become better separable in the pre-trained embedding space. We also observe that the separation of positive and negative pairs of GANFaces is better than that of VGG face pairs; the probable reason VGG does not separate better than GANFaces is its noisy face labels, as indicated in its original study.

Figure 6: Distances of 1000 positive and 1000 negative pairs from three different datasets (GANFaces, 3DMM synthetic images, Oxford VGG), embedded with an NN4 network trained on the CASIA-WebFace dataset.
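The pair-distance evaluation behind Fig. 6 can be sketched as follows; the toy embeddings stand in for NN4 features and are purely illustrative.

```python
import numpy as np

def pair_distances(embeddings, pairs):
    """Euclidean distances in the embedding space for a list of (i, j) index pairs."""
    return np.array([np.linalg.norm(embeddings[i] - embeddings[j]) for i, j in pairs])

# Toy embeddings: two identities with two images each.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
pos = pair_distances(emb, [(0, 1), (2, 3)])  # same-identity pairs
neg = pair_distances(emb, [(0, 2), (1, 3)])  # different-identity pairs
print(pos.mean() < neg.mean())  # True: positives are closer than negatives
```

Histogramming pos and neg for each dataset reproduces the kind of separation plot shown in the figure.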

5.3 Face Recognition with GANFaces dataset

We augment GANFaces with a real face dataset, i.e. VGG Faces [parkhi2015deep], train a VGG19 [Simonyan14c] network, and test performance on two challenging datasets: Labeled Faces in the Wild (LFW) [huang2007labeled] and IJB-A [klare2015pushing]. We vary access to the real face dataset from limited to full and train the deep network on different combinations of real and GANFaces images. Following [masi2016we], we use a VGGNet by [Simonyan14c] with 19 layers pre-trained on the ImageNet dataset [russakovsky2015imagenet], and take its parameters as initialization. We train the network with different portions of the Oxford VGG Face dataset [parkhi2015deep] augmented with the GANFaces dataset. We remove the last layer of the deep VGGNet and add two softmax layers to the previous layer, one for each dataset. The learning rate is set to 0.1 for the softmax layers and 0.01 for the pre-trained layers with the ADAM optimizer, and we halve the gradient coming from the GANFaces softmax. We decrease the learning rate exponentially and train for 80,000 iterations, at which point all of our models have converged without overfitting. We randomly crop and flip patches from the input images, and overall training takes around 9 hours on an NVIDIA 1080TI GPU.
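The two-head training objective described above, with the GANFaces contribution halved, can be sketched as a weighted sum of two softmax cross-entropies; the weight value and function names are our illustration of halving the gradient from the synthetic head.

```python
import numpy as np

def softmax_xent(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def combined_loss(logits_real, y_real, logits_syn, y_syn, syn_weight=0.5):
    """Two softmax heads share the trunk; weighting the GANFaces head by 0.5
    halves its gradient into the shared layers, as described in the text."""
    return softmax_xent(logits_real, y_real) + syn_weight * softmax_xent(logits_syn, y_syn)
```

Scaling the loss by 0.5 is equivalent to halving the gradient flowing back from that head, which down-weights the synthetic labels relative to the real ones.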

We train 6 models with three different portions of the VGG Face dataset, with and without the augmentation of GANFaces. We evaluate the models on the LFW and IJB-A datasets; the benchmark scores improve with the use of the GANFaces dataset despite its low-resolution images. The contribution of GANFaces increases inversely with the number of images included from the VGG dataset, which indicates that more synthetic images might improve the results even further. Further details can be seen in Fig. 7.

We compare our best model, trained on the full VGG dataset plus GANFaces, to other state-of-the-art methods. Despite the much lower resolution, GANFaces improves our baseline to numbers comparable to the state of the art. Please note that generative methods such as [masi2016we, yin2017towards] perform generation (i.e. pose augmentation and normalization) at test time, whereas we use only the given test images. Together with the low resolution, this makes our models more efficient at test time. Given that we generated only 500K images, the accuracy could likely be boosted even further by generating more (e.g. a synthetic set five times larger than the real set, as in [masi2016we]).

Method | Real | Synth | Test-time synth | Image size | Acc. (%) | 100 − EER
FaceNet [schroff2015facenet] | 200M | - | No | 256×256 | 98.87 | -
VGG Face [parkhi2015deep] | 2.6M | - | No | 256×256 | 98.95 | 99.13
Masi et al. [masi2016we] | 495K | 2.4M | Yes | 256×256 | 98.06 | 98.00
Yin et al. [yin2017towards] | 495K | 495K | Yes | 256×256 | 96.42 | -
VGG (full) | 1.8M | - | No | 108×108 | 94.8 | 94.6
VGG (full) + GANFaces | 1.8M | 500K | No | 108×108 | 94.9 | 95.1
Table 1: Comparison with state-of-the-art methods on LFW
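The "100 − EER" column in Table 1 is derived from the equal error rate; a small sketch (our own helper, not code from the paper) that computes it from verification scores:

```python
import numpy as np

def eer_percent(scores, labels):
    """Equal Error Rate (in %): the operating point where the false-accept
    rate on impostor pairs equals the false-reject rate on genuine pairs.

    scores: similarity scores (higher means more likely same identity).
    labels: 1 for genuine pairs, 0 for impostor pairs.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    thresholds = np.sort(np.unique(scores))
    far = np.array([np.mean(scores[labels == 0] >= t) for t in thresholds])
    frr = np.array([np.mean(scores[labels == 1] < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # closest FAR/FRR crossing
    return 100.0 * (far[i] + frr[i]) / 2.0
```

A "100 − EER" of 95.1 in the table thus corresponds to an equal error rate of 4.9%.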
Figure 7: Face recognition benchmark experiments. (Left) Number of images used from the two datasets in the experiments; the total number of images of the VGG Face dataset is 1.8M since some images have been removed from their URLs. (Middle) Performance on the LFW dataset with and without the GANFaces dataset. (Right) True positive rates on the IJB-A verification task with and without the GANFaces dataset.

6 Conclusions

In this paper, we propose a novel end-to-end semi-supervised adversarial training framework to generate photorealistic faces of new identities with wide ranges of poses, expressions, and illuminations from 3DMM rendered faces. Our extensive qualitative and quantitative experiments show that the generated images are realistic and identity preserving.

We generated a dataset of 500,000 face images and combined it with a real face image dataset to train a face recognition CNN, improving performance on recognition and verification tasks. Despite the limited number of generated images, they were still enough to improve recognition rates. In the future, we plan to generate millions of high-resolution images of thousands of new identities to further boost state-of-the-art face recognition.


This work was supported by the EPSRC Programme Grant ‘FACER2VM’
(EP/N007743/1). Baris Gecer is funded by the Turkish Ministry of National Education. This study is morally motivated to improve face recognition to help prediction of genetic disorders visible on human face in earlier stages.


7 Ablation Study

7.1 Quantitative Results

We investigate the contributions of three main components of our framework in an ablation study: the identity preservation module, adversarial pair matching, and the cycle consistency loss. (We do not investigate the contribution of the discriminators, as their effect has been shown by many other studies.) We train our framework from scratch in the same way as explained in the paper, removing each of these modules separately (using the VGG(50) setting). In Table 2, we show the contribution of each module, comparing against the whole framework as a baseline and against a model trained with only half of the VGG dataset.
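The ablation amounts to switching individual terms of the training objective off and retraining from scratch; schematically (the term names are placeholders, and any relative weighting of the terms is omitted here):

```python
def total_loss(adv, identity, pair, cycle,
               use_identity=True, use_pair=True, use_cycle=True):
    """Combine the adversarial loss with the optional terms; each ablated
    model drops exactly one term and is retrained from scratch."""
    loss = adv          # discriminator losses are always kept
    if use_identity:
        loss += identity  # identity preservation term
    if use_pair:
        loss += pair      # adversarial pair matching term
    if use_cycle:
        loss += cycle     # cycle consistency term
    return loss
```

Each row of Table 2 corresponds to one such configuration with a single flag set to False.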

Method | IJB-A Ver. @FAR=0.01 | IJB-A Ver. @FAR=0.001
Ours without identity preservation | 0.50532 ± 0.00433 | 0.17636 ± 0.00611
Ours without adversarial pair matching | 0.48034 ± 0.00402 | 0.15300 ± 0.00348
Ours without cycle consistency | 0.49701 ± 0.00558 | 0.18341 ± 0.00670
VGG(50) | 0.49751 ± 0.00484 | 0.17580 ± 0.00557
Ours (VGG(50)+GANFaces) | 0.53507 ± 0.00575 | 0.18768 ± 0.00388
Table 2: Quantitative ablation study. Each module is removed from the proposed framework in turn, and the performance of the generated images is measured on the IJB-A verification task

7.2 Qualitative Results

Fig. 8 shows visual comparisons between the proposed framework and its versions without each of its components. For the framework and its three variants, we show generated images for 12 3DMM input images of 4 different identities with random illumination, pose and expression variations. We evaluate the quality of the images by identity preservation, visual plausibility and diversity (i.e. avoiding mode collapse). Regarding these criteria, our framework clearly generates better images than all of its variants. Without the identity preservation module (Fig. 8(c)), the framework forgets identity information throughout the network, as there is no direct signal to encourage identity preservation. Please notice the identity consistency of our framework (Fig. 8(b)) compared to (Fig. 8(c)) in the details of the faces (i.e. shape of nose, eyes, mouth, eyebrows and their relative distances) or simply by a visual gender test. Without the adversarial pair matching mechanism (Fig. 8(d)), we observe local mode collapse across different identities: the shapes of the nose and eyes appear similar compared to Fig. 8(b). This mode collapse is also verified by the quantitative experiments (Table 2). The cycle consistency loss (Fig. 8(e)) helps to retain the initial shape given by the 3DMM and improves overall image quality. We also observe noise reduction in the generated images due to the additional supervision provided by all the modules.


Figure 8: Columns are divided into blocks of 3 images from the same identity. Rows: (a) 3DMM synthetic images. (b) Generated images by the full framework (Ours). (c) Ours without the identity preservation module. (d) Ours without adversarial pair matching. (e) Ours without the cycle consistency loss.

8 Identity and Illumination Interpolations

In this section, we show the generalization ability of our framework for unseen synthetic identities by interpolating in the identity space of our 3DMM model. Fig. 9 shows how well the shapes introduced by the 3DMM are learned, so that the transition is smooth and accurate in the photorealistic space. The smooth transition between the two identities with pose variation also shows that the network did not overfit to the given synthetic data and is able to generate photorealistic images even without further training. Fig. 10 shows that the framework has also learned changes in illumination strength and is able to generate images with controlled lighting variation.
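The interpolation in Fig. 9 operates on 3DMM identity coefficients; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def interpolate_identity(alpha_a, alpha_b, steps=11):
    """Linearly interpolate between two 3DMM identity coefficient vectors.

    Each interpolant would be rendered to a synthetic face and then mapped
    to a photorealistic image by the trained generator."""
    alpha_a = np.asarray(alpha_a, dtype=float)
    alpha_b = np.asarray(alpha_b, dtype=float)
    weights = np.linspace(0.0, 1.0, steps)
    return [(1.0 - w) * alpha_a + w * alpha_b for w in weights]
```

Because the interpolants never appear in training, smooth and identity-consistent outputs along the sequence are evidence that the generator has learned the 3DMM shape space rather than memorised the training identities.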


Figure 9: Identity interpolation between first and second identities of the GANFaces dataset (Fig. 4 first two rows). Interpolation is done in the 3DMM space and projected onto realistic space by our framework. The vertical axis shows the identity interpolation under neutral lighting and expression with pose variation at the horizontal axis. Top-most and left-most 3DMM images indicate the respective identity and pose. Images in this figure are not included in the training.


Figure 10: Effect of illumination changes on the generated images. The top row contains 3DMM synthetic images and the bottom row the images generated by the framework given the inputs in the top row. Extreme lighting conditions result in blurry images, as the real training set does not contain images under similar conditions.