Representation Learning by Rotating Your Faces

05/31/2017 · Luan Tran et al., Michigan State University

The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluations on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.


1 Introduction

Face recognition is one of the most widely studied topics in computer vision due to its wide application in law enforcement, biometrics, and marketing. Recently, great progress has been achieved in face recognition with deep learning-based methods [1, 2, 3]. For example, Schroff et al. [3] report surpassing human performance on the Labeled Faces in the Wild (LFW) database. However, one shortcoming of the LFW database is that it offers little pose variation, which has been shown to be a major challenge in face recognition. To date, Pose-Invariant Face Recognition (PIFR), a key capability desired by real-world applications, remains far from solved [4, 5, 6, 7, 8]. A recent study [9] observes a significant drop in the performance of most algorithms from frontal-frontal to frontal-profile face verification, while human performance degrades only slightly. This indicates that pose variation remains a significant challenge in face recognition and warrants further study.

In PIFR, the facial appearance change caused by pose variation often significantly surpasses the intrinsic appearance differences between individuals. To overcome these challenges, a wide variety of approaches have been proposed, which can be grouped into two categories. First, some work employs face frontalization on the input image to synthesize a frontal-view face, to which traditional face recognition algorithms can be applied [10, 11], or from which an identity representation can be obtained by modeling the face frontalization/rotation process [12, 13, 14]. The ability to generate a realistic identity-preserved frontal face is also beneficial for law enforcement practitioners to identify suspects. Second, other work focuses on learning discriminative representations directly from non-frontal faces through either one joint model [2, 3] or multiple pose-specific models [15, 16]. In contrast, we propose a novel framework that takes the best of both worlds: it simultaneously learns a pose-invariant identity representation and synthesizes faces with arbitrary poses, where face rotation is both a facilitator and a by-product of representation learning.

Fig. 1: Given one or multiple in-the-wild face images as the input, DR-GAN can produce a unified identity representation, by virtually rotating the face to arbitrary poses. The learnt representation is both discriminative and generative, i.e., the representation is able to demonstrate superior PIFR performance, and synthesize identity-preserved faces at target poses specified by the pose code.

As shown in Fig. 1, we propose Disentangled Representation learning-Generative Adversarial Network (DR-GAN) for PIFR. Generative Adversarial Networks (GANs) [17] can generate samples following a data distribution through a two-player game between a generator $G$ and a discriminator $D$. Despite many recent promising developments [18, 19, 20, 21, 22], image synthesis remains the main objective of GANs. To the best of our knowledge, this is the first work that utilizes the generator in GAN for representation learning. To achieve this, we construct $G$ with an encoder-decoder structure (Fig. 2 (d)) to learn a disentangled representation for PIFR.

The input to the encoder $G_{enc}$ is a face image of any pose, the output of the decoder $G_{dec}$ is a synthetic face at a target pose, and the learnt representation bridges $G_{enc}$ and $G_{dec}$. While $G$ serves as a face rotator, $D$ is trained not only to distinguish real vs. synthetic (or fake) images, but also to predict the identity and pose of a face. With the additional classifications, $D$ strives for the rotated face to have the same identity as the input real face, which has two effects on $G$: 1) the rotated face looks more like the input subject in terms of identity; 2) the learnt representation is more inclusive or generative for synthesizing an identity-preserved face.

In conventional GANs, $G$ takes a random noise vector and synthesizes an image. In contrast, our $G$ takes a face image, a pose code $c$, and a random noise vector $z$ as the input, with the objective of generating a face of the same identity with the target pose that can fool $D$. Specifically, $G_{enc}$ learns a mapping from the input image to a feature representation. The representation is then concatenated with the pose code and the noise vector and fed to $G_{dec}$ for face rotation. The noise $z$ models facial appearance variations other than identity or pose. Note that it is a crucial architectural design to concatenate one representation with varying, randomly generated pose codes and noise vectors. This enables DR-GAN to learn a disentangled identity representation that is exclusive of, or invariant to, pose and other variations, which is the holy grail of PIFR when achievable.

Most existing face recognition algorithms take only one image for testing. In practice, there are many scenarios where an image collection of the same individual is available [23]. In this case, prior work fuses results either at the feature level [24] or at the distance-metric level [25, 26]. In contrast, our fusion is conducted within a unified framework. Given multiple images as the input, $G_{enc}$ operates on each image and produces an identity representation and a coefficient, which is an indicator of the quality of that input image. Using the dynamically learned coefficients, the representations of all input images are linearly combined into one representation. During testing, $G_{enc}$ takes any number of images and generates a single identity representation, which can be used by $G_{dec}$ for face synthesis along with the pose code.

Our generator is essential to both representation learning and image synthesis. We propose two techniques to further improve $G_{enc}$ and $G_{dec}$, respectively. First, we have observed that $G_{enc}$ can always outperform $D$ in representation learning for PIFR. Therefore, we propose to replace the identity classification part of $D$ with the latest $G_{enc}$ during training so that a superior $D$ can push $G_{enc}$ to further improve itself. Second, since $G_{dec}$ learns a mapping from the feature space to the image space, we propose to improve the learning of $G_{dec}$ by regularizing the average of two representations from different subjects to be a valid face, assuming a convex space of face identities. These two techniques are shown to be effective in improving the generalization ability of DR-GAN.

A preliminary version of this work was published at the 2017 IEEE Conference on Computer Vision and Pattern Recognition [27]. We extend it in numerous ways: 1) instead of having an extra dimension for the fake class in the identity classification task of the discriminator, we split it into two tasks: real/fake classification and identity classification; 2) we propose two techniques to improve model generalization during training; 3) we conduct all experiments using the new models with color image input, and add numerous experiments to reveal how DR-GAN works, including analyses of the disentangled representation and the coefficients.

In summary, this paper makes the following contributions.

  • We propose DR-GAN via an encoder-decoder structured generator that can frontalize or rotate a face with an arbitrary pose, even an extreme profile.

  • Our learnt representation is explicitly disentangled from the pose variation via the pose code in the generator and the pose estimation in the discriminator. Similar disentanglement is conducted for other variations, e.g., illumination.

  • We propose a novel scheme to adaptively fuse multiple faces to a single representation based on the learnt coefficients, which are empirically shown to be a good indicator of the face image quality.

  • We propose two techniques to improve the generalization ability of our generator via model switch and representation interpolation.

  • We achieve state-of-the-art face frontalization and face recognition performance on multiple benchmark datasets, including Multi-PIE [28], CFP [9], and IJB-A [23].

2 Prior Work

Generative Adversarial Network (GAN) Goodfellow et al. [17] introduce GAN to learn generative models via an adversarial process. With a minimax two-player game, the generator and discriminator can both improve themselves. GAN has been used for image synthesis [19, 29], image super-resolution [30], and other applications. More recent work focuses on incorporating constraints into, or leveraging side information for, better synthesis. E.g., Mirza and Osindero [18] feed class labels to both $G$ and $D$ to generate images conditioned on class labels. In [31] and [32], GAN is generalized to learn a discriminative classifier, where $D$ is trained not only to distinguish real vs. fake, but also to classify the images. InfoGAN [21] applies an information-theoretic regularization to the optimization via an additional latent code. In contrast, this paper proposes a novel DR-GAN aiming for face representation learning, which is achieved by modeling the face rotation process. In Sec. 3.6, we provide an in-depth discussion of our differences from the most relevant GAN work.

One crucial issue with GANs is the difficulty of quantitative evaluation. Previous work either performs human studies to evaluate the quality of synthetic images [19] or uses the features in the discriminator for image classification [20]. In contrast, we innovatively construct the generator for representation learning, which can be quantitatively evaluated via PIFR.

Fig. 2: Comparison of previous GAN architectures and our proposed DR-GAN.

Face Frontalization Generating a frontal face from a profile face is very challenging due to self-occlusion. Prior methods in face frontalization can be classified into three categories: 3D-based methods [11, 10, 33], statistical methods [34], and deep learning methods [13, 35, 14, 12, 36]. E.g., Hassner et al. [10] use a mean 3D face model to generate a frontal face for any subject. A personalized face model could be used, but accurate 3D face reconstruction remains a challenge [37, 38]. In [34], a statistical model is used for joint frontalization and landmark localization by solving a constrained low-rank minimization problem. Among deep learning methods, Kan et al. [12] propose SPAE to progressively rotate a non-frontal face to a frontal one via auto-encoders. Yang et al. [35] apply a recurrent action unit to a group of hidden units to incrementally rotate faces in fixed yaw angles.

All prior work frontalizes only near-frontal in-the-wild faces [10, 11] or large-pose controlled faces [14, 13]. In contrast, we can synthesize arbitrary-pose faces from a large-pose in-the-wild face. We use the adversarial loss to improve the quality of the synthetic images, and identity classification in the discriminator to preserve identity.

Representation Learning Designing the appropriate objectives for learning a good representation is an open question [39]. The work in [40] is among the first to use an encoder-decoder structure for representation learning, which, however, is not explicitly disentangled. DR-GAN is similar to DC-IGN [41], a variational autoencoder-based method for disentangled representation learning. However, DC-IGN achieves disentanglement by providing batches of training samples with one attribute fixed, which may not be applicable to unstructured in-the-wild data.

Prior work also explores joint representation learning and face rotation for PIFR, of which [13, 14] are most relevant to our work. In [13], a Multi-View Perceptron is used to untangle the identity and view representations by processing them with different neurons and maximizing the data log-likelihood. Yim et al. [14] use a multi-task CNN to rotate a face of any pose and illumination to a target pose, with the L2-loss-based reconstruction of the input as the second task. Both works focus on image synthesis, and the identity representation is a by-product of the network learning. In contrast, DR-GAN focuses on representation learning, for which face rotation is both a facilitator and a by-product. We differ from [13, 14] in four aspects. First, we explicitly disentangle the identity representation from pose variations via pose codes. Second, we employ the adversarial loss for high-quality synthesis, which drives better representation learning. Third, neither of them applies to in-the-wild faces as we do. Finally, our ability to learn the representation from multiple unconstrained images has not been shown in prior work.

Face Image Quality Estimation Image quality estimation is important for biometric recognition systems [42, 43, 44]. Numerous methods have been proposed to measure the image quality of different biometric modalities including face [45, 46, 47], iris [48, 49], fingerprint [50, 51], and gait [52, 53]. In the scenario of face recognition, an effective algorithm for face image quality estimation can help to either (i) reduce the number of poor images acquired during enrollment, or (ii) improve feature fusion during testing. Both can improve face recognition performance. Abaza et al. [45] evaluate multiple quality factors such as contrast, brightness, sharpness, focus, and illumination as a face image quality index for face recognition. However, they did not consider pose variation, which is a major challenge in face recognition. Ozay et al. [47] employ a Bayesian network to model the relationships between predefined quality-related image features and face recognition, which is shown to boost the performance significantly. The authors of [54] propose a patch-based face image quality estimation method, which takes into account geometric alignment, pose, sharpness, and shadows.

In this work, we employ quality estimation in a unified GAN framework that considers all factors of image quality present in the dataset, with no direct supervision. For each input image, DR-GAN can generate a coefficient that indicates the quality of the input image. The representations from multiple images of the same subject are fused based on the learnt coefficients to generate one unified representation. We will show that the learnt coefficients are correlated with the image quality, i.e., a measure of how well an image can be used for face recognition.

3 The Proposed DR-GAN Model

Our proposed DR-GAN has two variations: the basic model can take one image per subject for training, termed single-image DR-GAN, and the extended model can leverage multiple images per subject for both training and testing, termed multi-image DR-GAN. We start by introducing the original GAN, followed by two DR-GAN variations, and the proposed techniques to improve the generalization of our generator. Finally, we will compare our DR-GAN with previous GAN variations in detail.

3.1 Generative Adversarial Network

Generative Adversarial Network consists of a generator $G$ and a discriminator $D$ that compete in a two-player minimax game. The discriminator $D$ tries to distinguish between a real image $x$ and a synthetic image $G(z)$. The generator $G$ tries to synthesize realistic-looking images from a random noise vector $z$ that can fool $D$, i.e., be classified as real images. Concretely, $D$ and $G$ play the game with the following loss function:

$\min_G \max_D \; \mathbb{E}_{x \sim p_d(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$   (1)

It is proved in [17] that this minimax game has a global optimum when the distribution $p_g$ of the synthetic samples and the distribution $p_d$ of the real samples are the same. Under mild conditions (e.g., $G$ and $D$ have enough capacity), $p_g$ converges to $p_d$. At the beginning of training, the samples generated by $G$ are extremely poor and are rejected by $D$ with high confidence. In practice, it is better for $G$ to maximize $\log(D(G(z)))$ instead of minimizing $\log(1 - D(G(z)))$ [17]. This objective results in the same fixed point of the dynamics of $G$ and $D$ but provides much stronger gradients early in learning. As a result, $G$ and $D$ are trained to alternately optimize the following objectives:

$\max_D \; \mathbb{E}_{x \sim p_d(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$   (2)
$\max_G \; \mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big]$   (3)
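To make the alternating optimization of Eqns. 2 and 3 concrete, the following PyTorch-style sketch (our illustration, not the authors' released code) performs one discriminator update and one generator update with the non-saturating generator loss; the tiny MLP architectures and the 2-D toy data are placeholders.

```python
# Minimal sketch of the alternating GAN updates in Eqns. 2-3 (illustrative only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # sample -> real/fake logit
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(64, 2) + 3.0          # placeholder "real" data
    z = torch.rand(64, 16) * 2 - 1           # noise sampled uniformly in [-1, 1]

    # D step: maximize log D(x) + log(1 - D(G(z)))  (Eqn. 2)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # G step: maximize log D(G(z)), the non-saturating loss (Eqn. 3)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```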

3.2 Single-Image DR-GAN

Our single-image DR-GAN has two distinctive novelties compared to prior GANs. First, it learns an identity representation for a face image by using an encoder-decoder structured generator, where the representation is the encoder’s output and the decoder’s input. Since the representation is the input to the decoder to synthesize various faces of the same subject, i.e., virtually rotating his/her face, it is a generative representation.

Second, the appearance of a face is determined not only by the identity, but also by numerous distractive variations, such as pose, illumination, and expression. Thus, the identity representation learned by the encoder would inevitably include these distractive side variations. E.g., the encoder would generate different identity representations for two faces of the same subject at different yaw angles. To remedy this, in addition to the class labels as in semi-supervised GAN [31], we employ side information such as pose and illumination to explicitly disentangle these variations, which in turn helps to learn a discriminative representation.

3.2.1 Problem Formulation

Given a face image $x$ with label $y = \{y^d, y^p\}$, where $y^d$ represents the identity label and $y^p$ the pose label, the objectives of our learning problem are twofold: 1) to learn a pose-invariant identity representation $f(x)$ for PIFR, and 2) to synthesize a face image $\hat{x}$ with the same identity but at a different pose specified by a pose code $c$. Our approach is to train a DR-GAN conditioned on the original image $x$ and the pose code $c$, with its architecture illustrated in Fig. 2 (d).

Different from the discriminator in conventional GAN, our $D$ is a multi-task CNN consisting of three components: $D = [D^r, D^d, D^p]$. $D^r$ is for real/fake image classification. $D^d$ is for identity classification, with $N^d$ being the total number of subjects in the training set. $D^p$ is for pose classification, with $N^p$ being the total number of discrete poses. Note that, in our preliminary work [27], real/fake classification is implemented as an additional dimension of the identity classification task, which suffers from unbalanced training data across dimensions, i.e., the number of synthetic images (one dimension) equals the total number of images over all real identity classes (the remaining dimensions). This version fixes this problem and is referred to as "split" in Tab. IV. Given a face image $x$, $D$ aims to classify it as real and to estimate its identity and pose; given a synthetic face image $\hat{x} = G(x, c, z)$ from the generator, $D$ attempts to classify $\hat{x}$ as fake, using the following objectives:

$\mathcal{L}_D^{r} = \mathbb{E}_{x,y \sim p_d(x,y)}\big[\log D^r(x)\big] + \mathbb{E}_{x,y \sim p_d(x,y),\, z \sim p_z(z),\, c \sim p_c(c)}\big[\log\big(1 - D^r(G(x,c,z))\big)\big]$   (4)
$\mathcal{L}_D^{d} = \mathbb{E}_{x,y \sim p_d(x,y)}\big[\log D^d_{y^d}(x)\big]$   (5)
$\mathcal{L}_D^{p} = \mathbb{E}_{x,y \sim p_d(x,y)}\big[\log D^p_{y^p}(x)\big]$   (6)

where $D^d_{y^d}$ and $D^p_{y^p}$ are the $y^d$-th and $y^p$-th elements of $D^d$ and $D^p$, respectively. For clarity, we hereafter omit the subscripts of the expectation operators, as all random variables are sampled from their respective distributions.

The final objective for training $D$ is the weighted average of all objectives:

$\max_D \; \mathcal{L}_D = \mu_r \mathcal{L}_D^{r} + \mu_d \mathcal{L}_D^{d} + \mu_p \mathcal{L}_D^{p}$   (7)

where $\mu_r$, $\mu_d$, and $\mu_p$ are the weights of the corresponding objectives.
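A minimal sketch of the three-headed discriminator described above (a shared trunk with real/fake, identity, and pose outputs) and the weighted sum of Eqns. 4-7 follows; the trunk architecture, head sizes, and weights are our assumptions, not the paper's exact configuration.

```python
# Sketch of a multi-task discriminator D = [D^r, D^d, D^p] (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskD(nn.Module):
    def __init__(self, n_id=500, n_pose=9, feat_dim=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ELU(),
                                   nn.Conv2d(32, feat_dim, 3, 2, 1), nn.ELU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_rf = nn.Linear(feat_dim, 1)        # D^r: real vs. fake logit
        self.head_id = nn.Linear(feat_dim, n_id)     # D^d: identity logits
        self.head_pose = nn.Linear(feat_dim, n_pose) # D^p: pose logits

    def forward(self, x):
        h = self.trunk(x)
        return self.head_rf(h), self.head_id(h), self.head_pose(h)

def d_loss(D, real, fake, y_id, y_pose, mu=(1.0, 1.0, 1.0)):
    """Weighted sum of the real/fake, identity, and pose objectives (Eqn. 7)."""
    rf_real, id_logits, pose_logits = D(real)
    rf_fake, _, _ = D(fake.detach())                 # synthetic images only feed the real/fake head
    l_r = F.binary_cross_entropy_with_logits(rf_real, torch.ones_like(rf_real)) + \
          F.binary_cross_entropy_with_logits(rf_fake, torch.zeros_like(rf_fake))
    l_d = F.cross_entropy(id_logits, y_id)           # identity classification on real images
    l_p = F.cross_entropy(pose_logits, y_pose)       # pose classification on real images
    return mu[0] * l_r + mu[1] * l_d + mu[2] * l_p
```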

TABLE I: The structures of the $G_{enc}$, $G_{dec}$, and $D$ networks in single-image and multi-image DR-GAN. $G_{enc}$ and $D$ consist of convolutional layers Conv11–Conv53 followed by AvgPool (plus a final FC layer in $D$ only); $G_{dec}$ mirrors this with an FC layer followed by fractionally-strided convolutions FConv52–FConv11. An extra element (blue in the original table) is added to the $G_{enc}$ of multi-image DR-GAN to learn the coefficient $\omega$. [Filter/stride and output sizes not recovered.]

Meanwhile, $G$ consists of an encoder $G_{enc}$ and a decoder $G_{dec}$. $G_{enc}$ aims to learn an identity representation $f(x) = G_{enc}(x)$ from a face image $x$. $G_{dec}$ aims to synthesize a face image $\hat{x} = G_{dec}(f(x), c, z)$ with the same identity $y^d$ and a target pose $y^t$ specified by the pose code $c$, where $z$ is a noise vector modeling variations other than identity or pose. The pose code $c$ is a one-hot vector whose element for the target pose is set to one. The goal of $G$ is to fool $D$ into classifying $\hat{x}$ as the identity of the input $x$ and the target pose, with the following objectives:

$\mathcal{L}_G^{r} = \mathbb{E}\big[\log D^r(G(x,c,z))\big]$   (8)
$\mathcal{L}_G^{d} = \mathbb{E}\big[\log D^d_{y^d}(G(x,c,z))\big]$   (9)
$\mathcal{L}_G^{p} = \mathbb{E}\big[\log D^p_{y^t}(G(x,c,z))\big]$   (10)

Similarly, the final objective for training the generator is the weighted average of each objective:

$\max_G \; \mathcal{L}_G = \mu_r \mathcal{L}_G^{r} + \mu_d \mathcal{L}_G^{d} + \mu_p \mathcal{L}_G^{p}$   (11)

where $\mu_r$, $\mu_d$, and $\mu_p$ are the corresponding weights.

$G$ and $D$ improve each other during the alternating training process. With $D$ becoming more powerful in distinguishing real vs. fake images and classifying poses, $G$ strives to synthesize an identity-preserved face with the target pose to compete with $D$. We benefit from this process in three aspects. First, the learnt representation $f(x)$ preserves more discriminative identity information. Second, the pose classification in $D$ guides the pose of the rotated face to be more accurate. Third, with a separate pose code as input to $G_{dec}$, $G_{enc}$ is trained to disentangle the pose variation from $f(x)$, i.e., $f(x)$ should encode as much identity information as possible, but as little pose information as possible. Therefore, $f(x)$ is not only generative for image synthesis, but also discriminative for PIFR.

3.2.2 Network Structure

The network structure of single-image DR-GAN is shown in Tab. I. We adopt CASIA-Net [55] with batch normalization (BN) for $G_{enc}$ and $D$. Besides, since the stability of the GAN game suffers if sparse gradient layers (MaxPool, ReLU) are used, we replace them with strided convolutions and the exponential linear unit (ELU), respectively.

$D$ is trained to optimize Eqn. 7 by adding a fully connected layer with the softmax loss for each of the real vs. fake, identity, and pose classifications. $G$ includes $G_{enc}$ and $G_{dec}$, which are bridged by the to-be-learned identity representation $f(x)$, i.e., the AvgPool output of $G_{enc}$. $f(x)$ is concatenated with a pose code $c$ and a random noise vector $z$. A series of fractionally-strided convolutions (FConv) [20] transforms the concatenated vector into a synthetic image $\hat{x}$, which is of the same size as $x$. $G$ is trained to maximize Eqn. 11: a synthetic face is fed to $D$ and the gradient is back-propagated to update $G$.
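The following sketch illustrates this encoder-decoder generator: $G_{enc}$ maps the image to $f(x)$ via strided convolutions and average pooling, and $G_{dec}$ maps the concatenation of $f(x)$, the pose code, and the noise back to an image via fractionally-strided (transposed) convolutions. The layer counts and sizes are placeholders rather than the exact CASIA-Net configuration.

```python
# Sketch of the encoder-decoder generator G = (G_enc, G_dec) (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, feat_dim=320, n_pose=9, noise_dim=50):
        super().__init__()
        self.enc = nn.Sequential(                       # G_enc: image -> f(x)
            nn.Conv2d(3, 64, 3, 2, 1), nn.BatchNorm2d(64), nn.ELU(),
            nn.Conv2d(64, feat_dim, 3, 2, 1), nn.BatchNorm2d(feat_dim), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.dec = nn.Sequential(                       # G_dec: [f(x); c; z] -> image
            nn.ConvTranspose2d(feat_dim + n_pose + noise_dim, 128, 4, 1, 0), nn.ELU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ELU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x, pose_code, noise):
        f = self.enc(x)                                 # identity representation f(x)
        h = torch.cat([f, pose_code, noise], dim=1)     # disentangling concatenation
        return self.dec(h.unsqueeze(-1).unsqueeze(-1))  # synthetic face

G = Generator()
x = torch.randn(4, 3, 96, 96)                           # placeholder input batch
c = torch.eye(9)[torch.randint(0, 9, (4,))]             # one-hot target pose codes
z = torch.rand(4, 50) * 2 - 1                           # noise, assumed uniform in [-1, 1]
x_hat = G(x, c, z)                                       # (4, 3, 16, 16) with these placeholder layers
```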

Previous work on face rotation uses the L2 loss [13, 14] to enforce the synthetic face to be similar to the ground-truth face at the target pose. This line of work requires the training data to include face image pairs of the same identity at different poses, which is achievable for controlled datasets such as Multi-PIE, but hard to fulfill for in-the-wild datasets. On the contrary, DR-GAN does not require image pairs, since there is no direct supervision on the synthetic images. This enables us to utilize extensive real-world unstructured datasets for model training. During training, given a training image, we randomly sample the pose code with equal probability for each pose view. Such random sampling is conducted at each epoch, so that multiple pose codes are assigned to one training image over the course of training. For the noise vector, we also sample each dimension independently from a uniform distribution.
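A small sketch of the per-epoch sampling described above: each training image is paired with a freshly drawn one-hot pose code (uniform over the discrete pose views) and a noise vector. We assume a uniform noise range of [-1, 1], which the text above does not spell out.

```python
# Sketch of random pose-code and noise sampling for one training batch (illustrative).
import numpy as np

def sample_codes(batch_size, n_pose=9, noise_dim=50, rng=None):
    rng = rng or np.random.default_rng()
    # Target pose drawn uniformly over the discrete pose views, encoded one-hot.
    target_pose = rng.integers(0, n_pose, size=batch_size)
    pose_code = np.eye(n_pose, dtype=np.float32)[target_pose]
    # Each noise dimension drawn independently from U(-1, 1) (assumed range).
    noise = rng.uniform(-1.0, 1.0, size=(batch_size, noise_dim)).astype(np.float32)
    return pose_code, target_pose, noise

pose_code, target_pose, noise = sample_codes(batch_size=64)
```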

3.3 Multi-Image DR-GAN

Our single-image DR-GAN extracts an identity representation and performs face rotation by processing one single image. Yet, we often have multiple images per subject in training and sometimes in testing. To leverage them, we propose multi-image DR-GAN that can benefit both the training and testing stages. For training, it can learn a better identity representation from multiple images that are complementary to each other. For testing, it can enable template-to-template matching, which addresses a crucial need in real-world surveillance applications.

The multi-image DR-GAN has the same $D$ as single-image DR-GAN, but a different $G$, as shown in Fig. 3. Given $n$ images $\{x_i\}_{i=1}^{n}$ of the same identity at various poses as input, besides extracting the feature representation $f(x_i)$, $G_{enc}$ also estimates a confidence coefficient $\omega_i$ for each image, which predicts the quality of the learnt representation. The fused representation of the $n$ images is the coefficient-weighted average of all representations:

$f(x_1, x_2, \ldots, x_n) = \dfrac{\sum_{i=1}^{n} \omega_i \, f(x_i)}{\sum_{i=1}^{n} \omega_i}$   (12)

This fused representation is then concatenated with $c$ and $z$ and fed to $G_{dec}$ to generate a new image, which is expected to have the same identity as all input images and the target pose specified by the pose code. Thus, each sub-objective for learning $G$ has $n+1$ terms, one per input image plus one for the fused representation; for the identity objective,

$\mathcal{L}_G^{d} = \sum_{i=1}^{n} \mathbb{E}\big[\log D^d_{y^d}(G(x_i, c, z))\big] + \mathbb{E}\big[\log D^d_{y^d}(G(x_1, \ldots, x_n, c, z))\big]$   (13)
Fig. 3: Generator in multi-image DR-GAN. From an image set of a subject, we can fuse the features to a single representation via dynamically learnt coefficients and synthesize images in any pose.

A similar extension applies to $\mathcal{L}_G^{r}$ and $\mathcal{L}_G^{p}$. The coefficient $\omega_i$ in Eqn. 12 is learned so that an image with higher quality contributes more to the fused representation. Quality here is an indicator of the PIFR performance of the image, rather than its low-level image quality. Face quality prediction is a classic topic, and much prior work attempts to estimate the former from the latter [47, 54]. Our coefficient learning is essentially quality prediction, but from a novel perspective compared to prior work: without explicit supervision, it is driven by $D$ through the decoded image, and learned in the context of, and as a byproduct of, representation learning. Note that jointly training with multiple images per subject results in one, not multiple, generators, i.e., all $G_{enc}$ in Fig. 3 share the same parameters. This makes it flexible to take an arbitrary number of images during testing for representation learning and face rotation.

For the network structure, multi-image DR-GAN makes only a minor modification to its single-image counterpart. Specifically, at the end of $G_{enc}$, we add one more convolutional filter to the layer before AvgPool to estimate the coefficient $\omega$, and apply a sigmoid activation to constrain $\omega$ to the range [0, 1]. During training, although unnecessary in principle, we keep the number of input images per subject the same for the sake of convenience in image sampling and network training. To mimic the variation in the number of input images, we use a simple but effective trick: applying dropout to the coefficients $\omega$. Hence, during training, the network effectively takes a varying number of inputs, from one up to $n$.
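The fusion of Eqn. 12, the sigmoid-bounded coefficient, and the coefficient-dropout trick can be sketched as follows; treating the coefficient as one extra scalar output of the encoder is our simplification of the extra convolutional filter described above.

```python
# Sketch of coefficient-weighted representation fusion (Eqn. 12) with coefficient dropout.
import torch

def fuse(features, raw_coeffs, drop_prob=0.0):
    """features: (n, d) per-image representations f(x_i);
    raw_coeffs: (n,) unbounded outputs of the extra coefficient filter."""
    omega = torch.sigmoid(raw_coeffs)                      # constrain omega to [0, 1]
    if drop_prob > 0:                                      # mimic a variable number of inputs
        keep = (torch.rand_like(omega) > drop_prob).float()
        if keep.sum() == 0:                                # always keep at least one image
            keep[torch.argmax(omega)] = 1.0
        omega = omega * keep
    fused = (omega.unsqueeze(1) * features).sum(0) / omega.sum().clamp_min(1e-8)
    return fused, omega

feats = torch.randn(6, 320)          # six images of one subject (placeholder features)
raw = torch.randn(6)
fused, omega = fuse(feats, raw, drop_prob=0.3)
```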

DR-GAN can be used for PIFR, image quality prediction, and face rotation. While the network in Fig. 2 (d) is used for training, our network for testing is much simplified. First, for PIFR, only $G_{enc}$ is used to extract the representation from one or multiple images. Second, for quality prediction, only $G_{enc}$ is used to compute $\omega$ from one image. Third, both $G_{enc}$ and $G_{dec}$ are used for face rotation by specifying a target pose code and a noise vector.

3.4 Improving $G_{enc}$ via Model Switch

The ultimate goal of DR-GAN is to learn a disentangled representation for PIFR. Our $G_{enc}$ aims for identity representation learning. While our $D$ aims for identity classification, it also learns an identity representation that could be used for face recognition during testing, as in most previous work [55, 56]. The fact that both $G_{enc}$ and $D$ can be used for face recognition motivates us to explore two questions. First, can $G_{enc}$ outperform $D$ for PIFR? Second, does a better $D$ lead to a better $G_{enc}$ in representation learning?

Fig. 4: Recognition performance of $D$ and $G_{enc}$ when training DR-GAN with different $D$ on the Multi-PIE dataset.

To answer the above questions, we conduct a bounding experiment to compare the face recognition performance of $D$ and $G_{enc}$. Specifically, using the Multi-PIE training set, we train a single-task CNN-based face recognition model and save it at four successive epochs, yielding four discriminator initializations. These four models are used as $D$ to train four single-image DR-GAN models. Each model is trained until convergence, where we only update $G$ with $D$ being fixed, which leads to four corresponding $G_{enc}$ models.

Both $D$ and $G_{enc}$ are used to extract identity features for face recognition on Multi-PIE, with the results in Fig. 4. We have three observations. First, the performance of the four $D$ models increases monotonically, which is expected since performance improves as the model is trained for more epochs. Second, the performance of the corresponding $G_{enc}$ models shows a similar trend, which indicates that a better $D$ indeed leads to a better $G_{enc}$. Third, $G_{enc}$ consistently outperforms $D$, which suggests that the learnt representation in $G_{enc}$ is more discriminative than the representation in conventional CNN-based face recognition models.

Based on the above observations, we propose an iterative scheme that switches between $D$ and $G_{enc}$ in order to further improve $G_{enc}$. As shown in Tab. I, $G_{enc}$ and $D$ share the same network structure, except that the $G_{enc}$ of multi-image DR-GAN has an additional convolutional filter for coefficient estimation. During training, we iteratively replace $D$ with the latest $G_{enc}$ (removing the additional convolutional filter) after several epochs. Since $G_{enc}$ can always outperform $D$, we expect a better $D$ after the model switch. Moreover, a better $D$ leads to a better $G_{enc}$, which is then used as $D$ for the next switch. This iterative switch leads to a better representation and thus better PIFR performance.
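One plausible way to implement the model switch is to periodically copy the current $G_{enc}$ weights into the shared trunk of $D$; since the two networks share their structure up to the extra coefficient filter, the copy can simply skip non-matching parameters. The helper below is our illustration, not the authors' code.

```python
# Sketch of the model switch: replace D's trunk with the latest G_enc (illustrative only).
import torch.nn as nn

def model_switch(g_enc: nn.Module, d_trunk: nn.Module):
    """Copy every G_enc parameter whose name and shape match into D's trunk,
    skipping the extra coefficient filter present only in multi-image G_enc."""
    d_state = d_trunk.state_dict()
    g_state = g_enc.state_dict()
    copied = {k: v for k, v in g_state.items()
              if k in d_state and v.shape == d_state[k].shape}
    d_state.update(copied)
    d_trunk.load_state_dict(d_state)
    return len(copied)                 # number of parameter tensors transferred

# Usage: call every few epochs during training.
# n = model_switch(g_enc, d_trunk)    # assumes the two trunks share an architecture
```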

3.5 Improving $G_{dec}$ via Representation Interpolation

Our $G_{enc}$ learns a mapping from the image space to a representation space, and $G_{dec}$ learns the mapping from the representation space back to the image space. $G_{enc}$ is important for PIFR, while $G_{dec}$ is crucial for face synthesis. The pose code, the random noise, and the model switch technique are useful for learning a better disentangled representation for $G_{enc}$. However, even with a perfect representation from $G_{enc}$, a poor $G_{dec}$ may synthesize unsatisfactory face images.

To learn a better $G_{dec}$, we propose to employ representation interpolation to regularize the learning process. Prior GANs [20] have observed that interpolation between two noise vectors can still produce a valid image. Similarly, in our work, by assuming a convex identity space, the interpolation between two representations $f(x_1)$ and $f(x_2)$, extracted from face images $x_1$ and $x_2$ of two different identities, should still be a valid face, but of an unknown identity. During training, we randomly pair images with different identities to generate an interpolated representation:

$\bar{f} = \alpha f(x_1) + (1 - \alpha) f(x_2)$   (14)

We use the average, i.e., $\alpha = 0.5$, for simplicity; other weights for combining the two face representations can be used as well. Similar to the objectives for $G$ and $D$ in multi-image DR-GAN, we add terms to regularize the averaged representation. $D$ aims to classify the image generated from $\bar{f}$ as fake via the following extra term:

$\mathcal{L}_D^{r'} = \mathbb{E}\big[\log\big(1 - D^r(G_{dec}(\bar{f}, c, z))\big)\big]$   (15)

And $G$ aims to generate an image from $\bar{f}$ that can fool $D$ into classifying it as the real class and the target pose, while ignoring the identity part, via two additional terms:

$\mathcal{L}_G^{r'} = \mathbb{E}\big[\log D^r(G_{dec}(\bar{f}, c, z))\big]$   (16)
$\mathcal{L}_G^{p'} = \mathbb{E}\big[\log D^p_{y^t}(G_{dec}(\bar{f}, c, z))\big]$   (17)

With the proposed techniques to improve both $G_{enc}$ and $G_{dec}$, we expect to improve the generalization ability of DR-GAN for both representation learning and image synthesis. As will be shown in the experiments, the proposed techniques are effective in improving the performance of DR-GAN.
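A sketch of this interpolation regularizer: representations of two different subjects are averaged (Eqn. 14), decoded, and then pushed toward the fake class for $D$ (Eqn. 15) and toward the real class and target pose for $G$ (Eqns. 16-17). The loss helpers reuse the hypothetical generator and discriminator modules sketched earlier.

```python
# Sketch of the representation-interpolation regularizer (Eqns. 14-17), illustrative only.
import torch
import torch.nn.functional as F

def interpolation_terms(G, D, x1, x2, pose_code, target_pose, noise, alpha=0.5):
    f_bar = alpha * G.enc(x1) + (1 - alpha) * G.enc(x2)          # Eqn. 14 (alpha = 0.5)
    h = torch.cat([f_bar, pose_code, noise], dim=1)
    x_bar = G.dec(h.unsqueeze(-1).unsqueeze(-1))                 # decoded "average" face

    # Extra D term (Eqn. 15): the interpolated image should be classified as fake.
    rf_d, _, _ = D(x_bar.detach())
    loss_d = F.binary_cross_entropy_with_logits(rf_d, torch.zeros_like(rf_d))

    # Extra G terms (Eqns. 16-17): fool D^r and match the target pose; identity is ignored.
    rf_g, _, pose_logits = D(x_bar)
    loss_g = F.binary_cross_entropy_with_logits(rf_g, torch.ones_like(rf_g)) + \
             F.cross_entropy(pose_logits, target_pose)
    return loss_d, loss_g
```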

Fig. 5: The mean faces of pose groups in CASIA-Webface. The blurriness shows the challenges of pose estimation for large poses.

3.6 Comparison to Prior GANs

We compare DR-GAN with the three most relevant GAN variants, as shown in Fig. 2.

Conditional GAN Conditional GAN [18, 57] extends GAN by feeding labels to both $G$ and $D$ to generate images conditioned on the labels, which can be class labels, modality information, or even partial data for inpainting. It has been used to generate MNIST digits conditioned on the class label and to learn multi-modal models. In conditional GAN, $D$ is trained to classify a real image with mismatched conditions to the fake class. In DR-GAN, $D$ classifies a real image to its corresponding class based on the labels.

Semi-Supervised GAN Salimans et al. [31] and Odena [32] concurrently reformulate GAN to learn a discriminative classifier, where $D$ is trained not only to distinguish between real and fake, but also to classify real images into their classes. $D$ outputs a vector with one dimension per class plus one for the real/fake decision. The trained $D$ is used for image classification. DR-GAN shares a similar loss for $D$ but has two additions. First, we expand $G$ with an encoder-decoder structure. Second, we add side-information classification on the pose while training $D$.

Adversarial Autoencoder (AAE) In AAE [58], $G$ is the encoder of an autoencoder. AAE has two objectives in order to turn an autoencoder into a generative model: the autoencoder reconstructs the input image, and the latent vector generated by the encoder matches an arbitrary prior distribution by training $D$. DR-GAN differs from AAE in two aspects. First, the autoencoder in [58] is trained to learn a latent representation matching an imposed prior distribution, while our encoder-decoder learns discriminative identity representations. Second, $D$ in AAE is trained to distinguish real/fake distributions, while our $D$ is trained to classify real/fake images as well as the identity and pose of the images.

4 Experiments

DR-GAN can be used for face recognition, using the learnt representation from $G_{enc}$, and for face rotation, by specifying different pose codes and noise vectors for $G_{dec}$. We evaluate DR-GAN quantitatively for PIFR and qualitatively for face rotation. We further conduct experiments to analyze the training strategy, the disentangled representation, and the image coefficients. Our experiments are conducted on both controlled and in-the-wild databases.

4.1 Experimental Settings

Databases Multi-PIE [28] is the largest database for evaluating face recognition under pose, illumination, and expression variations in a controlled setting. For fair comparison, we follow the setting in [13]: we use the subjects with neutral expression, a limited range of yaw angles, and all illuminations; the first portion of the subjects is used for training and the remaining subjects for testing. In the testing set, one image per subject with frontal view and neutral illumination forms the gallery set, and the others form the probe set. For Multi-PIE experiments, we add an illumination code, similar to the pose code, to disentangle the illumination variation; the numbers of identity, pose, and illumination classes are set accordingly. Further, to demonstrate our ability to synthesize large-pose faces, we train a second model with a larger range of training poses.

For the in-the-wild setting, we train on CASIA-WebFace [55] and AFLW [59], and test on CFP [9] and IJB-A [23]. CASIA-WebFace consists mostly of near-frontal faces. We add AFLW to the training set to supply more pose variation; since it has no identity information, its images are only used to compute the GAN and pose-related losses. CFP consists of subjects each with frontal and profile images; the evaluation protocol includes frontal-frontal (FF) and frontal-profile (FP) face verification, each with multiple folds of same-person and different-person pairs. As another large-pose database, IJB-A contains both still images and video frames and defines template-to-template face recognition, where each template has one or multiple images. We remove subjects overlapping between CASIA-WebFace and IJB-A from the training set. The numbers of identity and pose classes, as well as the dimensions of $f(x)$ and $z$, are set accordingly for both settings.

Implementation Details Following [55], we align all face images to a canonical view of a fixed size and randomly sample crops from the aligned face images for data augmentation. Image intensities are linearly scaled to a fixed range. To provide pose labels for CASIA-WebFace, we apply 3D face alignment [60, 61] to classify each face into one of the discrete pose groups. The mean face image of each pose group is shown in Fig. 5. The mean faces of the profile groups are less sharp than those of the near-frontal pose groups, which indicates pose estimation errors caused by the face alignment algorithm.

Our implementation is extensively modified from a publicly available implementation of DC-GAN, and we follow the optimization strategy in [20]: all weights are initialized from a zero-centered normal distribution, and the Adam optimizer [62] is used.
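As a concrete illustration of this setup, the snippet below initializes weights from a zero-centered normal distribution and configures Adam; the specific standard deviation, learning rate, and betas are the usual DCGAN-style defaults and are our assumptions, since the exact values are not reproduced above.

```python
# Sketch of weight initialization and optimizer setup (assumed DCGAN-style values).
import torch
import torch.nn as nn

def init_weights(m):
    # Zero-centered normal initialization for conv and linear layers.
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)   # std is an assumed value
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# G and D are assumed to be the generator/discriminator modules sketched earlier.
# G.apply(init_weights); D.apply(init_weights)
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```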

TABLE II: Comparison of single- vs. multi-image DR-GAN on CFP (frontal-frontal and frontal-profile verification) for different numbers of training images $n$ per subject. [Numerical values not recovered.]
TABLE III: Comparison of the number of testing images on Multi-PIE for single-image DR-GAN (avg.), multi-image DR-GAN (avg.), and multi-image DR-GAN (fuse). [Numerical values not recovered.]

Evaluation The proposed DR-GAN aims for both face representation learning and face image synthesis. The cosine distance between two representations is used for face recognition. We also evaluate the performance of face recognition w.r.t. different numbers of images in both training and testing. For image synthesis, we show qualitative results by comparing different losses and interpolation of the learnt representations. We also evaluate the various effects of different components in our method.
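Face matching with the learnt representation reduces to a cosine similarity between $G_{enc}$ outputs; a minimal sketch of this evaluation step (names are illustrative):

```python
# Sketch of cosine-similarity matching between identity representations (illustrative).
import numpy as np

def cosine_similarity(f1, f2):
    f1 = f1 / (np.linalg.norm(f1) + 1e-8)
    f2 = f2 / (np.linalg.norm(f2) + 1e-8)
    return float(np.dot(f1, f2))

def identify(probe_feat, gallery_feats, gallery_ids):
    # Identification: assign the probe to the gallery entry with the highest similarity.
    scores = [cosine_similarity(probe_feat, g) for g in gallery_feats]
    return gallery_ids[int(np.argmax(scores))]
```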

TABLE IV: Performance comparison on the CFP dataset (frontal-frontal and frontal-profile verification; Accuracy, EER, and AUC in %). Compared methods: Sengupta et al. [9], Sankarana et al. [63], Chen et al. [64], human performance, DR-GAN [27], DR-GAN (color + split), and DR-GAN (color + split + interpolation). [Numerical values not recovered.]
TABLE V: Identification rate (%) comparison on the Multi-PIE dataset. Compared methods: Zhu et al. [65], Zhu et al. [13], Yim et al. [14], DR-GAN trained with the L2 loss, DR-GAN [27], and DR-GAN. [Numerical values not recovered.]
TABLE VI: Performance comparison on the IJB-A dataset (verification at two FAR operating points; identification at two rank levels). Compared methods: OpenBR [23], GOTS [23], Wang et al. [25], DCNN [24], PAM [15], PAMs [15], DR-GAN [27], DR-GAN (avg.), and DR-GAN (fuse). [Numerical values not recovered.]

4.2 Representation Learning

Single vs. Multiple Training Images We evaluate the effect of the number of training images ($n$) per subject on the face recognition performance on CFP. Specifically, with the same training set, we train three models with different $n$, where $n = 1$ corresponds to single-image DR-GAN and $n > 1$ to multi-image DR-GAN. The face verification performance on CFP using the $G_{enc}$ of each model is shown in Tab. II. We observe the advantage of multi-image DR-GAN over the single-image counterpart even though they use the same amount of training data, which we attribute to the additional constraints in learning that lead to a better representation. However, we do not keep increasing $n$ due to limited computation capacity. In the rest of the paper, we use multi-image DR-GAN unless specified otherwise.

Single vs. Multiple Testing Images We also evaluate the effect of the number of testing images per subject on the face identification rate on Multi-PIE. We mimic IJB-A to generate image sets as the probe set, while the gallery set remains the same with one image per subject. Specifically, from the Multi-PIE probe set, we select a subset of images with large poses, which are used to form different probe sets with an increasing number of images per subject. First, we randomly select one image per subject to form the first probe set. Second, we construct the second probe set by adding one random image of each subject from the subset. The remaining probe sets are constructed in a similar way.

We compare three combinations of models and decision metrics: (i) single-image DR-GAN with the averaged cosine distances of representations, (ii) multi-image DR-GAN with the averaged cosine distances of representations, and (iii) multi-image DR-GAN with the cosine distance of the fused representation. As shown in Tab. III, comparing (ii) and (iii), using the coefficients learned by the network for representation fusion is superior to conventional score averaging, with a consistent improvement. While there is some improvement from (i) to (ii), the margin decreases as the number of testing images increases.

Results on Benchmark Databases We compare our method with state-of-the-art face recognition methods on CFP, Multi-PIE, and IJB-A. Table IV shows the comparison on CFP evaluated with Accuracy, Equal Error Rate (EER), and Area Under the Curve (AUC). Results are reported as the average with standard deviation over the folds. For our results, the first row shows the performance of our preliminary work [27]. "color + split" represents the model trained with color images and the separated real/fake and identity classification tasks, and "+ interpolation" represents the additional change made by the representation interpolation proposed in Sec. 3.5, which is shown to be effective in improving the face recognition performance. Overall, we achieve comparable performance on frontal-frontal verification and a notable improvement on frontal-profile verification.

Table V shows the face identification performance on Multi-PIE compared to methods evaluated in the same setting. Our method shows a significant improvement for large-pose faces, with a large margin at the most extreme poses. The variation of recognition rates across different poses is much smaller than that of the baselines, which suggests that our learnt representation is more robust to pose variation.

Table VI shows the performance of both face identification and verification on IJB-A. The second row of our results shows the performance of score fusion via averaged cosine distances. The third row shows the results of the proposed representation fusion strategy. Compared to the state of the art, DR-GAN achieves superior results on both verification and identification. The proposed fusion scheme via learnt coefficients is superior to the averaged cosine distances of representations. Finally, our work makes a substantial improvement over the preliminary version [27]. These in-the-wild results show the power of DR-GAN for PIFR.

Representation vs. Synthetic Image for PIFR Many prior works [10, 11] use frontalized faces for PIFR. To evaluate the identity preservation of the synthetic images from DR-GAN, we also perform face recognition using our frontalized faces. Any face feature extractor could be applied to them, including $G_{enc}$ or $D$. However, both are trained on real images of various poses. To specialize to synthetic frontal faces, we fine-tune $D$ with the synthetic images. As shown in Tab. VII, although the performance of the synthetic images (and their score-level fusion) is not as good as that of the learnt representation, using the fine-tuned model on synthetic frontal faces still achieves performance comparable to previous methods, which shows the identity preservation ability of DR-GAN.

TABLE VII: Representation vs. synthetic image on IJB-A (verification and identification). [Numerical values not recovered.]
Fig. 6: Face rotation comparison on Multi-PIE. Given the input (at a non-frontal pose and a given illumination), we show synthetic images from the L2 loss (top), the adversarial loss (middle), and the ground truth (bottom). Additional columns show the ability of DR-GAN in simultaneous face rotation and re-lighting.
Fig. 7: Interpolation of $f(x)$, $c$, and $z$. (a) Synthetic images obtained by interpolating between the identity representations of two faces (the left-most and right-most columns). Note the smooth transition between different genders and facial attributes. (b) Only discrete pose angles are available in the training set; DR-GAN interpolates in-between, unseen poses via continuous pose codes (shown above the images). (c) For each input image, DR-GAN synthesizes two faces at two extreme noise vectors, and in-between images by interpolating between the two $z$.

4.3 Face Rotation

Adversarial Loss vs. L2 Loss Prior work [65, 14, 35] on face rotation normally employs the L2 loss to learn a mapping between two views. To compare the L2 loss with our adversarial loss, we train a model where $G$ is supervised by an L2 loss on the ground-truth face at the target view. The training process is kept the same for a fair comparison. As shown in Fig. 6, DR-GAN generates far more realistic faces that are similar to the ground-truth faces in all views. Meanwhile, images synthesized with the L2 loss cannot maintain high-frequency components and are blurry. In fact, the L2 loss treats each pixel equally, which leads to the loss of discriminative information. This inferior synthesis is also reflected in the lower PIFR performance in Tab. V. In contrast, by integrating the adversarial loss, we expect to learn a more discriminative representation for better recognition, and a more generative representation for better face synthesis.

Fig. 8: Face rotation on CFP: (a) input, (b) frontalized faces, (c) real frontal faces, (d) rotated faces at several non-frontal poses. We expect the frontalized faces to preserve the identity, rather than all facial attributes. This is very challenging for face rotation due to the in-the-wild variations and extreme profile views. The artifacts at the image boundary are due to image extrapolation in pre-processing. When the inputs are frontal faces with variations in roll, expression, or occlusion, the synthetic faces can remove these variations.
Fig. 9: Face frontalization on IJB-A. For each of four subjects, we show input images with the estimated coefficients overlaid at the top-left corner (first row) and their frontalized counterparts (second row). The last column shows the ground-truth frontal face and the synthetic frontal face generated from the fused representation of all images. Note the challenges of large poses, occlusion, and low resolution, and our opportunistic frontalization.
Fig. 10: Face frontalization on IJB-A for an image set (first subject) and a video sequence (second subject). For each subject, we show the input images (first row), their respective frontalized faces (second row), and the frontalized faces using incrementally fused representations from all inputs up to that image (third row). The last column shows the ground-truth frontal face.

Variable Interpolations Taking two images $x_1$ and $x_2$ of different subjects, we extract the features $f(x_1)$ and $f(x_2)$. The interpolation between them generates a series of representations, which can be fed to $G_{dec}$ to synthesize face images. In Fig. 7 (a), the top row shows a transition from a female subject to a male subject with beard and glasses. Similar to [20], these smooth semantic changes indicate that the model has learned essential identity representations for image synthesis.

Similar interpolation can be conducted for the pose codes as well. During training, we use a one-hot vector $c$ to specify the discrete pose of the synthetic image. During testing, we can generate face images with continuous poses, whose pose code is the weighted average, i.e., interpolation, of two neighboring pose codes. Note that the resultant pose code is no longer a one-hot vector. As shown in Fig. 7 (b), this leads to a smooth pose transition from one view to views unseen in the training set.
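Generating such an unseen, in-between pose amounts to interpolating the one-hot codes of two neighboring views and feeding the resulting soft code to $G_{dec}$; a small sketch (the pose count and views are placeholders):

```python
# Sketch of continuous pose synthesis via interpolated pose codes (illustrative only).
import numpy as np

def interpolated_pose_code(n_pose, left_view, right_view, t):
    """Blend the one-hot codes of two neighboring views; t in [0, 1]."""
    code = np.zeros(n_pose, dtype=np.float32)
    code[left_view] = 1.0 - t
    code[right_view] = t
    return code   # no longer one-hot; the decoded pose lies between the two views

# Example: sweep from view 3 to view 4 in five steps.
codes = [interpolated_pose_code(9, 3, 4, t) for t in np.linspace(0.0, 1.0, 5)]
```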

We can also interpolate the noise vector $z$. We synthesize frontal faces at two extreme noise vectors and interpolate between them. Given the fixed identity representation and pose code, the synthetic images are identity-preserved frontal faces. As shown in Fig. 7 (c), varying $z$ changes the background, the illumination condition, and facial attributes such as beard, while the identity is well preserved and the faces remain frontal. Thus, $z$ models the less significant face variations. Note that in normal usage the noise vector is randomly generated, so the effective appearance variation induced by $z$ is smaller than in Fig. 7 (c), which uses two extreme $z$.

Face Rotation on Benchmark Databases Our generator is trained to be a face rotator. Given one or multiple face images with arbitrary poses, we can generate multiple identity-preserved faces at different views. Figure 6 shows the face rotation results on Multi-PIE. Given an input image at any pose, we can generate multi-view images of the same subject at different poses by specifying different pose codes, or under different lighting conditions by varying the illumination code. The rotated faces are similar to the ground truth, with well-preserved attributes such as eyeglasses.

One application of face rotation is face frontalization. Our DR-GAN can be used for face frontalization by specifying the frontal view as the target pose. Figure 8 shows face frontalization on CFP. Given an extreme-profile input image, DR-GAN can generate a realistic frontal face that has similar identity characteristics to the real frontal face. To the best of our knowledge, this is the first work that is able to frontalize a profile-view in-the-wild face image. When the input image is already in the frontal view, the synthetic images can correct the pitch and roll angles, normalize illumination and expression, and impute occluded facial areas, as shown in the last few examples of Fig. 8.

Figure 9 shows face frontalization results on IJB-A. For each subject or template, we show several input images and their respective frontalized faces, as well as the frontalized face generated from the fused representation. For each input image, the estimated coefficient is shown in the top-left corner, which clearly indicates the quality of the input image as well as of the frontalized image. For example, the coefficients for low-quality or large-pose input images are very small; these images contribute very little to the fused representation. Finally, the face from the fused representation has superior quality compared to all frontalized images from a single input face. This shows the effectiveness of our multi-image DR-GAN in taking advantage of multiple images of the same subject for better representation learning.

To further evaluate face frontalization w.r.t. different numbers of input images, we vary the number of input images and visualize the frontalized images from the incrementally fused representations. As shown in Fig. 10, the individually frontalized faces have varying degrees of resemblance to the true subject, according to the quality of each input image. The synthetic images from the fused representations (third row) improve as the number of images increases.

Fig. 11: Coefficient distributions on IJB-A (a) and CFP (b). For IJB-A, we visualize images at four regions of the distribution. For CFP, we plot the distributions for frontal faces (blue) and profile faces (red) separately and show images at the heads and tails of each distribution.
Fig. 12: The correlation between the estimated coefficients and the classification probabilities.

4.4 Confidence Coefficients

In multi-image DR-GAN, we learn a confidence coefficient for each input image under the assumption that the learnt coefficient is indicative of the image quality, i.e., how well the image can be used for face recognition. Therefore, a low-quality image should have a relatively poor representation and a small coefficient, so that it contributes less to the fused representation. To validate this assumption, we compute the confidence coefficients for all images in the IJB-A and CFP databases and plot their distributions in Fig. 11.

For IJB-A, we show four example images with low, medium-low, medium-high, and high coefficients. It is obvious that the learnt coefficients are correlated with image quality. Images with relatively low coefficients are usually blurry, of large poses, or with failed cropping, while images with relatively high coefficients are of high quality, with frontal faces and little occlusion. Since CFP consists of frontal faces and profile faces, we plot their distributions separately. Despite some overlap in the middle region, the profile faces clearly have relatively low coefficients compared to the frontal faces. Within each distribution, the coefficients are related to variations other than yaw angles: the low-quality images in each pose group have occlusion and/or challenging lighting conditions, while the high-quality ones have little occlusion and normal lighting.

To quantitatively evaluate the correlation between the coefficients and face recognition performance, we conduct an identity classification experiment on IJB-A. Specifically, we take the frames of one video for each subject and use half of the images for training and the remainder for testing. The training and testing sets share the same identities. Therefore, in the testing stage, we can use the output of the softmax layer as the probability of each testing image belonging to the correct identity class. This probability is an indicator of how well the input image can be recognized as the true identity. Given the estimated coefficients, we plot these two values for the testing set, as shown in Fig. 12. The two values are highly correlated, which again supports our assumption that the learnt coefficients are indicative of image quality.

One common application of image quality estimation is to prevent low-quality images from contributing to face recognition. To validate whether our coefficients have such usability, we design the following experiment. For each template in IJB-A, we keep the images whose coefficients are larger than a predefined threshold $t$; if all coefficients are smaller, we keep the single image with the highest coefficient. Tab. VIII reports the performance on IJB-A with different $t$. With the smallest $t$, all test images are kept and the result is the same as in Tab. VI. These results show that keeping all or the majority of the samples is better than removing them. This is encouraging, as it reflects the effectiveness of DR-GAN in automatically diminishing the impact of low-quality images, without removing them by thresholding.
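This template filtering can be sketched as follows: images with coefficients above a threshold are retained, and if none pass, the single best image is kept. The names and threshold value are illustrative.

```python
# Sketch of coefficient-based filtering of a template (illustrative only).
import numpy as np

def filter_template(features, coefficients, t=0.5):
    """Keep images with omega > t; if none qualify, keep the single best image."""
    coefficients = np.asarray(coefficients)
    keep = coefficients > t
    if not keep.any():
        keep[np.argmax(coefficients)] = True
    return [f for f, k in zip(features, keep) if k], coefficients[keep]

feats = [np.random.randn(320) for _ in range(5)]     # placeholder per-image representations
omegas = [0.1, 0.7, 0.3, 0.9, 0.05]
kept_feats, kept_omegas = filter_template(feats, omegas, t=0.5)
```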

TABLE VIII: Performance on IJB-A when removing images by threshold $t$. "Selected" shows the percentage of retained images. [Numerical values not recovered.]
TABLE IX: Identification rate (%) of $G_{enc}$ on Multi-PIE when repeatedly switching $D$ to $G_{enc}$, reported by epoch number. At the initial epoch, $D$ is trained with only the softmax loss. [Numerical values not recovered.]

4.5 Further Analysis

Model Switch In Sec. 3.4, we propose to improve $G_{enc}$ via model switch, i.e., replacing $D$ with $G_{enc}$ during training. Table IX shows the face recognition performance of $G_{enc}$ on Multi-PIE. At the beginning, $D$ is initialized with a model trained with the softmax loss for identity classification. We use $G_{enc}$ to replace $D$ and retrain $G$ with random initialization. When $G$ converges, we replace $D$ with the latest $G_{enc}$ and repeat the above steps. Empirically, for the Multi-PIE dataset, $G$ always converges within a fixed number of epochs. Hence, in Tab. IX, we perform the model switch every few epochs and report the face recognition performance of $G_{enc}$ at each switch. Clearly, the performance keeps improving as training goes on. This study implies that DR-GAN might leverage future developments in face recognition, by using a third-party recognizer as $D$ and further improving upon it.

Disentangled Representation In DR-GAN, we claim that the learnt representation is disentangled from pose variations via the pose code. To validate this, following the energy-based weight visualization method proposed in [56], we perform feature visualization on the first FC layer in $G_{dec}$. Our goal is to select, out of its filters, the one with the highest response for identity and the one with the highest response for pose. The assumption is that, if the learnt representation is pose-invariant, there should be separate neurons encoding the identity features and the pose features.

Recall that we concatenate $f(x)$, $c$, and $z$ into one vector, which is multiplied by a weight matrix $W$ to generate the output of the FC layer, where each output element is the response of one filter. $W$ can be partitioned into three sub-matrices $W_f$, $W_c$, and $W_z$, which multiply $f(x)$, $c$, and $z$, respectively. Taking the identity sub-matrix $W_f$ as an example, we compute an energy vector whose $k$-th element measures the magnitude of the weights in $W_f$ connected to the $k$-th filter, and select the filter with the highest identity energy. Similarly, from $W_c$, we select another filter with the highest energy for pose.
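One plausible instantiation of this selection, assuming the energy of a filter with respect to identity (or pose) is taken as the L1 magnitude of its weights connected to $f(x)$ (or $c$); the exact energy definition and the dimensions below are our assumptions.

```python
# Sketch of energy-based selection of identity- and pose-dominant FC filters (illustrative).
import numpy as np

def select_filters(W, feat_dim, n_pose, noise_dim):
    """W: (out_dim, feat_dim + n_pose + noise_dim) weight matrix of the first FC in G_dec."""
    W_f = W[:, :feat_dim]                          # weights multiplying f(x)
    W_c = W[:, feat_dim:feat_dim + n_pose]         # weights multiplying the pose code c
    energy_id = np.abs(W_f).sum(axis=1)            # per-filter energy w.r.t. identity (assumed L1)
    energy_pose = np.abs(W_c).sum(axis=1)          # per-filter energy w.r.t. pose
    return int(np.argmax(energy_id)), int(np.argmax(energy_pose))

W = np.random.randn(512, 320 + 9 + 50)             # placeholder weights and dimensions
k_id, k_pose = select_filters(W, feat_dim=320, n_pose=9, noise_dim=50)
```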

Given the representation of one subject, along with a pose code $c$ and noise $z$, we can compute the responses of the two selected filters. By varying the subjects and pose codes, we generate the two arrays of responses in Fig. 13, for identity and pose respectively. For both arrays, each row represents the responses of the same subject and each column represents the same pose. The responses of the identity filter encode identity features: each row shows similar patterns, while columns do not share similarity. On the contrary, for the pose filter, each column shares similar patterns while rows are unrelated. This visualization supports our claim that the learnt representation is pose-invariant.

Fig. 13: Responses of two filters: filter with the highest responses to identity (left), and pose (right). Responses of each row are of the same subject, and each column are of the same pose. Note the within-row similarity on the left and within-column similarity on the right.

Vector Dimensionality In this subsection, we explore how the dimensionalities of the representation $f(\mathbf{x})$ and of the noise vector $\mathbf{z}$ affect the recognition performance of the learnt model. The recognition results on CFP are reported in Tab. X. The dimensionality of the noise vector has a negligible effect on recognition performance; we choose a nonzero dimensionality for its minor improvement over the alternatives and for its ability to incorporate variations other than pose during synthesis. In contrast, the dimensionality of the representation has a larger impact, and we choose the best-performing setting in Tab. X.

TABLE X: Effect of the representation and noise dimensionalities on CFP performance (Frontal-Frontal and Frontal-Profile verification).

5 Conclusions

This paper presents DR-GAN, which learns a disentangled representation for PIFR by modeling the face rotation process. We are the first to construct the generator in a GAN with an encoder-decoder structure for representation learning, which can be quantitatively evaluated by performing PIFR. Using the pose code for decoding, together with pose classification in the discriminator, leads to the disentanglement of pose variation from the identity features. We also propose multi-image DR-GAN, which leverages multiple images per subject in both training and testing to learn a better representation. This is the first work able to frontalize an extreme-pose in-the-wild face. We attribute the superior PIFR and face synthesis capabilities to the discriminative yet generative representation learned by the encoder $G_{enc}$. Our representation is discriminative because the other variations are explicitly disentangled by the pose/illumination codes and the random noise, and it is generative because its decoded (synthetic) image is still classified as the original identity.

References

  • [1] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in CVPR, 2014.
  • [2] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in BMVC, 2015.
  • [3] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” in CVPR, 2015.
  • [4] X. Liu and T. Chen, “Pose-robust face recognition using geometry assisted probabilistic modeling,” in CVPR, 2005.
  • [5] X. Liu, J. Rittscher, and T. Chen, “Optimal pose for face recognition,” in CVPR, 2006.
  • [6] X. Chai, S. Shan, X. Chen, and W. Gao, “Locally linear regression for pose-invariant face recognition,” TIP, 2007.
  • [7] R. Abiantun, U. Prabhu, and M. Savvides, “Sparse feature extraction for pose-tolerant face recognition,” TPAMI, 2014.
  • [8] C. Ding and D. Tao, “A comprehensive survey on pose-invariant face recognition,” TIST, 2016.
  • [9] S. Sengupta, J.-C. Chen, C. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs, “Frontal to profile face verification in the wild,” in WACV, 2016.
  • [10] T. Hassner, S. Harel, E. Paz, and R. Enbar, “Effective face frontalization in unconstrained images,” in CVPR, 2015.
  • [11] X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li, “High-fidelity pose and expression normalization for face recognition in the wild,” in CVPR, 2015.
  • [12] M. Kan, S. Shan, H. Chang, and X. Chen, “Stacked Progressive Auto-Encoders (SPAE) for face recognition across poses,” in CVPR, 2014.
  • [13] Z. Zhu, P. Luo, X. Wang, and X. Tang, “Multi-view perceptron: a deep model for learning face identity and view representations,” in NIPS, 2014.
  • [14] J. Yim, H. Jung, B. Yoo, C. Choi, D. Park, and J. Kim, “Rotating your face using multi-task deep neural network,” in CVPR, 2015.
  • [15] I. Masi, S. Rawls, G. Medioni, and P. Natarajan, “Pose-aware face recognition in the wild,” in CVPR, 2016.
  • [16] C. Ding and D. Tao, “Robust face recognition via multimodal deep face representation,” TMM, 2015.
  • [17] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, 2014.
  • [18] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784, 2014.
  • [19] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus, “Deep generative image models using a Laplacian pyramid of adversarial networks,” in NIPS, 2015.
  • [20] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” ICLR, 2016.
  • [21] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets,” in NIPS, 2016.
  • [22] D. Berthelot, T. Schumm, and L. Metz, “BEGAN: Boundary Equilibrium Generative Adversarial Networks,” arXiv:1703.10717, 2017.
  • [23] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain, “Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A,” in CVPR, 2015.
  • [24] J.-C. Chen, V. M. Patel, and R. Chellappa, “Unconstrained face verification using deep CNN features,” in WACV, 2016.
  • [25] D. Wang, C. Otto, and A. K. Jain, “Face search at scale,” TPAMI, 2016.
  • [26] I. Masi, A. T. Tran, T. Hassner, J. T. Leksut, and G. Medioni, “Do we really need to collect millions of faces for effective face recognition?” in ECCV, 2016.
  • [27] L. Tran, X. Yin, and X. Liu, “Disentangled Representation Learning GAN for pose-invariant face recognition,” in CVPR, 2017.
  • [28] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, “Multi-PIE,” IVC, 2010.
  • [29] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” in ICML, 2016.
  • [30] X. Yu and F. Porikli, “Ultra-resolving face images by discriminative generative networks,” in ECCV, 2016.
  • [31] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” in NIPS, 2016.
  • [32] A. Odena, “Semi-supervised learning with generative adversarial networks,” in ICMLW, 2016.
  • [33] S. Li, X. Liu, X. Chai, H. Zhang, S. Lao, and S. Shan, “Morphable displacement field based image matching for face recognition across pose,” in ECCV, 2012.
  • [34] C. Sagonas, Y. Panagakis, S. Zafeiriou, and M. Pantic, “Robust statistical face frontalization,” in ICCV, 2015.
  • [35] J. Yang, S. E. Reed, M.-H. Yang, and H. Lee, “Weakly-supervised disentangling with recurrent transformations for 3D view synthesis,” in NIPS, 2015.
  • [36] Y. Zhang, M. Shao, E. K. Wong, and Y. Fu, “Random faces guided sparse many-to-one encoder for pose-invariant face recognition,” in ICCV, 2013.
  • [37] J. Roth, Y. Tong, and X. Liu, “Unconstrained 3D face reconstruction,” in CVPR, 2015.
  • [38] ——, “Adaptive 3D face reconstruction from unconstrained photo collections,” TPAMI, 2017.
  • [39] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” TPAMI, 2013.
  • [40] R. Marc’Aurelio, F. J. Huang, Y.-L. Boureau, and Y. LeCun, “Unsupervised learning of invariant feature hierarchies with applications to object recognition,” in CVPR, 2007.
  • [41] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” in NIPS, 2015.
  • [42] S. Bharadwaj, M. Vatsa, and R. Singh, “Biometric quality: A review of fingerprint, iris, and face,” EURASIP JIVP, 2014.
  • [43] P. Grother and E. Tabassi, “Performance of biometric quality measures,” TPAMI, 2007.
  • [44] Y. Tong, F. W. Wheeler, and X. Liu, “Improving biometric identification through quality-based face and fingerprint biometric fusion,” in CVPRW, 2010.
  • [45] A. Abaza, M. A. Harrison, T. Bourlai, and A. Ross, “Design and evaluation of photometric image quality measures for effective face recognition,” IET Biometrics, 2014.
  • [46] M. Abdel-Mottaleb and M. H. Mahoor, “Application notes-algorithms for assessing the quality of facial images,” IEEE Computational Intelligence Magazine, 2007.
  • [47] N. Ozay, Y. Tong, F. W. Wheeler, and X. Liu, “Improving face recognition with a quality-based probabilistic framework,” in CVPRW, 2009.
  • [48] Y. Chen, S. C. Dass, and A. K. Jain, “Localized iris image quality using 2-D wavelets,” in ICB, 2006.
  • [49] E. Krichen, S. Garcia-Salicetti, and B. Dorizzi, “A new probabilistic iris quality measure for comprehensive noise detection,” in BTAS, 2007.
  • [50] E. Tabassi and C. L. Wilson, “A novel approach to fingerprint image quality,” in ICIP, 2005.
  • [51] R. Teixeira and N. Leite, “A new framework for quality assessment of high-resolution fingerprint images,” TPAMI, 2016.
  • [52] D. Muramatsu, Y. Makihara, and Y. Yagi, “View transformation model incorporating quality measures for cross-view gait recognition,” IEEE Transactions on Cybernetics, 2016.
  • [53] D. S. Matovski, M. Nixon, S. Mahmoodi, and T. Mansfield, “On including quality in applied automatic gait recognition,” in ICPR, 2012.
  • [54] Y. Wong, S. Chen, S. Mau, C. Sanderson, and B. C. Lovell, “Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition,” in CVPRW, 2011.
  • [55] D. Yi, Z. Lei, S. Liao, and S. Z. Li, “Learning face representation from scratch,” arXiv:1411.7923, 2014.
  • [56] X. Yin and X. Liu, “Multi-task convolutional neural network for face recognition,” arXiv:1702.04710, 2017.
  • [57] H. Kwak and B.-T. Zhang, “Ways of conditioning generative adversarial networks,” in NIPSW, 2016.
  • [58] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow, “Adversarial autoencoders,” in ICLRW, 2015.
  • [59] M. Koestinger, P. Wohlhart, P. M. Roth, and H. Bischof, “Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization,” in First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011.
  • [60] A. Jourabloo and X. Liu, “Pose-Invariant 3D face alignment,” in ICCV, 2015.
  • [61] ——, “Large-Pose face alignment via CNN-based dense 3D model fitting,” in CVPR, 2016.
  • [62] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in ICLR, 2015.
  • [63] S. Sankaranarayanan, A. Alavi, C. Castillo, and R. Chellappa, “Triplet probabilistic embedding for face verification and clustering,” in BTAS, 2016.
  • [64] J.-C. Chen, J. Zheng, V. M. Patel, and R. Chellappa, “Fisher vector encoded deep convolutional features for unconstrained face verification,” in ICIP, 2016.
  • [65] Z. Zhu, P. Luo, X. Wang, and X. Tang, “Deep learning identity-preserving face space,” in ICCV, 2013.