BoostGAN for Occlusive Profile Face Frontalization and Recognition

02/26/2019 · by Qingyan Duan et al., Chongqing University

Many factors affect human face recognition, such as pose, occlusion, illumination, and age. First and foremost are large pose and occlusion, which can severely degrade recognition performance. Pose-invariant feature representation and face frontalization with generative adversarial networks (GAN) have been widely used to solve the pose problem. However, the synthesis and recognition of occluded profile faces remains an uninvestigated problem. To address this issue, we contribute an effective solution for recognizing occluded profile faces, even when facial keypoint regions (e.g., eyes, nose) are corrupted. Specifically, we propose a boosting Generative Adversarial Network (BoostGAN) for de-occlusion, frontalization, and recognition of faces. Under the assumption that facial occlusion is partial and incomplete, multiple patch-occluded images are fed as inputs to boost knowledge such as identity and texture information. A new aggregation structure, composed of a deep GAN for coarse face synthesis and a shallow boosting net for fine face generation, is further designed. Exhaustive experiments demonstrate that the proposed approach not only produces clear, photo-realistic results but also achieves state-of-the-art recognition performance for occluded profile faces.




1 Introduction

In recent years, face recognition has made great progress, enabled by deep learning techniques [26]. However, many problems remain unsatisfactorily resolved. First and foremost is large pose variation, a bottleneck in face recognition. Existing methods addressing pose variations can be divided into two main categories. One category aims to obtain pose-invariant features by hand-crafting or deep learning on multi-view facial images [15, 25, 19, 27]. The other, inspired by generative adversarial network (GAN) techniques, employs synthesis approaches to generate frontal-view images [9, 24, 21, 12, 10]. For the latter, the generator in GANs generally follows an encoder-decoder convolutional neural network architecture, where the encoder extracts identity-preserving features and the decoder outputs realistic faces in the target pose. Beyond pose variation, illumination difference has also been considered for pose-invariant facial feature representation, synthesis, and recognition.


(a) Profile
(b) Ours
(c) [21]
(d) [12]
(e) GT
Figure 1: Synthesis results of existing models tested on occluded faces at two different poses (top and bottom rows). GT denotes the ground truth frontal images.
Figure 2: The framework of BoostGAN, an end-to-end, coarse-to-fine architecture comprising two parts: a multi-occlusion frontal view generator and a multi-input boosting network. The former generates coarsely de-occluded, weakly identity-preserving images; the latter produces photo-realistic, clean, frontal faces by ensembling the complementary information of multiple inputs in a boosting manner.

In fact, besides pose variation, occlusion also seriously affects face recognition performance. To fill the 'hole' in faces, image completion is generally considered. Most traditional approaches based on low-level cues synthesize the missing contents by searching patches from the known region of the same image [2, 17, 28]. Recently, GANs have been used for image completion with success [18, 13, 4]. However, these methods usually generate only the pixels of the corrupted region while keeping the clean region unchanged. This makes them more appropriate for closed-set image completion, because in the open-set case the corrupted part of a test image has no matching clean image in the training set. Consequently, the filled parts of test images are imprecise and lack identity discrimination. That is, if an occluded facial image is excluded from the training set, the repaired face cannot preserve the original identity, which is not conducive to face recognition. Although Zhao et al. [34] introduced an identity preservation loss, it is still insufficient for solving the open-set image completion problem.

To address this issue, in this paper we aim to answer how to recognize faces when large pose variation and occlusion exist simultaneously, a general problem in the face recognition community. Briefly, our solution is the proposed BoostGAN model. To our knowledge, this is the first work to perform both de-occlusion and frontalization of faces with GAN-based variants for face recognition. Previous face frontalization methods usually synthesize frontal-view faces from clean profile images. However, once a keypoint region is occluded or corrupted, their synthesis quality becomes very poor (as shown in Figure 1). Additionally, previous image completion methods were only used to restore the occluded part of a near-frontal face, not for face frontalization. Although GAN-based image completion approaches can fill the 'hole' (de-occlusion), the generated faces cannot well preserve identity and texture information. Consequently, the performance of occluded profile face recognition is seriously harmed, especially when a key region such as the eyes or mouth is covered, as shown in Figure 1.

Specifically, we propose an end-to-end BoostGAN model to solve face frontalization when occlusions and pose variations exist simultaneously, with identity preservation of the generated faces sufficient for occluded profile face recognition. The architecture of BoostGAN is shown in Figure 2. Different from previous GAN-based face completion, BoostGAN exploits both pixel-level and feature-level information as supervisory signals to preserve identity information at multiple scales. BoostGAN can therefore cope with open-set occluded profile face recognition. A new coarse-to-fine aggregation architecture, with a deep GAN network (coarse net) and a shallow boosting network (fine net), is designed for identity-preserving, photo-realistic face synthesis.

The main contributions of our work are threefold:

- We propose an end-to-end BoostGAN model for occluded profile face recognition, which can synthesize photo-realistic and identity-preserving frontal faces under arbitrary poses and occlusions.

- A coarse-to-fine aggregation structure is proposed. The coarse part is a deep GAN network for de-occlusion and frontalization of multiple partially occluded (e.g., at keypoints) profile faces. The fine part is a shallow boosting network for photo-realistic face synthesis.

- We evaluate the proposed BoostGAN model through quantitative and qualitative experiments on benchmark datasets, achieving state-of-the-art results under both non-occlusive and occlusive scenarios.

2 Related Work

In this section, we review three closely-related topics, including generative adversarial network (GAN), face frontalization and image completion.

2.1 Generative Adversarial Network (GAN)

The generative adversarial network, proposed by Goodfellow et al. [7], is formulated as a game between a generator and a discriminator. The deep convolutional generative adversarial network (DCGAN) [23] was the first to combine CNNs and GANs for image generation. After that, many GAN variants have been proposed. However, the stability of GAN training remains an open problem that has attracted many researchers. For example, Wasserstein GAN [1] removed the logarithm in the loss function of the original GAN. The spectral normalization generative adversarial network (SNGAN) [22] proposed spectral normalization to satisfy a Lipschitz constraint and guarantee the boundedness of statistics. Besides that, many other works focus on improving visual realism. For example, Zhu et al. proposed CycleGAN [35] to deal with unpaired data. Karras et al. obtained high-resolution images from low-resolution ones by growing both the generator and discriminator progressively [16].

Method        Without occlusion  With occlusion
DR-GAN [21]   ✓                  ✗
FF-GAN [32]   ✓                  ✗
TP-GAN [12]   ✓                  ✗

Table 1: Feasibility of GAN variants for profile face frontalization and recognition with/without occlusion.

2.2 Face Frontalization

Face frontalization is an extremely challenging task due to its ill-posed nature. Existing methods can be divided into three categories: 2D/3D local texture warping [9, 36], statistical methods [24], and deep learning methods [21, 3, 12, 10]. Specifically, Hassner et al. exploited a mean 3D face reference surface to generate a frontal-view facial image for any subject [9]. Sagonas et al. viewed frontal view reconstruction and landmark localization as a constrained low-rank minimization problem [24]. Benefiting from GANs, Luan et al. proposed DR-GAN for pose-invariant face recognition [21]. FF-GAN introduced a 3D face model into the GAN, so that the 3DMM-conditioned model retains visual quality during frontal view synthesis [32]. TP-GAN deals with profile faces through separate global and local networks, then fuses them to generate the final frontal face and improve photo-realism [12]. CAPG-GAN recovers face images in both neutral and profile head poses from an input face and a facial landmark heatmap, using a pose-guided generator and a couple-agent discriminator [10].

These methods work well under non-occlusive scenarios. However, they focus on the impact of pose variation, and the effect of occlusion is ignored. Due to the specificity of their network architectures, the existing methods cannot generalize to occluded inputs, as summarized in Table 1.

2.3 Image Completion

Synthesizing the missing part of a facial image can be formulated as an image completion problem. A content prior is required to obtain a faithful reconstruction, which usually comes from either other parts of the same image or an external dataset. Early algorithms inpaint missing content by propagating information from known neighborhoods based on low-level cues or global statistics, finding similar structures in the context of the input image and pasting them into the holes [2, 17, 28]. Deep neural network based methods repair the missing part of images by learning the background texture [5]. Recently, GANs have been introduced for this task. Li et al. designed a GAN model with global and local discriminators for image completion [18]. Yeh et al. generate the missing content by conditioning on the available data for semantic image inpainting [30]. However, none of these methods is capable of preserving identity. Therefore, Zhao et al. proposed to recover the missing content under various head poses while preserving identity by introducing an identity loss and a pose discriminator [34].

These image completion approaches can only fill the missing region. Recognizing faces under the scenarios of both occlusion and pose variation is not well studied.

3 Approach

Different from these face frontalization and image completion methods, our proposed BoostGAN targets the scenario where occlusion and pose variation occur simultaneously, i.e., occluded profile face recognition.

3.1 Network Architecture

3.1.1 Multi-occlusion Frontal View Generator

In this work, two kinds of occlusion positions are considered: keypoint position occlusion and random position occlusion. For keypoint position occlusion, four occluded profile images I_i^occ (i = 1, …, 4) are obtained by covering the left eye, right eye, nose, and mouth of a profile image I with a white square mask, respectively. The corresponding frontal ground truth of profile image I is denoted as I^gt; all images share the same width, height, and number of channels. The aim of the multi-occlusion frontal view generator G^A is to recover four rough but slightly discriminative frontal images from the four occluded profile images. G^A is composed of an encoder G_E and a decoder G_D. That is,

Î_i = G_D(G_E(I_i^occ)), i = 1, …, 4,

where Î_i is the generated frontal image for each occluded profile image.
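To make the input construction concrete, the following NumPy sketch builds the four keypoint-occluded copies of a profile image. The keypoint coordinates, mask size, and function name here are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def make_keypoint_occluded_inputs(profile, keypoints, mask_size=25):
    """Create four copies of a profile image, each with a white square
    mask centered on one facial keypoint (left eye, right eye, nose,
    mouth). `profile` is an H x W x C uint8 image; `keypoints` is a list
    of four (x, y) mask centers."""
    h, w = profile.shape[:2]
    half = mask_size // 2
    occluded = []
    for (x, y) in keypoints:
        img = profile.copy()
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        img[y0:y1, x0:x1] = 255  # white square occlusion
        occluded.append(img)
    return occluded
```

The four returned images correspond to the inputs I_1^occ, …, I_4^occ fed to the generator.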

Inspired by the excellent performance of TP-GAN [12], our generator follows a similar network structure, formulated as a down-sampling encoder and an up-sampling decoder with skip connections, for fair comparison and model analysis.

Layer Input Filter Size Output Size
resblock1 concatenated
conv1 resblock1
resblock2 conv1
conv2 resblock2
conv3 conv2
Table 2: Configuration of the boosting network.

3.1.2 Multi-input Boosting Network

Through the multi-occlusion frontal view generator, four rough and slightly discriminative frontal faces are coarsely obtained. Due to the occlusion of key regions, identity preservation is not good and the photo-realism of the synthesis is flawed by blur and distortion. Therefore, a multi-input boosting network then follows for photo-realism and identity preservation, because we find that the four inputs are complementary. Boosting is an ensemble meta-algorithm that transforms a family of weak learners into a strong one by fusion. Analogously, the four facial images primarily generated by the generator can be further converted into a single photo-realistic and identity-preserving frontal face by the boosting network.

The boosting network, denoted G^B, handles the final image generation. Simply, the four primarily generated facial images, denoted Î_1, …, Î_4, are concatenated channel-wise as the input of the boosting network. That is,

Î^f = G^B([Î_1, Î_2, Î_3, Î_4]),

where Î^f denotes the final generated photo-realistic and identity-preserving frontal face. The boosting network contains two residual blocks, each followed by a convolutional layer. To avoid overfitting and vanishing gradients, Leaky-ReLU and batch normalization are used. The detailed network configuration is shown in Table 2.
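The fusion step can be illustrated with a small NumPy sketch: the four coarse outputs are concatenated channel-wise before entering the boosting network. A trivial pixel-wise average is included only as a stand-in for the learned boosting network, and both function names are ours:

```python
import numpy as np

def boosting_input(coarse_faces):
    """Stack four coarse frontalized faces channel-wise, forming the
    boosting network's input: each H x W x C array -> one H x W x 4C."""
    return np.concatenate(coarse_faces, axis=-1)

def naive_fusion(coarse_faces):
    """A trivial stand-in for the learned boosting network: pixel-wise
    averaging of the complementary coarse outputs."""
    return np.mean(np.stack(coarse_faces, axis=0), axis=0)
```

In BoostGAN the fusion is learned (two residual blocks plus convolutions, Table 2) rather than a fixed average; the sketch only shows the tensor plumbing.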


3.2 Training Loss Functions

The proposed BoostGAN is trained by taking a weighted sum of different losses as the supervisory signal in an end-to-end manner, including adversarial loss, identity preserving loss, and pixel-level losses.

3.2.1 Adversarial Loss

In the training stage, two components are included: the discriminator D and the generator (G^A and G^B). The goal of D is to distinguish the fake data produced by the generator from the real data, while the generator aims to synthesize realistic-looking images to fool D. The game between them can be represented by the value function

min_G max_D V(D, G) = E[log D(I^gt)] + E[log(1 - D(Î^f))].

In practice, D, G^A, and G^B are alternately optimized via the following objectives:

L_D = -(1/N) Σ_n [log D(I_n^gt) + log(1 - D(Î_n^f))],
L_adv = -(1/N) Σ_n log D(Î_n^f),

where N is the batch size.
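A NumPy sketch of these adversarial objectives follows. It uses the common non-saturating form of the generator loss (-log D(fake)); whether the paper uses this or the original minimax form is an assumption here:

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-8):
    """Discriminator loss: -E[log D(real)] - E[log(1 - D(fake))],
    averaged over the batch. d_real/d_fake are D's output probabilities."""
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def g_adv_loss(d_fake, eps=1e-8):
    """Non-saturating generator adversarial loss: -E[log D(fake)]."""
    return -np.mean(np.log(d_fake + eps))
```

When the discriminator is maximally confused (outputs 0.5 everywhere), d_loss approaches 2·log 2, the classical equilibrium value.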


3.2.2 Identity Preserving Loss

During de-occlusion and frontalization, facial identity information is easily lost, which is undoubtedly harmful to face recognition performance. Therefore, to better preserve the identity of the generated image, identity-wise feature representations from a pre-trained face recognition network are used as supervisory signals. The similarity, measured with the L1 distance, formulates the identity preserving loss

L_ip = ||φ_p(Î) - φ_p(I^gt)||_1 + ||φ_fc(Î) - φ_fc(I^gt)||_1,

where φ_p and φ_fc denote the output of the last pooling layer and the fully connected layer, respectively, of Light CNN [29] pre-trained on large-scale face datasets. Î represents each generated face in BoostGAN, including the coarse outputs Î_1, …, Î_4 and the final output Î^f.
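A minimal sketch of this loss, assuming the identity features are exposed as a dict with the pooling-layer and fully-connected outputs (the dict layout is our assumption for illustration, not Light CNN's actual API):

```python
import numpy as np

def identity_loss(feat_gen, feat_gt):
    """Mean L1 distance between identity features of the generated and
    ground-truth frontal faces, summed over the two feature layers
    ('pool' = last pooling layer, 'fc' = fully connected layer)."""
    return sum(np.abs(feat_gen[k] - feat_gt[k]).mean() for k in ("pool", "fc"))
```

The loss is zero exactly when both feature representations match, which is what pushes the generator toward identity preservation.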

3.2.3 Pixel-level Losses

To guarantee multi-image content consistency and improve photo-realism, three pixel-level losses are employed: a multi-scale pixel-wise L1 loss, a symmetry loss, and total variation regularization [14]:

L_pix = (1/S) Σ_{s=1}^{S} (1/(W_s H_s)) || Î_s - I_s^gt ||_1,
L_sym = (2/(W H)) Σ_{x=1}^{W/2} Σ_{y=1}^{H} | Î_{x,y} - Î_{W-x+1,y} |,
L_tv = Σ_{x,y} ( |Î_{x+1,y} - Î_{x,y}| + |Î_{x,y+1} - Î_{x,y}| ),

where S denotes the number of scales, W_s and H_s denote the width and height of each image scale, and W - x + 1 is the symmetric abscissa of x in Î. Three image scales are considered in our approach.
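The three pixel-level terms can be sketched in NumPy as follows; average pooling as the scale pyramid and the exact normalizations are our simplifications, not the paper's precise implementation:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an H x W x C image by an integer factor (a simple
    stand-in for the multi-scale pyramid)."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def multiscale_l1(gen, gt, factors=(1, 2, 4)):
    """Mean absolute error averaged over several image scales."""
    return float(np.mean([np.abs(downsample(gen, f) - downsample(gt, f)).mean()
                          for f in factors]))

def symmetry_loss(gen):
    """Penalize left-right asymmetry of the generated frontal face."""
    return float(np.abs(gen - gen[:, ::-1]).mean())

def tv_loss(gen):
    """Total variation regularizer encouraging spatial smoothness."""
    return float(np.abs(np.diff(gen, axis=0)).mean()
                 + np.abs(np.diff(gen, axis=1)).mean())
```

All three terms vanish for a perfectly reconstructed, symmetric, smooth frontal face, so they act purely as penalties during training.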

3.2.4 Overall Loss of Generator

In summary, the ultimate loss for training the generator (G^A and G^B) is a weighted sum of the above losses:

L_G = L_pix + λ_1 L_sym + λ_2 L_adv + λ_3 L_ip + λ_4 L_tv,

where λ_1, …, λ_4 are trade-off parameters.

4 Experiments

To demonstrate the effectiveness of our proposed method for occluded profile face recognition, we conduct qualitative and quantitative experiments on constrained and unconstrained benchmark datasets. We first show qualitative results of frontal face synthesis under various poses and occlusions, and then evaluate face recognition performance on occluded profile faces across different poses and occlusions.

(a) Profile
(b) Ours
(c) [21]
(d) [12]*
(e) GT
Figure 3: Synthesis results on the keypoint region occluded Multi-PIE dataset. From top to bottom, the pose angle increases. The ground truth frontal images are provided in the last column.
(a) Profile
(b) Ours
(c) [21]
(d) [12]*
(e) GT
Figure 4: Synthesis results on the random block occluded Multi-PIE dataset. From top to bottom, the pose angle increases. The ground truth frontal images are provided in the last column. Note that all models are trained solely on the keypoint occluded Multi-PIE dataset.

4.1 Experimental Settings

Databases. Multi-PIE [8] is the largest database for evaluating face recognition and synthesis in a constrained setting. A total of 337 subjects were recorded in four sessions, each with 20 illumination levels and 13 poses per subject. Following the testing protocol in [21, 12], we use the images of all 337 subjects with neutral expression under 11 poses from all sessions. The first 200 subjects form the training set and the remaining 137 subjects the testing set. In the testing stage, the first appearance image with frontal pose and neutral illumination per subject is used as the gallery, and the others are probes.

The LFW [11] database contains 13,233 images of 5,749 subjects, among which only 85 subjects have more than 15 images and 4,069 have only one image. It is generally used to evaluate face verification or synthesis performance in the wild (i.e., the unconstrained setting). Following the face verification protocol [11], a 10-fold cross-validation strategy is used to evaluate verification performance on the generated images. Several state-of-the-art models, such as FF-GAN [32], DR-GAN [21], and TP-GAN [12], are compared with our approach.

Data Preprocessing. To guarantee the generality of BoostGAN and reduce model parameter bias, faces in both Multi-PIE and LFW are detected by MTCNN [33] and aligned to a canonical view. Two kinds of occlusions are used in our work: keypoint occlusion and random-position occlusion. For the former, the centers of the occlusion masks are the facial keypoints, i.e., the left eye, right eye, tip of the nose, and center of the mouth. For the latter, the centers of the occlusion masks are randomly positioned. Each occlusion mask is a square large enough to completely cover a keypoint region and is filled with white pixels.
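The random-position occlusion can be sketched as follows; the mask size and the uniform sampling of the mask corner are assumptions consistent with, but not guaranteed identical to, the paper's setup:

```python
import numpy as np

def random_occlusion(img, mask_size=25, rng=None):
    """Paste one white square mask at a uniformly random position fully
    inside the image, returning a new occluded copy."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - mask_size + 1))
    x = int(rng.integers(0, w - mask_size + 1))
    out = img.copy()
    out[y:y + mask_size, x:x + mask_size] = 255  # white fill
    return out
```

Passing a seeded generator makes occlusion placement reproducible across runs, which matters when comparing models on identical corrupted probes.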

Implementation Details. For the conventional GAN [7], Goodfellow et al. suggested alternating between k steps of optimizing one network (usually k = 1) and one step of optimizing the other. Accordingly, we take two update steps for the generators (G^A and G^B) and one for the discriminator, which ensures good performance. The trade-off parameters are fixed across all experiments.
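The alternating schedule can be sketched as a plain training loop. The 2:1 generator-to-discriminator ratio below mirrors the text but is left configurable, and the update callbacks are placeholders for the actual optimizer steps:

```python
def train(num_iters, g_steps=2, d_steps=1, update_g=None, update_d=None):
    """Alternate generator and discriminator updates for num_iters
    outer iterations; update_g/update_d are optional callbacks that
    would perform one optimizer step each."""
    counts = {"g": 0, "d": 0}
    for _ in range(num_iters):
        for _ in range(g_steps):     # generator (G^A and G^B) updates
            if update_g:
                update_g()
            counts["g"] += 1
        for _ in range(d_steps):     # discriminator update
            if update_d:
                update_d()
            counts["d"] += 1
    return counts
```

Returning the update counts makes the schedule easy to verify independently of any model code.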

(a) Profile
(b) Ours
(c) [21]
(d) [12]*
Figure 5: Synthesis results on keypoint region occluded LFW dataset in the wild. Note that there are no ground truth frontal images for this dataset. The models are solely trained based on keypoint occluded Multi-PIE dataset.

4.2 Face Frontalization

To qualitatively demonstrate the synthesis ability of our method, the generated frontal images under different poses, occlusions, and settings are shown in this section. The qualitative experiments are divided into three parts: occlusive Multi-PIE, occlusive LFW, and Multi-PIE after de-occlusion.

(a) Profile
(b) Ours
(c) [21]
(d) [12]*
Figure 6: Synthesis results on random block occluded LFW dataset. Note that all the models are trained solely on keypoint occluded Multi-PIE dataset, without retraining on randomly blocked datasets.
(a) Profile
(b) Ours
(c) [12]
(d) [21]
(e) [31]
(f) [6]
(g) [36]
(h) [9]
(i) GT
Figure 7: Comparison with state-of-the-art synthesis methods under two pose variations (the first two rows and the last row, respectively). The BoostGAN model is trained on the non-occlusive Multi-PIE dataset, demonstrating its generality for non-occlusive frontalization.

Face Synthesis on Occlusive Multi-PIE. For the two types of block occlusion, the synthesis results are shown in Figure 3 and Figure 4, respectively. Note that for both occlusion types, the model trained with keypoint occlusion is used. We can see that the proposed model generates photo-realistic and identity-preserving faces under occlusion, better than DR-GAN and TP-GAN. Because keypoint-region pixels are missing, the landmark-located patch network of TP-GAN [12] becomes unavailable, and only the global parametric model can be trained on occlusive Multi-PIE; for convenience, we mark it as TP-GAN*. In particular, lacking the supervision of ground truth frontal images, the DR-GAN model cannot fill the hole (block occlusion) in the facial image.

Notably, the synthesis results for randomly occluded profile images are shown in Figure 4. The generated images of BoostGAN are clearly still better than those of DR-GAN and TP-GAN*. The synthesis performance of both DR-GAN and TP-GAN* degrades under random occlusion. The results of TP-GAN* become more distorted because randomly occluded profile images may not have appeared during training. Additionally, under random occlusion, the occlusion position in DR-GAN's generated images also shifts toward keypoint positions seen during training. In contrast, the proposed BoostGAN still produces good synthetic images. Furthermore, as the pose angle increases, BoostGAN can still faithfully synthesize frontal-view images with clear and clean details.

Face Synthesis on Occlusive LFW. To demonstrate generalization ability in the wild, the LFW database is used to test the BoostGAN model trained solely on the keypoint-region occluded Multi-PIE database. As shown in Figure 5 and Figure 6, BoostGAN also obtains better visual results on LFW than the other methods, though the background color resembles that of Multi-PIE. This is understandable because the model is trained solely on Multi-PIE.

Face Synthesis on Multi-PIE after De-occlusion. We compare the synthesis results of BoostGAN against state-of-the-art face synthesis methods on faces after de-occlusion in Figure 7. The de-occlusive Multi-PIE here denotes the original non-occlusive Multi-PIE. The proposed BoostGAN remains effective even without occlusions. Note that BoostGAN shows performance competitive with TP-GAN [12] and clearly better than the other methods in both global structure and local texture. This demonstrates that although BoostGAN is designed for occlusion scenarios, it also works well in the non-occlusive scenario, further verifying its generality.

4.3 Face Recognition

DR-GAN [21] (k1) 67.38 60.68 55.83 47.25 39.34
DR-GAN [21] (k2) 73.24 65.37 59.90 51.18 42.24
DR-GAN [21] (k3) 66.93 60.60 56.54 49.70 39.77
DR-GAN [21] (k4) 71.33 63.72 57.59 50.10 40.87
DR-GAN [21] (mean) 69.72 62.59 57.47 49.56 40.55
TP-GAN [12]* (k1) 98.17 95.46 86.60 65.91 39.51
TP-GAN [12]* (k2) 99.27 97.25 88.37 66.03 40.82
TP-GAN [12]* (k3) 95.04 90.95 82.72 62.40 38.67
TP-GAN [12]* (k4) 97.80 93.66 83.84 62.27 36.76
TP-GAN [12]* (mean) 97.57 94.33 85.38 64.15 38.94
BoostGAN 99.48 97.75 91.55 72.76 48.44
Table 3: Rank-1 recognition rate (%) comparison on keypoint region occluded Multi-PIE; the columns correspond to five pose angles of increasing magnitude.
DR-GAN [21] (r1) 47.64 38.93 33.21 25.38 18.92
DR-GAN [21] (r2) 65.75 55.15 46.52 38.33 29.00
DR-GAN [21] (r3) 56.01 46.27 39.13 29.11 23.01
DR-GAN [21] (r4) 59.10 47.92 39.97 33.69 25.20
DR-GAN [21] (mean) 57.13 47.07 39.71 31.63 24.03
TP-GAN [12]* (r1) 89.81 83.88 74.94 54.83 31.34
TP-GAN [12]* (r2) 77.98 71.68 60.52 42.68 23.92
TP-GAN [12]* (r3) 79.12 72.45 60.00 41.37 24.11
TP-GAN [12]* (r4) 86.13 77.76 64.84 45.08 25.15
TP-GAN [12]* (mean) 83.26 76.44 65.08 45.99 26.13
BoostGAN 99.45 97.50 91.11 72.12 48.53
Table 4: Rank-1 recognition rate (%) comparison on random block occluded Multi-PIE; the columns correspond to five pose angles of increasing magnitude.

The proposed BoostGAN aims to recognize human faces with occlusions and pose variations. Therefore, to verify the identity-preserving capacity of different models, face recognition on benchmark datasets is studied. We first use the trained generative models to frontalize the profile face images in Multi-PIE and LFW, then evaluate face recognition or verification performance using Light CNN features extracted from the generated frontal facial images. As with the qualitative experiments, the quantitative experiments include three parts: face recognition on occlusive Multi-PIE, face verification on occlusive LFW, and face recognition on Multi-PIE after de-occlusion.
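The identification protocol (nearest gallery match on deep features) can be sketched generically. Cosine similarity is a common choice for Light CNN embeddings, though the exact metric used in the paper's evaluation is an assumption here:

```python
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification: each probe is matched to the gallery
    entry with the highest cosine similarity, and accuracy is the
    fraction of probes whose matched gallery identity is correct."""
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    sims = p @ g.T                                  # cosine similarity matrix
    pred = np.asarray(gallery_ids)[sims.argmax(axis=1)]
    return float((pred == np.asarray(probe_ids)).mean())
```

Any L2-normalizable embedding works here; in the paper's setting the features would come from Light CNN applied to the generated frontal faces.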

Face Recognition on Occlusive Multi-PIE. As before, the model is trained solely on the keypoint region occluded images. The rank-1 recognition rates for the two types of occlusion are shown in Table 3 and Table 4, respectively. k1–k4 denote the four keypoint mask regions, i.e., left eye, right eye, nose, and mouth; r1–r4 denote four random block occlusions.

It is obvious that BoostGAN outperforms DR-GAN and TP-GAN* for each type of block occlusion. As is common, recognition performance decreases as the pose angle increases. However, compared with the other methods, BoostGAN still shows state-of-the-art performance. In particular, comparing Table 4 with Table 3, we observe that the recognition rates of DR-GAN and TP-GAN* decrease dramatically when the occlusion type changes, while the proposed BoostGAN is almost unaffected.

Method ACC(%) AUC(%)
DR-GAN [21] (k1) 67.60 73.65
DR-GAN [21] (k2) 67.28 72.94
DR-GAN [21] (k3) 58.43 59.19
DR-GAN [21] (k4) 69.50 76.05
DR-GAN [21] (mean) 65.71 70.46
TP-GAN [12]* (k1) 86.52 92.81
TP-GAN [12]* (k2) 87.83 93.96
TP-GAN [12]* (k3) 85.17 91.63
TP-GAN [12]* (k4) 87.78 93.97
TP-GAN [12]* (mean) 86.83 93.09
BoostGAN 89.57 94.90
Table 5: Face verification accuracy (ACC) and area-under-curve (AUC) results on keypoint region occluded LFW.
Method ACC(%) AUC(%)
DR-GAN [21] (r1) 63.28 67.20
DR-GAN [21] (r2) 65.53 71.79
DR-GAN [21] (r3) 57.15 57.76
DR-GAN [21] (r4) 64.82 70.35
DR-GAN [21] (mean) 62.70 66.78
TP-GAN [12]* (r1) 82.75 89.86
TP-GAN [12]* (r2) 77.65 84.63
TP-GAN [12]* (r3) 81.07 88.24
TP-GAN [12]* (r4) 83.25 90.15
TP-GAN [12]* (mean) 81.18 88.22
BoostGAN 89.58 94.75
Table 6: Face verification accuracy (ACC) and area-under-curve (AUC) results on random block occluded LFW.

Face Verification on Occlusive LFW. Face verification performance in the wild, evaluated by verification accuracy (ACC) and area under the ROC curve (AUC), is provided in Table 5 and Table 6 for the two types of occlusion. From the results, we observe that DR-GAN is seriously flawed due to the model's weak specificity to occlusions. TP-GAN shows comparable results in Table 5 under keypoint occlusion, but significantly degraded performance in Table 6 under random occlusion. In contrast, as on the constrained Multi-PIE dataset, the proposed BoostGAN shows state-of-the-art performance under occlusions, with no performance degradation across occlusion types. We conclude that BoostGAN shows excellent generalization power for occluded profile face recognition in the wild.

Face Recognition on Multi-PIE after De-occlusion. After discussing recognition performance under occlusions, we further verify the effectiveness of the proposed method on profile but non-occluded faces. The rank-1 accuracies of different methods on the Multi-PIE database are presented in Table 7. Specifically, the compared methods include FIP+LDA [37], MVP+LDA [38], CPF [31], DR-GAN [21], [20], FF-GAN [32], and TP-GAN [12]. The results of Light CNN are used as the baseline. All methods follow the same experimental protocol for fair comparison. We observe that the proposed BoostGAN outperforms all other methods for clean profile face recognition; even TP-GAN [12], the state-of-the-art method, is inferior to ours.

Discussion. Our approach is an ensemble model that completes facial de-occlusion and frontalization simultaneously, aiming at face recognition under large pose variations and occlusions. Although BoostGAN succeeds on this previously uninvestigated synthesis problem under occlusion, it shares several characteristics with existing GAN variants. First, an encoder-decoder CNN architecture is used. Second, traditional pixel-level and feature-level loss functions are exploited. Third, the basic GAN components, a generator and a discriminator, are used. The key difference between ours and other GAN variants lies in boosting the complementary information outside the occlusions. Due to space limitations, more experiments on different occlusion sizes are reported in the supplementary material.

Method mean
FIP+LDA [37] 90.7 80.7 64.1 45.9 70.35
MVP+LDA [38] 92.8 83.7 72.9 60.1 77.38
CPF [31] 95.0 88.5 79.9 61.9 81.33
DR-GAN [21] 94.0 90.1 86.2 83.2 88.38
 [20] 95.0 91.3 88.0 85.8 90.03
FF-GAN [32] 94.6 92.5 89.7 85.2 90.50
TP-GAN [12] 98.68 98.06 95.38 87.72 94.96
Light CNN [29] 98.59 97.38 92.13 62.09 87.55
BoostGAN 99.88 99.19 96.84 87.52 95.86
Table 7: Rank-1 recognition rate (%) comparison on profile Multi-PIE without occlusion.

5 Conclusion

This paper has answered how to recognize faces when large pose variation and occlusion exist simultaneously. Specifically, we contribute a BoostGAN model for occluded profile face recognition in constrained and unconstrained settings. The proposed model follows an end-to-end training protocol, from a multi-occlusion frontal view generator to a multi-input boosting network, achieving coarse-to-fine de-occlusion and frontalization. The adversarial generator aims at coarse frontalization, de-occlusion, and identity preservation across large pose variations and occlusions. The boosting network targets photo-realistic, clean, and frontal faces by ensembling the complementary information of multiple inputs. Extensive experiments on benchmark datasets have shown the generality and superiority of the proposed BoostGAN over other state-of-the-art methods under occlusive and non-occlusive scenarios.


  • [1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In ICML, 2017.
  • [2] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher. Simultaneous structure and texture image inpainting. In CVPR, 2003.
  • [3] J. Cao, Y. Hu, B. Yu, R. He, and Z. Sun. Load balanced gans for multi-view face image synthesis. In arXiv:1802.07447, 2018.
  • [4] Z. Chen, S. Nie, T. Wu, and C. G. Healey. High resolution face completion with multiple controllable attributes via fully end-to-end progressive generative adversarial networks. arXiv:1801.07632, 2018.
  • [5] A. Fawzi, H. Samulowitz, D. Turaga, and P. Frossard. Image inpainting through neural networks hallucinations. In Image, Video, and Multidimensional Signal Processing Workshop, 2016.
  • [6] A. Ghodrati, J. Xu, M. Pedersoli, and T. Tuytelaars. Towards automatic image editing: Learning to see another you. In BMVC, 2016.
  • [7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  • [8] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. Multi-pie. Image & Vision Computing, 28(5):807 – 813, 2010.
  • [9] T. Hassner, S. Harel, E. Paz, and R. Enbar. Effective face frontalization in unconstrained images. In CVPR, 2015.
  • [10] Y. Hu, X. Wu, B. Yu, R. He, and Z. Sun. Pose-guided photorealistic face rotation. In CVPR, 2018.
  • [11] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, University of Massachusetts, 2007.
  • [12] R. Huang, S. Zhang, T. Li, and R. He. Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In ICCV, 2017.
  • [13] S. Iizuka, E. Simo-Serra, and H. Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics, 36(107):1–14, 2017.
  • [14] J. Johnson, A. Alahi, and F. Li. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
  • [15] M. Kan, S. Shan, and X. Chen. Multi-view deep network for cross-view classification. In CVPR, 2016.
  • [16] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018.
  • [17] A. Levin, A. Zomet, and Y. Weiss. Learning how to inpaint from global image statistics. In ICCV, 2003.
  • [18] Y. Li, S. Liu, J. Yang, and M. H. Yang. Generative face completion. In CVPR, 2017.
  • [19] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017.
  • [20] Q. T. Luan, X. Yin, and X. Liu. Representation learning by rotating your faces. IEEE Transactions on Pattern Analysis & Machine Intelligence, PP(99):1–1, 2018.
  • [21] T. Luan, X. Yin, and X. Liu. Disentangled representation learning gan for pose-invariant face recognition. In CVPR, 2017.
  • [22] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. arXiv:1802.05957, 2018.
  • [23] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
  • [24] C. Sagonas, Y. Panagakis, S. Zafeiriou, and M. Pantic. Robust statistical face frontalization. In ICCV, 2015.
  • [25] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015.
  • [26] Y. Sun, X. Wang, and X. Tang. Deep learning face representation from predicting 10,000 classes. In CVPR, 2014.
  • [27] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, 2018.
  • [28] Y. Wexler, E. Shechtman, and M. Irani. Space-time completion of video. IEEE Transactions on Pattern Analysis & Machine Intelligence, 29(3):463–476, 2007.
  • [29] X. Wu, R. He, Z. Sun, and T. Tan. A light cnn for deep face representation with noisy labels. IEEE Transactions on Information Forensics & Security, 13(11):2884 – 2896, 2018.
  • [30] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawajohnson, and M. N. Do. Semantic image inpainting with deep generative models. In CVPR, 2017.
  • [31] J. Yim, H. Jung, B. I. Yoo, C. Choi, D. Park, and J. Kim. Rotating your face using multi-task deep neural network. In CVPR, 2015.
  • [32] X. Yin, X. Yu, K. Sohn, X. Liu, and M. Chandraker. Towards large-pose face frontalization in the wild. In ICCV, 2017.
  • [33] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, 2016.
  • [34] Y. Zhao, W. Chen, J. Xing, X. Li, Z. Bessinger, F. Liu, W. Zuo, and R. Yang. Identity preserving face completion for large ocular region occlusion. In BMVC, 2018.
  • [35] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
  • [36] X. Zhu, Z. Lei, J. Yan, Y. Dong, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In CVPR, 2015.
  • [37] Z. Zhu, P. Luo, X. Wang, and X. Tang. Deep learning identity-preserving face space. In ICCV, 2013.
  • [38] Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In NIPS, 2014.