As one of the most commonly observed subjects in computer vision, face images are often captured by various types of imaging sensors in unconstrained wild scenarios, which introduce different types of distortions to the clean face images. When face images are captured behind a piece of glass, the reflection-contaminated images not only degrade human perception, but also hurt the performance of visual computing algorithms that focus on faces. Therefore, it is of great interest to remove the reflections and enhance the visibility of human faces behind the glass.
Different from general objects or scenes, faces carry specific priors for human observers, so even a slight (transmitted) reflection distortion may significantly annoy human perception. When reflections become stronger (non-transmitted), machine vision algorithms may fail due to the loss or distortion of important facial features. Removing non-transmitted reflections and recovering important features for machine vision methods thus pose unique challenges for face reflection removal.
Existing reflection removal methods [17, 32, 30, 6] can be directly applied to face images with reflections. However, because they ignore the specific facial priors and the non-transmitted reflections, artifacts largely remain on the recovered ‘reflection-free’ face image (e.g., the result obtained by CEILNet  in Figure 1). Thus, methods designed for generic reflections in arbitrary scenes cannot deal with these challenges. To recover the facial information largely occluded by non-transmitted reflections, it is also straightforward to integrate specific facial priors into single image inpainting methods (e.g., [24, 38]). However, solely relying on representations learned from the training data to inpaint the reflection-contaminated region may not faithfully retain the lost face identity features.
To address the above challenges, we first explore the complementary advantages of image inpainting and reflection removal to recover the facial information occluded by non-transmitted reflections. Then, to recover important facial features, we employ a guided removal framework with particular consideration of feature-level similarity and specific facial priors. Instead of only predicting the missing information from learned representations, the guided framework [31, 16] can provide more accurate identity details of the human face thanks to the additional guidance: another face image from an image sequence with continuous face movement, which provides a different facial pose or different reflection properties. On the other hand, since face similarity is compared in a compact feature space and the pixel-level similarity adopted by previous reflection removal methods can hardly guarantee identity consistency, we also embed feature-level similarity and other specific facial priors into the estimation process.
Our complete framework is shown in Figure 2, which includes four components: the siamese estimation network to roughly estimate the missing facial information, the face alignment network to compensate the different motions between two face images, the prior estimation network to embed the facial priors into the whole estimation process, and the local refinement network to refine the local details. Our major contributions are summarized as follows:
We propose the first reflection removal framework that targets face images, improving both human perception and machine vision algorithms.
We propose an effective approach to remove non-transmitted reflections by combining the merits of image inpainting and reflection removal.
We recover important facial features by employing a guided removal framework with particular consideration of feature-level similarity and specific facial priors.
We build the first face reflection image dataset to facilitate the research of reflection removal in a specific domain and accordingly perform quantitative and qualitative evaluation.
2 Related work
Previous works on reflection removal can be roughly classified into two categories. The first category solves the problem with non-learning-based methods. For example, Li et al.  and Nikolas et al.  made use of the different blur levels of the background and reflection layers. Shih et al.  used a GMM patch prior to remove reflections with visible ghosting effects. The handcrafted priors adopted by these methods are based on observations of special properties of the background and reflection (e.g., different blur levels [32, 17]), which are often violated in general scenes, especially when these properties are weakly observed.
The deep learning framework also benefits reflection removal. For example, Fan et al.  proposed a two-stage deep learning approach to learn the mapping between the mixture images and the estimated clean images. Recently, Wan et al.  also proposed a concurrent model to better preserve the background details. The method proposed by Zhang et al.  first utilized a generative model to better learn the mappings from the mixture image to the clean images. However, existing methods are all designed for general scenes and have difficulty preserving facial details in the face image reflection removal problem.
Face image enhancement.
Numerous methods have been proposed during the past decades to solve different face image enhancement problems, including face hallucination , face deblurring , and face inpainting . Recently, end-to-end deep learning frameworks have been introduced to solve these problems in a data-driven manner. For example, Li et al.  proposed a method based on a generative model to solve the face inpainting problem. Chen et al.  made full use of the geometry prior to solve the face super-resolution problem. Shen et al.  also proposed a method to solve the face deblurring problem by using face semantic priors. However, the face image reflection removal problem has never been explicitly modeled and solved.
3 Proposed method
In this section, we describe the dataset, the design methodology of the proposed reflection removal network, the optimization process, and the training details.
3.1 Dataset preparation
The data-driven approaches need a large-scale dataset to learn the inherent reflection properties . Previous methods [30, 6, 39] obtain the training dataset by using the following image formation model:
$I = \alpha B + \beta R, \quad (1)$
where $\alpha$ and $\beta$ are the mixing coefficients and $I$, $B$, and $R$ are the mixture image, the background image, and the reflection image, respectively. The background images can be obtained from generic image datasets (e.g., PASCAL  or COCO ) only when targeting reflections in arbitrary scenes [30, 6, 39]. Accordingly, existing benchmark reflection removal datasets (e.g., SIR ) are not suitable for our task due to their scenery diversity. Although many face image datasets have been proposed (e.g., CELEBA  and the CASIA WebFace dataset ), they are also not applicable to our problem since they mostly consider a fixed facial pose. To facilitate the training and evaluation of our approach, we build a large-scale face image training dataset collected from online resources and a corresponding evaluation dataset taken in the real world.
Our background face images in the training dataset are collected from Youtube by cropping face images from several consecutive video frames. The reflection images are taken by ourselves based on the method proposed in . Then, we generate the mixture images by using Equation (1). To focus on the vital facial components, we further adopt the MTCNN  to crop the face portion. We show samples from our training dataset in Figure 3.
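As an illustration, the synthesis step following the linear formation model can be sketched as below. This is a minimal sketch, not the exact pipeline of the paper: the coefficient values, the Gaussian blur applied to the reflection layer, and the helper name `synthesize_mixture` are all assumptions made for illustration.

```python
import numpy as np

def synthesize_mixture(background, reflection, alpha=0.8, beta=0.4, blur_sigma=2.0):
    """Blend a face image with a (blurred) reflection layer, following
    the linear formation model I = alpha * B + beta * R."""
    # Blur the reflection layer with a separable Gaussian to mimic the
    # defocus commonly observed in real reflections.
    k = int(6 * blur_sigma + 1) | 1                 # odd kernel size
    x = np.arange(k) - k // 2
    g = np.exp(-x**2 / (2 * blur_sigma**2))
    g /= g.sum()
    r = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, reflection)
    r = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, r)
    return np.clip(alpha * background + beta * r, 0.0, 1.0)

B = np.full((64, 64), 0.6)                        # toy "background" face image
R = np.zeros((64, 64)); R[20:40, 20:40] = 1.0     # toy reflection layer
I = synthesize_mixture(B, R)
```

In practice the background and reflection would be the cropped video-frame faces and captured reflection images described above, rather than these toy arrays.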
Our training dataset has two major characteristics: 1) Diversity: the face images in the training dataset cover different races, expressions, and poses; 2) Scale: the training dataset has face images from a large number of people to meet the requirements of data-driven methods, and each person is labeled with a corresponding person ID.
The images in the evaluation dataset are all taken in the real world using different capturing devices (e.g., DSLR cameras and mobile phones) with diverse scene settings. Besides the face images occluded by reflections, we also capture corresponding ‘ground truth’ images with the same identity information for further evaluation. The evaluation dataset contains images from different persons.
3.2 Network architecture
As shown in Figure 2, our network includes four parts that perform missing facial information estimation, motion compensation, prior estimation, and local refinement, respectively. Except for the face alignment network, which is largely based on an existing optical flow estimation network , the other three networks all share a similar mirror-like framework with an encoder that captures context information by contracting the feature channels step by step and a decoder that produces the final results.
3.2.1 Siamese estimation networks
Due to the occlusions caused by the reflections, it is non-trivial to estimate facial priors (e.g., facial landmark positions and parsing) from the reflection-contaminated input images directly. We first use the siamese estimation networks with shared weights to roughly estimate the coarse face images from the input image pair as:
$\hat{B}_m = G(I_m), \quad \hat{B}_g = G(I_g), \quad (2)$
where $G$ denotes one branch of the siamese estimation networks, $I_m$ and $I_g$ are the main image and guided image with different reflections or varying facial properties (e.g., pose and illumination), and $\hat{B}_m$ and $\hat{B}_g$ are the roughly recovered images corresponding to $I_m$ and $I_g$, respectively.
Local context loss.
Due to the regional property of reflections , the missing information occluded by reflections cannot be well reconstructed by solely minimizing the global loss on the whole image. To solve this problem, we adopt the local context loss widely used by the image inpainting methods  as:
$\mathcal{L}_{lc} = \left\| M \odot (G(I) - B) \right\|_2^2, \quad (3)$
where $I$ is the input to the network, $B$ is its corresponding ground truth, $\odot$ is the element-wise product operation, and $M$ denotes the binary mask corresponding to the non-transmitted reflections. When the reflections occupy the whole image plane, Equation (3) degrades to the common global loss.
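The masked loss can be sketched as follows; the squared-error form and the normalization by mask area are assumptions for illustration, since the exact norm is not restated here.

```python
import numpy as np

def local_context_loss(pred, gt, mask):
    """Masked reconstruction loss: only pixels inside the binary
    reflection mask contribute to the error."""
    diff = mask * (pred - gt)
    return float(np.sum(diff ** 2) / max(mask.sum(), 1.0))

gt = np.ones((8, 8))
pred = np.ones((8, 8)); pred[:4, :] = 0.0      # top half badly recovered
mask = np.zeros((8, 8)); mask[:4, :] = 1.0     # reflection covers the top half
full = np.ones((8, 8))                         # mask covering the whole image
```

With the `full` mask, the loss degrades to the ordinary global mean squared error over the whole image, matching the degenerate case noted above.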
To roughly estimate the missing information occluded by the non-transmitted reflections, we employ the conditional Wasserstein GAN loss as follows:
$\mathcal{L}_{adv} = \mathbb{E}_{B \sim \mathbb{P}_r}[D(B)] - \mathbb{E}_{I}[D(G(I))], \quad (4)$
where $D$ is the discriminator network, taken from the set $\mathcal{D}$ of 1-Lipschitz functions, and $\mathbb{P}_r$ is the real data distribution. Our discriminator takes an input image of fixed size.
Combining the above terms, the loss function for the siamese estimation networks becomes:
$\mathcal{L}_{sen} = \lambda_{p}\mathcal{L}_{p} + \lambda_{lc}\mathcal{L}_{lc} + \lambda_{adv}\mathcal{L}_{adv}, \quad (5)$
where $\mathcal{L}_{p}$ is the classical pixel-wise loss and $\lambda_{p}$, $\lambda_{lc}$, and $\lambda_{adv}$ are weights that balance the different terms.
The siamese estimation networks can be regarded as a single-image approach to this problem. As the results in Figure 4 show, using such networks alone is not sufficient to remove the reflections on faces effectively. However, as the face parsing results in Figure 4 show, estimating some key facial components at this stage alleviates the difficulty of estimating the facial priors in the next stage.
3.2.2 Face parsing network
To better recover the important facial features, we embed the facial prior into our estimation process through a face parsing network. We use the U-Net as the backbone but keep only the vital layers for the efficiency of the whole network, and estimate the parsing maps of facial components as follows:
$M = \mathcal{P}(I_p), \quad (6)$
where $\mathcal{P}$ is the face parsing network, $I_p$ denotes the input images of this network, and $M$ denotes the multi-channel semantic parsing map of the important facial components, as shown in Figure 2.
3.2.3 Face alignment network
Due to the misalignment between the input image pair, their motion or pose inconsistency may increase the difficulty of recovering image details . To compensate for the motion inconsistency, we adopt the FlowNetS model  as the backbone of the face alignment network. It takes the two roughly recovered images from the siamese estimation networks as input and estimates the optical flow from the guided image to the main image, as shown in Figure 2:
$w = \mathcal{F}(\hat{B}_m, \hat{B}_g), \quad (7)$
where $\mathcal{F}$ denotes the face alignment network, $\hat{B}_m$ and $\hat{B}_g$ are the roughly recovered main and guided images, and $w$ denotes the estimated optical flow field. With the flow field $w$, the guided image can be warped to the main image by using the spatial transformer network  as:
$\hat{B}_{g \to m}(p) = \sum_{q \in \mathcal{N}(p')} \hat{B}_g(q)\,(1 - |x_{p'} - x_q|)(1 - |y_{p'} - y_q|), \quad (8)$
where $p' = p + w(p)$, $x_{p'}$ and $y_{p'}$ denote the predicted $x$ and $y$ coordinates for the pixel $p$, and $\mathcal{N}(p')$ represents the four-pixel neighbors of $p'$.
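The bilinear warping step can be sketched with NumPy as follows. This is a single-channel toy version: the flow layout `(dy, dx)` and the clipping at image borders are assumptions for illustration.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` with a per-pixel `flow` (H x W x 2, ordered
    (dy, dx)) using bilinear sampling over the four-pixel neighborhood."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    py = np.clip(ys + flow[..., 0], 0, H - 1)     # predicted sample coords
    px = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(py).astype(int), np.floor(px).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = py - y0, px - x0
    # Weighted sum over the four neighbors, weights (1 - |dy|)(1 - |dx|).
    return ((1 - wy) * (1 - wx) * image[y0, x0] +
            (1 - wy) * wx * image[y0, x1] +
            wy * (1 - wx) * image[y1, x0] +
            wy * wx * image[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
shift = np.zeros((4, 4, 2)); shift[..., 1] = 1.0  # sample one pixel to the right
out = warp(img, shift)
```

A differentiable version of exactly this sampling is what the spatial transformer network provides inside the training graph.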
Landmark loss and face parsing loss.
The classical FlowNetS model  is trained with a supervised strategy. However, due to the lack of ground truth optical flow, supervised training is not applicable in our setting. To solve this problem, an unsupervised training strategy is proposed in  that minimizes the MSE loss between a warped image and another non-warped image. However, since the roughly estimated results may have different colors, the MSE loss is not a valid way to measure their difference. Thus, we propose to use the facial landmarks and the face parsing results to facilitate the training process.
To align $\hat{B}_m$ and $\hat{B}_g$, the landmarks of $\hat{B}_m$ and those of the warped image $\hat{B}_{g \to m}$ should be close to each other. We first extract the facial landmarks corresponding to $\hat{B}_m$ and $\hat{B}_{g \to m}$ and then define the landmark loss as follows:
$\mathcal{L}_{lm} = \left\| S_m - S_{g \to m} \right\|_2^2, \quad (9)$
where $S_m$ and $S_{g \to m}$ are the facial landmarks corresponding to $\hat{B}_m$ and $\hat{B}_{g \to m}$, respectively.
For the face parsing loss, we first feed $\hat{B}_m$ and $\hat{B}_{g \to m}$ to the face parsing network to obtain the corresponding parsing maps, and the face parsing loss is then defined as:
$\mathcal{L}_{fp} = \left\| M_m - M_{g \to m} \right\|_2^2, \quad (10)$
where $M_m$ and $M_{g \to m}$ are the parsing maps of $\hat{B}_m$ and $\hat{B}_{g \to m}$, respectively. By combining the above two terms, the loss function for the face alignment network becomes:
$\mathcal{L}_{fa} = \mathcal{L}_{lm} + \lambda_{fp}\mathcal{L}_{fp}. \quad (11)$
3.2.4 Local refinement network
To suppress the artifacts in the output of the siamese estimation networks (the second and third images in Figure 4) and better preserve the facial components, the main image after the siamese estimation networks, the warped guided image, and the probability maps of facial labels are concatenated along the channel dimension as the input to the local refinement network:
$\hat{B} = \mathcal{R}\big([\hat{B}_m, \hat{B}_{g \to m}, M_m]\big), \quad (12)$
where $\mathcal{R}$ is the local refinement network, $\hat{B}$ denotes the final estimated result, $\hat{B}_m$ is the main image after the siamese estimation networks, $\hat{B}_{g \to m}$ is the warped guided image, and $M_m$ is the probability map of facial labels corresponding to $\hat{B}_m$.
Statistic identity loss.
Existing reflection removal methods always aim at estimating recovered images with higher PSNR  and/or SSIM  values through different pixel-wise loss functions. Though these pixel-wise loss functions are simple to calculate, the recovered face images they produce may still differ greatly from the ground truth, since face similarity is more properly defined in a compact feature space than in the image pixel space . Two perceptually indistinguishable face images with high SSIM values can still have quite obvious feature-level differences :
$\mathcal{L}_{per} = \sum_{l} \left\| \phi_l(\hat{B}) - \phi_l(B) \right\|_2^2, \quad (13)$
where $\phi_l$ denotes the $l$-th layer features from a pre-trained loss network (e.g., VGG16 ) and $\hat{B}$ and $B$ denote the estimated images and targets, respectively.
However, Equation (13) only captures the first-order statistics of the feature-level differences. Previous methods [15, 12] have shown the important role of higher-order statistics in different tasks. Based on the perceptual loss used by previous methods [25, 4] in Equation (13), we propose a statistic identity loss that measures feature-level similarity with the maximum mean discrepancy (MMD) in the local refinement network. As a distribution divergence measurement derived from kernel embedding, MMD can measure the similarity of two distributions based on all-order moments, as used in the two-sample testing problem. Given two images $X$ and $Y$ with feature sets $\{x_i\}_{i=1}^{n}$ and $\{y_j\}_{j=1}^{m}$, MMD is defined as:
$\mathrm{MMD}(X, Y) = \Big\| \frac{1}{n}\sum_{i=1}^{n}\phi(x_i) - \frac{1}{m}\sum_{j=1}^{m}\phi(y_j) \Big\|_{\mathcal{H}}, \quad (14)$
where $\mathcal{H}$ denotes the Hilbert space and the kernel $k$ is defined as:
$k(x, y) = \langle \phi(x), \phi(y) \rangle. \quad (15)$
Here, $\phi$ is a feature map, and $k$ is the kernel function induced by $\phi$. Combining these, our statistic identity loss becomes:
$\mathcal{L}_{id} = \frac{1}{n^2}\mathbf{1}_n^{\top} K_{XX} \mathbf{1}_n - \frac{2}{nm}\mathbf{1}_n^{\top} K_{XY} \mathbf{1}_m + \frac{1}{m^2}\mathbf{1}_m^{\top} K_{YY} \mathbf{1}_m, \quad (16)$
where $\mathbf{1}_n$ and $\mathbf{1}_m$ are all-one vectors of sizes $n$ and $m$, respectively, and $K_{XX}$, $K_{XY}$, and $K_{YY}$ are the kernel matrices computed over the empirical measures  of $X$ and $Y$, respectively.
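The empirical MMD estimator behind the statistic identity loss can be sketched as below, with a Gaussian kernel as one possible choice of $k$; the kernel, its bandwidth, and the random toy features are assumptions for illustration (in the paper the features come from a face recognition network).

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all row pairs."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Squared MMD between two feature sets via the empirical kernel-matrix
    estimator (the all-one-vector quadratic forms written as sums)."""
    n, m = len(X), len(Y)
    return (gaussian_kernel(X, X, sigma).sum() / n**2
            - 2 * gaussian_kernel(X, Y, sigma).sum() / (n * m)
            + gaussian_kernel(Y, Y, sigma).sum() / m**2)

rng = np.random.default_rng(0)
A  = rng.normal(0.0, 1.0, (200, 8))
B2 = rng.normal(0.0, 1.0, (200, 8))   # same distribution as A
C  = rng.normal(3.0, 1.0, (200, 8))   # shifted distribution
```

Because the estimator compares kernel means rather than individual pixels, two feature sets from the same distribution yield a near-zero value, while a distribution shift yields a large one.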
Local structural facial loss.
Since human vision is more sensitive to the key components (e.g., eyes, lips, and mouths) , instead of solely minimizing the global loss on the whole face image, we use a local structural facial loss similar to  to better preserve the facial information:
$\mathcal{L}_{ls} = \sum_{c} \left\| M_c \odot (\hat{B} - B) \right\|_2^2, \quad (17)$
where $M_c$ denotes the binary mask of facial component $c$, obtained by binarizing the parsing maps. We impose the local structural facial loss on the eyebrows, eyes, nose, lips, and teeth.
The loss function for the local refinement network then becomes:
$\mathcal{L}_{lrn} = \lambda_{per}\mathcal{L}_{per} + \lambda_{id}\mathcal{L}_{id} + \lambda_{ls}\mathcal{L}_{ls}, \quad (18)$
where $\lambda_{per}$, $\lambda_{id}$, and $\lambda_{ls}$ are weights that balance the different terms.
3.2.5 Overall loss function
3.3 Implementation and training details
We implement our model in PyTorch. The complete training process of our network is divided into two stages: 1) we train the siamese estimation networks, the face parsing network, and the face alignment network separately to convergence; 2) we fix the face parsing network, combine the others with the local refinement network, and fine-tune the entire network again, which gives the components more opportunity to cooperate. The landmarks in Section 3.2.3 are obtained by a pretrained landmark estimation network based on MobileNet . We adopt the pretrained LightCNN  as the face recognition model used in the statistic identity loss of Section 3.2.4. The learning rate for whole-network training is fixed for the first 30 epochs and then decreased.
4 Experiments
We first compare the visual quality and quantitative errors of our method against state-of-the-art reflection removal approaches. We also conduct a user study to investigate how each method improves human perception. Then, further experiments on face identity recognition are conducted to investigate whether our proposed method benefits high-level face recognition algorithms. Finally, we conduct an ablation study to verify the effectiveness of each component in our network.
4.1 Comparison with the state-of-the-arts
We compare our method with state-of-the-art reflection removal methods, including ZN18 , CRRN , CEILNet , NR17 , and WS16 . For fair comparison, we use the released codes of the above methods and train all models with the same training dataset for the data-driven methods (CRRN , CEILNet , and ZN18 ).
Visual quality comparison.
We first show examples of recovered reflection-free face images from our method and the other five methods in Figure 5 to compare their visual quality. In these examples, our method removes reflections more effectively and recovers the details of the face images more clearly. The non-learning-based methods (NR17  and WS16 ) cannot remove the non-transmitted reflections effectively and also degrade the visual quality of the regions not covered by reflections. Though the data-driven methods perform much better than the non-learning-based methods, their final estimates still contain visible artifacts and some key face components are wrongly estimated (e.g., ZN18  in the second example). CRRN  and CEILNet  cause serious color degradation in the estimated results, mainly due to the linear dependency between the mixture image and the background image in their image formation models.
Since we consider image capturing in a dynamic scenario, it is difficult to obtain well-aligned ground truth as in previous methods [29, 35]. Thus, the widely used error metrics (e.g., PSNR and SSIM) are not suitable for our evaluation due to the lack of well-aligned ground truth. Instead, we evaluate the performance in the feature domain by using high-level facial information. We use the OpenFace toolbox  to compute the identity distance between the ‘ground truth’ face images and the different results, with the identity error defined as $E_{id} = \left\| \mathcal{C}(B) - \mathcal{C}(\hat{B}) \right\|_2$, where $\mathcal{C}$ denotes the face recognition model used in the evaluation and $B$ and $\hat{B}$ are the ‘ground truth’ and recovered images.
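The identity-error computation can be sketched as follows, with unit-normalized embeddings standing in for the face recognition features; the normalization step and the toy 3-D embeddings are assumptions for illustration.

```python
import numpy as np

def identity_error(feat_gt, feat_est):
    """L2 distance between identity embeddings of the ground-truth
    and recovered faces (embeddings unit-normalized first)."""
    a = feat_gt / np.linalg.norm(feat_gt)
    b = feat_est / np.linalg.norm(feat_est)
    return float(np.linalg.norm(a - b))

ref  = np.array([1.0, 0.0, 0.0])   # toy ground-truth embedding
good = np.array([0.9, 0.1, 0.0])   # embedding close to the ground truth
bad  = np.array([0.0, 1.0, 0.0])   # embedding of a different identity
```

A lower identity error thus indicates that the recovered face lies closer to the ground-truth identity in the embedding space.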
From the results shown in Figure 6, our method achieves the best identity scores, which demonstrates that it preserves the face identity well. The two non-learning methods WS16  and NR17  achieve even worse results than the baseline, consistent with the observations in the visual quality comparison. The results of the deep-learning-based methods are much better than those of the non-learning-based methods. However, since CEILNet cannot recover the color information well, its performance does not beat CRRN  and ZN18 . CRRN  and ZN18  achieve similar performance; however, due to wrongly recovered face components, their average scores are still worse than those of our proposed method.
Human perception evaluations.
To investigate how each method improves human perception of the reflection-removed results, we conduct a further experiment based on user study scores. We invite 30 participants to judge all images in our evaluation dataset. The participants are required to rank the different results. From the results shown in Figure 6, most of our images are given the first rank, which is the best among all methods. The two non-learning-based methods generally fail on almost all images. The three learning-based methods (CEILNet , CRRN , ZN18 ) perform much better than the non-learning-based methods. The ranking of how each method performs in the human perception evaluation (the higher the rank-1 score, the better) is generally consistent with the quantitative comparison (the lower the error, the better).
Face recognition evaluations.
The identity errors and human perception study in Figure 6 partly reveal the network’s ability to preserve face identity information. To fully investigate whether our method can improve the accuracy of machine vision algorithms, we evaluate our estimated results on the task of face recognition. Given a probe face example, the goal of recognition is to find an example from the gallery set that belongs to the same identity . We randomly select 575 identities from the LFW dataset  and merge them with the identities in our evaluation dataset to form an evaluation set with roughly 600 identities. Each identity has roughly the same number of images in each set.
We use the Top-$k$ recognition accuracy at several ranks to evaluate the performance. From the results shown in Table 1, our method achieves the highest recognition accuracy among all methods. The non-learning-based methods cannot effectively increase the recognition rate, and their results are even lower than the baseline. The learning-based methods achieve much better results; however, the artifacts observed in Figure 5 degrade their performance. ZN18  achieves similar performance at the top rank, but our higher scores at the other ranks demonstrate the effectiveness of our proposed method.
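The Top-$k$ accuracy used here can be computed as sketched below (plain Python; the toy identities and distances are made up for illustration):

```python
def top_k_accuracy(probe_ids, gallery_ids, distances, k):
    """Fraction of probes whose true identity appears among the k
    gallery entries with the smallest feature distance."""
    hits = 0
    for p, row in zip(probe_ids, distances):
        ranked = sorted(range(len(gallery_ids)), key=lambda j: row[j])
        if p in {gallery_ids[j] for j in ranked[:k]}:
            hits += 1
    return hits / len(probe_ids)

gallery = ["alice", "bob", "carol"]
probes = ["alice", "bob"]
# Toy distances: the first probe matches "alice" best; the second
# probe's true match ("bob") is only ranked second.
dist = [[0.1, 0.9, 0.8],
        [0.2, 0.3, 0.9]]
```

Reporting the accuracy at increasing $k$ shows how quickly the correct identity enters the candidate list as the rank budget grows.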
4.2 Network analysis
Table 2 compares the identity errors of the complete model (Ours) against the ablated variants: SEN only, w/o AL and CL, w/o IL, and w/o PE.
Our framework consists of four parts, i.e., the siamese estimation network (SEN), the face alignment network (FAN), the face parsing network (FPN), and the local refinement network (LRN). In this section, we conduct several experiments to further analyze the contributions of the guided reflection removal framework and the influence of the different loss functions.
The first experiment shows the effectiveness of the guided reflection removal framework. We conduct it by keeping only one branch of the SEN. As discussed in Section 3.2.1, this setting can be regarded as the straightforward single-image approach to the problem, which still leaves obvious artifacts in the final estimated results, as shown in Figure 7. The identity error in Table 2 confirms this observation: the single-branch setting performs relatively poorly compared with the complete model. Then, another experiment verifies the concepts leveraged from image inpainting by removing the local context loss and the adversarial loss. These two loss functions aim at estimating the missing information occluded by the non-transmitted reflections. As the results in Figure 7 show, without them the network fails to estimate the facial components occluded by the reflections.
Another experiment evaluates the contribution of the identity loss in the local refinement network; we train a network with these loss terms removed. From the results in Table 2 and Figure 7, though the key components are successfully recovered, the color inconsistency between the regions with and without reflections is very obvious. The larger face identity distance also reveals this weakness. Then, we remove the prior embedding mechanism in the LRN, where the input to the LRN reduces to a tensor with fewer channels. From the results shown in Figure 7 and Table 2, the performance is similar to that of the model without the identity loss.
5 Conclusion
In this paper, we propose and solve the face image reflection removal problem. Different from general scenes, the special properties of face images pose challenges for face reflection removal, namely non-transmitted reflections and face identity feature recovery. To address these issues, we first leverage ideas from image inpainting to recover the key facial components occluded by reflections, and we then utilize the guided removal framework, prior embedding, and a statistic identity loss to better recover the important facial features. Trained on the newly collected training dataset, our framework achieves better performance than existing methods on the proposed evaluation dataset.
The performance of our method may drop when the reflections on faces become non-uniform, as shown in Figure 8. However, even in this situation, our method still outperforms other methods. Another limitation of our work lies in the evaluation dataset, as it is difficult to obtain face images with many different identities. Though we have tried our best to cover realistic scenarios, the diversity of our evaluation dataset is still limited. Our current dataset is suitable for a proof-of-concept purpose, and we are working on increasing its diversity by including more identities and more challenging scenes (e.g., images captured by surveillance cameras).
-  B. Amos, B. Ludwiczuk, M. Satyanarayanan, et al. Openface: A general-purpose face recognition library with mobile applications. CMU School of Computer Science, 2016.
-  N. Arvanitopoulos, R. Achanta, and S. Susstrunk. Single image reflection suppression. In Proc. CVPR, 2017.
-  Y. Chen, Y. Tai, X. Liu, C. Shen, and J. Yang. Fsrnet: End-to-end learning face super-resolution with facial priors. In Proc. CVPR, 2018.
-  J. Deng, S. Cheng, N. Xue, Y. Zhou, and S. Zafeiriou. Uv-gan: Adversarial facial uv map completion for pose-invariant face recognition. arXiv preprint arXiv:1712.04695, 2017.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. Springer IJCV, 88:303–338, 2010.
-  Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. Proc. ICCV, 2017.
-  P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. Van der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015.
-  A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Proc. NIPS, 2007.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Proc. NIPS, 2015.
-  J. Y. Jason, A. W. Harley, and K. G. Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In Proc. ECCV, 2016.
-  P. Koniusz, F. Yan, P.-H. Gosselin, and K. Mikolajczyk. Higher-order occurrence pooling for bags-of-words: Visual concept detection. IEEE TPAMI, 39(2):313–326, 2017.
-  E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, and G. Hua. Labeled faces in the wild: A survey. In Proc. NIPS. 2016.
-  H. Li, S. J. Pan, S. Wang, and A. C. Kot. Domain generalization with adversarial feature learning. In Proc. CVPR, 2018.
-  P. Li, J. Xie, Q. Wang, and W. Zuo. Is second-order information helpful for large-scale visual recognition? In Proc. ICCV, 2017.
-  X. Li, M. Liu, Y. Ye, W. Zuo, L. Lin, and R. Yang. Learning warped guidance for blind face restoration. Proc. ECCV, 2018.
-  Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In Proc. CVPR, 2014.
-  Y. Li, S. Liu, J. Yang, and M.-H. Yang. Generative face completion. In Proc. CVPR, 2017.
-  D. Lin and X. Tang. Quality-driven face occlusion detection and recovery. In Proc. CVPR, 2007.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In Proc. ECCV, 2014.
-  C. Liu, H.-Y. Shum, and W. T. Freeman. Face hallucination: Theory and practice. Springer IJCV, 75(1):115–134, 2007.
-  Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proc. ICCV, 2015.
-  J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring face images with exemplars. In Proc. ECCV, 2014.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proc. CVPR, 2016.
-  Z. Shen, W.-S. Lai, T. Xu, J. Kautz, and M.-H. Yang. Deep semantic face deblurring. arXiv preprint arXiv:1803.03345, 2018.
-  Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In Proc. CVPR, 2015.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, W. Gao, and A. C. Kot. Region-aware reflection removal with unified content and gradient priors. IEEE TIP, 2018.
-  R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Benchmarking single-image reflection removal algorithms. In Proc. ICCV, 2017.
-  R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. CRRN: Multi-scale guided concurrent reflection removal network. In Proc. CVPR, 2018.
-  R. Wan, B. Shi, A. Tan, and A. C. Kot. Sparsity based reflection removal using external patch search. In Proc. ICME.
-  R. Wan, B. Shi, A. H. Tan, and A. C. Kot. Depth of field guided reflection removal. In Proc. ICIP, 2016.
-  X. Wu, R. He, Z. Sun, and T. Tan. A light cnn for deep face representation with noisy labels. IEEE TIFS, 13(11):2884–2896, 2018.
-  T. Xue, B. Chen, J. Wu, D. Wei, and W. T. Freeman. Video enhancement with task-oriented flow. arXiv preprint arXiv:1711.09078, 2017.
-  T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. A computational approach for obstruction-free photography. ACM TOG, 34(4):79, 2015.
-  D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
-  K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE SPL, 23(10):1499–1503, 2016.
-  S. Zhang, R. He, Z. Sun, and T. Tan. Demeshnet: Blind face inpainting for deep meshface verification. IEEE TIFS, 13(3):637–647, 2018.
-  X. Zhang, N. Ren, and Q. Chen. Single image reflection separation with perceptual losses. In Proc. CVPR, 2018.