I Introduction
In recent years, human face recognition techniques have demonstrated promising performance in many large-scale practical applications. However, in real-life images and videos, various occlusions can often be observed on human faces, such as sunglasses, masks and hands. Occlusion, as a type of spatially contiguous and additive gross noise, severely contaminates discriminative features of human faces and harms the performance of traditional face recognition approaches that are not robust to such noise. To address this issue, a promising solution is to automatically remove facial occlusion before recognizing the faces [1, 2, 3, 4, 5]. However, most existing methods can only remove facial occlusions well under rather constrained environments, e.g., when faces come from a predefined closed set or there is only a single type of occlusion. Thus those methods are not applicable to complex real scenarios like surveillance.
In this work, we aim to address this challenging problem – face de-occlusion in the wild, where the faces can come from an open test set and the occlusions can be of various types (see Fig. 1). To solve this problem, we propose a novel face de-occlusion framework built upon our robust LSTM-Autoencoders (RLA). In real scenarios, facial occlusion often presents rather complex patterns, and it is difficult to recover clean faces from occluded ones in a single step. Different from existing methods pursuing a one-stop solution to de-occlusion, the proposed RLA model removes occlusion in several successive steps to restore occluded face parts progressively. Each step can benefit from the recovered results provided by the previous step. More concretely, the RLA model works as follows.
Given a new face image with occlusion, RLA first employs a multi-scale spatial LSTM encoder to read patches of the image sequentially so as to alleviate contamination from occlusion in the encoding process. RLA produces an occlusion-robust latent representation of the face because the occlusion only influences some of the patches. Then, a dual-channel LSTM decoder takes this representation as input and jointly reconstructs the occlusion-free face and detects the occluded regions from coarse to fine. The dual-channel LSTM decoder contains two complementary sub-networks, i.e., a face reconstruction network and an occlusion detection network. These two networks collaborate with each other to localize and remove the facial occlusion. In particular, the hidden units of the reconstruction network feed forward the decoding information of face reconstruction at each step to the detection network to help occlusion localization, and the detection network back-propagates the occlusion detection information into the reconstruction network to make it focus on reconstructing occluded parts. Finally, the reconstructed face is integrated with the occluded face in an occlusion-aware manner to produce the recovered occlusion-free face. We train the overall RLA in an end-to-end way by minimizing the mean square error (MSE) between each pair of recovered face and ground truth face. We observe that purely minimizing MSE usually over-smoothes the restored facial parts and leads to loss of the discriminative features for recognizing person identity, which would hurt the performance of face recognition. Therefore, in order to preserve the identity information of recovered faces, we introduce an identity-based supervised CNN to encourage RLA to preserve discriminative details during face recovery. However, this kind of supervised CNN results in severe artifacts in the recovered faces.
We thus further introduce an adversarial discriminator [6], which learns to distinguish recovered faces from original occlusion-free ones, to remove the artifacts and enhance the visual quality of recovered faces. As shown in the experiments, introducing such discriminative regularization indeed effectively preserves the identity information of recovered faces and facilitates the subsequent face recognition.
Our main contributions include the following three aspects. 1) We propose a novel LSTM autoencoder to remove facial occlusion step by step. To the best of our knowledge, this is the first research attempt to exploit the potential of LSTM autoencoders for face de-occlusion in the wild. 2) We introduce a dual-channel decoding process for jointly reconstructing faces and detecting occlusion. 3) We further develop an identity-preserving de-occlusion model, which is able to preserve more facial details and identity information in the recovered faces by employing a supervised and adversarial learning method.
II Related Work
II-A Face De-Occlusion
There are some existing methods based on analytic–synthetic techniques for face de-occlusion. Wright et al. [1] proposed to apply sparse representation to encode faces and demonstrated certain robustness of the extracted features to occlusion. Park et al. [2] showed that eye areas occluded by glasses can be recovered using PCA reconstruction and recursive error compensation. Li et al. [3] proposed a local non-negative matrix factorization (LNMF) method to learn spatially localized and part-based subspace representations to recover and recognize occluded faces. Tang et al. [4] presented a robust Boltzmann machine based model to deal with occlusion and noise. This unsupervised model uses multiplicative gating to induce a scale mixture of two Gaussians over pixels. Cheng et al. [5] introduced a stacked sparse denoising autoencoder (SSDA) with two channels that detects noise by exploiting the difference between the activations of the two SSDAs, which requires that faces from the training and test sets have the same occluded locations. None of those methods considers open test sets. Test samples in their experiments share identical subjects with training samples, which is too limited for practical applications.
II-B Image Inpainting
Our work is also related to image inpainting, which mainly aims to fill in small image gaps or restore large background regions with similar structures. Classical image inpainting methods are usually based on local non-semantic algorithms. Bertalmio et al. [7] proposed to smoothly propagate information from the surrounding areas in the isophote direction for digital inpainting of still images. Criminisi et al. [8] introduced a best-first algorithm to propagate the confidence in the synthesized pixel values in a manner similar to the propagation of information in inpainting, and to compute the actual colour values using exemplar-based synthesis. Osher et al. [9] proposed an iterative regularization procedure for restoring noisy and blurry images using total variation regularization. It is difficult for those methods to remove gross spatially contiguous noise like facial occlusion because too much structural information is lost in that case, e.g., when the entire eye or mouth is occluded. Recently, some methods based on global context features have been developed. Xie et al. [10] proposed stacked sparse denoising autoencoders (SSDA) for image denoising and inpainting by combining sparse coding and pretrained deep networks. Pathak et al. [11] trained context encoders to generate images for inpainting or hole-filling and simultaneously learned feature representations that capture the appearance and semantics of visual structures. However, the locations of the image regions to be filled in are provided beforehand. By contrast, our method does not need to know the locations of the corrupted regions and identifies those regions automatically.
III Robust LSTM-Autoencoders for Face De-Occlusion
In this section we first briefly review the Long Short-Term Memory (LSTM). Then we elaborate the proposed robust LSTM-Autoencoders in detail, including the multi-scale spatial LSTM encoder, the dual-channel LSTM decoder and the identity-preserving component.
III-A Long Short-Term Memory
Long Short-Term Memory (LSTM) [12] is a popular architecture of recurrent neural networks. It consists of a memory unit $c_t$, a hidden state $h_t$ and three types of gates: the input gate $i_t$, the forget gate $f_t$ and the output gate $o_t$. These gates are used to regulate reading and writing to the memory unit. More concretely, at each time step $t$, LSTM first receives an input $x_t$ and the previous hidden state $h_{t-1}$, then computes the activations of the gates, and finally updates the memory unit to $c_t$ and the hidden state to $h_t$. The involved computation is given as follows:

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$,
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$,
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$,   (1)
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)$,
$h_t = o_t \odot \tanh(c_t)$,

where $\sigma(\cdot)$ is a logistic sigmoid function, $\odot$ denotes the point-wise product, and $W$, $U$ and $b$ are the weights and biases for the three gates and the memory unit. A major obstacle in using gradient descent to optimize standard RNN models is that the gradient might vanish quickly during back-propagation along the sequence. LSTM alleviates this issue effectively. Its memory unit sums up activations over all time steps, which guarantees that the gradients are distributed over the factors of the summation. Thus, back-propagation no longer suffers from the vanishing issue when applying LSTM to long sequence data. This allows LSTM to better memorize long-range context information.
Due to such an excellent property, LSTM has been extensively exploited to address a variety of problems concerning sequential data analysis, e.g., speech recognition [13], image captioning [14], action recognition [15] and video representation learning [16], as well as some problems that can be cast as sequence analysis, e.g., scene labeling [17] and image generation [18]. Here we utilize LSTM networks to build our face de-occlusion model, where facial occlusion is removed via sequential processing that eliminates the effect of occlusion step by step.
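The gate and state updates in Eqn. (1) can be sketched in a few lines of NumPy. This is an illustrative single step of a plain LSTM cell; the parameter names `W`, `U`, `b` are generic placeholders, not the paper's trained weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step following Eqn. (1).
    W, U, b are dicts keyed by 'i', 'f', 'o', 'c' (illustrative layout)."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['c'] @ x + U['c'] @ h_prev + b['c'])   # candidate memory
    c = f * c_prev + i * g                               # memory unit update
    h = o * np.tanh(c)                                   # new hidden state
    return h, c
```

The additive memory update `c = f * c_prev + i * g` is the summation over time steps that lets gradients flow without vanishing, as discussed above.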
III-B Robust LSTM-Autoencoders
In this work, we aim to solve the problem of recovering an occlusion-free face from its noisy observation with occlusion. Let $\bar{x}$ denote an occluded face and let $x$ denote its corresponding occlusion-free face. Face de-occlusion then aims to find a function $f(\cdot)$ that removes the occlusion on $\bar{x}$ by minimizing the difference between the recovered face $f(\bar{x})$ and the occlusion-free face $x$:

$\min_f \|x - f(\bar{x})\|_2^2.$   (2)

We propose to parameterize the recovering function $f$ using an autoencoder, which has been exploited for image denoising and inpainting [10]. The recovering function can then be expressed as

$f(\bar{x}) = \mathrm{Dec}(\mathrm{Enc}(\bar{x}; W_e); W_d),$   (3)

where $W_e$ and $W_d$ encapsulate the weights and biases of the encoder function $\mathrm{Enc}(\cdot)$ and the decoder function $\mathrm{Dec}(\cdot)$ respectively. In image denoising and inpainting, the goal is to remove distributed noise, e.g., Gaussian noise, or contiguous noise with low magnitude, e.g., text. Unlike those tasks, one cannot apply the autoencoder directly to remove facial occlusion. It is difficult to remove such a large area of spatially contiguous noise in one step, especially in unconstrained environments where face images may have various resolutions, illuminations, poses and expressions, or even never appear in the training data. Inspired by divide-and-conquer algorithms [19] in computer science, we propose an LSTM based autoencoder that divides the problem of de-occlusion into a series of sub-problems of occlusion detection and removal. Fig. 2 illustrates the framework of our proposed robust LSTM-Autoencoders (RLA) model. We now proceed to explain each of its components and how they work jointly to remove facial occlusion step by step.
III-B1 Multi-Scale Spatial LSTM Encoder
Given the architecture shown in Fig. 2, we first explain the built-in LSTM encoder. The LSTM encoder learns representations from the input occluded face $\bar{x}$. It is worth noting that if the LSTM encoder took the whole face as a single input, the occlusion would be involved in the overall encoding process and eventually contaminate the generated representation. In order to alleviate the negative effect of occlusion, as shown in the left panel of Fig. 2, we first divide the face image into patches and feed them to a spatial LSTM network sequentially. Spatial LSTM is an extension of LSTM for analyzing two-dimensional signals [20]. It sequentializes the input image in a predefined order (here, from left to right and top to bottom). By doing so, some of the encoding steps see occlusion-free patches and thus are not affected by noise. Besides, the noisy information from occluded patches is not directly encoded into the feature representation, but is controlled by the gates of the spatial LSTM for the sake of the subsequent occlusion detection. At each step, the LSTM also encodes a larger region around the current patch, at a lower resolution, to learn more contextual information. Here the whole image is used as this contextual region and concatenated with the current patch to form a joint input of the encoder.
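To make the patch-sequencing concrete, the following sketch splits a face into a 2x2 grid (four non-overlapping patches, matching Section IV-B) and pairs each patch with a subsampled whole-image view as the coarse second scale. The function name and the subsampling scheme are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def make_encoder_inputs(face, grid=2):
    """Split a square face image into a grid of non-overlapping patches,
    read left-to-right and top-to-bottom, and pair each patch with a
    coarse whole-image view (the second scale) as a joint input vector.
    Sizes here are toy-scale, not the paper's."""
    H, W = face.shape
    ph, pw = H // grid, W // grid
    # coarse context: the whole image subsampled to patch resolution
    context = face[::grid, ::grid]
    inputs = []
    for r in range(grid):
        for c in range(grid):
            patch = face[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            inputs.append(np.concatenate([patch.ravel(), context.ravel()]))
    return inputs
```

Each element of `inputs` is what one encoding step would consume; only some of them contain occluded pixels, which is the point of the sequential design.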
For each location $(i, j)$ in the grid dividing the image, the multi-scale spatial LSTM encoder learns a representation from the patch $\bar{x}_{i,j}$ centered at $(i, j)$ as follows:

$i_{i,j} = \sigma(T_i(\bar{x}_{i,j}, h_{i-1,j}, h_{i,j-1}))$,
$f_{i,j} = \sigma(T_f(\bar{x}_{i,j}, h_{i-1,j}, h_{i,j-1}))$,
$o_{i,j} = \sigma(T_o(\bar{x}_{i,j}, h_{i-1,j}, h_{i,j-1}))$,   (4)
$c_{i,j} = f_{i,j} \odot (c_{i-1,j} + c_{i,j-1}) + i_{i,j} \odot \tanh(T_c(\bar{x}_{i,j}, h_{i-1,j}, h_{i,j-1}))$,
$h_{i,j} = o_{i,j} \odot \tanh(c_{i,j})$,

where $T_\ast$ is an affine transformation w.r.t. the parameters of the memory unit and the gates respectively (cf. Eqn. (1)). The memory unit $c_{i,j}$ is connected with the two previous memory units $c_{i-1,j}$ and $c_{i,j-1}$ in the 2D space. It thus takes the information of neighboring patches into consideration when learning the representation for the current patch.

After reading all patches sequentially, the spatial LSTM encoder outputs its last hidden state in the sequence as the feature representation of the occluded face. This representation is then recurrently decoded to extract face and occlusion information for face recovery.
III-B2 Dual-Channel LSTM Decoder
Given the representation of an occluded face produced by the encoder, an LSTM decoder follows to map the learned representation back to an occlusion-free face. Traditional autoencoders, which have been used in image denoising, usually perform the decoding only once. However, as explained above, faces in the real world may contain a variety of occlusions. This kind of spatially contiguous noise corrupts images in a more malicious way than general stochastic noise such as Gaussian noise, because it incurs loss of important structural information of faces. As a result, the face cannot be recovered very well by one-step decoding alone. Therefore, we propose to use an LSTM decoder to progressively restore the occluded parts.
As shown in the top right panel of Fig. 2, the LSTM decoder takes the encoder's last hidden state $h$ as its input at the first step and initializes its memory unit with the last memory state $c$ of the encoder, and then keeps revising its output $x^{(t)}$ at each step based on the previous output $x^{(t-1)}$. The operations of the LSTM decoder for face reconstruction can be summarized as

$i^r_t = \sigma(T^r_i(x^{(t-1)}, h^r_{t-1}))$, $f^r_t = \sigma(T^r_f(x^{(t-1)}, h^r_{t-1}))$, $o^r_t = \sigma(T^r_o(x^{(t-1)}, h^r_{t-1}))$,   (5)
$c^r_t = f^r_t \odot c^r_{t-1} + i^r_t \odot \tanh(T^r_c(x^{(t-1)}, h^r_{t-1}))$,   (6)
$h^r_t = o^r_t \odot \tanh(c^r_t)$,   (7)
$x^{(t)} = T^r_x(h^r_t)$,   (8)

where the superscript $r$ indicates that the parameters are used for the reconstruction network. The final reconstructed face $\hat{x} = \sigma(x^{(T)})$ is obtained by passing the output at the last step $T$ through a sigmoid function, and can be seen as a result refined by decoding multiple times.
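A minimal sketch of the progressive decoding idea: the output is revised over several steps and squashed by a final sigmoid. Here `step_fn` and the residual-style revision are hypothetical stand-ins for one LSTM decoding step; the paper's actual revision is produced by the recurrent network itself.

```python
import numpy as np

def progressive_decode(x0, step_fn, n_steps=8):
    """Progressive refinement in the spirit of the decoder: instead of
    reconstructing in one shot, revise the previous output at every step
    (8 steps, as in Section IV-B), then squash into pixel range.
    `step_fn` is a placeholder for one LSTM decoding step."""
    x = x0
    for _ in range(n_steps):
        x = x + step_fn(x)            # revise based on the previous output
    return 1.0 / (1.0 + np.exp(-x))   # final sigmoid, as in the paper
```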
In the above decoding and reconstruction process, the decoder operates on both non-occluded and occluded parts. Thus, pixels of non-occluded parts may run the risk of being corrupted in the decoding process. To address this issue, we introduce another LSTM decoder which aims to detect the occlusion. Being aware of the location of occlusion, one can simply compensate the values of the non-occluded pixels using the original pixel values in the inputs. In particular, for each pixel, the occlusion detector estimates the probability of its being occluded. As illustrated in Fig. 2 (bottom right), at each step $t$, the LSTM detection network receives the hidden state $h^r_t$ of the reconstruction network and updates its current occlusion scores $s^{(t)}$ based on the previous detection result. Here the cross-network connection provides the decoding information of face reconstruction at each step for the detection network to better localize the occlusion. More formally, the LSTM decoder detects occlusion as follows:

$i^d_t = \sigma(T^d_i(s^{(t-1)}, h^r_t, h^d_{t-1}))$, $f^d_t = \sigma(T^d_f(s^{(t-1)}, h^r_t, h^d_{t-1}))$, $o^d_t = \sigma(T^d_o(s^{(t-1)}, h^r_t, h^d_{t-1}))$,   (9)
$c^d_t = f^d_t \odot c^d_{t-1} + i^d_t \odot \tanh(T^d_c(s^{(t-1)}, h^r_t, h^d_{t-1}))$,   (10)
$h^d_t = o^d_t \odot \tanh(c^d_t)$,   (11)
$s^{(t)} = T^d_s(h^d_t)$,   (12)

where the superscript $d$ indicates that the parameters are used for the detection network. Similar to the reconstruction network, the final occlusion scores are given by $s = \sigma(s^{(T)})$. Combining the reconstructed face $\hat{x}$ and the occluded face $\bar{x}$ according to the occlusion scores then gives the recovered face with compensated pixels:

$\tilde{x} = s \odot \hat{x} + (1 - s) \odot \bar{x}.$   (13)

Note that $\tilde{x}$ is a weighted sum of $\hat{x}$ and $\bar{x}$ using the scores $s$. The pixel value from the reconstructed face is fully preserved if its score is one, and the pixel value equals the one from the occluded face if its occlusion score is zero.
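The occlusion-aware combination is a one-line per-pixel blend; a sketch:

```python
import numpy as np

def blend(reconstructed, occluded, scores):
    """Occlusion-aware combination: a per-pixel weighted sum. Where the
    score is 1 the reconstructed pixel is kept; where it is 0 the
    original (non-occluded) pixel passes through unchanged."""
    return scores * reconstructed + (1.0 - scores) * occluded
```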
III-B3 Optimization
Given a training dataset $\{(\bar{x}_i, x_i)\}_{i=1}^{n}$, substituting Eqn. (13) into Eqn. (2), we obtain the following mean square error function that RLA optimizes:

$L = \frac{1}{n}\sum_{i=1}^{n} \big\| x_i - \big(s_i \odot \hat{x}_i + (1 - s_i) \odot \bar{x}_i\big) \big\|_2^2,$   (14)

which can be minimized by standard stochastic gradient descent. Taking its derivatives w.r.t. $\hat{x}_i$ and $s_i$ gives the gradients

$\frac{\partial L}{\partial \hat{x}_i} = \frac{2}{n}\, s_i \odot (\tilde{x}_i - x_i),$   (15)

$\frac{\partial L}{\partial s_i} = \frac{2}{n}\, (\hat{x}_i - \bar{x}_i) \odot (\tilde{x}_i - x_i),$   (16)

where $\tilde{x}_i = s_i \odot \hat{x}_i + (1 - s_i) \odot \bar{x}_i$ is the recovered face. They are then used in error back-propagation to update the parameters of each LSTM network. Note that in Eqn. (15), the gradients for the non-occluded parts are set to zero by the occlusion scores $s_i$, and thus the reconstruction network will prefer to reconstruct the occluded parts with the help of the occlusion detection network.
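The masking effect described above can be checked numerically. Below is a per-sample sketch of the gradient of the squared error with respect to the reconstructed face, showing that pixels with a zero occlusion score contribute no gradient to the reconstruction network (the averaging factor is dropped for a single sample).

```python
import numpy as np

def recon_gradient(reconstructed, occluded, target, scores):
    """Per-sample gradient of the MSE w.r.t. the reconstructed face:
    the chain rule through the blend multiplies the residual by the
    occlusion scores, so non-occluded pixels (score 0) receive zero
    gradient and the network focuses on occluded regions."""
    recovered = scores * reconstructed + (1.0 - scores) * occluded
    return 2.0 * scores * (recovered - target)
```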
Since the model contains three networks, i.e., the encoder network, the face reconstruction network and the occlusion detection network, directly training the three networks simultaneously hardly yields a good local optimum and may converge slowly. To ease the optimization, we adopt a multi-stage optimization strategy. We first ignore the parameters of the occlusion detection network, and pretrain the encoder and decoder to minimize the reconstruction error. Then we fix their parameters and pretrain the decoder for occlusion detection to minimize the joint loss in Eqn. (14). These two rounds of separate pretraining provide us with sufficiently good initial parameters, after which we retrain all three networks jointly. We observe that this strategy usually gives better results and a faster convergence rate in the experiments.
III-C Identity-Preserving Face De-Occlusion
Although it can restore facial structural information (e.g., eyes, mouth and their spatial configuration) from occluded faces well, the RLA model introduced above only considers minimizing the mean squared error between occlusion-free and recovered faces. Generally, there are multiple plausible appearances for an occluded facial region. For example, when the lower face is occluded, it is hard to determine from the upper face alone what the lower face actually looks like. Thus, if we force the model to exactly fit the value of each pixel, it tends to generate the mean of all probable appearances for the recovered part. This probably leads to loss of discriminative facial details and harms the performance of face recognition. Recently, deep convolutional neural networks (CNNs) have been widely applied to face recognition and provide state-of-the-art performance [21, 22, 23]. Inspired by their success, we propose to leverage an identity-based supervised CNN and an adversarial CNN to provide extra guidance for the RLA model on face recovery, in order to preserve the person identity information and enhance the visual quality of recovered faces.

Fig. 3 illustrates our proposed pipeline for identity-preserving RLA (IP-RLA). A pretrained CNN is concatenated to the decoder for classifying recovered faces with identity labels $y$, and helps tune RLA to simultaneously minimize the mean squared error between pixels in Eqn. (14) and the classification loss

$L_{\mathrm{cls}} = -\log p(y \mid \tilde{x}),$   (17)

where $p(y \mid \tilde{x})$ denotes the probability that the recovered face $\tilde{x}$ is assigned to its identity label $y$ by the supervised CNN. In this way we preserve high-level facial identity information and meanwhile recover low-level structural information of faces. However, we observe that the model produces severe artifacts in the recovered face images in order to fit the classification network. Similar to generative adversarial nets (GAN) [6], we introduce an adversarial discriminator to alleviate this artifact effect. In particular, let $G$ denote the generator modeled by RLA, and $D$ denote the adversarial discriminator modeled by a CNN. The optimization procedure can be viewed as a minimax game between $G$ and $D$, where $D$ is trained to discriminate between original occlusion-free faces and faces recovered by $G$ through maximizing the log probability of predicting the correct labels (original or recovered) for both of them:

$L_{\mathrm{adv}} = \mathbb{E}_{x}\big[\log D(x)\big] + \mathbb{E}_{\bar{x}}\big[\log\big(1 - D(G(\bar{x}))\big)\big],$   (18)

while $G$ is trained to recover more realistic faces that cannot be discriminated by $D$ through minimizing $\log(1 - D(G(\bar{x})))$. Both $G$ and $D$ are optimized alternately using stochastic gradient descent as described in [6].
We first train the RLA model according to Eqn. (14) using the multi-stage optimization strategy mentioned previously, and then train the supervised CNN on original occlusion-free face data and the adversarial CNN on both original and recovered faces to obtain a good supervisor and discriminator. In the fine-tuning stage, we initialize the networks in Fig. 3 with these pretrained parameters and update the parameters of RLA to optimize the following joint loss function in an end-to-end way:

$\min_G \; L + \lambda_1 L_{\mathrm{cls}} + \lambda_2 \log\big(1 - D(G(\bar{x}))\big),$   (19)

where $\lambda_1$ and $\lambda_2$ are trade-off weights. Here the parameters of the supervised CNN are fixed because it has already learned correct filters from original occlusion-free faces. On the other hand, we update the parameters of the adversarial CNN to maximize $L_{\mathrm{adv}}$ in Eqn. (18).
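The two sides of the adversarial game, plus the joint generator objective, can be sketched as scalar losses. The weights `w_cls` and `w_adv` are hypothetical trade-off parameters (this excerpt does not give the paper's weighting), and the probability arrays stand in for outputs of the discriminator and supervised CNN.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Discriminator side of the minimax game: maximize the log
    probability of labelling original faces real and recovered faces
    fake; returned negated so both sides can be *minimized*."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_joint_loss(mse, cls_nll, adv_fake, w_cls=1.0, w_adv=1.0):
    """Joint generator objective in spirit: pixel MSE, plus the identity
    classification negative log-likelihood, plus the adversarial term
    log(1 - D(G(x))). Weights are illustrative assumptions."""
    return mse + w_cls * cls_nll + w_adv * np.mean(np.log(1.0 - adv_fake))
```

A confident discriminator (real scores near 1, fake scores near 0) yields a small discriminator loss, while the generator's loss drops as its recovered faces fool the discriminator.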
IV Experiments
To demonstrate the effectiveness of the proposed model, we evaluate it on two occluded face datasets, one containing synthesized occlusion and the other real occlusion. We present qualitative results of occlusion removal as well as a quantitative evaluation on face recognition.
IV-A Datasets
IV-A1 Training Data
Since it is hard to collect sufficient occluded faces and their corresponding occlusion-free counterparts in real life to model occluded faces in the wild, we train our model on a dataset synthesized from the CASIA-WebFace dataset [24]. CASIA-WebFace contains 10,575 subjects and 494,414 face images crawled from the Web. We select around 380,000 near-frontal faces from the dataset and synthesize occlusions caused by 9 types of common objects on these faces. The occluding objects include glasses, sunglasses, masks, hands, eye masks, scarves, phones, books and cups. Each type of occluding object has 100 different templates, of which half are used for generating occlusion on training data and the rest on test data. For each face, we randomly select one template from the 9 types of occlusion to generate the occluded face. Some occlusion templates require a correct location, such as sunglasses, glasses and masks. We add these templates onto specific locations of the faces with reference to detected facial landmarks. The other templates are added at random locations on the faces to enhance the diversity of the produced data. All face images are cropped, coarsely aligned by three key points located at the centers of the eyes and mouth, and then resized to gray-level images. Fig. 4 illustrates some examples of occluded faces generated using this approach. We will release the training dataset upon acceptance.
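The occlusion synthesis step can be sketched as a masked paste of a template onto the face. The function signature and the binary-alpha scheme are illustrative assumptions, not the released pipeline; landmark-based placement is abstracted into explicit `top`/`left` coordinates.

```python
import numpy as np

def synthesize_occlusion(face, template, mask, top, left):
    """Paste an occluder template (e.g. a sunglasses crop) onto a face at
    a chosen location, mimicking the data-synthesis idea. `mask` is a
    binary alpha for the template; 1 means the occluder covers the pixel."""
    occluded = face.copy()
    h, w = template.shape
    region = occluded[top:top + h, left:left + w]
    occluded[top:top + h, left:left + w] = mask * template + (1 - mask) * region
    return occluded
```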
IV-A2 Test Data
We use two datasets for testing, i.e., LFW [25] and 50-OccPeople; the latter is constructed by us. The LFW dataset contains a total of 13,233 face images of 5,749 subjects, which were collected from the Web. Note that LFW has no overlap with CASIA-WebFace [24]. In order to analyze the effects of various occlusions on face recognition, we add all 9 types of occlusion to every face in the dataset in a similar way as for the training data. Our 50-OccPeople dataset contains face images with real occlusion: 50 subjects and 1,200 images. Each subject has one normal face image and 23 face images taken under realistic illumination conditions with the same 9 types of occlusions. The test images are preprocessed in the same way as the training images. Note that both test datasets have completely different occlusion templates and subjects from the training dataset.
IV-B Settings and Implementation Details
Our model uses a two-layer LSTM network for the encoder and the decoder respectively, and each LSTM has 2,048 hidden units. Each face image is divided into four non-overlapping patches, a reasonable size for capturing facial structures while reducing the negative effect of occlusion. The LSTM encoder reads facial patches from left to right and top to bottom; meanwhile, the whole image is resized to the same size as a patch and used as a different-scale input of the encoder. We set the number of decoder steps to 8 as a trade-off between effectiveness and computational complexity. We use the GoogLeNet [26] architecture for both the supervised and adversarial CNNs, and the original CASIA-WebFace dataset is used to pretrain the CNNs.
For comparison, a standard autoencoder (AE) with four 2,048-dimensional hidden layers (the same as our model) is implemented as a baseline method. We use Principal Component Analysis (PCA) as another baseline, which projects an occluded face image onto a 400-dimensional subspace and then takes the PCA reconstruction as the recovered face. We also include comparisons with Sparse Representation-based Classification (SRC) [1] and the Stacked Sparse Denoising Autoencoder (SSDA) [10]. We test SRC using a subset of 20K images from CASIA-WebFace; however, even on this sampled training set, the estimation of SRC is already impractically slow. For SSDA, we use the same hyper-parameters as [10] and the same number and dimensions of hidden layers as our model. All the experiments in this paper are conducted on a standard desktop with an Intel Core i7 CPU and GTX Titan GPUs.

IV-C Results and Comparisons
IV-C1 Occlusion Removal
We first look into the intermediate outputs of the proposed RLA during the process of occlusion removal, which are visualized in Fig. 5. It can be observed that our model does remove occlusion step by step. Specifically, at the first step, the face reconstruction network produces a probable profile of the face, where occluded parts may not be as clear as non-occluded parts, and the occlusion prediction network provides a coarse estimate of the occlusion region. The outputs are then refined progressively upon the states of previous steps. For example, one can see that more and more structures and textures are added to the face profile, and the shape of the occlusion region becomes sharper and sharper.
To verify the ability of our proposed model to remove occlusion, we present qualitative comparisons with several methods, including Principal Component Analysis (PCA), the Autoencoder (AE), Sparse Representation-based Classification (SRC) [1] and the Stacked Sparse Denoising Autoencoder (SSDA) [10], under different types of occlusion. We also evaluate the contribution of the components of our model in an ablation study, covering the face reconstruction channel of RLA (Face Rec), RLA, and the identity-preserving RLA (IP-RLA).
Fig. 6 gives example results of occlusion removal on faces from the occluded LFW dataset. From the figure, one can see that for each type of occlusion, RLA restores the occluded parts well and meanwhile retains the original appearance of the non-occluded parts. This also demonstrates that the detection of occlusion is rather accurate. Although our model is trained on CASIA-WebFace, which shares no subjects or occlusion templates with the test datasets, it can still remove occlusion effectively without knowing the type or location of the occlusion. By using the supervised and adversarial CNNs to fine-tune it, the proposed IP-RLA further recovers and sharpens discriminative patterns, such as edges and textures, in occluded parts. Note that using only the face reconstruction network in the decoder of RLA damages fine-grained structures of non-occluded parts. This is undesirable because it may lose key information for the subsequent face recognition task. By comparison, PCA cannot remove occlusion and only makes it blurry. SRC does not appropriately reconstruct occluded parts and severely damages or changes the appearance of non-occluded parts. AE and SSDA remove occlusion but over-smooth many details, which biases the recovered faces toward an average face. This clearly demonstrates the advantage of removing occlusion progressively in a recurrent framework.
We also test the proposed IP-RLA on faces of different subjects corrupted by the same type of occlusion at the same location. The results are shown in Fig. 7. It can be seen that our method recovers diverse results for different subjects, which demonstrates that it does not simply produce the mean of occluded facial parts over the training dataset but predicts meaningful appearances according to the non-occluded parts of different subjects.
Furthermore, based on occlusion location and area, we divide the occlusions into 4 categories: a quarter of the face at different locations, the left or right half of the face, the upper face and the lower face. Fig. 8 compares the recovered results of IP-RLA under different occlusion categories on the occluded LFW dataset. As one can see, when a quarter of the face is occluded, our model can remove the occlusion easily. When the left or right half of a face is occluded, although the occluded area is large, our model still produces recovered faces with high similarity to the original occlusion-free faces. The model may exploit facial symmetry and learn specific feature information from the non-occluded half of the face. When the upper or lower part of a face is occluded, our model can also remove the occlusion, but the restored parts may not be very similar to the original ones, as in the 4th column of Fig. 8 (c). This is because it is extremely challenging to infer exactly the appearance of the lower (upper) face from the upper (lower) face. However, it is still possible to correctly predict some general facial attributes, such as gender, size, skin color and expression.
Besides the synthetic occluded face dataset, we also test our model on the 50-OccPeople dataset, a dataset of real occluded faces, to verify its performance in practice. Some results are illustrated in Fig. 9. One can see that our model still obtains good de-occlusion results although it is trained only on synthetic occluded faces.
TABLE I: Face verification equal error rates (EER) for each type of occlusion on the occluded LFW dataset. IP-RLA, RLA and Face Rec are variants of our model; "Occluded face" denotes verification on the unprocessed occluded images.

Types of occlusion     | IP-RLA | RLA   | Face Rec | SSDA  | SRC   | AE    | PCA   | Occluded face
Quarter of face, Hand  | 5.6%   | 6.2%  | 14.8%    | 37.0% | 40.8% | 26.8% | 12.4% | 6.5%
Quarter of face, Book  | 5.9%   | 6.5%  | 15.0%    | 37.7% | 42.0% | 27.8% | 12.5% | 7.0%
Half of face, Hand     | 9.3%   | 10.5% | 20.3%    | 39.1% | 40.1% | 30.5% | 18.6% | 12.8%
Half of face, Book     | 9.8%   | 11.4% | 21.4%    | 40.4% | 43.0% | 32.6% | 21.0% | 13.4%
Upper face, Glasses    | 5.8%   | 5.8%  | 12.8%    | 36.3% | 32.7% | 25.0% | 11.8% | 6.7%
Upper face, Sunglasses | 9.9%   | 10.7% | 22.9%    | 42.9% | 38.8% | 33.6% | 20.7% | 10.0%
Upper face, Eye mask   | 25.5%  | 27.3% | 34.4%    | 44.2% | 43.2% | 39.5% | 33.3% | 27.2%
Lower face, Mask       | 9.2%   | 12.1% | 20.9%    | 40.3% | 44.7% | 31.9% | 21.4% | 12.1%
Lower face, Phone      | 7.2%   | 7.8%  | 15.3%    | 37.8% | 42.3% | 28.9% | 14.2% | 8.7%
Lower face, Cup        | 5.7%   | 6.1%  | 15.3%    | 37.9% | 41.3% | 28.4% | 12.7% | 5.8%
Lower face, Scarf      | 9.3%   | 11.0% | 20.9%    | 39.8% | 44.7% | 33.7% | 17.3% | 10.0%
TABLE II: Average face verification EER over all types of occlusion on the 50-OccPeople dataset.

IP-RLA | RLA   | Face Rec | SSDA  | SRC   | AE    | PCA   | Occluded face
18.0%  | 18.2% | 23.2%    | 42.6% | 45.5% | 35.0% | 25.6% | 19.1%
IV-C2 Face Recognition
We carry out face verification experiments on the faces recovered by the de-occlusion methods to further investigate the ability of our model in recognizing occluded faces. We first extract feature vectors for a pair of face images (one an occlusion-free face, and the other a recovered face or an occluded face) and compute the similarity between the two feature vectors using Joint Bayesian [27] to decide whether the pair of faces is from the same subject. A CNN is adopted to extract face features in this experiment. We train a GoogLeNet model on CASIA-WebFace, and a 6,144-dimensional feature vector is obtained by concatenating the activation outputs of the hidden layers before the three loss layers. After reducing the dimension with PCA, we have an 800-dimensional feature vector for each face image.
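A small sketch of the verification feature pipeline: PCA reduction of CNN descriptors, followed by a similarity score. Cosine similarity is substituted here for the Joint Bayesian model [27] purely for illustration, and the dimensions are toy-sized rather than the paper's 6,144-to-800 projection.

```python
import numpy as np

def pca_reduce(features, out_dim):
    """Reduce face descriptors with PCA, mirroring the step of projecting
    concatenated CNN activations down to a lower dimension before the
    similarity computation. Eigen-directions come from an SVD of the
    centered data."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:out_dim].T  # project onto top principal axes

def cosine_similarity(a, b):
    """A simple verification score; a stand-in for Joint Bayesian [27]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```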
We first evaluate the recognition performance for different types of occlusion on the occluded LFW dataset. We compute equal error rates (EER) on the predefined pairs of faces provided by the dataset website. The pair set contains 10 mutually exclusive folds, each with 300 positive pairs and 300 negative pairs. By alternately occluding the two faces in a pair, a total of 12,000 pairs are generated for testing. Table I reports the verification results for the various occlusions and de-occlusion methods. We compare our proposed model with PCA, AE, SRC and SSDA, and also list the verification performance on occluded face images for reference. As one can see, IP-RLA performs best for all types of occlusion, as it produces more discriminative occlusion-free faces than the other methods. Note that incorporating occlusion detection significantly reduces the error rate compared with recovering faces without it, because utilizing occlusion detection to retain non-occluded parts effectively preserves the discriminative information contained in those parts. SRC does not reach the performance reported in [1] because the open test set shares no subjects with the training dataset. SSDA performs even worse than the standard autoencoder (AE), which shows that it cannot handle a large area of spatially contiguous noise like occlusion well, although it is effective for removing Gaussian noise and low-magnitude contiguous noise like text. Note that using only face reconstruction (Face Rec) still achieves better performance than the standard autoencoder (AE), which demonstrates the effectiveness of the progressive recovery framework.
Similar to the observations in the qualitative analysis, occlusion removal improves occluded face recognition more when a quarter or the left/right half of the face is occluded, because the appearance of the occluded facial parts can be predicted from the non-occluded parts by exploiting facial symmetry. Nevertheless, faces recovered from upper- or lower-face occlusion still achieve lower error rates than the occluded faces, which indicates that our model learns the relations between the upper and lower face and extracts discriminative features from the non-occluded upper (lower) face to recover the occluded lower (upper) face.
We also compare the overall verification performance over all types of occlusion on the 50-OccPeople dataset. We randomly sample 10,000 pairs (5,000 positive and 5,000 negative) of faces for testing. The EERs averaged over all occlusion types are listed in Table II. The verification results show that our model outperforms the other methods and generalizes well to real occluded face data.
V Conclusions
In this paper we have proposed robust LSTM-Autoencoders (RLA) to address the problem of face de-occlusion in the wild. The proposed model is shown to effectively recover occluded facial parts in a progressive manner. It consists of a spatial LSTM network that sequentially encodes face patches at multiple scales to extract a feature representation, and a dual-channel LSTM network that decodes the representation to reconstruct the face and detect the occlusion step by step. Supervised and adversarial CNNs are further introduced to fine-tune the robust LSTM autoencoder and enhance the discriminative identity information in the recovered faces. Extensive experiments on synthetic and real occlusion datasets demonstrate that the proposed model outperforms other de-occlusion methods in terms of both the quality of the recovered faces and the accuracy of occluded face recognition.
References
 [1] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE T. Pattern Analysis Mach. Intelli. (TPAMI), vol. 31, no. 2, pp. 210–227, 2009.
 [2] J. Park, Y. Oh, S. Ahn, and S. Lee, “Glasses removal from facial image using recursive error compensation,” IEEE T. Pattern Analysis Mach. Intelli. (TPAMI), vol. 27, no. 5, pp. 805–811, 2005.
 [3] S. Li, X. Hou, H. Zhang, and Q. Cheng, “Learning spatially localized, parts-based representation,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2001, pp. I–207–I–212.
 [4] Y. Tang, R. Salakhutdinov, and G. Hinton, “Robust boltzmann machines for recognition and denoising,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2012, pp. 2264–2271.
 [5] L. Cheng, J. Wang, Y. Gong, and Q. Hou, “Robust deep autoencoder for occluded face recognition,” in Proc. of the 23rd ACM Int. Conf. on Multimedia, 2015, pp. 1099–1102.
 [6] I. Goodfellow, J. PougetAbadie, M. Mirza, and et al., “Generative adversarial nets,” in Proc. Adv. Neural Info. Process. Syst. (NIPS), 2014, pp. 2672–2680.
 [7] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proc. ACM Conf. Comp. Graphics (SIGGRAPH), 2000, pp. 417–424.
 [8] A. Criminisi, P. Perez, and K. Toyama, “Region filling and object removal by exemplarbased image inpainting,” IEEE Trans. on Image Processing (TIP), vol. 13, no. 9, pp. 1200–1212, 2004.
 [9] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variationbased image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
 [10] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Proc. Adv. Neural Info. Process. Syst. (NIPS), 2012, pp. 341–349.
 [11] D. Pathak, P. Krahenbuhl, J. Donahue, and et al., “Context encoders: Feature learning by inpainting,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2016, pp. 1–9.
 [12] S. Hochreiter and J. Schmidhuber, “Long shortterm memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
 [13] A. Graves and N. Jaitly, “Towards end-to-end speech recognition with recurrent neural networks,” in Proc. Int. Conf. Mach. Learn. (ICML), 2014, pp. 1764–1772.
 [14] K. Xu, J. Ba, R. Kiros, A. Courville, and et al., “Show, attend and tell: Neural image caption generation with visual attention,” CoRR, vol. abs/1502.03044, 2015.
 [15] J. Donahue, L. Hendricksa, S. Guadarrama, and M. Rohrbach, “Longterm recurrent convolutional networks for visual recognition and description,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2015, pp. 2625–2634.
 [16] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised learning of video representations using lstms,” in Proc. Int. Conf. Mach. Learn. (ICML), 2015, pp. 843–852.
 [17] W. Byeon, T. Breuel, F. Raue, and M. Liwicki, “Scene labeling with lstm recurrent neural networks,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2015, pp. 3547–3555.
 [18] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra, “Draw: A recurrent neural network for image generation,” in Proc. Int. Conf. Mach. Learn. (ICML), 2015, pp. 1462–1471.
 [19] T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms. MIT Press, 2001.
 [20] L. Theis and M. Bethge, “Generative image modeling using spatial lstms,” in Proc. Adv. Neural Info. Process. Syst. (NIPS), 2015, pp. 1918–1926.
 [21] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2014, pp. 1701–1708.
 [22] Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2014, pp. 1891–1898.
 [23] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2015, pp. 815–823.
 [24] D. Yi, Z. Lei, S. Liao, and S. Li, “Learning face representation from scratch,” CoRR, vol. abs/1411.7923, 2014.
 [25] G. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” University of Massachusetts, Amherst, Tech. Rep. 0749, October 2007.
 [26] C. Szegedy, W. Liu, Y. Jia, and et al., “Going deeper with convolutions,” in Proc. IEEE Conf. Comp. Vis. Pattern Recogn. (CVPR), 2015, pp. 1–9.
 [27] D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun, “Bayesian face revisited: A joint formulation,” in Proc. Eur. Conf. Comp. Vis. (ECCV), 2012, pp. 566–579.