RED-Net: A Recurrent Encoder-Decoder Network for Video-based Face Alignment

01/17/2018 ∙ by Xi Peng, et al. ∙ IBM ∙ Rutgers University

We propose a novel method for real-time face alignment in videos based on a recurrent encoder-decoder network model. Our proposed model predicts 2D facial point heat maps regularized by both detection and regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model, instead of relying on traditional cascaded model ensembles. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features. We show that such feature disentangling yields better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state of the art and several variations of our method in standard datasets.


1 Introduction

Face landmark detection plays a fundamental role in many computer vision tasks, such as face recognition/verification, expression analysis, person identification, and 3D face modeling. It is also the basic technology component for a wide range of applications like video surveillance, emotion recognition, augmented reality on faces, etc. In the past few years, many methods have been proposed to address this problem, with significant progress being made towards systems that work in real-world conditions (“in the wild”).

Multiple lines of research have been explored for face alignment in the last two decades. Early research includes methods based on active shape models (ASMs) Cootes92BMVC ; StephenECCV08 and active appearance models (AAMs) Gao2010 . ASMs iteratively deform a shape model to the target face image, while AAMs impose both shape and object appearance constraints in the optimization process. Recent advances in the field are largely driven by regression-based techniques XiongCVPR13 ; CaoIJCV14 ; ZhangECCV14 ; Lai2015 ; ZhangTangECCV14 . These methods usually take advantage of large-scale annotated training sets (lots of faces with labeled landmark points), achieving accurate results by learning discriminative regression functions that directly map facial appearance to landmark coordinates. The features used for regressing landmarks can be either hand-crafted XiongCVPR13 ; CaoIJCV14 or extracted from convolutional neural networks ZhangECCV14 ; Lai2015 ; ZhangTangECCV14 . Although these methods achieve very reliable results on standard benchmark datasets, they still suffer from limited performance in challenging scenarios, e.g., involving large face pose variations and heavy occlusions.

A promising direction to address these challenges is to consider video-based face alignment (i.e., sequential face landmark detection) ShenICCVW15 ; peng2017toward , leveraging temporal information and identity consistency as additional constraints WangCVPR16 . Despite the long history of research in rigid and non-rigid face tracking BlackCVPR95 ; OliverCVPR97 ; DecarloIJCV00 ; PatrasFG04 , current efforts have mostly focused on face alignment in still images SagonasICCVW13 ; ZhangECCV14 ; TzimiropoulosCVPR15 ; ZhuCVPR15 . When videos are considered as input, most methods perform landmark detection by independently applying models trained on still images in each frame in a tracking-by-detection manner WangTPAMI15 , with notable exceptions such as AsthanaCVPR14 ; PengICCV15 ; BMVC2016_129 , which explore incremental learning based on previous frames. These methods do not take full advantage of the temporal information to predict face landmarks for each frame. How to effectively model long-term temporal constraints while handling large face pose variations and occlusions is an open research problem for video-based face alignment.

In this work, we address this problem by proposing a novel recurrent encoder-decoder deep neural network model (see Figure 1), named RED-Net. The encoding module projects image pixels into a low-dimensional feature space, whereas the decoding module maps features in this space to 2D facial point maps, which are further regularized by a regression loss.

Our encoder-decoder framework allows us to spatially refine our landmark predictions in order to handle faces with large pose variations. More specifically, we introduce a feedback loop connection between the aggregated 2D facial point maps and the input. The intuition is similar to cascading multiple regression functions XiongCVPR13 ; ZhangECCV14 for iterative coarse-to-fine face alignment, but in our approach the iterations are modeled jointly with shared parameters, using a single network model. This provides a significant parameter reduction compared to traditional methods based on cascaded neural networks. A recurrent structure also avoids the effort of explicitly dividing the task into multiple-stage prediction problems. This subtle difference makes the recurrent model more elegant in terms of holistic optimization: it can implicitly track the prediction behavior across iterations for a specific face example, while cascaded predictions can only look at the immediately previous cascade stage. Our design also shares the spirit of residual networks he2016deep . By adding feedback connections from the predicted heat map, the network only needs to implicitly predict the residual from previous predictions in subsequent iterations, which is arguably easier and more effective than directly predicting the absolute locations of landmark points.

For more effective temporal modeling, we first decouple the features in the bottleneck of the network into temporal-variant factors peng2017reconstruction , such as pose and expression, and temporal-invariant factors, such as identity. We disentangle the features into two components, where one component is used to learn face recognition using identity labels, and the other component encodes the temporal-variant factors. To utilize temporal coherence in our framework, we apply recurrent temporal learning to the temporal-variant component. We use Long Short-Term Memory (LSTM) to implicitly abstract motion patterns by looking at multiple successive video frames, and use this information to improve landmark fitting accuracy. Landmarks with large pose variation are typically outliers in a landmark training set; looking at multiple frames helps to reduce the inherent prediction variance of our model.

We show in our experiments that our encoder-decoder framework and its recurrent learning in both spatial and temporal dimensions significantly improve the performance of sequential face landmark detection. In summary, our work makes the following contributions:

  • We propose a novel recurrent encoder-decoder network model for real-time sequential face landmark detection. To the best of our knowledge, this is the first time a recurrent model is investigated to perform video-based facial landmark detection.

  • Our proposed spatial recurrent learning enables a novel iterative coarse-to-fine face alignment using a single network model. This is critical for handling large face pose changes and is a more effective alternative to cascading multiple network models in terms of accuracy and memory footprint.

  • Different from traditional methods, we apply temporal recurrent learning to temporal-variant features which are decoupled from temporal-invariant features in the bottleneck of the network, achieving better generalization and more accurate results.

  • We provide a detailed experimental analysis of each component of our model, as well as insights about key contributing factors to achieve superior performance over the state of the art. The project page is publicly available at https://sites.google.com/site/xipengcshomepage/eccv2016.

2 Related Work

Face alignment has a long history of research in computer vision. Here we briefly discuss face alignment works related to our approach, as well as advances in deep learning, like the development of recurrent and encoder-decoder neural networks.

Regression-based face landmark detection. Recently, regression-based face landmark detection methods AsthanaCVPR13 ; SunCVPR13 ; XiongCVPR13 ; CaoIJCV14 ; ZhangECCV14 ; AsthanaCVPR14 ; ZhuCVPR15 ; TzimiropoulosCVPR15 ; JourablooCVPR16 ; WuCVPR16 ; ZhuCVPR16 have achieved a significant boost in the generalization performance of face landmark detection, compared to algorithms based on statistical models such as active shape models Cootes92BMVC ; StephenECCV08 and active appearance models Gao2010 . Regression-based approaches directly regress landmark locations based on features extracted from face images. Landmark models for different points are learned either independently or jointly CaoIJCV14 . When all the landmark locations are learned jointly, implicit shape constraints are imposed because they share the same or partially the same regressors. This paper performs landmark detection via both a classification model and a regression model. Different from most previous methods, this work deals with face alignment in a video, jointly optimizing the detection output by utilizing multiple observations of the same person.

Cascaded models for landmark detection. Additional accuracy in face landmark detection can be obtained by learning cascaded regression models. Regression models in earlier cascade stages learn coarse detectors, while later cascade stages refine the result based on early predictions. Cascaded regression helps to gradually reduce the prediction variance, thus making the learning task easier for later-stage detectors. Many methods have effectively applied cascade-like regression models to the face alignment task XiongCVPR13 ; SunCVPR13 ; ZhangECCV14 . The supervised descent method XiongCVPR13 learns cascades of regression models based on SIFT features. Sun et al. SunCVPR13 proposed to use three levels of neural networks to predict landmark locations. Zhang et al. ZhangECCV14 studied the problem via cascades of stacked auto-encoders which gradually refine the landmark positions with higher-resolution inputs. Compared to these efforts, which explicitly define cascade structures, our method learns a spatial recurrent model which implicitly incorporates the cascade structure with shared parameters. It is also more "end-to-end" compared to previous works that divide the learning process into multiple stages.

Face alignment in videos. Most face alignment algorithms utilize temporal information by initializing the location of landmarks with the detection results from the previous frame, performing alignment in a tracking-by-detection fashion WangTPAMI15 . Asthana et al. AsthanaCVPR14 and Peng et al. PengICCV15 ; BMVC2016_129 proposed to learn a person-specific model using incremental learning. However, incremental learning (or online learning) is a challenging problem, as the incremental scheme has to be carefully designed to prevent model drifting. In our framework, we do not update our model online. All the training is performed offline and we expect our LSTM units to capture landmark motion correlations.

Recurrent neural networks. Recurrent neural networks (RNNs) are widely employed in the literature of speech recognition MikolovInterspeech10 and natural language processing MikolovArxiv14 . They have also been recently used in computer vision. For instance, in the tasks of image captioning Karpathy_2015_CVPR and video captioning Yao_2015_ICCV , RNNs are usually employed for text generation. RNNs are also popular as a tool for action classification. As an example, Veeriah et al. VeeriahICCV15 use RNNs to learn complex time-series representations via high-order derivatives of states for action recognition.

Encoder-decoder networks. Encoder-decoder networks are well studied in machine translation ChoArxiv14 , where the encoder learns an intermediate representation and the decoder generates the translation from this representation. They have also been investigated in speech recognition llu_is2015b and computer vision BadriCoRR15 ; HongCoRR15 . Yang et al. YangNIPS15 proposed to decouple identity units and pose units in the bottleneck of the network for 3D view synthesis. However, how to fully utilize the decoupled units for correspondence regularization LongNIPS14 is still unexplored. In this work, we employ the encoder to learn a joint representation for identity, pose, expression, as well as landmarks. The decoder translates the representation to landmark heatmaps. Our spatial recurrent model loops over the whole encoder-decoder framework.

3 Method

The task is to locate facial landmarks in sequential images using an end-to-end deep neural network. Figure 1 shows an overview of our approach. The network consists of a series of nonlinear and multi-layered mappings, which can be functionally categorized as four modules: (1) the encoder-decoder $f_{ENC}$ and $f_{DEC}$, (2) spatial recurrent learning $f_{SRN}$, (3) temporal recurrent learning $f_{TRN}$, and (4) supervised identity disentangling $f_{CLS}$. Details of each module are described in the following sections.

Figure 1: Overview of the recurrent encoder-decoder network: (a) encoder-decoder $f_{ENC}$ and $f_{DEC}$ (Section 3.1); (b) spatial recurrent learning $f_{SRN}$ (Section 3.2); (c) temporal recurrent learning $f_{TRN}$ (Section 3.3); and (d) supervised identity disentangling $f_{CLS}$ (Section 3.4). All modules are potentially nonlinear and multi-layered mappings.

3.1 Encoder-Decoder

The input of the encoder-decoder is a single video frame $\mathcal{I} \in \mathbb{R}^{w \times h \times 3}$ and the output is a response map $\mathcal{M} \in \mathbb{R}^{w \times h \times c}$ that indicates landmark locations, where $c = 7$ or $c = 68$ depending on the number of landmarks to be predicted.

The encoder $f_{ENC}$ performs a sequence of convolution, pooling and batch normalization IoffeCoRR15 to extract a low-dimensional representation $z$ from both $\mathcal{I}$ and $\mathcal{M}$:

$z = f_{ENC}(\mathcal{I}, \mathcal{M};\, \theta_{ENC})$,   (1)

where $f_{ENC}$ denotes the encoder mapping with parameters $\theta_{ENC}$. We concatenate $\mathcal{I}$ and $\mathcal{M}$ along the channel dimension, thus $\{\mathcal{I}, \mathcal{M}\} \in \mathbb{R}^{w \times h \times (3+c)}$. The concatenation is fed into the encoder as an updated input.

Symmetrically, the decoder $f_{DEC}$ performs a sequence of unpooling, convolution and batch normalization to upsample the representation code to the response map:

$\mathcal{M} = f_{DEC}(z;\, \theta_{DEC})$,   (2)

where $f_{DEC}$ denotes the decoder mapping with parameters $\theta_{DEC}$. $\mathcal{M}$ has the same spatial dimensions as $\mathcal{I}$ but $c$ channels, one per landmark. Each channel presents pixel-wise confidences of the corresponding landmark.

The encoder-decoder design plays an important role in our task. First, the decoder's output $\mathcal{M}$ has the same resolution (but a different number of channels) as the input image $\mathcal{I}$. Thus it is easy to directly concatenate $\mathcal{M}$ with $\mathcal{I}$ along the channel dimension. The concatenation provides pixel-wise spatial cues to update the landmark prediction via the proposed spatial recurrent learning ($f_{SRN}$), which we explain in Section 3.2.

Second, the encoder-decoder network achieves a low-dimensional representation $z$ in the bottleneck. We can utilize the domain prior to decouple $z$ into two parts: the identity code $z_{id}$, which is temporal-invariant as we are tracking the same person, and the non-identity code $z_{po}$, which models temporal-variant factors such as head pose, expression, and illumination.

In Section 3.3, we propose the temporal recurrent learning ($f_{TRN}$) to model the changes of $z_{po}$. In Section 3.4, we show how to speed up the network training by carrying out the supervised identity disentangling ($f_{CLS}$) on $z_{id}$.

Third, the encoder-decoder network enables a fully convolutional design. The bottleneck embedding $z$ and the output response map $\mathcal{M}$ are feature maps instead of the fully-connected neurons often used in ordinary convolutional neural networks. This design is highly memory-efficient and can significantly speed up training and testing LongCoRR14 , which is preferred by video-based applications.
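
To make the data flow concrete, the following is a minimal PyTorch sketch of the encoder-decoder interface described above. The layer counts, channel sizes and three-level depth are illustrative assumptions only; the actual VGGNet- and ResNet-based configurations are specified in Section 4.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Stacks the image and current response map, encodes them to a bottleneck that
    is split into identity / non-identity parts, and decodes a new response map."""
    def __init__(self, in_maps=7, out_maps=7, bottleneck_ch=512, id_ch=256):
        super().__init__()
        self.id_ch = id_ch  # channels reserved for the identity code z_id (assumed split)
        self.encoder = nn.Sequential(  # f_ENC: conv + batch norm + downsampling
            nn.Conv2d(3 + in_maps, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, bottleneck_ch, 3, stride=2, padding=1), nn.BatchNorm2d(bottleneck_ch), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # f_DEC: upsample back to per-landmark maps
            nn.ConvTranspose2d(bottleneck_ch, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_maps, 4, stride=2, padding=1),
        )

    def forward(self, image, response_map):
        x = torch.cat([image, response_map], dim=1)        # channel-wise concatenation {I, M}
        z = self.encoder(x)                                # bottleneck feature maps
        z_id, z_po = z[:, :self.id_ch], z[:, self.id_ch:]  # Eq. (9): identity / non-identity split
        return self.decoder(z), z_id, z_po                 # new response map and the two codes
```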

Figure 2: An unrolled illustration of spatial recurrent learning. The response map is quite coarse when the initial guess is far from the ground truth, e.g., under large pose and expression variations. It is gradually refined in the successive recurrent steps.
Figure 3: An unrolled illustration of temporal recurrent learning. $z_{id}$ encodes temporal-invariant factors, which are subject to the same identity constraint. $z_{po}$ encodes temporal-variant factors, which are further modeled by $f_{TRN}$.

3.2 Spatial Recurrent Learning

The purpose of spatial recurrent learning is to pinpoint landmark locations in a coarse-to-fine manner. Unlike existing approaches SunCVPR13 ; ZhangECCV14 that employ multiple networks in cascade, we accomplish the coarse-to-fine search in a single network in which the parameters are jointly learned in successive recurrent steps.

The spatial recurrent learning is performed by iteratively feeding back the previous prediction, stacked with the image as shown in Figure 2, to eventually push the shape prediction from an initial guess to the ground truth:

$\mathcal{M}_k = f_{SRN}(\mathcal{I}, \mathcal{M}_{k-1};\, \theta_{SRN}), \quad k = 1, \dots, K$,   (3)

where $f_{SRN}$ denotes the spatial recurrent mapping with parameters $\theta_{SRN}$. $\mathcal{M}_0$ is the initial response map, which could be a response map generated from the mean shape or the output of the previous frame.

In our conference version peng2016recurrent , detection-based supervision is performed in every recurrent step. It is robust to appearance variations but lacks precision, because pixels within a certain radius around the ground-truth location are labeled using the same value. To address this limitation, motivated by Bulat2016 , we propose to further explore the spatial recurrent learning by performing detection-followed-by-regression in successive steps.

Specifically, we carry out a two-step recurrent learning by setting $K = 2$. The first step performs landmark detection that aims to locate 7 major facial components (i.e., $c = 7$ in Equation (2)). The second step performs landmark regression that refines the positions of all 68 landmarks (i.e., $c = 68$). For clarity, we use $c_{det}$ and $c_{reg}$ to denote the number of channels output by the detection and the regression steps, respectively.

The landmark detection step guarantees fitting robustness, especially under large pose and partial occlusions. The encoder-decoder aims to output a binary map of $c_{det}$ channels, one for each major facial component. The detection step outputs:

$\mathcal{M}^{det} = f_{SRN}(\mathcal{I}, \mathcal{M}_0;\, \theta_{SRN})$,   (4)

where the detection task can be trained using the pixel-wise sigmoid cross-entropy loss:

$\ell_{det} = -\frac{1}{c_{det}\,wh} \sum_{i=1}^{c_{det}} \sum_{j=1}^{wh} \left[ p_{ij} \log \hat{p}_{ij} + (1 - p_{ij}) \log (1 - \hat{p}_{ij}) \right]$,   (5)

where $\hat{p}_{ij}$ denotes the sigmoid output at pixel location $j$ of $\mathcal{M}^{det}$ for the $i$-th landmark, and $p_{ij}$ is the ground-truth label at the same location, which is set to 1 to mark the presence of the corresponding landmark and 0 for the remaining background.

Note that this loss function is different from the N-way cross-entropy loss used in our previous conference paper peng2016recurrent . It allows multiple class labels for a single pixel, which helps to handle overlapping landmarks.

The landmark regression step improves the fitting accuracy based on the output of the previous detection step. The encoder-decoder aims to output a heatmap of $c_{reg}$ channels, one for each landmark. The regression step outputs:

$\mathcal{M}^{reg} = f_{SRN}(\mathcal{I}, \mathcal{M}^{det};\, \theta_{SRN})$,   (6)

where the regression task can be trained using the pixel-wise $\ell_2$ loss:

$\ell_{reg} = \frac{1}{c_{reg}\,wh} \sum_{i=1}^{c_{reg}} \sum_{j=1}^{wh} \| \hat{\mu}_{ij} - \mu_{ij} \|_2^2$,   (7)

where $\hat{\mu}_{ij}$ denotes the heatmap value at pixel location $j$ of $\mathcal{M}^{reg}$ for the $i$-th landmark, and $\mu_{ij}$ is the ground-truth value at the same location, which follows a Gaussian distribution centered at the landmark with a pre-defined standard deviation.
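
The two loss terms can be written compactly in code. The sketch below assumes the network outputs raw logits for the detection step and raw heatmap values for the regression step; the tensor shapes and the averaging convention are illustrative assumptions, and the foreground re-weighting of Section 4.2 is omitted here.

```python
import torch.nn.functional as F

def detection_loss(det_logits, binary_target):
    # det_logits, binary_target: (B, c_det, H, W). The target is 1 inside a small
    # region around each landmark and 0 elsewhere. A per-pixel sigmoid (rather than
    # an N-way softmax) lets several channels be "on" at the same pixel, cf. Eq. (5).
    return F.binary_cross_entropy_with_logits(det_logits, binary_target)

def regression_loss(reg_heatmap, gaussian_target):
    # reg_heatmap, gaussian_target: (B, c_reg, H, W). The target is a Gaussian bump
    # centered on each ground-truth landmark, cf. Eq. (7).
    return F.mse_loss(reg_heatmap, gaussian_target)
```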

Now the spatial recurrent learning (Equation (3)) can be achieved by minimizing the detection loss (Equation (5)) and the regression loss (Equation (7)) simultaneously:

$\underset{\theta_{SRN}}{\arg\min} \;\; \ell_{det} + \lambda\, \ell_{reg}$,   (8)

where $\lambda$ balances the two tasks. Note that the spatial recurrent learning does not introduce new parameters but shares the parameters of the encoder-decoder network, i.e., $\theta_{SRN} = \{\theta_{ENC}, \theta_{DEC}\}$.

The spatial recurrent learning is highly memory-efficient and capable of end-to-end training, which is a significant advantage compared with the cascade framework Bulat2016 . More importantly, the network jointly learns the coarse-to-fine fitting strategy across recurrent steps, instead of training cascaded networks independently SunCVPR13 ; ZhangECCV14 , which guarantees robustness and accuracy in challenging conditions.
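
A sketch of how the two recurrent steps could be wired is given below. The shared trunk, the separate 1×1 heads per step, and the sigmoid applied to the fed-back detection map are assumptions chosen to be consistent with the weight-sharing scheme described in Section 4.2, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SpatialRecurrent(nn.Module):
    """Two recurrent passes through one shared trunk: detection, then regression."""
    def __init__(self, trunk, feat_ch=64, c_det=7, c_reg=68):
        super().__init__()
        self.trunk = trunk  # hypothetical shared encoder-decoder body: takes a
                            # (3 + c_det)-channel input and returns feat_ch feature maps
        self.det_head = nn.Conv2d(feat_ch, c_det, 1)   # step 1: 7 major facial components
        self.reg_head = nn.Conv2d(feat_ch, c_reg, 1)   # step 2: 68 landmark heatmaps

    def forward(self, image, init_map):
        # Step 1: coarse detection from the initial guess (mean shape or previous frame).
        det_map = self.det_head(self.trunk(torch.cat([image, init_map], dim=1)))
        # Step 2: feed the detection response back, stacked with the image, and regress.
        reg_map = self.reg_head(self.trunk(torch.cat([image, torch.sigmoid(det_map)], dim=1)))
        return det_map, reg_map
```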

Table 1: Specification of the VGGNet-based design: block name (top), feature map dimension (middle), and layer configuration (bottom). An entry such as "64" means there are 64 filters (channels), each of size 3×3. Pooling or unpooling operations are performed after or before each block. The pooling window is 2×2 with a stride of 2.
Figure 4: Left: the architecture of the VGGNet-based design. The encoder ($f_{ENC}$) and the decoder ($f_{DEC}$) are nearly symmetrical, except that $f_{ENC}$ has one additional block at the beginning which downsamples the input image so that the image features and $\mathcal{M}$ have the same resolution and can be easily concatenated along the channel dimension. Right: an illustration of the pooling/unpooling with indices. The corresponding pooling and unpooling layers share pooling indices using a 2-bit switch in each 2×2 pooling window.

3.3 Temporal Recurrent Learning

In addition to the spatial recurrent learning, we also propose a temporal recurrent learning to model factors, e.g. head pose, expression, and illumination, that may change over time. These factors affect the landmark locations significantly PengCVIU15 . Thus we can expect improved tracking accuracy by modeling their temporal variations.

As mentioned in Section 3.1, the bottleneck embedding $z$ can be decoupled into two parts: the identity code $z_{id}$ and the non-identity code $z_{po}$:

$z = \{ z_{id}, z_{po} \}$,   (9)

where $z_{id}$ and $z_{po}$ model the temporal-invariant and temporal-variant factors, respectively. We leave $z_{id}$ to Section 3.4 for additional identity supervision, and exploit the variations of $z_{po}$ via the recurrent model. Please refer to Figure 3 for an unrolled illustration of the proposed temporal recurrent learning.

Mathematically, given $T$ successive video frames $\{\mathcal{I}_t\}_{t=1}^{T}$, the encoder extracts a sequence of embeddings $\{z_{po,t}\}_{t=1}^{T}$. Our goal is to achieve a nonlinear mapping $f_{TRN}$, which simultaneously tracks a latent state $h_t$ and updates $z_{po,t}$ at time $t$:

$h_t = q(z_{po,t}, h_{t-1}), \qquad z'_{po,t} = g(h_t)$,   (10)

where $q(\cdot)$ and $g(\cdot)$ are functions of $f_{TRN}$ with parameters $\theta_{TRN}$, and $z'_{po,t}$ is the update of $z_{po,t}$.

The temporal recurrent learning is trained using $T$ successive frames. At each frame, the detection and regression tasks are performed as in the spatial recurrent learning. The recurrent learning is performed by minimizing Equation (8) at every time step $t$:

$\underset{\theta_{SRN},\, \theta_{TRN}}{\arg\min} \;\; \sum_{t=1}^{T} \left( \ell_{det}^{t} + \lambda\, \ell_{reg}^{t} \right)$,   (11)

where $\theta_{TRN}$ denotes the network parameters of the temporal recurrent learning, e.g., the parameters of the LSTM units. It is worth mentioning that we perform recurrent learning in both spatial and temporal dimensions by jointly optimizing $\{\theta_{SRN}, \theta_{TRN}\}$ in Equation (11).

The temporal recurrent module memorizes and models the changing patterns of the temporal-variant factors. Our experiments indicate that the offline-learned model significantly improves the online fitting accuracy and robustness, especially when large variations or partial occlusions occur.
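
The following is a rough sketch of $f_{TRN}$ under the assumption, consistent with Section 4.2, that the non-identity code is average-pooled to a vector before the LSTM and broadcast back to the bottleneck resolution afterwards. Channel and hidden sizes are placeholders.

```python
import torch
import torch.nn as nn

class TemporalRecurrent(nn.Module):
    """Sketch of f_TRN (Eq. (10)): pool z_po to a vector, run an LSTM over T frames,
    and expand the updated code back to the bottleneck resolution."""
    def __init__(self, po_ch=256, hidden=256):
        super().__init__()
        self.proj_in = nn.Linear(po_ch, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.proj_out = nn.Linear(hidden, po_ch)

    def forward(self, z_po_seq):
        # z_po_seq: (B, T, C, H, W) non-identity codes from T successive frames.
        b, t, c, h, w = z_po_seq.shape
        pooled = z_po_seq.mean(dim=(3, 4))            # global average pooling -> (B, T, C)
        states, _ = self.lstm(self.proj_in(pooled))   # latent state h_t tracked over time
        updated = self.proj_out(states)               # updated codes z'_po,t -> (B, T, C)
        return updated.view(b, t, c, 1, 1).expand(b, t, c, h, w)
```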

3.4 Supervised Identity Disentangling

There is no guarantee that temporal-invariant and temporal-variant factors can be completely decoupled in the bottleneck by simply splitting the bottleneck representation into two parts peng2017reconstruction . More supervised information is required to achieve the disentangling. To address this issue, we propose to apply a face recognition task on the identity code $z_{id}$, in addition to the temporal recurrent learning applied on the non-identity code $z_{po}$.

The supervised identity disentangling is formulated as an $N$-way classification problem, where $N$ is the number of unique individuals present in the training sequences. In general, we associate the identity representation $z_{id}$ with a one-hot encoding to indicate the score of each identity:

$y = f_{CLS}(z_{id};\, \theta_{CLS})$,   (12)

where $f_{CLS}$ is the identity classification mapping with parameters $\theta_{CLS}$. The identity task is trained using the $N$-way cross-entropy loss:

$\ell_{cls} = -\sum_{i=1}^{N} e_i \log \hat{e}_i$,   (13)

where $\hat{e}_i$ denotes the softmax activation of the $i$-th element of $y$, and $e_i$ is the $i$-th element of the identity annotation $e$, which is a one-hot vector with a 1 for the correct identity and 0s for all others.

Now we can jointly train all three tasks, i.e., landmark detection, landmark regression, and identity classification. Based on Equations (11) and (13), we simultaneously minimize the detection and regression losses together with the identity loss at every time step $t$:

$\underset{\theta_{SRN},\, \theta_{TRN},\, \theta_{CLS}}{\arg\min} \;\; \sum_{t=1}^{T} \left( \ell_{det}^{t} + \lambda\, \ell_{reg}^{t} + \gamma\, \ell_{cls}^{t} \right)$,   (14)

where $\gamma$ weights the identity constraint. An obvious advantage of our approach is that the whole network can be trained end-to-end by optimizing all parameters simultaneously, which guarantees efficient learning.

It has been shown in ZhangTangECCV14 that learning the face alignment task together with correlated tasks, e.g., head pose estimation, can improve the fitting performance. We have a similar observation when adding the face recognition task to the alignment task. More importantly, we find that the additional identity task can effectively speed up the training of the entire encoder-decoder network. In addition to providing more supervision, the identity task helps to decouple the identity and non-identity factors more completely, which facilitates the training of the temporal recurrent learning.
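
The full objective of Equation (14) can be sketched as a sum over the frames of a training clip, reusing the loss helpers from the earlier sketches. The default weights lam and gamma are placeholders; the paper sets them empirically (Section 5.1).

```python
import torch.nn.functional as F

def joint_loss(det_logits, det_targets, reg_maps, reg_targets,
               id_logits, id_labels, lam=1.0, gamma=1.0):
    # Each argument is a list over the T frames of a training clip.
    total = 0.0
    for t in range(len(det_logits)):
        l_det = F.binary_cross_entropy_with_logits(det_logits[t], det_targets[t])  # Eq. (5)
        l_reg = F.mse_loss(reg_maps[t], reg_targets[t])                            # Eq. (7)
        l_cls = F.cross_entropy(id_logits[t], id_labels[t])                        # Eq. (13)
        total = total + l_det + lam * l_reg + gamma * l_cls
    return total
```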

Table 2: Specification of the ResNet-based design: block name (top), feature map dimension (middle), and layer configuration (bottom). We use conv/deconv layers with a stride of 2 to halve or double the feature map dimensions, so no pooling/unpooling layers are used. The skip connections are specified in Table 3.
Figure 5: Left: the architecture of the ResNet-based design. The encoder ($f_{ENC}$) and the decoder ($f_{DEC}$) are asymmetrical: $f_{ENC}$ is much deeper than $f_{DEC}$, i.e., 151 vs. 4 convolutional layers. An additional block downsamples the input image so that the image features and $\mathcal{M}$ have the same resolution. Skip connections are used to bridge hierarchical spatial information at different resolutions. Right: an example of the residual unit used in $f_{ENC}$; 1×1 convolutional layers are used in the residual unit to cut down the number of filter parameters.

4 Network Architecture

We present the architecture details of the proposed modules: the encoder-decoder ($f_{ENC}$, $f_{DEC}$), $f_{SRN}$, $f_{TRN}$, and $f_{CLS}$. All four modules are designed within a single network that can be trained end-to-end. We first introduce two variant designs of the encoder-decoder, based on which $f_{SRN}$, $f_{TRN}$, and $f_{CLS}$ are designed accordingly.

4.1 The Design of $f_{ENC}$ and $f_{DEC}$

To best evaluate the proposed method, we investigate two variant designs of the encoder-decoder: VGGNet SimonyanCoRR14 based and ResNet he2016deep based. The VGGNet-based design has a symmetrical structure between the encoder and decoder; while the ResNet-based design has an asymmetrical structure due to the usage of the residual modules.

VGGNet-based design. Table 1 presents the network specification and Figure 4 (left) shows the architecture. The encoder is designed based on a variant of the VGG-16 network SimonyanCoRR14 ; KendallCoRR15 . It has 13 convolutional layers with constant 3×3 filters. We can, therefore, initialize the training process from weights trained on large datasets for object classification. We remove all fully connected layers in favor of a fully convolutional design LongCoRR14 , which effectively reduces the number of parameters from 117M to 14.8M BadriCoRR15 . The bottleneck feature maps are split into two parts for the identity and non-identity codes, respectively. This design preserves rich spatial information in 3D feature maps rather than 1D feature vectors, which is important for landmark localization.

We use max-pooling to halve the feature resolution at the end of each convolutional block, with a 2×2 pooling window and a stride of 2. Although max-pooling helps to achieve translation invariance, it causes a considerable loss of spatial information, especially when multiple max-pooling layers are applied in a cascade. To address this issue, we use a 2-bit code to record the index of the maximum activation selected in each 2×2 pooling window ZeilerECCV14 . As illustrated in Figure 4 (right), the memorized index is then used in the corresponding unpooling layer to place each activation back in its original location. This strategy is particularly useful when the decoder recovers the input structure from highly compressed feature maps. Besides, it is far more efficient to store spatial indices than to memorize entire feature maps in float precision, as is common in FCNs LongCoRR14 .

The decoder is nearly symmetrical to the encoder, with a mirrored configuration but replacing all max-pooling layers with unpooling layers. The encoder is slightly deeper than the decoder, with one more convolutional block at the beginning that downsamples the input image so that the image features and $\mathcal{M}$ have the same resolution and can be easily concatenated along the channel dimension. We find that batch normalization IoffeCoRR15 can significantly boost the training speed since it reduces internal covariate shift within the mini-batch. Thus, we apply batch normalization as well as a rectified linear unit (ReLU) NairICML10 after each convolutional layer.

ResNet-based design. Table 2 presents the network specification and Figure 5 (left) shows the architecture. The encoder is designed based on a variant of ResNet-152 he2016deep , which has 50 residual units totaling 151 convolutional layers. Figure 5 (right) shows a residual unit used in $f_{ENC}$; 1×1 convolutional layers are used to cut down the number of filter parameters. According to he2016deep , the residual shortcut guarantees efficient training of the very deep network, as well as improved performance compared with the vanilla design SimonyanCoRR14 . Stride-2 convolutions instead of max-pooling are used to halve the feature map resolution at the end of each block.

Different from the VGGNet-based design, the encoder and decoder are asymmetrical. The encoder is much deeper than the decoder: the decoder has only 4 upsampling blocks totaling 4 convolutional layers. A practical consideration behind this design is that the encoder has to tackle a complicated task, i.e., understand the image and translate it into a low-dimensional embedding, while the decoder's task is relatively simpler, i.e., recover a set of response maps that mark landmark locations from the embedding. We use stride-2 deconvolutions to double the feature map resolution in each block. Similar to the VGGNet-based design, an additional convolutional block is used to downsample the input image so that the image features and $\mathcal{M}$ have the same resolution for an easy channel-wise concatenation.

Another difference between the ResNet-based and the VGGNet-based designs is the use of skip connections OhNIPS15 , as shown in Figure 5 and specified in Table 3. These skip connections bridge hierarchical spatial information between the encoder and decoder at different resolutions. They provide shortcuts for the gradient flow in backpropagation, enabling efficient training. Besides, they also allow a shallow decoder design, since rich spatial information can be delivered through the shortcuts.
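
The additive bridging can be sketched as follows. The block and skip-adapter definitions are placeholders: the document states only that the bridged features are directly added to the decoder outputs at the corresponding resolutions, so this is an illustration rather than the exact Table 3 configuration.

```python
import torch
import torch.nn as nn

class SkipDecoder(nn.Module):
    """Decoder whose per-resolution outputs receive additive skip features from the encoder."""
    def __init__(self, blocks, skips):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # stride-2 deconvolution blocks of the decoder
        self.skips = nn.ModuleList(skips)    # convs adapting encoder features (assumed configuration)

    def forward(self, z, enc_feats):
        # enc_feats: encoder feature maps ordered from coarse to fine resolution.
        x = z
        for block, skip, feat in zip(self.blocks, self.skips, enc_feats):
            x = block(x) + skip(feat)        # bridge spatial detail into the decoder output
        return x
```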

Table 3: Specification of the skip connections. The bridged encoder and decoder blocks share the same configurations, and the bridged features are directly added to the outputs of the decoder at the corresponding resolutions.

4.2 The Design of $f_{SRN}$ and $f_{TRN}$

The design of the proposed $f_{SRN}$ and $f_{TRN}$ aims to trade off network complexity against training and testing efficiency.

Spatial recurrent learning. We perform a two-step spatial recurrent learning. Particularly, the first step performs landmark detection to locate 7 major facial components that are robust to variations, i.e. four corners of left/right eyes, one nose tip, and two corners of the mouth. The second step performs landmark regression to refine the predicted locations of all 68 landmarks. This coarse-to-fine strategy guarantees efficient and robust spatial recurrent learning.

As mentioned in Section 3.2, the landmark detection task outputs a binary map of $c_{det}$ channels, in which the values within a radius of 5 pixels around the ground truth are set to 1 and the remaining background values are set to 0. The landmark regression task outputs a heat map of $c_{reg}$ channels, in which the correct locations are represented by Gaussians with a standard deviation of 5 pixels. The two tasks share the weights of the entire encoder-decoder except for the last convolutional layer, which uses separate convolutional layers to adapt to either the binary map or the heat map.
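
The two kinds of ground-truth maps can be generated as sketched below: binary disks of radius 5 pixels for detection and Gaussian bumps with a 5-pixel standard deviation for regression. NumPy is used for clarity; the looping and data types are illustrative.

```python
import numpy as np

def make_target_maps(landmarks, h, w, radius=5.0, sigma=5.0):
    # landmarks: (L, 2) array of ground-truth (x, y) coordinates in pixels.
    ys, xs = np.mgrid[0:h, 0:w]
    binary = np.zeros((len(landmarks), h, w), dtype=np.float32)
    heat = np.zeros((len(landmarks), h, w), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        binary[i] = (d2 <= radius ** 2).astype(np.float32)             # detection target
        heat[i] = np.exp(-d2 / (2.0 * sigma ** 2)).astype(np.float32)  # regression target
    return binary, heat
```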

In both landmark detection and regression, the foreground pixels are far fewer than the background ones, which leads to highly unbalanced loss contributions. To address this issue, we enlarge the foreground loss defined in Equations (8) and (11) by a constant weight (15 in most cases) to focus more on the foreground pixels.

Temporal recurrent learning. We specify the configuration of $f_{TRN}$ in Figure 6 (left). A Long Short-Term Memory (LSTM) module HochreiterNC97 ; OhNIPS15 is used to model the temporal variations of the non-identity code. The number of hidden neurons in the LSTM and the number of successive frames $T$ in Equation (11) are set empirically. The prediction loss is calculated at each time step. Directly feeding the non-identity code into the LSTM layers would lead to slow training, as it would require a large number of neurons for both the input and output. Instead, we apply average pooling to compress $z_{po}$ to a vector before feeding it to the LSTM, and recover it by unpooling with indices, as shown in Figure 6 (left).

Figure 6: Left: the architecture of $f_{TRN}$. We use average pooling to cut down the input dimension of the LSTM and recover the dimension by upsampling. Right: the architecture of $f_{CLS}$. We use a low-dimensional feature vector to achieve a compact identity representation.

4.3 The Design of $f_{CLS}$

The design of $f_{CLS}$ is shown in Figure 6 (right). The purpose of $f_{CLS}$ is to apply an additional identity constraint on $z_{id}$, so that the identity and non-identity codes can be decoupled more completely. Specifically, $f_{CLS}$ takes $z_{id}$ as input and outputs a feature vector as the identity representation. Instead of using a very long feature vector as in former face recognition networks TaigmanCVPR14 , we use a compact one to reduce the computational cost for efficient training SchroffCVPR15 ; SunCVPR15 . We apply dropout on the vector to avoid overfitting. The vector is then followed by a fully-connected layer of $N$ neurons to output the identity prediction, where $N$ is the number of different subjects in the training sequences. We use the cross-entropy loss defined in Equation (13) to train the identity task.
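
A sketch of the identity head is given below. The embedding size, dropout rate, number of subjects and the global average pooling of z_id are assumptions for illustration; the paper only specifies a compact representation followed by dropout and an N-way fully-connected classifier.

```python
import torch
import torch.nn as nn

class IdentityHead(nn.Module):
    """Sketch of f_CLS: pool z_id, form a compact embedding, and classify among N subjects."""
    def __init__(self, id_ch=256, embed_dim=256, num_subjects=1000, p_drop=0.5):
        super().__init__()
        self.embed = nn.Linear(id_ch, embed_dim)       # compact identity representation
        self.drop = nn.Dropout(p_drop)                 # guards against overfitting
        self.fc = nn.Linear(embed_dim, num_subjects)   # one score per training subject

    def forward(self, z_id):
        v = z_id.mean(dim=(2, 3))                      # global average pooling over H, W
        v = self.drop(torch.relu(self.embed(v)))
        return self.fc(v)                              # logits for the N-way cross-entropy loss
```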

| | AFLW Koestinger11 | LFW Gary14 | Helen LeECCV12 | LFPW BelhumeurCVPR11 | TF fgnet04 | FM PengICCV15 | 300-VW ShenICCVW15 |
| in-the-wild setting | yes | yes | yes | yes | no | yes | yes |
| image number | 21,080 | 12,007 | 2,330 | 1,035 | 500 | 2,150 | 114,000 |
| video number | - | - | - | - | 5 | 6 | 114 |
| landmark annotation | 21pt | 7pt | 194pt | 68pt | 68pt | 68pt | 68pt |
| subject number | - | 5,371 | - | - | 1 | 6 | 105 |
| used in training | 16,864 | 12,007 | 2,330 | 1,035 | 0 | 0 | 90,000 |
| used in evaluation | 4,216 | 0 | 0 | 0 | 500 | 2,150 | 24,000 |
Table 4: The image and video datasets used in training and evaluation. We split AFLW and 300-VW into two parts for training and evaluation, respectively. LFW, Helen, LFPW, TF, and FM are used for training only. Note that LFW, TF, FM and 300-VW have both landmark and identity annotations, while the others have only landmark annotations.

5 Experiments

We first introduce the datasets and settings. Then we carry out a comprehensive module-wise study to validate the proposed method in various aspects. Finally, we compare our method with the state of the art on both controlled and in-the-wild datasets.

5.1 Datasets and Settings

Datasets. We conduct our experiments on both image and video datasets. These datasets are widely used in face alignment and recognition. They present challenges in multiple aspects such as large pose, extensive expression, severe occlusion and dynamic illumination. In total, 7 datasets are used, as summarized in Table 4.

We list the configurations and setups of each dataset in Table 4. Different datasets have different landmark annotation protocols. For Helen, LFPW, TF, FM and 300-VW, we follow SagonasICCVW13 ; Sagonas20163 to obtain both 68- and 7-landmark annotations. For AFLW, we generate 7-landmark annotations from the original 21 landmarks. The landmark annotation of LFW is given by Gary14 . For identity labels, we manually label all videos in TF, FM, and 300-VW, which is easy since the identity is unique in a given video.

AFLW and 300-VW have the largest numbers of labeled images. They are also more challenging than the others due to their extensive variations. Therefore, we use them for both training and evaluation. More specifically, 16,864 of the 21,080 images in AFLW and 90 of the 114 videos in 300-VW are used for training, and the rest are used for evaluation (Table 4). We sample the videos to roughly cover the three different scenarios defined in ChrysosICCVW15 , i.e., "Scenario 1", "Scenario 2" and "Scenario 3", corresponding to well-lit, mildly unconstrained and completely unconstrained conditions.

We perform data augmentation by sampling ten variations from each image in the image training datasets. The sampling is achieved by random perturbations of scale, rotation and translation, as well as horizontal flipping. To generate sequential training data, we randomly sample 100 clips from each training video, where each clip has 10 frames. It is worth mentioning that no augmentation is applied to the video training data, in order to preserve the temporal consistency of successive frames.

Compared methods. We compare the proposed method with both regression-based and deep-learning-based approaches that have reported state-of-the-art performance in unconstrained conditions. In total, 8 methods are compared; they are listed in Section 5.6.

For image-based evaluation, we follow AsthanaCVPR13 to provide a bounding box as the face detection output. For video-based evaluation, we follow PengICCV15 and use a tracking-by-detection protocol, where the face bounding box of the current frame is calculated from the landmarks of the previous frame.

Training strategy. Our approach is capable of end-to-end training. However, there are only 105 different subjects present in 300-VW, which hardly provides sufficient supervision for the identity constraint. To make full use of all datasets, we conduct the training in three steps. First, we pre-train the network without $f_{TRN}$ and $f_{CLS}$ using the image-based datasets, i.e., AFLW Koestinger11 , Helen LeECCV12 and LFPW BelhumeurCVPR11 . Then, $f_{CLS}$ is engaged for the identity constraint and fine-tuned together with the other modules using the image-based LFW Gary14 . Finally, $f_{TRN}$ is triggered and the entire network is fine-tuned using the video-based dataset, i.e., 300-VW ShenICCVW15 .

Experimental settings. In every frame, the initial response map $\mathcal{M}_0$ (Section 3.2) is generated using the landmark prediction of the previous frame. The parameters $\lambda$ and $\gamma$ in Equation (14) are set empirically to balance the loss terms.

For training, we optimize the network parameters using stochastic gradient descent (SGD) with 0.9 momentum. We use a fixed learning rate starting at 0.01 and manually decrease it by an order of magnitude according to the validation accuracy. $f_{ENC}$ is initialized using the pre-trained weights of VGG-16 SimonyanCoRR14 or ResNet-152 he2016deep . The other modules are initialized with Gaussian initialization JiaACMM14 . The training clips in a mini-batch have no identity overlap to avoid oscillations of the identity loss. We perform temporal recurrent learning in both the forward and backward directions to double the usage of the sequential corpus.

For testing, we split 300-VW so that the training and testing videos do not have identity overlap (16 videos share 7 identities), to avoid overfitting. We use the inter-ocular distance to normalize the root mean square error (RMSE) SagonasICCVW13 for accuracy evaluation. A prediction with an error larger than a pre-defined threshold is reported as a failure ShenICCVW15 .
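
A sketch of this evaluation metric follows. The outer-eye-corner indices assume the common 68-point annotation convention, and the failure threshold is left as a parameter since its value is protocol-dependent; both are assumptions for illustration.

```python
import numpy as np

def normalized_rmse(pred, gt, left_eye_idx=36, right_eye_idx=45):
    # pred, gt: (L, 2) landmark coordinates for one face; normalize by inter-ocular distance.
    iod = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))) / iod

def failure_rate(errors, threshold=0.10):
    # errors: per-image normalized RMSE values; threshold chosen per evaluation protocol.
    errors = np.asarray(errors)
    return float(np.mean(errors > threshold))
```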

5.2 Validation of Encoder-decoder Variants

In Section 4.1, we proposed two different designs of the encoder-decoder: (1) a VGGNet-based design with a symmetrical encoder and decoder, which was mainly investigated in our earlier conference paper peng2016recurrent ; and (2) a ResNet-based design with an asymmetrical structure, i.e., the encoder is much deeper than the decoder. In particular, skip connections are incorporated to bridge the encoder and decoder with hierarchical spatial information at different resolutions.

We compare the performance of the two encoder-decoder variants on AFLW Koestinger11 and 300-VW ShenICCVW15 ; the results are reported in Table 5. The ResNet-based design outperforms the VGGNet-based variant by a substantial margin in terms of fitting accuracy (mean error) and robustness (standard deviation). The much deeper layers, as well as the proposed skip connections, contribute substantially to the improvement. In addition, the ResNet-based encoder-decoder has a computational cost very close to that of the VGGNet-based variant, e.g., in terms of average fitting time per image/frame and the memory usage of a trained model, which should be attributed to the customized residual module design and the proposed asymmetrical encoder-decoder network.

| | Mean (%) | Std (%) | Time | Memory |
| VGGNet-based (AFLW) | 6.85 | 4.52 | 43.6 | 184 |
| ResNet-based (AFLW) | 6.33 | 3.61 | 54.9 | 257 |
| VGGNet-based (300-VW) | 5.16 | 2.57 | 42.5 | 184 |
| ResNet-based (300-VW) | 4.75 | 2.10 | 56.2 | 257 |
Table 5: Performance comparison of the VGGNet-based and ResNet-based encoder-decoder variants. Network configurations are described in Section 4.1. Rows 1-2: image-based results on AFLW Koestinger11 ; Rows 3-4: video-based results on 300-VW ShenICCVW15 .
| | Common: Error | Common: Failure | Challenging: Error | Challenging: Failure |
| Single-step Detection | 6.05 | 4.62 | 8.14 | 12.4 |
| Single-step Regression | 5.92 | 4.75 | 7.87 | 14.5 |
| Recurrent Det.+Det. | 5.86 | 3.44 | 7.33 | 8.20 |
| Recurrent Det.+Reg. | 5.71 | 3.30 | 6.97 | 8.75 |
Table 6: Comparison of single-step detection or regression with the proposed recurrent detection-followed-by-regression on AFLW Koestinger11 . The proposed method (last row) has the best performance, especially in the challenging settings.
| | Mean | Std | Memory |
| Cascade Det. & Reg. | 6.81 | 4.53 | 468 |
| Recurrent Det. & Reg. | 6.33 | 3.61 | 257 |
Table 7: Comparison of cascade and recurrent learning in the challenging settings of AFLW Koestinger11 . The latter improves accuracy with half the memory usage of the former.
| | Common: Mean (%) | Common: Std (%) | Common: Fail (%) | Challenging: Mean (%) | Challenging: Std (%) | Challenging: Fail (%) | Full: Mean (%) | Full: Std (%) | Full: Fail (%) |
| w/o $f_{TRN}$ | 4.52 | 2.24 | 3.48 | 6.27 | 5.33 | 13.3 | 5.83 | 3.42 | 6.43 |
| with $f_{TRN}$ | 4.21 | 1.85 | 1.71 | 5.64 | 3.28 | 5.40 | 5.25 | 2.15 | 2.82 |
Table 8: Validation of temporal recurrent learning on 300-VW SagonasICCVW13 . $f_{TRN}$ helps to improve the tracking robustness (smaller std and lower failure rate), as well as the tracking accuracy (smaller mean error). The improvement is more significant in the challenging settings of large pose and partial occlusion, as demonstrated in Figure 7.

5.3 Validation of Spatial Recurrent Learning

We validate the proposed spatial recurrent learning on the validation set of AFLW Koestinger11 . To better investigate the benefits of spatial recurrent learning, we partition the validation set into two image groups according to the absolute value of the yaw angle: (1) common settings, with small yaw angles; and (2) challenging settings, with large yaw angles. The training set is an ensemble of AFLW Koestinger11 , Helen LeECCV12 and LFPW BelhumeurCVPR11 , as described in Table 4.

Validation of detection-followed-by-regression. To validate the proposed recurrent detection-followed-by-regression, we investigated four different network configurations:

  • Single-step detection using the loss defined in Equation (5);

  • Single-step regression using the loss defined in Equation (7);

  • Two-step recurrent detection-followed-by-detection;

  • Two-step recurrent detection-followed-by-regression.

The mean fitting errors and failure rates are reported in Table 6. First, the results show that two-step recurrent learning consistently decreases the fitting error and failure rate compared with either single-step detection or single-step regression. The improvement is more significant in the challenging settings with large pose variations. Second, although landmark detection is more robust in the challenging settings (lower failure rate), it lacks the ability to predict precise locations (smaller fitting error) compared to landmark regression. These observations support the effectiveness of the proposed recurrent detection-followed-by-regression.

Validation of recurrent learning. We also compare the proposed spatial recurrent learning with the cascade models widely used in former approaches SunCVPR13 ; ZhangECCV14 . For a fair comparison, we implement a two-step cascade variant that performs detection followed by regression. Each network in the cascade has exactly the same architecture as the recurrent version, but there is no weight sharing between the cascade stages. We fully train the cascade networks using the same training set and validate the performance in the challenging settings.

The comparison is shown in Table 7. Unsurprisingly, the spatial recurrent learning improves the fitting accuracy. The underlying reason is that the recurrent network learns the step-by-step fitting strategy jointly, whereas the cascade networks learn each step independently; the recurrent network can therefore better handle the challenging settings where the initial guess is usually far from the ground truth. Moreover, the recurrent network with shared weights reduces the memory usage to one-half of that of the cascaded model.

Figure 7: Examples of temporal recurrent learning on 300-VW SagonasICCVW13 . The tracked subject undergoes intensive pose and expression variations as well as severe partial occlusions. $f_{TRN}$ substantially improves the tracking robustness (less variance) and fitting accuracy (lower error), especially for landmarks on the nose tip and mouth corners.

5.4 Validation of Temporal Recurrent Learning

We validate the proposed temporal recurrent learning on the validation set of 300-VW ShenICCVW15 . To better study the performance under different settings, we split the validation set into two groups: (1) 9 videos in common settings that roughly match "Scenario 1"; and (2) 15 videos in challenging settings that roughly match "Scenario 2" and "Scenario 3". The common, challenging and full sets are used for evaluation.

We implemented a variant of our approach that turns off the temporal recurrent learning $f_{TRN}$. It was also pre-trained on the image training set and fine-tuned on the video training set. Since there was no temporal recurrent learning, we used individual frames instead of clips for the fine-tuning, which was performed for the same 50 epochs. The results with and without temporal recurrent learning are shown in Table 8.

For videos in the common settings, the temporal recurrent learning reduces the mean error from 4.52% to 4.21% and the standard deviation from 2.24% to 1.85%, while the failure rate is remarkably reduced from 3.48% to 1.71% (Table 8). Temporal modeling produces better predictions by taking history observations into consideration; it may implicitly learn to model the motion dynamics in the hidden units from the training clips.

For videos in the challenging settings, the temporal recurrent learning wins by an even bigger margin. Without $f_{TRN}$, it is hard to capture the drastic motion or appearance changes between consecutive frames, which inevitably results in a higher mean error, standard deviation and failure rate. Figure 7 shows an example where the subject exhibits intensive pose and expression variations as well as severe partial occlusions. The error curves show that our recurrent model clearly reduces the landmark errors, especially for landmarks on the nose tip and mouth corners. The less oscillating error also suggests that $f_{TRN}$ significantly improves the prediction stability across frames.

Figure 8: Fitting accuracy of different facial components with respect to the number of training epochs on 300-VW ShenICCVW15 . The proposed supervised identity disentangling helps to achieve a more complete factor decoupling in the bottleneck of the encoder-decoder, which yields better generalization capability and more accurate fitting results.

| 7 landmarks | TF fgnet04 | FM PengICCV15 | 300-VW ShenICCVW15 | 68 landmarks | TF fgnet04 | FM PengICCV15 | 300-VW ShenICCVW15 |
| DRMF AsthanaCVPR13 | 4.43 | 8.53 | 9.16 | ESR CaoIJCV14 | 3.49 | 6.74 | 7.09 |
| ESR CaoIJCV14 | 3.81 | 7.58 | 7.83 | SDM XiongCVPR13 | 3.80 | 7.38 | 7.25 |
| SDM XiongCVPR13 | 4.01 | 7.49 | 7.65 | CFAN ZhangECCV14 | 3.31 | 6.47 | 6.64 |
| IFA AsthanaCVPR14 | 3.45 | 6.39 | 6.78 | TCDCN ZhangTangECCV14 | 3.45 | 6.92 | 7.59 |
| DCNC SunCVPR13 | 3.67 | 6.16 | 6.43 | CFSS ZhuCVPR15 | 3.04 | 5.67 | 6.13 |
| RED-Net (Ours) | | | | RED-Net (Ours) | | | |

Table 9: Mean error comparison with the state of the art on video-based validation sets: TF, FM, and 300-VW SagonasICCVW13 . The top performance in each dataset is highlighted. Our approach achieves the best fitting accuracy on both controlled and unconstrained datasets.

5.5 Benefits of Supervised Identity Disentangling

The supervised identity disentangling is proposed to better decouple the temporal-invariant and temporal-variant factors in the bottleneck of the encoder-decoder. This facilitates the temporal recurrent training, yielding better generalization and more accurate fittings at test time.

To study the effectiveness of the identity constraint, we remove $f_{CLS}$ and follow exactly the same training steps. The testing accuracy comparison on 300-VW SagonasICCVW13 is shown in Figure 8. The accuracy is calculated as the ratio of pixels that are correctly classified in the corresponding channel(s) of the response map.

The validation results for different facial components show similar trends. (1) The network demonstrates better generalization capability when using the additional identity cues, which results in more efficient training. For instance, after only 10 training epochs, the validation accuracy for landmarks located at the left eye reaches 0.84 with the identity loss, compared to 0.80 without it. (2) The supervised identity information can substantially boost the testing accuracy: there is a clear improvement from using the additional identity loss. It is worth mentioning that, at the very beginning of training (< 5 epochs), the network has inferior testing accuracy with supervised identity disentangling, because the suddenly added identity loss perturbs the backpropagation process. However, the testing accuracy with the identity loss increases rapidly and surpasses that without the identity loss after only a few more training epochs.

5.6 General Comparison with the State of the art

We compare our framework with both traditional and deep-learning-based approaches. The methods with hand-crafted features include: (1) DRMF AsthanaCVPR13 , (2) ESR CaoIJCV14 , (3) SDM XiongCVPR13 , (4) IFA AsthanaCVPR14 , and (5) PIEFA PengICCV15 . The deep-learning-based methods include: (1) DCNC SunCVPR13 , (2) CFAN ZhangECCV14 , and (3) TCDCN ZhangTangECCV14 . All of these methods were recently proposed and reported state-of-the-art performance. For a fair comparison, we evaluate these methods in a tracking protocol: the fitting result of the current frame is used as the initial shape (DRMF, SDM and IFA) or the bounding box (ESR and PIEFA) in the next frame. The comparison is performed on both a controlled dataset, i.e., Talking Face (TF) fgnet04 , and in-the-wild datasets, i.e., Face Movie (FM) PengICCV15 and 300-VW ShenICCVW15 .

We report the evaluation results for both the 7- and 68-landmark setups in Table 9. Our approach achieves state-of-the-art performance under both settings, outperforming the others by a substantial margin on all datasets under both the 7-landmark and 68-landmark protocols. The performance gain is more significant on the challenging datasets (FM and 300-VW) than on the controlled dataset (TF). Our alignment model also runs fairly fast: it takes around 40 ms to process an image using a Tesla K40 GPU accelerator. Please refer to Figure 9 for fitting results of our approach on FM PengICCV15 and 300-VW ShenICCVW15 , which demonstrate robust and accurate performance in wild conditions.

Figure 9: Examples of 7-landmark (Row 1-6) and 68-landmark (Row 7-10) fitting results on FM PengICCV15 and 300-VW ShenICCVW15 . The proposed approach achieves robust and accurate fittings when the tracked subjects suffer from large pose/expression changes (Row 1, 3, 4, 6, 10), illumination variations (Row 2, 8) and partial occlusions (Row 5, 7).

6 Conclusion

In this paper, we proposed a novel recurrent encoder-decoder network for real-time sequential face alignment. It utilizes spatial recurrence to train an end-to-end optimized coarse-to-fine landmark detection model. It decouples temporal-invariant and temporal-variant factors in the bottleneck of the network, and exploits recurrent learning at both spatial and temporal dimensions. Extensive experiments demonstrated the effectiveness of our framework and its superior performance. The proposed method provides a general framework that can be further applied to other localization-sensitive tasks, such as human pose estimation, object detection, scene classification, and others.

References

  • (1) Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Robust discriminative response map fitting with constrained local models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3444–3451 (2013)

  • (2) Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Incremental face alignment in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
  • (3) Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. CoRR (2015)
  • (4) Belhumeur, P.N., Jacobs, D.W., Kriegman, D.J., Kumar, N.: Localizing parts of faces using a consensus of exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2011)
  • (5) Black, M., Yacoob, Y.: Tracking and recognizing rigid and non-rigid facial motions using local parametric models of image motion. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (1995)
  • (6) Bulat, A., Tzimiropoulos, G.: Human Pose Estimation via Convolutional Part Heatmap Regression, pp. 717–732. Springer International Publishing, Cham (2016)
  • (7) Cao, X., Wei, Y., Wen, F., Sun, J.: Face alignment by explicit shape regression. International Journal of Computer Vision 107(2), 177–190 (2014)
  • (8) Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. CoRR abs/1409.1259 (2014)
  • (9) Chrysos, G.G., Antonakos, E., Zafeiriou, S., Snape, P.: Offline deformable face tracking in arbitrary videos. In: Proceedings of the IEEE International Conference on Computer Vision Workshop, pp. 954–962 (2015)
  • (10) Cootes, T.F., Taylor, C.J.: Active shape models - smart snakes. In: BMVC (1992)
  • (11) Decarlo, D., Metaxas, D.: Optical flow constraints on deformable models with applications to face tracking. International Journal of Computer Vision 38(2), 99–127 (2000)
  • (12) FGNet: Talking face video. Tech. rep., Online (2004)
  • (13) Gao, X., Su, Y., Li, X., Tao, D.: A review of active appearance models. IEEE Transactions on Systems, Man, and Cybernetics 40(2), 145–158 (2010)
  • (14) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on (2016)
  • (15) Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computing 9(8), 1735–1780 (1997)
  • (16) Hong, S., Noh, H., Han, B.: Decoupled deep neural network for semi-supervised semantic segmentation. CoRR abs/1506.04924 (2015)
  • (17) Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR abs/1502.03167 (2015)
  • (18) Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: ACM Multimedia Conference, pp. 675–678 (2014)
  • (19) Jourabloo, A., Liu, X.: Large-pose face alignment via cnn-based dense 3d model fitting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  • (20) Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
  • (21) Kendall, A., Badrinarayanan, V., Cipolla, R.: Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. CoRR abs/1511.02680 (2015)
  • (22) Koestinger, M., Wohlhart, P., Roth, P.M., Bischof, H.: Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In: Workshop on Benchmarking Facial Image Analysis Technologies (2011)
  • (23) Lai, H., Xiao, S., Cui, Z., Pan, Y., Xu, C., Yan, S.: Deep cascaded regression for face alignment. CoRR abs/1510.09083 (2015)
  • (24) Le, V., Brandt, J., Lin, Z., Bourdev, L., Huang, T.S.: Interactive facial feature localization. In: European Conference on Computer Vision, pp. 679–692 (2012)
  • (25) Huang, G.B., Learned-Miller, E.: Labeled faces in the wild: Updates and new reporting procedures. Tech. Rep. UM-CS-2014-003, University of Massachusetts, Amherst (2014)
  • (26) Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. CoRR abs/1411.4038 (2014)
  • (27) Long, J.L., Zhang, N., Darrell, T.: Do convnets learn correspondence? In: Advances in Neural Information Processing Systems, pp. 1601–1609 (2014)
  • (28) Lu, L., Zhang, X., Cho, K., Renals, S.: A study of the recurrent neural network encoder-decoder for large vocabulary speech recognition. In: INTERSPEECH (2015)
  • (29) Mikolov, T., Joulin, A., Chopra, S., Mathieu, M., Ranzato, M.: Learning longer memory in recurrent neural networks. CoRR abs/1412.7753 (2014)
  • (30) Mikolov, T., Karafiát, M., Burget, L., Černocký, J., Khudanpur, S.: Recurrent neural network based language model. In: INTERSPEECH (2010)
  • (31) Milborrow, S., Nicolls, F.: Locating facial features with an extended active shape model. In: European Conference on Computer Vision, pp. 504–513 (2008)
  • (32) Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: International Conference on Machine Learning, pp. 807–814 (2010)
  • (33) Oh, J., Guo, X., Lee, H., Lewis, R.L., Singh, S.: Action-conditional video prediction using deep networks in atari games. In: Advances in Neural Information Processing Systems, pp. 2845–2853 (2015)
  • (34) Oliver, N., Pentland, A., Berard, F.: Lafter: Lips and face real time tracker. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 123–129 (1997)
  • (35) Patras, I., Pantic, M.: Particle filtering with factorized likelihoods for tracking facial features. In: Automatic Face and Gesture Recognition, pp. 97–102 (2004)
  • (36) Peng, X., Feris, R.S., Wang, X., Metaxas, D.N.: A recurrent encoder-decoder network for sequential face alignment. In: European Conference on Computer Vision, pp. 38–56. Springer International Publishing (2016)
  • (37) Peng, X., Hu, Q., Huang, J., Metaxas, D.N.: Track facial points in unconstrained videos. In: Proceedings of the British Machine Vision Conference, pp. 129.1–129.13 (2016)
  • (38) Peng, X., Huang, J., Hu, Q., Zhang, S., Elgammal, A., Metaxas, D.: From circle to 3-sphere: Head pose estimation by instance parameterization. Computer Vision and Image Understanding 136, 92–102 (2015)
  • (39) Peng, X., Yu, X., Sohn, K., Metaxas, D.N., Chandraker, M.: Reconstruction-based disentanglement for pose-invariant face recognition. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1623–1632 (2017)
  • (40) Peng, X., Zhang, S., Yang, Y., Metaxas, D.N.: Piefa: Personalized incremental and ensemble face alignment. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
  • (41) Peng, X., Zhang, S., Yu, Y., Metaxas, D.N.: Toward personalized modeling: Incremental and ensemble alignment for sequential faces in the wild. International Journal of Computer Vision pp. 1–14 (2017)
  • (42) Sagonas, C., Antonakos, E., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: 300 faces in-the-wild challenge: Database and results. Image and Vision Computing 47, 3–18 (2016)
  • (43) Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: 300 faces in-the-wild challenge: The first facial landmark localization challenge. In: Proceedings of the IEEE International Conference on Computer Vision Workshop (2013)
  • (44) Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823 (2015)
  • (45) Shen, J., Zafeiriou, S., Chrysos, G., Kossaifi, J., Tzimiropoulos, G., Pantic, M.: The first facial landmark tracking in-the-wild challenge: Benchmark and results. In: Proceedings of the IEEE International Conference on Computer Vision Workshop (2015)
  • (46) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
  • (47) Sun, Y., Wang, X., Tang, X.: Deep convolutional network cascade for facial point detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3476–3483 (2013)
  • (48) Sun, Y., Wang, X., Tang, X.: Deeply learned face representations are sparse, selective, and robust. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2892–2900 (2015)
  • (49) Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: Deepface: Closing the gap to human-level performance in face verification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
  • (50) Tzimiropoulos, G.: Project-out cascaded regression with an application to face alignment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3659–3667 (2015)
  • (51) Veeriah, V., Zhuang, N., Qi, G.J.: Differential recurrent neural networks for action recognition. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
  • (52) Wang, J., Cheng, Y., Feris, R.S.: Walk and learn: Facial attribute representation learning from egocentric video and contextual data. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  • (53) Wang, X., Yang, M., Zhu, S., Lin, Y.: Regionlets for generic object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(10), 2071–2084 (2015)
  • (54) Wu, Y., Ji, Q.: Constrained joint cascade regression framework for simultaneous facial action unit recognition and facial landmark detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  • (55) Xiong, X., De la Torre, F.: Supervised descent method and its application to face alignment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013)
  • (56) Yang, J., Reed, S., Yang, M.H., Lee, H.: Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In: Advances in Neural Information Processing Systems (2015)
  • (57) Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H., Courville, A.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
  • (58) Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: European Conference on Computer Vision, pp. 818–833 (2014)
  • (59) Zhang, J., Shan, S., Kan, M., Chen, X.: Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment. In: European Conference on Computer Vision, pp. 1–16 (2014)
  • (60) Zhang, Z., Luo, P., Loy, C.C., Tang, X.: Facial landmark detection by deep multi-task learning. In: European Conference on Computer Vision, pp. 94–108 (2014)
  • (61) Zhu, S., Li, C., Loy, C.C., Tang, X.: Face alignment by coarse-to-fine shape searching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4998–5006 (2015)
  • (62) Zhu, X., Lei, Z., Liu, X., Shi, H., Li, S.Z.: Face alignment across large poses: A 3d solution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)