A fast online cascaded regression algorithm for face alignment

05/10/2019 · by Lin Feng, et al. · Dalian University of Technology

Traditional machine-learning-based face alignment usually tracks the locations of facial landmarks with a static model trained offline, where all of the training data is available in advance. When new training samples arrive, the static model must be retrained from scratch, which is excessively time- and memory-consuming. In many real-time applications, however, the training data arrives one sample or one batch at a time, so a static model performs poorly on sequential images with extensive variations. The most critical and challenging problem in this field is therefore to dynamically update the tracker's model so as to continuously enhance its predictive and generalization capabilities. To address this problem, we develop a fast and accurate online learning algorithm for face alignment. In particular, we incorporate the online sequential extreme learning machine into a parallel cascaded regression framework, coined incremental cascade regression (ICR). To the best of our knowledge, this is the first incremental cascaded framework with a non-linear regressor. One main advantage of ICR is that the tracker model can be updated quickly and incrementally, without full retraining, whenever a new input arrives. Experimental results demonstrate that the proposed ICR is more accurate and efficient on still and sequential images than recent state-of-the-art cascade approaches. Furthermore, the incremental learning proposed in this paper can update the trained model in real time.


I Introduction

Face alignment aims to locate a sparse set of facial landmarks in a given facial image or video. It is a topic of interest in Computer Vision because many subsequent face analysis tasks, such as face recognition [1], facial animation, and authentication on the Internet of Things [2], heavily depend on accurate localization of facial landmarks. Over the decades, various face alignment procedures have been proposed, which can be broadly classified into generative and discriminative models. Generative approaches adopt an analysis-by-synthesis loop in which the optimization strategy attempts to find the optimal shape parameters by maximizing the joint posterior probability between the pre-built deformable model and the features of the input image. Representative examples of this category are the Active Appearance Model (AAM) [3] and the Gauss-Newton Deformable Part Model (GN-DPM) [4].

Discriminative models seek to learn discriminative information (i.e., a discriminative function [5, 6]) that directly maps a representation of facial appearance to facial landmarks. Many discriminative methods adopt the popular Cascaded Regression (CR) framework, in which a series of regressors is learned in a cascaded manner to gradually refine an initialization towards the ground truth. Numerous cascaded regression methods have been shown to produce excellent results on face alignment tasks, validating the CR framework's superior efficiency and accuracy [7, 8, 9, 10, 11]. The most efficient of these methods is LBF [12], which uses a set of local binary features to learn a cascade of linear regressors and runs at over 3000 fps on a standard desktop when locating a few dozen landmarks. The authors of [6] provide a theoretical explanation of cascaded linear regression from the perspective of least-squares optimization and solve it as a supervised descent method. While these cascaded linear regression methods are very efficient, they suffer from poor fitting capability when exploiting the non-linear and complex relationship between the feature space and shape variations in unconstrained scenarios.

Considering the limitations of linear regression, some non-linear regressors based on decision trees, such as boosting [13] and random forests [14], were introduced into cascaded regression. However, these ensemble learning models are prone to over-fitting and suffer from a very high computational burden [15]. As the wave of deep learning research [16] swept the image domain, many deep-learning-based methods [7, 17, 18, 19] were proposed and achieved breakthrough success on some large-scale datasets. Because of their complicated structure and great number of hyperparameters, these deep frameworks tend to consume massive amounts of time to train. Moreover, a deep model requires a complete retraining process whenever the training data is supplemented.

Although these discriminative methods achieve superior performance on unconstrained faces and other challenging situations, they are limited by a static generic model built entirely from offline training data. Such a static model cannot be updated in real time to handle certain specific tasks (e.g., person-specific landmark tracking in video). Since the entire training procedure is very time-consuming and expensive, how best to exploit discriminative cascaded regression for incremental learning is an intractable issue. A few studies [9, 20] have started to address this vital issue from the viewpoint of an incremental linear regression function. However, a linear regressor is limited to a linear relationship between dependent and independent variables.

To overcome the aforementioned limitations, in this paper we study incremental training of cascaded regression with a non-linear regressor. As the foundation of our algorithm, the CR framework is one of the most practical and effective frameworks for localizing facial landmarks. Nevertheless, the traditional CR framework has two limitations for incremental training. (1) It trains a series of regressors stage by stage; the entire procedure (4 or more cascaded stages) is too slow to satisfy the requirements of online learning in real time. (2) In the cascade, the input of each stage depends heavily on the output shapes of the previous stage. Consequently, if a given stage-regressor is incrementally updated, the whole input set of the subsequent stage must be recomputed by the new regressor, and all previously trained samples must be reloaded. Obviously, these limitations lead to vast resource and time consumption when the scale of the data is large.

To this end, we propose incremental cascade regression (ICR), which trains and updates a series of non-linear regressors in a parallel manner instead of a sequential one. In particular, we adopt a Monte Carlo sampling methodology [6, 9] to approximate the shape space, so that the facial shape no longer depends on the output of the previous stage. Meanwhile, ICR is equipped with the extreme learning machine (ELM) as the discriminative regressor to learn the mapping between facial feature representations and shape variations. ELM has a powerful capability to approximate any linear or non-linear mapping (e.g., the least-squares constraints in face alignment). Moreover, ELM has a very fast training speed and low computational cost, without the hassle of parameter tuning, compared with gradient-descent-based or decision-tree-based regressors. As shown in Figure 1, ICR divides into two parts: offline and online training procedures. In the offline procedure, a generic model is learned by a parallel cascade regression of ELMs. The online procedure then updates the trained model using the Monte Carlo sampling methodology [6, 9]. In this way, incrementally updating the trained regressor of each stage does not depend on the output of the previous stage, so all the regressors can be updated in parallel. Meanwhile, we adopt the online sequential extreme learning machine (OS-ELM) [21], an incremental learning strategy for the ELM [22], to update the trained ELM regressors. In summary, our main contributions are as follows:

  • To the best of our knowledge, ICR is the first parallel cascaded regression framework equipped with non-linear regressors.

  • ICR can absorb new training data and update the model very quickly without retraining from scratch, constantly increasing the generalization capability and robustness of the model.

  • We evaluate ICR on three datasets and demonstrate the importance of incremental learning in achieving state-of-the-art performance on sequential training data.

Fig. 1: Overview of our approach

II Related Work

As described above, existing methods can be split into generative and discriminative categories. Both categories have produced diverse models for offline face alignment with varying degrees of success. The main problem our method addresses is incremental learning for face alignment, and in this section we focus on works closely related to this task.

On the generative side, very limited research has been conducted; one example extends the AAM [23] by using incremental principal component analysis (iPCA) [24] to update the generic AAM's linear appearance model with the current face image. However, this method relies heavily on pre-building a robust parametric model, as the AAM does, and needs images of the same person for training, which limits its generalization and practicality.

On the discriminative side, Asthana et al. [9] proposed an incremental version of SDM called iPar. In this work, a parallel strategy is used to implement the incremental update of the modified linear regressors. For each cascade stage, they utilize a set of perturbations drawn from a pre-computed Gaussian distribution instead of the output shape variations of the previous stage. In this way, computing each regressor is independent of the other stages, which allows the whole training procedure to run in parallel. As experimentally shown in [9], this approximate training strategy achieves accuracy similar to that of an SDM model trained sequentially. This study offers a new idea and a premise for fast incremental training of discriminative regressors. However, the linear regressors used in iPar are too weak to exploit the complex relationship between shape variables and appearance features.

Inspired by [9], we propose an incremental cascade regression, coined ICR, that pays more attention to non-linear discriminative regression in a parallel cascaded regression framework.

III Method

In this section, we present a cascade regression of extreme learning machines and its online training version. As a preliminary, we first briefly review the ELM algorithm.

III-A Extreme Learning Machine

ELM is an efficient way of building single-hidden-layer feedforward neural networks (SLFNs) [25]. Given $N$ arbitrary distinct input-output training samples $(\mathbf{x}_j, \mathbf{t}_j)$, where $\mathbf{x}_j \in \mathbb{R}^n$ is an input vector and $\mathbf{t}_j \in \mathbb{R}^m$ is a target vector, an ELM with $L$ hidden nodes and activation function $g(\cdot)$ can be mathematically modeled as

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i \, G(\mathbf{a}_i, b_i, \mathbf{x}_j) = \mathbf{t}_j, \quad j = 1, \dots, N, \qquad (1)$$

where $\mathbf{a}_i$ and $b_i$ are the randomly chosen learning parameters of the $i$-th hidden node and $G(\mathbf{a}_i, b_i, \mathbf{x}_j)$ is the output of the $i$-th hidden node w.r.t. the input $\mathbf{x}_j$. For additive units with the activation function $g$ (e.g., sigmoid), $G$ is defined as

$$G(\mathbf{a}_i, b_i, \mathbf{x}_j) = g(\mathbf{a}_i \cdot \mathbf{x}_j + b_i), \qquad (2)$$

where $\mathbf{a}_i \cdot \mathbf{x}_j$ denotes the inner product of the vectors $\mathbf{a}_i$ and $\mathbf{x}_j$ in $\mathbb{R}^n$. The compact form of the equations in Equation (1) is

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \qquad (3)$$

where

$$\mathbf{H} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix}_{N \times L}, \qquad (4)$$

$$\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^{T} \\ \vdots \\ \boldsymbol{\beta}_L^{T} \end{bmatrix}_{L \times m}, \quad \mathbf{T} = \begin{bmatrix} \mathbf{t}_1^{T} \\ \vdots \\ \mathbf{t}_N^{T} \end{bmatrix}_{N \times m}. \qquad (5)$$

$\mathbf{H}$ is the output matrix of the hidden layer, where the $i$-th column of $\mathbf{H}$ is the output vector of the $i$-th hidden node with respect to the inputs $\mathbf{x}_1, \dots, \mathbf{x}_N$. In Equation (3), the learning parameters of the hidden-layer nodes can be randomly generated, so the output weights $\boldsymbol{\beta}$ can be estimated by finding the least-squares solution of the linear system:

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}, \qquad (6)$$

where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse of $\mathbf{H}$. In practice, it usually holds that $N > L$, and Equation (6) can be rewritten as

$$\hat{\boldsymbol{\beta}} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{T}, \qquad (7)$$

where $\mathbf{H}^{\dagger} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}$.
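As a concrete illustration, the ELM training rule of Equations (1)–(7) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' Matlab code; the function names are our own, and the only trained parameters are the output weights computed by the pseudo-inverse.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, L, rng):
    """Train a batch ELM: random hidden layer, least-squares output weights.

    X: (N, n) inputs, T: (N, m) targets, L: number of hidden nodes.
    Returns (a, b, beta) so that predictions are sigmoid(X @ a + b) @ beta.
    """
    n = X.shape[1]
    a = rng.standard_normal((n, L))   # random input weights a_i (Eq. 1)
    b = rng.standard_normal(L)        # random biases b_i
    H = sigmoid(X @ a + b)            # hidden-layer output matrix H (Eq. 4)
    beta = np.linalg.pinv(H) @ T      # Moore-Penrose solution (Eq. 6)
    return a, b, beta

def elm_predict(X, a, b, beta):
    return sigmoid(X @ a + b) @ beta
```

With enough hidden nodes, the least-squares fit drives the training error close to zero, which is the property CRELM relies on in the next subsection.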

III-B Cascade Regression of Extreme Learning Machine

A facial shape can be represented by a vector $\mathbf{s} = [x_1, y_1, \dots, x_P, y_P]^{T}$ consisting of $P$ facial landmarks, where $(x_p, y_p)$ are the 2D coordinates of the $p$-th landmark. Cascade regression frameworks usually begin with an initial shape $\mathbf{s}^{0}$ and progressively refine it towards the ground truth $\mathbf{s}^{*}$ by adding a shape increment $\Delta\mathbf{s}^{t}$ stage by stage. The increment $\Delta\mathbf{s}^{t}$ is estimated by regressing the shape-indexed feature around the current shape estimate. The shape-indexed feature can be represented as $\boldsymbol{\phi}(I, \mathbf{s}) \in \mathbb{R}^{d}$, where $I$ is the input image and $d$ is the dimensionality of the feature. The function $\boldsymbol{\phi}$ can be a learned mapping [12, 17] or a hand-crafted feature (e.g., SIFT [6], HOG [26]). Linear regression has been favoured in various cascade-regression works because of its superior efficiency. However, it is not suitable for an incremental learning framework, because the shape variations become more and more complicated as the training data grows incrementally. Therefore, we propose a cascade regression of extreme learning machines (CRELM). We introduce the training procedure of CRELM below.

Given a set of $N$ facial images $\{I_i\}$ and their corresponding ground-truth shapes $\{\mathbf{s}_i^{*}\}$, the set of shape increments for stage $t$ can be calculated as $\Delta\mathbf{s}_i^{t} = \mathbf{s}_i^{*} - \mathbf{s}_i^{t-1}$, where $\mathbf{s}_i^{t-1}$ is the shape vector from stage $t-1$. To achieve a representation robust against illumination, we use SIFT features extracted from patches around the current shape at each stage. To decrease the training error for stage $t$, we learn the stage-regressor $R^{t}$ by minimizing the least-squares error function:

$$\min_{R^{t}} \sum_{i=1}^{N} \left\| \Delta\mathbf{s}_i^{t} - R^{t}\big(\boldsymbol{\phi}(I_i, \mathbf{s}_i^{t-1})\big) \right\|^{2}. \qquad (8)$$

Letting $\mathbf{x}_i = \boldsymbol{\phi}(I_i, \mathbf{s}_i^{t-1})$ and $\mathbf{t}_i = \Delta\mathbf{s}_i^{t}$, we can rewrite Equation (8) in the format of ELM:

$$\min_{\boldsymbol{\beta}^{t}} \left\| \mathbf{H}^{t}\boldsymbol{\beta}^{t} - \Delta\mathbf{S}^{t} \right\|^{2}, \qquad (9)$$

where $\mathbf{H}^{t}$ is computed by Equation (4) and we choose the sigmoid function as the activation mapping $g$. Equation (9) can be solved by Equations (6) and (7). We represent the learned regressor for stage $t$ as $R^{t} = \{\mathbf{a}^{t}, \mathbf{b}^{t}, \boldsymbol{\beta}^{t}\}$ (for clarity, we omit the stage superscript where unambiguous). After learning the regressor, the training shapes for the next stage can be updated by:

$$\mathbf{s}_i^{t} = \mathbf{s}_i^{t-1} + R^{t}\big(\boldsymbol{\phi}(I_i, \mathbf{s}_i^{t-1})\big). \qquad (10)$$

The training procedure is iterated sequentially until the average of the shape differences no longer decreases.
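The sequential CRELM loop can be sketched on toy data as follows. This is an illustrative NumPy sketch, not the paper's implementation: synthetic "shapes" and a fixed per-sample descriptor stand in for real landmarks and SIFT features, and all names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stage(Phi, dS, L, rng):
    """One cascade stage: an ELM mapping shape-indexed features to increments (Eq. 9)."""
    a = rng.standard_normal((Phi.shape[1], L))
    b = rng.standard_normal(L)
    H = sigmoid(Phi @ a + b)
    beta = np.linalg.pinv(H) @ dS     # least-squares output weights, Eq. (7)
    return (a, b, beta)

def apply_stage(Phi, stage):
    a, b, beta = stage
    return sigmoid(Phi @ a + b) @ beta

# Toy data: the "feature" is the current shape estimate stacked with a fixed
# per-sample descriptor (a stand-in for SIFT extracted around the shape).
rng = np.random.default_rng(1)
N, D = 300, 4
desc = rng.standard_normal((N, D))
s_star = np.tanh(desc @ rng.standard_normal((D, 2)))  # ground-truth "shapes"
s = np.zeros_like(s_star)                             # mean-shape initialisation

stages, errs = [], []
for t in range(4):                                    # sequential CRELM loop
    Phi = np.hstack([s, desc])
    stage = train_stage(Phi, s_star - s, L=80, rng=rng)
    s = s + apply_stage(Phi, stage)                   # shape update, Eq. (10)
    stages.append(stage)
    errs.append(np.sqrt(np.mean((s - s_star) ** 2)))  # training RMSE per stage
```

Because each stage solves a least-squares problem over the current residuals, the training error is non-increasing across stages, mirroring the stopping criterion above.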

Fig. 2: The training procedures of CRELM and Par-CRELM

III-C Parallel Cascade Regression of Extreme Learning Machine

To better approximate the non-linear relationship between image features and shape variations, Section III-B introduced an efficient non-linear mapping into the cascade regression framework. However, this inevitably increases the training time. Moreover, the sequential procedure involved in training CRELM is not suitable for incremental learning. In CRELM, as shown in Figure 2(a), the shape variations depend entirely on the output of the previous stage, so if new training data arrives, the entire cascade of regressors has to be retrained from the beginning. For example, when a set of new samples is added, the first regressor can easily be updated using the new shape variations. But once the first regressor has changed, the input set of the subsequent stage must be recomputed by propagating the entire augmented sample set through the updated first regressor. In this case, all the regressors are retrained in sequence and all previously trained samples must be reloaded, which is time-consuming and extremely expensive.

The authors of [9] pointed out that the shape variations at each stage can be approximated by a set of random perturbations drawn from a Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$, whose mean and covariance are calculated while training a sequential cascade of regressors. In addition, the work [9] verified that training based on this sampling strategy delivers testing accuracy similar to the sequential manner. Inspired by [9], we adopt the Monte Carlo sampling methodology to train a parallel cascade regression of extreme learning machines (Par-CRELM). In Par-CRELM, the shape variations required for learning the cascade of regressors do not rely on previous stages, so training can be performed in parallel. In particular, we first compute the statistics of the shape variations at each stage while training the cascade of regressors with the proposed CRELM on the offline training set. Then, in the Par-CRELM procedure, the shape variations for training the cascade of ELM regressors are drawn from the corresponding stage-distribution rather than calculated from the previous stage. The parallel process is shown in Figure 2(b). One advantage of this modification of CRELM is that the regressors of all stages can be learned in parallel with alignment accuracy similar to sequential training. Another advantage is that it provides a premise for the fast incremental learning presented in Section III-D.
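The per-stage statistics and the Monte Carlo sampling step can be sketched as follows. This is illustrative NumPy code with stand-in residuals; the dimensions and values are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-stage shape increments collected while training the
# sequential CRELM; each row is one training sample's increment at stage t.
dS_t = 0.3 * rng.standard_normal((500, 6)) + 0.1

mu = dS_t.mean(axis=0)              # stage mean of the shape variations
Sigma = np.cov(dS_t, rowvar=False)  # stage covariance

# Parallel training then draws perturbations from N(mu, Sigma) instead of
# propagating shapes through the earlier stages of the cascade.
perturb = rng.multivariate_normal(mu, Sigma, size=200)
```

Since every stage owns its own $(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$, the draws for different stages are independent, which is what makes both parallel training and the parallel update of Section III-D possible.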

III-D Incremental Cascade Regression of Extreme Learning Machine

After training Par-CRELM, the offline regressors and the distributions of shape variations are preserved. Here, we present the proposed incremental cascade regression of extreme learning machines (ICRELM) in detail.

Given a set of new training data $\{I_i^{new}\}$ and $\{\mathbf{s}_i^{*,new}\}$, where $i = 1, \dots, M$ and $M$ is the number of new samples, we record the trained regressors and distributions as $\{R^{t}\}$ and $\{\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})\}$. For an arbitrary stage $t$, $R^{t}$ contains the learned parameters $\{\mathbf{a}^{t}, \mathbf{b}^{t}, \boldsymbol{\beta}^{t}\}$ (in which $\mathbf{a}^{t}$ and $\mathbf{b}^{t}$ do not need to be updated). ICRELM aims to update the cascade of regressors in parallel using the new training data. For stage $t$, we randomly sample shape variations $\Delta\mathbf{S}_{new}^{t}$ drawn from $\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$ for the new training images and extract the shape-indexed features. The least-squares error function for all training data becomes:

$$\min_{\boldsymbol{\beta}^{t}} \left\| \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{new}^{t} \end{bmatrix} \boldsymbol{\beta}^{t} - \begin{bmatrix} \Delta\mathbf{S}^{t} \\ \Delta\mathbf{S}_{new}^{t} \end{bmatrix} \right\|^{2}, \qquad (11)$$

where $\mathbf{H}_{new}^{t}$ is computed by Equation (4) using the trained parameters $\{\mathbf{a}^{t}, \mathbf{b}^{t}\}$ of regressor $R^{t}$. The output weight can then be calculated by Equation (7):

$$\boldsymbol{\beta}_{new}^{t} = \mathbf{P}_{new}^{t} \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{new}^{t} \end{bmatrix}^{T} \begin{bmatrix} \Delta\mathbf{S}^{t} \\ \Delta\mathbf{S}_{new}^{t} \end{bmatrix}, \qquad (12)$$

where

$$\mathbf{P}_{new}^{t} = \mathbf{P}^{t} - \mathbf{P}^{t} (\mathbf{H}_{new}^{t})^{T} \big( \mathbf{I} + \mathbf{H}_{new}^{t} \mathbf{P}^{t} (\mathbf{H}_{new}^{t})^{T} \big)^{-1} \mathbf{H}_{new}^{t} \mathbf{P}^{t}, \quad \mathbf{P}^{t} = \big( (\mathbf{H}^{t})^{T} \mathbf{H}^{t} \big)^{-1}. \qquad (13)$$

Referring to [21], we can update $\boldsymbol{\beta}^{t}$ via:

$$\boldsymbol{\beta}_{new}^{t} = \boldsymbol{\beta}^{t} + \mathbf{P}_{new}^{t} (\mathbf{H}_{new}^{t})^{T} \big( \Delta\mathbf{S}_{new}^{t} - \mathbf{H}_{new}^{t} \boldsymbol{\beta}^{t} \big). \qquad (14)$$

In this way, the cascade of regressors can be updated in parallel. The complete training procedure of ICRELM is described in Algorithm 1.

0:  Input: new data $\{I_i^{new}\}$, $\{\mathbf{s}_i^{*,new}\}$; trained regressors $\{R^{t}\}$; distributions $\{\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})\}$; number of stages $T$; number of new samples $M$.
0:  Output: updated regressors $\{R_{new}^{t}\}$, $t = 1, \dots, T$.
1:  Parallel for $t = 1, \dots, T$
2:    • Draw $M$ samples $\Delta\mathbf{S}_{new}^{t}$ from the distribution $\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$
3:    • Extract the shape-indexed features for the new images
4:    • Assemble the feature matrix and the target matrix $\Delta\mathbf{S}_{new}^{t}$
5:    • Compute $\mathbf{H}_{new}^{t}$ using Equation (4)
6:    • Update $\mathbf{P}^{t}$ using Equation (13) and $\boldsymbol{\beta}^{t}$ using Equation (14)
7:  End for
Algorithm 1 ICRELM Update Procedure
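The update at the heart of the algorithm can be sketched on synthetic data as follows. This hypothetical NumPy code (not the authors' implementation) follows the OS-ELM recursion in the style of Equations (13)–(14) and checks its defining property: the recursively updated output weights coincide with retraining from scratch on the combined data, which is exactly why ICRELM never needs to reload previously trained samples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, m, L = 5, 2, 20
a = rng.standard_normal((n, L))   # fixed random hidden parameters
b = rng.standard_normal(L)        # (a^t, b^t are NOT updated online)
W = rng.standard_normal((n, m))   # toy linear map generating the targets

def hidden(X):
    return sigmoid(X @ a + b)

# Offline model: beta = P H0^T T0 with P = (H0^T H0)^(-1), as in Eq. (7)
X0 = rng.standard_normal((100, n)); T0 = X0 @ W
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0)
beta = P @ H0.T @ T0

# A new chunk arrives: recursive update in the style of Eqs. (13)-(14)
X1 = rng.standard_normal((30, n)); T1 = X1 @ W
H1 = hidden(X1)
K = np.linalg.inv(np.eye(len(X1)) + H1 @ P @ H1.T)
P = P - P @ H1.T @ K @ H1 @ P                 # Eq. (13): Woodbury update of P
beta = beta + P @ H1.T @ (T1 - H1 @ beta)     # Eq. (14): weight correction

# Reference: batch retraining on all 130 samples from scratch
H_all = np.vstack([H0, H1])
T_all = np.vstack([T0, T1])
beta_batch = np.linalg.lstsq(H_all, T_all, rcond=None)[0]
```

The update touches only the small new chunk (a 30×30 inverse here) rather than the full Gram matrix, which is what makes the per-stage update fast enough for real-time use.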

IV Experiments

The face alignment experiments are presented in two parts. The first part evaluates the accuracy of the model as it is continually updated by ICRELM with successive batches of new training data. The second part compares static models trained by the proposed Par-CRELM with other state-of-the-art methods on public datasets. First, we briefly introduce the three datasets used in the face alignment experiments and their evaluation criteria.

Iv-a Implementation Details

IV-A1 Datasets

  • LFPW (29 landmarks) [27] is collected from the web and includes 1000 training and 300 test images. However, since some URLs are invalid, we use only 798 training and 221 test images. The images exhibit large variations in pose, occlusion, facial expression, and illumination.

  • HELEN (68 landmarks) [28] contains 2,330 high-resolution web images, divided into a training set of 2,000 images and a test set of 330 images.

  • 300-W (68 landmarks) [29] is collected from existing datasets, including LFPW, HELEN, AFW, and a challenging dataset called IBUG. We follow the same division as [12, 7]: the training set is made up of the training samples of HELEN, the training samples of LFPW, and AFW, with 3148 images in total. According to the difficulty of alignment, the test set is grouped into a Common subset (the test samples from LFPW and HELEN, 554 images in total) and a Challenging subset (IBUG, 135 images in total).

Since these datasets provide prescribed face bounding boxes, we do not use any face detector, and thus no faces are missed during testing.

IV-A2 Standard Evaluation Protocols

We adopt two types of comparison for evaluation: the value of the average error and the curve of cumulative error. They are defined as follows:

  • Average error: following most prior work, we use the standard mean landmark error normalized by the inter-pupil distance. It is computed as $err = \frac{1}{P}\sum_{p=1}^{P} \frac{\|\mathbf{p}_p - \mathbf{g}_p\|}{d_{IPD}}$, where $\mathbf{p}_p$ and $\mathbf{g}_p$ denote the $p$-th estimated and ground-truth landmark positions respectively, and $d_{IPD}$ denotes the inter-pupil distance. We report the error averaged over all annotated landmarks of each test set. For clarity, we omit the '%' notation in the reported results.

  • CED curve: we also draw the cumulative error distribution (CED) curve, computed as $CED(e) = \frac{N_{err \le e}}{N}$, where the numerator denotes the number of test samples whose error is no greater than $e$.
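The two protocols above can be sketched in a few lines of NumPy (an illustrative sketch; the function names are our own, not from the paper):

```python
import numpy as np

def normalized_mean_error(pred, gt, left_pupil, right_pupil):
    """Mean landmark error normalised by the inter-pupil distance.

    pred, gt: (P, 2) arrays of predicted / ground-truth landmark coordinates.
    """
    d = np.linalg.norm(left_pupil - right_pupil)   # inter-pupil distance
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / d

def ced_curve(errors, thresholds):
    """Fraction of test images whose per-image error is below each threshold."""
    errors = np.asarray(errors)
    return np.array([np.mean(errors < t) for t in thresholds])
```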

IV-A3 Settings

For the features, we extract a SIFT [30] descriptor on local patches around each landmark. We set the number of hidden nodes to 500, 1000, and 1800 for the LFPW, HELEN, and 300-W datasets respectively. Following most face alignment models, we fix the number of cascade stages.

IV-B Validity of Online Learning

This experiment validates the utility of ICRELM (Section III-D) when new data batches arrive continuously. For this purpose, we designed the following experiments on the LFPW, HELEN, and 300-W datasets. Each dataset was partitioned equally into 6 batches. We used Par-CRELM (Section III-C) to train an offline model on the first batch as the baseline. Then we employed ICRELM to continuously update the generic model batch by batch. From Figure 3, we observe a consistent increase in face alignment accuracy as the online model is incrementally updated with each new batch of training data. The worst curve is produced by the offline model; inevitably, a model trained with few samples tends to have poor generalization and robustness. The other curves are generated after adding the remaining batches of samples from the corresponding dataset. The curves illustrate that ICRELM can update the model effectively when necessary and achieves higher and higher accuracy as the training data increases. It is also observed that the accuracy of the updated model no longer rises significantly after the data increases by 66%, because the difference between successive training sets becomes smaller and smaller and the generalization capability of the model stabilizes. Notably, ICRELM updates the generic model very quickly: implemented in Matlab on a single core of an Intel i5-4570@3.2GHz CPU, it runs at over 110 fps, 33 fps, and 24 fps on the LFPW, HELEN, and 300-W datasets respectively.

(a) LFPW Test set
(b) HELEN Test set
(c) 300-W Test set
Fig. 3: ICRELM results on different datasets

IV-C Comparison with Static Models

The offline model is the foundation of online learning. To validate the capability of the model trained by Par-CRELM on different datasets, we compare its results with existing state-of-the-art methods, including CNN-based frameworks [7, 31, 32, 33, 17, 34, 35], a 3D-based model [36], and various cascade regressions (CRs). These results are reported in Table I. On the LFPW dataset (Table I(a)), as can be seen, Par-CRELM outperforms all the listed CR-based methods. Meanwhile, Par-CRELM also produces a competitive result compared with CNN-based architectures. For each dataset, Par-CRELM's accuracy is lower than DR's. Apart from structural differences, one possible reason is that DR uses more cascaded iterations than Par-CRELM; correspondingly, it consumes more time than ours in both training and testing. On the HELEN dataset (Table I(b)), our approach is more accurate than CFAN and MTCNN. Specifically, CFAN maps local features to the shape space using deep auto-encoder networks, and MTCNN applies a multi-task convolutional network to face alignment. The result shows that Par-CRELM has superior learning capability on smaller-scale data, whereas CNN-based methods are prone to be restricted by dataset size; thus our approach offers an advantage on the HELEN dataset. On the 300-W Common subset (Table I(c)), the result of Par-CRELM is superior to most cascade regression methods and certain CNN frameworks such as DR-Seq, SCNN, DeFA, and GECSAN, but inferior to methods including DR and Deep Regression. This dataset is larger than LFPW and HELEN, which provides sufficient discriminative information for CNN-based methods to learn features and may be more conducive to their producing better results. Unfortunately, Par-CRELM performs unsatisfactorily on the Challenging subset. In contrast, CNN-based methods such as SCNN, DeFA, and GECSAN achieve high accuracy there, because they can learn more adequate features by tuning with supervised information, which is essential for good performance on very challenging samples. While the static model trained by Par-CRELM has poor predictive capability in these challenging situations, it can be tuned with challenging samples to improve its localization ability.

Method          Error
OC [37]         5.07
CE [28]         3.99
EGM [38]        3.98
RFLD [39]       3.93
PCPR [40]       3.50
SDM [6]         3.49
DR-Seq [7]      3.90
DR-SDM [7]      3.40
ESR [13]        3.47
LBF [12]        3.35
Deep Reg [31]   3.45
RSR [41]        3.34
cGPRT [8]       3.51
DR [7]          3.31
Ours            3.32
(a) LFPW

Method          Error
Zhu et al. [42] 8.16
PCPR [40]       5.93
SDM [6]         5.50
DR-Seq [7]      6.47
DR-SDM [7]      5.40
CFAN [32]       5.53
MTCNN [33]      5.49
DR [7]          5.09
Ours            5.37
(b) HELEN

Method            Common Subset   Challenging Subset
Zhu et al. [42]   8.22            18.33
PCPR [40]         6.18            17.26
SDM [6]           5.70            15.40
DR-Seq [7]        5.44            17.60
DR-SDM [7]        4.67            14.30
ESR [13]          5.28            17.00
LBF [12]          4.95            11.98
CFAN [32]         5.50            -
Deep Reg [31]     4.51            13.80
DR [7]            4.35            13.30
3DDFA [36]        6.15            10.59
SCNN [17]         5.43            9.88
3DDFA+SDM [36]    5.53            9.56
DeFA [34]         5.37            9.38
GECSAN [35]       5.42            11.80
Ours              5.10            13.87
(c) 300-W

TABLE I: Averaged errors compared with state-of-the-art approaches (the top 2 results for each dataset are marked in bold in the original). The results of all methods are as reported in the original papers or related literature.

IV-D Conclusion and Future Works

In this paper, we have proposed an incremental learning framework for face alignment, coined incremental cascade regression (ICR), which includes offline and online training procedures. A cascade regression of extreme learning machines is first introduced, and its parallel version is developed to train an offline model. We then present an efficient method to incrementally update a trained model to make it more generalizable or more specific. The experimental results demonstrate the validity of online learning. Using our Matlab implementation, the entire incremental learning procedure runs at over 110 fps, 33 fps, and 24 fps on the LFPW, HELEN, and 300-W datasets respectively, on a single core of an Intel i5-4570@3.2GHz CPU. Possible future work includes replacing the hand-crafted SIFT features with deep features [43, 16] and exploring new optimization strategies [44, 45].

References

  • [1] L. Wu, R. Hong, Y. Wang, and M. Wang, “Cross-entropy adversarial view adaptation for person re-identification,” IEEE Transactions on Circuits and Systems for Video Technology, 2019.
  • [2] T. Qiu, R. Qiao, and D. O. Wu, “Eabs: An event-aware backpressure scheduling scheme for emergency internet of things,” IEEE Transactions on Mobile Computing, vol. 17, no. 1, pp. 72–84, 2018.
  • [3] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 681–685, 2001.
  • [4] G. Tzimiropoulos and M. Pantic, “Gauss-newton deformable part models for face alignment in-the-wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858, 2014.
  • [5] X. Liu, “Discriminative face alignment,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 1941–1954, 2009.
  • [6] X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 532–539, 2013.
  • [7] B. Shi, X. Bai, W. Liu, and J. Wang, “Face alignment with deep regression,” IEEE transactions on neural networks and learning systems, vol. 29, no. 1, pp. 183–194, 2018.
  • [8] D. Lee, H. Park, and C. D. Yoo, “Face alignment using cascade gaussian process regression trees,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4204–4212, 2015.
  • [9] A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic, “Incremental face alignment in the wild,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1859–1866, 2014.
  • [10] G. Tzimiropoulos, “Project-out cascaded regression with an application to face alignment,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3659–3667, 2015.
  • [11] Z. Cui, S. Xiao, Z. Niu, S. Yan, and W. Zheng, “Recurrent shape regression,” IEEE transactions on pattern analysis and machine intelligence, 2018.
  • [12] S. Ren, X. Cao, Y. Wei, and J. Sun, “Face alignment at 3000 fps via regressing local binary features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1685–1692, 2014.
  • [13] X. Cao, Y. Wei, F. Wen, and J. Sun, “Face alignment by explicit shape regression,” International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.
  • [14] P. Dollár, P. Welinder, and P. Perona, “Cascaded pose regression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1078–1085, IEEE, 2010.
  • [15] X. Jin and X. Tan, “Face alignment in-the-wild: A survey,” Computer Vision and Image Understanding, vol. 162, pp. 1–22, 2017.
  • [16] L. Wu, Y. Wang, and L. Shao, “Cycle-consistent deep generative hashing for cross-modal retrieval,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1602–1612, 2019.
  • [17] A. Jourabloo, M. Ye, X. Liu, and L. Ren, “Pose-invariant face alignment with a single cnn,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3200–3209, 2017.
  • [18] Y. Wang, X. Lin, L. Wu, and W. Zhang, “Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval,” IEEE Transactions on Image Processing, vol. 26, no. 3, pp. 1393–1404, 2017.
  • [19] L. Wu, Y. Wang, L. Shao, and M. Wang, “3-d personvlad: Learning deep global representations for video-based person reidentification,” IEEE transactions on neural networks and learning systems, 2019.
  • [20] E. Sánchez-Lozano, G. Tzimiropoulos, B. Martinez, F. De la Torre, and M. Valstar, “A functional regression approach to facial landmark tracking,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 9, pp. 2037–2050, 2018.
  • [21] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, “A fast and accurate online sequential learning algorithm for feedforward networks,” IEEE Transactions on neural networks, vol. 17, no. 6, pp. 1411–1423, 2006.
  • [22] G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 2, pp. 513–529, 2012.
  • [23] J. Sung and D. Kim, “Adaptive active appearance model with incremental learning,” Pattern recognition letters, vol. 30, no. 4, pp. 359–367, 2009.
  • [24] A. Levey and M. Lindenbaum, “Sequential karhunen-loeve basis extraction and its application to images,” IEEE Transactions on Image processing, vol. 9, no. 8, pp. 1371–1374, 2000.
  • [25] J. Zhang, H. Wang, and Y. Ren, “Robust tracking via weighted online extreme learning machine,” Multimedia Tools and Applications, pp. 1–25, 2018.
  • [26] C. Liu, L. Feng, H. Wang, and B. Wu, “Face alignment via multi-regressors collaborative optimization,” IEEE Access, vol. 7, pp. 4101–4112, 2019.
  • [27] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive facial feature localization,” in European conference on computer vision, pp. 679–692, Springer, 2012.
  • [28] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, “Localizing parts of faces using a consensus of exemplars,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 12, pp. 2930–2940, 2013.
  • [29] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “300 faces in-the-wild challenge: The first facial landmark localization challenge,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 397–403, 2013.
  • [30] D. G. Lowe et al., “Object recognition from local scale-invariant features,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157, 1999.
  • [31] B. Shi, X. Bai, W. Liu, and J. Wang, “Deep regression for face alignment,” arXiv preprint arXiv:1409.5230, 2014.
  • [32] J. Zhang, S. Shan, M. Kan, and X. Chen, “Coarse-to-fine auto-encoder networks (cfan) for real-time face alignment,” in European Conference on Computer Vision, vol. 8690, pp. 1–16, 2014.
  • [33] Y. Sun, X. Zhang, and C. Li, “Multi-task convolution network for face alignment,” in Journal of Physics: Conference Series, vol. 887, p. 012079, 2017.
  • [34] Y. Liu, A. Jourabloo, W. Ren, and X. Liu, “Dense face alignment,” in Proc. IEEE Int. Conf. Comput. Vis. Workshops, pp. 1619–1628, 2017.
  • [35] J. Zhang and H. Hu, “Exemplar-based cascaded stacked auto-encoder networks for robust face alignment,” Computer Vision and Image Understanding, vol. 171, pp. 95–103, 2018.
  • [36] X. Zhu, Z. Lei, and S. Z. Li, “Face alignment in full pose range: A 3d total solution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
  • [37] G. Ghiasi and C. C. Fowlkes, “Occlusion coherence: Localizing occluded faces with a hierarchical deformable part model,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2385–2392, 2014.
  • [38] F. Zhou, J. Brandt, and Z. Lin, “Exemplar-based graph matching for robust facial landmark localization,” in Computer Vision (ICCV), 2013 IEEE International Conference on, pp. 1025–1032, 2013.
  • [39] Y. Wu and Q. Ji, “Robust facial landmark detection under significant head poses and occlusion,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3658–3666, 2015.
  • [40] X. P. Burgosartizzu, P. Perona, and P. Dollár, “Robust face landmark estimation under occlusion,” in Computer Vision (ICCV), 2013 IEEE International Conference on, pp. 1513–1520, 2013.
  • [41] S. Tan, D. Chen, C. Guo, and et al, “A robust shape reconstruction method for facial feature point detection,” Computational intelligence and neuroscience, vol. 2017, 2017.
  • [42] X. Zhu and D. Ramanan, “Face detection, pose estimation, and landmark localization in the wild,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2879–2886, 2012.
  • [43] M. Esmaeilpour, P. Cardinal, and A. L. Koerich, “A robust approach for securing audio classification against adversarial attacks,” arXiv preprint arXiv:1904.10990, 2019.
  • [44] Y. Wang, W. Zhang, L. Wu, X. Lin, M. Fang, and S. Pan, “Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering,” in International Joint Conference on Artificial Intelligence (IJCAI), pp. 2153–2159, 2016.
  • [45] Y. Wang, L. Wu, X. Lin, and J. Gao, “Multiview spectral clustering via structured low-rank matrix factorization,” IEEE transactions on neural networks and learning systems, vol. 29, no. 10, pp. 4833–4843, 2018.