I Introduction
Face alignment aims to locate a sparse set of facial landmarks in a given facial image or video. It is a topic of interest in computer vision because many subsequent face analysis tasks, such as face recognition
[1], facial animation, and authentication on the Internet of Things [2], depend heavily on the accurate localization of facial landmarks. Over the decades, various face alignment procedures have been proposed, which can be broadly classified into generative models and discriminative models. The generative approaches adopt an analysis-by-synthesis loop in which the optimization strategy attempts to find the optimal shape parameters by maximizing the joint posterior probability between the pre-built deformable model and the features of the input image. Representative examples of this category are the Active Appearance Model (AAM)
[3] and the Gauss-Newton Deformable Part Model (GN-DPM) [4]. The discriminative models seek to learn discriminative information (i.e., a discriminative function [5, 6]) that directly maps a representation of facial appearance to facial landmarks. Many discriminative methods utilize the popular Cascaded Regression (CR) framework, in which a series of regressors is learned in a cascaded manner to gradually refine an initialization toward the ground truth. Numerous cascaded regression methodologies have been shown to produce excellent results on face alignment tasks and validate the CR framework's superior efficiency and accuracy [7, 8, 9, 10, 11]. The most efficient of these methods is LBF [12], which uses a set of local binary features to learn a cascade of linear regressors and runs at over 3000 fps on a standard desktop when locating a few dozen landmarks. The authors of [6]
provide a theoretical explanation of cascaded linear regression from the perspective of least-squares optimization and solve it as a supervised descent methodology. While these cascaded linear regression methods are very efficient, they suffer from poor fitting capability when exploiting the nonlinear and complex relationship between the feature space and shape variations in unconstrained scenarios.
Considering the limitations of linear regression, some nonlinear regressors based on decision trees, such as boosting [13]
and random forests
[14], were introduced into cascaded regression. However, these ensemble learning models are prone to overfitting and suffer from a very high computational burden [15]. As the research wave of deep learning
[16] swept through the image domain, many deep-learning-based methods [7, 17, 18, 19] have been proposed and achieved breakthrough successes on some large-scale datasets. Because of their complicated structure and great number of hyper-parameters, these deep frameworks tend to consume massive amounts of time during training. Moreover, a deep architecture requires a complete retraining process if the training data is supplemented.
Although these discriminative methods have accomplished superior performance for face alignment on unconstrained faces and in more challenging situations, they are limited by a static generic model that is built completely on offline training data. Such a static model cannot be updated in real time to handle certain specific tasks (e.g., person-specific landmark tracking in video). Since the entire training procedure is very time-consuming and expensive, how best to exploit discriminative cascade regression for incremental learning is an intractable issue. A few studies [9, 20] have started to work on this vital issue from the viewpoint of an incremental linear regression function. However, a linear regressor tends to be limited to a linear relationship between dependent and independent variables.
To overcome the aforementioned limitations, in this paper we study incremental training of cascaded regression with nonlinear regressors. As the foundation of our algorithm, the CR framework is one of the most practical and effective frameworks for localizing facial landmarks. Nevertheless, the traditional CR framework still has two limitations that prevent incremental training. (1) It successively trains a series of regressors stage by stage. The entire procedure (4 or more cascaded stages) is too slow to satisfy the requirements of online learning in real time. (2) In the cascade, the input of each stage depends intensely on the output shapes of the previous stage. In that case, if a certain stage regressor is incrementally updated, the whole input set of the subsequent stage must be recomputed by the new regressor, and all formerly trained samples must be reloaded. Obviously, these limitations lead to vast resource and time consumption when the scale of the data is large.
For these reasons, we propose incremental cascade regression (ICR), which aims to train and update the series of nonlinear regressors in a parallel manner instead of a sequential one. In particular, we adopt the Monte Carlo sampling methodology [6, 9]
to approximate the shape space, in which the facial shape no longer depends on the output of the previous stage. Meanwhile, ICR is equipped with the extreme learning machine (ELM) as the discriminative regressor to learn the mapping between facial feature representations and shape variations. ELM has a powerful capability to approximate any linear or nonlinear mapping (e.g., the least-squares constraints in face alignment). Moreover, ELM has a very fast training speed and low computational cost without the hassle of parameter tuning, compared to gradient-descent-based or decision-tree-based regressors. As shown in Figure
1, ICR divides into two parts: offline and online training procedures. In the offline training procedure, a generic model is learned by a parallel cascade regression of ELM. Then the online training procedure updates the trained model using the Monte Carlo sampling methodology [6, 9]. In this way, incrementally updating the trained regressor of each stage does not depend on the output of the previous stage, so we can update all the regressors in parallel. Meanwhile, we adopt the online sequential extreme learning machine (OS-ELM) [21], an incremental learning strategy for the ELM [22], to update the trained ELM regressors. In summary, our main contributions are as follows:
To the best of our knowledge, ICR is the first parallel cascaded regression framework equipped with nonlinear regressors.

ICR is capable of absorbing new training data and updating the model very quickly without retraining from scratch, which can constantly increase the generalization ability and robustness of the model.

We evaluate ICR on three datasets and demonstrate the importance of incremental learning in achieving state-of-the-art performance on sequential training data.
II Related Work
As described above, the existing methods can be split into generative and discriminative categories. Both categories have proposed diverse models for offline face alignment with varying degrees of success. The main problem that our method can address is incremental learning for face alignment. In this section, we will focus on closely related works on this task.
On the generative side, very limited research has been introduced; one example extends the AAM [23]
, in which incremental principal component analysis (iPCA)
[24] is used to update the generic AAM's linear appearance model with the current face image. However, this method relies heavily on pre-building a robust parametric model, as in the AAM, and needs images of the same person for training, which limits its generalization and practicality.
On the discriminative side, Asthana et al. [9]
proposed an incremental version of SDM called iPar. In this work, a parallel strategy is used to implement the incremental update of the modified linear regressors. For each cascade stage, they utilize a set of perturbations from a pre-populated Gaussian distribution instead of the output shape variations of the previous stage. In this way, computing each regressor is independent of the other stages, which allows the whole training procedure to run in a parallel manner. As experimentally shown in
[9], such an approximate training strategy achieves accuracy similar to an SDM model trained in a sequential manner. This study offers a new idea and a premise for fast incremental training of discriminative regressors. However, the linear regressors used in iPar are too weak to exploit the complex relationship between shape variables and appearance features. Inspired by [9], we propose an incremental cascade regression, coined ICR, that pays more attention to nonlinear discriminative regression in a parallel cascaded regression framework.
III Method
In this section, we present the cascade regression of extreme learning machine and its online training version. As a preliminary, we first briefly review the ELM algorithm.
III-A Extreme Learning Machine
ELM is an efficient way of building single-hidden-layer feedforward neural networks (SLFNs)
[25]. Given $N$ arbitrary distinct training samples $\{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N}$, where $\mathbf{x}_i \in \mathbb{R}^{d}$ is an input vector and $\mathbf{t}_i \in \mathbb{R}^{m}$ is a target vector, an ELM with $L$ hidden nodes and activation function $G$ can be mathematically modeled as

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, G(\mathbf{a}_i, b_i, \mathbf{x}_j) = \mathbf{t}_j, \quad j = 1, \ldots, N, \tag{1}$$

where $\mathbf{a}_i$ and $b_i$ are the randomly chosen learning parameters of the $i$-th hidden node and $\boldsymbol{\beta}_i$ is the output weight of the $i$-th hidden node. For additive units with activation function $g$
(e.g., sigmoid), $G$ is defined as

$$G(\mathbf{a}_i, b_i, \mathbf{x}_j) = g(\mathbf{a}_i \cdot \mathbf{x}_j + b_i), \tag{2}$$

where $\mathbf{a}_i \cdot \mathbf{x}_j$ denotes the inner product of the vectors $\mathbf{a}_i$ and $\mathbf{x}_j$ in $\mathbb{R}^{d}$. The compact form of the equations in Eq. (1) is

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{3}$$

where

$$\mathbf{H} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix}_{N \times L}, \tag{4}$$

$$\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^{T} \\ \vdots \\ \boldsymbol{\beta}_L^{T} \end{bmatrix}_{L \times m}, \quad \mathbf{T} = \begin{bmatrix} \mathbf{t}_1^{T} \\ \vdots \\ \mathbf{t}_N^{T} \end{bmatrix}_{N \times m}. \tag{5}$$

$\mathbf{H}$ is the output matrix of the hidden layer, where the $i$-th column of $\mathbf{H}$ is the output vector of the $i$-th hidden node with respect to the inputs $\mathbf{x}_1, \ldots, \mathbf{x}_N$. In Eq. (3), the learning parameters of the hidden-layer nodes can be randomly generated, so the output weights
$\boldsymbol{\beta}$ can be estimated by finding the least-squares solution of the linear system:

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}, \tag{6}$$

where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse of $\mathbf{H}$. In practice, it usually holds that $N > L$, so Eq. (6) can be rewritten as

$$\hat{\boldsymbol{\beta}} = \mathbf{P}\mathbf{H}^{T}\mathbf{T}, \tag{7}$$

where $\mathbf{P} = (\mathbf{H}^{T}\mathbf{H})^{-1}$.
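To make the closed-form training above concrete, the following is a minimal NumPy sketch of Eqs. (1)-(7). The function names and the toy sine-regression target are our own illustrative choices, not part of the paper.

```python
import numpy as np

def elm_train(X, T, L=50, seed=0):
    """Basic ELM: random hidden layer (Eqs. 1-4) + least-squares weights (Eq. 6)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.uniform(-1, 1, (d, L))            # random input weights a_i
    b = rng.uniform(-1, 1, L)                 # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))    # sigmoid hidden-layer outputs, Eq. (4)
    beta = np.linalg.pinv(H) @ T              # Moore-Penrose solution, Eq. (6)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta

# fit a noisy-free 1-D regression target as a sanity check
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
T = np.sin(X)
A, b, beta = elm_train(X, T, L=50)
err = np.mean((elm_predict(X, A, b, beta) - T) ** 2)  # training MSE
```

Because the hidden parameters are never tuned, training reduces to a single pseudo-inverse, which is the source of ELM's speed advantage noted in the introduction.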
III-B Cascade Regression of Extreme Learning Machine
A facial shape can be formed by a vector $\mathbf{S} = [x_1, y_1, \ldots, x_p, y_p]^{T}$ consisting of $p$ facial landmarks, where $(x_i, y_i)$ are the 2D coordinates of the $i$-th landmark. Cascade regression frameworks usually begin with an initial shape $\mathbf{S}^{0}$ and progressively refine it toward the ground truth by adding a shape increment $\Delta\mathbf{S}^{t}$ stage by stage. $\Delta\mathbf{S}^{t}$ is estimated by regressing the shape-indexed feature extracted around the current shape estimate. The shape-indexed feature can be represented as $\Phi(\mathbf{I}, \mathbf{S}^{t-1}) \in \mathbb{R}^{D}$, where $\mathbf{I}$ is the input image and $D$ is the dimensionality of the feature. The function $\Phi$ can be a learning-based mapping [12, 17] or a hand-crafted feature (e.g., SIFT [6], HOG [26]). Linear regression has been most favoured in various cascade-regression-based works because of its superior efficiency. However, it is not suitable for an incremental learning framework, because the shape variations become more and more complicated as incremental training data accumulates. Therefore, we propose a cascade regression of extreme learning machine (CR-ELM). We introduce the training procedure of CR-ELM below.
Given a set of facial images $\{\mathbf{I}_i\}_{i=1}^{N}$ and their corresponding ground-truth shapes $\{\hat{\mathbf{S}}_i\}_{i=1}^{N}$, the set of shape increments at stage $t$ can be calculated by $\Delta\mathbf{S}_i^{t} = \hat{\mathbf{S}}_i - \mathbf{S}_i^{t-1}$, where $\mathbf{S}_i^{t-1}$ is the shape vector from stage
$t-1$. To achieve a representation robust against illumination, we use SIFT features extracted from patches around the current shape at each stage. To decrease the training error at stage
$t$, we can learn the stage regressor $\mathcal{R}^{t}$ by minimizing the least-squares error function:

$$\min \sum_{i=1}^{N} \left\| \Delta\mathbf{S}_i^{t} - \mathcal{R}^{t}\big(\Phi(\mathbf{I}_i, \mathbf{S}_i^{t-1})\big) \right\|_2^2. \tag{8}$$

Let $\Delta\mathbf{S}^{t}$ stack the increments $\Delta\mathbf{S}_i^{t}$ row by row and let $\mathbf{H}^{t}$ be the hidden-layer output matrix computed from the shape-indexed features; then Eq. (8) can be rewritten in the format of ELM:

$$\min_{\boldsymbol{\beta}^{t}} \left\| \mathbf{H}^{t}\boldsymbol{\beta}^{t} - \Delta\mathbf{S}^{t} \right\|_2^2, \tag{9}$$

where $\mathbf{H}^{t}$ is computed by Eq. (4)
and we choose the sigmoid function as the activation mapping
$g$. Eq. (9) can be solved by Eqs. (6) and (7). We represent the learned regressor for stage $t$ as $\mathcal{R}^{t} = \{\mathbf{a}^{t}, \mathbf{b}^{t}, \boldsymbol{\beta}^{t}\}$. After learning the regressor, the training shapes for the next stage can be updated by

$$\mathbf{S}_i^{t} = \mathbf{S}_i^{t-1} + \mathcal{R}^{t}\big(\Phi(\mathbf{I}_i, \mathbf{S}_i^{t-1})\big). \tag{10}$$

The training procedure is iterated sequentially until the average of the shape differences no longer decreases.
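The sequential loop of Eqs. (8)-(10) can be illustrated on synthetic data. Everything below (the toy "shapes", the stand-in feature map `phi`, the dimensions) is a hypothetical sketch rather than the paper's actual SIFT-based pipeline; it only demonstrates that each ELM stage reduces the training residual.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, d, L = 300, 4, 32, 80           # samples, shape dim, feature dim, hidden nodes
Y = rng.normal(0, 1, (N, k))          # toy ground-truth "shapes"
S = np.zeros((N, k))                  # mean-shape initialisation S^0
M = rng.normal(0, 1, (d, k))          # fixed map standing in for SIFT extraction

def phi(Y, S):
    # toy shape-indexed feature: a nonlinear function of the current residual
    return np.tanh((Y - S) @ M.T)

def elm_fit(F, T, L, rng):
    # random hidden layer (Eq. 4) + least-squares output weights (Eq. 7)
    A = rng.uniform(-1, 1, (F.shape[1], L))
    b = rng.uniform(-1, 1, L)
    H = 1 / (1 + np.exp(-(F @ A + b)))
    return A, b, np.linalg.pinv(H) @ T

errs = []
for t in range(4):                    # Eqs. (8)-(10): learn R^t, update shapes
    F = phi(Y, S)
    A, b, beta = elm_fit(F, Y - S, L, rng)
    H = 1 / (1 + np.exp(-(F @ A + b)))
    S = S + H @ beta                  # Eq. (10)
    errs.append(np.mean((Y - S) ** 2))
```

Since each stage solves a least-squares problem, the training error is non-increasing across stages, mirroring the stopping criterion stated above.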
III-C Parallel Cascade Regression of Extreme Learning Machine
In order to better approximate the nonlinear relationship between image features and shape variations, we introduced an efficient nonlinear mapping into the cascade regression framework in Section III-B. However, this inevitably increases the time consumption of the training process. Besides, the sequential procedure involved in training CR-ELM is not suitable for the task of incremental learning. In CR-ELM, as shown in Figure 2(a), the shape variations depend entirely on the output of the previous stage, so if new training data arrives, the entire cascade of regressors has to be retrained from the beginning. For example, if a set of new samples is added, the first regressor can easily be updated by utilizing the new shape variations. However, once the first regressor has changed, the input set of the subsequent stage must be recomputed by propagating the entire augmented set through the updated first regressor. In this case, all the regressors will be retrained in sequence and all previously trained samples must be reloaded, which is time-consuming and extremely expensive.
The authors of [9] pointed out that the shape variations at each stage $t$ can be approximated by a set of random perturbations drawn from a Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$, whose mean and covariance can be calculated while training a sequential cascade of regressors. In addition, it has been verified in [9] that a training procedure based on this sampling strategy delivers testing accuracy similar to the sequential manner. Inspired by [9], we adopt the Monte Carlo sampling methodology to train a parallel cascade regression of extreme learning machine (ParCR-ELM). In ParCR-ELM, the shape variations required for learning the cascade of regressors do not rely on previous stages, and the training can be performed in parallel. In particular, we first compute the statistics of the shape variations at each stage while training the cascade of regressors with the proposed CR-ELM on the offline training set. Then, in the ParCR-ELM procedure, the shape variations for training the cascade of ELM regressors are drawn from the corresponding stage distribution rather than calculated from the previous stage. The parallel process is shown in Figure 2(b). One advantage of this modification of CR-ELM is that the regressors of all stages can be learned in parallel with alignment accuracy similar to sequential training. Another advantage is that it provides a premise for the fast incremental learning presented in Section III-D.
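A minimal sketch of the sampling idea follows. The per-stage statistics below are made-up numbers standing in for those measured during a real CR-ELM pass; the point is only that each stage's training pairs are built from its own distribution, making the stages independent.

```python
import numpy as np

# toy per-stage statistics N(mu^t, Sigma^t); in the method these would be
# measured during one sequential CR-ELM pass (Sec. III-B)
stage_stats = [(np.full(4, 1.0 / (t + 1)), np.eye(4) * 0.1 / (t + 1))
               for t in range(4)]

def make_stage_training_set(t, Y, seed):
    """Build stage-t training pairs without running stages 0..t-1."""
    rng = np.random.default_rng(seed)
    mu, Sigma = stage_stats[t]
    dS = rng.multivariate_normal(mu, Sigma, size=len(Y))  # sampled variations
    S_t = Y - dS       # perturbed "current" shapes for this stage
    return S_t, dS     # features phi(I, S_t) would then be extracted at S_t

Y = np.random.default_rng(1).normal(0, 1, (500, 4))
# each stage depends only on its own distribution -> embarrassingly parallel
stage_data = [make_stage_training_set(t, Y, seed=t) for t in range(4)]
```

Because no stage consumes another stage's output, the list comprehension could equally be dispatched to a process pool, which is exactly what enables the parallel (and later incremental) training.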
III-D Incremental Cascade Regression of Extreme Learning Machine
After training ParCR-ELM, the offline regressors and the distributions of shape variations are preserved. Here, we present the proposed incremental cascade regression of extreme learning machine (ICR-ELM) in detail.

Given a set of new training images $\{\mathbf{I}_i^{\mathrm{new}}\}_{i=1}^{N'}$ and their ground-truth shapes $\{\hat{\mathbf{S}}_i^{\mathrm{new}}\}_{i=1}^{N'}$, where $N'$ is the number of new samples, we record the trained regressors and distributions as $\{\mathcal{R}^{t}\}$ and $\{\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})\}$. For an arbitrary stage $t$, $\mathcal{R}^{t}$ contains the learned parameters $\{\mathbf{a}^{t}, \mathbf{b}^{t}, \boldsymbol{\beta}^{t}\}$. ICR-ELM aims to update the cascade of regressors (in which $\mathbf{a}^{t}$ and $\mathbf{b}^{t}$ do not need to be updated) in parallel using the new training data. For stage $t$, we randomly sample shape variations $\Delta\mathbf{S}_{\mathrm{new}}^{t}$ from $\mathcal{N}(\boldsymbol{\mu}^{t}, \boldsymbol{\Sigma}^{t})$ for the new training images and extract the shape-indexed features. The least-squares error function for all training data becomes

$$\min_{\boldsymbol{\beta}^{t}} \left\| \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{\mathrm{new}}^{t} \end{bmatrix} \boldsymbol{\beta}^{t} - \begin{bmatrix} \Delta\mathbf{S}^{t} \\ \Delta\mathbf{S}_{\mathrm{new}}^{t} \end{bmatrix} \right\|_2^2, \tag{11}$$

where $\mathbf{H}_{\mathrm{new}}^{t}$ is computed by Eq. (4) using the trained parameters $\{\mathbf{a}^{t}, \mathbf{b}^{t}\}$ of regressor $\mathcal{R}^{t}$. Then, the output weight $\tilde{\boldsymbol{\beta}}^{t}$ can be calculated by Eq. (7):

$$\tilde{\boldsymbol{\beta}}^{t} = \tilde{\mathbf{P}} \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{\mathrm{new}}^{t} \end{bmatrix}^{T} \begin{bmatrix} \Delta\mathbf{S}^{t} \\ \Delta\mathbf{S}_{\mathrm{new}}^{t} \end{bmatrix}, \tag{12}$$

where

$$\tilde{\mathbf{P}} = \left( \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{\mathrm{new}}^{t} \end{bmatrix}^{T} \begin{bmatrix} \mathbf{H}^{t} \\ \mathbf{H}_{\mathrm{new}}^{t} \end{bmatrix} \right)^{-1} = \left( \mathbf{P}^{-1} + (\mathbf{H}_{\mathrm{new}}^{t})^{T}\mathbf{H}_{\mathrm{new}}^{t} \right)^{-1}. \tag{13}$$

Referring to [21], we can update $\boldsymbol{\beta}^{t}$ via

$$\begin{aligned} \tilde{\mathbf{P}} &= \mathbf{P} - \mathbf{P}(\mathbf{H}_{\mathrm{new}}^{t})^{T} \left( \mathbf{I} + \mathbf{H}_{\mathrm{new}}^{t}\mathbf{P}(\mathbf{H}_{\mathrm{new}}^{t})^{T} \right)^{-1} \mathbf{H}_{\mathrm{new}}^{t}\mathbf{P}, \\ \tilde{\boldsymbol{\beta}}^{t} &= \boldsymbol{\beta}^{t} + \tilde{\mathbf{P}}(\mathbf{H}_{\mathrm{new}}^{t})^{T} \left( \Delta\mathbf{S}_{\mathrm{new}}^{t} - \mathbf{H}_{\mathrm{new}}^{t}\boldsymbol{\beta}^{t} \right). \end{aligned} \tag{14}$$

In this way, the cascade of regressors can be updated in parallel. The complete training procedure of ICR-ELM is described in Algorithm 1.
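The correctness of the OS-ELM-style update in Eqs. (13)-(14) can be checked numerically: the incrementally updated weights should coincide with retraining on the pooled data. The matrices below are random stand-ins for the hidden-layer outputs, not shape-indexed features.

```python
import numpy as np

rng = np.random.default_rng(0)
L, m = 20, 3
H0 = rng.normal(0, 1, (100, L))        # hidden outputs of the offline data
T0 = rng.normal(0, 1, (100, m))
P = np.linalg.inv(H0.T @ H0)           # P = (H^T H)^{-1}, kept after training
beta = P @ H0.T @ T0                   # offline output weights, Eq. (7)

H1 = rng.normal(0, 1, (30, L))         # hidden outputs of the new batch
T1 = rng.normal(0, 1, (30, m))

# rank update of Eq. (14): no old sample is reloaded
K = np.linalg.inv(np.eye(30) + H1 @ P @ H1.T)
P_new = P - P @ H1.T @ K @ H1 @ P
beta_new = beta + P_new @ H1.T @ (T1 - H1 @ beta)

# must agree with full retraining on all 130 samples (Eq. 11)
H = np.vstack([H0, H1])
T = np.vstack([T0, T1])
beta_batch = np.linalg.inv(H.T @ H) @ H.T @ T
diff = np.max(np.abs(beta_new - beta_batch))
```

The update touches only `P`, `beta`, and the new batch, which is why the per-stage cost is independent of how much data was seen before.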
IV Experiments
The experiments on face alignment are presented in two parts. The first part evaluates the accuracy of the model constantly updated by ICR-ELM with continuous batches of new training data. The second part investigates the static models trained by the proposed ParCR-ELM and other state-of-the-art methods on public datasets. First, we briefly introduce the three datasets used in the face alignment experiments and their evaluation criteria.
IV-A Implementation Details
IV-A1 Datasets

LFPW (29 landmarks) [28] is collected from the web and includes 1000 training and 300 test images. However, since some URLs are invalid, we only use 798 training and 221 test images. The images exhibit large variations in pose, occlusion, facial expression, and illumination.

HELEN (68 landmarks) [27] contains 2,330 high-resolution web images, which are divided into training and test sets of 2,000 and 330 images respectively.

300W (68 landmarks) [29] is collected from existing datasets including LFPW, HELEN, AFW, and a challenging dataset called IBUG. We follow the same division as in [12, 7]: the training set is made up of the training samples of HELEN, the training samples of LFPW, and AFW, with 3148 images in total. According to the difficulty of alignment, the test set is grouped into the Common set (the test samples from LFPW and HELEN, 554 images in total) and the Challenging set (IBUG, 135 images in total).
Since these datasets provide prescribed face bounding boxes, we do not use any face detector, and thus no faces are missed during testing.
IV-A2 Standard Evaluation Protocols
We adopt two types of comparison for evaluation: the average error value and the cumulative error curve. They are defined as follows:

Average error: following most works, we leverage the standard mean landmark error normalized by the inter-pupil distance. It can be computed by $\mathrm{err} = \frac{1}{p}\sum_{i=1}^{p} \frac{\|\mathbf{s}_i - \hat{\mathbf{s}}_i\|_2}{d}$, where $\mathbf{s}_i$ and $\hat{\mathbf{s}}_i$ denote the coordinates of the $i$-th estimated and ground-truth facial landmark respectively, and $d$ denotes the inter-pupil distance. We report the error averaged over all annotated landmarks of each testing database. For clarity, we omit the '%' notation in the reported results.

CED curve: we also draw the cumulative error distribution curve, which can be computed by $\mathrm{CED}(\epsilon) = \frac{N_{\mathrm{err} \le \epsilon}}{N}$, where the numerator denotes the number of test samples whose error is no greater than $\epsilon$ and $N$ is the total number of test samples.
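Both protocols are straightforward to implement. Below is a sketch; the eye-landmark indices are hypothetical placeholders, since the actual indices depend on the annotation scheme of each dataset.

```python
import numpy as np

def mean_error(pred, gt, left_eye=0, right_eye=1):
    """Inter-pupil-normalised mean landmark error for one face.
    pred, gt: (p, 2) arrays; eye indices are illustrative placeholders."""
    d = np.linalg.norm(gt[left_eye] - gt[right_eye])       # inter-pupil distance
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / d  # per-landmark average

def ced(errors, thresholds):
    """Cumulative error distribution: fraction of faces with error <= epsilon."""
    errors = np.asarray(errors)
    return [float(np.mean(errors <= t)) for t in thresholds]

# tiny worked example: 3 landmarks, every prediction off by (0.5, 0.5)
gt = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
pred = gt + 0.5
e = mean_error(pred, gt)          # sqrt(0.5) / 10 for each landmark
curve = ced([0.03, 0.05, 0.08], thresholds=[0.04, 0.1])
```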
IV-A3 Settings
For feature extraction, we extract a SIFT [30] descriptor on a local patch around each landmark. We set the number of hidden nodes to 500, 1000, and 1800 for the LFPW, HELEN, and 300W datasets respectively. Following most face alignment models, we fix the number of cascade stages.
IV-B Validity of Online Learning
This experiment aims to validate the utility of ICR-ELM (Section III-D) when new data batches arrive continuously. For this purpose, we designed the following experiments on the LFPW, HELEN, and 300W datasets. Each dataset was partitioned equally into 6 batches. We used ParCR-ELM (Section III-C) to train an offline model on the first batch as the baseline. Then, we employed ICR-ELM to continuously update the generic model batch by batch. From Figure 3, we can observe a consistent increase in face alignment accuracy as the online model is incrementally updated with new batches of training data. The worst curve is produced by the offline model; inevitably, a model trained with few samples tends to have poor generalization and robustness. The other curves are generated after successively adding the remaining batches of samples from the corresponding dataset. The curves illustrate that ICR-ELM can update the model effectively when necessary and achieves higher and higher accuracy as training data increases. It is also observed that the accuracy of the updated model no longer rises significantly after the data increases by 66%. This is because the difference between training sets becomes smaller and smaller, and the generalization capability of the model tends to stabilize. Notably, ICR-ELM updates the generic model very quickly: implemented in Matlab and run on a single core of an Intel Core i5-4570@3.2GHz CPU, it achieves over 110 fps, 33 fps, and 24 fps on the LFPW, HELEN, and 300W datasets respectively.
IV-C Comparison with Static Models
The offline model is the foundation of online learning. To validate the capability of the model trained by ParCR-ELM on different datasets, we compare its results with existing state-of-the-art methods, including CNN-based frameworks [7, 31, 32, 33, 17, 34, 35], a 3D-based model [36], and various cascade regressions (CRs). These results are reported in Table I. On the LFPW dataset (Table II(a)), as can be seen, ParCR-ELM outperforms all the listed CR-based methods. Meanwhile, ParCR-ELM also generates a competitive result compared with CNN-based architectures. On each dataset, ParCR-ELM is slightly less accurate than DR. Apart from the structural difference, one possible reason is that DR uses more cascaded iterations than ParCR-ELM; correspondingly, it consumes more time than ours in both training and testing. On the HELEN dataset (Table II(b)), our approach achieves a more accurate result than CFAN and MTCNN. Specifically, CFAN maps local features to the shape space by utilizing deep auto-encoder networks, while MTCNN applies a multi-task convolution network to face alignment. The result shows that ParCR-ELM has superior learning capability on smaller-scale data, whereas the CNN-based methods are prone to be restricted by the size of the dataset; hence our approach offers an advantage on the HELEN dataset. On the 300W Common subset (Table II(c)), the result of ParCR-ELM is superior to most cascade regression methods and certain CNN frameworks such as DR-Seq, SCNN, DeFA, and GE-CSAN, but inferior to methods including DR and Deep Regression. This dataset is larger than LFPW and HELEN, which provides sufficient discriminative information for CNN-based methods to learn features and may be more conducive to their producing better results. Unfortunately, ParCR-ELM performs unsatisfactorily on the Challenging subset. In contrast, CNN-based methods such as SCNN, DeFA, and GE-CSAN achieve high accuracy there.
The reason is that they can learn more adequate features via tuning with supervised information, which is essential for good performance on very challenging samples. While the static model trained by ParCR-ELM has poor predictive capability in these challenging situations, it can be tuned with challenging samples to improve its localizing ability.

IV-D Conclusion and Future Works
In this paper, we have proposed an incremental learning framework for face alignment, coined incremental cascade regression (ICR), which includes offline and online training procedures. A cascade regression of extreme learning machine is first introduced, and its parallel version is developed to train an offline model. Then, we presented an efficient method to incrementally update a trained model to make it more generalizable or specific. The experimental results demonstrate the validity of online learning. Using our MATLAB implementation on a single core of an Intel Core i5-4570@3.2GHz CPU, the entire incremental learning procedure runs at over 110 fps, 33 fps, and 24 fps on the LFPW, HELEN, and 300W datasets respectively. Possible future work includes replacing the hand-crafted SIFT features with deep features
[43, 16] and exploring new optimization strategies [44, 45].

References
 [1] L. Wu, R. Hong, Y. Wang, and M. Wang, “Cross-entropy adversarial view adaptation for person re-identification,” IEEE Transactions on Circuits and Systems for Video Technology, 2019.
 [2] T. Qiu, R. Qiao, and D. O. Wu, “EABS: An event-aware backpressure scheduling scheme for emergency internet of things,” IEEE Transactions on Mobile Computing, vol. 17, no. 1, pp. 72–84, 2018.
 [3] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 681–685, 2001.

 [4] G. Tzimiropoulos and M. Pantic, “Gauss-Newton deformable part models for face alignment in-the-wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858, 2014.
 [5] X. Liu, “Discriminative face alignment,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 1941–1954, 2009.
 [6] X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 532–539, 2013.
 [7] B. Shi, X. Bai, W. Liu, and J. Wang, “Face alignment with deep regression,” IEEE transactions on neural networks and learning systems, vol. 29, no. 1, pp. 183–194, 2018.
 [8] D. Lee, H. Park, and C. D. Yoo, “Face alignment using cascade gaussian process regression trees,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4204–4212, 2015.
 [9] A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic, “Incremental face alignment in the wild,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1859–1866, 2014.
 [10] G. Tzimiropoulos, “Projectout cascaded regression with an application to face alignment,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3659–3667, 2015.
 [11] Z. Cui, S. Xiao, Z. Niu, S. Yan, and W. Zheng, “Recurrent shape regression,” IEEE transactions on pattern analysis and machine intelligence, 2018.
 [12] S. Ren, X. Cao, Y. Wei, and J. Sun, “Face alignment at 3000 fps via regressing local binary features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1685–1692, 2014.
 [13] X. Cao, Y. Wei, F. Wen, and J. Sun, “Face alignment by explicit shape regression,” International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.
 [14] P. Dollár, P. Welinder, and P. Perona, “Cascaded pose regression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1078–1085, IEEE, 2010.
 [15] X. Jin and X. Tan, “Face alignment in-the-wild: A survey,” Computer Vision and Image Understanding, vol. 162, pp. 1–22, 2017.
 [16] L. Wu, Y. Wang, and L. Shao, “Cycle-consistent deep generative hashing for cross-modal retrieval,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1602–1612, 2019.
 [17] A. Jourabloo, M. Ye, X. Liu, and L. Ren, “Poseinvariant face alignment with a single cnn,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3200–3209, 2017.
 [18] Y. Wang, X. Lin, L. Wu, and W. Zhang, “Effective multiquery expansions: Collaborative deep networks for robust landmark retrieval,” IEEE Transactions on Image Processing, vol. 26, no. 3, pp. 1393–1404, 2017.
 [19] L. Wu, Y. Wang, L. Shao, and M. Wang, “3D PersonVLAD: Learning deep global representations for video-based person re-identification,” IEEE Transactions on Neural Networks and Learning Systems, 2019.
 [20] E. Sánchez-Lozano, G. Tzimiropoulos, B. Martinez, F. De la Torre, and M. Valstar, “A functional regression approach to facial landmark tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 9, pp. 2037–2050, 2018.
 [21] N.Y. Liang, G.B. Huang, P. Saratchandran, and N. Sundararajan, “A fast and accurate online sequential learning algorithm for feedforward networks,” IEEE Transactions on neural networks, vol. 17, no. 6, pp. 1411–1423, 2006.
 [22] G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 2, pp. 513–529, 2012.
 [23] J. Sung and D. Kim, “Adaptive active appearance model with incremental learning,” Pattern recognition letters, vol. 30, no. 4, pp. 359–367, 2009.
 [24] A. Levey and M. Lindenbaum, “Sequential Karhunen-Loeve basis extraction and its application to images,” IEEE Transactions on Image Processing, vol. 9, no. 8, pp. 1371–1374, 2000.
 [25] J. Zhang, H. Wang, and Y. Ren, “Robust tracking via weighted online extreme learning machine,” Multimedia Tools and Applications, pp. 1–25, 2018.
 [26] C. Liu, L. Feng, H. Wang, and B. Wu, “Face alignment via multiregressors collaborative optimization,” IEEE Access, vol. 7, pp. 4101–4112, 2019.
 [27] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive facial feature localization,” in European conference on computer vision, pp. 679–692, Springer, 2012.
 [28] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, “Localizing parts of faces using a consensus of exemplars,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 12, pp. 2930–2940, 2013.
 [29] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “300 faces inthewild challenge: The first facial landmark localization challenge,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 397–403, 2013.
 [30] D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 99, pp. 1150–1157, 1999.
 [31] B. Shi, X. Bai, W. Liu, and J. Wang, “Deep regression for face alignment,” arXiv preprint arXiv:1409.5230, 2014.
 [32] J. Zhang, S. Shan, M. Kan, and X. Chen, “Coarsetofine autoencoder networks (cfan) for realtime face alignment,” in European Conference on Computer Vision, vol. 8690, pp. 1–16, 2014.
 [33] Y. Sun, X. Zhang, and C. Li, “Multitask convolution network for face alignment,” in Journal of Physics: Conference Series, vol. 887, p. 012079, 2017.
 [34] Y. Liu, A. Jourabloo, W. Ren, and X. Liu, “Dense face alignment,” in Proc. IEEE Int. Conf. Comput. Vis. Workshops, pp. 1619–1628, 2017.
 [35] J. Zhang and H. Hu, “Exemplarbased cascaded stacked autoencoder networks for robust face alignment,” Computer Vision and Image Understanding, vol. 171, pp. 95–103, 2018.
 [36] X. Zhu, Z. Lei, and S. Z. Li, “Face alignment in full pose range: A 3d total solution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
 [37] G. Ghiasi and C. C. Fowlkes, “Occlusion coherence: Localizing occluded faces with a hierarchical deformable part model,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2385–2392, 2014.
 [38] F. Zhou, J. Brandt, and Z. Lin, “Exemplarbased graph matching for robust facial landmark localization,” in Computer Vision (ICCV), 2013 IEEE International Conference on, pp. 1025–1032, 2013.
 [39] Y. Wu and Q. Ji, “Robust facial landmark detection under significant head poses and occlusion,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3658–3666, 2015.
 [40] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, “Robust face landmark estimation under occlusion,” in Computer Vision (ICCV), 2013 IEEE International Conference on, pp. 1513–1520, 2013.
 [41] S. Tan, D. Chen, C. Guo, and et al, “A robust shape reconstruction method for facial feature point detection,” Computational intelligence and neuroscience, vol. 2017, 2017.
 [42] X. Zhu and D. Ramanan, “Face detection, pose estimation, and landmark localization in the wild,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2879–2886, 2012.
 [43] M. Esmaeilpour, P. Cardinal, and A. L. Koerich, “A robust approach for securing audio classification against adversarial attacks,” arXiv preprint arXiv:1904.10990, 2019.

 [44] Y. Wang, W. Zhang, L. Wu, X. Lin, M. Fang, and S. Pan, “Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering,” in International Joint Conference on Artificial Intelligence (IJCAI), pp. 2153–2159, 2016.
 [45] Y. Wang, L. Wu, X. Lin, and J. Gao, “Multi-view spectral clustering via structured low-rank matrix factorization,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 4833–4843, 2018.