1 Introduction
Face alignment refers to the process of fitting a shape model to a face image, i.e. precisely localizing keypoints that correspond, for instance, to eye or lip corners, the nose tip or eyebrow locations. It serves as an input to many human-computer interaction systems, such as face reenactment [18] or expression recognition [8].
Most recent approaches to face alignment apply a sequence of updates, starting from a mean shape. For each update, a regressor is trained to map shape-indexed features to a displacement, either in the space of a parametric face model (parametric regression) or directly in the space of the feature point locations (explicit regression). Given a face image, the shape is then successively refined by applying the sequence of updates predicted by the learned regressors in a cascaded fashion. On the one hand, recent approaches such as [2], [20] and [19] advocate that the use of a parametric shape brings more stability to the alignment; it also drastically reduces the dimensionality of the regression problem. On the other hand, approaches such as [21] and [15] show that, given enough training data, explicit regression can capture fine-grained shape deformations, especially in the latter stages of the cascade. In our work, we propose to combine the best of those two worlds, by first applying updates in the space of a constrained, parametric model; then, in the latter stages, fine-grained deformations are captured by means of explicit regression layers (Figure 1).
Each individual cascade stage usually consists of (a) a dimensionality reduction step and (b) a regression step. For instance, the authors of [21], [19] and [2] use PCA to perform (a). To achieve (b), Xiong et al. [21] use least-squares error minimization, Asthana et al. [2] propose an incremental least-squares formulation, and Martinez et al. [14] use norm regularization to induce robustness to poor initializations. These approaches offer the advantage of being very fast, especially when applying regression upon learned local features [15]. However, performing (a) and (b) sequentially can lead to suboptimal solutions. Furthermore, those methods generally use linear regressions to predict the updates. Hence the low number of parameters (which is constrained by the PCA output space dimension, in the case of SDM [21]) may hinder their ability to capture the variability of larger datasets, such as the one in [24].
Moreover, deep learning techniques have recently started to show their capabilities for face alignment. In the work of Sun et al. [17], convolutional neural networks (CNNs) are used to extract a suitable image representation; fully-connected layers then perform (a) and (b) in an end-to-end fashion. Zhang et al. [22] use a cascade of deep auto-encoders. Zhang et al. [23] design a deep learning pipeline that learns both (a) and (b) in a single pass. However, their approach requires large collections of images labelled with auxiliary attributes to help learn the image representations. The authors of [23] also do not use cascaded regression, as the evaluation of several deep networks, and especially fully-connected layers, is prohibitively expensive in terms of computational load [17], [22].
Thus, one can wonder what an ideal candidate regressor for a cascaded face alignment pipeline would look like. Firstly, it shall be differentiable, so that (a) and (b) (and possibly the image representations) can be learned in an end-to-end fashion. Secondly, it shall be fast to evaluate, in order to keep the runtime low when stacking several cascade layers. Thirdly, it has to embrace a sufficient number of parameters to scale well to larger databases, while not overfitting when trained on smaller corpora of examples.
Recently, Kontschieder et al. [10] introduced the deep neural forest (NF) framework for training differentiable decision trees. In this work, we propose to adapt the NF framework to achieve real-time processing. We call this method greedy neural forest (GNF) and wrap the deep GNF regressors inside a semi-parametric cascade. We demonstrate that the proposed cascaded semi-parametric deep GNF (CSP-dGNF) achieves high accuracy as well as very low runtime, while scaling well to larger and more complex databases. To sum up, the contributions of this paper are the following:

- a semi-parametric cascaded regression framework that combines the best of both worlds, i.e. a stable parametric alignment and a flexible explicit regression.
- multiple improvements over NF, namely a greedy evaluation procedure that allows real-time processing, as well as a simplified training procedure involving constant prediction nodes. The proposed approach is flexible and tends not to overfit the training data.
- a complete system that outperforms most state-of-the-art approaches for face alignment while allowing very high framerates.
2 Methodology
2.1 Semiparametric cascade
As is somewhat classical in the landmark alignment literature, we propose a cascaded alignment procedure. However, in our work we use a semi-parametric shape model, in which a shape prediction is obtained as the sum of multiple displacements in parameter space, starting from an initial guess (usually the mean shape parameterization). Then, in the latter stages, the displacement is fine-tuned by applying explicit updates. The final prediction can thus be written:
(1) 
This allows a constrained shape regression that is, theoretically speaking, more stable than a fully explicit method, as well as a flexible procedure that captures fine-grained feature point displacements (e.g. those related to facial expressions). After each step, the image descriptors are computed from the updated shape and used as inputs to the next cascade stage.
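As an illustration, the full cascade inference loop can be sketched as follows. This is a minimal sketch, not the authors' implementation: `decode_shape`, `extract_features` and the per-stage `predict` methods are hypothetical placeholders for the components described in the following sections.

```python
import numpy as np

def apply_cascade(image, parametric_stages, explicit_stages,
                  mean_params, decode_shape, extract_features):
    """Illustrative semi-parametric cascade: parametric updates first,
    then explicit per-landmark refinements (all callables are placeholders)."""
    p = mean_params.copy()                 # start from the mean shape parameterization
    for stage in parametric_stages:        # each stage is a trained regressor
        shape = decode_shape(p)            # current landmark positions
        phi = extract_features(image, shape)
        p = p + stage.predict(phi)         # update in parameter space
    shape = decode_shape(p)
    for stage in explicit_stages:          # fine-grained, non-parametric refinement
        phi = extract_features(image, shape)
        shape = shape + stage.predict(phi).reshape(shape.shape)
    return shape
```

Each stage re-extracts shape-indexed descriptors from the current estimate, so every regressor sees features aligned with the shape it refines.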
2.1.1 Parametric shape model
Two constraints arise when using trees for multi-output regression: (a) covering the output regression space by filling the leaf node predictions in a somewhat exhaustive manner, and (b) limiting the number of nodes, which grows as the number of trees times the exponential of the tree depth. Given those requirements, directly predicting the shape displacement is impractical, as the output space is high-dimensional (dim. for a points markup) and the displacement value ranges can be large. For that reason, we use a shape parametrization in the first stages of the cascade, which is a classical setup for face alignment [7, 6, 2]. More specifically, the shape is defined as:
(2) 
where is a scaling parameter, is a rotation matrix parametrized by angle , and is a translation parameter. Those are the rigid parameters of the transformation. is the mean shape, and vector describes the non-rigid deformation of the shape in the space of the Point Distribution Model (PDM) , as introduced in the seminal work of Cootes et al. [6]. The vector of parameters is thus defined as . In our experiments, we set , hence a 20-dimensional parametric shape.
For each training instance, we perform Procrustes analysis on the shape to remove the rigid component. We then generate the PDM matrix using PCA on the rigidly-aligned shapes. After that, we apply 100 Gauss-Newton iterations to retrieve the parameter vector for image with ground truth shape . Each iteration is defined as , with the Jacobian of .
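For concreteness, the shape decoding of Equation (2) can be sketched as follows, assuming a parameter layout of four rigid parameters (scale, rotation angle, 2D translation) followed by the non-rigid PDM coefficients; names and layout are illustrative, not the authors' exact convention.

```python
import numpy as np

def pdm_shape(params, mean_shape, P):
    """Decode a PDM parameter vector into 2D landmarks (sketch).
    Assumed layout: params = [scale s, angle theta, tx, ty, q_1..q_k]."""
    s, theta, tx, ty = params[:4]
    q = params[4:]
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # non-rigid deformation in PDM space, then rigid similarity transform
    nonrigid = mean_shape + (P @ q).reshape(mean_shape.shape)  # (n_points, 2)
    return s * nonrigid @ R.T + np.array([tx, ty])
```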
2.1.2 Explicit shape model.
Contrary to parametric layers (see Section 2.1.1), an explicit layer aims at directly predicting the displacements of the feature points. The output of such a layer is defined as:
(3) 
As stated above, is high-dimensional. Thus, to regress these values using tree predictors, one shall restrict the ranges of the prediction values beforehand. In our case, the ranges of the deltas between the current predicted value and the ground truth feature point locations become smaller and smaller as cascade layers are stacked, which allows the use of explicit regression layers in the latter stages of the cascade.
2.2 Regression with greedy Neural Forests
Within the frame of a cascaded landmark alignment [21], it is crucial that each stage of the cascade (i.e., in our case, each NF predictor) does not overfit the training data, so that the residual deltas (in the case of a parametric layer) do not shrink too much after one or two stages. Even though the NF predictors embrace a large number of parameters ( parameters for a dimensional model and trees of depth !), four mechanisms limit overfitting in practice:

- We use dimensionality reduction to limit the number of parameters (see Section 2.2.2).
- We use early stopping by training each NF predictor with a restricted number of Stochastic Gradient Descent (SGD) updates. Moreover, as the proposed NF training framework is fully online, we generate on-the-fly random perturbations that are randomly sampled within the variation range of each parameter (for scaling and translation parameters only).
- During training, optimization is performed only on the split nodes. The prediction nodes remain constant, enabling fully online training as well as reducing the computational load and the number of hyperparameters. Furthermore, suboptimal prediction node values can be compensated in subsequent cascade layers.
- When the training of a cascade layer is complete, we switch the NF predictor to its corresponding GNF.
As a result, the proposed approach has very good properties w.r.t. overfitting. In fact, we demonstrate in Section 3 that the same cascade achieves low error rates both on small/medium pose data ( images for training) and on a large pose database ( images), with exactly the same hyperparameter setting and SGD optimization.
2.2.1 GNF predictors
Soft trees with probabilistic routing.
In the case of a classical decision tree, the probability
to reach leaf node given an example is a binary function, that can be formulated as a product of Kronecker deltas that successively indicate if is rooted left or right:(4) 
where and denote the sets of nodes for which belongs to the left and right subtrees, respectively. Moreover, if we consider oblique splits:
(5) 
In the case of a Neural Forest (NF) [10], the probability to reach a leaf node is defined as a product of continuous split probabilities associated with each probabilistic split node (Equation (6)), parametrized by Bernoulli random variables . Taking the expected value (which corresponds to an infinite number of samplings from tree ), an example goes to the right subtree associated with node with a probability given by the activation function , and to the left subtree with probability .
(6) 
The activation for node is defined as follows:
(7) 
Thus, the computation of can be seen as the activation of a neuron layer with weights and bias . From a decision tree perspective, the successive activations define a soft routing through the trees, where each leaf node is reached with probability .
Online learning with recursive backpropagation.
The prediction error for a ground truth value (in the case of a parametric layer) and a leaf of tree can be computed as the Euclidean distance between this ground truth value and the leaf prediction . The prediction error for the whole tree is thus equal to:
(8) 
The same holds true for an explicit layer (with displacements in feature point space ). Hence, for any parameter (i.e. a feature weight or the threshold value ), the parameter update is given by Equation (9) (with the learning rate hyperparameter).
(9) 
Moreover, the derivatives of w.r.t. the parameters of a split node can be computed recursively. Specifically, for a split node we can split the sum in Equation 8 into three terms, by grouping the leaves that belong to the left subtree , those that belong to the right subtree , and those that belong to neither:
(10) 
While the first and second terms respectively depend on and , the last term does not depend on parameter at all. We can thus write the partial derivatives of Equation 10 as:
(11) 
with and the errors for the left and right subtrees, respectively, and:
(12) 
Moreover, the error backpropagated up to node is:
(13) 
and the error corresponding to the component of an example that shall be backpropagated up to the feature level is:
(14) 
Once the trees are initialized, training samples are sequentially drawn from the data (SGD or mini-batch), and a forward pass through the trees provides the values of the probabilities and activations for each node . For node , the prediction errors and are obtained from the left and right subtrees, respectively. Parameters can thus be updated using Equations 9, 11 and 12. Equation 14 provides the error that can be backpropagated down to the feature level. Finally, the updated error up to node can be obtained by applying Equation 13.
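The forward pass described above can be sketched as follows for a single tree: the probability of reaching a leaf is the product of the split activations along its path (Equation 6). The breadth-first node layout is an assumption made for illustration, and the recursive gradient computation is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_probabilities(x, W, b):
    """Soft routing through a perfect binary tree stored in breadth-first order.
    W[n], b[n] parameterize split node n; returns the probability of reaching
    each of the 2^depth leaves (illustrative layout, not the authors' code)."""
    n_splits = W.shape[0]                 # 2^depth - 1 internal nodes
    n_leaves = n_splits + 1
    probs = np.ones(n_leaves)
    for leaf in range(n_leaves):
        node = 0
        # follow the leaf's path from the root; bits of `leaf` give the directions
        for level in reversed(range(int(np.log2(n_leaves)))):
            go_right = (leaf >> level) & 1
            d = sigmoid(W[node] @ x + b[node])    # P(route right) at this node
            probs[leaf] *= d if go_right else (1.0 - d)
            node = 2 * node + 1 + go_right        # breadth-first child index
    return probs
```

Since the two child probabilities at every split sum to one, the leaf probabilities always sum to one, matching the soft-routing interpretation of Equation (6).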
The authors of [10] suggest combining this split node optimization scheme with an update of the leaf probabilities, which they apply after a specific number of epochs using a convex optimization scheme while the split node parameters are left unchanged. In our case, however, we solely update the split nodes, and the prediction nodes remain constant during training. We found that performing optimization on the split nodes only was sufficient to obtain satisfying accuracies on multiple classification and regression benchmarks. Furthermore, in the frame of a cascaded alignment, such a setting effectively prevents overfitting while allowing a faster, fully online training procedure for each cascade stage. However, this requires careful initialization of the node predictions and of the tree depth hyperparameter.
Prediction node initialization.
Indeed, the tree depth has to be chosen carefully to ensure a minimal “resolution” in terms of leaf predictions. In the case of a parametric layer, we first estimate the mean and standard deviation of the delta between the initial position in parameter space (which corresponds to the mean shape in shape space, for the first level of the cascade) and the ground truth objective , for each parameter . We then generate single-objective trees for each parameter by assigning each leaf node a single prediction . During training, all the model parameters are optimized jointly by updating each tree node with Equations 9, 11 and 12, using a parameter-dependent learning rate to take into account the discrepancies in the dynamics of the different model parameters. The same holds true for the explicit layers.
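A minimal sketch of this initialization, assuming the residual deltas are available as arrays (function and variable names are hypothetical):

```python
import numpy as np

def init_leaf_predictions(current_params, target_params, depth, rng=None):
    """Initialize constant leaf predictions for one single-objective tree per
    parameter: sample from a Gaussian fitted to the residual deltas (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    deltas = target_params - current_params     # (n_samples, n_params) residuals
    mu = deltas.mean(axis=0)
    sigma = deltas.std(axis=0)
    n_leaves = 2 ** depth
    # one array of constant leaf predictions per model parameter
    return [rng.normal(mu[j], sigma[j], size=n_leaves)
            for j in range(deltas.shape[1])]
```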
We provide in the Appendix of this paper a proof that, in the regression case with constant leaf predictions initialized from a Gaussian distribution, we have a sufficient condition for each value to be close to at least one leaf node prediction (in the sense that , with probability greater than ). We essentially show that this condition is satisfied if , with
(15) 
In our case, setting ensures that this condition is satisfied with and for all the ranges (which experimentally vary from to ).
Greedy evaluation of NF
If, for each node and tree, the split activations are binarized, the evaluation of a NF (Equation 6) becomes similar to that of a decision forest with oblique splits (Equation 4). Intuitively, from a NF evaluation perspective, we successively choose the best path through the tree in a greedy fashion, node after node. Thus, we refer to this model as a Greedy Neural Forest (GNF).
On the one hand, in order to evaluate a NF composed of trees, we have to evaluate the probability to reach each leaf node of each tree. Consequently, every split node has to be evaluated; thus the complexity of applying a NF to a dimensional input is , i.e. exponential in the tree depth . By doing so, we essentially lose the runtime advantage of using ensembles of decision trees for prediction. In the case of a GNF, on the other hand, only a single, locally “best” path through each tree has to be evaluated. Hence, its complexity is equal to , i.e. linear in . Furthermore, as stated in [10], after training is complete, the split node activations of a NF are close to either or ; hence only very little noise is added by switching a NF to its corresponding GNF. Moreover, switching the two is all the more relevant in the frame of a cascaded regression, as potential errors are compensated in further stages of the cascade. Finally, it dramatically reduces the evaluation runtime, enabling real-time processing.
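The greedy descent for one tree can be sketched as follows, assuming a breadth-first node layout with the leaf values stored after the split nodes (illustrative, not the authors' code). Thresholding the sigmoid activation at 0.5 amounts to testing the sign of the pre-activation.

```python
import numpy as np

def greedy_tree_predict(x, W, b, leaf_values):
    """Greedy (hard) evaluation of one soft tree: binarize each activation and
    follow a single root-to-leaf path, O(depth) instead of O(2^depth)."""
    n_splits = W.shape[0]
    node = 0
    while node < n_splits:                       # stop once we index a leaf
        go_right = (W[node] @ x + b[node]) > 0   # sigmoid(z) > 0.5 iff z > 0
        node = 2 * node + 1 + int(go_right)      # breadth-first child index
    return leaf_values[node - n_splits]          # leaves follow the split nodes
```

The loop touches exactly `depth` split nodes, hence the linear complexity discussed above.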
2.2.2 Feature extraction and dimensionality reduction
We use SIFT features for their robustness and extraction speed. First, orientation bin and magnitude integral channels are computed for the whole face region of interest. Subsequently, SIFT descriptors are generated for each feature point using its current position estimate and the integral channels, as explained in [9]. For that matter, we use non-overlapping cells within a pixel window for each feature point. Finally, these descriptors are concatenated to form the initial shape descriptor .
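A simplified sketch of the descriptor computation, pooling precomputed channels over a grid of non-overlapping cells around each landmark via integral images; the window and cell sizes, and the absence of boundary handling, are simplifying assumptions.

```python
import numpy as np

def box_sum(ii, x0, y0, x1, y1):
    """Sum of an image region [y0:y1, x0:x1) from its integral image `ii`."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def shape_indexed_descriptor(channels, shape, win=16, cells=4):
    """SIFT-like descriptor per landmark from orientation/magnitude channels:
    pool each channel over non-overlapping cells centered on the point, then
    concatenate over landmarks (sizes are illustrative, not the paper's)."""
    # integral images, one per channel, padded so ii[y, x] = sum of c[:y, :x]
    iis = [np.pad(c, ((1, 0), (1, 0))).cumsum(0).cumsum(1) for c in channels]
    cell = win // cells
    feats = []
    for (px, py) in shape.astype(int):
        x0, y0 = px - win // 2, py - win // 2
        for ii in iis:
            for cy in range(cells):
                for cx in range(cells):
                    feats.append(box_sum(ii, x0 + cx * cell, y0 + cy * cell,
                                         x0 + (cx + 1) * cell, y0 + (cy + 1) * cell))
    return np.asarray(feats)
```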
Learning NFs with such high-dimensional descriptors would be costly in terms of memory and training time, not to mention overfitting issues. For those reasons, as in [21], we perform dimensionality reduction. However, as stated above, since NFs are differentiable classifiers, we can use a single, randomly initialized neuron layer to map the high-dimensional descriptor to a dimensional one . During training, we update the weights of that layer in a single, top-down, supervised training pass (as opposed to, e.g., applying PCA beforehand [21]). Optionally, we can use the truncated gradient algorithm introduced in [12] to induce sparsity within the weights of the neuron layer (which we refer to as the sparse NN), effectively reducing the computational load dedicated to dimensionality reduction. Using this setting, the update for a weight of the NN is computed as follows:
(16) 
where is the learning rate hyperparameter, denotes the relative importance of the regularization, and is the error residual of the prediction stage (Equation 14). is the truncation operator, which outputs if , and otherwise. As will be shown in the experiments, using the sparse NN drastically reduces the computational runtime while maintaining good accuracy.
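The weight update with truncation can be sketched as follows; this is one plain SGD step followed by the truncation operator of [12], with illustrative parameter names (the paper's exact update schedule may differ).

```python
import numpy as np

def truncated_gradient_step(w, grad, lr, reg, threshold):
    """One SGD step followed by truncation: weights inside the truncation band
    are shrunk toward zero and zeroed once they cross it (sketch of [12])."""
    w = w - lr * grad                      # plain gradient step
    small = np.abs(w) <= threshold         # weights inside the truncation band
    shrunk = np.sign(w) * np.maximum(np.abs(w) - lr * reg, 0.0)
    return np.where(small, shrunk, w)
```

Weights that settle near zero are pruned away, which is what allows skipping most of the NN multiplications at test time.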
3 Evaluation
3.1 Experimental setup
Even though our method embraces a lot of hyperparameters, few of them are critical to the global accuracy of the system. The region of interest for each face image is cropped according to the provided bounding box position, resized to a scale, and the mean shape is centered on that crop. Then a level cascade, with parametric layers ( trees per parameter, making trees total per layer) and explicit layer ( trees per feature point coordinate), is applied for aligning the feature points. The tree depth is fixed to , to ensure that the condition of Equation 17 is satisfied for all parameters. The NN consists of output units with a hyperbolic tangent activation function, which seems a good trade-off between speed and accuracy. All weights for the NN and NF are randomly initialized from a uniform distribution in the interval . These weights are optimized jointly with SGD updates corresponding to randomly sampled images and perturbations in translation and scale, with a constant learning rate of . For the sparse NN, we set and , which allows zeroing out more than of the NN weights for each layer.
As for evaluation, we use the standard average point-to-point distance as the evaluation metric. As is common in the literature, we report the mean accuracy normalized by the inter-pupil distance for and points markups. For simplicity, we omit the ’’ symbol. For the large pose evaluation benchmark on , we normalize the error using the bounding box size, as in [24].
3.2 Available data
The 300W database [16] consists of datasets: AFW [25], LFPW [3] and HELEN [13]. The HELEN database contains images for training and images for testing. The LFPW database contains images for training and for testing. As is done in the literature, we train our models on the concatenation of the training partitions of the LFPW and HELEN databases, as well as the whole AFW database, which makes a total of training images. We evaluate our method on the test partition of LFPW and the test partition of HELEN (both constituting the common subset of 300W), as well as on the challenging ibug database ( images), which contains several examples of partial occlusions as well as non-frontal head poses.
The 300W-LP database is an extension of the 300W database that contains face images featuring extreme pose variations on the yaw axis, ranging from to degrees. The database contains a total of images obtained by generating additional views of the images from AFW, LFPW, HELEN and ibug, using the algorithm from [24].
The AFLW2000-3D dataset consists of fitted faces and large-pose images for the first images of the AFLW database [11]. As was done in [24], we evaluate the capacity of our method to deal with non-frontal poses by training on 300W-LP and testing on AFLW2000-3D. This database consists of examples in the yaw range, examples in the range and examples in the range. As in [24], we report accuracy for each pose range separately, as well as the mean and standard deviation across those three pose ranges.
3.3 Face alignment on small and medium poses
Impact of semiparametric alignment.
Figure 2 shows the cumulative error distribution curves on the LFPW and HELEN test partitions, as well as on the ibug database. More specifically, we study the impact of swapping the last layer of a layer parametric cascade with an explicit layer. As one can see, the error is generally lower for the semi-parametric cascade, notably for the most difficult examples of the ibug database. This shows that using an explicit layer further decreases the error as compared to another parametric layer, as the last layer captures the fine-grained displacements between the ground truth and a well-initialized parametric shape. Moreover, intuitively, using the parameters estimated by Gauss-Newton optimization as a ground truth for alignment may induce some errors compared to directly using the ground truth shape as a regression target. Hence, the addition of an explicit layer may help circumvent this issue as well.
Comparison with stateoftheart approaches.
Table 1 shows a comparison of our approach with results reported for recent cascaded regression approaches. Noticeably, the accuracies reported for our level CSP-dGNF are among the best results in the literature on the three databases, for both and landmark markups. This shows the interest of GNF as a predictor for cascaded regression systems, as well as the relevance of the semi-parametric approach.
Also note that our method performs similarly to the current best approach (TCDCN [23]) on the common subset of 300W, whereas TCDCN is more accurate on ibug. However, the authors of [23] used additional images labelled with auxiliary attributes to pretrain their network. Even though studying the impact of using deep representations is out of the scope of this paper, using pretrained CNN layers as an input would be an interesting direction for future work. Indeed, GNF would allow fine-tuning upstream feature extraction layers in an end-to-end manner, similarly to how we train the NN. Figures 3 and 4 display examples of face alignment on small and medium poses, respectively.
3.4 Face alignment on large poses
Table 2 draws a comparison between our approach and recent face alignment methods on large pose data using the AFLW2000-3D database. Results obtained with the other methods are gathered from [24]. First, in the case where the training set is 300W (upper part of the table), the proposed CSP-dGNF achieves significantly higher accuracy than RCPR, ESR and SDM, for all three pose ranges.
Moreover, when trained on 300W-LP, CSP-dGNF also outperforms those three methods by a wide margin. It is also more accurate than 3DDFA and 3DDFA+SDM for , and yaw angles. Note that 3DDFA benefits from dense 3D alignment before regressing the 68-point shape, and is much slower than our method: ms for 3DDFA using the GPU, plus the SDM workload, whereas CSP-dGNF largely runs in real time on a single CPU (see Section 3.5).
Interestingly, the accuracies reported in Table 2 were obtained using the exact same hyperparameters that were used for alignment on small and medium poses. Our approach is also fully two-dimensional, and would therefore greatly benefit from using a 3D model for large pose alignment. Hence, we believe there is considerable room for improvement. Even so, these results show that GNF scales particularly well with the number of training instances: it tends not to overfit when trained on small databases (e.g. 300W) due to its randomized decision tree nature, while, when trained on larger corpora (e.g. 300W-LP), its high number of parameters allows efficiently reducing the training error. Figure 5 shows examples of successful alignment on large poses.
                 LFPW            HELEN           IBUG
method
SDM [21]         4.47    5.67    4.25    5.50    -       15.4
RCPR [4]         5.48    6.56    4.64    5.93    -       17.3
DRMF [1]         4.40    5.80    4.60    5.80    -       19.8
IFA [2]          6.12    -       5.86    -       -       -
CFAN [22]        -       5.44    -       5.53    -       -
PO-CR [19]       4.08    -       3.90    -       -       -
L2,1 [14]        3.80    -       4.1     -       16.3    -
CSP-dGNF         3.74    4.72    3.59    4.79    10.3    12.0
method              yaw range 1   yaw range 2   yaw range 3   avg     std

training on 300W
RCPR [4]            4.16          9.88          22.58         12.21   9.43
ESR [5]             4.38          10.47         20.31         11.72   8.04
SDM [21]            3.56          7.08          17.48         9.37    7.23
CSP-dGNF            2.88          6.33          12.50         7.23    4.87

training on 300W-LP
RCPR [4]            4.26          5.96          13.18         7.80    4.74
ESR [5]             4.60          6.70          12.67         7.99    4.19
SDM [21]            3.67          4.94          9.76          6.12    3.21
3DDFA [24]          3.78          4.54          7.93          5.42    2.21
3DDFA+SDM [24]      3.43          4.24          7.17          4.94    1.97
CSP-dGNF            2.67          4.19          7.00          4.62    2.19
3.5 Runtime evaluation
Table 3 shows a runtime evaluation using the settings detailed in Section 3.1. Applying regularization with truncated gradient on the NN’s weights decreases the NN runtime by more than . Moreover, using GNF instead of NF reduces the alignment runtime by a factor of . Using the sparse NN+GNF, the total runtime is reduced to ms, which allows real-time processing at approximately 80 fps, faster than most state-of-the-art approaches. This was benchmarked using a loosely-optimized C++ implementation on a single CPU.
processing step  runtime (ms) 

feature extraction  0.70 
NN  11.7 
Sparse NN  1.51 
Regression (NF)  234.0 
Regression (GNF)  1.42 
Total (NN+NF)  981.0 
Total (Sparse NN+GNF)  12.6 
4 Conclusion
In this paper, we introduced a new face alignment framework that consists of the conjunction of two novel ideas. First, we designed a semi-parametric cascade, in which the shape is first aligned in the space of a parametric model to provide a precise initial guess; later in the cascade, fine-grained deformations are captured with explicit layers. In order to learn each (parametric or explicit) update, we introduced GNF, which contains several improvements over NF, namely a simplified training procedure involving constant prediction nodes (with theoretical guarantees that the trees cover the regression ranges adequately) as well as a faster greedy evaluation. GNF appears as an ideal predictor for face alignment, as it combines expressivity, fast evaluation, and a differentiability that allows single-pass, top-down learning of a NN for dimensionality reduction.
As is, the proposed semi-parametric cascade with sparse NN and GNF allows fast and accurate face alignment for small, medium and large head poses using baseline features. Future work will consist in incorporating CNN layers to learn low-level representations for face alignment using GNF, as its differentiable nature allows learning upstream layers in a single pass. For that matter, using data labelled with auxiliary tasks (age, expression or gender prediction) will be considered for pretraining the CNNs.
References

[1] A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic. Robust discriminative response map fitting with constrained local models. In International Conference on Computer Vision and Pattern Recognition, pages 3444–3451, 2013.
[2] A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic. Incremental face alignment in the wild. In International Conference on Computer Vision and Pattern Recognition, pages 1859–1866, 2014.
[3] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar. Localizing parts of faces using a consensus of exemplars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2930–2940, 2013.
[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollár. Robust face landmark estimation under occlusion. In International Conference on Computer Vision, pages 1513–1520, 2013.
[5] X. Cao, Y. Wei, F. Wen, and J. Sun. Face alignment by explicit shape regression. International Journal of Computer Vision, 107(2):177–190, 2014.
[6] T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. In European Conference on Computer Vision, pages 484–498, 1998.
[7] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 61(1):38–59, 1995.

[8] A. Dapogny, K. Bailly, and S. Dubuisson. Pairwise conditional random forests for facial expression recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 3783–3791, 2015.
[9] P. Dollár, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In British Machine Vision Conference, 2009.
[10] P. Kontschieder, M. Fiterau, A. Criminisi, and S. Rota Bulo. Deep neural decision forests. In International Conference on Computer Vision, pages 1467–1475, 2015.
[11] M. Köstinger, P. Wohlhart, P. M. Roth, and H. Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In IEEE International Conference on Computer Vision Workshops, pages 2144–2151, 2011.

[12] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10(Mar):777–801, 2009.
[13] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. Interactive facial feature localization. In European Conference on Computer Vision, pages 679–692. Springer, 2012.
[14] B. Martinez and M. F. Valstar. L2,1-based regression and prediction accumulation across views for robust facial landmark detection. Image and Vision Computing, 47:36–44, 2016.
 [15] S. Ren, X. Cao, Y. Wei, and J. Sun. Face alignment at 3000 fps via regressing local binary features. In International Conference on Computer Vision and Pattern Recognition, pages 1685–1692, 2014.
[16] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 397–403, 2013.
 [17] Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3476–3483, 2013.
[18] J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner. Face2Face: Real-time face capture and reenactment of RGB videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[19] G. Tzimiropoulos. Project-out cascaded regression with an application to face alignment. In International Conference on Computer Vision and Pattern Recognition, pages 3659–3667, 2015.
[20] G. Tzimiropoulos and M. Pantic. Gauss-Newton deformable part models for face alignment in-the-wild. In International Conference on Computer Vision and Pattern Recognition, pages 1851–1858, 2014.
 [21] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In International Conference on Computer Vision and Pattern Recognition, pages 532–539, 2013.
[22] J. Zhang, S. Shan, M. Kan, and X. Chen. Coarse-to-fine auto-encoder networks for real-time face alignment. In European Conference on Computer Vision, pages 1–16, 2014.
 [23] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Learning deep representation for face alignment with auxiliary attributes. IEEE transactions on pattern analysis and machine intelligence, 38(5):918–930, 2016.
[24] S. Zhu, C. Li, C. Change Loy, and X. Tang. Face alignment by coarse-to-fine shape searching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4998–5006, 2015.
 [25] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE, 2012.
Appendix: a lower bound on the depth of a randomly initialized NF with constant prediction nodes sampled from a Gaussian distribution, for regression
In the case of regression, we aim at showing that, provided the tree is deep enough, for each value in the range that shall be covered by the tree, one can find at least one leaf prediction that is close to that value. This way, provided the node initialization is correct and the learning rate hyperparameter is set carefully to avoid saturation of the neurons, the error for each training example can theoretically be decreased below . Formally, we aim at proving the following proposition:
Proposition: If we consider a prediction tree with constant leaf predictions initialized from a Gaussian distribution, then for each value there is a probability greater than that there exists at least one leaf prediction such that , provided that , with
(17) 
Proof: Let denote the following event: “for every value , tree contains at least one leaf such that the prediction for that leaf satisfies ”.
We also define the event : “for value there is at least one leaf of tree such that ”. can then be written as the product integral of probabilities over the interval :
(18) 
which is equivalent to
(19) 
Let us then denote by the event: “for every leaf of tree , ”. Clearly, we have
(20) 
Moreover, as a tree of depth contains prediction nodes, we have:
(21) 
Furthermore, as the leaf predictions are randomly initialized from a Gaussian distribution, for one specific leaf node we can write:
(22) 
We can use a lower bound of the Gaussian density on the interval to provide an upper bound on this probability:
(23) 
thus
(24) 
(25) 
(26) 
which we can rewrite as
(27) 
Thus, a sufficient condition to ensure (with close to ) is to have , with
(28) 
Furthermore, when , the lower bound on the depth is equivalent to:
(29) 
Thus, given the regression range , the lower bound on the depth grows as the logarithm of the desired “resolution” (i.e. the inverse of ).