In this study, we show that landmark detection, or face alignment, is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is nontrivial, since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces under severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep models.
Face alignment, or detecting semantic facial landmarks (e.g., eyes, nose, mouth corners), is a fundamental component in many face analysis tasks, such as facial attribute inference [1], face verification [2], and face recognition [3]. Though great strides have been made in this field (see Sec. 2), robust facial landmark detection remains a formidable challenge in the presence of partial occlusion and large head pose variations (Fig. 1).

Landmark detection is traditionally approached as a single and independent problem. Popular approaches include template fitting approaches [4, 5, 6, 7] and regression-based methods [8, 9, 10, 11, 12]. More recently, deep models have been applied as well. For example, Sun et al. [13] propose to detect facial landmarks by coarse-to-fine regression using a cascade of deep convolutional neural networks (CNN). This method shows superior accuracy compared to previous methods [14, 9] and existing commercial systems. Nevertheless, it requires a complex and unwieldy cascaded architecture of deep models.

We believe that facial landmark detection is not a standalone problem; its estimation can be influenced by a number of heterogeneous and subtly correlated factors. Changes on a face are often governed by the same rules determined by the intrinsic facial structure. For instance, when a kid is smiling, his mouth is widely opened (the second image in Fig. 1). Effectively discovering and exploiting such an intrinsically correlated facial attribute would help to detect the mouth corners more accurately. Also, the inter-ocular distance is smaller in faces with large yaw rotation (the first image in Fig. 1). Such pose information can be leveraged as an additional source of information to constrain the solution space of landmark estimation. Indeed, the input and solution spaces of face alignment can be effectively divided given auxiliary face attributes. In a small experiment, we average sets of face images according to different attributes, as shown in Fig. 1: the frontal and smiling faces show clear mouth corners, while no specific details emerge in the image averaged over the whole dataset. Given such rich auxiliary information, treating facial landmark detection in isolation is counterproductive.

This study aims to investigate the possibility of optimizing facial landmark detection (the main task) by leveraging auxiliary information from attribute inference tasks. Potential auxiliary tasks include head pose estimation, gender classification, age estimation [15], facial expression recognition, and facial attribute inference [16]. Given the multiple tasks, a deep convolutional network appears to be a viable model choice, since it allows for joint feature learning and multi-objective inference. Typically, one can formulate a cost function that encompasses all the tasks and use it in the network's back-propagation learning. We show that this conventional multi-task learning scheme is challenging in our problem, for several reasons. First, the tasks of face alignment and attribute inference are inherently different in learning difficulty. For instance, learning to identify the "wearing glasses" attribute is easier than determining if one is smiling. Second, we rarely have auxiliary tasks with similar numbers of positive/negative cases. For instance, male/female classification enjoys more balanced samples than facial expression recognition. As a result, different tasks have different convergence rates. In many cases we observe that joint learning with a specific auxiliary task improves the convergence of landmark detection at the beginning of the training procedure, but becomes ineffective when the auxiliary task training encounters local minima or overfitting. Continuing the training with all tasks jeopardizes the network convergence, leading to poor landmark detection performance.
Our study is the first attempt to demonstrate that face alignment can be jointly optimized with the inference of heterogeneous but subtly correlated auxiliary attributes. We show that the supervisory signal of the auxiliary tasks can be back-propagated jointly with that of face alignment to learn the underlying regularities of the face representation. Nonetheless, the learning is nontrivial due to the different natures and convergence rates of the different tasks. Our key contribution is a newly proposed Tasks-Constrained Deep Convolutional Network (TCDCN), with a new objective function that addresses the aforementioned challenges. In particular, our model considers the following aspects to make the learning effective:
Dynamic task coefficient: Unlike existing multi-task deep models [17, 18, 19] that treat all tasks as equally important, we weight each auxiliary task with a coefficient that is adaptively and dynamically adjusted based on the training and validation errors achieved so far in the learning process. Thus a task that is deemed not beneficial to the main task is prevented from contributing to the network learning. This approach can be seen as a principled way of achieving "early stopping" on specific tasks. In the experiments, we show that the dynamic task coefficient is essential to reach peak performance for face alignment.
Inter-task correlation modeling: We additionally model the relatedness of heterogeneous tasks with a covariance matrix in the objective function. Different from the dynamic task coefficient, which concerns learning convergence, inter-task correlation modeling helps better exploit the relations between tasks to achieve better feature learning.
All the network parameters, including the filters, dynamic task coefficients, and inter-task correlations, are learned automatically using a newly proposed alternating optimization approach.
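A minimal sketch of how the dynamic coefficients enter the overall objective (the function and variable names here are illustrative, not from the paper's code): each auxiliary loss is scaled by its coefficient before being added to the main-task loss, so a coefficient driven to zero removes that task's influence.

```python
import numpy as np

def total_loss(landmark_loss, aux_losses, task_coeffs):
    """Main-task loss plus dynamically weighted auxiliary losses.

    task_coeffs holds one coefficient lambda_t per auxiliary task; driving a
    coefficient to zero removes that task's contribution, which acts like a
    per-task "early stop" that can later be undone by raising it again.
    """
    aux_losses = np.asarray(aux_losses, dtype=float)
    task_coeffs = np.asarray(task_coeffs, dtype=float)
    return float(landmark_loss + task_coeffs @ aux_losses)

# The second auxiliary task is currently switched off.
print(total_loss(0.5, [0.25, 0.4], [1.0, 0.0]))  # 0.75
```

Because the coefficients are updated during training rather than fixed, a task can be silenced and later re-activated, unlike hard early stopping.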
Thanks to the effective shared representation learned from multiple auxiliary attributes, the proposed approach outperforms other deep-learning-based approaches for face alignment, including the cascaded CNN model [13], on five-point facial landmark detection. We demonstrate that the shared representation learned by a TCDCN for sparse landmarks can be readily transferred to handle an entirely different configuration with more landmarks, e.g., 68 points in the 300W dataset [20]. With the transferred configuration, our method further outperforms other existing methods [8, 9, 21, 6, 5, 12, 22] on the challenging 300W dataset, as well as on the Helen [23] and COFW [8] datasets.
In comparison to the earlier version of this work [24], we introduce the new dynamic task coefficient to generalize the original idea of task-wise early stopping [24] (discussed in Sec. 3.1). Specifically, we show that the dynamic task coefficient is a more effective mechanism to facilitate the convergence of a heterogeneous-task network. In addition, we formulate a new objective function that learns different tasks and their correlations jointly, which further improves performance and allows us to analyze the usefulness of the auxiliary tasks more comprehensively. Apart from the methodology, the paper is also substantially improved by providing more technical details and more extensive experimental evaluations.
Facial landmark detection: Conventional facial landmark detection methods can be divided into two categories, namely regression-based methods and template fitting methods. A regression-based method estimates landmark locations explicitly by regression using image features. For example, Valstar et al. [25] predict landmark locations from local image patches with support vector regression. Cao et al. [9] and Burgos-Artizzu et al. [8] employ cascaded fern regression with pixel-difference features. A number of studies [26, 10, 27, 11, 22, 28] use random regression forests to cast votes for landmark locations based on local image patches with Haar-like features. Most of these methods refine an initial guess of the landmark locations iteratively; the first guess/initialization is thus critical. By contrast, our deep model takes raw pixels as input without the need for any facial landmark initialization. Importantly, our method differs in that we exploit auxiliary tasks to facilitate landmark detection learning.

A template fitting method builds face templates to fit input images [4, 29, 30]. Part-based models have recently been used for face fitting [31, 6, 5]. Zhu and Ramanan [5] show that face detection, facial landmark detection, and pose estimation can be jointly addressed. Our method differs in that we do not limit the learning to specific tasks, i.e., the TCDCN is readily expandable to be trained with additional auxiliary tasks. Specifically, apart from pose, we show that other facial attributes, such as gender and expression, can be useful for learning a robust landmark detector. Another difference to [5] is that we learn the feature representation from raw pixels rather than relying on predefined HOG features as the face descriptor.
Landmark detection by deep learning: The methods [32, 12, 13] that use deep learning for face alignment are close to our approach. They usually formulate face alignment as a regression problem and use multiple deep models to locate the landmarks in a coarse-to-fine manner, such as the cascaded CNN by Sun et al. [13]. The cascaded CNN requires a pre-partition of faces into different parts, each of which is processed by a separate deep CNN. The resulting outputs are subsequently averaged and channeled to separate cascaded layers to process each facial landmark individually. Similarly, Zhang et al. [12] use successive autoencoder networks to perform coarse-to-fine alignment. In contrast, our model requires neither pre-partition of faces nor cascaded networks, leading to a drastic reduction in model complexity, whilst still achieving comparable or even better accuracy. This opens up the possibility of application in computationally constrained scenarios, such as embedded systems. In addition, the use of auxiliary tasks can reduce the overfitting problem of deep models, because the local minima for different tasks may lie in different places. Another important difference is that our method performs feature extraction on the whole face image automatically, instead of using hand-crafted local regions.
Learning multiple tasks in neural networks: Multi-task learning (MTL) is the process of learning several tasks simultaneously with the aim of mutual benefit. This is an old idea in machine learning; Caruana [33] provides a good overview focusing on neural networks. Deep models are well suited for learning multiple tasks since they allow for joint feature learning and multi-objective inference. Joint learning of multiple tasks has also proven effective in many computer vision problems [17, 18, 19]. However, existing deep models [34, 17] are not suitable for our problem because they assume similar learning difficulties and convergence rates across all tasks. For example, in the work of [19], the algorithm simultaneously learns a human pose regressor and multiple body-part detectors. This algorithm optimizes multiple tasks directly without learning the task correlations. In addition, it uses predefined task coefficients in the iterative learning process. Applying this method to our problem leads to difficulty in learning convergence, as shown in Sec. 4. We mitigate this shortcoming by introducing dynamic task coefficients into the deep model. This new formulation generalizes the idea of early stopping. Early stopping of neural networks dates back to the work of Caruana [33], but it is heuristic and limited to shallow multilayer perceptrons, and the scheme is not scalable to a large number of tasks. Different from the work of [35], which learns task priorities to handle outlier tasks, the dynamic task coefficient in our approach is based on the training and validation errors, and aims to coordinate tasks with different convergence rates. We show that the dynamic task coefficient is important for jointly learning multiple objectives in a deep convolutional network.
We cast facial landmark detection as a nonlinear transformation problem, which transforms the raw pixels of a face image into the positions of dense landmarks. The proposed framework is illustrated in Fig. 2: the highly nonlinear function is modeled as a DCN, which is pre-trained with five landmarks and then fine-tuned to predict the dense landmarks. Since dense landmarks are expensive to label, the pre-training step is essential because it prevents the DCN from overfitting to a small dataset. In general, the pre-training and fine-tuning procedures are similar, except that the former initializes the filters from a standard normal distribution, while the latter initializes the filters using the pre-trained network.
As shown in Fig. 2, the DCN extracts a high-level representation of a face image using a set of filters $\mathbf{K}$, i.e. $\mathbf{x} = \phi(\mathbf{I}; \mathbf{K})$, where $\phi(\cdot)$ is the nonlinear transformation learned by the DCN and $\mathbf{I}$ is the input image. With the extracted feature $\mathbf{x}$, we jointly estimate landmarks and attributes, where landmark detection is the main task and attribute prediction is the auxiliary task. Let $\mathbf{y}^l$ denote a set of real values representing the x,y-coordinates of the $L$ landmarks, and let $\mathbf{y}^a$ denote a set of binary labels of the $T$ face attributes, $\mathbf{y}^a \in \{0, 1\}^T$. Specifically, $L$ equals 5 in the pre-training step, implying that the landmarks comprise the two centers of the eyes, the nose, and the two corners of the mouth. $L$ represents the number of dense landmarks in the fine-tuning step, such as $L = 194$ in the Helen dataset [23] and $L = 68$ in the 300W dataset [20]. This work investigates the effectiveness of 22 attributes in landmark detection, i.e. $T = 22$.
Both landmark detection and attribute prediction can be learned by generalized linear models [36]. Suppose $\mathbf{W} = [\mathbf{w}^l_1, \ldots, \mathbf{w}^l_{2L}, \mathbf{w}^a_1, \ldots, \mathbf{w}^a_T]$ is a weight matrix, where each column vector corresponds to the parameters of a single task. For example, $\mathbf{w}^l_2$ indicates the parameter vector for the y-coordinate of the first landmark. With these parameters, we have

$$y^l_i = (\mathbf{w}^l_i)^\mathsf{T}\mathbf{x} + \epsilon, \quad (1)$$

where $\epsilon$ represents an additive random error variable that is distributed according to a normal distribution with mean zero and variance $\sigma^2$, i.e. $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Similarly, each $\mathbf{w}^a_t$ represents the parameter vector of the $t$-th attribute, which is modeled as

$$y^a_t = (\mathbf{w}^a_t)^\mathsf{T}\mathbf{x} + \xi, \quad (2)$$

where $\xi$ is distributed following a standard logistic distribution, i.e. $\xi \sim \mathrm{Logistic}(0, 1)$.
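The two generalized linear heads of Eqns. (1) and (2) can be sketched in a few lines of NumPy (the shapes and names here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, W_l, W_a):
    """Generalized linear predictions from a shared feature x.

    x:   (d,)    shared feature extracted by the network.
    W_l: (d, 2L) landmark weights; column i gives one coordinate (Eqn. 1).
    W_a: (d, T)  attribute weights; sigmoid gives P(attribute = 1) (Eqn. 2).
    """
    landmarks = W_l.T @ x             # real-valued x,y coordinates
    attr_probs = sigmoid(W_a.T @ x)   # Bernoulli success probabilities
    return landmarks, attr_probs

rng = np.random.default_rng(0)
x = rng.normal(size=8)
lm, pa = predict(x, rng.normal(size=(8, 10)), rng.normal(size=(8, 4)))
print(lm.shape, pa.shape)  # (10,) (4,)
```

The logistic noise in Eqn. (2) is what turns the attribute head into a sigmoid/Bernoulli model, while the Gaussian noise in Eqn. (1) yields plain linear regression.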
If all the tasks were independent, $\mathbf{W}$ could simply be modeled as a product of multivariate normal distributions, i.e. $\mathbf{W} \sim \mathcal{MN}(\mathbf{0}, \mathbf{I}, \mathrm{diag}(\boldsymbol{\sigma}))$, where $\mathbf{0}$ is a zero matrix, $\mathbf{I}$ denotes an identity matrix, and $\boldsymbol{\sigma}$ is a vector representing the diagonal elements of the task covariance matrix. However, this work needs to explore which auxiliary attributes are crucial to landmark detection, implying that we need to model the correlations between tasks. Therefore, we assume $\mathbf{W}$ is distributed according to a matrix normal distribution [37], i.e. $\mathbf{W} \sim \mathcal{MN}(\mathbf{0}, \mathbf{I}, \mathbf{\Omega})$, where $\mathbf{0}$ is a zero matrix and $\mathbf{\Omega}$ is a task covariance matrix. The matrix $\mathbf{\Omega}$ is learned in the training process and naturally captures the correlations between the weights of different tasks.
As landmark detection and attribute prediction are heterogeneous tasks, different auxiliary attributes behave differently in the training procedure. They may improve the convergence of landmark detection at the beginning of training, but may become ineffective as training proceeds, when local minima or overfitting are encountered. Thus, each auxiliary attribute is assigned a dynamic task coefficient $\lambda_t$, $t = 1, \ldots, T$, which is adjusted adaptively during training. $\lambda_t$ is distributed according to a normal distribution with mean $\rho_t$ and variance $\sigma_\lambda^2$, i.e. $\lambda_t \sim \mathcal{N}(\rho_t, \sigma_\lambda^2)$, where we assume $\sigma_\lambda = 1$ and $\rho_t$ is determined based on the training and validation errors (detailed in Sec. 3.3).
It is worth pointing out that in the earlier version of this work [24], we introduced a task-wise early stopping scheme to halt a task once it is no longer beneficial to the main task. This method is heuristic, and the criterion for deciding when to stop learning a task is empirical. In addition, once a task is halted, it never resumes during the training process. In contrast, the dynamic task coefficient is updated continually, so a halted task may resume automatically if it is found useful again during the learning process. In particular, the dynamic task coefficient has no single optimal value across the whole learning process; instead, its value is updated to fit the current training status.
In summary, given a set of face images and their labels, we jointly estimate the filters $\mathbf{K}$, the weight matrix $\mathbf{W}$, the task covariance matrix $\mathbf{\Omega}$, and the dynamic coefficients $\boldsymbol{\lambda}$.
The above problem can be formulated in a probabilistic framework. Given a data set with $N$ training samples, denoted as $\{\mathbf{I}_i, \mathbf{y}^l_i, \mathbf{y}^a_i\}_{i=1}^N$, where $\mathbf{y}^l_i \in \mathbb{R}^{2L}$ and $\mathbf{y}^a_i \in \{0,1\}^T$, and a set of parameters $\Theta = \{\mathbf{K}, \mathbf{W}, \mathbf{\Omega}, \boldsymbol{\lambda}\}$, we optimize the parameters by maximizing a posteriori probability (MAP):

$$\Theta^* = \arg\max_{\Theta} \; p(\Theta \mid \mathbf{y}^l, \mathbf{y}^a, \mathbf{I}). \quad (3)$$

Eqn. (3) is proportional to

$$p(\mathbf{y}^l \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^l)\; p(\mathbf{y}^a \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^a, \boldsymbol{\lambda})\; p(\mathbf{W} \mid \mathbf{\Omega})\; p(\boldsymbol{\lambda})\; p(\mathbf{K}), \quad (4)$$

where the first two terms are the likelihood probabilities and the last three terms are the prior probabilities. Moreover, $\mathbf{W}^l$ and $\mathbf{W}^a$ represent the first $2L$ columns and the last $T$ columns of $\mathbf{W}$, respectively. In the following, we introduce each term of Eqn. (4) in detail.

The likelihood probability $p(\mathbf{y}^l \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^l)$ measures the accuracy of landmark detection. As discussed for Eqn. (1), each landmark coordinate can be modeled as a linear regression plus Gaussian noise. The likelihood can thus be factorized as

$$p(\mathbf{y}^l \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^l) = \prod_{i=1}^{2L} \mathcal{N}\big(y^l_i;\; (\mathbf{w}^l_i)^\mathsf{T}\mathbf{x},\; \sigma^2\big). \quad (5)$$
The likelihood probability $p(\mathbf{y}^a \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^a, \boldsymbol{\lambda})$ measures the accuracy of attribute prediction. As introduced in Eqn. (2), each binary attribute is predicted by a linear function plus logistic-distributed random noise, implying that the probability of $y^a_t = 1$ is a sigmoid function, $p(y^a_t = 1 \mid \mathbf{x}) = \sigma\big((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x}\big)$, where $\sigma(z) = 1/(1 + e^{-z})$. Thus, the likelihood can be defined as a product of Bernoulli distributions:

$$p(\mathbf{y}^a \mid \mathbf{I}; \mathbf{K}, \mathbf{W}^a, \boldsymbol{\lambda}) = \prod_{t=1}^{T} \Big[\sigma\big((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x}\big)^{y^a_t}\, \big(1 - \sigma((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x})\big)^{1-y^a_t}\Big]^{\lambda_t}. \quad (6)$$
The prior probability of the weight matrix, $p(\mathbf{W} \mid \mathbf{\Omega})$, is modeled by a matrix normal distribution with mean zero [37], which is able to capture the correlations between landmark detection and the auxiliary attributes. It is written as

$$p(\mathbf{W} \mid \mathbf{\Omega}) = \frac{\exp\big(-\tfrac{1}{2}\,\mathrm{tr}(\mathbf{W}\mathbf{\Omega}^{-1}\mathbf{W}^\mathsf{T})\big)}{(2\pi)^{D(2L+T)/2}\, |\mathbf{\Omega}|^{D/2}}, \quad (7)$$

where $\mathrm{tr}(\cdot)$ calculates the trace of a matrix, $D$ is the feature dimension, and $\mathbf{\Omega}$ is a positive semidefinite matrix modeling the task covariance, $\mathbf{\Omega} \in \mathbb{R}^{(2L+T)\times(2L+T)}$, $\mathbf{\Omega} \succeq 0$. Referring to Eqn. (7), the covariance between the $i$-th landmark task and the $j$-th attribute task is captured by the element of $\mathbf{\Omega}$ in the $i$-th row and $j$-th column, showing that the relation of a pair of tasks is measured by their corresponding weights with respect to each feature dimension. For instance, if two different tasks select or reject the same set of features, they are highly correlated. More precisely, Eqn. (7) is a matrix form of the multivariate normal distribution; they are equivalent if $\mathbf{W}$ is reshaped into a long vector.
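The trace term of Eqn. (7) is cheap to evaluate; a small sketch (assuming $\mathbf{\Omega}$ is invertible):

```python
import numpy as np

def matrix_normal_penalty(W, Omega):
    """tr(W Omega^{-1} W^T), the coupling term of Eqn. (7): tasks whose
    weight columns co-vary the way Omega predicts are penalized less."""
    return float(np.trace(W @ np.linalg.solve(Omega, W.T)))

# With Omega = I the penalty reduces to the plain Frobenius norm ||W||_F^2.
W = np.array([[1.0, 2.0], [3.0, 4.0]])
print(matrix_normal_penalty(W, np.eye(2)))  # 30.0
```

In other words, the independent-task prior is recovered exactly when the task covariance is the identity.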
The prior probability of the tasks' dynamic coefficients is defined as a product of normal distributions, $p(\boldsymbol{\lambda}) = \prod_{t=1}^{T} \mathcal{N}(\lambda_t;\, \rho_t, \sigma_\lambda^2)$, where the mean $\rho_t$ is adjusted based on the training and validation errors. This differs significantly from the task covariance matrix. For example, the auxiliary attribute "wearing glasses" is probably related to the landmark positions of the eyes; their relation can be measured by $\mathbf{\Omega}$. However, if "wearing glasses" converges more quickly than the other tasks, it becomes ineffective because of local minima or overfitting. Therefore, its dynamic coefficient should be decreased to avoid these side effects.
The DCN filters can be initialized from a standard multivariate normal distribution, as in previous methods [38]. In particular, we define $p(\mathbf{K}) \propto \exp\big(-\tfrac{1}{2}\lVert \mathbf{K} \rVert_2^2\big)$.
By taking the negative logarithm of Eqn. (4) and combining Eqns. (5), (6), and (7), we obtain the MAP objective function

$$\min_{\mathbf{K}, \mathbf{W}, \mathbf{\Omega}, \boldsymbol{\lambda}} \; \sum_{i=1}^{N} \big\lVert \mathbf{y}^l_i - (\mathbf{W}^l)^\mathsf{T}\mathbf{x}_i \big\rVert_2^2 \;-\; \sum_{i=1}^{N}\sum_{t=1}^{T} \lambda_t \Big[ y^a_{it} \log \sigma\big((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x}_i\big) + (1 - y^a_{it}) \log\big(1 - \sigma((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x}_i)\big) \Big] \;+\; \mathrm{tr}(\mathbf{W}\mathbf{\Omega}^{-1}\mathbf{W}^\mathsf{T}) \;+\; \log |\mathbf{\Omega}| \;+\; \lVert \mathbf{K} \rVert_2^2 \;+\; \sum_{t=1}^{T} (\lambda_t - \rho_t)^2. \quad (8)$$

Eqn. (8) contains six terms. For simplicity of discussion, we remove the terms that are constant. We also assume the variance parameters, such as $\sigma^2$ and $\sigma_\lambda^2$, equal one; thus the regularization weights of the above terms are comparable and can simply be ignored.
Eqn. (8) can be minimized by updating one parameter with the remaining parameters fixed. First, although the first three terms are likely to be jointly convex with respect to $\mathbf{W}$, the feature $\mathbf{x} = \phi(\mathbf{I}; \mathbf{K})$ in the first two terms is a highly nonlinear transformation with respect to $\mathbf{K}$. In this case, no global optimum is guaranteed. Therefore, following the optimization strategies of CNNs [39], we apply stochastic gradient descent (SGD) [38] with weight decay [40] to search for suitable local optima for both $\mathbf{W}$ and $\mathbf{K}$. This method has been demonstrated to work reasonably well in practice [38]. Here, the fifth term can be considered as the weight decay of the filters. Second, the third term in Eqn. (8) is a convex function of $\mathbf{\Omega}$, but the fourth term is concave, since the log-determinant is a concave function. In other words, learning $\mathbf{\Omega}$ directly is a convex-concave problem [41]. However, by a well-known lemma [42], $\log|\mathbf{\Omega}|$ has a convex upper bound, $\log|\mathbf{\Omega}| \le \mathrm{tr}(\mathbf{\Omega}) - (2L+T)$; thus, the fourth term can be replaced by $\mathrm{tr}(\mathbf{\Omega})$. Both the third and the fourth terms are then convex with respect to $\mathbf{\Omega}$. Finally, since each dynamic coefficient appears independently in Eqn. (8) (linearly in the attribute loss and quadratically in its prior term), finding each $\lambda_t$ has a closed-form solution.

We solve the MAP problem in an iterative manner. First, we jointly update the DCN filters $\mathbf{K}$ and the weight matrix $\mathbf{W}$ with the tasks' dynamic coefficients $\boldsymbol{\lambda}$ and covariance matrix $\mathbf{\Omega}$ fixed. Second, we update the covariance matrix $\mathbf{\Omega}$ by fixing all the other parameters at their current values. Third, we update $\boldsymbol{\lambda}$ in a similar way to the second step.
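The relaxation used here follows from $\log x \le x - 1$ applied to each eigenvalue of $\mathbf{\Omega}$, which gives $\log|\mathbf{\Omega}| \le \mathrm{tr}(\mathbf{\Omega}) - d$ for a $d \times d$ positive definite matrix. A quick numerical check of the bound:

```python
import numpy as np

# Check log|Omega| <= tr(Omega) - d on random PSD matrices (d = 4 here).
rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.normal(size=(4, 4))
    Omega = A @ A.T + 1e-3 * np.eye(4)      # random symmetric positive definite
    _, logdet = np.linalg.slogdet(Omega)
    assert logdet <= np.trace(Omega) - 4 + 1e-9
print("bound holds")
```

Equality holds only when all eigenvalues equal one, i.e. when $\mathbf{\Omega}$ is the identity.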
In the first step, we optimize $\mathbf{W}$ and $\mathbf{K}$ in the DCN and fix $\mathbf{\Omega}$ and $\boldsymbol{\lambda}$ at their current values. In this case, the fourth and the last terms in Eqn. (8) are constant and thus can be removed. We write the loss function in matrix form as follows:

$$\min_{\mathbf{K}, \mathbf{W}} \; \sum_{i=1}^{N} \Big[ \big\lVert \mathbf{y}^l_i - (\mathbf{W}^l)^\mathsf{T}\mathbf{x}_i \big\rVert_2^2 - (\mathbf{y}^a_i)^\mathsf{T}\,\mathrm{diag}(\boldsymbol{\lambda})\, \log \sigma\big((\mathbf{W}^a)^\mathsf{T}\mathbf{x}_i\big) - (\mathbf{1} - \mathbf{y}^a_i)^\mathsf{T}\,\mathrm{diag}(\boldsymbol{\lambda})\, \log\big(\mathbf{1} - \sigma((\mathbf{W}^a)^\mathsf{T}\mathbf{x}_i)\big) \Big] + \mathrm{tr}(\mathbf{W}\mathbf{\Omega}^{-1}\mathbf{W}^\mathsf{T}) + \lVert \mathbf{K} \rVert_2^2, \quad (9)$$

where $\mathbf{y}^l_i$ is a $2L \times 1$ vector, and $\mathbf{y}^a_i$, $\mathbf{1}$ are both $T \times 1$ vectors; $\mathrm{diag}(\boldsymbol{\lambda})$ represents a diagonal matrix with $\boldsymbol{\lambda}$ being the values on the diagonal. The fourth term in Eqn. (9) can be considered as a parameterized weight decay of $\mathbf{W}$, while the last term is the weight decay of the filters $\mathbf{K}$. Eqn. (9) combines the least-squares loss and the cross-entropy loss to learn the DCN, and can be optimized by SGD [38], since the losses are defined over individual samples. Fig. 2 illustrates the architecture of the DCN, containing four convolutional layers and one fully-connected layer. This architecture is a trade-off between accuracy of landmark detection and computational cost, and works well in practice. Note that the learning method introduced in this work is naturally compatible with any deep network structure, but exploring them is beyond the scope of this paper.
Now we introduce the learning procedure. At the very beginning, each column of $\mathbf{W}$ and each filter of $\mathbf{K}$ are initialized according to a multivariate standard normal distribution. To learn the weight matrix $\mathbf{W}$, we calculate the derivative of the loss with respect to each column, where $\hat{y}$ and $\eta$ denote the network output (prediction) and the step size of the gradient descent, respectively. By simple derivation, we have

$$\frac{\partial E}{\partial \mathbf{w}^l_i} = 2\,(\hat{y}^l_i - y^l_i)\,\mathbf{x}, \quad (10)$$

$$\frac{\partial E}{\partial \mathbf{w}^a_t} = \lambda_t\,(\hat{y}^a_t - y^a_t)\,\mathbf{x}, \quad (11)$$

where $\hat{y}$ is the corresponding task's prediction. For example, $\hat{y}^l_i$ in Eqn. (10) indicates the prediction of a landmark coordinate, while $\hat{y}^a_t = \sigma\big((\mathbf{w}^a_t)^\mathsf{T}\mathbf{x}\big)$ in Eqn. (11) indicates the prediction of an auxiliary attribute. In summary, the entire weight matrix in the $\tau$-th iteration is updated by $\mathbf{W}^{\tau+1} = \mathbf{W}^{\tau} - \eta\,\big(\Delta\mathbf{W}^{\tau} + \gamma\,\mathbf{W}^{\tau}\mathbf{\Omega}^{-1}\big)$, where $\eta$ and $\gamma$ are the regularization parameters of the gradient and the weight decay, respectively.
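Eqns. (10) and (11) amount to standard least-squares and weighted logistic-regression gradients; a NumPy sketch (all names and shapes are illustrative):

```python
import numpy as np

def grads(x, y_l, y_a, W_l, W_a, lam):
    """Per-sample gradients matching Eqns. (10) and (11): squared loss for
    landmark coordinates, lambda-weighted cross-entropy for attributes."""
    yhat_l = W_l.T @ x                            # landmark predictions
    yhat_a = 1.0 / (1.0 + np.exp(-(W_a.T @ x)))   # attribute probabilities
    dW_l = np.outer(x, 2.0 * (yhat_l - y_l))      # Eqn. (10), one column per task
    dW_a = np.outer(x, lam * (yhat_a - y_a))      # Eqn. (11), scaled by lambda_t
    return dW_l, dW_a

# Sanity checks: perfect landmark predictions and zeroed coefficients
# both give vanishing gradients.
rng = np.random.default_rng(0)
x = rng.normal(size=6)
W_l, W_a = rng.normal(size=(6, 10)), rng.normal(size=(6, 3))
dW_l, dW_a = grads(x, W_l.T @ x, np.array([1.0, 0.0, 1.0]), W_l, W_a, np.zeros(3))
print(np.allclose(dW_l, 0.0), np.allclose(dW_a, 0.0))  # True True
```

Note how a zero coefficient $\lambda_t$ silences the corresponding attribute gradient entirely, which is exactly the "early stopping" effect described above.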
To update the filters $\mathbf{K}$, we propagate the errors of the DCN from top to bottom, following the well-known back-propagation (BP) strategy [43], where the gradient of each filter is computed by the cross-correlation between the corresponding input channel and the error map [39]. In particular, at the fully-connected layer shown in Fig. 2, the errors are obtained by first summing the losses of both landmark detection and attribute prediction, and then multiplying the sum by the transpose of the weight matrix. For each convolutional layer, the errors are obtained by deconvolution [39] between its filters and the back-propagated errors. Several pairs of face images and their features obtained by the filters $\mathbf{K}$ are shown in Fig. 3, which shows that the learned features are robust to large poses and expressions. For example, the features of smiling faces, or of faces with similar poses, exhibit similar patterns.
In the second step, we optimize the covariance matrix $\mathbf{\Omega}$ with $\mathbf{W}$, $\mathbf{K}$, and $\boldsymbol{\lambda}$ fixed. As discussed for Eqn. (8), the logarithm of $|\mathbf{\Omega}|$ can be relaxed by its upper bound. The optimization problem for finding $\mathbf{\Omega}$ then becomes

$$\min_{\mathbf{\Omega}} \; \mathrm{tr}(\mathbf{W}\mathbf{\Omega}^{-1}\mathbf{W}^\mathsf{T}) + \mathrm{tr}(\mathbf{\Omega}), \quad \text{s.t. } \mathbf{\Omega} \succeq 0. \quad (12)$$

For simplicity, we assume $\mathrm{tr}(\mathbf{\Omega}) = 1$. Problem (12) with respect to $\mathbf{\Omega}$ is then a simple semidefinite programming problem and has a closed-form solution, $\mathbf{\Omega} = \frac{(\mathbf{W}^\mathsf{T}\mathbf{W})^{1/2}}{\mathrm{tr}\big((\mathbf{W}^\mathsf{T}\mathbf{W})^{1/2}\big)}$.
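The closed-form update can be implemented with an eigendecomposition-based matrix square root (a sketch; the unit-trace normalization follows the assumption above):

```python
import numpy as np

def update_task_covariance(W):
    """Closed-form covariance update Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2}),
    computed via eigendecomposition of the symmetric PSD matrix W^T W."""
    vals, vecs = np.linalg.eigh(W.T @ W)
    S = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T   # (W^T W)^{1/2}
    return S / np.trace(S)

rng = np.random.default_rng(0)
Omega = update_task_covariance(rng.normal(size=(16, 5)))
print(round(float(np.trace(Omega)), 6))  # 1.0
```

The result is symmetric positive semidefinite with unit trace, so it is a valid task covariance at every iteration.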
In the third step, we update the dynamic coefficients $\boldsymbol{\lambda}$ with $\mathbf{W}$, $\mathbf{K}$, and $\mathbf{\Omega}$ fixed. By ignoring the constant terms in Eqn. (8), the optimization problem becomes

$$\min_{\boldsymbol{\lambda}} \; \sum_{t=1}^{T} \lambda_t \ell_t + \sum_{t=1}^{T} (\lambda_t - \rho_t)^2, \quad \text{s.t. } \lambda_t \ge \varepsilon, \quad (13)$$

where $\ell_t$ denotes the cross-entropy loss of task $t$ averaged over the training samples and $\varepsilon$ is a small constant close to zero. Each $\lambda_t$ has an analytical solution, $\lambda_t = \max\big(\rho_t - \tfrac{1}{2}\ell_t,\; \varepsilon\big)$, implying that each dynamic coefficient is determined by its expected value and the loss value averaged over the training samples. Here, we can define $\rho_t$ similarly to the task-wise early stopping of [24]. Suppose the current iteration is $\tau$; let $E^t_{val}(\tau)$ and $E^t_{tr}(\tau)$ be the values of the loss function of task $t$ on the validation set and training set at iteration $\tau$, respectively. We then have
$$\rho_t^{\tau} = \mu \times \frac{E^t_{val}(\tau - k) - E^t_{val}(\tau)}{E^t_{val}(\tau - k)} \times \frac{E^t_{tr}(\tau - k) - E^t_{tr}(\tau)}{E^t_{tr}(\tau - k)}, \quad (14)$$

where $\mu$ is a constant scale factor and $k$ controls a training strip of length $k$. The second factor in Eqn. (14) represents the tendency of the validation error: if the validation error drops rapidly within a period of length $k$, this factor is large, indicating that training should be emphasized because the task is valuable. Similarly, the third factor measures the tendency of the training error. The task-wise early stopping strategy proposed in [24] can thus be treated as a special case of the dynamic coefficient. In addition, we do not need a tuned threshold to decide whether to stop a task as in [24], and we obtain better performance (see Sec. 4.2).
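The behaviour described here (a scale factor multiplied by the drop tendencies of the validation and training errors over a strip of length $k$) can be sketched as follows; this is only an illustrative implementation of that intuition, with all names hypothetical:

```python
import numpy as np

def rho(val_errors, train_errors, k=5, mu=1.0):
    """Illustrative sketch of the expected coefficient rho_t: it grows when
    both the validation and training errors of a task are still dropping
    over the last k iterations, and shrinks toward zero once either curve
    flattens (local minimum or overfitting)."""
    v = np.asarray(val_errors, dtype=float)
    t = np.asarray(train_errors, dtype=float)
    val_drop = (v[-k - 1] - v[-1]) / max(v[-k - 1], 1e-12)
    train_drop = (t[-k - 1] - t[-1]) / max(t[-k - 1], 1e-12)
    return mu * max(val_drop, 0.0) * max(train_drop, 0.0)

falling = np.linspace(1.0, 0.5, 10)   # errors still improving
flat = np.full(10, 0.5)               # errors have plateaued
assert rho(falling, falling) > rho(flat, flat)  # plateau -> coefficient decays
```

A task whose errors have plateaued thus receives a near-zero expected coefficient, and through Eqn. (13) its $\lambda_t$ decays, while a task that starts improving again is automatically re-weighted upward.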
After training the TCDCN model on sparse landmarks and auxiliary attributes, it can readily be transferred from sparse landmark detection to handle more landmark points, e.g., the 68 points of the 300W dataset [20]. In particular, we initialize the network (the lower part of Fig. 2) with the learned shared representation and fine-tune it using a separate training set labeled only with dense landmark points. Since the shared representation of the pre-trained TCDCN model already captures the information from the attributes, auxiliary task learning is not necessary in the fine-tuning stage.
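A sketch of this transfer step (all names and shapes are illustrative assumptions): the shared layers are kept, the attribute heads are dropped, and a freshly initialized output layer sized for the dense configuration is attached before fine-tuning.

```python
import numpy as np

def transfer(shared_filters, d, n_dense_points, rng=None):
    """Fine-tuning setup sketch: reuse the pre-trained shared representation
    (filters), discard the auxiliary attribute heads, and attach a new
    dense-landmark output layer with 2 coordinates per point."""
    rng = rng or np.random.default_rng(0)
    W_dense = rng.normal(scale=0.01, size=(d, 2 * n_dense_points))
    return shared_filters, W_dense   # fine-tune both on the dense-label set

filters, W68 = transfer(shared_filters={"conv1": None}, d=64, n_dense_points=68)
print(W68.shape)  # (64, 136)
```

Only the new output layer starts from scratch; the shared filters start from the pre-trained values, which is what prevents overfitting on the smaller dense-label set.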
Network Structure. Fig. 2 shows the network structure of the TCDCN. The input of the network is a grayscale face image (normalized to zero mean and unit variance). The feature extraction stage contains four convolutional layers, three pooling layers, and one fully-connected layer. The kernels in each convolutional layer produce multiple feature maps. The commonly used rectified linear unit is selected as the activation function. For the pooling layers, we conduct max-pooling on non-overlapping regions of the feature map. The fully-connected layer following the fourth convolutional layer produces a feature vector that is shared by the multiple tasks in the estimation stage.
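The spatial sizes through such a conv/pool stack follow the usual output-size arithmetic; a small helper (the input and kernel sizes in the example are hypothetical, since the exact configuration is not listed here):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (or pooling) window."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical walk-through: a 40x40 input through a 5x5 convolution,
# then 2x2 max-pooling on non-overlapping regions (stride 2).
s = conv_out(40, 5)           # 36
s = conv_out(s, 2, stride=2)  # 18
print(s)  # 18
```

Non-overlapping max-pooling corresponds to `stride == kernel`, which is why each pooling layer halves the feature-map resolution.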
Evaluation metrics: In all cases, we report our results on two popular metrics [8, 9, 27, 13]: mean error and failure rate. The mean error is measured by the distances between estimated landmarks and the ground truths, normalized with respect to the inter-ocular distance. A mean error larger than 10% is reported as a failure.
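These two metrics can be sketched directly (the eye indices and the 2-D landmark layout are illustrative assumptions):

```python
import numpy as np

def mean_error(pred, gt, left_eye_idx=0, right_eye_idx=1):
    """Mean landmark error normalized by the inter-ocular distance.
    pred, gt: (L, 2) arrays of landmark coordinates."""
    iod = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return float(np.linalg.norm(pred - gt, axis=1).mean() / iod)

def failure_rate(errors, threshold=0.10):
    """Fraction of images whose normalized mean error exceeds 10%."""
    errors = np.asarray(errors, dtype=float)
    return float((errors > threshold).mean())

# Every landmark is off by 1 pixel; the eyes are 10 pixels apart.
gt = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
pred = gt + np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
print(mean_error(pred, gt))  # 0.1
```

Normalizing by the inter-ocular distance makes the metric invariant to face scale, which is why it is the standard choice across the datasets below.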
Multi-Attribute Facial Landmark (MAFL) (data and code of this work are available at http://mmlab.ie.cuhk.edu.hk/projects/TCDCN.html; Ping Luo is the corresponding author): To facilitate the training of the TCDCN, we construct a new dataset by annotating 22 facial attributes on 20,000 faces randomly chosen from the Celebrity face dataset [44]. The attributes are listed in Table I; all the attributes are binary, indicating whether an attribute is present or not. We divide the attributes into four groups to facilitate the following analyses. The grouping criterion is based on the main face region influenced by the associated attributes. In addition, we assign each face to one of five categories according to the degree of yaw rotation, which yields a fifth group named "head pose". All the faces in the dataset are accompanied by five facial landmark locations (eyes, nose, and mouth corners), which are used as the targets of the face alignment task. We randomly select 1,000 faces for testing and use the rest for training. Example images are provided in Fig. 12.
Annotated Facial Landmarks in the Wild (AFLW) [45]: AFLW contains 24,386 face images gathered from Flickr. We select this dataset because it is more challenging than conventional datasets such as BioID [46] and LFPW [14]. Specifically, AFLW has larger pose variations (39% of the faces are non-frontal in our testing images) and severe partial occlusions. Each face is annotated with at most 21 landmarks; some landmarks are not annotated due to out-of-plane rotation or occlusion. We randomly select 3,000 faces for testing. Fig. 12 depicts some examples.
Caltech Occluded Faces in the Wild (COFW) [8]: This dataset is collected from the web. It is designed to present faces with occlusions due to pose, the use of accessories (e.g., sunglasses), and interaction with objects (e.g., food, hands). The dataset includes 1,007 faces, each annotated with 29 landmarks, as shown in Fig. 15.
Helen [23]: Helen contains 2,330 faces from the web, annotated densely with 194 landmarks (Fig. 13).
300W [20]: This dataset is a well-known standard benchmark for face alignment. It is a collection of 3,837 faces from existing datasets: LFPW [14], AFW [5], Helen [23], and XM2VTS [47]. It also contains faces from an additional subset called IBUG, consisting of images with poses and expressions that are difficult for face alignment, as shown in Fig. 16. Each face is densely annotated with 68 landmarks.
Table I. The facial attribute groups.

Group | Attributes
eyes | …
nose | …
mouth | …
global | …
head pose | …
The dynamic task coefficient is essential in the TCDCN to coordinate the learning of tasks with different convergence rates. To verify its effectiveness, we train the proposed TCDCN with and without this technique. Fig. 4(a) plots the main task's error on the training and validation sets for up to 200,000 iterations. Without the dynamic task coefficient, the training error converges slowly and exhibits substantial oscillations. In contrast, the convergence on both the training and validation sets is fast and stable when using the proposed dynamic task coefficient. In addition, we illustrate the dynamic task coefficients of two attributes in Fig. 4(b). We observe that the values of their coefficients drop after a few thousand iterations, preventing these auxiliary tasks from overfitting. The coefficients may increase again when these tasks become effective in the learning process, as shown by the sawtooth-like pattern of the coefficient curves. These two behaviours work together, facilitating the smooth convergence of the main task, as shown in Fig. 4(a).
In addition, we compare the dynamic task coefficient with the task-wise early stopping proposed in the earlier version of this work [24]. As shown in Table II, the dynamic task coefficient achieves better performance than the task-wise early stopping scheme, because it coordinates the different auxiliary tasks dynamically across the whole training process (see Sec. 3.1).
Method  Without inter-task correlation learning  With inter-task correlation learning
task-wise early stopping [24]  8.35  8.21
dynamic task coefficient  8.07  7.95
To investigate how the auxiliary tasks help facial landmark detection, we study the learned correlation between these tasks and the facial landmarks. In particular, having learned the task covariance matrix, we can compute the correlation between any two tasks by normalizing their covariance with the square root of the product of their variances. In Fig. 5, we present the learned correlation between the attribute groups and the facial landmarks. For each attribute group, we compute the average absolute value of its correlation with each of the five facial landmarks. For the "mouth" group, the correlations with the corresponding landmarks (i.e., the mouth corners) are higher than the others. Similar trends can be observed for the "nose" and "eyes" groups. For the "global" group, the correlations are roughly even across landmarks because these attributes are determined by the global face structure. The correlation of the "pose" group is much higher than that of the others, because head rotation directly affects the landmark distribution. Moreover, in Fig. 6, we randomly choose one attribute from each group and visualize its correlation with the landmarks. For clarity, we normalize each attribute's correlation over the landmarks (i.e., the correlations over the five landmarks sum to one). We again observe that each attribute tends to be most correlated with its corresponding landmarks.
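The normalization described above is the standard covariance-to-correlation conversion; a minimal numpy sketch (the function names and the group-averaging helper are ours, mirroring how the per-group values in Fig. 5 are aggregated):

```python
import numpy as np

def covariance_to_correlation(cov):
    """Normalize a task covariance matrix into a correlation matrix:
    corr[i, j] = cov[i, j] / sqrt(cov[i, i] * cov[j, j])."""
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

def group_landmark_correlation(corr, group_idx, landmark_idx):
    """Average absolute correlation of one attribute group (rows
    `group_idx`) with each landmark task (columns `landmark_idx`)."""
    return np.abs(corr[np.ix_(group_idx, landmark_idx)]).mean(axis=0)
```

The diagonal of the resulting matrix is 1 by construction, and each off-diagonal entry lies in [-1, 1].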
In addition, we visualize the learned correlation between the auxiliary tasks in Fig. 7. Because the attributes "Left Profile", "Left", "Frontal", "Right", and "Right Profile" are mutually exclusive (i.e., only one can be positive for a face) and jointly describe the yaw rotation, we aggregate these five attributes into a single attribute ("pose") by computing the average absolute correlation with the other attributes. One can observe some intuitive results in this figure. For example, the head pose is unrelated to the other attributes; "Heavy Makeup" has a high positive correlation with "Attractive" and a high negative correlation with "Male". In Table II, we show the mean errors of facial landmark localization on the MAFL dataset with and without inter-task correlation learning (without correlation learning, we simply use the multiple tasks as targets and omit the corresponding term in Eq. (8)). The results demonstrate the effectiveness of task correlation learning.
To examine the influence of the auxiliary tasks more comprehensively, we evaluate different variants of the proposed model. The first variant is trained on facial landmark detection only. We train another five variants on facial landmark detection together with the auxiliary tasks in the groups "eyes", "nose", "mouth", "global", and "head pose", respectively. In addition, we synthesize a task with a random objective and train it along with facial landmark detection, which yields a sixth variant. The full model is trained using all the attributes. For brevity, we name each variant by facial landmark detection (FLD) and its auxiliary tasks, e.g., "FLD only", "FLD+eyes", "FLD+pose", "FLD+all".
It is evident from Fig. 8 that optimizing landmark detection with auxiliary tasks is beneficial. In particular, "FLD+all" outperforms "FLD only" by a large margin, with a reduction of over 7% in failure rate. When a single auxiliary task group is present, "FLD+pose" and "FLD+global" perform better than the others. This is not surprising, since pose variation directly affects the locations of all landmarks and the "global" attribute group influences the whole face region. The other auxiliary tasks, such as "eyes" and "mouth", have a comparatively smaller influence on the final performance, since they mainly capture local information of the face. For "FLD+random", the performance is hardly improved, showing that the main and auxiliary tasks must be related for the main task to gain.
In addition, we show the relative improvement brought by each group of attributes for each landmark in Fig. 9. In particular, we define the relative improvement as (original error − new error) / original error, where the original error is produced by the "FLD only" model. We observe a trend that each group chiefly facilitates the landmarks in the corresponding face region. For example, for the "mouth" group, the benefits are mainly observed at the mouth corners. This is intuitive, since attributes like smiling drive the lower part of the face, involving the zygomaticus and levator labii superioris muscles, more than the upper facial region. Learning these attributes develops a shared representation that describes the lower facial region, which in turn facilitates the localization of the mouth corners. Similarly, for the "eyes" group, the improvement in eye localization is much more significant than that for the mouth and nose. However, we observe that the "nose" group improves eye and mouth localization remarkably. This is mainly because the nose lies at the center of the face, so its location constrains the other landmarks; in a frontal face, the horizontal coordinate of the nose is close to the mean of those of the eyes. For the "pose" and "global" groups, the improvement is significant for all landmarks. Fig. 10 depicts the improvements brought by adding the "eyes" and "mouth" attributes. Fig. 12 shows more example results, demonstrating the effectiveness of TCDCN on various face appearances.
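Taking relative improvement as the fractional error reduction over the "FLD only" baseline, it can be computed per landmark with a minimal helper (the function name is ours):

```python
def relative_improvement(err_original, err_new):
    """Per-landmark relative error reduction over the baseline:
    (original error - new error) / original error."""
    return [(eo - en) / eo for eo, en in zip(err_original, err_new)]
```

A value of 0.25 for a landmark thus means the auxiliary group removed a quarter of the baseline's localization error at that point.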
Although TCDCN, cascaded CNN [13], and CFAN [12] are all built upon deep models, we show that the proposed model achieves better detection accuracy with lower computational cost and model complexity. We use the full model "FLD+all" and the publicly available binaries of cascaded CNN [13] and CFAN [12] in this experiment.
Landmark localization accuracy: In this experiment, we employ the testing images of MAFL and AFLW [45] for evaluation. It is observed from Fig. 11 that the overall accuracy of the proposed method is superior to that of cascaded CNN and CFAN.
Model complexity: The proposed method uses only one CNN, whereas the cascaded CNN [13] deploys multiple CNNs in different cascaded layers (23 CNNs in its implementation). For each CNN, both our method and the cascaded CNN [13] have four convolutional layers and two fully connected layers, with comparable input image sizes; however, the convolutional layers in [13] use locally unshared kernels. Hence, TCDCN has much lower computational cost and model complexity. The cascaded CNN requires 0.12 s to process an image on an Intel Core i5 CPU, whilst TCDCN takes only 18 ms, about 7 times faster; on an NVIDIA GTX760 GPU, TCDCN takes 1.5 ms. Similarly, the complexity of CFAN [12] is larger due to its use of multiple auto-encoders, each of which is fully connected in all layers. Table III details the running time and network complexity.
As discussed in Section 3.4, the trained TCDCN can be transferred to handle more landmarks beyond the five major facial points. The main idea is to pre-train a TCDCN on sparse landmark annotations and multiple auxiliary tasks, and then fine-tune it with dense landmark annotations.
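As a toy illustration of this transfer scheme, the sketch below keeps a fixed "pre-trained" shared layer, swaps the 5-landmark output head (10 coordinates) for a 68-landmark head (136 coordinates), and fine-tunes only the new head by gradient descent. Every shape, constant, and the purely linear architecture are illustrative stand-ins, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

D_in, D_hid = 64, 32  # toy input / shared-feature sizes (assumed)
# Stand-in for layers pre-trained with 5 landmarks + attributes.
W_shared = rng.standard_normal((D_in, D_hid)) * 0.1

# Replace the 5-landmark head (10 outputs) with a 68-landmark head (136),
# initialized from scratch.
W_head = np.zeros((D_hid, 136))

X = rng.standard_normal((200, D_in))   # stand-in face features
Y = rng.standard_normal((200, 136))    # stand-in dense annotations

def loss(W):
    """Mean squared error of the dense-landmark predictions."""
    pred = X @ W_shared @ W
    return np.mean((pred - Y) ** 2)

lr = 0.05
before = loss(W_head)
for _ in range(200):  # fine-tune only the new head; shared layer is frozen
    H = X @ W_shared
    grad = 2 * H.T @ (H @ W_head - Y) / len(X)
    W_head -= lr * grad
after = loss(W_head)
```

In the real setting the shared layers would also be fine-tuned with a small learning rate; freezing them here just keeps the sketch short.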
We compare against various state-of-the-art methods. The first class of methods directly predicts the facial landmarks by regression: (1) Robust Cascaded Pose Regression (RCPR) [8]; (2) Explicit Shape Regression (ESR) [9]; (3) Supervised Descent Method (SDM) [21]; (4) regression based on Local Binary Features (LBF) [22]; (5) regression based on Ensembles of Regression Trees (ERT) [48]; (6) Coarse-to-Fine Auto-Encoder Networks (CFAN) [12] (as this method can predict dense landmarks, we compare with it again); (7) Coarse-to-Fine Shape Searching (CFSS) [49]. The second class of methods employs a face template: (8) Tree Structured Part Model (TSPM) [5], which jointly estimates the head pose and facial landmarks; (9) A Cascaded Deformable Model (CDM) [6]; (10) STASM [50], which is based on the Active Shape Model [51]; (11) Component-based ASM [23]; (12) the robust Discriminative Response Map Fitting (DRMF) method [31]; (13) Gauss-Newton Deformable Part Models (GNDPM) [7]. In addition, we compare with (14) the commercial face analysis software Face++ API [52]. For RCPR [8], SDM [21], CFAN [12], TSPM [5], CDM [6], STASM [50], DRMF [31], and Face++ [52], we use their publicly available implementations. For methods that include their own face detector (such as TSPM [5] and CDM [6]), we avoid detection errors by cropping the image around the face. For methods that do not release code, we report the results from the related literature.
Evaluation on Helen [23]: Helen consists of 2,000 training and 330 testing images, as specified in [23]. The 194-landmark annotation is from the original dataset and the 68-landmark annotation is from [20]. Table IV reports the performance of the competitors and the proposed method. Most of the images are of high resolution and the faces are near-frontal. Although our method uses only grayscale images as input, it still achieves the best results. Fig. 13 visualizes some of our results. Driven by rich facial attributes, our model captures various facial expressions accurately.
Method  194 landmarks  68 landmarks
STASM [50]  11.1  –
CompASM [23]  9.10  –
DRMF [31]  –  6.70
ESR [9]  5.70  –
RCPR [8]  6.50  5.93
SDM [21]  5.85  5.50
LBF [22]  5.41  –
CFAN [12]  –  5.53
CDM [6]  –  9.90
GNDPM [7]  –  5.69
ERT [48]  4.90  –
CFSS [49]  4.74  4.63
TCDCN  4.63  4.60
Evaluation on 300W [20]: We follow the same protocol as [22]: the training set contains 3,148 faces, comprising AFW and the training sets of LFPW and Helen; the test set contains 689 faces, comprising IBUG and the testing sets of LFPW and Helen. Table V demonstrates the superiority of the proposed method. In particular, on the challenging subset (the IBUG faces), TCDCN produces a significant error reduction of over 10% relative to the state of the art [49]. As can be seen from Fig. 16, the proposed method exhibits superior capability in handling difficult cases with large head rotation and exaggerated expressions, thanks to the shared representation learned with the auxiliary tasks. Fig. 17 shows more results of the proposed method on the Helen [23], IBUG [20], and LFPW [14] datasets.
Method  Common Subset  Challenging Subset  Full Set
CDM [6]  10.10  19.54  11.94
DRMF [31]  6.65  19.79  9.22
RCPR [8]  6.18  17.26  8.35
GNDPM [7]  5.78  –  –
CFAN [12]  5.50  16.78  7.69
ESR [9]  5.28  17.00  7.58
SDM [21]  5.57  15.40  7.50
ERT [48]  –  –  6.40
LBF [22]  4.95  11.98  6.32
CFSS [49]  4.73  9.98  5.76
TCDCN  4.80  8.60  5.54
Evaluation on COFW [8]: The testing protocol is the same as [8]: the training set includes LFPW [14] and 500 COFW faces, and the testing set includes the remaining 507 COFW faces. This dataset is more challenging as it was collected to emphasize occlusion. The quantitative evaluation is reported in Fig. 14, and example results of our algorithm are depicted in Fig. 15. It is worth pointing out that the proposed method outperforms RCPR [8] even though we do not explicitly learn and detect occlusions as [8] does.
Instead of learning facial landmark detection in isolation, we have shown that more robust landmark detection can be achieved through joint learning with heterogeneous but subtly correlated auxiliary tasks, such as appearance attributes, expression, demographics, and head pose. The proposed Tasks-Constrained DCN allows errors of the auxiliary tasks to be back-propagated through the deep hidden layers, constructing a shared representation relevant to the main task. We also show that by learning dynamic task coefficients, the auxiliary tasks can be exploited more efficiently. Thanks to learning with the auxiliary attributes, the proposed model is more robust to faces with severe occlusion and large pose variation than existing methods. We have observed that a deep model need not be cascaded [13] to achieve better performance. The lighter-weight CNN allows real-time performance without a GPU or parallel computing techniques. Future work will explore deep learning with auxiliary information for other vision problems.
AAAI Conference on Artificial Intelligence, 2015.
IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2879–2886.
T. F. Cootes, M. C. Ionita, C. Lindner, and P. Sauer, "Robust and accurate shape model fitting using random forest regression voting," in European Conference on Computer Vision, 2012, pp. 278–291.
A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. Xing, "Training hierarchical feedforward visual recognition models using transfer learning from pseudo-tasks," in European Conference on Computer Vision, 2008, pp. 69–82.
R. Collobert and J. Weston, "A unified architecture for natural language processing: Deep neural networks with multitask learning," in International Conference on Machine Learning, 2008, pp. 160–167.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.