MaskFace: multi-task face and landmark detector

05/19/2020 ∙ by Dmitry Yashunin, et al. ∙ HARMAN International

Currently, single-task approaches for face detection and landmark localization dominate the domain of facial analysis. In this paper we draw attention to multi-task models that solve both tasks simultaneously. We present a highly accurate model for face and landmark detection. The method, called MaskFace, extends previous face detection approaches by adding a keypoint prediction head. The new keypoint head adopts ideas of Mask R-CNN by extracting facial features with a RoIAlign layer. The keypoint head adds small computational overhead in the case of few faces in the image while improving accuracy dramatically. We evaluate MaskFace's performance on a face detection task on the AFW, PASCAL face, FDDB and WIDER FACE datasets and on a landmark localization task on the AFLW and 300W datasets. For both tasks MaskFace achieves state-of-the-art results, outperforming many single-task and multi-task models.




1 Introduction

In recent years facial image analysis tasks have become very popular because of their attractive practical applications in automotive, security, retail and social networks. Face analysis starts from the basic tasks of bounding box detection and landmark localization. The common pipeline is to sequentially apply single-task models to solve these problems independently: 1) detect faces, 2) detect landmarks (also called keypoints) [48].

However, it is a challenge to develop a software system consisting of many sequentially applied convolutional neural networks (CNNs), because each CNN must be trained separately and must cope with errors made by previous models. Different heuristics and special training procedures are applied to achieve robustness of the overall system. Crucially, single-task CNNs cannot benefit from shared deep representations and additional supervision provided by multiple tasks. Recent studies show that multi-task CNNs that produce multiple predictive outputs can offer higher accuracy and better speed than single-task counterparts but are difficult to train properly [20, 8, 35].

Figure 1:

MaskFace design. Predicted bounding boxes from context features are directly used for feature extraction and facial landmark localization.

Figure 2: Outline of the proposed approach. MaskFace adopts the feature pyramid followed by independent context modules. Outputs of context modules are used for face detection and landmark localization.

We argue that despite the recent achievements of multi-task models in the domain of facial analysis, they receive little attention and their accuracy falls short of single-task rivals. The most popular multi-task model, MTCNN, uses cascades of shallow CNNs, but they do not share feature representations [55, 4]. Modern end-to-end multi-task approaches are mainly represented by single-shot methods. For landmark localization either regression heads [6, 10] or heatmaps of keypoints [47, 32] are used. The heatmap-based approaches suffer from low face detection accuracy, while regression-based ones have worse landmark localization. The reason is that regression-based methods cannot afford strong landmark prediction heads. In addition, there is a misalignment between the spatially discrete features of activation maps and the continuous positions of facial landmarks. This misalignment cannot be properly handled by shallow convolutional layers.

In this paper we propose an accurate multi-task face and landmark detection model that combines advantages of previous approaches. Our model extends popular face detection approaches such as RetinaFace [10] and SSH [39] by adopting ideas of Mask R-CNN [20]. At the first stage we predict bounding boxes; at the second stage predicted bounding boxes are used for extraction of facial features from shared representations (see Figure 1). Unlike Mask R-CNN and other multi-stage approaches we predict bounding boxes in a single forward pass, which improves performance [20, 40, 29, 30]. For feature extraction we use a RoIAlign layer [20] offering good pixel-to-pixel alignment between predicted bounding boxes and discrete feature maps. To improve detection of tiny faces we use a feature pyramid [29] and context modules [10, 39, 13] (see Figures 2 and 3). The feature pyramid transmits deep features to shallow layers, while the context modules increase the receptive field and make prediction layers stronger. MaskFace's landmark head is as fast as the original Mask R-CNN head. In the case of few faces in the image, prediction of landmarks adds negligible computational overhead.

In summary our contribution is twofold. Firstly, we propose using the mask head for facial keypoint detection, and we perform experiments on sparse as well as dense landmarks, which are usually omitted in previous multi-task models. Secondly, we show that using the mask head we achieve state-of-the-art results on both the face detection and landmark localization tasks. We systematically study how different model parameters influence the accuracy and perform extensive experiments on popular datasets. This highlights that a well-designed model has a significant impact on the result and can outperform many more sophisticated approaches.

2 Related work

Object detectors are mainly based on the idea of default bounding boxes (also called anchors) that densely cover an image at all scales [40]. Anchors are classified, and their positions are refined. In single-stage face detectors bounding boxes are predicted in a single forward pass [33, 57]; in two-stage detectors there are two rounds of anchor refinement [40, 28]. Anchor-based approaches suffer from a significant class imbalance between positive and negative anchors. The class imbalance problem is usually solved by online hard example mining (OHEM) [33] or a dynamically scaled cross-entropy loss (focal loss) [30, 33].

To detect difficult objects, such as tiny faces or faces with complex poses and heavy occlusion, different approaches are applied. In [57, 28] the authors use densified anchor tiling. In the Feature Pyramid Network (FPN) [29] semantically strong features with low resolution are combined with weak features from high-resolution layers. Context is also incorporated to improve detection of small objects. In two-stage detectors, context can be modeled by explicitly enlarging the window around the candidate proposals [50]. In single-shot methods context is incorporated by enlarging the receptive field with additional convolutional layers [39, 28].

Regression and segmentation methods are used for facial landmark prediction. Regression approaches are often based on $L_1$ and $L_2$ losses or their modifications [15, 21, 1]. Also, multiple stages of landmark refinement can be used to improve the accuracy [61, 15]. There are approaches that use 3D face reconstruction to predict dense landmarks [14, 59].

Multi-task models combine several single-task methods in one model. In MTCNN the authors use an image pyramid and cascades of shallow CNNs to predict face bounding boxes and landmarks. Recent methods adopt feature pyramids that naturally exist in CNNs [29]. For landmark localization additional regression heads [6, 10] are added to commonly used detectors such as SSD [33] and RetinaNet [30]. In [10, 5] the authors add branches that predict 3D shapes of faces. Mask R-CNN offers a general and flexible architecture for multi-tasking [20]. It is based on RoIAlign pooling, which extracts features for proposals containing objects. Extracted features can be used for different tasks, for example, for segmentation, keypoint localization [20] or predicting the 3D shape of objects [19].

Differences from the closest works. Unlike SSH, RetinaNet and other single- and multi-task face detection models, we add the mask head, which significantly improves landmark localization accuracy. Most papers devoted to face detection use OHEM, while we show that a focal loss can offer state-of-the-art results. Unlike Mask R-CNN [20], we predict bounding boxes in a single forward pass. Unlike RetinaMask [16], we use context modules and focus on face detection and landmark localization.

3 Method

3.1 Multi-task loss

We model a landmark location as a one-hot mask and adapt MaskFace to predict $K$ masks, one for each of the $K$ landmarks, e.g. left eye, right eye, etc. The multi-task loss for an image is defined as:

\[ L = L_{cls} + L_{box} + \lambda_{kp} L_{kp}, \]

where $L_{cls}$ is an anchor binary classification loss (face vs background), $L_{box}$ is a regression loss of anchors' positions, and $L_{kp}$ is a localization loss of keypoints weighted with a parameter $\lambda_{kp}$.

For anchor classification we use a focal loss [30]:

\[ L_{cls} = -\frac{1}{N_{pos}} \Big[ \sum_{i \in Pos} \alpha \, (1 - p_i)^{\gamma} \log p_i + \sum_{i \in Neg} (1 - \alpha) \, p_i^{\gamma} \log(1 - p_i) \Big]. \]
For bounding box regression we apply a smooth version of the $L_1$ loss:

\[ L_{box} = \frac{1}{N_{pos}} \sum_{i \in Pos} \mathrm{smooth}_{L_1}\big(t_i - t_i^{*}\big). \]
To predict landmark locations, we apply a spatial cross-entropy loss to each of the landmark masks:

\[ L_{kp} = -\frac{1}{K N_{pos}} \sum_{i \in Pos} \sum_{k=1}^{K} \log \hat{M}_{i,k}\big(x_{i,k}, y_{i,k}\big), \]

$p_i$ is a predicted probability of anchor $i$ being a face; $N_{pos}$ is the number of positive anchors, which should be classified as faces ($p_i$ should be equal to 1); negative anchors are ones that should be classified as background ($p_i$ should be equal to 0). Pos and Neg are sets of indices of positive and negative anchors, respectively. $\alpha$ is a balancing parameter between the classification loss of positive and negative anchors; $\gamma$ is a focusing parameter that reduces the loss for well-classified anchors.

$t_i$ is a vector representing the 4 parameterized coordinates of a predicted bounding box, and $t_i^{*}$ is that of the ground-truth box associated with a positive anchor $i$. $M_{i,k}$ and $\hat{M}_{i,k}$ are the predicted logits and the spatially softmax-normalized mask for a landmark $k$ for a positive sample $i$, respectively. $x_{i,k}$, $y_{i,k}$ are the indices of the mask pixel at which the ground truth landmark $k$ in a positive sample $i$ is located. For each of the $K$ keypoints the training target is a one-hot binary mask in which only a single pixel is labeled as foreground [20].
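As an illustration, the one-hot training target for a landmark can be built by mapping the landmark's image coordinates into the mask grid of its RoI. This is a minimal sketch with hypothetical helper names and a 56x56 mask assumed; the paper's actual implementation may differ:

```python
# Build the index of the single foreground pixel of a one-hot target mask
# for one landmark inside its RoI. Hypothetical helper for illustration.
def onehot_keypoint_target(kp_x, kp_y, roi, mask_size=56):
    """Map a landmark (kp_x, kp_y) in image coordinates into a
    mask_size x mask_size grid aligned with the RoI (x1, y1, x2, y2).
    Returns (target_x, target_y), the one foreground cell."""
    x1, y1, x2, y2 = roi
    # Relative position inside the RoI, clipped to [0, 1).
    u = min(max((kp_x - x1) / (x2 - x1), 0.0), 1.0 - 1e-6)
    v = min(max((kp_y - y1) / (y2 - y1), 0.0), 1.0 - 1e-6)
    return int(u * mask_size), int(v * mask_size)

# Example: a landmark at the RoI center lands in the middle cell (28, 28).
tx, ty = onehot_keypoint_target(50.0, 50.0, (0.0, 0.0, 100.0, 100.0))
```

All other cells of the $56 \times 56$ target are background, matching the one-hot definition above.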

Parameters $\alpha$ and $\gamma$ are set to 0.25 and 2, respectively, following [10]. If not mentioned, we select a value of the keypoint loss weight $\lambda_{kp}$ equal to 0.25. The higher $\lambda_{kp}$, the more accurate the localization of landmarks, but face detection degrades. Our experiments show that $\lambda_{kp} = 0.25$ gives a good trade-off between the accuracy of face detection and landmark localization.
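The per-anchor classification and box terms can be sketched as follows. These are the standard focal-loss and smooth-L1 forms with the stated alpha = 0.25 and gamma = 2; the exact reductions and box parameterization used in the paper are assumptions here:

```python
import math

# Focal loss for a single anchor with predicted face probability p.
def focal_loss(p, is_positive, alpha=0.25, gamma=2.0):
    if is_positive:
        # Well-classified positives (p near 1) are strongly down-weighted.
        return -alpha * (1.0 - p) ** gamma * math.log(p)
    return -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)

# Smooth L1 (Huber) on a single coordinate difference x.
def smooth_l1(x, beta=1.0):
    ax = abs(x)
    return 0.5 * ax * ax / beta if ax < beta else ax - 0.5 * beta

# An easy positive anchor contributes far less loss than a hard one:
easy = focal_loss(0.99, True)
hard = focal_loss(0.50, True)
```

Summing these terms over positive (and, for classification, negative) anchors and dividing by the number of positives gives the losses above.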

3.2 Architecture

MaskFace design is straightforward and based on the maskrcnn-benchmark [36]. MaskFace has two prediction heads: face detection and landmark localization. The detection head outputs bounding boxes of faces. Predicted bounding boxes are used to extract face features from fine resolution layers allowing precise localization of landmarks. To achieve good pixel-to-pixel alignment during feature extraction we adopt a RoIAlign layer following Mask R-CNN [20]. Extracted face features are used to predict localization masks of landmarks.

Face detection head. We adopt the FPN architecture [29]. FPN combines low-resolution, semantically strong features with high-resolution, semantically weak features via a top-down pathway and lateral connections (see Figure 2). The result is a feature pyramid with rich semantics at all levels, which is necessary for detection of tiny faces. FPN outputs a set of feature maps called {$P_2$, $P_3$, $P_4$, $P_5$, $P_6$} with 256 channels each and strides of {4, 8, 16, 32, 64}, respectively. Feature maps from $P_2$ to $P_5$ are calculated using the feature maps of the last backbone layers with strides of {4, 8, 16, 32}, respectively, which are called {$C_2$, $C_3$, $C_4$, $C_5$}. The {$P_2$, ..., $P_5$} layers have the same spatial size as the corresponding {$C_2$, ..., $C_5$} layers. $P_6$ is calculated by applying a max-pooling layer with a stride of 2 to $P_5$.
To increase the receptive field and add context to predictions we apply context modules with independent weights to {$P_2$, ..., $P_6$} [39, 10]. The context module design is similar to the inception module (see Figure 3) [43]. Sequential 3x3 convolutional filters are used instead of larger convolutional filters to reduce the number of calculations. ReLU is applied after each convolutional layer. The outputs of all branches are concatenated. Our experiments suggest that the context modules improve accuracy.

Figure 3: Context module design. Notation of convolutional layers: filter size, number of channels.

Feature maps after the context modules are used for anchor box regression and classification by applying 1x1 convolutional layers with shared weights. Note that unlike other multi-stage detectors we do not use a second stage of bounding box refinement, which increases performance.

We use translation-invariant anchor boxes similar to [40]. The base anchors have areas of {16², 32², 64², 128², 256²} on pyramid levels from $P_2$ to $P_6$, respectively. At each pyramid level we use anchors with sizes of {$2^0$, $2^{1/3}$, $2^{2/3}$} of the base anchors for dense scale coverage. All anchors have the same aspect ratio of 1.0. There are three anchors per level, and across levels they cover the scale range 16 – 406 pixels. In total there are around 112k anchors for an input image of 640x640 resolution.
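The stated 16 – 406 pixel coverage follows directly from such a scheme. Assuming a standard RetinaNet-style layout (base sizes {16, 32, 64, 128, 256} for the five pyramid levels, per-level multipliers {2^0, 2^(1/3), 2^(2/3)}), a quick check:

```python
# Verify the anchor scale coverage of the assumed RetinaNet-style scheme:
# five base sizes, three multipliers per level -> 15 distinct scales.
base_sizes = [16, 32, 64, 128, 256]              # one per pyramid level
multipliers = [2 ** (k / 3) for k in range(3)]   # three anchors per level

scales = sorted(b * m for b in base_sizes for m in multipliers)
smallest, largest = scales[0], scales[-1]        # 16 and 256 * 2^(2/3) ≈ 406
```

The largest anchor, 256 · 2^(2/3) ≈ 406 pixels, matches the upper end of the quoted 16 – 406 range.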

Anchor matching. If an anchor box has an intersection-over-union (IoU) overlap with a ground truth box greater than 0.5, the anchor is considered a positive example. If the overlap is less than 0.3, the anchor is assigned a negative label. All anchors with overlaps between 0.3 and 0.5 are ignored during training. Additionally, for anchor assignment we use a low-quality matching strategy: for each ground truth box, we find the set of anchor boxes that have the maximum overlap with it; for each anchor in the set, if the anchor is unmatched, we match it to the ground truth with the highest IoU. Our experiments suggest using the low-quality matching strategy because it improves accuracy.
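A simplified sketch of this matching procedure (the real implementation operates on roughly 112k anchors with vectorized operations; the names here are hypothetical):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match_anchors(anchors, gts, pos_thr=0.5, neg_thr=0.3):
    """Per-anchor label: gt index, -1 (background), or -2 (ignored)."""
    labels = []
    for a in anchors:
        ious = [iou(a, g) for g in gts]
        best = max(ious) if ious else 0.0
        if best >= pos_thr:
            labels.append(ious.index(best))
        elif best < neg_thr:
            labels.append(-1)
        else:
            labels.append(-2)   # in the 0.3-0.5 band: ignored
    # Low-quality matching: each ground truth claims its best anchors
    # even if their IoU is below the positive threshold.
    for gi, g in enumerate(gts):
        ious = [iou(a, g) for a in anchors]
        best = max(ious)
        for ai, v in enumerate(ious):
            if v == best and labels[ai] < 0:
                labels[ai] = gi
    return labels
```

For example, a ground truth whose best anchor overlap is only 0.4 still gets that anchor assigned through the low-quality step.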

Landmark localization head. Predictions from the detection head are treated as regions of interest (RoIs) for extracting features for landmark localization. At first, proposals are filtered: predictions with confidences less than 0.02 are ignored, and non-maximum suppression (NMS) [3] with a threshold of 0.6 is applied to the remaining predictions. After that, proposals are matched with ground truth boxes. If the IoU overlap of a proposal with a ground truth box is higher than 0.6, the proposal is used for extracting landmark features from the appropriate layer of the feature pyramid.

Following FPN we assign a RoI of width $w$ and height $h$ to the level $P_k$ of our feature pyramid by:

\[ k = \min\!\Big(k_{max},\; \max\!\big(k_{min},\; \lfloor k_0 + \log_2(\sqrt{wh}/224) \rfloor \big)\Big), \]

where $k_0 = 4$ and $k_{max} = 6$. This rule means that if the size $\sqrt{wh}$ of a predicted bounding box is smaller than 112, it is assigned to the finest allowed feature layer $P_{k_{min}}$; between 112 and 224 it is assigned to $P_3$, etc. The maximum layer is $P_6$. Unlike the previous approaches [29, 16] we use the finer-resolution feature map $P_2$ with a stride of 4 for feature extraction. Our experiments show that high-resolution feature maps are essential for precise landmark localization. The lower $k_{min}$, the more precise the landmark detection (see Section 4.4). If not mentioned, $k_{min} = 2$ is used.
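Under standard FPN assumptions (k0 = 4, reference size 224, levels clamped between the finest allowed level and P6), the assignment rule can be sketched as:

```python
import math

# Assign a RoI of size w x h to a pyramid level P_k (FPN-style rule).
def assign_level(w, h, k_min=2, k_max=6, k0=4, ref=224.0):
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / ref))
    return max(k_min, min(k_max, k))

# Small RoIs go to the fine, stride-4 level P2; large ones to coarser levels.
```

With k_min = 2 a 50x50 face is served by P2, a 224x224 face by P4, and very large faces are capped at P6.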

We adopt a RoIAlign layer [20] to extract features from the assigned feature maps. RoIAlign allows proper alignment of the extracted features with the input RoIs. RoIAlign outputs 14x14 resolution features, which are fed into 8 consequent convolutional layers (conv3x3, 256 filters, stride 1), a single transposed convolutional layer (convtranspose2d 4x4, $K$ filters, stride 2), and a bilinear interpolation layer that upsamples the masks to 56x56 resolution. The output mask tensor has a size of $K$x56x56 ($K$ is the number of facial landmarks).
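The pixel-to-pixel alignment RoIAlign provides comes from bilinear interpolation at continuous sample points instead of rounding to the nearest feature cell. A minimal single-point sketch of that interpolation (not the full pooling layer):

```python
import math

def bilinear_sample(fmap, x, y):
    """Sample a 2D feature map (list of rows) at a continuous (x, y) --
    the interpolation RoIAlign uses instead of coordinate rounding."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    y1 = min(y0 + 1, len(fmap) - 1)
    dx, dy = x - x0, y - y0
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bot = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bot * dy

grid = [[0.0, 1.0], [2.0, 3.0]]
val = bilinear_sample(grid, 0.5, 0.5)  # midpoint of the 2x2 grid -> 1.5
```

Averaging several such samples per output bin yields the 14x14 RoIAlign output; the exact sampling grid is an implementation detail not specified here.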

We emphasize that the keypoint head only slightly increases the number of calculations compared to overall feature extraction during detection (see Table 1 and Section 4.5). If there are few faces on the image the keypoint head can be used almost for free while providing precise landmark localization.

Table 1: GFLOPs of the feature extractors and the keypoint head (1 proposal) for an input image of 640x640. For a few faces in the image landmark localization adds small overhead to overall face detection.
Figure 4: Precision-recall curves on the WIDER FACE validation and test subsets (easy, medium and hard). The higher the better.

3.3 Training

Data augmentation. Training on the WIDER FACE dataset. Images are resized with a randomly chosen scale factor between 0.5 and 2.5. After that we filter the image annotations: we remove small and large bounding boxes with areas outside the $[0.4 A_{min}, 2.5 A_{max}]$ range, where $A_{min}$ is the area of the smallest anchor, equal to 16x16, and $A_{max}$ is the area of the largest anchor, equal to 406x406. This filtering is necessary because we use the low-quality matching strategy, so all ground truth boxes, including ones with a small anchor overlap, are matched. Such loose matching for very small and large ground truth boxes would be harmful for training.

After that we crop patches of size 640x640. With probability 0.5 each, we use either 1) random cropping or 2) random cropping around a randomly chosen bounding box. If there are no bounding boxes in the image, we perform common random cropping. Enforcing about one half of the cropped patches to contain at least one bounding box helps to enrich training batches with positive samples and increases accuracy. We apply random horizontal flipping and color distortions to the final images.
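The 50/50 cropping choice can be sketched as below; the clamping behavior at image borders and the helper names are assumptions for illustration:

```python
import random

def choose_crop(image_size, boxes, patch=640, rng=random):
    """Return the top-left corner of a square training patch.
    With probability 0.5 (when boxes exist) crop around a random box,
    otherwise crop at a uniformly random position."""
    w, h = image_size
    max_x, max_y = max(0, w - patch), max(0, h - patch)
    if boxes and rng.random() < 0.5:
        # Center the patch on a randomly chosen bounding box,
        # clamped so the patch stays inside the image.
        x1, y1, x2, y2 = rng.choice(boxes)
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        x = min(max(int(cx - patch / 2), 0), max_x)
        y = min(max(int(cy - patch / 2), 0), max_y)
        return x, y
    # Plain random crop.
    return rng.randint(0, max_x), rng.randint(0, max_y)
```

Either branch keeps the patch fully inside the image; only the box-centered branch guarantees a positive sample.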

Training on the AFLW and 300W datasets. We randomly resize images so that the size of bounding boxes is in the range from 150 to 450 pixels. After that we randomly crop patches of size 480x480. We augment the data by in-plane rotation, random flipping and color distortions.

Training details.

We train MaskFace using the SGD optimizer with a momentum of 0.9 and a weight decay of 0.0001. The batch size is equal to either 8 or 16 depending on the model size. If not mentioned, all backbones are pretrained on the ImageNet-1k dataset. The first two convolutional layers and all batch normalization layers are frozen. We use group normalization in the feature pyramid top-down pathway, context modules and keypoint head. The number of groups is 32. We start training using a warmup strategy: during the first 5k iterations the learning rate grows linearly from 0.0001 to 0.01. After that, to update the learning rate we apply a step decay schedule: when accuracy on the validation set stops growing, we multiply the learning rate by a factor of 0.1. The minimum learning rate is 0.0001.
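The warmup-plus-step-decay schedule can be sketched as follows. In the paper the decay steps are triggered by validation-accuracy plateaus; here the number of plateaus hit so far is passed in explicitly as a simplification:

```python
# Learning-rate schedule sketch: linear warmup over the first 5k
# iterations, then step decay by 0.1 per accuracy plateau, floored
# at the minimum learning rate.
def learning_rate(it, plateaus_hit, warmup_iters=5000,
                  base_lr=0.01, start_lr=0.0001, min_lr=0.0001):
    if it < warmup_iters:
        frac = it / warmup_iters           # linear warmup
        return start_lr + frac * (base_lr - start_lr)
    return max(base_lr * 0.1 ** plateaus_hit, min_lr)
```

For instance, the rate starts at 0.0001, reaches 0.01 at iteration 5000, and never drops below 0.0001 no matter how many plateaus occur.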

4 Experiments

Figure 5: Precision-recall curves for the AFW, PASCAL and FDDB datasets. The higher the better.

4.1 Datasets

WIDER FACE. The WIDER FACE dataset [53] contains 32203 images and 393703 labeled face bounding boxes with a high degree of variability in scale, pose and occlusion. Each subset contains three levels of difficulty: easy, medium and hard. Face bounding box annotations are provided for train and validation subsets with 12880 and 3226 images respectively. In [10] the authors release annotations of 5 facial landmarks (left and right eyes, nose, left and right mouth corners) for the WIDER FACE train subset. In total they provide annotations for 84600 faces. At the time of paper submission, the authors did not provide annotations for the validation subset.

AFW. The AFW dataset [62] has 205 images with 473 faces. The images in the dataset contain cluttered backgrounds with large variations in both face viewpoint and appearance.

PASCAL face. The PASCAL face dataset [52] is collected from the test set of the PASCAL person layout dataset and consists of 851 images with 1335 faces exhibiting large variations in face appearance and pose.

FDDB. The FDDB dataset [23] has 5171 faces in 2845 images taken from news articles on Yahoo websites. Faces in FDDB are represented by ellipses and not all of them are marked. Therefore, for evaluation we use the annotation with additional labeled faces from the SFD paper [58].

AFLW. The AFLW dataset [24] contains 21997 real-world images with 25993 annotated faces. The collected images have a large variety in face appearance (pose, expression, ethnicity, age, gender) and environmental conditions. Each face annotation includes a bounding box and up to 21 visible landmarks. In [61] the authors revised the AFLW annotation and provided labels for all 19 landmarks (regardless of visibility) together with a visibility flag. In our experiments we use the revised annotation.

300W. The 300W dataset [41] is a combination of the HELEN [27], LFPW [2], AFW [62], XM2VTS and IBUG datasets, where each face has 68 landmarks. We follow the same evaluation protocol as described in HRNet [42]. We use 3148 training images, which contain the training subsets of HELEN and LFPW and the full set of AFW. We evaluate the performance using two protocols, full set and test set. The full set contains 689 images and is further divided into a common subset (554 images) from HELEN and LFPW, and a challenging subset (135 images) from IBUG. The official test set, used for competition, contains 600 images (300 indoor and 300 outdoor images).

4.2 Evaluation

Face detection. We use common test-time image augmentations: horizontal flipping and an image pyramid. Multi-scale testing is essential for detecting tiny faces. We apply Soft-NMS [3] to the bounding box predictions for each augmented image. After that, all bounding box predictions from the augmented images are joined and filtered by box voting [17]. For evaluation on WIDER FACE and FDDB we use the officially provided toolboxes [53, 23]. For evaluation on AFW and PASCAL we use the toolbox from [37].
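Soft-NMS decays the scores of overlapping boxes instead of discarding them outright. A sketch of the linear variant (the paper cites Soft-NMS [3] without specifying the variant, so the linear decay and thresholds here are assumptions):

```python
def soft_nms(boxes, scores, iou_thr=0.3, score_thr=0.001):
    """Linear Soft-NMS: keep boxes in descending score order, decaying
    the scores of overlapping lower-ranked boxes by (1 - IoU)."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        ua = ((a[2] - a[0]) * (a[3] - a[1])
              + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / ua

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept, scores = [], list(scores)
    while order:
        i = order.pop(0)
        if scores[i] < score_thr:
            continue                       # decayed below the threshold
        kept.append(i)
        for j in order:
            o = iou(boxes[i], boxes[j])
            if o > iou_thr:
                scores[j] *= (1.0 - o)     # linear score decay
    return kept
```

Unlike hard NMS, a heavily overlapping box is only suppressed once its decayed score falls below the score threshold.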

Landmark localization. We calculate normalized mean error (NME) metrics using the face bounding box size as the normalizer for the AFLW dataset and the inter-ocular distance as the normalizer for 300W. When comparing with multi-task approaches, the input image is resized to make the minimum image size equal to 640. When comparing with single-task approaches, images are cropped and resized so that ground truth faces have a size of 256x256, following HRNet [42]. We emphasize that for the landmark evaluation we do not apply any test-time image augmentations or any kind of prediction averaging. MaskFace outputs several proposals per ground truth face, but we choose only the most confident proposal for landmark predictions.
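The NME metric itself is a mean point-to-point error divided by a dataset-specific normalizer (face bounding box size for AFLW, inter-ocular distance for 300W). A minimal sketch:

```python
import math

def nme(pred, gt, normalizer):
    """Normalized mean error over K landmarks: mean Euclidean distance
    between predicted and ground-truth points, divided by the normalizer."""
    errs = [math.hypot(px - gx, py - gy)
            for (px, py), (gx, gy) in zip(pred, gt)]
    return sum(errs) / (len(errs) * normalizer)

# Example: every landmark off by 1 px, face box of size 100 -> NME = 0.01.
```

NME values are therefore directly comparable across face sizes within one dataset, but only if the same normalizer convention is used.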

4.3 Main results

Face detection. All results are provided for the ResNeXt-152 backbone pretrained on the ImageNet-5k and COCO datasets [20, 18]. The model is trained on the WIDER FACE with 5 facial keypoints and evaluated on the AFW, PASCAL face, FDDB datasets. In Figure 4 we show precision-recall curves of the proposed approach for the WIDER FACE validation and test subsets. In Figure 5 we show precision-recall for the AFW, PASCAL and FDDB datasets. We could not find precision-recall curves for recent state-of-the-art detectors on AFW and PASCAL, therefore in Table 2 we additionally show AP metrics collected from papers. Our experiments demonstrate that MaskFace achieves state-of-the-art results.

Table 2: AP metrics on the AFW (left; BB-FCN [32], FaceBoxes [57], SRN [9], our MaskFace) and PASCAL (right; FaceBoxes [57], SFD [58], SRN [9], our MaskFace) datasets. The higher the better.
Figure 6: Comparison with multi-task methods. Cumulative distribution for NME of 5 facial landmarks on the full AFLW dataset. The higher the better.

Landmark localization. First, we compare our approach with the state-of-the-art multi-task face detector RetinaFace [10] and the popular MTCNN [55]. To make a fair comparison with RetinaFace we train MaskFace using the same ResNet-50 backbone on WIDER FACE. The RetinaFace implementation and trained weights are taken from [10]. To match predicted boxes with ground truth, we use an IoU threshold of 0.3. In Figure 6 we plot cumulative error distribution (CED) curves for NME of 5 facial landmarks on the full AFLW dataset (21997 images). For a qualitative comparison see Figure 7. The CED curves characterize the distribution of errors for each method, much as a histogram of errors would. If a method cannot detect some faces, only the upper part of the corresponding CED curve is affected (it is cut off), because faces that a method fails to detect usually have a high NME. Figure 6 shows that MaskFace outperforms previous multi-task approaches by a large margin and improves the baseline on the AFLW dataset for sparse landmarks.

Table 3: Detection results (NME) for 19 facial landmarks on the AFLW test subset (full and frontal) for TSR [34], CPM + SBR [12], SAN [11], DSRN [38], LAB (w/o B) [49], HRNet [42] and our MaskFace (ResNet-50 + context), as well as for models trained with extra information: DCFE (w/ D) [46], PDB (w/ DA) [15], LAB (w/ B) [49]. The lower the better.
Table 4: Detection results (NME) for 68 facial landmarks on the 300W subsets (common, challenging, full, test) for RCN [22], DSRN [38], CFAN [54], SDM [51], CFSS [60], PCD-CNN [26], CPM + SBR [12], SAN [11], MDM [45], DAN [25], our MaskFace (ResNet-50 + context) and Chen et al. [7], as well as for models trained with extra information: LAB (w/ B) [49], DCFE (w/ D) [46]. The lower the better.
Figure 7: Illustration of differences in localization of 5 facial landmarks for MTCNN, RetinaFace and MaskFace on AFLW.
Figure 8: Visualization of 19 and 68 landmarks predicted by MaskFace on 300W.

Second, we compare our method with recent state-of-the-art single-task models. The MaskFace model is trained using the ResNet-50 backbone. For training and validation we use the same AFLW and 300W subsets as in the HRNet paper [42]. We pretrain the model on WIDER FACE and AFLW for evaluation on AFLW and 300W, respectively. Pretraining helps to slightly improve the accuracy. For matching predicted boxes with ground truth, we use an IoU threshold of 0.4. In Tables 3 and 4 we provide results for comparison with other methods. The obtained values of NME are given for the case when MaskFace detects all faces. Visualization of MaskFace predictions is shown in Figure 8. Our approach achieves top results among methods that do not use extra information or stronger data augmentation. Note that LAB (w/ B) uses extra boundary information [49], PDB (w/ DA) uses stronger data augmentation [15], and DCFE (w/ D) uses extra 3D information [46].

4.4 Ablation experiments

We study how different backbones influence the AP and NME in Table 5. As expected, face detection and landmark localization benefit from stronger backbones.

Table 5: Dependence of the face detection and landmark localization accuracies on the backbone architecture. AP (easy, medium, hard) is calculated on the WIDER FACE validation subset; CED values are given for 5 facial landmarks on the full AFLW.

In Table 6 we provide the results of ablation experiments. As the baseline we use FPN with the ResNet-50 backbone. The experiments show that face detection and landmark localization benefit from the context modules. Tiny faces (the hard subset) gain the most. The keypoint head slightly decreases the face detection accuracy, indicating interference between the tasks. Note that this behavior differs from previously reported results for multi-task CNNs trained on the COCO dataset [31], where an additional segmentation head increases detection accuracy [20, 16]. This means that more advanced architectures or training strategies are needed to get benefits from joint face detection and landmark localization [35, 8].

We performed experiments to demonstrate the dependence of NME on the value of the parameter $k_{min}$ on the AFLW test subset: moving from the smallest tested value of $k_{min}$ to the largest raises NME from 1.54 through 1.57 to 1.64. The higher $k_{min}$, the lower the spatial resolution of the feature maps used for landmark predictions.

Table 6: Ablation experiments with the ResNet-50 + FPN baseline, the context modules and the keypoint head. AP (easy / medium / hard) is calculated on the WIDER FACE validation subset; CED values are given for facial landmarks on the full AFLW.
Method          | Backbone    | Time  | GPU type
Our MaskFace    | ResNet-50   | 20 ms | 2080TI
Our MaskFace    | ResNeXt-152 | 55 ms | 2080TI
RetinaFace [10] | ResNet-152  | 75 ms | Tesla P40
RefineFace [56] | ResNet-50   | 35 ms | 1080TI
RefineFace [56] | ResNet-152  | 57 ms | 1080TI
DFS [44]        | ResNet-50   | 35 ms | Tesla P40
Table 7: Time comparison between different face detection methods for VGA (640x480) images.

4.5 Inference time

We measure the inference time of MaskFace for different backbones using a 2080TI GPU and compare the results with recent models in Table 7. They show that the performance of MaskFace is in line with recent state-of-the-art models. MaskFace spends 0.11 ms to predict landmarks for one face and can achieve about 20 fps for 640x480 images even with the heavy ResNeXt-152 backbone.

5 Acknowledgments

The authors thank Stefan Marti, Joey Verbeke, Andrey Filimonov and Vladimir Aleshin for their help in this research project.

6 Conclusion

In this paper we have shown that adding the mask head to face detection models significantly increases the localization accuracy of sparse and dense landmarks. The proposed MaskFace model achieves top results in face and landmark detection on several popular datasets. The mask head is very fast, so MaskFace can be used with little computational overhead in applications with few faces in the scene, offering state-of-the-art face and landmark detection accuracy.


  • [1] J. T. Barron (2019) A general and adaptive robust loss function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4331–4339. Cited by: §2.
  • [2] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar (2013) Localizing parts of faces using a consensus of exemplars. IEEE transactions on pattern analysis and machine intelligence 35 (12), pp. 2930–2940. Cited by: §4.1.
  • [3] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis (2017) Soft-nms–improving object detection with one line of code. In Proceedings of the IEEE international conference on computer vision, pp. 5561–5569. Cited by: §3.2, §4.2.
  • [4] Z. Cai, Q. Liu, S. Wang, and B. Yang (2018) Joint head pose estimation with multi-task cascaded convolutional networks for face alignment. In 2018 24th International Conference on Pattern Recognition (ICPR), pp. 495–500. Cited by: §1.
  • [5] B. Chaudhuri, N. Vesdapunt, and B. Wang (2019) Joint face detection and facial motion retargeting for multiple faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9719–9728. Cited by: §2.
  • [6] J. Chen, W. Lin, J. Zheng, and R. Chellappa (2018) A real-time multi-task single shot face detector. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 176–180. Cited by: §1, §2.
  • [7] Y. Chen, C. Shen, X. Wei, L. Liu, and J. Yang (2017) Adversarial posenet: a structure-aware convolutional network for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1212–1221. Cited by: Table 4.
  • [8] Z. Chen, V. Badrinarayanan, C. Lee, and A. Rabinovich (2017) Gradnorm: gradient normalization for adaptive loss balancing in deep multitask networks. arXiv preprint arXiv:1711.02257. Cited by: §1, §4.4.
  • [9] C. Chi, S. Zhang, J. Xing, Z. Lei, S. Z. Li, and X. Zou (2018) Selective refinement network for high performance face detection. arXiv preprint arXiv:1809.02693. Cited by: Table 2.
  • [10] J. Deng, J. Guo, Y. Zhou, J. Yu, I. Kotsia, and S. Zafeiriou (2019) RetinaFace: single-stage dense face localisation in the wild. arXiv preprint arXiv:1905.00641. Cited by: §1, §1, §2, §3.1, §3.2, §4.1, §4.3, Table 7.
  • [11] X. Dong, Y. Yan, W. Ouyang, and Y. Yang (2018) Style aggregated network for facial landmark detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 379–388. Cited by: Table 3, Table 4.
  • [12] X. Dong, S. Yu, X. Weng, S. Wei, Y. Yang, and Y. Sheikh (2018) Supervision-by-registration: an unsupervised approach to improve the precision of facial landmark detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 360–368. Cited by: Table 3, Table 4.
  • [13] S. W. Earp, P. Noinongyao, J. A. Cairns, and A. Ganguly (2019) Face detection with feature pyramids and landmarks. arXiv preprint arXiv:1912.00596. Cited by: §1.
  • [14] Y. Feng, F. Wu, X. Shao, Y. Wang, and X. Zhou (2018) Joint 3d face reconstruction and dense alignment with position map regression network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 534–551. Cited by: §2.
  • [15] Z. Feng, J. Kittler, M. Awais, P. Huber, and X. Wu (2018) Wing loss for robust facial landmark localisation with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2235–2245. Cited by: §2, §4.3, Table 3.
  • [16] C. Fu, M. Shvets, and A. C. Berg (2019) RetinaMask: learning to predict masks improves state-of-the-art single-shot detection for free. arXiv preprint arXiv:1901.03353. Cited by: §2, §3.2, §4.4.
  • [17] S. Gidaris and N. Komodakis (2015) Object detection via a multi-region and semantic segmentation-aware cnn model. In Proceedings of the IEEE international conference on computer vision, pp. 1134–1142. Cited by: §4.2.
  • [18] R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He (2018) Detectron. Cited by: §4.3.
  • [19] G. Gkioxari, J. Malik, and J. Johnson (2019) Mesh R-CNN. arXiv preprint arXiv:1906.02739. Cited by: §2.
  • [20] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969. Cited by: §1, §1, §2, §2, §3.1, §3.2, §3.2, §4.3, §4.4.
  • [21] S. Honari, P. Molchanov, S. Tyree, P. Vincent, C. Pal, and J. Kautz (2018) Improving landmark localization with semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1546–1555. Cited by: §2.
  • [22] S. Honari, J. Yosinski, P. Vincent, and C. Pal (2016) Recombinator networks: learning coarse-to-fine feature aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5743–5752. Cited by: Table 4.
  • [23] V. Jain and E. Learned-Miller (2010) FDDB: a benchmark for face detection in unconstrained settings. Technical report, UMass Amherst. Cited by: §4.1, §4.2.
  • [24] M. Koestinger, P. Wohlhart, P. M. Roth, and H. Bischof (2011) Annotated facial landmarks in the wild: a large-scale, real-world database for facial landmark localization. In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 2144–2151. Cited by: §4.1.
  • [25] M. Kowalski, J. Naruniec, and T. Trzcinski (2017) Deep alignment network: a convolutional neural network for robust face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 88–97. Cited by: Table 4.
  • [26] A. Kumar and R. Chellappa (2018) Disentangling 3d pose in a dendritic cnn for unconstrained 2d face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 430–439. Cited by: Table 4.
  • [27] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang (2012) Interactive facial feature localization. In European conference on computer vision, pp. 679–692. Cited by: §4.1.
  • [28] J. Li, Y. Wang, C. Wang, Y. Tai, J. Qian, J. Yang, C. Wang, J. Li, and F. Huang (2019) DSFD: dual shot face detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5060–5069. Cited by: §2, §2.
  • [29] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §1, §2, §2, §3.2, §3.2.
  • [30] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §1, §2, §2, §3.1.
  • [31] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision, pp. 740–755. Cited by: §4.4.
  • [32] L. Liu, G. Li, Y. Xie, Y. Yu, Q. Wang, and L. Lin (2019) Facial landmark machines: a backbone-branches architecture with progressive representation learning. IEEE Transactions on Multimedia 21 (9), pp. 2248–2262. Cited by: §1, Table 2.
  • [33] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In European Conference on Computer Vision, pp. 21–37. Cited by: §2, §2.
  • [34] J. Lv, X. Shao, J. Xing, C. Cheng, and X. Zhou (2017) A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3317–3326. Cited by: Table 3.
  • [35] K. Maninis, I. Radosavovic, and I. Kokkinos (2019) Attentive single-tasking of multiple tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1860. Cited by: §1, §4.4.
  • [36] F. Massa and R. Girshick (2018) maskrcnn-benchmark: fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch. Note: accessed 4 July 2019. Cited by: §3.2.
  • [37] M. Mathias, R. Benenson, M. Pedersoli, and L. Van Gool (2014) Face detection without bells and whistles. In European conference on computer vision, pp. 720–735. Cited by: §4.2.
  • [38] X. Miao, X. Zhen, X. Liu, C. Deng, V. Athitsos, and H. Huang (2018) Direct shape regression networks for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5040–5049. Cited by: Table 3, Table 4.
  • [39] M. Najibi, P. Samangouei, R. Chellappa, and L. S. Davis (2017) SSH: single stage headless face detector. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4875–4884. Cited by: §1, §2, §3.2.
  • [40] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99. Cited by: §1, §2, §3.2.
  • [41] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic (2013) 300 faces in-the-wild challenge: the first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 397–403. Cited by: §4.1.
  • [42] K. Sun, Y. Zhao, B. Jiang, T. Cheng, B. Xiao, D. Liu, Y. Mu, X. Wang, W. Liu, and J. Wang (2019) High-resolution representations for labeling pixels and regions. arXiv preprint arXiv:1904.04514. Cited by: §4.1, §4.2, §4.3, Table 3.
  • [43] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9. Cited by: §3.2.
  • [44] W. Tian, Z. Wang, H. Shen, W. Deng, Y. Meng, B. Chen, X. Zhang, Y. Zhao, and X. Huang (2018) Learning better features for face detection with feature fusion and segmentation supervision. arXiv preprint arXiv:1811.08557. Cited by: Table 7.
  • [45] G. Trigeorgis, P. Snape, M. A. Nicolaou, E. Antonakos, and S. Zafeiriou (2016) Mnemonic descent method: a recurrent process applied for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4177–4187. Cited by: Table 4.
  • [46] R. Valle, J. M. Buenaposada, A. Valdés, and L. Baumela (2018) A deeply-initialized coarse-to-fine ensemble of regression trees for face alignment. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 585–601. Cited by: §4.3, Table 3, Table 4.
  • [47] L. Wang, X. Yu, T. Bourlai, and D. N. Metaxas (2019) A coupled encoder–decoder network for joint face detection and landmark localization. Image and Vision Computing 87, pp. 37–46. Cited by: §1.
  • [48] M. Wang and W. Deng (2018) Deep face recognition: a survey. arXiv preprint arXiv:1804.06655. Cited by: §1.
  • [49] W. Wu, C. Qian, S. Yang, Q. Wang, Y. Cai, and Q. Zhou (2018) Look at boundary: a boundary-aware face alignment algorithm. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2129–2138. Cited by: §4.3, Table 3, Table 4.
  • [50] Y. Wu, S. Tang, S. Zhang, and H. Ogai (2019) An enhanced feature pyramid object detection network for autonomous driving. Applied Sciences 9 (20), pp. 4363. Cited by: §2.
  • [51] X. Xiong and F. De la Torre (2013) Supervised descent method and its applications to face alignment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 532–539. Cited by: Table 4.
  • [52] J. Yan, X. Zhang, Z. Lei, and S. Z. Li (2014) Face detection by structural models. Image and Vision Computing 32 (10), pp. 790–799. Cited by: §4.1.
  • [53] S. Yang, P. Luo, C. Loy, and X. Tang (2016) WIDER FACE: a face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5525–5533. Cited by: §4.1, §4.2.
  • [54] J. Zhang, S. Shan, M. Kan, and X. Chen (2014) Coarse-to-fine auto-encoder networks (cfan) for real-time face alignment. In European conference on computer vision, pp. 1–16. Cited by: Table 4.
  • [55] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23 (10), pp. 1499–1503. Cited by: §1, §4.3.
  • [56] S. Zhang, C. Chi, Z. Lei, and S. Z. Li (2019) RefineFace: refinement neural network for high performance face detection. arXiv preprint arXiv:1909.04376. Cited by: Table 7.
  • [57] S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li (2017) Faceboxes: a cpu real-time face detector with high accuracy. In 2017 IEEE International Joint Conference on Biometrics (IJCB), pp. 1–9. Cited by: §2, §2, Table 2.
  • [58] S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li (2017) S3fd: single shot scale-invariant face detector. In Proceedings of the IEEE International Conference on Computer Vision, pp. 192–201. Cited by: §4.1, Table 2.
  • [59] Y. Zhou, J. Deng, I. Kotsia, and S. Zafeiriou (2019) Dense 3d face decoding over 2500fps: joint texture & shape convolutional mesh decoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1097–1106. Cited by: §2.
  • [60] S. Zhu, C. Li, C. Change Loy, and X. Tang (2015) Face alignment by coarse-to-fine shape searching. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4998–5006. Cited by: Table 4.
  • [61] S. Zhu, C. Li, C. Loy, and X. Tang (2016) Unconstrained face alignment via cascaded compositional learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3409–3417. Cited by: §2, §4.1.
  • [62] X. Zhu and D. Ramanan (2012) Face detection, pose estimation, and landmark localization in the wild. In 2012 IEEE conference on computer vision and pattern recognition, pp. 2879–2886. Cited by: §4.1, §4.1.