ASFD: Automatic and Scalable Face Detector

03/25/2020 ∙ by Bin Zhang, et al. ∙ Tencent

In this paper, we propose a novel Automatic and Scalable Face Detector (ASFD), which is based on a combination of neural architecture search techniques as well as a new loss design. First, we propose an automatic feature enhance module named AutoFEM via improved differential architecture search, which allows efficient multi-scale feature fusion and context enhancement. Second, we use a Distance-based Regression and Margin-based Classification (DRMC) multi-task loss to predict accurate bounding boxes and learn highly discriminative deep features. Third, we adopt compound scaling methods and uniformly scale the backbone, feature modules, and head networks to develop a family of ASFD models, which are consistently more efficient than the state-of-the-art face detectors. Extensive experiments conducted on popular benchmarks, e.g., WIDER FACE and FDDB, demonstrate that our ASFD-D6 outperforms the prior strong competitors, and our lightweight ASFD-D0 runs at more than 120 FPS with MobileNet for VGA-resolution images.




1 Introduction

Face detection is the prerequisite step of facial image analysis for various applications, such as face alignment [tai2019towards], attribute analysis [zhang2018joint, pan2018mean], recognition [yang2016nuclear, Huang2020curricularface] and verification [deng2019arcface, wang2018cosface]. In the past few years, tremendous progress has been made on designing the model architectures of deep Convolutional Neural Networks (CNNs) [he2016deep] for face detection. However, it remains a challenge to accurately detect faces with a high degree of variability in scale, pose, occlusion, expression, appearance, and illumination. In addition, the large model sizes and expensive computation costs make these detectors difficult to deploy in many real-world applications where memory and latency are highly constrained.

Figure 1: Illustration of the mean Average Precision (mAP) with respect to the number of parameters (a), FLOPs (b) and GPU latency (c), evaluated with a single model at a single scale on the validation subset of the WIDER FACE dataset, where mAP is equivalent to the AP on the Hard set. Our ASFD D0-D6 outperform the prior detectors with respect to parameter numbers, FLOPs, and latency.

There have been many works aiming to develop face detector architectures, mainly composed of one-stage [tang2018pyramidbox, chi2019selective, li2019dsfd, deng2019retinaface, zhang2019refineface] and two-stage [wang2017face, wang2017detecting, zhang2018face] face detectors. Among them, the one-stage anchor-based approach is dominant; it tiles regular and dense anchors with various scales and aspect ratios over all locations of several multi-scale feature maps. Generally, there are four key parts in this framework: backbone, feature module, head network, and multi-task loss. The feature module uses a Feature Pyramid Network (FPN) [lin2017feature, li2017object] to aggregate hierarchical feature maps between high- and low-level features of the backbone, and modules that refine the receptive field [liu2018receptive, li2019dsfd, zhang2019refineface], such as the Receptive Field Block (RFB), are also introduced to provide rich contextual information for hard faces. Moreover, the multi-task loss is composed of binary classification and bounding box regression, in which the former classifies the predefined anchors into face and background, and the latter regresses the detected faces to accurate locations. Despite the progress achieved by the above methods, problems remain in three aspects:

Feature Module. Although FPN [lin2017feature] and RFB [liu2018receptive] are simple and effective for general object detection, they may be suboptimal for face detection, and many recent works [li2019dsfd, zhang2019refineface] propose various cross-scale connections or operations to combine features into better representations. However, the huge design space of feature modules remains a challenge. In addition, these methods all adopt the same feature module for the different feature maps from the backbone, which ignores the distinct importance and contributions of different input features.

Multi-task Loss. The conventional multi-task loss in object detection includes a regression loss and a classification loss [girshick2015fast, ren2015faster, liu2016ssd, lin2017focal]. The Smooth-L1 loss for bounding box regression is commonly used in current face detectors [li2019dsfd, zhang2019refineface], which, however, suffers from slow convergence and inaccurate regression due to its sensitivity to scale variation. As for classification, the standard binary softmax loss in DSFD [li2019dsfd] usually lacks discriminative power, while RefineFace [zhang2019refineface] adopts the sigmoid focal loss for better distinguishing faces from the background, which relies on predefined hyper-parameters and is extremely time-consuming.

Efficiency and Accuracy. Both DSFD and RefineFace rely on big backbone networks, deep detection head networks and large input image sizes for high accuracy, while FaceBox [zhang2017faceboxes] is a lightweight face detector with fewer layers that achieves better efficiency by sacrificing accuracy. The above methods cannot balance efficiency and accuracy across the wide spectrum of resource constraints found in real-world applications, from mobile devices to data centers. An appropriate selection of network width and depth usually requires tedious manual tuning.

To address these issues, we propose a novel Automatic and Scalable Face Detector (ASFD) to deliver the next generation of efficient face detectors with high accuracy. Specifically, we first introduce an Automatic Feature Enhance Module (AutoFEM) via improved differential architecture search to exploit the feature module for efficient and effective multi-scale feature fusion and context enhancement. Second, inspired by the distance Intersection over Union (IoU) loss [zheng2019distance] and the large margin cosine loss [wang2018cosface], we propose a Distance-based Regression and Margin-based Classification (DRMC) multi-task loss for accurate bounding boxes and highly discriminative deep features. Finally, motivated by the scalable model design described in EfficientNet [tan2019efficientnet] and EfficientDet [tan2019efficientdet], we adopt compound scaling methods and uniformly scale the backbone, feature module and head networks to develop a family of ASFD models, which consistently outperform the prior competitors in terms of parameter numbers, FLOPs and latency, as shown in Fig. 1, achieving a better trade-off between efficiency and accuracy.

In summary, the main contributions of this paper include:

Automatic Feature Enhance Module via improved differential architecture search for efficient multi-scale feature fusion and context enhancement.

Distance-based regression and margin-based classification multi-task loss for accurate bounding boxes and highly discriminative deep features.

A new family of face detectors achieved by compound scaling methods on backbone, feature module, head network and resolution.

Comprehensive experiments conducted on popular benchmarks, e.g. WIDER FACE and FDDB, to demonstrate the efficiency and accuracy of our ASFD compared with state-of-the-art methods.

2 Related Work

Face detection. Traditional face detection methods mainly rely on hand-crafted features, such as Haar-like features [viola2004robust], control point sets [abramson2005yef] and edge orientation histograms [levi2004learning]. With the development of deep learning, Overfeat [sermanet2013overfeat], Cascade-CNN [li2015cascadecnn] and MTCNN [zhang2016mtcnn] adopt CNNs to classify sliding windows, which is neither end-to-end nor efficient. Current state-of-the-art face detection methods have inherited achievements from generic object detection approaches [ren2015faster, liu2016ssd, lin2017focal, zhang2018refinedet]. More recently, DSFD [li2019dsfd] and RefineFace [zhang2019refineface] propose a pseudo two-stage structure based on the single-shot framework to make face detectors more effective and accurate. There are two main differences between previous face detectors and our ASFD: (1) the automatic feature module is obtained by an improved NAS method instead of being hand-designed; (2) the margin-based and distance-based losses are employed together for stronger discriminative power.

Neural Architecture Search. Neural architecture search (NAS) has attracted increasing research interest. NASNet [zoph2018learning] uses Reinforcement Learning (RL) with a controller Recurrent Neural Network (RNN) to search neural architectures sequentially. To save computational resources, Differential Architecture Search (DARTS) [liu2018darts] is based on a continuous relaxation of a supernet and proposes gradient-based search. Partially-Connected DARTS (PC-DARTS) [xu2019pcdarts] samples a small part of the supernet to reduce the redundancy in the network space. Based on the above NAS works on image classification, some recent works attempt to extend NAS to generic object detection. DetNAS [chen2019detnas] tries to search better backbones for object detection, while NAS-FPN [ghiasi2019fpn] targets searching for an FPN alternative based on RNN and RL, which is time-consuming. NAS-FCOS [wang2019fcos] aims to efficiently search for the FPN as well as the prediction head based on an anchor-free one-stage framework. Different from DARTS and PC-DARTS, we introduce an improved NAS method which samples only the path with the highest weight for each node during the forward pass of the searching phase, to further reduce the memory cost. To the best of our knowledge, ASFD is the first work to report the success of applying differential architecture search in the face detection community.

Model Scaling. There are several approaches to scale a network; for instance, ResNet [he2016deep] can be scaled down or up by adjusting the network depth. Recently, EfficientNet [tan2019efficientnet] demonstrates remarkable model efficiency for image classification by jointly scaling up network width, depth, and resolution. For object detection, EfficientDet [tan2019efficientdet] proposes a compound scaling method that uniformly scales the resolution, depth and width of the backbone, feature network, and box/class prediction networks at the same time. Inspired by the above model scaling methods, we develop a new family of face detectors, i.e., ASFD D0-D6, to optimize both accuracy and efficiency.

3 Our approach

We first introduce the pipeline of our proposed framework in Sec. 3.1, then describe our automatic feature enhance module in Sec. 3.2 and the distance-based regression and margin-based classification loss in Sec. 3.3. Finally, based on the improved model scaling, we develop a new family of face detectors in Sec. 3.4.

Figure 2: Illustration of the framework of ASFD. We attach an AutoFEM to the lateral outputs of a feedforward backbone to generate the enhanced features. Both the original and enhanced features are supervised with our proposed DRMC loss.

3.1 Pipeline

Fig. 2 illustrates the overall framework of ASFD, which follows the dual-shot paradigm of DSFD [li2019dsfd]. The ImageNet-pretrained backbone generates six pyramidal feature maps whose strides increase across the levels. Our proposed AutoFEM transforms these original feature maps into six enhanced feature maps. Both the regression and classification head networks consist of several convolutions and map the original and enhanced features to class scores and bounding boxes. In particular, the two shots share the same head network and are optimized with the proposed DRMC loss.

3.2 AutoFEM

Multi-scale features from different layers of the backbone network are commonly used in object detection for predicting objects of various sizes. In this spirit, an FEM aims at utilizing the information from these features and refining their receptive fields for better prediction, as verified in [lin2017feature, liu2018receptive, li2019dsfd, liu2018path, zhang2019refineface] by inserting manually designed modules into the network. In this work, we incorporate an improved NAS method to search for the proposed AutoFEM, which consists of AutoFEM-FPN and AutoFEM-CPM modules, to enhance all features with reasonable classification and regression capacities.

(a) FPN [lin2017feature]
(b) PAN [liu2018path]
(c) BiFPN [tan2019efficientdet]
(d) AutoFEM-FPN
Figure 3: Structures of state-of-the-art FPNs and our AutoFEM-FPN. The different paths are represented by different colors and line types: the solid blue line, dashed green line and dotted black line indicate the top-down path, bottom-up path and skip connection respectively, and the bold lines represent the mixed operations during searching and are related to the searched operations indicated by the corresponding colors in Fig. 2. In particular, (a) introduces a top-down path to fuse the multi-scale features, (b) and (c) enhance the entire feature hierarchy by bottom-up path augmentation, and the output of (d) is the sum of features along the top-down, bottom-up and skip paths, weighted by learnable weights to highlight their respective importance.

3.2.1 AutoFEM-FPN.

As shown in Fig. 3, FPNs usually take multi-scale features as input and generate enhanced features at identical scales. NAS-based methods are used in NAS-FPN [ghiasi2019fpn] and Auto-FPN [xu2019auto] to discover better architectures; they create a fully-connected FPN as the supernet to search for reasonable connections and operations among feature layers, but the computation is expensive since each layer takes all scales of features as input. Our approach is conducted in a more efficient way: it searches for reasonable operations along a predefined pathway, which takes each feature and those of neighboring resolutions as input and augments the top-down and bottom-up paths together.

In fact, an FPN is comprised of several fusion cells that aggregate features at different resolutions. For the given multi-scale features $\{C_i\}$, the conventional top-down FPN [lin2017feature] is conducted as

$$P_i = \mathrm{Conv}\big(C_i + \mathrm{Resize}(P_{i+1})\big), \tag{1}$$

where $\mathrm{Resize}(\cdot)$ is usually the bilinear interpolation for resolution matching, and $\mathrm{Conv}(\cdot)$ is usually a convolution operation for feature processing. Similarly, the fusion operation of our AutoFEM-FPN, presented in Fig. 2, is formulated as


$$P_i^{td} = O_{td}\big(C_i + \mathrm{Resize}(P_{i+1}^{td})\big),\quad P_i^{bu} = O_{bu}\big(P_i^{td} + \mathrm{MaxPool}(P_{i-1}^{bu})\big),\quad F_i = w_{td} \odot P_i^{td} + w_{bu} \odot P_i^{bu} + w_{s} \odot O_{s}(C_i), \tag{2}$$

where $P_i^{td}$ and $P_i^{bu}$ are the top-down and bottom-up features of $C_i$, indicated by blue and green circles in Fig. 3 (d); $O_{td}$, $O_{bu}$ and $O_{s}$ are operations for feature processing; $w_{td}$, $w_{bu}$ and $w_{s}$ are learnable scalars computed by the softmax function to weight the different features; $\mathrm{MaxPool}(\cdot)$ is the max pooling operation, and $\odot$ denotes the element-wise multiplication.

Here, the candidate set $\mathcal{O}$ of operations in Eq. (2) contains a plain convolution, depthwise-separable convolutions with different kernel sizes, and dilated convolutions with different rates, all implemented with depthwise-separable convolutions. The decision of which operation to select is made by the architecture parameters $\alpha$ during the search phase, where a PC-DARTS [xu2019pcdarts] based approach is involved. Compared to the search space of DARTS [liu2018darts] and PC-DARTS [xu2019pcdarts], we remove the none operation, due to the predesigned pathway of AutoFEM-FPN, and replace the skip connection with a convolution for more stable searching and more robust features. During the search phase, $O_{td}$, $O_{bu}$ and $O_{s}$ are mixed operations, denoting the sum of the operations within $\mathcal{O}$ weighted by the architecture parameters. In addition, $w_{td}$, $w_{bu}$ and $w_{s}$ inherit the setting of edge normalization in [xu2019pcdarts] to adaptively learn the importance of the features along each path. At the end of the search, the final AutoFEM-FPN is obtained by replacing each mixed operation with the most likely one, i.e., the one with the maximal weight in $\alpha$.
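To make the fusion concrete, here is a minimal NumPy sketch of one AutoFEM-FPN output node: candidate path features are combined with softmax-normalized learnable scalars, and after searching each mixed operation is collapsed to its strongest candidate. The function names and candidate labels are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(w):
    # normalize learnable path weights so they sum to one
    e = np.exp(w - np.max(w))
    return e / e.sum()

def fuse_node(p_td, p_bu, c_skip, path_weights):
    # weighted sum of top-down, bottom-up and skip-path features;
    # each feature map has the same shape at this pyramid level
    w = softmax(np.asarray(path_weights, dtype=float))
    return w[0] * p_td + w[1] * p_bu + w[2] * c_skip

def derive_operation(alpha, candidates):
    # after the search, replace the mixed operation with the
    # candidate carrying the maximal architecture weight
    return candidates[int(np.argmax(alpha))]
```

With equal path weights the node reduces to a plain average of the three paths; during search, `fuse_node` would be applied to the outputs of the mixed operations instead of fixed features.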

Figure 4: Snapshots of the basic cells for (a) PC-DARTS [xu2019pcdarts] during and after searching (left and right sides respectively), and (b) our improved method. The green nodes with number 0 and the blue nodes with number 4 are the input and output features respectively; the others indicate the features of intermediate layers; the bold lines represent the mixed operations during searching, and the number on a line denotes its weight. For simplicity, only three intermediate nodes are plotted and only the strongest path is preserved for PC-DARTS.

3.2.2 AutoFEM-CPM.

Recent research has confirmed that multi-branch structures with different kernel sizes can capture multi-scale information, which is effective for detecting objects with different aspect ratios and scales [szegedy2016rethinking, szegedy2017inception, liu2018receptive, zhang2019refineface, li2019dsfd]. Inspired by these works, we propose an improved method based on PC-DARTS to search a Context Prediction Module (CPM) for each pyramid level so as to suit its specific receptive fields, taking the state-of-the-art structures RFB [liu2018receptive], FEM [li2019dsfd] and RFE [zhang2019refineface] into the search space.

Recall that PC-DARTS [xu2019pcdarts] reduces the memory cost by sampling only a subset of the channels for each operation selection. However, the memory requirements are still large when PC-DARTS is applied to search several head modules in detection tasks, due to the higher input resolution compared with classification tasks. As shown in Fig. 4, instead of taking all preceding nodes as input as in PC-DARTS, our improved approach keeps only the path with the largest weight during the forward procedure, which reduces both computation and memory cost by several times.

Recall that the computation of the $j$th node in PC-DARTS [xu2019pcdarts] is

$$x_j = \sum_{i<j} \beta_{i,j}\,\bar{o}_{i,j}(x_i), \tag{3}$$

in which $\beta_{i,j}$ is the weight of the path from the $i$th node to the $j$th node, $x_i$ is the feature map of the $i$th node, and $\bar{o}_{i,j}$ is the corresponding mixed operation weighted by the architecture parameters. Our improved version is efficient due to its sampling behavior for connections, which is implemented via the Gumbel-Softmax trick [jang2016categorical, maddison2016concrete, dong2019searching],


$$x_j = \bar{o}_{i^{*},j}(x_{i^{*}}),\qquad i^{*} = \arg\max_i \frac{\exp\big((\log\beta_{i,j} + g_i)/\tau\big)}{\sum_{k<j}\exp\big((\log\beta_{k,j} + g_k)/\tau\big)}, \tag{4}$$

where $g_i = -\log(-\log u_i)$ with $u_i \sim \mathrm{Uniform}(0,1)$ is an i.i.d. sample drawn from Gumbel(0, 1), and $\tau$ is the softmax temperature. Note that the $\arg\max$ function cannot back-propagate gradients, so Gumbel-Softmax uses the softmax during the backward pass to address this problem. However, the gradients of the softmax may be unstable due to the varying lengths of the path-weight vectors $\beta_j$ for different nodes; for instance, the weight vector of the first intermediate node has length 1, while that of the sixth one has length 6. Hence, we modify the gradients of Gumbel-Softmax in Eq. (4) as follows,

$$\frac{\partial \mathcal{L}}{\partial \beta_j} \leftarrow |\beta_j| \cdot \frac{\partial \mathcal{L}}{\partial \beta_j}, \tag{5}$$

where the length $|\beta_j|$ of $\beta_j$ is multiplied with the original gradients to address the problem of imbalanced gradients.
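A small NumPy sketch of the forward-pass path sampling may help: it draws Gumbel(0, 1) noise, forms the tempered softmax over one node's path weights, and returns both the hard one-hot selection used in the forward pass and the soft weights a backward pass would use. The function name and the fixed seed are illustrative.

```python
import numpy as np

def sample_path(log_beta, tau=1.0, rng=None):
    # Gumbel-Softmax sampling of one incoming path for a node:
    # the forward pass keeps only the strongest path (hard one-hot);
    # a backward pass would differentiate through `soft` instead
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.uniform(size=len(log_beta))
    g = -np.log(-np.log(u))                  # Gumbel(0, 1) noise
    y = (np.asarray(log_beta, dtype=float) + g) / tau
    soft = np.exp(y - y.max())
    soft = soft / soft.sum()
    hard = np.zeros_like(soft)
    hard[int(np.argmax(soft))] = 1.0         # keep the strongest path only
    return hard, soft
```

Lowering `tau` makes the soft weights approach the hard selection, which is why a temperature schedule is common in practice.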

On the other hand, our sampling behavior simplifies the search space of PC-DARTS, replacing a directed acyclic graph [xu2019pcdarts, liu2018darts] with a tree that takes the input node as the parent with multiple branches. A snapshot during searching is presented in the left panel of Fig. 4 (b). Our approach concatenates only the leaf nodes of the tree as output, rather than all intermediate nodes, for more robust features; this is motivated by the fact that the leaf nodes usually have larger receptive fields and more semantic information, as verified by the simulations in Sec. 4.2.

During the search process, we define the candidate operations of our AutoFEM-CPM to include plain and depthwise-separable convolutions with different kernel sizes, rectangular convolutions (added to provide rectangular receptive fields), and dilated convolutions with different rates. As in AutoFEM-FPN, all operations are implemented in the depthwise-separable manner. After searching, the final architecture of AutoFEM-CPM is obtained by sampling the path with the largest weight for each node and selecting the strongest operation within $\mathcal{O}$ for each path (see the right panel of Fig. 4 (b)).

3.3 DRMC Loss

3.3.1 Distance-based Regression Loss.

Traditional regression losses adopted by single-stage detectors, such as the L1, L2 and smooth-L1 losses, are suboptimal for accurate localization due to the weak correlation between the classification and regression tasks. The IoU-based losses [wu2019iou, rezatofighi2019giou, zheng2019distance] focus more attention on the positive anchors with high IoU and decrease the gradients of examples with low IoU, which improves localization accuracy significantly. We employ the DIoU loss [zheng2019distance] here, which considers the overlap area and the central-point distance of the bounding boxes simultaneously, given by


$$\mathcal{L}_{DR} = 1 - \mathrm{IoU}\big(B, B^{gt}\big) + \frac{\rho^{2}\big(b, b^{gt}\big)}{c^{2}}, \tag{6}$$

where $B$ and $B^{gt}$ are the predicted box and the corresponding ground truth, $b$ and $b^{gt}$ are their corresponding central points, $\mathrm{IoU}(\cdot)$ is the function for the computation of IoU between two boxes, $\rho(\cdot)$ is the Euclidean distance, and $c$ is the diagonal length of the smallest enclosing box covering the two boxes.
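As a sanity check of the definitions above, a direct NumPy-free implementation of the DIoU loss, with boxes as `(x1, y1, x2, y2)` tuples (the helper names are ours):

```python
def diou_loss(box, gt):
    # intersection of the two axis-aligned boxes
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box) + area(gt) - inter)
    # squared distance between the two central points
    cb = ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)
    cg = ((gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2)
    rho2 = (cb[0] - cg[0]) ** 2 + (cb[1] - cg[1]) ** 2
    # squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box[0], gt[0]), min(box[1], gt[1])
    ex2, ey2 = max(box[2], gt[2]), max(box[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - iou + rho2 / c2
```

For a perfect prediction the loss is 0; for disjoint boxes the IoU term saturates at 1 and the distance term keeps providing a useful gradient, which is the main advantage over a plain IoU loss.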

3.3.2 Margin-based Classification Loss.

Softmax- and sigmoid-based cross-entropy losses are commonly used in state-of-the-art detectors; however, they usually lack discriminative power, especially for hard objects. Motivated by the large margin losses in face recognition [wen2016discriminative, wang2018cosface, deng2019arcface], the margin-based classification loss is used in our work to maximize the inter-class variance and minimize the intra-class variance. Suppose $x$ is the vector of predicted confidence scores before the softmax and $y$ is the ground-truth label; the margin-based classification loss is defined as


$$\mathcal{L}_{MC} = -\log\frac{\exp\!\big(x_{y} - m\big)}{\sum_{j}\exp\!\big(x_{j} - m\,[j=y]\big)}, \tag{7}$$

where $m$ is the margin added between the classes for prediction, and the Iverson bracket $[\cdot]$ outputs 1 when the condition is true, and 0 otherwise. Then, assuming that the predicted box and confidence of the first shot are $b_1$ and $p_1$, with $b_1^{gt}$ and $p_1^{gt}$ the corresponding ground truth, and similarly $b_2$, $p_2$, $b_2^{gt}$ and $p_2^{gt}$ for the second shot, the total DRMC loss is defined as

$$\mathcal{L} = \frac{1}{N_1}\Big(\mathcal{L}_{MC}(p_1, p_1^{gt}) + \lambda_1\,\mathcal{L}_{DR}(b_1, b_1^{gt})\Big) + \frac{1}{N_2}\Big(\mathcal{L}_{MC}(p_2, p_2^{gt}) + \lambda_2\,\mathcal{L}_{DR}(b_2, b_2^{gt})\Big), \tag{8}$$

where $\lambda_1$ and $\lambda_2$ are weights to balance the distance-based and margin-based losses for the first and second shots respectively, and $N_1$ and $N_2$ are the numbers of positive anchors for the first and second shots.
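The margin term can be sketched in a few lines of NumPy: subtracting the margin from the ground-truth logit before the softmax forces the face class to win by at least `m`, tightening the decision boundary. The default margin value and the function name are illustrative assumptions.

```python
import numpy as np

def margin_softmax_loss(scores, label, m=0.2):
    # cross-entropy with a margin subtracted from the ground-truth
    # logit before the softmax (binary face/background case here)
    z = np.asarray(scores, dtype=float).copy()
    z[label] -= m                      # enforce the inter-class margin
    z -= z.max()                       # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])
```

With `m=0` this reduces to the standard softmax cross-entropy; any positive margin yields a strictly larger loss for the same logits, so the network must separate the classes by a larger gap to reach the same loss value.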

3.4 Improved Model Scaling

ASFD Family | Backbone Network | AutoFEM width | AutoFEM depth | Head width | Head depth
D0 | MobileNet-0.25 | 64 | 0.5 | 64 | 1
D1 | MobileNet-1.0 | 64 | 0.5 | 64 | 1
D2 | ResNet-18 | 128 | 0.5 | 128 | 1
D3 | ResNet-34 | 192 | 1 | 192 | 2
D4 | ResNet-50 | 256 | 1 | 256 | 2
D5 | ResNet-101 | 320 | 2 | 320 | 4
D6 | ResNet-152 | 384 | 2 | 384 | 4
Table 1: Model scaling configs for ASFD D0-D6, using a coefficient to control the backbone, AutoFEM and head network. D0 has the same setting as D1 except a smaller backbone.

In order to satisfy a wide spectrum of resource constraints, we develop a family of face detectors to optimize both accuracy and efficiency. Inspired by EfficientNet [tan2019efficientnet] and EfficientDet [tan2019efficientdet], we propose an improved model scaling method for face detection, which jointly scales up the depth and width of the backbone, feature module and head network. Unlike EfficientDet, we do not scale up the resolution and instead fix the training input size, to improve the recall of faces with a high degree of variability in scale. For the backbone, we adopt MobileNets [howard2017mobilenets] and ResNet [he2016deep] as detection backbones; these classification networks can be scaled down or up by adjusting the network depth and width. In terms of the feature module, we linearly grow the AutoFEM width ($W_{fem}$) and exponentially increase its depth ($D_{fem}$). For the head network, we fix its width ($W_{head}$) to always match the AutoFEM and exponentially increase its depth ($D_{head}$). The coefficient $\phi$ is employed to jointly scale up the above parameters in our improved model scaling method, given by

$$W_{fem} = W_{head} \propto \phi, \qquad D_{fem} \propto 2^{\phi/2}, \qquad D_{head} = 2\,D_{fem}. \tag{9}$$
Following Eq. (9), we propose ASFD from D1 to D6 with different values of the scaling coefficient, as shown in Table 1. Notably, the MobileNet backbone in D1 is still more expensive than the other modules; therefore, we simply modify D1 to D0 by using the smaller backbone MobileNet-0.25 for a more efficient detector.
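For reference, the configurations of Table 1 can be captured in a small lookup (the values are transcribed from the table; the dict layout and key names are our own):

```python
# (backbone, AutoFEM width, AutoFEM depth, head width, head depth),
# transcribed from Table 1
ASFD_CONFIGS = {
    "D0": ("MobileNet-0.25", 64, 0.5, 64, 1),
    "D1": ("MobileNet-1.0", 64, 0.5, 64, 1),
    "D2": ("ResNet-18", 128, 0.5, 128, 1),
    "D3": ("ResNet-34", 192, 1, 192, 2),
    "D4": ("ResNet-50", 256, 1, 256, 2),
    "D5": ("ResNet-101", 320, 2, 320, 4),
    "D6": ("ResNet-152", 384, 2, 384, 4),
}

def scaling_config(family):
    backbone, fw, fd, hw, hd = ASFD_CONFIGS[family]
    # head width is tied to AutoFEM width, and head depth is twice
    # the AutoFEM depth in every config of the table
    return {"backbone": backbone, "autofem": (fw, fd), "head": (hw, hd)}
```

The invariants visible in the table (tied widths, doubled head depth) match the scaling rule of Eq. (9).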

Figure 5: Example of the AutoFEM-FPN (left) architecture and AutoFEM-CPM architecture (right) found by improved DARTS based on WIDER FACE.

4 Experiments

4.1 Implementation Details

During training, we use ImageNet-pretrained models to initialize the backbone. The SGD optimizer with momentum and weight decay is applied to fine-tune the models on four Nvidia Tesla V100 GPUs. The learning rate is linearly increased during the first iterations using the warmup strategy, then decayed stepwise until the end of training. For inference, non-maximum suppression is applied with a Jaccard-overlap threshold to produce the final high-confidence faces. All models are trained only on the training set of WIDER FACE.
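The NMS step at inference can be sketched as the standard greedy procedure; the 0.3 overlap threshold below is an assumption for illustration, since the exact value used by the paper is not preserved above.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    # greedy non-maximum suppression over (x1, y1, x2, y2) boxes;
    # keeps the highest-scoring box and drops overlapping rivals
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        x1 = np.maximum(boxes[i, 0], rest[:, 0])
        y1 = np.maximum(boxes[i, 1], rest[:, 1])
        x2 = np.minimum(boxes[i, 2], rest[:, 2])
        y2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # survivors only
    return keep
```

In a production detector this loop would typically be replaced by a batched, vectorized implementation, but the greedy logic is the same.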

In the search scenario, ResNet is selected as the backbone of our supernet, the channels of AutoFEM-FPN and AutoFEM-CPM are set to the same width, and each AutoFEM-CPM consists of six intermediate nodes. For efficiency, only a fraction of the channels is sampled on each edge, following the setting of PC-DARTS. We use Adam with weight decay to optimize the architecture parameters after the initial warm-up epochs of the search.

Feature module Baseline and Contributions
FEM-FPN [li2019dsfd]
BiFPN [tan2019efficientdet]
PAN [liu2018path]
FEM-CPM [li2019dsfd]
RFE [zhang2019refineface]
Easy 0.947 0.954 0.954 0.953 0.956 0.950 0.951 0.948 0.954 0.954 0.952 0.955 0.956 0.958
Medium 0.932 0.944 0.945 0.945 0.947 0.933 0.934 0.933 0.944 0.945 0.943 0.945 0.947 0.949
Hard 0.822 0.881 0.874 0.883 0.884 0.827 0.830 0.834 0.882 0.882 0.881 0.886 0.886 0.887
Table 2: Comparison of Average Precision (AP) among AutoFEM and state-of-the-art structures on validation set of WIDER FACE. Multi-scale results ensemble is adopted during test-time.

4.2 Analysis on ASFD

4.2.1 AutoFEM.

The architectures of AutoFEM-FPN and AutoFEM-CPM are searched on the basis of a light DSFD [li2019dsfd], which uses ResNet as the backbone and has shallow head modules. The proposed AutoFEM is obtained by cascading these two modules together; an example of the AutoFEM adopted in our ASFD is presented in Fig. 5. In the searched AutoFEM-FPN, each output level fuses the features from its neighboring levels and from its own level with the searched convolutions, suggesting the importance of information from both lower and upper layers; in addition, different context prediction modules are obtained for the 6 detection layers, in which the low-level CPMs have larger receptive fields for capturing more context to improve the performance on hard faces.

To demonstrate the effectiveness of our searched AutoFEM in ASFD, experiments are conducted to compare our AutoFEM-FPN and AutoFEM-CPM with other state-of-the-art structures. A DSFD-based detector with a ResNet backbone and without the FEM module is employed as the baseline, and all experimental results of applying a feature pyramid network and a context prediction module to the feature module are shown in Table 2, which indicates that our AutoFEM improves the detection performance. After using the AutoFEM-FPN, the AP scores of the baseline detector are improved from 0.947, 0.932, 0.822 to 0.954, 0.944, 0.881 on the Easy, Medium and Hard subsets respectively, surpassing other structures like FEM-FPN [li2019dsfd], BiFPN [tan2019efficientdet] and PAN [liu2018path], and the performance is further improved to 0.958, 0.949, 0.887 by cascading AutoFEM-FPN and AutoFEM-CPM together.

Method 4 inter nodes 6 inter nodes 8 inter nodes
Easy Medium Hard Easy Medium Hard Easy Medium Hard
PC-DARTS 0.956 0.945 0.881 0.957 0.947 0.882 - - -
ours+catall 0.957 0.947 0.883 0.957 0.948 0.884 0.957 0.948 0.885
ours+catleaf 0.957 0.947 0.885 0.958 0.949 0.887 0.958 0.948 0.887

  • “catall” means all intermediate nodes are concatenated as the output, and “catleaf” means only the leaf ones are concatenated.

Table 3: Comparison of Average Precision (AP) of PC-DARTS and our improved method for searching AutoFEM-CPM evaluated on validation set of WIDER FACE. Multi-scale results ensemble is adopted during test-time.

Moreover, simulations are conducted to verify the effectiveness of our improved NAS approach for searching AutoFEM-CPM compared with PC-DARTS, as shown in Table 3, where modules with 8 intermediate nodes can only be searched with our method due to the memory limitation of PC-DARTS. As shown, our improved method with 6 intermediate nodes achieves the greatest AP scores on the Easy, Medium and Hard subsets by concatenating only the leaf nodes.

Components Easy Medium Hard
Baseline 0.954 0.944 0.883
Baseline+Auxiliary loss 0.954 0.945 0.884
Baseline+Auxiliary loss+MC loss 0.954 0.945 0.885
Baseline+Auxiliary loss+DR loss 0.955 0.946 0.883
Baseline+Auxiliary loss+DRMC loss 0.957 0.947 0.884
Baseline+AutoFEM+Auxiliary loss+DRMC loss 0.961 0.953 0.888
Table 4: Comparison of Average Precision (AP) of DRMC loss in validation set of WIDER FACE. Multi-scale results ensemble is adopted during test-time.
Model Easy Medium Hard #Params Ratio #FLOPS Ratio LAT(ms) Ratio
ASFD-D0 0.901 0.875 0.744 0.62M 1x 0.73B 1x 8.9 1x
EXTD(mobilenet) [yoo2019extd] 0.851 0.823 0.672 0.68M 1.1x 10.62B 14.5x 34.4 3.9x
ASFD-D1 0.933 0.917 0.820 3.90M 1x 4.27B 1x 9.2 1x
SRN(Res50) [chi2019selective] 0.930 0.873 0.713 80.18M 20.6x 189.69B 44.4x 55.1 6.0x
ASFD-D2 0.951 0.937 0.836 13.56M 1x 20.48B 1x 12.4 1x
Retinaface(Res50) [deng2019retinaface] 0.957 0.943 0.828 26.03M 1.9x 33.41B 2.4x 29.3 2.4x
ASFD-D3 0.953 0.943 0.848 26.56M 1x 46.32B 1x 23.1 1x
PyramidBox(Res50) [tang2018pyramidbox] 0.951 0.943 0.844 64.15M 2.4x 111.09B 2.4x 54.5 2.4x
ASFD-D4 0.956 0.945 0.858 36.76M 1x 70.45B 1x 28.7 1x
DSFD(Res152) [li2019dsfd] 0.955 0.942 0.851 114.5M 3.1x 259.55B 3.7x 83.3 2.9x
ASFD-D5 0.957 0.947 0.859 67.73M 1x 147.40B 1x 32.8 1x
ASFD-D6 0.958 0.947 0.860 86.10M 1x 183.11B 1x 37.7 1x
We omit ensemble and test-time multi-scale results; latency is measured on the same machine.
Table 5: Performance on WIDER FACE. #Params and #FLOPS denote the number of parameters and multiply-adds. LAT denotes network inference latency with VGA-resolution images.

4.2.2 DRMC Loss.

We use DSFD [li2019dsfd] as the baseline and add the Distance-based Regression and Margin-based Classification losses for comparison. As presented in Table 4, the proposed DRMC loss together with the auxiliary one, i.e., the loss operating on the output of the first shot, improves the performance on the Easy, Medium and Hard subsets for the DSFD baseline, and brings further gains for the AutoFEM-based DSFD.

4.2.3 Improved Model Scaling.

As discussed in Sec. 3.4, an improved model scaling approach is proposed to trade off speed and accuracy by jointly scaling up the depth and width of the backbone, feature enhance module and head network of our ASFD. The comparisons of our ASFD D0-D6 with other methods are presented in Table 5, where our models achieve better efficiency than the others, suggesting the superiority of the AutoFEM searched by the improved NAS method and the benefits of jointly balancing the dimensions of the different modules. Specifically, our ASFD-D0 and ASFD-D1 can run at more than 100 frames per second (FPS) with their lightweight backbones. Even the model with the highest AP scores, i.e., ASFD-D6, runs at roughly 26 FPS, which is still more than twice as fast as DSFD while achieving better performance.

(a) Easy
(b) Medium
(c) Hard
Figure 6: Precision-recall curves on the validation set of WIDER FACE.
(a) Discontinuous ROC curves.
(b) Continuous ROC curves.
Figure 7: ROC curves on the FDDB dataset.

4.3 Comparisons with State-of-the-Art Methods

Finally, we evaluate our ASFD on two popular benchmarks, WIDER FACE [yang2016wider] and FDDB [jain2010fddb], using ASFD-D6. Our model is trained only on the training set of WIDER FACE and evaluated on both benchmarks without any fine-tuning. We also follow the setting in [li2019dsfd] to build image pyramids for multi-scale testing for better performance. Our ASFD-D6 obtains the highest AP scores on the Easy, Medium and Hard subsets of the WIDER FACE validation set, as shown in Fig. 6, setting a new state of the art among face detectors; meanwhile, ASFD-D6 is faster than RefineFace (37.7 vs 56.6 ms), even though the latter is our strongest competitor in performance [zhang2019refineface]. State-of-the-art performance is also achieved on FDDB, in terms of the true positive rates on the discontinuous and continuous ROC curves, as shown in Fig. 7. More examples of our ASFD handling faces with large variations are shown in Fig. 8 to demonstrate its effectiveness.

Figure 8: Illustration of the robustness of our ASFD to various large variations. Red bounding boxes indicate detections whose confidence is above the threshold.

5 Conclusions

In this work, a novel Automatic and Scalable Face Detector (ASFD) is proposed with significantly better accuracy and efficiency, in which we adopt differential architecture search to discover feature enhance modules for efficient multi-scale feature fusion and context enhancement. Besides, the Distance-based Regression and Margin-based Classification (DRMC) losses are introduced to effectively generate accurate bounding boxes and highly discriminative deep features. We also adopt improved model scaling methods to develop a family of ASFD models by scaling the backbone, feature module, and head network up and down. Comprehensive experiments conducted on the popular benchmarks FDDB and WIDER FACE demonstrate the efficiency and accuracy of our proposed ASFD compared with state-of-the-art methods.