Face detection is the prerequisite step of facial image analysis for various applications, such as face alignment [tai2019towards], attribute analysis [zhang2018joint, pan2018mean], recognition [yang2016nuclear, Huang2020curricularface] and verification [deng2019arcface, wang2018cosface]. In the past few years, tremendous progress has been made in designing deep Convolutional Neural Network (CNN) [he2016deep] architectures for face detection. However, it remains a challenge to accurately detect faces with a high degree of variability in scale, pose, occlusion, expression, appearance, and illumination. In addition, large model sizes and expensive computation costs make these detectors difficult to deploy in many real-world applications where memory and latency are highly constrained.
There have been many works aiming to develop face detector architectures, mainly composed of one-stage [tang2018pyramidbox, chi2019selective, li2019dsfd, deng2019retinaface, zhang2019refineface] and two-stage [wang2017face, wang2017detecting, zhang2018face] face detectors. Among them, the one-stage approach is the dominant, anchor-based face detection approach, which tiles regular and dense anchors with various scales and aspect ratios over all locations of several multi-scale feature maps. Generally, there are four key parts in this framework: the backbone, the feature module, the head network, and the multi-task loss. The feature module uses a Feature Pyramid Network (FPN) [lin2017feature, li2017object] to aggregate hierarchical feature maps between high- and low-level features of the backbone, and modules for refining the receptive field [liu2018receptive, li2019dsfd, zhang2019refineface], such as the Receptive Field Block (RFB), are also introduced to provide rich contextual information for hard faces. Moreover, the multi-task loss is composed of a binary classification loss and a bounding box regression loss, in which the former classifies the predefined anchors into face and background, and the latter regresses detected faces to accurate locations. Despite the progress achieved by the above methods, problems remain in three aspects:
Feature Module. Although FPN [lin2017feature] and RFB [liu2018receptive] are simple and effective for general object detection, they may be suboptimal for face detection, and many recent works [li2019dsfd, zhang2019refineface] propose various cross-scale connections or operations to combine features into better representations. However, the design space of the feature module remains huge. In addition, these methods all adopt the same feature module for different feature maps from the backbone, ignoring the distinct importance and contributions of the different input features.
Multi-task Loss. The conventional multi-task loss in object detection combines a regression loss and a classification loss [girshick2015fast, ren2015faster, liu2016ssd, lin2017focal]. The Smooth-L1 loss is commonly used for bounding box regression in current face detectors [li2019dsfd, zhang2019refineface], but it converges slowly and regresses inaccurately due to its sensitivity to scale variation. As for classification, the standard binary softmax loss in DSFD [li2019dsfd] usually lacks discriminative power, while RefineFace [zhang2019refineface] adopts the sigmoid focal loss to better distinguish faces from the background, which relies on predefined hyper-parameters and is extremely time-consuming.
Efficiency and Accuracy. Both DSFD and RefineFace rely on big backbone networks, deep detection head networks and large input image sizes for high accuracy, while FaceBoxes [zhang2017faceboxes] is a lightweight face detector with fewer layers that achieves better efficiency by sacrificing accuracy. These methods cannot balance efficiency and accuracy across the wide spectrum of resource constraints found in real-world applications, from mobile devices to data centers. Moreover, an appropriate choice of network width and depth usually requires tedious manual tuning.
To address these issues, we propose a novel Automatic and Scalable Face Detector (ASFD) to deliver the next generation of efficient face detectors with high accuracy. Specifically, we first introduce an Automatic Feature Enhance Module (AutoFEM), found via an improved differentiable architecture search, for efficient and effective multi-scale feature fusion and context enhancement. Second, inspired by the distance-IoU loss [zheng2019distance] and the large margin cosine loss [wang2018cosface], we propose a Distance-based Regression and Margin-based Classification (DRMC) multi-task loss for accurate bounding boxes and highly discriminative deep features. Finally, motivated by the scalable model design of EfficientNet [tan2019efficientnet] and EfficientDet [tan2019efficientdet], we adopt compound scaling methods and uniformly scale the backbone, feature module and head networks to develop a family of ASFD models, which consistently outperform prior competitors in terms of parameter count, FLOPs and latency, as shown in Fig. 1, achieving a better trade-off between efficiency and accuracy.
In summary, the main contributions of this paper include:
Automatic Feature Enhance Module via improved differential architecture search for efficient multi-scale feature fusion and context enhancement.
Distance-based regression and margin-based classification multi-task loss for accurate bounding boxes and highly discriminative deep features.
A new family of face detectors achieved by compound scaling methods on backbone, feature module, head network and resolution.
Comprehensive experiments conducted on popular benchmarks, e.g. WIDER FACE and FDDB, to demonstrate the efficiency and accuracy of our ASFD compared with state-of-the-art methods.
2 Related Work
Face detection. Traditional face detection methods mainly rely on hand-crafted features, such as Haar-like features [viola2004robust], control point sets [abramson2005yef] and edge orientation histograms [levi2004learning]. With the development of deep learning, OverFeat [sermanet2013overfeat], Cascade-CNN [li2015cascadecnn] and MTCNN [zhang2016mtcnn] adopt CNNs to classify sliding windows, which is neither end-to-end nor efficient. Current state-of-the-art face detection methods have inherited achievements from generic object detection approaches [ren2015faster, liu2016ssd, lin2017focal, zhang2018refinedet]. More recently, DSFD [li2019dsfd] and RefineFace [zhang2019refineface] propose a pseudo two-stage structure based on the single-shot framework to make face detectors more effective and accurate. There are two main differences between previous face detectors and our ASFD: (1) the automatic feature module is obtained by an improved NAS method instead of being hand-designed; (2) the margin-based and distance-based losses are employed together for discriminative power.
Neural Architecture Search. Neural architecture search (NAS) has attracted increasing research interest. NASNet [zoph2018learning] uses Reinforcement Learning (RL) with a controller Recurrent Neural Network (RNN) to search neural architectures sequentially. To save computational resources, Differentiable Architecture Search (DARTS) [liu2018darts] builds on a continuous relaxation of a supernet and proposes gradient-based search, and Partially-Connected DARTS (PC-DARTS) [xu2019pcdarts] samples a small part of the supernet to reduce the redundancy in the network space. Building on these NAS works for image classification, some recent works apply NAS to generic object detection. DetNAS [chen2019detnas] searches for better backbones for object detection, while NAS-FPN [ghiasi2019fpn] searches for an FPN alternative based on RNN and RL, which is time-consuming. NAS-FCOS [wang2019fcos] aims to efficiently search for the FPN as well as the prediction head based on an anchor-free one-stage framework. Different from DARTS and PC-DARTS, we introduce an improved NAS method that samples only the path with the highest weight for each node during the forward pass of the search phase, further reducing the memory cost. To the best of our knowledge, ASFD is the first work to report the success of applying differentiable architecture search to face detection.
Model Scaling. There are several ways to scale a network; for instance, ResNet [he2016deep] can be scaled down or up by adjusting the network depth. Recently, EfficientNet [tan2019efficientnet] demonstrated remarkable model efficiency for image classification by jointly scaling up network width, depth, and resolution. For object detection, EfficientDet [tan2019efficientdet] proposes a compound scaling method that uniformly scales the resolution, depth and width of the backbone, feature network, and box/class prediction networks at the same time. Inspired by these model scaling methods, we develop a new family of face detectors to optimize both accuracy and efficiency.
3 Our approach
We first introduce the pipeline of our proposed framework in Sec. 3.1, then describe the automatic feature enhance module in Sec. 3.2 and the distance-based regression and margin-based classification loss in Sec. 3.3. Finally, based on the improved model scaling, we develop a new family of face detectors in Sec. 3.4.
Fig. 2 illustrates the overall framework of ASFD, which follows the paradigm of DSFD [li2019dsfd] using the dual-shot structure. The ImageNet-pretrained backbone generates six pyramidal feature maps with progressively larger strides. Our proposed AutoFEM transfers these original feature maps into six enhanced feature maps. Both the regression and classification head networks consist of several convolutions and map the original and enhanced features to class scores and bounding boxes. In particular, the two shots share the same head network and are optimized with the proposed DRMC loss.
Multi-scale features from different layers of the backbone network are commonly used in object detection to predict objects of various sizes. A feature enhance module (FEM) aims at utilizing the information in these features and refining their receptive fields for better prediction, as verified in [lin2017feature, liu2018receptive, li2019dsfd, liu2018path, zhang2019refineface] by inserting manually designed modules into the network. In this work, we incorporate an improved NAS method to search for the proposed AutoFEM, which consists of AutoFEM-FPN and AutoFEM-CPM modules, to enhance all features with reasonable classification and regression capacities.
As shown in Fig. 3, FPNs usually take multi-scale features as input and generate enhanced features at identical scales. NAS-based methods are used in NAS-FPN [ghiasi2019fpn] and Auto-FPN [xu2019auto] to discover better architectures; they create a fully-connected FPN as the supernet to search for reasonable connections and operations among feature layers, but the computation is expensive since each layer takes features of all scales as input. Our approach instead efficiently searches for reasonable operations along a predefined pathway, which takes each feature and those of neighboring resolutions as input and augments the top-down and bottom-up paths together.
In fact, an FPN is comprised of several fusion cells that aggregate features at different resolutions. For the given multi-scale features {C_i}, the conventional top-down FPN [lin2017feature] is conducted as

P_i = Conv(C_i + Upsample(P_{i+1})),

where Upsample(·) is usually bilinear interpolation for resolution matching, and Conv(·) is usually a convolution operation for feature processing. Similarly, the fusion operation of our AutoFEM-FPN, presented in Fig. 2, is formulated as,
where P_i^td and P_i^bu are the top-down and bottom-up features of level i, indicated by the blue and green circles in Fig. 3 (d); O_1, O_2 and O_3 are operations for feature processing; w_1, w_2 and w_3 are learnable scalars computed by a softmax function to weight the different features; MaxPool(·) is the max pooling operation, and ⊗ denotes element-wise multiplication.
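For concreteness, the conventional top-down fusion of Eq. (1) can be sketched in plain NumPy. This is an illustrative sketch only: nearest-neighbor upsampling stands in for bilinear interpolation, and a per-pixel linear map over channels stands in for the convolution, so all names and shapes here are assumptions rather than the paper's implementation.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    """Per-pixel linear map over channels, standing in for a 1x1 convolution."""
    return np.einsum('oc,chw->ohw', w, x)

def top_down_fpn(feats, weights):
    """feats: backbone maps C_i ordered fine-to-coarse; returns fused maps P_i.

    The coarsest map starts the pathway unchanged; every finer level is
    fused as P_i = conv1x1(C_i + upsample2x(P_{i+1}))."""
    ps = [feats[-1]]
    for c, w in zip(reversed(feats[:-1]), weights):
        ps.append(conv1x1(c + upsample2x(ps[-1]), w))
    return ps[::-1]  # back to fine-to-coarse order

rng = np.random.default_rng(0)
c3 = rng.normal(size=(8, 16, 16))
c4 = rng.normal(size=(8, 8, 8))
c5 = rng.normal(size=(8, 4, 4))
p3, p4, p5 = top_down_fpn([c3, c4, c5], [np.eye(8), np.eye(8)])
print(p3.shape, p4.shape, p5.shape)  # (8, 16, 16) (8, 8, 8) (8, 4, 4)
```

With identity channel maps, each fused level is simply the backbone feature plus the upsampled coarser level, which makes the pathway easy to verify.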
Here, the candidate set O of operations in Eq. (2) contains plain, depthwise-separable and dilated convolutions with various kernel sizes and dilation rates, and all of these operations are implemented by depthwise-separable convolutions. The decision of which operation to select is made by the architecture parameters α during the search phase, where a PC-DARTS [xu2019pcdarts] based approach is involved. Compared to the search space of DARTS [liu2018darts] and PC-DARTS [xu2019pcdarts], we remove the none operation, owing to the predesigned pathway of AutoFEM-FPN, and replace the skip connection with a convolution for more stable searching and more robust features. During the search phase, the operations O_1, O_2 and O_3 are mixed operations, i.e. the sum of the candidate operations in O weighted by the architecture parameters, while w_1, w_2 and w_3 inherit the edge-normalization setting of [xu2019pcdarts] to adaptively learn the importance of features along each path. At the end of the search, the final AutoFEM-FPN is obtained by replacing each mixed operation with the most likely one, i.e. the operation with the maximal weight in α.
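As a minimal sketch of this continuous relaxation (not the paper's implementation), the snippet below mixes a few toy operations with softmax-normalized architecture parameters and then discretizes by keeping the strongest one; the operations and the alpha values are purely illustrative.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy candidate operations standing in for the convolution variants in O
# (identity, local smoothing, channel scaling); purely illustrative.
ops = [
    lambda x: x,
    lambda x: 0.5 * (x + np.roll(x, 1)),
    lambda x: 2.0 * x,
]

def mixed_op(x, alpha):
    """Search phase: weighted sum of all candidate ops (DARTS-style relaxation)."""
    w = softmax(alpha)
    return sum(wk * op(x) for wk, op in zip(w, ops))

def derived_op(alpha):
    """End of search: keep only the strongest candidate (argmax of alpha)."""
    return ops[int(np.argmax(alpha))]

x = np.arange(4.0)
alpha = np.array([0.1, 0.2, 3.0])   # hypothetical learned architecture params
y_mixed = mixed_op(x, alpha)
y_final = derived_op(alpha)(x)       # the op with maximal weight: 2 * x
print(y_final)  # [0. 2. 4. 6.]
```

During search the output is the soft mixture `y_mixed`; discretization replaces it with the single operation that received the largest architecture weight.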
Recent research has confirmed that multi-branch structures with different kernel sizes can capture multi-scale information, which is effective for detecting objects with different aspect ratios and scales [szegedy2016rethinking, szegedy2017inception, liu2018receptive, zhang2019refineface, li2019dsfd]. Inspired by these works, we propose an improved method based on PC-DARTS that searches for a Context Prediction Module (CPM) for each pyramid level, so as to suit its specific receptive fields, and takes the state-of-the-art structures RFB [liu2018receptive], FEM [li2019dsfd] and RFE [zhang2019refineface] into the search space.
Recall that PC-DARTS [xu2019pcdarts] reduces the memory cost several times by sampling only a portion of the channels for each operation selection. However, the memory requirement is still large when PC-DARTS is applied to search several head modules for detection, because input images have much higher resolution than in classification tasks. As shown in Fig. 4, instead of taking all front nodes as input as in PC-DARTS, our improved approach keeps only the path with the largest weight during the forward procedure, which reduces both computation and memory cost several times over.
Recall that the computation of the j-th node in PC-DARTS [xu2019pcdarts] is

x_j = Σ_{i<j} η_{i,j} · ō_{i,j}(x_i),

in which η_{i,j} is the weight of the path from the i-th node to the j-th node, x_i is the feature map of the i-th node, and ō_{i,j} is the corresponding mixed operation weighted by the architecture parameters. Our improved version is efficient due to its sampling behavior for connections, which is implemented via the Gumbel-Softmax trick [jang2016categorical, maddison2016concrete, dong2019searching],
where g_i = -log(-log(u_i)) with u_i ~ Uniform(0, 1) is an i.i.d. sample drawn from Gumbel(0, 1), and τ is the softmax temperature. Note that the argmax function cannot back-propagate gradients, so Gumbel-Softmax uses the softmax during the backward pass to address this problem. However, the gradients of the softmax may be unstable because the path-weight vectors of different nodes have different lengths; for instance, the weight vector of the first intermediate node is much shorter than that of the sixth one. Hence, we modify the gradients of Gumbel-Softmax as follows,
where the length of the path-weight vector is multiplied by the original gradient to address the problem of imbalanced gradients.
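A minimal NumPy sketch of this sampling scheme (shapes and names are assumptions; this is not the released implementation): the forward pass draws a hard one-hot path choice with Gumbel noise, and the backward helper rescales the softmax gradient by the number of candidate paths, mirroring the modification above.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Forward pass: draw a hard one-hot sample over candidate paths."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))                      # i.i.d. Gumbel(0, 1) noise
    y = (logits + g) / tau
    soft = np.exp(y - y.max())
    soft /= soft.sum()
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0                  # argmax is non-differentiable
    return hard, soft

def scaled_soft_grad(soft, upstream):
    """Backward pass: softmax Jacobian rescaled by the number of paths n,
    compensating nodes whose weight vectors have different lengths."""
    n = soft.size
    jac = np.diag(soft) - np.outer(soft, soft)   # d softmax / d logits
    return n * (jac @ upstream)

rng = np.random.default_rng(0)
hard, soft = gumbel_softmax_sample(np.array([1.0, 2.0, 0.5]), rng=rng)
print(hard, soft.round(3))
```

The hard sample is used in the forward pass, while gradients flow through the (rescaled) soft probabilities, in the spirit of the straight-through estimator.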
On the other hand, our sampling behavior simplifies the search space of PC-DARTS, replacing the directed acyclic graph [xu2019pcdarts, liu2018darts] with a tree that takes the input node as the parent with multiple branches. A snapshot taken during searching is presented in the left panel of Fig. 4 (b). Our approach concatenates only the leaf nodes of the tree as the output, rather than all intermediate nodes, to obtain more robust features; this is motivated by the fact that leaf nodes usually have large receptive fields and more semantic information, as verified by the experiments in Sec. 4.2.
During the search process, we define the candidate operations of our AutoFEM-CPM as plain, depthwise-separable, asymmetric and dilated convolutions of various kernel sizes, in which the asymmetric convolution operations are added to provide rectangular receptive fields. As with AutoFEM-FPN, all operations are implemented in the depthwise-separable manner. After searching, the final architecture of AutoFEM-CPM is obtained by sampling the path with the largest weight for each node and selecting the strongest operation for each path (see the right panel of Fig. 4 (b)).
3.3 DRMC Loss
3.3.1 Distance-based Regression Loss.
Traditional regression losses adopted by single-stage detectors, such as the L1, L2 and smooth-L1 losses, are suboptimal for accurate localization due to the weak correlation between the classification and regression tasks. IoU-based losses [wu2019iou, rezatofighi2019giou, zheng2019distance] focus more attention on positive anchors with high IoU and decrease the gradients of examples with low IoU, which can improve localization accuracy significantly. We employ the DIoU loss [zheng2019distance] here, which considers the overlap area and the central-point distance of bounding boxes simultaneously, given by

L_DR = 1 - IoU(b, b^gt) + ρ²(p, p^gt) / c²,
where b and b^gt are the predicted box and the corresponding ground truth, p and p^gt are their central points, IoU(·) computes the IoU between two boxes, ρ(·) is the Euclidean distance, and c is the diagonal length of the smallest enclosing box covering the two boxes.
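A self-contained NumPy sketch of the DIoU loss above for a single pair of boxes; the (x1, y1, x2, y2) box format is an assumption for illustration, and a batched implementation would vectorize these steps.

```python
import numpy as np

def diou_loss(box, gt):
    """DIoU loss for boxes in (x1, y1, x2, y2) format."""
    # overlap area and IoU
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_b + area_g - inter)
    # squared distance between central points
    cb = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])
    cg = np.array([(gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2])
    rho2 = np.sum((cb - cg) ** 2)
    # squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box[0], gt[0]), min(box[1], gt[1])
    ex2, ey2 = max(box[2], gt[2]), max(box[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - iou + rho2 / c2

print(diou_loss(np.array([0., 0., 2., 2.]), np.array([0., 0., 2., 2.])))  # 0.0
```

A perfect prediction gives zero loss, while disjoint boxes are penalized both for zero overlap and for their center distance.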
3.3.2 Margin-based Classification Loss.
Softmax- and sigmoid-based cross-entropy losses are commonly used in state-of-the-art detectors; however, they usually lack discriminative power, especially for hard objects. Motivated by the large-margin losses in face recognition [wen2016discriminative, wang2018cosface, deng2019arcface], a margin-based classification loss is used in our work to maximize the inter-class variance and minimize the intra-class variance. Suppose s is the vector of predicted confidence scores before the softmax and y is the index of the ground-truth one-hot label; the margin-based classification loss is defined as

L_MC = -log( exp(s_y - m) / Σ_j exp(s_j - m·[j = y]) ),
where m is the margin added between the classes for prediction, and the Iverson bracket [·] outputs 1 when the condition is true and 0 otherwise. Then, assuming that the predicted boxes and confidences of the first shot are b^1 and s^1 with corresponding ground truths g^1 and y^1, and similarly b^2, s^2, g^2 and y^2 for the second shot, the total DRMC loss is defined as
where λ1 and λ2 are weights balancing the distance-based and margin-based losses in the DRMC terms of the first and second shots respectively, and N1 and N2 are the numbers of positive anchors for the first and second shots.
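As an illustration of the margin-based classification term, here is a NumPy sketch of a CosFace-style additive margin on the ground-truth score before the softmax; the margin value and the scores below are invented for the example.

```python
import numpy as np

def margin_softmax_loss(scores, label, m=0.2):
    """Cross-entropy with an additive margin m subtracted from the
    ground-truth class score before the softmax (CosFace-style)."""
    z = scores.astype(float).copy()
    z[label] -= m                       # enlarge the decision margin
    z -= z.max()                        # numerical stability
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

scores = np.array([2.0, 1.0])           # hypothetical face/background scores
plain = margin_softmax_loss(scores, 0, m=0.0)      # standard cross-entropy
margined = margin_softmax_loss(scores, 0, m=0.2)   # margin makes the target harder
print(margined > plain)  # True
```

The margin increases the loss of an already-correct prediction, forcing the target score to exceed the others by at least m and thereby widening the inter-class gap.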
3.4 Improved Model Scaling
In order to satisfy a wide spectrum of resource constraints, we develop a family of face detectors to optimize both accuracy and efficiency. Inspired by EfficientNet [tan2019efficientnet] and EfficientDet [tan2019efficientdet], we propose an improved model scaling method for face detection, which jointly scales up the depth and width of the backbone, feature module and head network. Unlike EfficientDet, we do not scale up the resolution but fix the training input size, to improve the recall of faces with a high degree of scale variability. For the backbone, we adopt MobileNets [howard2017mobilenets] and ResNet [he2016deep]; these classification networks can be scaled down or up by adjusting the network depth and width. In terms of the feature module, we linearly grow the AutoFEM width and exponentially increase its depth. For the head network, we fix its width to be always the same as that of AutoFEM and exponentially increase its depth. A coefficient φ is employed to jointly scale up the above parameters in our improved model scaling method, given by
Following Eq. (9), we propose a series of ASFD models with different φ, as shown in Table 1. Notably, the MobileNet backbone of the smallest model is still more expensive than its other modules; therefore, we simply derive an even lighter variant by using a smaller MobileNet backbone for a more efficient detector.
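To make the scaling rule concrete, here is a hedged Python sketch of such a compound rule; the base width, base depth and growth rates are invented placeholders, not the constants of Eq. (9).

```python
def asfd_scaling(phi, base_width=64, base_depth=2, width_slope=16):
    """Hypothetical compound rule: width grows linearly with the
    coefficient phi, depth grows exponentially (all bases assumed)."""
    width = base_width + width_slope * phi            # AutoFEM / head width
    depth = int(round(base_depth * 2 ** (phi / 2)))   # module depth
    return width, depth

for phi in range(4):
    print(phi, asfd_scaling(phi))
```

A single coefficient thus moves every component of the detector up or down in lock-step, which is what lets the family span mobile-scale to server-scale budgets without per-model tuning.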
4 Experiments

4.1 Implementation Details
During training, we use ImageNet-pretrained models to initialize the backbone. The SGD optimizer with momentum and weight decay is applied to fine-tune the models on four Nvidia Tesla V100 GPUs. The learning rate rises linearly over the initial iterations using the warmup strategy, is then decayed in steps at two later epochs, and training ends after the final epoch. For inference, non-maximum suppression is applied with a Jaccard-overlap threshold to keep the most confident faces among the detections. All models are trained only on the training set of WIDER FACE.
In the search scenario, ResNet is selected as the backbone of our supernet, AutoFEM-FPN and AutoFEM-CPM share a fixed channel width, and each AutoFEM-CPM consists of several intermediate nodes. For efficiency, only a portion of the features is sampled on each edge, following the setting of PC-DARTS. We use the Adam optimizer with weight decay to optimize the architecture parameters after the initial warmup epochs of the search.
Table 2: Baseline and contributions of different feature modules.
4.2 Analysis on ASFD
4.2.1 AutoFEM.
The architectures of AutoFEM-FPN and AutoFEM-CPM are searched separately on the basis of a light DSFD [li2019dsfd], which uses ResNet as the backbone and has shallow head modules. The proposed AutoFEM is obtained by cascading these two modules together; an example of AutoFEM, which is adopted in our ASFD, is presented in Fig. 5. In AutoFEM-FPN, each output level fuses features from its neighboring levels with varied convolutions and from the same level with a convolution, suggesting the importance of information from both lower and upper layers. Different context prediction modules are obtained for the 6 detection layers, in which the low-level CPMs have larger receptive fields, capturing more context to improve performance on hard faces.
To demonstrate the effectiveness of the searched AutoFEM in ASFD, experiments are conducted to compare AutoFEM-FPN and AutoFEM-CPM with other state-of-the-art structures. The DSFD-based detector with a ResNet backbone and without the FEM module is employed as the baseline, and the results of applying different feature pyramid networks and context prediction modules to the feature module are shown in Table 2, which indicates that our AutoFEM improves the detection performance. After applying AutoFEM-FPN, the AP scores of the baseline detector on the Easy, Medium and Hard subsets are all improved, surpassing other structures such as FEM-FPN [li2019dsfd], BiFPN [tan2019efficientdet] and PAN [liu2018path], and the performance is further improved by cascading AutoFEM-FPN and AutoFEM-CPM together.
Table 3: Comparison of search methods with 4, 6 and 8 intermediate nodes. "catall" means all intermediate nodes are concatenated as the output, and "catleaf" means only the leaf ones are concatenated.
Moreover, simulations are conducted to verify the effectiveness of our improved NAS approach for searching AutoFEM-CPM against PC-DARTS, as shown in Table 3, where the modules with 8 intermediate nodes can only be searched with our method due to the memory limitation. As can be seen, our improved method with 6 intermediate nodes achieves the highest AP scores on the Easy, Medium and Hard subsets by concatenating the leaf nodes only.
Table 4: Ablation of the DRMC loss (AP on Easy / Medium / Hard).
Baseline + Auxiliary loss + MC loss: 0.954 / 0.945 / 0.885
Baseline + Auxiliary loss + DR loss: 0.955 / 0.946 / 0.883
Baseline + Auxiliary loss + DRMC loss: 0.957 / 0.947 / 0.884
Baseline + AutoFEM + Auxiliary loss + DRMC loss: 0.961 / 0.953 / 0.888
Table 5: We omit ensemble and test-time multi-scale results; latency is measured on the same machine.
4.2.2 DRMC Loss.
We use DSFD [li2019dsfd] as the baseline and add the Distance-based Regression and Margin-based Classification losses for comparison. As presented in Table 4, the proposed DRMC loss, together with the auxiliary one (i.e. the loss operating on the output of the first shot), brings consistent performance improvements on the Easy, Medium and Hard subsets for both the DSFD baseline and the AutoFEM-based DSFD.
4.2.3 Improved Model Scaling.
As discussed in Sec. 3.4, an improved model scaling approach is proposed to trade off speed and accuracy by jointly scaling up the depth and width of the backbone, feature enhance module and head network of our ASFD. Comparisons of the ASFD family with other methods are presented in Table 5, where our models achieve better efficiency than the others, suggesting the superiority of the AutoFEM searched by the improved NAS method and the benefits of jointly balancing the scaling dimensions of the different components. Specifically, our smallest models run in real time on a single Nvidia GPU with a lightweight backbone, and even the model with the highest AP scores runs several times faster than DSFD with better performance.
4.3 Comparisons with State-of-the-Art Methods
Finally, we evaluate our ASFD on two popular benchmarks, WIDER FACE [yang2016wider] and FDDB [jain2010fddb], using our largest model. The model is trained only on the training set of WIDER FACE and evaluated on both benchmarks without any fine-tuning. We also follow the setting in [li2019dsfd] and build image pyramids for multi-scale testing to obtain better performance. Our model obtains the highest AP scores on the Easy, Medium and Hard subsets of the WIDER FACE validation set, as shown in Fig. 6, setting a new state of the art; meanwhile, it is faster than RefineFace (37.7 vs 56.6 ms) even though the latter is our closest competitor in performance [zhang2019refineface]. State-of-the-art performance is also achieved on FDDB, i.e. the highest true positive rates on the discontinuous and continuous curves at a fixed number of false positives, as shown in Fig. 7. More examples of our ASFD handling faces with various variations are shown in Fig. 8 to demonstrate its effectiveness.
In this work, a novel Automatic and Scalable Face Detector (ASFD) is proposed with significantly better accuracy and efficiency, in which we adopt differentiable architecture search to discover feature enhance modules for efficient multi-scale feature fusion and context enhancement. Besides, the Distance-based Regression and Margin-based Classification (DRMC) losses are introduced to effectively generate accurate bounding boxes and highly discriminative deep features. We also adopt improved model scaling methods to develop a family of ASFD models by scaling the backbone, feature module, and head network up and down. Comprehensive experiments conducted on the popular benchmarks FDDB and WIDER FACE demonstrate the efficiency and accuracy of the proposed ASFD compared with state-of-the-art methods.