Feature Agglomeration Networks for Single Stage Face Detection

12/03/2017 ∙ by Jialiang Zhang, et al. ∙ Zhejiang University and Singapore Management University

Recent years have witnessed promising results of face detection using deep learning, especially for the family of region-based convolutional neural network (R-CNN) methods and their variants. Despite remarkable progress, face detection in the wild remains an open research challenge, especially when detecting faces at vastly different scales and characteristics. In this paper, we propose a novel framework of "Feature Agglomeration Networks" (FAN) to build a new single-stage face detector, which not only achieves state-of-the-art performance but also runs efficiently. Inspired by the recent success of Feature Pyramid Networks (FPN) [11] for generic object detection, the core idea of our framework is to exploit the inherent multi-scale features of a single convolutional neural network to detect faces of varied scales and characteristics, aggregating higher-level semantic feature maps of different scales as contextual cues to augment lower-level feature maps in a hierarchical agglomeration manner at marginal extra computation cost. Unlike the existing FPN approach, we construct our FAN architecture using a new Agglomerative Connection module and further propose a Hierarchical Loss to train the FAN model effectively. We evaluate the proposed FAN detector on several public face detection benchmarks and achieve new state-of-the-art results with real-time detection speed on GPU.






1 Introduction

Face detection is generally the first key step in face-related applications, such as face alignment, face recognition and facial expression analysis. Despite being studied extensively, detecting faces in the wild remains an open research problem due to the various challenges posed by real-world faces.

Early works of face detection mainly focused on manually crafting effective features and then building powerful classifiers [20], which are often sub-optimal and may not always achieve satisfactory results. Recent years have witnessed the successful application of deep learning techniques to face detection tasks [17, 12]. Despite being extensively studied, building a fast face detector with high accuracy in arbitrary real-world scenarios remains an open challenge.

In general, face detection can be viewed as a special case of generic object detection [17, 12]. Many previous state-of-the-art face detectors inherited successful techniques from generic object detection, especially from the family of region-based CNN (R-CNN) methods. Among R-CNN based face detectors, there are two major categories of detection frameworks: (i) two-stage detectors (a.k.a. “proposal-based”), such as Fast R-CNN [6], Faster R-CNN [17], etc.; and (ii) single-stage detectors (a.k.a. “proposal-free”), such as Region Proposal Networks (RPN) [17], Single-Shot Multibox Detector (SSD) [12], etc. The single-stage detection framework enjoys higher inference efficiency, and thus has attracted increasing attention recently due to the high demand for real-time face detectors in real-world applications.

Despite enjoying computational advantages, single-stage detectors’ performance may drop dramatically when handling small faces. To build a robust detector, there are two major routes for improvement. One is to train multi-shot single-scale detectors, using the idea of an image pyramid to train multiple separate single-scale detectors, one for each specific scale (e.g., the HR detector in [8]). However, such an approach is computationally expensive, since an image must pass through a very deep network multiple times during inference. The other is to train a single-shot multi-scale detector by exploiting the multi-scale feature representations of a deep convolutional network, requiring only a single pass through the network during inference. For example, S3FD [28] follows the second approach by extending SSD [12] for face detection.

Though achieving promising performance, S3FD shares a drawback of SSD-style detection frameworks: each multi-scale feature map is used alone for prediction, so a high-resolution but semantically weak feature map may fail to make accurate predictions. Inspired by the recent success of Feature Pyramid Networks (FPN) [11] for generic object detection, we propose a simple yet effective detection framework of “Feature Agglomeration Networks” (FANet) to overcome this by combining low-resolution semantically strong features with high-resolution semantically weak features. In particular, FANet aims to create a hierarchical feature pyramid with rich semantics at all scales, boosting the prediction performance of high-resolution feature maps using rich contextual cues from low-resolution semantically strong features. Unlike FPN, which creates the feature pyramid using a skip connection module, we propose a novel “Agglomeration Connection” module to create a new hierarchical feature pyramid for FANet. Besides, a new Hierarchical Loss (HL) is presented to train the FANet model effectively in an end-to-end manner. We conduct extensive experiments on several public face detection benchmarks to validate the efficacy of our proposed FANet structure as well as the HL training scheme.

In summary, the main contributions of this paper include the following:

  • We introduce the Agglomeration Connection module to enhance the feature representation power of high-resolution shallow layers;

  • We propose a simple yet effective framework of Feature Agglomeration Networks (FANet) for single stage face detection, which creates a new hierarchical effective feature pyramid with rich semantics at all scales;

  • An effective Hierarchical Loss based training scheme is presented to train the proposed FANet model in an end-to-end manner, which guides a more stable and better training of discriminative features;

  • Comprehensive experiments are carried out on several public face detection benchmarks to demonstrate the superiority of the proposed FANet framework.

2 Related Work

Generic Object Detection. As a special case of generic object detection, many face detectors inherit successful techniques for generic object detection [28, 12, 18]. There are two major categories of Region-based CNN variants for object detection: (i) two-stage detection systems where proposals are generated in the first stage and further classified in the second stage; and (ii) single-stage detection systems where the object detection and classification are performed simultaneously from the feature maps without a separate proposal generation stage. The two-stage detection systems include Fast R-CNN [6], Faster R-CNN [17] and their variants, and the single-stage detection systems include YOLO [16], RPN [17], SSD [12], etc. Our detector essentially belongs to the single-stage detection framework.

Multi-shot single-scale Face Detector. To detect faces with a large range of scales, one way is to train multiple detectors, each of which targets a specific scale. Hu et al. [8] trained multiple separate RPN detectors for different scales and made inference using image pyramids. However, their method is very time-consuming, since images must pass through a very deep network multiple times during inference. Hao et al. [7] learned a Scale Aware Network that estimates face scales in images and builds image pyramids according to the estimated values. Although this avoids some computation, multiple passes are still required when faces of widely varied scales are present in one image. Due to the high computational cost, this paradigm is not suitable for real-time applications.

Single-shot multi-scale Face Detector. Single-shot multi-box detector (SSD) [12] applies multi-scale feature representations for detecting different scales and thus only a single pass is required. S3FD [28] inherits SSD framework with carefully designed scale-aware anchors. However, S3FD shares the same limitation of SSD, where each feature is used alone for prediction and as a consequence, high-resolution features may fail to provide robust prediction due to the weak semantics. As inspired by FPN [11], we propose a new framework of FANet to effectively address the limitation of S3FD by aggregating low-resolution semantically strong features with high-resolution semantically weak features using the proposed Agglomeration Connection module.

Context Modeling. Contextual information is important for improving face detection. In [26], context is modeled by enlarging the window around proposals. For face detection, CMS-RCNN [30] utilizes a larger window at the cost of duplicating the classification head, which increases the memory requirement as well as the detection time. SSH [13] uses the idea of the inception module to create a context module. In our Agglomeration Connection module, besides an inception-like context module, we also incorporate the semantics from deeper feature maps in an agglomerative manner.

Feature Pyramid. A feature pyramid is a structure that combines semantically weak features with semantically strong features using skip connections. ION [1] extracts RoI features from different feature maps and concatenates them together. HyperNet [10] makes predictions on a Hyper Feature Map produced by aggregating multi-scale feature maps. DSSD [5] and RON [19] also apply the idea of lateral skip connections to create feature pyramids and achieve promising performance. In this paper, we propose a new Agglomeration Connection module which aggregates multi-scale features more effectively than the skip connection module. Besides, we introduce a novel Hierarchical Loss on the proposed FANet framework, which enables us to train this powerful detector effectively and robustly in an end-to-end manner.

3 Feature Agglomeration Networks

In this section, we present the Feature Agglomeration Networks (FANet) framework for face detection. First, we present the overall architecture of FANet. Then we propose the core Agglomeration Connection module for building FANet. The third part gives the detailed configuration of our detector. Finally, the Hierarchical Loss is introduced to guide a more stable and better training of the designed network structure.

Figure 1: The network architecture of the proposed “Feature Agglomeration Networks” (FANet). This is a three-level FANet architecture with a VGG-16 variant as the backbone CNN. The Hierarchical Loss accounts for the feature maps of all levels, while detection is performed on the last-level feature maps.

3.1 Overall Architecture

Our goal is to create an effective feature hierarchy with rich semantics at all levels to achieve robust multi-scale detection. Figure 1 shows our proposed Feature Agglomeration Network (FANet) with a 3-level feature hierarchy. The proposed FANet framework is general-purpose; without loss of generality, in this paper we consider the widely used VGG16 model as the backbone CNN architecture and SSD as the single-stage detector. As shown in Figure 1, detection is performed on six layers of feature maps. An existing SSD-like detector simply runs detection on the six feature maps of the first level of the feature hierarchy. By contrast, we create a multi-level feature hierarchy with feature agglomeration and run face detection on the enhanced feature maps of the last level (highlighted as blue feature maps in Figure 1). Specifically, the proposed feature agglomeration operation for an M-level FANet (M ≥ 1) can be mathematically defined as follows:
Φ_i^{(1)} = f_i(Φ_{i-1}^{(1)})    (1)

Φ_i^{(m)} = A(Φ_i^{(m-1)}, Φ_{i+1}^{(m-1)}),  m > 1    (2)

where Φ_i^{(m)} denotes the feature maps of the i-th layer in the m-th hierarchy. Specifically, for m = 1, i.e., the first-level hierarchy, Φ_i^{(1)} is the original feature map in vanilla S3FD, and f_i is the non-linear function that transforms the feature maps from the (i−1)-th layer to the i-th layer, consisting of Convolution, ReLU and Pooling layers, etc. For m > 1, Eq.(2) denotes that Φ_i^{(m)} is generated by a feature agglomeration function A that agglomerates two adjacent-layer feature maps of the same (previous) hierarchy, (Φ_i^{(m-1)}, Φ_{i+1}^{(m-1)}). The agglomeration function A is critical to the performance of the proposed detector. In the following, we propose a novel Agglomeration Connection block to realize the agglomeration function A.
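As a structural illustration, the recursion of Eqs.(1)-(2) can be sketched in plain Python; `agglomerate` is a placeholder for the A-block of Section 3.2, and carrying the deepest map over unchanged mirrors the shared (memory-identical) blobs in Figure 1. This is a sketch of the data flow only, not the paper's implementation.

```python
def build_hierarchy(level1_maps, agglomerate, num_levels):
    """Build feature hierarchies per Eq.(1)-(2).

    level1_maps: [Phi_1^(1), ..., Phi_K^(1)] produced by the backbone (Eq.(1)).
    Hierarchy m fuses adjacent maps (i, i+1) of hierarchy m-1 via `agglomerate`.
    """
    hierarchy = [list(level1_maps)]
    for m in range(1, num_levels):
        prev = hierarchy[-1]
        nxt = list(prev)  # deepest map has no deeper neighbour: carried over
        for i in range(len(prev) - 1):
            # Phi_i^(m) = A(Phi_i^(m-1), Phi_{i+1}^(m-1))
            nxt[i] = agglomerate(prev[i], prev[i + 1])
        hierarchy.append(nxt)
    return hierarchy
```

Running it with three placeholder maps and a symbolic `agglomerate` makes the growth of receptive context across hierarchies explicit.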

Figure 2: The Agglomeration Connection block (A-block), where the left diagram is a context-aware extraction module.

3.2 Agglomeration Connection

Figure 2 illustrates the proposed Agglomeration Connection building block, called the “A-block” for short. It takes two input feature maps: a shallower feature map and a deeper one. First, we note that feature maps in the shallower layers generally lack semantics. We thus apply an inception-like module to the shallower feature to enhance its representation and, at the same time, change the number of output channels to a fixed value. Specifically, as shown in the left diagram of Figure 2, the shallow feature enhancement block uses 4 branches with 4 kinds of filters of different sizes, and each branch contributes a fixed proportion of the output channels. To keep this module efficient, we first apply a 1×1 convolution layer to reduce the channel dimension, so our shallow feature enhancement module uses fewer parameters than directly applying the larger filters.

For the deeper feature map, we first reduce its dimensionality with a 1×1 convolution layer, shrinking its channel size to a fraction of that of the enhanced shallower feature, and then apply bilinear upsampling to match the spatial size of the shallower map. The final feature of the Agglomeration Connection block is obtained by concatenating these two features, followed by a convolution smoothing layer.
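The fuse step just described can be sketched with plain NumPy; the 1×1 convolutions are expressed as channel-mixing einsums, the inception-like enhancement is omitted, and nearest-neighbour repetition stands in for the paper's bilinear upsampling. All shapes and weights below are illustrative assumptions.

```python
import numpy as np

def a_block_fuse(shallow, deep, w_reduce, w_smooth):
    """Fuse a shallow map (Cs, H, W) with a deeper map (Cd, H//2, W//2)."""
    reduced = np.einsum('oc,chw->ohw', w_reduce, deep)  # 1x1 conv: shrink channels
    up = reduced.repeat(2, axis=1).repeat(2, axis=2)    # upsample to shallow's size
    fused = np.concatenate([shallow, up], axis=0)       # channel-wise concatenation
    return np.einsum('oc,chw->ohw', w_smooth, fused)    # smoothing conv (1x1 here)
```

With all-ones inputs and weights, the output spatial size follows the shallow map and the channel count follows the smoothing weights, which is the shape contract the A-block must satisfy.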

3.3 Final Detector with Detailed Configurations

The final detection exploits the last (m = 3) hierarchy of feature maps, consisting of a total of six detection layers {Φ_1^{(3)}, Φ_2^{(3)}, …, Φ_6^{(3)}}. The final detection result can be expressed as follows:

R = Detect(Φ_1^{(3)}, Φ_2^{(3)}, …, Φ_6^{(3)})    (3)

where Detect denotes the final detection process, including bounding box regression and class prediction followed by Non-Maximum Suppression (NMS) to obtain the final detection results. As shown in Figure 1, a red dotted line denotes that the two connected blobs share memory, i.e., the deepest feature maps are identical across hierarchies.
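The NMS step inside the detection process can be sketched as standard greedy suppression in NumPy; the 0.5 IoU threshold below is an assumed typical value, not one specified in the paper.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop heavy overlaps, repeat.
    boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,)."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        lt = np.maximum(boxes[i, :2], boxes[rest, :2])
        rb = np.minimum(boxes[i, 2:], boxes[rest, 2:])
        wh = np.clip(rb - lt, 0, None)
        inter = wh[:, 0] * wh[:, 1]
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress boxes overlapping the kept one
    return keep
```

For example, two heavily overlapping boxes collapse to the higher-scoring one while a distant box survives.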

We now discuss the detailed configuration of our proposed 3-level FANet single-stage face detector. In Figure 1, the six detection layers have strides of 4, 8, 16, 32, 64 and 128 pixels, respectively. Following the settings of [28], each of the six feature maps is associated with anchors of one specific scale and aspect ratio 1:1 to detect faces of the corresponding scale. Since shallow features with high resolution play a key role in detecting small faces, while deep features already carry sufficient semantics, we build our FANet structure by agglomerating only the shallower feature maps and leaving the deepest ones unchanged. We found that this does not hurt the performance while reducing the model complexity.

For anchor-based detectors, we need to label each anchor as positive or negative according to the ground-truth bounding boxes. We adopt the following matching strategy: (i) for each face, the anchor with the best Jaccard overlap is matched; and (ii) each anchor is matched to any face with which it has a Jaccard overlap larger than a threshold (0.35, following [28]).
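The two matching rules can be sketched in NumPy as follows; the 0.35 threshold mirrors the S3FD default, and the box layout [x1, y1, x2, y2] is an assumption of this sketch.

```python
import numpy as np

def jaccard(a, b):
    """Pairwise IoU between box sets a (F, 4) and b (A, 4)."""
    lt = np.maximum(a[:, None, :2], b[None, :, :2])
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def match_anchors(anchors, faces, thresh=0.35):
    """Return, per anchor, the matched face index or -1 (negative).
    Rule (ii): anchors overlapping some face above `thresh` become positive.
    Rule (i): every face additionally claims its single best-overlap anchor."""
    iou = jaccard(faces, anchors)                    # (F, A)
    match = np.full(anchors.shape[0], -1, dtype=int)
    best_face = iou.argmax(axis=0)
    best_iou = iou.max(axis=0)
    match[best_iou > thresh] = best_face[best_iou > thresh]
    match[iou.argmax(axis=1)] = np.arange(faces.shape[0])
    return match
```

Rule (i) is applied last so that every face keeps at least one positive anchor even when all its overlaps fall below the threshold.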

Remark. The insight behind the hierarchical agglomeration design is that in vanilla SSD, the shallower features, which are important for detecting small faces, are semantically weak. The A-block hierarchically aggregates semantic information from deeper layers to form a stronger set of hierarchical multi-scale feature maps. The channel ratio of the deeper to the shallower feature in one A-block is kept small, which ensures that the deeper feature mainly plays the role of providing extra contextual cues. Besides, we note that the receptive field largely impacts the performance of detecting small faces: as shown in [8], either a too large or a too small receptive field can hurt the performance. The advantage of our structure is that the A-block only incorporates semantics from one deeper layer, so we can easily control the receptive field of each feature map through our hierarchical design. This is in contrast to FPN [11], where a feature map incorporates information from all deeper layers.

3.4 Hierarchical Loss

In order to train the proposed FANet effectively, we propose a new loss function, called hierarchical loss, defined on the proposed FANet structure. The key idea is to define a loss that accounts for all hierarchies of feature maps while still allowing the entire network to be trained effectively in an end-to-end manner; see Figure 1 for more details. To this end, we propose the hierarchical loss as follows:

L_HL = Σ_{m=1}^{M} λ_m L^{(m)}    (4)

where λ_m is a weight parameter for the loss of the m-th hierarchy, and L^{(m)} accounts for the loss on the m-th hierarchy, which is the SSD [12] multibox loss:

L^{(m)} = (1/N) (L_conf(x, c) + α L_loc(x, l, g))    (5)

where N is the number of matched anchors, L_conf is the classification loss, L_loc is the localization loss over the matched anchors, and α balances the two terms.
Using the hierarchical loss, we can train the FANet detector end-to-end. Specifically, during training, all the losses are simultaneously computed, and the gradients are back propagated to each hierarchy of feature maps, respectively.
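In code, the aggregation of Eq.(4) is just a weighted sum of per-hierarchy losses. This minimal sketch uses uniform weights by default, matching the simple uniform choice of λ_m described in Section 3.5; each entry of `per_level_losses` stands for an already-computed multibox loss L^(m).

```python
def hierarchical_loss(per_level_losses, weights=None):
    """L_HL = sum_m lambda_m * L^(m), with uniform lambda_m by default."""
    if weights is None:
        weights = [1.0] * len(per_level_losses)  # uniform weighting
    return sum(w * l for w, l in zip(weights, per_level_losses))
```

In a training loop, each per-level loss would be computed from that hierarchy's detection heads before the weighted sum is backpropagated.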

In contrast to a standard single loss, the proposed hierarchical loss enjoys some key advantages. On one hand, the hierarchical loss plays a crucial role in training the FANet model robustly and effectively. FANet has many newly added parameters beyond vanilla SSD, which are not easy to optimize directly with the single loss used in vanilla SSD training. With multiple hierarchies, the hierarchical loss guides a better training process that gradually increases the representational power of the feature maps, allowing us to supervise the training hierarchically and obtain more robust features. On the other hand, compared with a standard single loss, the hierarchical loss incurs no extra computation cost during inference once the model has been trained.

3.5 Other Training Strategies

In this section, we introduce our data augmentation, hard negative mining and other implementation details.

Data augmentation. We use similar data augmentation strategies as in SSD [12], such as random flipping, color distortion and expansion. Besides, following the setting of S3FD [28], instead of directly resizing the whole image to a square patch, we first crop a square patch from the original image whose scale is a random fraction of the short side of the original image. After random cropping, the final patch is resized to the 640×640 training input size.

Hard negative mining. After anchor matching, most anchors are assigned as negatives, which results in a significant imbalance between positive and negative samples. We use an online hard negative mining strategy [12] during training, keeping the ratio of negative to positive anchors at most 3:1.
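The hard negative mining step can be sketched as selecting the highest-loss negatives up to the 3:1 cap; `neg_losses` is a hypothetical array of per-negative-anchor classification losses, not a quantity named in the paper.

```python
import numpy as np

def select_hard_negatives(neg_losses, num_pos, ratio=3):
    """Indices of the top-loss negatives, capped at ratio * num_pos."""
    k = min(len(neg_losses), ratio * num_pos)
    return np.argsort(neg_losses)[::-1][:k]  # hardest (highest-loss) first
```

Only the selected negatives (plus all positives) would then contribute to the classification loss.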

Other implementation details. We choose α in Eq.(5) and λ_m in Eq.(4) to be uniform for simplicity. Training starts from fine-tuning the VGG16 backbone network using SGD with a momentum of 0.9, a weight decay of 0.0005, and a fixed total batch size on two GPUs. The newly added layers are initialized with the “xavier” method. We train our FANet with a step learning-rate schedule, decaying the learning rate twice during training. Our implementation is based on PyTorch [15], and our source code will be made publicly available.

4 Experiments

In this section, we conduct extensive experiments and ablation studies to evaluate the effectiveness of the proposed FANet framework from two aspects. First, we examine the impact of several key components, including the Agglomeration Connection module, the layer-wise Hierarchical Loss, and other techniques used in our solution. Second, we compare the proposed FANet face detector with state-of-the-art face detectors on popular face detection benchmarks and evaluate its inference speed.

4.1 Results on WIDER FACE Datasets

Dataset. We conduct our model analysis on the WIDER FACE dataset [24], which has 32,203 images with about 400k faces spanning a large range of scales. It consists of three subsets: 40% for training, 10% for validation, and 50% for testing. The annotations of the training and validation sets are available online. According to the difficulty of the detection task, it has three splits: Easy, Medium and Hard. The evaluation metric is mean average precision (mAP) with an Intersection-over-Union (IoU) threshold of 0.5. We train FANet on the WIDER FACE training set and evaluate it on the validation and testing sets. Unless otherwise specified, the results in Tables 1 and 2 are obtained by single-scale testing, in which the shorter side of the image is resized to a fixed value while keeping the image aspect ratio.

Method            Easy  Medium  Hard
vanilla S3FD      94.5  92.9    82.8
w/ FPN            94.3  93.1    83.8
w/ 2-level        94.8  93.6    84.4
w/ 2-level w/ HL  94.8  93.6    85.3
Table 1: Evaluation of our FANet for learning discriminative features, in contrast to vanilla S3FD and a simple FPN. For a fair comparison, we only use a 2-level hierarchy without the context module.

Baseline. We adopt the closely related detector S3FD [28] as the baseline to validate the effectiveness of our technique. S3FD achieved the state-of-the-art results on several well-known face detection benchmarks. It inherited the standard SSD framework with carefully designed scale-aware anchors according to effective receptive fields. We follow the same experimental setup in S3FD.

Agglomeration Connection. We validate the contribution of Agglomeration Connection module with different hierarchies.

Configuration  vanilla  2-level  2-level w/ HL  3-level  3-level w/ HL  w/ context  multi-scale
mAP (Easy)     94.5     94.8     94.8           94.7     94.8           95.0        94.8
mAP (Medium)   92.9     93.6     93.6           93.3     93.6           93.9        93.6
mAP (Hard)     82.8     84.4     85.3           84.8     85.7           86.7        88.4
Table 2: Evaluation results of FANet (ours) on the WIDER FACE validation set. Single-scale testing resizes the shorter side of the image to a fixed value while keeping the image aspect ratio; we use this to compare the performance with different input sizes, and the last column uses multi-scale testing.

First, we validate the efficacy of our hierarchical structure design without HL and without the inception-like context extraction module. In Table 2, as the number of hierarchies increases, the performance of FANet improves significantly, especially on the hard cases that our design mainly targets. Specifically, the performance gains on the hard task are +1.6% and +2.0% for 2-level and 3-level, respectively. With the context module, the performance further improves, by +1.0% on the hard task. Next we show that the Hierarchical Loss is necessary for guiding a better and more effective training with higher-level Agglomeration Connections.

Hierarchical Loss. In this part, we compare the performance of our FANet optimized with and without the Hierarchical Loss. In Table 2, both the 2-level and 3-level FANet with Hierarchical Loss gain significant improvements over their single-loss counterparts on the hard split (+0.9% for 2-level and +0.9% for 3-level). Besides, 3-level FANet consistently outperforms 2-level FANet, which indicates that a high-level Agglomeration Connection is crucial for improving detection accuracy when optimized with the Hierarchical Loss.

Robust Feature Learning. We build “S3FD w/ FPN” based on S3FD with skip connections in a top-down structure as in FPN [11]. In Table 1, compared with S3FD, “S3FD w/ FPN” gains +1.0% on the hard split, which validates the efficacy of a feature pyramid for improving feature representation. Our 2-level FANet with HL outperforms “S3FD w/ FPN” by a large margin, +1.5% on the hard task, which demonstrates the superiority of our Agglomeration Connection over the skip connection.

Multi-scale Inference. Multi-scale testing is a widely used technique in object detection which can further boost detection accuracy, especially for small objects. Table 2 shows that a larger input size significantly improves performance on the hard task. The final multi-scale testing result of our FANet is shown in the last column.
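Multi-scale testing can be sketched as running the single-scale detector at several resized inputs and mapping the boxes back before a final merging NMS (omitted here); `detect_fn` and the scale set are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def multi_scale_detect(image, detect_fn, scales=(0.5, 1.0, 2.0)):
    """Pool detections from several input scales in original coordinates."""
    all_boxes, all_scores = [], []
    for s in scales:
        boxes, scores = detect_fn(image, s)                   # boxes in scaled frame
        all_boxes.append(np.asarray(boxes, dtype=float) / s)  # map back to original
        all_scores.append(np.asarray(scores, dtype=float))
    return np.concatenate(all_boxes), np.concatenate(all_scores)
```

A final NMS over the pooled boxes would then produce the merged multi-scale result.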

Algorithms Backbone Easy Med Hard
MTCNN [27] - 84.8 82.5 59.8
LDCF+ [14] - 79.0 76.9 52.2
ScaleFace [25] ResNet50 86.8 86.7 77.2
HR [8] ResNet101 92.5 91.0 80.6
Face R-FCN [22] ResNet101 94.7 93.5 87.4
Zhu[29] ResNet101 94.9 93.3 86.1
CMS-RCNN [30] VGG16 89.9 87.4 62.4
MSCNN [2] VGG16 91.6 90.3 80.2
Face R-CNN [21] VGG19 93.7 92.1 83.1
SSH [13] VGG16 93.1 92.1 84.5
S3FD[28] VGG16 93.7 92.5 85.9
FANet(ours) VGG16
Table 3: Evaluation on WIDER FACE validation set (mAP).

Comparisons with the State of the Art. Our final FANet model is trained with 3-level HL. Figures 4 and 5 show the precision-recall curves on the WIDER FACE validation and test sets, and Table 3 summarizes the state-of-the-art results on both sets.

Our FANet achieves state-of-the-art results among all detectors. WIDER FACE is a very challenging face benchmark, and these results strongly demonstrate the effectiveness of FANet in handling large scale variance, especially for small faces.

4.2 Evaluation on Other Public Face Benchmarks.

In addition to the WIDER FACE dataset, we also examine the generalization performance of the proposed face detector on other datasets. We thus test the pre-trained FANet face detector on two other popular face detection benchmarks.

FDDB. The Face Detection Data Set and Benchmark (FDDB) [9] is a well-known face detection benchmark with 5,171 faces in 2,845 images. We compare our FANet detector, trained on the WIDER FACE training set, with other published results on FDDB. Figure 6 shows the evaluation results, in which our FANet detector achieves state-of-the-art performance on both the discrete and continuous ROC curves.

PASCAL FACE. This dataset was collected from the PASCAL person layout test set [4], with 1,335 labeled faces in 851 images. Figure 3 shows the precision-recall curves. Among all existing methods, the proposed FANet achieves the best mAP (98.78%), which outperforms the previous state-of-the-art detector S3FD (98.45%) and significantly beats the other submitted methods [23, 3].

Figure 3: Benchmark evaluation on PASCAL FACE dataset.

4.3 Inference Speed

Our FANet detector is a single-stage detector and thus enjoys high inference speed. It runs at a real-time inference speed of 35.6 FPS for VGA-resolution input images on a computing environment with an NVIDIA GTX 1080 Ti GPU and CuDNN-v6.

Figure 4: Evaluation of various state-of-the-art methods on the validation set of WIDER FACE. Panels: (a) Easy, (b) Medium, (c) Hard.
Figure 5: Evaluation of various state-of-the-art methods on the test set of WIDER FACE. Panels: (a) Easy, (b) Medium, (c) Hard.
(a) FDDB Discrete ROC Curves
(b) FDDB Continuous ROC Curves
Figure 6: Evaluation on the FDDB face detection benchmark (recall at 2,000 false positives).

5 Conclusion

This paper proposed a novel framework of “Feature Agglomeration Networks” (FANet) for building single-stage face detectors. The proposed FANet based face detector achieves state-of-the-art performance on several well-known face detection benchmarks, yet still enjoys real-time inference speed on GPU due to the nature of the single-stage detection framework. FANet introduces two key novel components: (i) the “Agglomeration Connection” module for context-aware feature enhancement and hierarchical multi-scale feature agglomeration, which effectively handles scale variance in face detection; and (ii) the Hierarchical Loss, which guides a more stable and better training in an end-to-end manner. We note that the general idea of Feature Agglomeration Networks is perhaps not restricted to face detection and might also benefit other types of object detection. For future work, we will explore extending the FANet framework to other object detection tasks in computer vision, including generic object detection and specialized detection tasks in other domains, such as pedestrian detection and vehicle detection.


The authors would like to acknowledge the assistance and collaboration from colleagues of DeepIR Inc.


  • [1] S. Bell, C. Lawrence Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [2] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos. A unified multi-scale deep convolutional neural network for fast object detection. In European Conference on Computer Vision, 2016.
  • [3] D. Chen, G. Hua, F. Wen, and J. Sun. Supervised transformer network for efficient face detection. In European Conference on Computer Vision, 2016.
  • [4] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. Intl. Journal of Computer Vision (IJCV), 2010.
  • [5] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017.
  • [6] R. Girshick. Fast r-cnn. In The IEEE International Conference on Computer Vision (ICCV), 2015.
  • [7] Z. Hao, Y. Liu, H. Qin, J. Yan, X. Li, and X. Hu. Scale-aware face detection. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [8] P. Hu and D. Ramanan. Finding tiny faces. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [9] V. Jain and E. Learned-Miller. Fddb: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
  • [10] T. Kong, A. Yao, Y. Chen, and F. Sun. Hypernet: Towards accurate region proposal generation and joint object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [11] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [12] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, 2016.
  • [13] M. Najibi, P. Samangouei, R. Chellappa, and L. Davis. Ssh: Single stage headless face detector. In The IEEE International Conference on Computer Vision (ICCV), 2017.
  • [14] E. Ohn-Bar and M. M. Trivedi. To boost or not to boost? on the limits of boosted trees for object detection. In Pattern Recognition (ICPR), 2016.
  • [15] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
  • [16] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [17] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015.
  • [18] X. Sun, P. Wu, and S. C. Hoi. Face detection using deep learning: An improved faster rcnn approach. Neurocomputing, 299:42–50, 2018.
  • [19] T. Kong, F. Sun, A. Yao, H. Liu, M. Lu, and Y. Chen. Ron: Reverse connection with objectness prior networks for object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [20] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 2004.
  • [21] H. Wang, Z. Li, X. Ji, and Y. Wang. Face r-cnn. arXiv preprint arXiv:1706.01061, 2017.
  • [22] Y. Wang, X. Ji, Z. Zhou, H. Wang, and Z. Li. Detecting faces using region-based fully convolutional networks. CoRR, abs/1709.05256, 2017.
  • [23] S. Yang, P. Luo, C.-C. Loy, and X. Tang. From facial parts responses to face detection: A deep learning approach. In IEEE Intl. Conference on Computer Vision (ICCV), 2015.
  • [24] S. Yang, P. Luo, C.-C. Loy, and X. Tang. Wider face: A face detection benchmark. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [25] S. Yang, Y. Xiong, C. C. Loy, and X. Tang. Face detection through scale-friendly deep convolutional networks. CoRR, abs/1706.02863, 2017.
  • [26] S. Zagoruyko, A. Lerer, T.-Y. Lin, P. O. Pinheiro, S. Gross, S. Chintala, and P. Dollár. A multipath network for object detection. In BMVC, 2016.
  • [27] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multi-task cascaded convolutional networks. IEEE Signal Processing Letters, 2016.
  • [28] S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li. S3fd: Single shot scale-invariant face detector. In IEEE Intl. Conference on Computer Vision (ICCV), 2017.
  • [29] C. Zhu, R. Tao, K. Luu, and M. Savvides. Seeing small faces from robust anchor’s perspective.
  • [30] C. Zhu, Y. Zheng, K. Luu, and M. Savvides. Cms-rcnn: Contextual multi-scale region-based cnn for unconstrained face detection. In Deep Learning for Biometrics. 2017.


In this appendix, we first show the precision-recall curves of our ablation experiments (Figure 8). Then the speed-accuracy tradeoff and the model-size-accuracy tradeoff of our FANet are analyzed. Finally, we demonstrate some qualitative results of our detector. In Figure 7, the results are obtained on the hard task of the WIDER FACE [24] validation set (Figure 8(c)) by single-scale testing, in which the shorter side of the image is resized to a fixed value while keeping the image aspect ratio. We test under a computing environment with an NVIDIA GTX 1080 Ti GPU and CuDNN-v6.

6 Precision-recall curves

Figure 8 shows the precision-recall curves of our ablation experiments.

7 Speed Accuracy Tradeoff

In this section, we compare the speed-accuracy tradeoff between the vanilla S3FD [28] structure, FPN [11] and our FANet. As shown in Figure 7(a), our 2-level FANet offers a good tradeoff between speed and accuracy. Specifically, it achieves comparable speed to the FPN structure (45.4 fps vs. 45.9 fps), while obtaining a much better result than FPN (85.3 vs. 83.8). Our final FANet improves +3.9% over vanilla S3FD while still reaching real-time speed.

8 Model-Size Accuracy Tradeoff

In this section, we show the model-size-accuracy tradeoff of our FANet. As shown in Figure 7(b), our 2-level structure achieves better results than FPN while having fewer parameters. Compared with vanilla S3FD, our 2-level FANet has only 11% more parameters while reaching much better results (85.3 vs. 82.8). The final FANet obtains significantly improved results (+3.9%) with a tolerable increase in parameters.

(a) Speed accuracy tradeoff.
(b) ModelSize accuracy tradeoff.
Figure 7: Model performance analysis. The 1-level structure is the same as S3FD, and 3-level w/ context is our final FANet.

9 Qualitative Analysis

In this section, we demonstrate some qualitative results of our FANet, including the World’s Largest Selfie (Figure 9), in which our FANet finds 858 faces out of the 1000 faces reportedly present; results on the FDDB [9] dataset (Figure 10); and results under various conditions (Figure 11). With the hierarchical loss training scheme, our FANet training is very stable: the variance across runs is within 0.2%.

(a) Easy
(b) Medium
(c) Hard
Figure 8: Precision-recall curves of ablation experiments on the validation set of WIDER Face
Figure 9: Example of face detection with the proposed method. In the above image, the proposed method can find 858 faces out of 1000 facial images present. The detection confidence scores are also given by the color bar as shown on the right. Best viewed in color.
Figure 10: Qualitative results on FDDB. Our model is robust to occlusion and scale variance
Figure 11: Qualitative results of our FANet. Our model is robust to blur, occlusion, pose, expression, makeup, illumination, etc.