Speed and accuracy are the two main directions pursued by current object detection systems. Fast and accurate detection systems would make autonomous cars safer, enable computers to understand scene information dynamically, and help robots act more intelligently.
The community has strived to improve both the speed and the accuracy of detectors. Most recent state-of-the-art detection systems are based on deep convolutional neural networks. The basic pipeline of these modern detectors can be summarized as: generate bounding box proposals, extract features for each proposal, and apply a high-quality classifier. To obtain higher accuracy, better pretrained models, improved region proposal methods, context information, and novel training strategies can be utilized. However, these methods often suffer from high computational costs, e.g., tens of thousands of region proposals are required to obtain high accuracy. On the other hand, there are a few works focusing on building faster detectors by reworking stages designed for traditional systems. YOLO replaces the general region proposal procedure by generating bounding boxes from regular grids. The Single Shot MultiBox Detector (SSD) makes several improvements on existing approaches; its core is to compute category scores and box offsets at a fixed set of bounding boxes using small, separate convolution kernels. Although these approaches speed up the detection process, their accuracy is still lower than that of slow but accurate detectors. In fact, as shown in Table 1, accuracy drops as speed increases.
We humans are able to adaptively trade off between detection speed and recognition accuracy. When you step into the kitchen, it might seem easy to find the cabinet in a few milliseconds, but it will surely take you longer to locate the toaster. However, most modern detection models "look at" different input images in the same way; specifically, the time cost is nearly the same across different images. For example, regardless of the number of persons in the foreground, the region proposal network in Faster R-CNN generates tens of thousands of proposals, which decreases the processing speed even for images containing only a few or no people.
In this paper, we propose to adaptively process different test images using different detection models. We utilize two detectors: one fast but inaccurate, and one accurate but slow. We first decide whether an input image is "easy" (suitable for the fast detector) or "hard" (for which the accurate detector is desirable), such that the test image can be adaptively fed to the appropriate detector. We want the combined detector to be nearly as fast as the fast detector while maintaining the accuracy of the accurate one.
To make this promising tradeoff, we propose a novel technique, adaptive feeding (AF), to efficiently extract features that are useful for this purpose and to learn a classifier that is simple and fast. Specifically, we build a cascade of object detectors, in which an extremely fast detector is first used to generate a few instance proposals, based on which the AF classifier is able to adaptively choose either the fast or the accurate model to finish the detection task. Experiments (including timing and accuracy analyses) on several detector pairs and datasets show three benefits of our AF pipeline:
The AF detector runs much faster than the accurate model (in many cases its speed is comparable to the fast model's). Meanwhile, the accuracy of AF is much higher than that of the fast model (in many cases close or comparable to the accurate model's). Hence, by combining a fast (but inaccurate) and an accurate (but slow) model, we simultaneously achieve fast and accurate detection in AF.
AF can directly utilize existing models, even models with different architectures, and requires no additional training data.
AF employs an imbalanced learning framework to distinguish easy from hard images, in which it is easy to adjust the tradeoff between the speed and accuracy of the combined system.
2 Related Work
Object detection is one of the most fundamental challenges in computer vision, which generally consists of feature extraction at various locations (grids or proposals) and classification or bounding box regression. Prior to Fast R-CNN, these two steps were usually optimized separately. Fast R-CNN employed an end-to-end learning approach to optimize the whole detector, and Faster R-CNN further incorporated the proposal generation process into learning. Unlike these methods, we focus on the utilization of pretrained models. In this section, we review existing methods, in particular those trying to accelerate detection.
Detection systems. The deformable parts model (DPM) is a classic object detection method based on mixtures of multiscale deformable part models , which can capture significant variations in object appearances. It is trained using a discriminative procedure that only requires bounding boxes for the objects. DPM uses disjoint steps and histogram of gradients features , which is not as competitive as ConvNet-based approaches.
R-CNN started another revolution in object detection after DPM. It is among the first methods to employ deep features in detection systems, and obtained significant improvements over existing detectors at its time. However, the resulting system is very slow because features are extracted from every object proposal. Compared with R-CNN, Fast R-CNN not only trained the very deep VGG16 network but also uses an RoI pooling layer to perform feature extraction, and was 200× faster at test time. After that, to speed up the proposal generation process, Faster R-CNN proposed the region proposal network (RPN) to generate bounding box proposals, and thus achieves improvements in both speed and accuracy. Recently, ResNet has begun to replace the VGG net in some detection systems, such as Faster R-CNN and R-FCN. However, state-of-the-art accurate detectors are in general significantly slower than real-time.
Fast detectors. Accelerating the test process is a hot research topic in object detection. Because rich object categories have many variations, little research focused on speed optimization prior to DPM; fast detectors mainly targeted a specific object class, such as faces or humans [34, 29]. After DPM was invented, many DPM-based detectors [24, 30, 6] focused on optimizing different parts of the pipeline. Dean et al. exploited a locality-sensitive hashing method which achieves a mean average precision of 0.16 over the full set of 100,000 object classes. Yan et al. accelerated three prohibitive steps in the cascade version of DPM, achieving a 0.29-second average detection time on PASCAL VOC 2007 while maintaining nearly the same accuracy as DPM. Sadeghi and Forsyth reimplemented the deformable parts model and achieved a near real-time version.
In recent years, after R-CNN's invention, many works have sped up the detection pipeline by introducing new functional layers into deep models. However, it is not until recently that we began to approach real-time detection. YOLO framed object detection as a regression problem over spatially separated bounding boxes and associated class probabilities, and proposed a unified architecture which is extremely fast. The SSD models put separate convolution kernels in charge of default proposal boxes of different sizes and aspect ratios. YOLO and SSD share two ideas: a) they decrease the number of default bounding box proposals; b) they employ a unified network that incorporates the different stages into the same framework. These fast detectors are, however, less accurate than slow but accurate models such as R-FCN (Table 1).
Recently, there has also been research utilizing cascades and boosting. Shrivastava et al. made the traditional boosting algorithm available for deep networks, achieving higher accuracy while maintaining the same detection speed. Similar to ours, the approach of Angelova et al. is based on sliding windows and processes different image regions independently. However, recent deep detectors use fully-convolutional networks and take the whole image as input; in contrast, our adaptive feeding method makes its choice at the image level (not the region level) and thus saves a lot of time.
The proposed adaptive feeding method follows a different route: it seeks a combination of two detection systems with different strengths. In this paper, we tackle the common situation in which one detector is fast and the other is slow but more accurate. Our approach resembles ensemble methods in that both rely on the diversity of different models. However, ensemble methods often suffer from enormous computation and are difficult to run in real-time, while our method approaches the accuracy of the accurate detector and maintains the speed advantage of the fast one.
3 Motivation: “Easy” vs. “Hard” Images
Given a fast and an accurate detector, the motivation and foundation of adaptive feeding is the following empirical observation: although on average the accurate model has higher accuracy than the fast model, on most images the fast model is as precise as the accurate one (and in a few cases it is even better). For convenience, we use "easy" to denote those images for which the fast detector is as precise as the accurate one, and call the rest "hard". Furthermore, when combining these detectors, we call the fast model the basic model and the more accurate detector the partner model. In Figure 1, examples are shown of cases where the basic model is better than, the same as, or worse than the partner model.
In order to create groundtruth labels for the easy vs. hard distinction, we apply the mean average precision (mAP) detection evaluation metric to a single image. In the PASCAL VOC Challenge [9, 8], the interpolated average precision (AP) is used to evaluate detection results. A detection system submits a bounding box for each detection, with a confidence level and a predicted class for each bounding box. For a given class, the precision/recall curve is computed from a method's ranked output based on the confidence scores. The AP metric summarizes the shape of the precision/recall curve, and mAP (mean average precision) further averages the AP over all classes. In PASCAL VOC and MS COCO, mAP is calculated by the formula

mAP = (1/C) Σ_{c=1}^{C} AP_c,   (1)

where C is the number of classes in the dataset.
However, our evaluation target is a single image. Hence, we apply Equation 1 but restrict it to one image I, as

mAPI = (1/C_I) Σ_{c=1}^{C_I} AP_c^I,   (2)

where C_I is the number of classes present in image I, AP_c^I is the average precision for class c in this image, and mAPI represents the mean average precision in this image. In the rest of this paper, we call Equation 2 mAPI, which stands for mean Average Precision per Image.
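As an illustration, the per-image metric can be computed from ranked detections in a few lines. This is a minimal sketch (function names are ours, and ties and the exact interpolation scheme of the official evaluation toolkits are simplified):

```python
def average_precision(dets, n_gt):
    """Interpolated AP for one class in one image.

    dets: (confidence, is_true_positive) pairs, one per detection of the class.
    n_gt: number of ground-truth boxes of this class in the image.
    """
    dets = sorted(dets, key=lambda d: -d[0])  # rank by confidence
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in dets:
        tp += 1 if is_tp else 0
        fp += 0 if is_tp else 1
        recalls.append(tp / n_gt)
        precisions.append(tp / (tp + fp))
    # precision envelope: make precision monotonically non-increasing
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p  # area under the interpolated PR curve
        prev_r = r
    return ap

def mapi(dets_per_class, gt_per_class):
    """Equation 2: mean AP over the classes present in a single image."""
    present = [c for c, n in gt_per_class.items() if n > 0]
    return sum(average_precision(dets_per_class.get(c, []), gt_per_class[c])
               for c in present) / len(present)
```

For example, an image with one cat, detected once correctly at confidence 0.9, has mAPI 1.0; if a higher-confidence false positive precedes that true positive, the mAPI drops to 0.5.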
Given two models M1 and M2, we assume M1 is more accurate than M2, but M2 runs much faster than M1. We evaluate both models on a set of n images, which returns mAPI scores s_i^1 and s_i^2, where s_i^j is the mAPI for model M_j (j ∈ {1, 2}) on image i (1 ≤ i ≤ n). We then split the images into two parts, the easy and the hard, according to the sign of the difference d_i = s_i^1 − s_i^2: if d_i > 0 (i.e., if the accurate model has a larger mAPI on image i than the fast one), this image is a "hard" one; if d_i ≤ 0 (i.e., if the fast model performs as well as or better than the accurate detector), this image is an "easy" one.
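The splitting rule above amounts to one comparison per image; a sketch (with our own naming) over lists of per-image mAPI scores:

```python
def label_easy_hard(mapi_accurate, mapi_fast):
    """Return +1 ("hard") when the accurate model has strictly larger mAPI
    on an image, and -1 ("easy") otherwise."""
    return [1 if a > f else -1 for a, f in zip(mapi_accurate, mapi_fast)]
```

Note that ties (including images where both models score zero) are labeled easy, since the fast model is then no worse than the accurate one.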
We can now collect statistics about easy vs. hard images with the groundtruth labels defined, as shown in Table 2 for different setups. In Table 2, SSD300 and SSD500 are the SSD models applied to different input image sizes (300×300 and 500×500, respectively). Results in Table 2 show that most (around 80%) images are easy.
That is, for the large proportion (80% or so) of easy images, we can use the fast model for detection, obtaining both fast speed and accurate results; for the remaining small portion (20% or so) of hard images, we apply the slow but accurate detector to maintain high accuracy. Since the percentage of hard examples is small, they will not significantly reduce the overall detection speed.
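A back-of-envelope calculation shows why this split pays off. With a perfect easy/hard classifier, the expected per-image time is the proposal-generator cost plus a mixture of the two detector costs. The timings below are illustrative placeholders, not the paper's measurements:

```python
def expected_fps(p_easy, t_gen, t_fast, t_slow):
    """Expected throughput of the cascade, assuming a perfect easy/hard
    classifier: every image pays t_gen, easy images then pay t_fast and
    hard images pay t_slow (all times in seconds)."""
    return 1.0 / (t_gen + p_easy * t_fast + (1.0 - p_easy) * t_slow)

# Illustrative numbers: 5 ms generator, 22 ms fast detector, 110 ms slow
# detector, 80% easy images -> roughly 22 fps, versus about 9 fps for
# running the slow model on every image.
fps = expected_fps(0.8, 0.005, 0.022, 0.110)
```

The formula also makes clear why a small hard fraction matters: the slow detector's cost enters only with weight (1 − p_easy).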
4 Adaptive Feeding
The proposed adaptive feeding (AF) is straightforward if we know how to separate easy images from hard ones. Figure 2 shows the framework, which mainly contains three parts: instance proposals, a binary classifier, and a pair of detectors. In the first step, an instance generator is employed to generate instance proposals, based on which the binary classifier decides whether to feed the test image to the basic or to the partner model. In this section, we propose techniques for making this decision.
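The decision logic of Figure 2 can be sketched as follows; all five callables stand in for real models (the featurizer is the subject of Section 4.1), so the names here are ours:

```python
def adaptive_feeding(image, generator, featurize, classifier, basic, partner):
    """One AF forward pass: generate instance proposals, classify the image
    as easy or hard, then run exactly one of the two detectors."""
    proposals = generator(image)   # a handful of (label, score, box) triples
    x = featurize(proposals)       # fixed-length feature vector (Sec. 4.1)
    if classifier(x) > 0:          # positive score = "hard" image
        return partner(image)      # accurate but slow detector
    return basic(image)            # fast but less accurate detector
```

The key property is that only one of the two heavyweight detectors runs per image; the generator and classifier are the only costs paid unconditionally.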
4.1 Instance Proposals: Features for easy vs. hard
Since the label of "easy" or "hard" is determined by the detection results, we argue that instances in the image should play a major role. This encourages us to put an instance generator at the first stage of AF (Figure 2) to extract features for easy vs. hard classification. To obtain both high speed and accuracy in the subsequent classification, we require that these proposals carry predicted class labels, which provide detailed information, and that just a few of them can describe the whole image well. As a result, an extremely fast detector with reasonable accuracy is the natural choice. In this paper, we use Tiny YOLO, an improved version of Fast YOLO, as our instance generator. Tiny YOLO takes less than 5ms to process one image on a modern GPU and achieves 56.4% mAP on VOC07, which makes the features powerful and fast to extract.
The proposals generated by the instance generator with top confidence values contain a lot of information about the objects in an image. Specifically, one proposal includes three components: a predicted class label, a confidence score, and bounding box coordinates. We extract features based on the top K proposals with the highest confidence scores to determine whether an image is easy or hard, using a binary linear support vector machine (SVM) classifier. Since the feature length is small if K is small, the subsequent SVM classification takes little time (less than 0.1ms) per image, and is negligible.
Ablation Analysis. Several ways are available to organize the information in the proposals into a feature vector. Ablation studies are carried out to find the best practice on the PASCAL VOC07 dataset (which has 20 classes), with results in Table 3. For the simplest case, we utilize a VGG-16 model pretrained on ImageNet to do the binary classification and report the mAP in row 0. We can see that a basic image classification model performs poorly on this task.
The predicted class labels of the proposals can form a 20-dim histogram (denoted as "20" in Table 3), or 20-dim confidence vectors (one for each proposal, denoted as "20-prob"). Comparing rows 3 and 6, we find that the histogram of predicted classes is not only shorter but also more accurate. We believe the confidence of each proposal (denoted as "conf" in Table 3) is useful, and it is included in all setups of Table 3. The bounding box information is organized in two formats: "4s" and "4c". A comparison among rows 1, 2 and 6 shows that removing the coordinates reduces the mAP by at most 0.4%, and features including the bounding box size are more powerful (comparing row 1 with row 6). Summarizing these observations, we use "20+(conf+4s)×K" as our features. The coordinates are normalized to be within 0 and 1.
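Under the conventions above, the "20+(conf+4s)×K" vector can be assembled as follows. We assume here that "4s" denotes the size-based box encoding (xmin, ymin, w, h), with coordinates already normalized to [0, 1]:

```python
def af_features(proposals, num_classes=20, k=25):
    """Build the "num_classes + (conf + 4s) x k" feature vector.

    proposals: (class_index, confidence, (xmin, ymin, w, h)) triples.
    If fewer than k proposals exist, the per-proposal part is zero-padded.
    """
    top = sorted(proposals, key=lambda p: -p[1])[:k]
    hist = [0.0] * num_classes           # histogram of predicted class labels
    for cls, _, _ in top:
        hist[cls] += 1.0
    per_prop = []
    for _, conf, (x, y, w, h) in top:    # "conf + 4s" for each proposal
        per_prop += [conf, x, y, w, h]
    per_prop += [0.0] * (5 * k - len(per_prop))
    return hist + per_prop
```

For VOC this yields a 20 + 5×25 = 145-dimensional vector; the COCO variant of Section 5.4 would use num_classes=80 and k=50.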
We also evaluated the effect of K. Comparing rows 4, 5 and 6, we find that too many proposals not only reduce speed but also lower accuracy, while too few proposals also lead to lower accuracy. Hence, we choose K = 25 to extract our feature vector on the PASCAL VOC datasets, which is the last row in Table 3.
4.2 Learning the easy vs. hard classifier
It is not trivial to learn the easy vs. hard SVM classifier, even after we have fixed the feature representation. This classification problem is both imbalanced  and cost-sensitive , whose training requires special care.
As shown in Table 2, most images are easy (i.e., suitable for the fast detector). Hence, a classifier with a low error rate may still make many errors on the hard images. For example, if the classifier simply classifies all images as easy, its classification accuracy is around 80% (and could even be as high as 89.4%, Table 2). However, this trivial classifier reduces AF to the fast detector, whose detections are not accurate enough.
Beyond classification accuracy, we also want the classifier to correctly predict a high percentage of hard examples. This requires a high recall, which is the percentage of positive examples (i.e., hard images) that are correctly classified. A high recall ensures that the slow but accurate detector is applied to the appropriate examples, so that the AF detection results will be accurate.
A simple approach to solving the class imbalance problem is to assign different resampling weights to hard and easy images. Because we use an SVM as our easy vs. hard classifier, this can be equivalently achieved by assigning different misclassification costs to hard and easy images. Suppose we have a set of training examples (x_i, y_i), 1 ≤ i ≤ n, where x_i is the AF feature vector extracted for the i-th image, and y_i ∈ {+1, −1} is its label (+1 for hard, −1 for easy). A linear classifier is learned by solving the following standard SVM problem:

min_{w,b} (1/2)||w||² + C Σ_{i=1}^{n} ξ_i, s.t. y_i(w^T x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0 for all i,

where C is a hyperparameter that balances between a large margin and a small empirical error, and ξ_i is the slack (misclassification cost) associated with the i-th image.
In this standard SVM formulation, easy and hard images are treated equally. In order to obtain a high recall, a classic method is to assign different weights to different images. We fix the resampling weight for easy images at 1 and denote the resampling weight for hard images by β ≥ 1. A larger β puts more emphasis on the correct classification of hard images, and hence will in general lead to a higher recall. The SVM problem is now

min_{w,b} (1/2)||w||² + C (Σ_{i: y_i=−1} ξ_i + β Σ_{i: y_i=+1} ξ_i), s.t. y_i(w^T x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0 for all i.
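Our experiments use scikit-learn's LIBLINEAR solver; purely to illustrate the weighted objective, here is a toy full-batch subgradient solver in which hard images (y = +1) incur a larger misclassification cost:

```python
def train_weighted_svm(X, y, hard_weight=5.0, C=1.0, lr=0.01, epochs=500):
    """Toy solver for the class-weighted linear SVM (illustration only; the
    paper's classifier is trained with scikit-learn/LIBLINEAR).

    X: list of feature vectors; y: labels in {+1 (hard), -1 (easy)}.
    """
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), 0.0                # gradient of (1/2)||w||^2
        for xi, yi in zip(X, y):
            cost = C * (hard_weight if yi > 0 else 1.0)
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1.0:                 # hinge-loss subgradient
                for j in range(d):
                    gw[j] -= cost * yi * xi[j]
                gb -= cost * yi
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b
```

On a 1-D toy set where easy images cluster at small feature values and hard ones at large values, increasing hard_weight pushes the decision boundary toward the easy side, trading overall accuracy for recall on hard images.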
Ablation Analysis. We start from the balanced resampling ratio (i.e., the ratio between the number of easy and hard images) and gradually decrease the weight. For the pair SSD300 vs. SSD500, as shown in panel (a) of the corresponding figures, treating easy and hard examples equally (a resampling weight of 1) leads to a low recall rate and low detection mAP, but the detection speed is very fast. When the weight gradually increases, the classification accuracy and fps gradually decrease while the recall rate keeps increasing. Accordingly, the detection becomes more accurate, but at the price of a reduced detection speed. The same trends hold for SSD300 vs. R-FCN, as shown in panel (b) of the same figures.
We note that adjusting the resampling weight is a practical way to trade off detection speed against accuracy. When precision matters more, the balanced weight could be the first choice; otherwise, a lower weight might fit the situation better. In both cases, our AF achieves a considerable speed-up.
5 Experimental Results on Object Detection
We evaluate our method on the VOC 2007 test set, VOC 2012 test set  as well as MS COCO . We demonstrate the effect on achieving fast and accurate detections when combining two models using our AF approach.
5.1 Implementation details

We implement the SVM using scikit-learn, with the LIBLINEAR solver in the primal space, and use the default mode in scikit-learn for setting the resampling weight. For the basic and partner models, we directly use publicly available pretrained detection models without any change unless specifically mentioned. All evaluations were carried out on an Nvidia Titan X GPU, using the Caffe deep learning framework.
For all experiments listed below, we utilize the Tiny YOLO detector as the instance generator if not otherwise specified. We choose Tiny YOLO because it is one of the fastest deep learning based general object detectors. SSD300 is used as the basic model because it runs fast and performs well. On the other side, SSD500 and R-FCN are the partner models in different experiments, because their detections are more accurate than SSD300.
On the PASCAL VOC datasets, these models are trained on VOC07+12 trainval, and tested on both VOC07 test and VOC12 test. For the sake of fairness, we don’t train with extra data (VOC07 test) when testing on VOC12 test. We also conduct experiments on MS COCO  and report numbers from the test-dev 2015 evaluation server.
5.2 PASCAL VOC 2007 results
On this dataset, we perform experiments on two pairs of models, SSD300 vs. SSD500 and SSD300 vs. R-FCN, and compare against SSD300, SSD500 and R-FCN. Specifically, the training data is VOC07+12 trainval (16,551 images) and the test set is VOC07 test (4,952 images). Experimental results are displayed in Table 4. Since we do not have an independent validation set, we train the SVM on VOC07+12 trainval: we randomly split the 16,551 images into two parts, where the first part contains 13,000 images and the second keeps the remaining 3,551 images. We train our SVM on the 13k set and validate it on the 3.5k images. We use 20+(conf+4s)×25 as the features for SVM learning because it performs the best among the different feature types in Table 3.
We provide two modes for the combination: accurate (A) and fast (F). The accurate mode uses the balanced sampling weight (5.13 for SSD500, and 8.43 for R-FCN), while the fast mode uses a lower weight (3 and 5, respectively). Compared with SSD500, 300-500-A has slightly higher performance because the classifier makes the right choice for those images suited to the basic model. 300-R-FCN-F even outperforms SSD500 by two percentage points while running 5fps faster. If we compare AF with R-FCN, 300-R-FCN-A achieves a 113% speed-up at a slight cost in mAP (0.7 points). As additional baselines, we also run experiments on SSD400 and a simple ensemble of SSD300 and SSD500. We implement SSD400 following the instructions in . 300-500-F surpasses SSD400 by 0.4 points while reaching the same speed with negligible training cost. The Simple Ensemble method brute-force combines the detection results of SSD300 and SSD500, but its mAP is worse than SSD500's.
5.3 PASCAL VOC 2012 results
The same two pairs of detectors are evaluated on the VOC12 test set, and both the accurate and the fast modes are evaluated. For consistency, we use the same group of sampling weights for all four combinations: 5.13 for 300-500-A, 3.0 for 300-500-F, 8.43 for 300-R-FCN-A, and 5.0 for 300-R-FCN-F. As for VOC 2007 test, the SVM classifier is trained on instance proposals from VOC07+12 trainval.
Table 5 shows the results on VOC 2012 test. The AF approach shows different effects for different pairs. For the accurate mode, 300-500-A improves the mAP from 68.9% (SSD300) to 71.1%, the same as SSD500, but is 8fps faster than SSD500 (about a 42% speed-up). Even though its speed (27fps) is slower than the basic model's (SSD300, 46fps), it is still faster than what is required for real-time processing. 300-R-FCN-A runs twice as fast as R-FCN while losing only 1.0 mAP. For the fast mode, 300-500-F runs much faster while keeping precision comparable to SSD500. 300-R-FCN-F not only scores 2.0 points higher than SSD500 but is also 5fps faster.
5.4 MS COCO results
To further validate our approach, we train and test on the MS COCO dataset. All models are trained on trainval35k (including Tiny YOLO). For convenience, we use minival2014 (about 5,000 images) to train the SVM. Note that minival2014 is not contained in trainval35k. Since there are 80 classes in MS COCO, we take the top 50 proposals and use 80+(conf+4s)×50 features for SVM learning. This dataset exhibits different properties from the PASCAL VOC datasets. From Table 6, in both pairs the ratio of easy to hard images is only slightly larger than 1, which can be explained by the fact that MS COCO is "harder" than PASCAL VOC because it contains many small objects. Although this ratio is not as large as that on the VOC datasets, the adaptive feeding method still achieves convincing results.
Table 7 shows the AF results on MS COCO test-dev 2015. For the accurate mode, on the standard COCO evaluation metric, SSD300 scores 20.8% AP, and our approach improves it to 23.7%. It is also interesting to note that, since SSD500 and R-FCN are better at small and medium sized objects, our approach improves SSD300 mainly on these two parts.
5.5 Feature Visualization
In this part, we want to find out what makes an input image "easy" or "hard". To achieve this goal, we visualize the learned SVM model. We use a linear SVM with label +1 for hard images and −1 for easy ones. Features are learned on VOC07+12 trainval and COCO minival, respectively. We only report results for the natural balancing weights; note that there can be small changes when employing different class weights.
From Table 8, we can see that "conf" (confidence in top-k proposals) is the most influential factor. Thus, many high-confidence proposals make the input image a "hard" one. This fits our intuition: an image with many objects might be hard for a detector.
There are some other interesting observations: 1) large proposals hint at "easy" images; we argue that images with large objects often contain few instances, especially in VOC and COCO; 2) "easy" images prefer wider proposals (w > h), while "hard" images prefer taller instances; 3) xmin and ymin have small weights, hence the positions of proposals have little impact.
6 Pedestrian Detection
Since pedestrian detection can be regarded as a special case of object detection, we also apply our adaptive feeding approach to existing detectors on the Caltech Pedestrian dataset.
The basic model employs a stand-alone region proposal network (RPN). The original RPN in Faster R-CNN was developed as a class-agnostic detector with multi-scale anchors that describe objects of different aspect ratios at each position. Zhang et al. found that a single RPN alone already performs well in pedestrian detection. In our experiments, we build the RPN on VGG-16 and follow the strategies in  when designing anchors for the Caltech Pedestrian dataset. For the partner model, we directly use CompACT, which proposed a novel algorithm for learning a complexity-aware cascade. In our case, we use CompACT-Deep, which incorporates CNN features into the cascaded detectors.
Note that RPN achieves an MR (miss rate) of 15.2% on the Caltech Pedestrian test set at 10fps. CompACT-Deep reaches 12.0% MR, but is 7fps slower than RPN. The easy to hard ratio between these two detectors is 4.25, which seems to be a good situation for AF. RPN is also used as the instance generator here, which means every input image first passes through the RPN to make the decision. For the feature input to the SVM, we employ a similar format: (conf+4s)×25. The settings of the linear SVM are the same as those in object detection.
Experimental results are reported in Table 9. With RPN as the basic model, AF achieves a satisfying speed-up while maintaining an acceptable miss rate. With a sampling weight of 4.25, the accurate mode is 67% faster than the original CompACT-Deep model, at a cost of 0.5 points higher MR. When the weight drops to 3, RPN-CompACT-F has a 2.0 points lower miss rate than RPN at a comparable running speed. These experiments also show that the basic model itself can serve as the instance generator.
7 Conclusions

We presented adaptive feeding (AF), a simple but effective method that combines existing object detectors for speed and accuracy gains. Given an input image, AF decides whether the fast (but less accurate) or the accurate (but slow) detector should be applied. Hence, AF can achieve fast and accurate detection simultaneously. Another advantage of AF is that it needs no additional training data and its training time is negligible. By combining different pairs of models, we reported state-of-the-art results on the PASCAL VOC, MS COCO and Caltech Pedestrian datasets when detection speed and accuracy are both taken into account.
Though we used pairs of models (one basic and one partner model) throughout this paper, we believe AF can be used with combinations of more than two region-based ConvNet detectors. For example, a triplet combination can add an extra model that is more accurate but slower than the models in our experiments [11, 31], to further improve detection accuracy without losing AF's speed benefits.
Acknowledgments. This work was supported in part by the National Natural Science Foundation of China under Grant No. 61422203.
-  A. Angelova, A. Krizhevsky, V. Vanhoucke, A. S. Ogale, and D. Ferguson. Real-Time Pedestrian Detection with Deep Network Cascades. In BMVC, pages 32.1–32.12, 2015.
-  S. Bell, C. L. Zitnick, and R. Girshick. Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks. In CVPR, pages 2874–2883, 2016.
-  Z. Cai, M. Saberian, and N. Vasconcelos. Learning Complexity-Aware Cascades for Deep Pedestrian Detection. In ICCV, pages 3361–3369, 2015.
-  J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object Detection via Region-based Fully Convolutional Networks. In NIPS, pages 379–387, 2016.
-  N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, volume 1, pages 886–893. IEEE, 2005.
-  T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik. Fast, Accurate Detection of 100,000 Object Classes on a Single Machine. In CVPR, pages 1814–1821, 2013.
-  C. Elkan. The Foundations of Cost-Sensitive Learning. In IJCAI, volume 2, pages 973–984, 2001.
-  M. Everingham, S. M. Ali Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge: A Retrospective. IJCV, 111(1):98–136, 2015.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. IJCV, 88(2):303–338, 2010.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object Detection with Discriminatively Trained Part Based Models. PAMI, 32(9):1627–1645, 2010.
-  S. Gidaris and N. Komodakis. Object detection via a multi-region and semantic segmentation-aware CNN model. In ICCV, pages 1134–1142, 2015.
-  R. Girshick. Fast R-CNN. In ICCV, pages 1440–1448, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580–587, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. PAMI, 37(9):1904–1916, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, pages 770–778, 2016.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. In ACM MM, pages 675–678, 2014.
-  T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft COCO: Common Objects in Context. arXiv preprint arXiv:1405.0312, 2014.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single Shot MultiBox Detector. In ECCV, pages 21–37, 2016.
-  X.-Y. Liu, J. Wu, and Z.-H. Zhou. Exploratory under-sampling for class-imbalance learning. IEEE Trans. on Systems, Man, and Cybernetics – Part B: Cybernetics, 39(2):539–550, 2009.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011.
-  H. Qin, J. Yan, X. Li, and X. Hu. Joint Training of Cascaded CNN for Face Detection. In CVPR, pages 3456–3465, 2016.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In CVPR, pages 779–788, 2016.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NIPS, pages 91–99, 2015.
-  M. A. Sadeghi and D. Forsyth. 30Hz Object Detection with DPM V5. In ECCV, pages 65–79, 2014.
-  G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, Inc. New York, 1986.
-  A. Shrivastava, A. Gupta, and R. Girshick. Training Region-Based Object Detectors with Online Hard Example Mining. In CVPR, pages 761–769, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  S. Teerapittayanon, B. McDanel, and H. Kung. BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks. In ICPR, pages 2464–2469, 2016.
-  P. Viola and M. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. In CVPR, pages 511–518, 2001.
-  J. Yan, Z. Lei, L. Wen, and S. Z. Li. The Fastest Deformable Part Model for Object Detection. In CVPR, pages 2497–2504, 2014.
-  B. Yang, J. Yan, Z. Lei, and S. Z. Li. Craft objects from images. In CVPR, 2016.
-  F. Yang, W. Choi, and Y. Lin. Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers. In CVPR, pages 2129–2137, 2016.
-  L. Zhang, L. Liang, X. Liang, and K. He. Is Faster R-CNN Doing Well for Pedestrian Detection? In ECCV, pages 443–457, 2016.
-  Q. Zhu, M.-C. Yeh, K.-T. Cheng, and S. Avidan. Fast human detection using a cascade of histograms of oriented gradients. In CVPR, pages 1491–1498, 2006.