Detecting objects at vastly different scales is a fundamental challenge in computer vision. One traditional way to address this issue is to build feature pyramids directly upon image pyramids. Despite the inefficiency, this kind of approach has been applied, together with hand-engineered features, to object detection and many other tasks [7, 12].
In this paper, we focus on detecting objects with deep ConvNets. Aside from being capable of representing higher-level semantics, ConvNets are also robust to variance in scale, making it possible to detect multi-scale objects from features computed on a single-scale input [39, 17]. However, recent works suggest that pyramidal representations can further boost detection performance [30, 20, 16]. Their principal advantage is producing multi-scale feature representations in which all levels are semantically strong, including the high-resolution features.
There are several typical works exploring feature pyramid representations for object detection. The Single Shot Detector (SSD) is one of the first attempts to use this technique in ConvNets. Given one input image, SSD combines the predictions from multiple feature layers with different resolutions to naturally handle objects of various sizes. However, SSD fails to capture deep semantics in the shallow-layer feature maps, since its bottom-up pathway learns strong features only for the deep layers, not for the shallow ones. This is the key bottleneck of SSD for detecting small instances.
To overcome this disadvantage of SSD and make the networks more robust to object scales, recent works (e.g., FPN, DSSD, RON and TDM) propose to combine low-resolution, semantically strong features with high-resolution, semantically weak features via lateral connections in a top-down pathway. In contrast to the bottom-up fashion of SSD, the lateral connections pass semantic information down to the shallow layers one by one, thus enhancing the detection ability of shallow-layer features. Such technology is successfully used in object detection [15, 31], segmentation, pose estimation [47, 5], etc.
Ideally, the pyramid features in ConvNets should: (1) reuse multi-scale features from different layers of a single network, and (2) improve features with strong semantics at all scales. FPN-like works satisfy these conditions via lateral connections. Nevertheless, as demonstrated by our analysis in § 3, the FPN is actually equivalent to a linear combination of the feature hierarchy. Such a linear combination is too simple to capture the highly non-linear patterns of more complicated and practical cases. Several works try to develop more suitable connection schemes [25, 46, 48], or to add more operations before combination.
The basic motivation of this paper is to let the network learn information of interest for each pyramid level in a more flexible way, given a ConvNet’s feature hierarchy. To achieve this goal, we explicitly reformulate the feature pyramid construction process as feature reconfiguration functions in a highly non-linear yet efficient way. Specifically, our pyramid construction employs a global attention to emphasize global information of the full image, followed by a local reconfiguration to model local patches within the receptive field. The resulting pyramid representation is capable of spreading strong semantics to all scales. Compared to previous studies including SSD and FPN-like models, our pyramid construction is more advantageous in two aspects: 1) the global-local reconfigurations are non-linear transformations, and thus have more expressive power; 2) the pyramidal processing for all scales is performed simultaneously, and is hence more efficient than layer-by-layer transformation (e.g., in lateral connections).
In our experiments, we compare different feature pyramid strategies within the SSD architecture, and demonstrate that the proposed method is more competitive in terms of accuracy and efficiency. The main contributions of this paper are summarized as follows:
We propose global attention and local reconfiguration for building feature pyramids, enhancing multi-scale representations with semantically strong information;
We compare and analyze popular feature pyramid methodologies within the standard SSD framework, and demonstrate that the proposed reconfiguration is more effective;
The proposed method achieves state-of-the-art results on standard object detection benchmarks (i.e., PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO) without losing real-time processing speed.
2 Related work
Hand-engineered features such as SIFT and HOG are popular for feature extraction. To make them scale-invariant, these features are computed over image pyramids [9, 13]. Several attempts have been made to compute image-pyramid features efficiently [4, 7, 8]. Sliding-window methods over multi-scale feature pyramids are commonly applied in object detection [10, 14].
Deep ConvNet detectors lead to dramatic improvements in object detection. In particular, OverFeat adopts a strategy similar to early face detectors by applying a ConvNet as a sliding-window detector on image pyramids; R-CNN employs a region-proposal-based strategy and classifies each scale-normalized proposal with a ConvNet. SPP-Net and Fast R-CNN speed up the R-CNN approach with RoI pooling, which allows the classification layers to reuse CNN feature maps. Faster R-CNN and R-FCN then replace the region-proposal step with lightweight networks to deliver a complete end-to-end system. More recently, Redmon et al. [37, 38] propose YOLO, which predicts bounding boxes and associated class probabilities in a single step.
Deep feature pyramids: To make detection more reliable, researchers often adopt multi-scale representations by feeding images at multiple resolutions during training and testing [20, 21, 3]. Clearly, such image pyramid methods are very time-consuming, as they require computing the features at each image scale independently, so the ConvNet features cannot be reused. Recently, a number of approaches improve detection performance by combining predictions from different layers of a single ConvNet. For instance, HyperNet and ION combine features from multiple layers before making detections. To detect objects of various sizes, SSD spreads out default boxes of different scales over multiple layers of different resolutions within a single ConvNet. So far, SSD is a preferred choice for object detection given the speed-vs-accuracy trade-off. More recently, the lateral connection (or reverse connection) has become popular in object detection [15, 30, 26]. The main purpose of lateral connections is to enrich the semantic information of shallow layers via a top-down pathway. In contrast to such layer-by-layer connections, this paper develops a flexible framework to integrate the semantic knowledge of multiple layers in a global-local scheme.
3 Method
In this section, we first revisit the SSD detector, then consider the recent improvements based on lateral connections. Finally, we present our feature pyramid reconfiguration methodology.
ConvNet Feature Hierarchy: Object detection models based on ConvNets usually adopt a backbone network (such as VGG-16 or ResNets). Consider a single image that is passed through a convolutional network. The network comprises $L$ layers, each of which is implemented by a non-linear transformation $\mathcal{H}_l$, where $l$ indexes the layer. $\mathcal{H}_l$ is a composite of transforms such as convolution, pooling, ReLU, etc. We denote the output of the $l$-th layer as $X_l$. The total backbone network outputs are expressed as $\mathbb{X} = \{X_1, X_2, \ldots, X_L\}$.
Without a feature hierarchy, object detectors such as Faster R-CNN use one deep and semantic layer, such as $X_L$, to perform object detection. In SSD, the set of prediction feature maps can be expressed as

$$\mathbb{X}' = \{X_p, X_{p+1}, \ldots, X_L\}, \quad p < L, \tag{1}$$

(for the VGG-16 based model, prediction begins from the conv4_3 layer). Here, the deep feature maps learn high-level semantic abstractions. As $l$ decreases, $X_l$ becomes shallower and thus carries more low-level features. SSD uses the deeper layers to detect large instances, while using the shallow, high-resolution layers to detect small ones (here ‘small’ means that the proportion of the object in the image is small, not the actual instance size). The high-resolution maps, with their limited semantic information, have weak representational capacity for object recognition. They miss the opportunity to reuse deeper, semantic information when detecting small instances, which we show is the key bottleneck to boosting performance.
Lateral Connection: To enrich the semantic information of shallow layers, one way is to add features from the deeper layers (when the resolutions of the two layers differ, upsampling and a linear projection are usually carried out before combination). Taking the FPN manner as an example, we get

$$X'_l = \alpha_l X_l + \beta_l X'_{l+1}, \tag{2}$$

where $\alpha_l$, $\beta_l$ are weights. Without loss of generality,

$$X'_l = \sum_{i=l}^{L} w_i X_i, \tag{3}$$

where $w_i$ is the final weight generated for the layer output $X_i$ after similar polynomial expansions. Finally, the features used for detection are expressed as:

$$\mathbb{X}' = \{X'_p, X'_{p+1}, \ldots, X'_L\}. \tag{4}$$
From Eq. 3 we see that the final feature $X'_l$ is equivalent to a linear combination of $\{X_l, \ldots, X_L\}$. Such a linear combination over the deeper feature hierarchy is one way to improve the information of a specific shallow layer, and a linear model can achieve a good level of abstraction when the samples of the latent concepts are linearly separable. However, the feature hierarchy for detection often lives on a non-linear manifold, so the representations that capture these concepts are generally highly non-linear functions of the input [29, 33, 23]. Its representational power, as we show next, is not enough for the complex task of object detection.
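The equivalence claimed above can be checked numerically: a minimal NumPy sketch (with hypothetical weights and shapes, and spatial resizing omitted) showing that the top-down recursion of Eq. 2 unrolls into the single linear combination of Eq. 3.

```python
import numpy as np

# Toy check: the FPN-style top-down recursion
#   X'_l = alpha_l * X_l + beta_l * X'_{l+1}
# unrolls into one linear combination of the hierarchy {X_l, ..., X_L} (Eq. 3).
rng = np.random.default_rng(0)
L = 4
X = [rng.standard_normal((8, 8)) for _ in range(L)]  # toy feature hierarchy
alpha = [0.7, 0.6, 0.5, 1.0]                         # hypothetical weights
beta = [0.3, 0.4, 0.5, 0.0]

# Top-down recursion (lateral connection).
out = [None] * L
out[L - 1] = X[L - 1]
for l in range(L - 2, -1, -1):
    out[l] = alpha[l] * X[l] + beta[l] * out[l + 1]

def unrolled(l):
    # Expand the recursion into sum_i w_i * X_i with explicit scalar weights.
    acc = alpha[l] * X[l]
    coef = beta[l]
    for i in range(l + 1, L):
        w = alpha[i] if i < L - 1 else 1.0
        acc += coef * w * X[i]
        coef *= beta[i]
    return acc

assert np.allclose(out[0], unrolled(0))  # recursion == linear combination
```

Whatever the per-level weights, the deepest layers enter each shallow output only through fixed scalar coefficients, which is exactly the limited expressiveness the analysis above points out.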
3.1 Deep Feature Reconfiguration
Given the deep feature hierarchy of a ConvNet, the key problem of an object detection framework is to generate suitable features for each level of the detector. In this paper, the feature generation process at level $l$ is viewed as a non-linear transformation of the given feature hierarchy (Fig. 2):

$$X'_l = f_l(\mathbb{X}) = f_l(X_1, X_2, \ldots, X_L), \tag{5}$$

where $\mathbb{X}$ is the feature hierarchy considered for multi-scale detection. For ease of implementation, we concatenate the multiple inputs of $f_l$ in Eq. 5 into a single tensor before the following transformations (for a target scale, adaptive sampling to its spatial resolution is carried out before concatenation).
Given no priors about the distributions of the latent concepts of the feature hierarchy, it is desirable to use a universal function approximator for feature extraction at each scale. The function should also preserve spatial consistency, since the detector activates at the corresponding locations. The final features for each level are thus non-linear transformations of the feature hierarchy, in which learnable parameters are shared between different spatial locations.
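The hierarchy-gathering step above can be sketched as follows; the channel count, level sizes, and nearest-neighbor resampling are illustrative assumptions (the paper only states that adaptive sampling precedes concatenation).

```python
import numpy as np

def adaptive_sample(x, out_h, out_w):
    """Nearest-neighbor resampling of a (C, H, W) map to (C, out_h, out_w)."""
    c, h, w = x.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return x[:, rows][:, :, cols]

# Hypothetical hierarchy X_p..X_L: 64 channels at SSD-like spatial sizes.
hierarchy = [np.random.rand(64, s, s) for s in (38, 19, 10, 5)]
target_h, target_w = 19, 19  # spatial resolution of the target level l

# Resample every level to the target resolution, then concatenate channels,
# forming the single-tensor input of f_l in Eq. 5.
gathered = np.concatenate(
    [adaptive_sample(x, target_h, target_w) for x in hierarchy], axis=0)
assert gathered.shape == (64 * 4, 19, 19)
```

Because every level contributes to every target scale, the subsequent per-level transformation can draw on both shallower and deeper features simultaneously.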
In this paper, we formulate the feature transformation process as global attention and local reconfiguration problems. Both are implemented by light-weight networks, so that they can be embedded into the ConvNet and learned end-to-end. The global and local operations are also complementary, since they deal with the feature hierarchy at different scales.
3.1.1 Global Attention for Feature Hierarchy
Given the feature hierarchy, the aim of the global part is to emphasize informative features and suppress less useful ones globally for a specific scale. In this paper, we apply the Squeeze-and-Excitation block as the basic module. One Squeeze-and-Excitation block consists of two steps, squeeze and excitation. For the $l$-th level, the squeeze stage is formulated as a global pooling operation on each channel of $X$, which has $H \times W$ spatial dimensions:

$$z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_{c,i,j}, \tag{6}$$

where $x_{c,i,j}$ specifies the element at channel $c$, row $i$ and column $j$. If there are $C$ channels in feature $X$, Eq. 6 generates $C$ output elements, denoted as $z$.
The excitation stage consists of two fully-connected layers followed by a sigmoid activation, with input $z$:

$$s = \sigma\!\left(W_2\,\delta(W_1 z)\right), \tag{7}$$

where $\delta$ refers to the ReLU function, $\sigma$ is the sigmoid activation, $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$. The reduction ratio $r$ is set to 16 for dimensionality reduction. The final output of the block is obtained by rescaling the input with the activations:

$$\tilde{X} = s \cdot X, \tag{8}$$

where $\cdot$ denotes channel-wise multiplication. More details can be found in the SENets paper.
The original SE block was developed to explicitly model interdependencies between channels, and shows great success in object recognition. In contrast, we apply it to emphasize useful channel-level hierarchy features and suppress less useful ones. By dynamically conditioning on the input hierarchy, the SE block helps to boost feature discriminability and select more useful information globally.
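The global attention step can be written out in a few lines of NumPy; the channel count, spatial size, and random weights below are hypothetical, but the operations follow Eqs. 6-8 (global average pooling, a two-layer bottleneck with ReLU and sigmoid, channel-wise rescaling).

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, r = 256, 19, 19, 16                 # r = 16 reduction ratio
X = rng.standard_normal((C, H, W))           # concatenated feature hierarchy
W1 = rng.standard_normal((C // r, C)) * 0.1  # first FC (dimension reduction)
W2 = rng.standard_normal((C, C // r)) * 0.1  # second FC (restore dimension)

z = X.mean(axis=(1, 2))                      # squeeze: global pooling (Eq. 6)
s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # excitation (Eq. 7)
X_att = s[:, None, None] * X                 # channel-wise rescaling (Eq. 8)

assert X_att.shape == X.shape
assert np.all((s > 0) & (s < 1))             # sigmoid gates in (0, 1)
```

Each channel of the gathered hierarchy is thus scaled by a gate computed from full-image statistics, which is what lets the block act globally rather than per location.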
3.1.2 Local Reconfiguration
The local reconfiguration network maps a feature hierarchy patch to an output feature patch, and is shared among all local receptive fields. The output feature maps are obtained by sliding the operation over the input. In this work, we design a residual learning block as the instantiation of the micro network, which is a universal function approximator and trainable by back-propagation (Fig. 3).
Formally, one local reconfiguration is defined as:

$$X'_l = W_s X + \mathcal{F}(X), \tag{9}$$

where $W_s$ is a linear projection to match the dimensions (when the dimensions already match, it is not needed, denoted as the dotted line in Fig. 3), and $\mathcal{F}$ represents the residual mapping that improves the semantics to be learned.
Discussion: A direct way to generate feature pyramids is to use only the term $\mathcal{F}(X)$ in Eq. 9. However, as demonstrated in the ResNets work, it is easier to optimize the residual mapping than to optimize the desired underlying mapping directly. Our experiments in Section 4.1.4 also support this hypothesis.
We note some differences between our residual learning module and the one proposed in ResNets. Our hypothesis is that semantic information is distributed across the feature hierarchy, and the residual learning block can select additional information through optimization, whereas the purpose of residual learning in ResNets is to gain accuracy by increasing network depth. Another difference is that the input of our residual learning is the feature hierarchy, while in ResNets the input is a single level of convolutional output.
The form of the residual function $\mathcal{F}$ is also flexible. In this paper, we use a function that has three layers (Fig. 3), while more layers are possible. The element-wise addition is performed on the two feature maps, channel by channel. Because all levels of the pyramid use shared operations for detection, we fix the feature dimension (number of channels, denoted as $d$) in all the feature maps. We set $d = 256$ in this paper, and thus all layers used for prediction have 256-channel outputs.
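A minimal sketch of Eq. 9 follows, with the residual branch reduced to two 1×1 convolutions (per-pixel linear maps) for brevity rather than the paper's three-layer block; all shapes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
C_in, C_out, H, W = 256 * 4, 256, 19, 19
X = rng.standard_normal((C_in, H, W))        # attended, concatenated hierarchy

def conv1x1(x, w):
    """Apply a (C_out, C_in) linear map at every spatial location."""
    return np.einsum('oc,chw->ohw', w, x)

W_s = rng.standard_normal((C_out, C_in)) * 0.02  # projection shortcut W_s
W_a = rng.standard_normal((128, C_in)) * 0.02    # residual branch F(X):
W_b = rng.standard_normal((C_out, 128)) * 0.02   # two 1x1 layers, ReLU between

F_X = conv1x1(np.maximum(conv1x1(X, W_a), 0.0), W_b)
X_out = conv1x1(X, W_s) + F_X                # Eq. 9: projection + residual
assert X_out.shape == (C_out, H, W)          # fixed d = 256 output channels
```

Since the same weights slide over every location, spatial consistency is preserved, and because $\mathcal{F}$ contains non-linearities, the output is no longer a linear combination of the hierarchy as in Eq. 3.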
4 Experiments
We conduct experiments on three widely used benchmarks: the PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO datasets. All network backbones are pretrained on the ImageNet1k classification set and fine-tuned on the detection datasets. We use the pre-trained VGG-16 and ResNet models that are publicly available (https://github.com/pytorch/vision). Our experiments are based on re-implementations of SSD, Faster R-CNN and Feature Pyramid Networks using PyTorch. For the SSD framework, all layers in $\mathbb{X}$ are resized to the spatial size of layer conv8_2 in VGG and conv6_x in ResNet-101 to keep consistency with DSSD. For the Faster R-CNN pipeline, the resized spatial size is the same as that of the conv4_3 layer in both the VGG and ResNet-101 backbones.
4.1 PASCAL VOC 2007
Implementation details. All models are trained on the VOC 2007 and VOC 2012 trainval sets, and tested on the VOC 2007 test set. For one-stage SSD, we set the learning rate for the first 160 epochs, then decay it twice, once for each of the following two 40-epoch stages. We use the default batch size of 32 in training, and use VGG-16 as the backbone network for all ablation experiments on the PASCAL VOC dataset. For the two-stage Faster R-CNN experiments, we follow the training strategies introduced in the original work. We also report results with ResNet backbones for these models.
For fair comparison with the original SSD and its feature pyramid variations, we construct two baselines: the original SSD and SSD with lateral feature connections. In Table 1, the original SSD scores 77.5%, the same as originally reported. Adding lateral connections to SSD improves the result to 78.5% (SSD+lateral). When using the global and local reconfiguration strategy proposed above, the result improves to 79.6%, which is 1.1% better than SSD with lateral connections. Next, we discuss the ablation studies in more detail.
4.1.2 How important is global attention?
In Table 1, the fourth row shows the results of our model without global attention: we remove the global attention part and directly add the local transformation onto the feature hierarchy. Without global attention, the result drops to 79.0% mAP (-0.6%). The global attention makes the network focus more on features with suitable semantics, and helps detect instances with scale variation.
4.1.3 Comparison with the lateral connections
Adding the global and local reconfiguration to SSD improves the result to 79.6%, which is 2.1% better than SSD and 1.1% better than SSD with lateral connections. This is because there are large semantic gaps between different levels of the bottom-up pyramid, and the global and local reconfigurations help the detectors select more suitable feature maps; this issue cannot be remedied by lateral connections alone. We note that even with only the local reconfiguration, the result is already better than with lateral connections (+0.5%).
4.1.4 Only using the $\mathcal{F}(X)$ term
One way to generate the final feature pyramids is to use only the term $\mathcal{F}(X)$ in Eq. 9. Compared with the full residual learning block, the result drops by 0.4%. The residual learning block prevents the gradients of the objective function from flowing directly into the backbone network, and thus gives more opportunity to model the feature hierarchy well.
4.1.5 Use all feature hierarchy or just deeper layers?
In Eq. 3, the lateral connection only considers feature maps that are at the same depth as or deeper than the corresponding level. To better compare our method with lateral connections, we conduct an experiment that likewise only considers the deeper layers. Other settings are the same as in the previous baselines. We find that using only the deeper features drops accuracy by a small margin (-0.2%). We believe the difference arises because, when using the full feature hierarchy, the deeper layers also get more opportunity to re-organize their features and thus have more potential for boosting results; similar conclusions are drawn in the recent PANet work.
4.1.6 Accuracy vs. Speed
We present the inference speed of different models in the third column of Table 1. The speed is evaluated with batch size 1 on a machine with an NVIDIA Titan X, CUDA 8.0 and cuDNN v5. Our model obtains a 2.7% accuracy gain while running at 39.5 fps. Compared with the lateral-connection-based SSD, our model shows both higher accuracy and faster speed. In the lateral-connection-based model, the pyramid layers are generated serially, so the last constructed layer considered for detection becomes the speed bottleneck ($X'_p$ in Eq. 4). In our design, all final pyramid maps are generated simultaneously, which is more efficient.
4.1.7 Under Faster R-CNN pipeline
To validate the generality of the proposed feature reconfiguration method, we conduct experiments under the two-stage Faster R-CNN pipeline. In Table 2, Faster R-CNN with ResNet-101 gets an mAP of 78.9%. Feature Pyramid Networks with lateral connections improve the result to 79.8% (+0.9%). When replacing the lateral connections with our global-local transformation, we get a score of 80.6% (+1.7%). This result indicates that our global-and-local reconfiguration is also effective in two-stage object detection frameworks and can improve their performance.
4.1.8 Comparison with other state-of-the-arts
Table 3 shows our results on the VOC 2007 test set based on SSD. Our model achieves 79.6% mAP, which is much better than the baseline SSD300 (77.5%) and on par with SSD512. Enlarging the input image improves the result to 81.1%. Notably, our model is much better than other methods that try to include context information, such as MR-CNN and ION. When replacing the VGG-16 backbone with ResNet-101, our model scores 82.4% without bells and whistles, which is much better than the one-stage DSSD and the two-stage R-FCN.
4.2 PASCAL VOC 2012
For the VOC 2012 task, we follow the settings of VOC 2007, with a few differences described here. We use the 07++12 protocol, consisting of VOC 2007 trainval, VOC 2007 test and VOC 2012 trainval, for training, and the VOC 2012 test set for testing. We see the same performance trend as observed on the VOC 2007 test set. The results, shown in Table 4, demonstrate the effectiveness of our models. Compared with SSD and other variants, the proposed network is significantly better (+2.7%).
Compared with DSSD using a ResNet-101 backbone, our model achieves similar results with only a VGG-16 backbone. The recently proposed RUN improves the results of SSD with skip connections and unified prediction, adding several residual blocks to improve non-linearity before prediction. Compared with RUN, our model is more direct and achieves better detection performance. Our final result using ResNet-101 scores 81.1%, which is much better than the state-of-the-art methods.
4.3 MS COCO
To further validate the proposed framework on a larger and more challenging dataset, we conduct experiments on MS COCO and report results from the test-dev evaluation server. The evaluation metric of MS COCO differs from PASCAL VOC: the mAP averaged over IoU thresholds from 0.5 to 0.95 (written as 0.5:0.95) measures the overall performance of a method. We use the 80k training images and 40k validation images to train our model, and validate performance on the test-dev set, which contains 20k images. For ResNet-101 based models, we set the batch size to 32 and 20 for the two input resolutions respectively, due to memory constraints.
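The COCO-style averaging can be illustrated in a few lines; the per-threshold AP values below are made up for illustration only.

```python
import numpy as np

# COCO averages AP over IoU thresholds 0.50, 0.55, ..., 0.95 (10 values).
thresholds = np.arange(0.5, 1.0, 0.05)

# Hypothetical AP@IoU values, decreasing as the IoU requirement tightens.
ap_per_iou = np.linspace(0.45, 0.10, len(thresholds))

coco_ap = ap_per_iou.mean()  # reported as AP (0.5:0.95)
assert len(thresholds) == 10
```

This averaging rewards precise localization more than the single-threshold PASCAL VOC metric (AP at IoU 0.5), which is why COCO numbers are substantially lower than VOC numbers for the same detector.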
Table 5 summarizes the results (method, training data, input size, network, and average precision).
Under the standard COCO evaluation metric, SSD300 scores 25.1% AP, and our model improves it to 28.4% AP (+3.3%), which is on par with DSSD with a ResNet-101 backbone (28.0%). When changing the backbone to ResNet-101, our model gets 31.3% AP, which is much better than DSSD321 (+3.3%). The accuracy of our model can be further improved to 34.6% by using a larger input size, which is also better than the recently proposed RetinaNet, which adds lateral connections and a focal loss for better object detection.
Table 6 reports the multi-scale object detection results of our method under SSD framework using ResNet-101 backbone. It is observed that our method achieves better detection accuracies than SSD and DSSD for the objects of all scales.
5 Conclusion
A key issue in building feature pyramid representations within a ConvNet is how to reconfigure and reuse the feature hierarchy. This paper deals with this problem via global-and-local transformations. This representation allows us to explicitly model the feature reconfiguration process for specific scales of objects. We conduct extensive experiments comparing our method to other feature pyramid variations. Our study suggests that despite the strong representations of deep ConvNets, there is still room and potential to build better pyramids that further address multi-scale problems.
Acknowledgement: This work was jointly supported by the National Natural Science Foundation of China (NSFC) and German Research Foundation (DFG) joint project NSFC 61621136008 / DFG TRR-169, and the National Natural Science Foundation of China (Grant Nos. 61327809, 61210013).
-  Adelson, E.H., Anderson, C.H., Bergen, J.R., Burt, P.J., Ogden, J.M.: Pyramid methods in image processing. RCA engineer 29(6), 33–41 (1984)
-  Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12), 2481–2495 (2017)
-  Bell, S., Lawrence Zitnick, C., Bala, K., Girshick, R.: Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2874–2883 (2016)
-  Benenson, R., Mathias, M., Timofte, R., Van Gool, L.: Pedestrian detection at 100 frames per second. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. pp. 2903–2910. IEEE (2012)
-  Chen, Y., Wang, Z., Peng, Y., Zhang, Z., Yu, G., Sun, J.: Cascaded pyramid network for multi-person pose estimation. arXiv preprint arXiv:1711.07319 (2017)
-  Dai, J., Li, Y., He, K., Sun, J.: R-fcn: Object detection via region-based fully convolutional networks. In: Advances in neural information processing systems. pp. 379–387 (2016)
-  Dollár, P., Appel, R., Belongie, S., Perona, P.: Fast feature pyramids for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(8), 1532–1545 (2014)
-  Dollar, P., Appel, R., Kienzle, W.: Crosstalk cascades for frame-rate pedestrian detection. In: Computer Vision–ECCV 2012, pp. 645–659. Springer (2012)
-  Dollar, P., Belongie, S.J., Perona, P.: The fastest pedestrian detector in the west. In: BMVC. vol. 2, p. 7 (2010)
-  Dollar, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: An evaluation of the state of the art. IEEE transactions on pattern analysis and machine intelligence 34(4), 743–761 (2012)
-  Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. International journal of computer vision 88(2), 303–338 (2010)
-  Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. pp. 1–8. IEEE (2008)
-  Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence 32(9), 1627–1645 (2010)
-  Fu, C.Y., Liu, W., Ranga, A., Tyagi, A., Berg, A.C.: Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659 (2017)
-  Gidaris, S., Komodakis, N.: Object detection via a multi-region and semantic segmentation-aware cnn model. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1134–1142 (2015)
-  Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440–1448 (2015)
-  Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 580–587 (2014)
-  He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. arXiv preprint arXiv:1703.06870 (2017)
-  He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence 37(9), 1904–1916 (2015)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
-  Hoiem, D., Chodpathumwan, Y., Dai, Q.: Diagnosing error in object detectors. In: European conference on computer vision. pp. 340–353. Springer (2012)
-  Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507 (2017)
-  Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., Guadarrama, S., et al.: Speed/accuracy trade-offs for modern convolutional object detectors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7310–7311 (2017)
-  Jeong, J., Park, H., Kwak, N.: Enhancement of ssd by concatenating feature maps for object detection. arXiv preprint arXiv:1705.09587 (2017)
-  Kong, T., Sun, F., Yao, A., Liu, H., Lu, M., Chen, Y.: Ron: Reverse connection with objectness prior networks for object detection. In: IEEE Conference on Computer Vision and Pattern Recognition. vol. 1, p. 2 (2017)
-  Kong, T., Yao, A., Chen, Y., Sun, F.: Hypernet: Towards accurate region proposal generation and joint object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 845–853 (2016)
-  Lee, K., Choi, J., Jeong, J., Kwak, N.: Residual features and unified prediction network for single stage detection. arXiv preprint arXiv:1707.05031 (2017)
-  Lin, M., Chen, Q., Yan, S.: Network in network. arXiv preprint arXiv:1312.4400 (2013)
-  Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144 (2016)
-  Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. arXiv preprint arXiv:1708.02002 (2017)
-  Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740–755. Springer (2014)
-  Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. arXiv preprint arXiv:1803.01534 (2018)
-  Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision. pp. 21–37. Springer (2016)
-  Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
-  Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch (2017)
-  Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 779–788 (2016)
-  Redmon, J., Farhadi, A.: Yolo9000: better, faster, stronger. arXiv preprint arXiv:1612.08242 (2016)
-  Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91–99 (2015)
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211–252 (2015)
-  Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 (2013)
-  Shen, Z., Liu, Z., Li, J., Jiang, Y.G., Chen, Y., Xue, X.: Dsod: Learning deeply supervised object detectors from scratch. In: The IEEE International Conference on Computer Vision (ICCV). vol. 3, p. 7 (2017)
-  Shrivastava, A., Gupta, A., Girshick, R.: Training region-based object detectors with online hard example mining. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 761–769 (2016)
-  Shrivastava, A., Sukthankar, R., Malik, J., Gupta, A.: Beyond skip connections: Top-down modulation for object detection. arXiv preprint arXiv:1612.06851 (2016)
-  Wang, X., Han, T.X., Yan, S.: An hog-lbp human detector with partial occlusion handling. In: CVPR (2009)
-  Woo, S., Hwang, S., Kweon, I.S.: Stairnet: Top-down semantic aggregation for accurate one shot detection. arXiv preprint arXiv:1709.05788 (2017)
-  Yang, W., Li, S., Ouyang, W., Li, H., Wang, X.: Learning feature pyramids for human pose estimation. In: The IEEE International Conference on Computer Vision (ICCV). vol. 2 (2017)
-  Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S.Z.: Single-shot refinement neural network for object detection. arXiv preprint arXiv:1711.06897 (2017)
-  Zhu, Y., Zhao, C., Wang, J., Zhao, X., Wu, Y., Lu, H.: Couplenet: Coupling global structure with local parts for object detection. In: Proc. of Int’l Conf. on Computer Vision (ICCV) (2017)