
ScarfNet: Multi-scale Features with Deeply Fused and Redistributed Semantics for Enhanced Object Detection

Convolutional neural networks (CNNs) have led to significant progress in object detection. To detect objects at various scales, object detectors often exploit a hierarchy of multi-scale feature maps called a feature pyramid, which is readily obtained from the CNN architecture. However, the performance of these detectors is limited because the bottom-level feature maps, which pass through fewer convolutional layers, lack the semantic information needed to capture the characteristics of small objects. To address this problem, various methods have been proposed to increase the depth of the bottom-level features used for object detection. While most approaches generate additional features through a top-down pathway with lateral connections, our approach directly fuses the multi-scale feature maps using bidirectional long short-term memory (biLSTM) to generate deeply fused semantics. The resulting semantic information is then redistributed to the individual pyramidal features at each scale through a channel-wise attention model. We integrate our semantic combining and attentive redistribution feature network (ScarfNet) with the baseline object detectors Faster R-CNN, the single-shot multibox detector (SSD), and RetinaNet. Our experiments show that our method outperforms the existing feature pyramid methods as well as the baseline detectors and achieves state-of-the-art performance on the PASCAL VOC and COCO detection benchmarks.





1 Introduction

Object detection refers to the task of deciding whether there are any instances of objects in an image and returning the locations and categories of those objects [general], [survey]. Historically, object detection has been one of the most challenging computer vision problems. Recently, deep learning has led to unprecedented advances in object detection techniques.

A convolutional neural network (CNN) produces a hierarchy of abstract feature maps through a cascade of convolution operations, each followed by a nonlinear activation. Using the CNN as a backbone network, object detectors can effectively infer the bounding-box locations and categories of object instances from these abstract feature maps. Thus far, various object detection network structures have been proposed in the literature. CNN-based object detectors are roughly categorized into two groups: two-stage detectors and single-stage detectors. Two-stage detectors detect objects using two separate networks: 1) a region proposal network that finds bounding boxes likely to contain objects and 2) an object classifier network that identifies the class of each object. Well-known two-stage detectors include R-CNN [rcnn], Fast R-CNN [fastrcnn], Faster R-CNN [fasterrcnn], and Mask R-CNN [maskrcnn]. Single-stage detectors, on the other hand, directly estimate the bounding boxes and object classes from the feature maps in one shot. Single-stage detectors include SSD [ssd], YOLO [yolov1], YOLOv2 [yolov2], and RetinaNet [retinanet].

A key ingredient of the recent advances in object detection is the CNN's capability to produce abstract features containing strong semantic cues. The deeper the convolutional layers, the higher the level of abstraction achieved by the resulting feature maps. As a result, the features produced at the end of the CNN pipeline (called top-level features) contain rich semantics but lack spatial resolution, while the features close to the input layers (called bottom-level features) lack semantic information but retain detailed spatial information. The hierarchy of such multi-scale features constitutes the so-called feature pyramid, which is used to detect objects of different scales in many object detectors (e.g., SSD [ssd], MS-CNN [cai2016unified], and RetinaNet [retinanet]). The structure using such a feature pyramid for object detection is described in Fig. 1 (a). Note that the attributes of large objects tend to be captured in the top-level features, while those of small objects are better represented by the shallow bottom-level features.

One limitation of the aforementioned feature pyramid method is the disparity of semantic information between the multi-scale feature maps used for object detection. The bottom-level features are not deep enough to exhibit the high-level semantics underlying the objects and their surroundings, which results in accuracy loss when detecting small objects. To address this problem, several approaches have been proposed to reduce the semantic gap between the different scales. One notable direction is to provide contextual information to the bottom-level features by generating highly semantic features in a top-down pathway with lateral connections. As illustrated in Fig. 1 (c), starting from the top-level pyramidal feature obtained from the bottom-up network, additional features are generated with increased depth and resolution. To avoid losing spatial information, lateral connections bring in the low-level bottom features and combine them with the high-level semantic features. Various object detectors including DSSD [dssd], FPN [fpn], and StairNet [woo2018stairnet] follow this principle, and significant improvements in detection accuracy have been reported.

Our work is motivated by the observation that the current architectures for generating top-down features might not be flexible enough to generate strong semantics at all scales. Thus, we propose a new framework that generates deeply fused semantics across the multi-scale features for enhanced object detection. The proposed feature pyramid method, referred to as the semantic combining and attentive redistribution feature network (ScarfNet), fuses the multi-scale feature maps using a recurrent neural network and produces new multi-scale feature maps by redistributing the learned semantics to each level. The structure of our ScarfNet is depicted in Fig. 1 (d). First, we fuse the multi-scale pyramidal features using bidirectional long short-term memory (biLSTM) [zhang2018attention]. Note that the biLSTM is advantageous for fusing the multi-scale features in that the number of required weights is significantly reduced by parameter sharing, and only the relevant semantic information is selectively aggregated through the gating function of the biLSTM. The outputs of the biLSTM are concatenated and distributed through the channel-wise attention model to generate highly semantic features tailored to each pyramid scale. The final multi-scale feature maps for object detection are obtained by concatenating the output of the ScarfNet with the original pyramidal features. Note that our framework can readily be applied to various CNN architectures that require a feature pyramid with strong semantics.

In our experiments, we integrate our ScarfNet with the baseline detectors Faster R-CNN [fasterrcnn], SSD [ssd], and RetinaNet [retinanet]. Our evaluation conducted on the PASCAL VOC [pascalvoc] and MS COCO [mscoco] datasets shows that our method offers significant improvement over the baseline detectors as well as other competitive detectors in terms of detection accuracy. Furthermore, the proposed ScarfNet-based RetinaNet achieves state-of-the-art performance on the PASCAL VOC [pascalvoc] and COCO [mscoco] detection benchmarks. Our code will be publicly available. The contributions of our paper are summarized as follows:

  • We introduce a new deep architecture for closing the semantic gaps between the multi-scale feature maps. The proposed ScarfNet generates the new multi-scale feature maps with the deeply fused and redistributed semantics. This is achieved by using the combination of the biLSTM and channel-wise attention model.

  • We are the first to use biLSTM to combine the multi-scale features to incorporate strong semantics for the feature pyramid. The biLSTM can produce the deeply fused semantic information using the recurrent connection over the different pyramid scales. Furthermore, our ScarfNet benefits from the selective information gating mechanism inherent in the biLSTM. Due to parameter sharing, the overhead due to ScarfNet is small. In addition, our ScarfNet is easy to train and end-to-end trainable.

Figure 1: Structure of several feature pyramid methods: In (a), the feature pyramid obtained from the convolutional layers is used directly in the baseline detectors (e.g., SSD [ssd]). In (b), the multi-scale features are fused and converted into a single semantic feature map with the highest resolution. (c) shows the structure that generates additional features in a unidirectional way through the top-down pathway with lateral connections. (d) shows the structure of the proposed ScarfNet, where the multi-scale features are fused in a bidirectional fashion and the learned semantics are propagated back to each scale.

2 Related Work

In this section, we review the basic object detectors and several existing feature pyramid methods for decreasing the semantic gap between the scales.

2.1 CNN-based Object Detectors

Recently, the CNN has brought an order-of-magnitude performance improvement in object detection, and various CNN-based object detectors have been proposed. The current object detectors can be categorized into two groups: two-stage detectors and single-stage detectors. Two-stage detectors detect objects in two steps: finding region proposals based on the objectness of the regions, and conducting classification and box regression for the detected region proposals. The R-CNN [rcnn] is the first CNN-based detector, in which the traditional selective search is employed to find region proposals and the CNN is applied to the image patch in each region proposal. Fast R-CNN [fastrcnn] and Faster R-CNN [fasterrcnn] reduced the computation time of the R-CNN by introducing region of interest (ROI) pooling to reuse full-image feature maps and by replacing the selective search with a region proposal network (RPN). Single-stage detectors directly perform classification and box regression based on the feature maps. These detectors compute the confidence scores on the object categories and the regression results for the candidate boxes while sweeping the feature maps spatially. Well-known single-stage detectors include SSD [ssd], YOLO [yolov1], and YOLOv2 [yolov2]. Recently, RetinaNet [retinanet] achieved the state-of-the-art performance using the ResNet [resnet] backbone and various recent training tricks. Refer to [survey] for a comprehensive review of contemporary object detectors.

2.2 Object Detectors Using Multi-scale Features

Several object detectors, including SSD [ssd] and RetinaNet [retinanet], rely on the hierarchical feature pyramid to detect objects of various sizes (see Fig. 1 (a)). One issue arising from using the multi-scale features directly produced by the CNN is the semantic gap between them, caused by the different depths of the layers each feature passes through. Due to the relatively low level of abstraction of the bottom-level features, detection accuracy for small objects is often limited. Fig. 1 (b), (c), and (d) describe several strategies that have been proposed to overcome this issue. Fig. 1 (b) depicts the strategy of combining the multi-scale features into a single high-resolution feature map with strong semantics. HyperNet [kong2016hypernet] and ION [bell2016inside] improved the performance of the RPN by aggregating the hierarchical features with appropriate resizing of the feature maps. Fig. 1 (c) shows the strategy of generating highly semantic features through the top-down pathway with lateral connections. Note that the semantic information is brought through the top-down connections while the detailed spatial information is delivered through the lateral connections. Detectors based on this structure include DSSD [dssd], StairNet [woo2018stairnet], TDM [shrivastava2016beyond], FPN [fpn], and RefineDet [zhang2018single]. DSSD [dssd] and StairNet [woo2018stairnet] use deconvolutional layer-based top-down connections on the SSD baseline [ssd]. TDM [shrivastava2016beyond] employs a top-down structure tailored to the RPN of the Faster R-CNN [fasterrcnn]. FPN [fpn] uses a simplified structure with 2x upsampling and 1x1 convolutions for the top-down and lateral connections, respectively. RefineDet [zhang2018single] employs two-step cascade regression for the top-down connection.

Figure 2: The overall architecture of the proposed ScarfNet: The ScarfNet consists of two modules: ScNet and ArNet. The ScNet aggregates the pyramidal features obtained from the bottom-up CNN pipeline. Then, the ArNet distributes the fused semantics to each pyramid level. The final high-level semantic features are generated by channel-wise concatenation between the output of the ScarfNet and the original pyramidal features. The detailed structures of the matching block and attention block are depicted in the yellow boxes.

3 Proposed Object Detector

In this section, we introduce the details on the proposed ScarfNet architecture.

3.1 Existing Feature Pyramid Methods

The feature pyramid-based object detectors base their decision on feature maps across the different pyramid levels in order to detect objects of various sizes. As shown in Fig. 1 (a), the baseline detectors use the feature map $x_l$ at the $l$-th pyramidal level,

$$x_l = f_l(x_{l-1}), \qquad y_l = \mathcal{D}(x_l), \qquad l = 1, \dots, L,$$

where $x_0$ is the feature map produced by the backbone network and $x_1, \dots, x_L$ are the bottom-up features from the subsequent convolutional layers. Here $f_l$ denotes the operation performed by the $l$-th convolutional layer and $\mathcal{D}$ denotes the detection sub-network, which often applies a single 3x3 convolutional layer to produce the classification and box regression outputs. Due to the different depths from the input to each pyramidal feature, the shallow bottom-level features suffer from a lack of semantic information.
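To make the bottom-up pipeline concrete, the following minimal NumPy sketch builds a toy feature pyramid by repeatedly downsampling and projecting a feature map. The pooling plus random 1x1 projection stands in for a real convolutional stage; all names and shapes here are illustrative, not the paper's implementation.

```python
import numpy as np

def conv_downsample(x, out_ch, rng):
    """Toy stride-2 stage: 2x2 average pooling followed by a random 1x1
    projection and ReLU, standing in for a real convolutional layer f_l."""
    h, w, c = x.shape
    pooled = x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    proj = rng.standard_normal((c, out_ch)) / np.sqrt(c)
    return np.maximum(pooled @ proj, 0.0)

def build_pyramid(image, num_levels=4, base_ch=16):
    """Run the bottom-up pipeline x_l = f_l(x_{l-1}) and collect every level."""
    rng = np.random.default_rng(0)
    x = image
    pyramid = []
    for l in range(num_levels):
        x = conv_downsample(x, base_ch * 2 ** l, rng)
        pyramid.append(x)
    return pyramid
```

Running `build_pyramid` on a 64x64x3 input yields the usual pyramid shape pattern: spatial resolution halves and channel depth doubles at each level, so the top level is semantically deep but spatially coarse.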

In order to reduce the semantic gap between different pyramid levels, several works proposed the top-down structure with lateral connections illustrated in Fig. 1 (c). This structure propagates the high-level semantics from the top to the bottom layers with increasing resolution while keeping the spatial detail through the lateral connections. The $l$-th feature map generated by this method is expressed as

$$\hat{x}_l = g_l(x_l) \oplus h_l(\hat{x}_{l+1}), \qquad l = L-1, \dots, 1,$$

where $\hat{x}_L = g_L(x_L)$. Note that $g_l$ is the operation for the $l$-th lateral connection and $h_l$ is the operation for the $l$-th top-down connection. The operator $\oplus$ represents the combining operation for two feature maps, e.g., channel-wise concatenation or element-wise addition. Different methods (e.g., DSSD [dssd], StairNet [woo2018stairnet], TDM [shrivastava2016beyond], FPN [fpn], and RefineDet [zhang2018single]) employ slightly different structures for $g_l$ and $h_l$. While these methods promote the abstraction level of the pyramidal features, they still have limitations. Since the top-down connection propagates the semantic information in a unidirectional way, the semantics are not evenly distributed to all pyramid levels; as a result, a semantic gap between the pyramidal features remains. Moreover, such unidirectional processing has limited capacity to produce the rich contextual information needed to raise the semantic level at all scales. To address these problems, we develop a new architecture that uses the biLSTM to generate deeply fused semantics through bidirectional connections between all pyramid scales. In the following subsections, we present the details of our design.
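The top-down pathway can be sketched as follows. This is an illustrative FPN-style merge using nearest-neighbor upsampling, random 1x1 lateral projections, and element-wise addition as the combining operator; it is not any detector's actual code, and the function names are assumptions.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def lateral_1x1(x, out_ch, rng):
    """1x1 convolution as a per-pixel linear projection (lateral connection g_l)."""
    w = rng.standard_normal((x.shape[-1], out_ch)) / np.sqrt(x.shape[-1])
    return x @ w

def top_down_merge(pyramid, out_ch=32, seed=0):
    """FPN-style merge: start from the coarsest level and combine each lateral
    projection with the upsampled coarser map by element-wise addition."""
    rng = np.random.default_rng(seed)
    merged = [lateral_1x1(pyramid[-1], out_ch, rng)]
    for x in reversed(pyramid[:-1]):
        merged.append(lateral_1x1(x, out_ch, rng) + upsample2x(merged[-1]))
    return merged[::-1]  # reorder bottom-up so index l matches the input pyramid
```

Note how the information flow is strictly one-directional: each merged level only receives semantics from levels above it, which is exactly the limitation the proposed bidirectional fusion is designed to remove.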

3.2 ScarfNet: Overall Architecture

Our ScarfNet attempts to resolve the discrepancy of the semantic information in two steps: 1) combining the scattered semantic information using the biLSTM and 2) redistributing the fused semantics back to each pyramid level using the channel-wise attention model. The overall architecture of the ScarfNet is depicted in Fig. 2. Taking the pyramidal features $x_1, \dots, x_L$ as input, the ScarfNet produces the new $l$-th pyramidal feature map as

$$\tilde{x}_l = \mathrm{ArNet}_l\big(\mathrm{ScNet}(x_1, \dots, x_L)\big), \qquad l = 1, \dots, L.$$

As seen above, the ScarfNet consists of two sub-networks: the semantic combining network (ScNet) and the attentive redistribution network (ArNet). First, the ScNet merges the pyramidal features through the biLSTM and produces output features with fused semantics. Second, the ArNet collects the output features from the biLSTM and applies the channel-wise attention model to produce highly semantic multi-scale features, which are concatenated with the original pyramidal features. Finally, the resulting feature maps are individually processed by the detection sub-network to produce the object detection results.

3.3 Semantic Combining Network (ScNet)

The feature maps produced by the ScNet are obtained as

$$(s_1, \dots, s_L) = \mathrm{biLSTM}\big(m_1(x_1), \dots, m_L(x_L)\big),$$

where $s_l$ is the output feature map for the $l$-th level and $m_l$ denotes the matching block applied to $x_l$. Fig. 3 depicts the detailed structure of the ScNet. The ScNet uniformly fuses the semantics scattered across the different pyramid levels using the biLSTM, which can selectively fuse the contextual information underlying the multi-scale features through its gating function.

Figure 3: The structure of the ScNet: the matching block and the biLSTM are applied to generate the fused feature maps. Note that the matching block applies bilinear interpolation and 1x1 convolution to make the spatial and channel dimensions of the inputs to the biLSTM equal.

As shown in Fig. 3, the ScNet consists of the matching block and the biLSTM block. The matching block first resizes the pyramidal features so that they have the same size as the largest pyramidal feature. Then, it adjusts the channel dimension of the input using a 1x1 convolutional layer. As a result, the matching block produces feature maps of identical spatial and channel dimensions for the biLSTM. Note that the resizing operation is performed by bilinear interpolation. The biLSTM used in the ScNet follows the structure of [xingjian2015convolutional], which significantly saves computation by using convolutional layers for the input connections and computing the gating parameters from the result of global average pooling. Writing $\bar{x}_l = m_l(x_l)$ for the matching-block output at level $l$, the forward pass of a convolutional LSTM of this type can be summarized as

$$i_l = \sigma(W_{xi} * \bar{x}_l + W_{hi} * h_{l-1} + b_i),$$
$$f_l = \sigma(W_{xf} * \bar{x}_l + W_{hf} * h_{l-1} + b_f),$$
$$c_l = f_l \circ c_{l-1} + i_l \circ \tanh(W_{xc} * \bar{x}_l + W_{hc} * h_{l-1} + b_c),$$
$$o_l = \sigma(W_{xo} * \bar{x}_l + W_{ho} * h_{l-1} + b_o),$$
$$h_l = o_l \circ \tanh(c_l),$$

where $*$ denotes convolution and $\circ$ denotes the Hadamard product. The state update of the biLSTM is conducted in both forward and backward directions; we show only the forward update, and the equations for the backward update are analogous.
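A minimal sketch of the bidirectional fusion over pyramid levels follows. For brevity it treats each level as a flattened feature vector and uses dense weights in place of the convolutional connections of [xingjian2015convolutional]; the parameter sharing across levels and the Hadamard-product gating are the properties being illustrated, and all names and shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(seq, W, U, b):
    """One directional LSTM sweep over the scale axis. Each element of `seq`
    is a flattened feature map; the weights are shared across all levels."""
    d = seq[0].shape[-1]
    h, c = np.zeros(d), np.zeros(d)
    outs = []
    for x in seq:
        gates = x @ W + h @ U + b                 # all four gates at once, (4d,)
        i, f, o, g = np.split(gates, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                         # Hadamard-product gating
        h = o * np.tanh(c)
        outs.append(h)
    return outs

def bilstm_fuse(seq, seed=0):
    """biLSTM fusion: forward and backward sweeps with shared weights, hidden
    states concatenated so every level sees semantics from all other levels."""
    d = seq[0].shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
    U = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
    b = np.zeros(4 * d)
    fwd = lstm_pass(seq, W, U, b)
    bwd = lstm_pass(seq[::-1], W, U, b)[::-1]
    return [np.concatenate([f_, b_]) for f_, b_ in zip(fwd, bwd)]
```

Because one weight set serves every pyramid level in both sweeps, the parameter count is independent of the number of levels, which is the overhead advantage claimed for the biLSTM in Section 1.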

Figure 4: The structure of the ArNet: The ArNet concatenates the fused feature maps and applies the channel-wise attention. Then, the spatial and channel dimensions of the resulting feature maps are adjusted by the matching block.

Method                             | Backbone   | mAP VOC 2007 | mAP VOC 2012
-----------------------------------|------------|--------------|-------------
SSD300* [ssd] (baseline)           | VGG-16     | 77.5         | 75.8
SSD512* [ssd] (baseline)           | VGG-16     | 79.8         | 78.5
StairNet [woo2018stairnet]         | VGG-16     | 78.8         | 76.4
Faster R-CNN [fasterrcnn]          | VGG-16     | 73.2         | 70.4
ION [bell2016inside]               | VGG-16     | 76.5         | 76.4
SSD321 [dssd]                      | ResNet-101 | 77.1         | 75.4
SSD513 [dssd]                      | ResNet-101 | 80.6         | 79.4
DSSD321 [dssd]                     | ResNet-101 | 78.6         | 76.3
DSSD513 [dssd]                     | ResNet-101 | 81.5         | 80.0
R-FCN [rfcn]                       | ResNet-101 | 80.5         | 77.6
RetinaNet [retinanet] (baseline)   | ResNet-101 | 83.0         | -
Proposed with SSD300               | VGG-16     | 79.4         | 77.2
Proposed with SSD512               | VGG-16     | 81.6         | 79.8
Proposed with RetinaNet500         | ResNet-101 | 83.5         | -

Table 1: PASCAL VOC 07/12 detection results: The detection results for VOC 2007 are evaluated on the VOC 2007 test set after training on the VOC 2007 trainval and VOC 2012 trainval sets. Those for VOC 2012 are evaluated on the VOC 2012 test set after training on the VOC 2007 trainval, VOC 2007 test, and VOC 2012 trainval sets.

3.4 Attentive Redistribution Network (ArNet)

The ArNet aims to produce a high-level semantic feature map $r_l$, which is concatenated with the original pyramidal feature map as

$$\tilde{x}_l = x_l \,\|\, r_l,$$

where the operator $\|$ denotes channel-wise concatenation. The detailed structure of the ArNet is depicted in Fig. 4. The ArNet concatenates the outputs of the biLSTM and applies channel-wise attention to them. The attention weights are obtained by constructing a 1x1 vector using global average pooling and passing it through two fully connected layers followed by a sigmoid function. Note that this channel-wise attention model allows for selective propagation of the semantics to each pyramid level. Once the attention weights are applied, the matching block downsamples the resulting feature maps to the original sizes of the pyramidal features and applies a 1x1 convolution to match the channel dimensions with those of the original pyramidal features. Finally, the output of the matching block is concatenated with the original feature $x_l$ to produce the highly semantic feature $\tilde{x}_l$.
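The channel-wise attention step can be illustrated with a squeeze-and-excitation-style NumPy sketch: global average pooling, two fully connected layers, and a sigmoid produce one weight per channel, which rescales the feature map. The weight shapes and the reduction ratio are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap, reduction=4, seed=0):
    """Channel-wise attention in the spirit of the ArNet: global average
    pooling over (H, W), two FC layers with ReLU in between, sigmoid, then
    per-channel rescaling of the input map."""
    c = fmap.shape[-1]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c, c // reduction)) / np.sqrt(c)
    w2 = rng.standard_normal((c // reduction, c)) / np.sqrt(c // reduction)
    pooled = fmap.mean(axis=(0, 1))                        # (C,) squeeze
    weights = sigmoid(np.maximum(pooled @ w1, 0.0) @ w2)   # (C,) in (0, 1)
    return fmap * weights                                  # broadcast over H, W
```

Since every attention weight lies in (0, 1), the model can only attenuate channels, acting as a learned per-level selector over the fused semantics rather than an arbitrary transform.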

4 Experiments

In this section, we evaluate the performance of the proposed ScarfNet. We compare our detector with other methods and conduct an extensive performance analysis to understand the behavior of our architecture.

4.1 Experiment Setup

Our ScarfNet is applied to the baseline object detectors Faster R-CNN [fasterrcnn], SSD [ssd], and RetinaNet [retinanet]. In the case of Faster R-CNN and RetinaNet, we replace the original FPN part with the features generated by our ScarfNet. We compare our method with the baseline detectors as well as other competitive algorithms including ION [bell2016inside], R-FCN [rfcn], DSSD [dssd], and StairNet [woo2018stairnet]. We measure mean average precision (mAP) in % on three widely used object detection benchmarks: PASCAL VOC 2007, PASCAL VOC 2012 [pascalvoc], and MS COCO [mscoco].

Method                      | Backbone    | Module | fps  | AP   | AP50 | AP75 | APS  | APM  | APL
----------------------------|-------------|--------|------|------|------|------|------|------|-----
Two-stage:                  |             |        |      |      |      |      |      |      |
Faster R-CNN* [fasterrcnn]  | ResNeXt-101 | FPN    | 15.3 | 37.6 | 59.1 | 40.7 | 19.2 | 41.8 | 52.3
Faster R-CNN* [fasterrcnn]  | ResNeXt-101 | FPN    | 10.3 | 41.9 | 63.9 | 45.9 | 25.0 | 45.3 | 52.3
Scarf R-CNN (ours)          | ResNeXt-101 | SCARF  | 13.8 | 38.5 | 59.9 | 41.5 | 19.1 | 42.9 | 54.1
Scarf R-CNN (ours)          | ResNeXt-101 | SCARF  | 8.9  | 42.8 | 64.3 | 47.1 | 26.0 | 45.7 | 52.9
One-stage:                  |             |        |      |      |      |      |      |      |
SSD513 [dssd]               | ResNet-101  | -      | 12.5 | 31.2 | 50.4 | 33.3 | 10.2 | 34.5 | 49.8
DSSD513 [dssd]              | ResNet-101  | DSSD   | 10.0 | 33.2 | 53.3 | 35.2 | 13.0 | 35.4 | 51.1
Scarf SSD513 (ours)         | ResNet-101  | SCARF  | 11.5 | 34.5 | 54.1 | 36.3 | 15.1 | 36.1 | 51.6
RetinaNet [retinanet]       | ResNet-101  | FPN    | 15.4 | 34.4 | 53.1 | 36.8 | 14.7 | 38.5 | 49.1
RetinaNet [retinanet]       | ResNeXt-101 | FPN    | 9.3  | 40.8 | 61.1 | 44.1 | 24.1 | 44.2 | 51.2
Scarf RetinaNet (ours)      | ResNet-101  | SCARF  | 13.6 | 35.1 | 53.8 | 37.7 | 15.8 | 38.7 | 49.0
Scarf RetinaNet (ours)      | ResNeXt-101 | SCARF  | 8.4  | 41.6 | 62.0 | 44.6 | 24.5 | 45.5 | 52.3

Table 2: Detection results on MS COCO test-dev dataset: The symbol “*” indicates our re-implemented results. The expression “” means re-scaling of the input image introduced in the original RetinaNet paper.

4.2 Network Configuration

An advantage of our ScarfNet is that it has few hyper-parameters to determine. Note that the spatial dimensions of the feature maps are readily determined by those of the baseline detectors. The channel dimension of the intermediate feature maps is fixed over the pipeline between the two matching blocks in the ScNet and ArNet. Thus, we only need to choose this channel dimension, which we set to 256 based on empirical results.

4.3 Performance Evaluation

4.3.1 PASCAL VOC Results

Training on PASCAL VOC 2007 Dataset: The object detectors under consideration are trained with the VOC 2007 trainval and VOC 2012 trainval sets and evaluated on the VOC 2007 test set. When the ScarfNet is combined with the SSD baseline, we train our model over 120k iterations (around 240 epochs). We set the learning rate to for the first k iterations, decay the learning rate to for the next k iterations, and use the learning rate of for the last k iterations. The mini-batch size is set to , the momentum for the stochastic gradient descent (SGD) update is set to , and the weight decay is set to . When our method is combined with the RetinaNet baseline, we set the learning rate to for the first k iterations, decay the learning rate to for the next k iterations, and use the learning rate of for the last k iterations. Other parameters are set identically except for the weight decay of .
     Training on PASCAL VOC 2012 Dataset: The object detectors are trained with the VOC 2007 trainval, VOC 2007 test, and VOC 2012 trainval sets and evaluated on the VOC 2012 test set. When our model is combined with the SSD baseline, a total of 200k iterations are run with the same training parameters as in the VOC 2007 case. Note that we use the learning rate of for the first k iterations, for the next k iterations, and for the rest.
     Performance Comparison: Table 1 shows the mAP performance of the object detectors under comparison evaluated on the PASCAL VOC 2007 and 2012 test sets. For both the PASCAL VOC 2007 and 2012 cases, we observe that the semantic features generated by our ScarfNet offer a significant performance gain over the baseline detectors. In the case of PASCAL VOC 2007, the proposed method achieves 1.9% and 1.8% mAP gains over the SSD300 and SSD512 baselines, respectively. The proposed method also outperforms the RetinaNet baseline by 0.5%. Since the RetinaNet baseline employs the top-down structure based on FPN [fpn], we can deduce that the features generated by our method are superior to those generated by the FPN. Our object detector also achieves better performance than the other competing algorithms, including DSSD [dssd], ION [bell2016inside], and R-FCN [rfcn]. As shown in Table 1, our ScarfNet detector achieves the state-of-the-art performance on the PASCAL VOC 2007 dataset. Though the detection accuracy on the PASCAL VOC 2012 dataset degrades slightly compared to PASCAL VOC 2007, the tendency observed for PASCAL VOC 2007 remains. Note that the proposed detector maintains performance gains of 1.4% and 1.3% mAP over the SSD300 and SSD512 baselines, respectively.

Method                                                  | mAP (%)
--------------------------------------------------------|--------
Ablation study:                                         |
Baseline (SSD)                                          | 77.5
biLSTM                                                  | 79.1
biLSTM + channel-wise attention                         | 79.4
Other fusion strategies (used with channel-wise attention): |
1x1 conv.-based fusion                                  | 78.9
uniLSTM                                                 | 78.7
Top-down structure with lateral connections             | 78.6

Table 3: Results of ablation study on VOC 2007 test dataset.

Channel | Addition | Concat.
--------|----------|--------
64      | 78.3     | 78.8
128     | 78.6     | 79.1
256     | 79.1     | 79.4
512     | 79.5     | 79.2
1024    | 79.4     | 79.2

Table 4: mAP (%) performance for various combinations of channel dimension and semantic feature generation strategy when evaluated on VOC 2007 test set

4.3.2 COCO Results

Training: The object detectors under comparison are trained with the MS COCO trainval35k split [bell2016inside] (the union of the 80k train images and a random 35k subset of the 40k-image val split) and evaluated with the MS COCO test-dev set. For the training of the proposed structure based on RetinaNet [retinanet], we set the learning rate to for the first k iterations, decay the learning rate to for the next k iterations, and use the learning rate of for the last k iterations. The mini-batch size is set to , the momentum is set to , and the weight decay is set to .
     Performance comparison: Table 2 provides the detection accuracy of the algorithms tested on the MS COCO dataset. The experiment is conducted on various baseline detectors and feature pyramid modules. We consider both two-stage and one-stage baseline detectors and use the FPN [fpn] as the competing feature pyramid method. The proposed object detector achieves performance gains over the Faster R-CNN [fasterrcnn] baseline of 0.9%, 0.4%, and 1.2% for AP, AP50, and AP75, respectively. Also, our ScarfNet achieves 34.5% and 41.6% AP, which are 1.3% and 0.8% higher than the DSSD513 and RetinaNet baselines, respectively.

Figure 5: Visualization of the feature map: (top row) input image, (middle row) conv4_3 layer feature from the feature pyramid of the SSD300, (bottom row) conv4_3 level feature generated by the ScarfNet. Since the conv4_3 layer feature map is shallow, it fails to place strong activation properly on the objects. In contrast, the semantic feature generated by our ScarfNet captures the characteristics of the objects well.

4.4 Performance Analysis

4.4.1 Ablation Study

Benefits of biLSTM: It is worth investigating the effectiveness of the biLSTM for fusing the multi-scale features. Table 3 compares our method with different fusion strategies, including a 1x1 convolutional layer, the top-down structure, and a unidirectional LSTM. Our biLSTM achieves better performance than the others. This appears to be because the parameter sharing, gating units, and bidirectional processing of the biLSTM effectively propagate high-level information to reduce the semantic gap between the hierarchical features.
      Network Parameter Search: As mentioned, we need to determine the channel dimension of the intermediate feature maps. We also investigate which strategy is better for combining the output of the ScarfNet with the original feature pyramid: element-wise addition or channel-wise concatenation. In Table 4, we evaluate the performance of our detector for various combinations of the channel dimension (64, 128, 256, 512, and 1024) and the feature combining strategy. According to Table 4, the combination of the 512 channel dimension with element-wise addition leads to the best detection accuracy. However, since using 512 channels significantly increases the computational complexity of the entire network, we choose the 256 channel dimension with channel-wise concatenation.

4.4.2 Feature Visualization

We investigate the effectiveness of the ScarfNet via feature visualization. Fig. 5 compares the original pyramidal feature map of the largest size (middle row) with the semantic feature map from our ScarfNet (bottom row). To obtain the heat map, we take the channel with the highest average activation in the spatial domain. Due to the lack of semantic cues in the original feature map, it often fails to activate on the objects properly. In contrast, we observe that the ScarfNet feature map has strong activation over the whole region occupied by the objects, which leads to the improvement in overall detection performance.

5 Conclusions

In this paper, we proposed a deep architecture generating the multi-scale features with strong semantics to reliably detect the objects in various sizes. Our ScarfNet method transforms the pyramidal features produced by the baseline detector into evenly abstract features. To achieve this goal, the proposed ScarfNet fuses the pyramidal features using the biLSTM and distributes the semantics back to each multi-scale feature. We verified through the experiments conducted with PASCAL VOC and MS COCO datasets that the proposed ScarfNet offers a significant gain in detection performance over the baseline detectors. We also showed that our object detector achieves the state of the art performance on PASCAL VOC and COCO benchmark.