An Implementation of Faster RCNN with Study for Region Sampling

02/07/2017 ∙ by Xinlei Chen, et al. ∙ Carnegie Mellon University

We adapted the joint-training scheme of the Faster RCNN framework from Caffe to TensorFlow as a baseline implementation for object detection. Our code is made publicly available. This report documents the simplifications made to the original pipeline, with justifications from ablation analysis on both PASCAL VOC 2007 and COCO 2014. We further investigated the role of non-maximal suppression (NMS) in selecting regions-of-interest (RoIs) for region classification, and found that a biased sampling toward small regions helps performance and can achieve mAP on par with NMS-based sampling when trained to sufficient convergence.


1 Baseline Faster RCNN with Simplification

We adapted the joint-training scheme of the Faster RCNN detection framework [6] (https://github.com/rbgirshick/py-faster-rcnn) from Caffe (https://github.com/BVLC/caffe) to TensorFlow (https://github.com/tensorflow) as a baseline implementation. Our code is made publicly available at https://github.com/endernewton/tf-faster-rcnn. During the implementation process, several simplifications were made to the original pipeline, with observations from ablation analysis that they either do not affect or even potentially improve performance. The ablation analysis has the following default setup:

Base network.

Pre-trained VGG16 [8]. The feature map from conv5_3 is used for region proposals and fed into region-of-interest (RoI) pooling.

Datasets.

Both PASCAL VOC 2007 [2] and COCO 2014 [5]. For VOC we use the trainval split for training and test for evaluation. For COCO we use train+valminusminival for training and minival for evaluation, the same splits as the published model.

Training/Testing.

The default end-to-end, single-scale training/testing scheme is copied from the original implementation. The learning rate starts at the default base value and is reduced once by a factor of 10 partway through training, with separate step and total iteration counts for VOC and COCO. Following COCO challenge requirements, for each test image the detection pipeline outputs at most 100 detections.
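The step schedule described above can be sketched as follows; the base rate, decay factor, and step boundary below are illustrative placeholder values, not the paper's actual settings.

```python
def step_lr(iteration, base_lr=1e-3, gamma=0.1, stepsize=50000):
    """Piecewise-constant learning rate: decay once by `gamma` after
    `stepsize` iterations. All numeric defaults here are illustrative
    placeholders, not the paper's settings."""
    return base_lr * (gamma if iteration >= stepsize else 1.0)

print(step_lr(0))       # base rate before the step
print(step_lr(60000))   # decayed rate after the step
```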

Evaluation.

We use evaluation toolkits provided by the respective dataset. The metrics are based on detection average precision/recall.

The first notable change follows Huang et al. [4]. Instead of using the RoI pooling layer, we use the crop_and_resize operator, which crops and resizes feature maps to 14×14, and then max-pools them to 7×7 to match the input size of fc6.
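The crop-then-max-pool flow can be approximated in NumPy as below. This is a simplified nearest-neighbor stand-in for TensorFlow's bilinear `tf.image.crop_and_resize`, shown only to illustrate the shapes involved; the 14×14 crop and 7×7 output are assumptions matching a VGG16 fc6 input.

```python
import numpy as np

def crop_resize_maxpool(feat, box, crop_size=14, pool=2):
    """Crop a box (y1, x1, y2, x2 in normalized coords) from an HxWxC feature
    map, resize it to crop_size x crop_size with nearest-neighbor sampling
    (a crude stand-in for TensorFlow's bilinear crop_and_resize), then
    max-pool with a pool x pool window."""
    H, W, C = feat.shape
    y1, x1, y2, x2 = box
    ys = np.clip((y1 + (y2 - y1) * np.linspace(0, 1, crop_size)) * (H - 1), 0, H - 1)
    xs = np.clip((x1 + (x2 - x1) * np.linspace(0, 1, crop_size)) * (W - 1), 0, W - 1)
    crop = feat[np.round(ys).astype(int)][:, np.round(xs).astype(int)]  # 14x14xC
    out = crop.reshape(crop_size // pool, pool, crop_size // pool, pool, C)
    return out.max(axis=(1, 3))  # 7x7xC, matching fc6's expected input size

feat = np.random.rand(38, 50, 512).astype(np.float32)  # conv5_3-like feature map
pooled = crop_resize_maxpool(feat, (0.1, 0.2, 0.6, 0.9))
print(pooled.shape)  # (7, 7, 512)
```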

Second, we do not aggregate gradients over multiple images and regions [3]; instead we simply sample 256 regions from a single image during one forward-backward pass. Gradient accumulation across multiple batches is slow, and requires extra operators in TensorFlow. Note that this count refers to the regions sampled for training the region classifier; for training the region proposal network (RPN) we still use the default 256 regions.
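The single-image region sampling step can be sketched as follows; the batch size of 256 and the positive fraction of 0.25 are the standard Faster RCNN defaults, used here as assumptions.

```python
import numpy as np

def sample_rois(overlaps, batch=256, fg_fraction=0.25, fg_thresh=0.5, rng=None):
    """Sample RoI indices from one image's proposals given each proposal's max
    IoU with the ground truth. Positives are capped at fg_fraction * batch and
    the remainder of the minibatch is filled with negatives. Defaults follow
    the standard Faster RCNN configuration and are assumptions here."""
    rng = rng or np.random.default_rng(0)
    fg = np.flatnonzero(overlaps >= fg_thresh)
    bg = np.flatnonzero(overlaps < fg_thresh)
    n_fg = min(int(batch * fg_fraction), len(fg))
    keep_fg = rng.choice(fg, size=n_fg, replace=False)
    keep_bg = rng.choice(bg, size=min(batch - n_fg, len(bg)), replace=False)
    return np.concatenate([keep_fg, keep_bg])

overlaps = np.random.default_rng(1).random(2000)  # fake IoUs for 2000 proposals
rois = sample_rois(overlaps)
print(len(rois))  # 256
```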

Third, the original Faster RCNN removes small proposals (less than 16 pixels in height or width at the original scale). We find this step redundant and even harmful to performance, especially for small objects.
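The filtering step our pipeline removes looks roughly like this; the min_size of 16 pixels is the standard default, taken here as an assumption.

```python
import numpy as np

def filter_small(boxes, min_size=16):
    """Drop proposals whose width or height (in original-image pixels) falls
    below min_size -- the step removed in our pipeline, since it discards
    exactly the proposals that could cover small objects."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    return boxes[(w >= min_size) & (h >= min_size)]

boxes = np.array([[0, 0, 10, 10],    # 10x10: removed by the original pipeline
                  [0, 0, 40, 30]])   # 40x30: kept
kept = filter_small(boxes)
print(len(kept))  # 1
```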

Other minor changes that do not seem to affect performance include: 1) doubling the learning rate for biases; 2) stopping weight decay on biases; 3) removing aspect-ratio grouping (introduced to save memory); 4) excluding ground-truth bounding boxes from the RoIs during training, since they are not accessible during testing and can bias the input distribution for region classification.

For ablation analysis results on VOC 2007, see Table 1. Performance-wise, our implementation is in general on par with the original Caffe implementation. The crop_and_resize pooling appears to have a slight advantage over RoI pooling.

We further test the pipeline on COCO (Table 2). We fix the above design choices and only use crop_and_resize pooling, which in general gives better average recall than RoI pooling. Keeping the small region proposals also gives a consistent boost on small objects. Overall, our baseline implementation gives better AP and AR for small objects. Varying the total number of training iterations, we find the default schedule gives a good trade-off, as training longer risks over-fitting.

1.1 Training/Testing Speed

Ideally, our training procedure could almost cut the total time in half, since gradients are accumulated over only a single image. In practice, the larger region batch and the use of crop_and_resize pooling slow each iteration somewhat, and the underlying TensorFlow overhead adds further cost, so the per-iteration training time and per-image testing time we measure for a COCO model on a Titan X (non-Pascal) GPU in our experimental environment fall short of that ideal speed-up.

 

Train Test mAP  aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike persn plant sheep sofa train tv
NMS   NMS  70.9 67.5 78.4 67.0 53.4 58.9  78.2 85.1 84.4 49.2  82.1 66.7  77.3 84.3  75.4  77.3  46.2  71.0  66.6 75.2  73.5
ALL   TOP  70.4 73.9 77.7 67.0 56.6 47.7  80.3 83.8 83.8 48.0  77.9 68.6  80.8 84.0  76.5  75.7  41.6  69.2  66.6 77.6  70.3
PRE   TOP  71.1 72.7 79.0 67.3 58.8 53.3  80.9 85.2 84.8 50.6  80.3 66.4  80.1 83.5  74.2  77.6  44.3  69.7  65.7 76.9  70.9
POW   TOP  71.0 73.9 78.5 67.1 57.7 53.1  80.1 85.8 83.6 50.0  80.0 65.6  80.6 80.5  75.4  76.8  44.4  70.6  66.0 78.3  72.6
NMS   TOP  71.2 67.6 78.9 67.6 55.2 56.9  78.8 85.2 83.9 49.8  81.9 65.5  80.1 84.4  75.7  77.6  45.3  70.8  66.9 78.2  72.9

Table 3: VOC 2007 test object detection average precision (%). Analysis of different region sampling schemes for train/test combinations. Baseline (first row) uses NMS for both training and testing. Please refer to Section 2 for the detailed meaning of ALL, PRE, POW and TOP, none of which is based on NMS.

2 A Study of Region Sampling

 

Train Test AP   AP-.5 AP-.75 AP-S AP-M AP-L AR-1 AR-10 AR-100 AR-S AR-M AR-L
NMS   NMS  26.5 46.7  27.2   11.8 30.4 37.5 24.9 36.3  37.1   17.3 42.1 52.4
ALL   TOP  23.2 41.2  23.7   7.1  24.1 36.9 23.0 32.9  33.5   12.1 36.5 52.8
PRE   TOP  25.1 44.1  25.7   9.0  27.4 38.8 24.4 35.1  35.7   14.1 39.5 55.0
POW   TOP  25.2 44.6  25.6   9.6  28.3 37.6 24.4 35.5  36.4   14.9 40.5 55.5
NMS   TOP  26.9 47.0  27.7   12.0 31.0 38.9 25.3 37.2  38.1   17.6 43.1 54.0

ALL   TOP  25.0 43.5  25.4   7.8  26.1 39.5 24.2 34.4  35.1   13.1 38.2 55.1
PRE   TOP  26.6 45.7  27.7   9.8  29.3 41.5 25.3 36.5  37.3   15.3 41.9 56.0
POW   TOP  26.9 46.4  28.2   10.8 29.8 40.8 25.4 36.8  37.6   16.2 42.1 56.6
NMS   NMS  27.9 48.2  29.0   11.8 31.8 40.3 26.0 37.5  38.3   17.6 43.4 55.4
NMS   TOP  28.3 48.7  29.5   11.8 32.5 41.9 26.2 38.3  39.2   18.0 44.3 56.7

Table 4: COCO 2014 minival object detection average precision and recall (%) with the provided evaluation tool, grouped by training schedule (the second group is trained with a larger stepsize and itersize). Baseline (first row) uses NMS for both training and testing. Please refer to Section 2 for the detailed meaning of ALL, PRE, POW and TOP, none of which is based on NMS. stepsize is the number of training iterations before the learning rate is reduced; itersize is the total number of iterations.

We also investigated how the distribution of the region proposals fed into the region classifier can influence the training/testing process. In the original Faster RCNN, several steps are taken to select a set of regions:

  • First, take the top N regions according to RPN score.

  • Then, non-maximal suppression (NMS) with an overlap threshold of 0.7 is applied for de-duplication.

  • Third, the top n regions after NMS are selected as RoIs.

For training, N = 12000 and n = 2000 are used, and 256 regions are later sampled for training the region classifier with a pre-defined 1:3 positive/negative ratio; for testing, N = 6000 and n = 300 are used. We refer to this default setting as NMS.
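The three selection steps can be sketched as below; the 0.7 IoU threshold and the top-N/top-n counts follow the standard Faster RCNN defaults and are assumptions here.

```python
import numpy as np

def nms(boxes, scores, thresh=0.7):
    """Greedy non-maximal suppression; returns kept indices in score order."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= thresh]  # drop near-duplicates of box i
    return np.array(keep)

def select_rois(boxes, scores, pre_nms_top_n=12000, post_nms_top_n=2000):
    """Top-N by RPN score -> NMS de-duplication -> top-n RoIs."""
    order = scores.argsort()[::-1][:pre_nms_top_n]
    boxes, scores = boxes[order], scores[order]
    keep = nms(boxes, scores)[:post_nms_top_n]
    return boxes[keep]

# Two near-duplicate boxes and one distinct box: NMS keeps one of the pair.
boxes = np.array([[0, 0, 10, 10], [0.5, 0.5, 10.5, 10.5], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
rois = select_rois(boxes, scores, pre_nms_top_n=3, post_nms_top_n=2)
print(len(rois))  # 2
```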

In Ren et al. [6], a comparable mean average precision (mAP) can be achieved when the top-ranked proposals are directly selected without NMS during testing. This suggests that NMS can be removed at the cost of evaluating more RoIs. However, it is less clear whether NMS de-duplication is necessary during training. On a related note, NMS is believed to be crucial for selecting hard examples for Fast RCNN [7]. We therefore want to check whether this also holds for Faster RCNN in the joint-training setting.

Our first alternative (ALL) works by simply feeding all top regions into positive/negative sampling without NMS. While this alternative appears to optimize the same objective function as the one with NMS, there is a subtle difference: NMS implicitly biases the sampling procedure toward smaller regions. Intuitively, large regions are more likely to overlap than small regions, so large regions have a higher chance of being suppressed. A proper sampling bias is known to help networks converge more quickly [1], and one is in fact already used in Faster RCNN: a fixed positive/negative ratio that avoids always learning on negative patches. To this end, we add two more alternatives for comparison. The first one (PRE) computes the final scale distribution of regions from a pre-trained Faster RCNN model that uses NMS, and samples regions to match this distribution. The second one (POW) simply fits the sampling ratio to a power law of the region scale, where smaller scales receive higher ratios and the exponent is a fixed constant. While PRE still depends on a model trained with NMS, POW does not require NMS at all. To fit the target distribution, we keep all regions of the scale with the highest ratio, and randomly sample regions of the other scales according to their relative ratios; e.g., if the target distribution is 50%/30%/20% over three scales, all regions of the first scale are kept, while 60% and 40% of the regions of the other two scales are sampled. Note that for both schemes we enlarge the pool of top regions during training, since roughly half the regions are already thrown away by this scale-based sampling.
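The keep-probability logic for the scale-biased sampling can be sketched as follows; the 50%/30%/20% target over three scales is an illustrative, hypothetical distribution, not the paper's fitted one.

```python
import numpy as np

def scale_keep_probs(target_ratios):
    """Keep all regions of the scale with the highest target ratio; keep
    regions of every other scale with probability proportional to that
    scale's ratio relative to the peak."""
    r = np.asarray(target_ratios, dtype=float)
    return r / r.max()

def sample_by_scale(scale_ids, keep_probs, rng=np.random.default_rng(0)):
    """Randomly keep each region according to its scale's keep probability;
    returns the indices of the surviving regions."""
    u = rng.random(len(scale_ids))
    return np.flatnonzero(u < keep_probs[scale_ids])

# Hypothetical 50%/30%/20% target distribution over three scales.
probs = scale_keep_probs([0.5, 0.3, 0.2])
print(probs)  # [1.  0.6 0.4]

# Scale-0 regions always survive; scale-1 and scale-2 survive 60% / 40% of the time.
kept = sample_by_scale(np.array([0, 0, 1, 1, 2, 2]), probs)
```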

Following Ren et al. [6], we simply select the top proposals directly for evaluation. With little or no harm to precision but a direct benefit to recall, mAP generally increases as more proposals are kept; we choose the number of proposals to trade off speed and performance. This testing scheme is referred to as TOP.

We begin by showing results on VOC 2007 in Table 3. As can be seen, apart from ALL, the schemes with biased sampling all achieve the same level of mAP (around 71%). We also include results (last row) that use NMS during training but switch to TOP for testing. Somewhat to our surprise, this achieves better performance. In fact, we find this advantage of TOP over NMS exists consistently whenever enough proposals are evaluated.

A more thorough set of experiments was conducted on COCO, summarized in Table 4. Similar to VOC, we find that biased sampling (NMS, PRE and POW) in general gives better results than uniform sampling (ALL). In particular, with the shorter training schedule, NMS already offers performance similar to PRE/POW after the longer schedule. Out of curiosity, we also checked the model trained with NMS for the longer schedule, which converges to a better AP (28.3 on minival) with the TOP testing scheme. We did notice that with more iterations, the gap between NMS and POW narrows from 1.7 (26.9 vs. 25.2) to 1.4 (28.3 vs. 26.9), indicating that the latter schemes may catch up eventually. The difference from VOC suggests that the current schedules are not sufficient to fully converge on COCO; extra experiments with longer training iterations are needed for a more conclusive statement.

References