Repulsion Loss: Detecting Pedestrians in a Crowd

11/21/2017 · Xinlong Wang, et al. · Megvii Technology Limited · The University of Adelaide

Detecting individual pedestrians in a crowd remains a challenging problem since the pedestrians often gather together and occlude each other in real-world scenarios. In this paper, we first explore how a state-of-the-art pedestrian detector is harmed by crowd occlusion via experimentation, providing insights into the crowd occlusion problem. Then, we propose a novel bounding box regression loss specifically designed for crowd scenes, termed repulsion loss. This loss is driven by two motivations: the attraction by target, and the repulsion by other surrounding objects. The repulsion term prevents the proposal from shifting to surrounding objects thus leading to more crowd-robust localization. Our detector trained by repulsion loss outperforms all the state-of-the-art methods with a significant improvement in occlusion cases.


1 Introduction

Occlusion remains one of the most significant challenges in object detection although great progress has been made in recent years [10, 9, 24, 19, 1, 20, 11, 3]. In general, occlusion can be divided into two groups: inter-class occlusion and intra-class occlusion. The former one occurs when an object is occluded by stuff or objects of other categories, while the latter one, also referred to as crowd occlusion, occurs when an object is occluded by objects of the same category.

Figure 1: Illustration of our proposed repulsion loss. The repulsion loss consists of two parts: the attraction term to narrow the gap between a proposal and its designated target, as well as the repulsion term to distance it from the surrounding non-target objects.

In pedestrian detection [31, 14, 6, 5, 7, 21], crowd occlusion constitutes the majority of occlusion cases. The reason is that in application scenarios of pedestrian detection, e.g., video surveillance and autonomous driving, pedestrians often gather together and occlude each other. For instance, in the CityPersons dataset [33], there are a total of 3,157 pedestrian annotations in the validation subset, among which 48.8% overlap with another annotated pedestrian with an Intersection over Union (IoU) above 0.1. Moreover, 26.4% of all pedestrians have considerable overlaps with another annotated pedestrian, with an IoU above 0.3. This highly frequent crowd occlusion severely harms the performance of pedestrian detectors.

The main impact of crowd occlusion is that it significantly increases the difficulty of pedestrian localization. For example, when a target pedestrian T is overlapped by another pedestrian B, the detector is apt to get confused since the two pedestrians have similar appearance features. As a result, predicted boxes that should bound T will probably shift to B, leading to inaccurate localization. Even worse, since the primary detection results must be further processed by non-maximum suppression (NMS), shifted bounding boxes originally from T may be suppressed by the predicted boxes of B, in which case T turns into a missed detection. That is, crowd occlusion makes the detector sensitive to the NMS threshold: a higher threshold brings in more false positives, while a lower threshold leads to more missed detections. Such undesirable behaviors can also harm most instance segmentation frameworks [11, 18], since they require accurate detection results. Therefore, robustly localizing each individual person in crowd scenes is one of the most critical issues for pedestrian detectors.

In state-of-the-art detection frameworks [9, 24, 3, 19], the bounding box regression technique is employed for object localization, in which a regressor is trained to narrow the gap between proposals and ground-truth boxes measured by some kind of distance metric (e.g., the Smooth_L1 distance or IoU). Nevertheless, existing methods only require the proposal to get close to its designated target, without taking the surrounding objects into consideration. As shown in Figure 1, in the standard bounding box regression loss, there is no additional penalty for the predicted box when it shifts to the surrounding objects. This observation raises a question: to detect a target in a crowd, should the locations of its surrounding objects be taken into account?

Inspired by the characteristics of a magnet, i.e., magnets attract and repel, in this paper we propose a novel localization technique, referred to as repulsion loss (RepLoss). With RepLoss, each proposal is required not only to approach its designated target T, but also to keep away from other ground-truth objects as well as other proposals whose designated targets are not T. In other words, the bounding box regressor with RepLoss is driven by two motivations: attraction by the target, and repulsion by other surrounding objects and proposals. For example, as illustrated in Figure 1, a predicted box shifting to a surrounding non-target object is given an additional penalty for overlapping with it. Thus, RepLoss effectively prevents a predicted bounding box from shifting to adjacent overlapped objects, which makes the detector more robust to crowd scenes. Our main contributions are as follows:

  • We first experimentally study the impact of crowd occlusion on pedestrian detection. Specifically, on the CityPersons benchmark [33] we analyze both false positives and missed detections caused by crowd occlusion quantitatively, which provides important insights into the crowd occlusion problem.

  • Two types of repulsion losses are proposed to address the crowd occlusion problem, namely RepGT Loss and RepBox Loss. RepGT Loss directly penalizes the predicted box for shifting to the other ground-truth objects, while RepBox Loss requires each predicted box to keep away from the other predicted boxes with different designated targets, making the detection results less sensitive to NMS.

  • With the proposed repulsion losses, a crowd-robust pedestrian detector is trained end-to-end, which outperforms all the state-of-the-art methods on both the CityPersons and Caltech-USA benchmarks [7]. It should also be noted that the detector with repulsion loss significantly improves the detection accuracy for occlusion cases, highlighting the effectiveness of the repulsion loss. Furthermore, our experiments on the PASCAL VOC [8] detection dataset show that RepLoss is also beneficial for general object detection, beyond pedestrians.

2 Related Work

Object Localization.

With the recent development of convolutional neural networks (CNNs) [16, 26, 12], great progress has been made in object detection, in which object localization is generally framed as a regression problem that relocates an initial proposal to its designated target. In R-CNN [10], a linear regression model is trained with respect to the Euclidean distance between the coordinates of a proposal and its target. In Fast R-CNN [9], the Smooth_L1 loss is proposed to replace the Euclidean distance used in R-CNN for bounding box regression. Faster R-CNN [24] proposes the region proposal network (RPN), in which bounding box regression is performed twice to transform predefined anchors into final detection boxes. DenseBox [15] proposes an anchor-free, fully convolutional detection framework. IoU Loss is proposed in [29] to maximize the IoU between a ground-truth box and a predicted box. We note that the method of Desai et al. [4] also exploits attraction and repulsion between objects to capture the spatial arrangements of various object classes; however, it addresses object classification via a global model. In this work, we demonstrate the effectiveness of the repulsion loss for object localization in crowd scenes.

Pedestrian Detection. Pedestrian detection is a critical first step for many real-world applications. Traditional pedestrian detectors, such as ACF [5], LDCF [22] and Checkerboards [32], exploit various filters on Integral Channel Features (ICF) [6] with a sliding-window strategy to localize each target. Recently, CNN-based detectors [17, 30, 21, 14, 28] have come to dominate the field of pedestrian detection. In [28, 30], features from a deep neural network, rather than hand-crafted features, are fed into a boosted decision forest. [21] proposes a multi-task trained network to further improve detection performance. In [23, 27, 34], part-based models are utilized to handle occluded pedestrians. [13] works on improving the robustness of NMS, but it relies on an additional network for post-processing. Few previous works, however, focus on studying and overcoming the impact of crowd occlusion.

3 What is the Impact of Crowd Occlusion?

To provide insights into the crowd occlusion problem, in this section we experimentally study how much crowd occlusion influences pedestrian detection results. Before delving into the analysis, we first introduce the dataset and the baseline detector that we use.

3.1 Preliminaries

Dataset and Evaluation Metrics.

CityPersons [33] is a new pedestrian detection dataset built on top of the semantic segmentation dataset Cityscapes [2], whose images are captured in several cities in Germany. For each person, both a full-body bounding box and a visible-part bounding box are annotated, together with additional ignore regions. All of our experiments involving CityPersons are conducted on the reasonable train/validation subsets for training and testing, respectively. For evaluation, the log-average miss rate over the false positive per image (FPPI) range of [10^-2, 10^0] is used (denoted MR^-2; lower is better).
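As a side note, the log-average miss rate can be computed roughly as follows. This is a minimal NumPy sketch of the common evaluation protocol of Dollár et al. [7], not the benchmark's official evaluation code; miss_rates and fppi are assumed to be parallel arrays, sorted by ascending FPPI, traced out by sweeping the detection score threshold.

```python
import numpy as np

def log_average_miss_rate(miss_rates, fppi, num_points=9):
    """Sketch of MR^-2: sample the miss-rate curve at reference FPPI
    values evenly spaced in log-space over [1e-2, 1e0], then average
    in the log domain."""
    refs = np.logspace(-2.0, 0.0, num_points)
    sampled = []
    for r in refs:
        idx = np.where(fppi <= r)[0]  # fppi assumed ascending
        sampled.append(miss_rates[idx[-1]] if len(idx) else 1.0)
    return np.exp(np.mean(np.log(np.clip(sampled, 1e-10, None))))
```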

Detector. Our baseline detector is the commonly used Faster R-CNN [24] detector adapted for pedestrian detection, generally following the settings of Zhang et al. [31] and Mao et al. [21]. The difference between our implementation and theirs is that we replace the VGG-16 backbone with the faster and lighter ResNet-50 [12] network. It is worth noting that ResNet is rarely used in pedestrian detection, since its down-sampling rate at the convolution layers is too large for the network to detect and localize small pedestrians. To handle this, we use dilated convolutions, so that the final feature map is 1/8 of the input size. The ResNet-based detector achieves an MR^-2 of 14.6 on the validation set, which is slightly better than the result (15.4) reported in [33].

Figure 2: Numbers of missed detections of our baseline on the reasonable, reasonable-occ and reasonable-crowd subsets. Of all missed detections in the reasonable-occ subset, crowd occlusion accounts for a large share, making it a main obstacle to addressing occlusion issues.

3.2 Analysis on Failure Cases

Figure 3: Error analysis of our baseline and RepGT. (a) The number of missed detections in the reasonable-crowd subset under different detection scores. (b) The proportion of all false positives caused by crowd occlusion. The RepGT loss effectively reduces missed detections and false positives caused by crowd occlusion.

Missed Detections. With the results of the baseline detector, we first analyze the missed detections caused by crowd occlusion. Since the bounding box annotation of the visible part of each pedestrian is provided in CityPersons, the occlusion ratio can be calculated as occ = 1 − area(visible box) / area(full box). We define a ground-truth pedestrian with occ > 0.1 as an occlusion case, and one with occ > 0.1 and IoU > 0.1 with some other annotated pedestrian as a crowd occlusion case. By this definition, from the non-ignored pedestrian annotations in the reasonable validation set, two subsets are extracted: the reasonable-occ subset, consisting of the occlusion cases, and the reasonable-crowd subset, consisting of the crowd occlusion cases. Obviously, the reasonable-crowd subset is also a subset of the reasonable-occ subset.
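For concreteness, the subset extraction can be sketched as below; boxes are assumed to be in (x1, y1, x2, y2) format, and the 0.1 thresholds mirror the definitions above.

```python
def box_area(b):
    # b = (x1, y1, x2, y2)
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (box_area(a) + box_area(b) - inter + 1e-10)

def split_subsets(full_boxes, visible_boxes, occ_thr=0.1, iou_thr=0.1):
    """Flag occlusion cases (occ > occ_thr) and crowd occlusion cases
    (occ > occ_thr and IoU > iou_thr with some other pedestrian)."""
    occ_cases, crowd_cases = [], []
    for i, (fb, vb) in enumerate(zip(full_boxes, visible_boxes)):
        occ = 1.0 - box_area(vb) / (box_area(fb) + 1e-10)
        if occ <= occ_thr:
            continue
        occ_cases.append(i)  # reasonable-occ
        if any(iou(fb, full_boxes[j]) > iou_thr
               for j in range(len(full_boxes)) if j != i):
            crowd_cases.append(i)  # reasonable-crowd
    return occ_cases, crowd_cases
```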

In Figure 2, we report the numbers of missed detections on the reasonable, reasonable-occ and reasonable-crowd subsets. We observe that the performance drops significantly from the reasonable set to the reasonable-occ subset: at 20, 100 and 500 false positives, occlusion cases make up a large proportion of all missed detections, indicating that occlusion is a main factor harming the performance of the baseline detector. Of the missed detections in the reasonable-occ subset, crowd occlusion accounts for the majority, making it a main obstacle to addressing occlusion issues in pedestrian detection. Moreover, the miss rate on the reasonable-crowd subset is even higher than on the reasonable-occ subset, indicating that crowd occlusion is an even harder problem than inter-class occlusion. Finally, when we lower the threshold from 100 to 500 false positives, the portion of missed detections caused by crowd occlusion becomes larger, implying that missed detections caused by crowd occlusion are hard to rescue by lowering the threshold.

In Figure 3, the red line shows how many ground-truth pedestrians in the reasonable-crowd subset are missed under different detection scores. Since real-world applications only consider predicted bounding boxes with high confidence, the large number of missed detections at the top of the curve implies that we are far from saturation for real-world applications.

False Positives. We also analyze how many false positives are caused by crowd occlusion. We cluster all false positives into three categories: background, localization and crowd errors. A background error is a predicted bounding box with IoU below 0.1 with every ground-truth pedestrian; a localization error has IoU above 0.1 with exactly one ground-truth pedestrian; and a crowd error has IoU above 0.1 with at least two ground-truth pedestrians.
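A sketch of this categorization, reusing the iou helper from the snippet above; the 0.1 threshold is the one assumed in the definitions.

```python
def categorize_false_positive(pred_box, gt_boxes, thr=0.1):
    """Classify a false positive as 'background', 'localization' or
    'crowd' by how many ground-truth pedestrians it overlaps."""
    hits = sum(1 for g in gt_boxes if iou(pred_box, g) >= thr)
    if hits == 0:
        return "background"
    return "localization" if hits == 1 else "crowd"
```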

After that, we count the number of crowd errors and calculate their proportion among all false positives. The red line in Figure 3 shows that crowd errors contribute a relatively large proportion of all false positives. Through the visualization in Figure 4, we observe that crowd errors usually occur when a predicted box shifts slightly or dramatically to a neighboring non-target ground-truth object, or bounds the union of several overlapping ground-truth objects. Moreover, crowd errors usually have relatively high confidences, thus leading to top-ranked false positives. This indicates that a more discriminative loss is needed when performing bounding box regression, to improve the robustness of detectors in crowd scenes. More visualization examples can be found in the supplementary material.

Conclusion. The analysis of failure cases validates our observation: pedestrian detectors are severely harmed by crowd occlusion, as it constitutes the majority of missed detections and produces more false positives by increasing the difficulty of localization. To address these issues, in Section 4 the repulsion loss is proposed to improve the robustness of pedestrian detectors in crowd scenes.

Figure 4: Visualization examples of crowd errors. Green boxes are correctly predicted bounding boxes, while red boxes are false positives caused by crowd occlusion. The confidence scores output by the detector are also shown. The errors usually occur when a predicted box shifts slightly or dramatically to a neighboring ground-truth object (e.g., the top-right example), or bounds the union of several overlapping ground-truth objects (e.g., the bottom-right example).

4 Repulsion Loss

In this section we introduce the repulsion loss to address the crowd occlusion problem in detection. Inspired by the characteristics of a magnet, i.e., magnets attract and repel, the repulsion loss is made up of three components, defined as:

    L = L_Attr + α · L_RepGT + β · L_RepBox        (1)

where L_Attr is the attraction term, which requires a predicted box to approach its designated target, while L_RepGT and L_RepBox are the repulsion terms, which require a predicted box to keep away from other surrounding ground-truth objects and from other predicted boxes with different designated targets, respectively. The coefficients α and β act as weights to balance the auxiliary losses.

For simplicity, we consider only two-class detection in the following, assuming all ground-truth objects are from the same category. Let P = (l_P, t_P, w_P, h_P) and G = (l_G, t_G, w_G, h_G) be a proposal bounding box and a ground-truth bounding box, each represented by the coordinates of its top-left point together with its width and height. 𝒫+ is the set of all positive proposals (those that have a high IoU, e.g., IoU ≥ 0.5, with at least one ground-truth box; the rest are negative samples), and 𝒢 is the set of all ground-truth boxes in one image.
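The matching described here (each positive proposal is assigned the ground-truth box of maximum IoU) can be sketched as follows in NumPy; the 0.5 positive threshold is the example value given above.

```python
import numpy as np

def iou_matrix(proposals, gts):
    """Pairwise IoU between proposals (N, 4) and ground truths (M, 4),
    both in (x1, y1, x2, y2) format."""
    x1 = np.maximum(proposals[:, None, 0], gts[None, :, 0])
    y1 = np.maximum(proposals[:, None, 1], gts[None, :, 1])
    x2 = np.minimum(proposals[:, None, 2], gts[None, :, 2])
    y2 = np.minimum(proposals[:, None, 3], gts[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (proposals[:, 2] - proposals[:, 0]) * (proposals[:, 3] - proposals[:, 1])
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_p[:, None] + area_g[None, :] - inter + 1e-10)

def assign_targets(proposals, gts, pos_thr=0.5):
    """Return indices of positive proposals and, for every proposal,
    the index of its designated target G_Attr (maximum-IoU ground truth)."""
    ious = iou_matrix(proposals, gts)          # (N, M)
    best_gt = ious.argmax(axis=1)              # G_Attr per proposal
    pos_mask = ious.max(axis=1) >= pos_thr     # positive proposals
    return np.where(pos_mask)[0], best_gt
```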

Attraction Term. With the objective of narrowing the gap between predicted and ground-truth boxes under some distance metric (here, the distance is simply a measurement of the difference between two bounding boxes; it may not satisfy the triangle inequality), e.g., the Euclidean distance [10], the Smooth_L1 distance [9] or IoU [29], an attraction loss is commonly adopted in existing bounding box regression techniques. For a fair comparison, in this paper we adopt the Smooth_L1 distance for the attraction term, as in [21, 33], with the smooth parameter in Smooth_L1 set accordingly. Given a proposal P ∈ 𝒫+, we assign the ground-truth box with which it has the maximum IoU as its designated target: G_Attr^P = argmax_{G ∈ 𝒢} IoU(G, P). Let B^P be the predicted box regressed from proposal P. The attraction loss is then calculated as:

    L_Attr = ( Σ_{P ∈ 𝒫+} Smooth_L1(B^P, G_Attr^P) ) / |𝒫+|        (2)
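Continuing the NumPy sketch, a minimal version of the attraction term is shown below, assuming box coordinates are compared directly (real detectors regress parameterized offsets instead); beta is an assumed value for the Smooth_L1 smooth parameter.

```python
def smooth_l1(x, beta=1.0):
    """Elementwise Smooth_L1 penalty with assumed smooth parameter beta."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def attraction_loss(pred_boxes, gts, pos_idx, best_gt):
    """Eqn. 2: average Smooth_L1 distance between each positive
    predicted box B^P and its designated target G_Attr^P."""
    targets = gts[best_gt[pos_idx]]
    return smooth_l1(pred_boxes[pos_idx] - targets).sum(axis=1).mean()
```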

Repulsion Term (RepGT). The RepGT loss is designed to repel a proposal from neighboring ground-truth objects that are not its target. Given a proposal P ∈ 𝒫+, its repulsion ground-truth object is defined as the ground-truth object with which it has the largest IoU, excluding its designated target:

    G_Rep^P = argmax_{G ∈ 𝒢 \ {G_Attr^P}} IoU(G, P)        (3)

Inspired by the IoU Loss of [29], the RepGT loss is calculated to penalize the overlap between the predicted box B^P and G_Rep^P. The overlap is measured by Intersection over Ground-truth (IoG): IoG(B, G) = area(B ∩ G) / area(G). As IoG(B, G) ∈ [0, 1], we define the RepGT loss as:

    L_RepGT = ( Σ_{P ∈ 𝒫+} Smooth_ln( IoG(B^P, G_Rep^P) ) ) / |𝒫+|        (4)

where

    Smooth_ln(x) = −ln(1 − x),                      if x ≤ σ
                   (x − σ)/(1 − σ) − ln(1 − σ),     if x > σ        (5)

is a smoothed ln function that is continuously differentiable in (0, 1), and σ ∈ [0, 1) is the smooth parameter adjusting the sensitivity of the repulsion loss to outliers. Figure 5 shows its curves under different σ. From Eqn. 4 and Eqn. 5 we can see that the more a proposal tends to overlap with a non-target ground-truth object, the larger the penalty the RepGT loss adds to the bounding box regressor. In this way, the RepGT loss effectively stops a predicted bounding box from shifting to neighboring objects that are not its target.
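One possible implementation of Smooth_ln, IoG and the RepGT term under the definitions above, continuing the earlier NumPy sketches; rep_gt (the per-proposal argmax over non-target ground truths of Eqn. 3) is assumed to be precomputed from the IoU matrix.

```python
def smooth_ln(x, sigma=0.5):
    """Eqn. 5: -ln(1-x) for x <= sigma, linear continuation beyond."""
    x = np.clip(x, 0.0, 1.0 - 1e-6)
    if sigma >= 1.0:                  # pure -ln(1-x), no smoothing
        return -np.log(1.0 - x)
    return np.where(x <= sigma,
                    -np.log(1.0 - x),
                    (x - sigma) / (1.0 - sigma) - np.log(1.0 - sigma))

def iog(boxes, gts):
    """Intersection over Ground-truth: area(B ∩ G) / area(G)."""
    x1 = np.maximum(boxes[:, 0], gts[:, 0])
    y1 = np.maximum(boxes[:, 1], gts[:, 1])
    x2 = np.minimum(boxes[:, 2], gts[:, 2])
    y2 = np.minimum(boxes[:, 3], gts[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_g + 1e-10)

def rep_gt_loss(pred_boxes, gts, pos_idx, rep_gt, sigma=0.5):
    """Eqn. 4: penalize IoG between each positive predicted box and
    its repulsion ground truth (largest-IoU non-target ground truth)."""
    overlaps = iog(pred_boxes[pos_idx], gts[rep_gt[pos_idx]])
    return smooth_ln(overlaps, sigma).mean()
```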

Repulsion Term (RepBox). NMS is a necessary post-processing step in most detection frameworks, merging primary predicted bounding boxes that are supposed to bound the same object. However, detection results are affected significantly by NMS, especially in crowd cases. To make the detector less sensitive to NMS, we further propose the RepBox loss, whose objective is to repel each proposal from other proposals with different designated targets. We divide the proposal set 𝒫+ into mutually disjoint subsets according to the designated target of each proposal: 𝒫+ = 𝒫_1 ∪ 𝒫_2 ∪ … ∪ 𝒫_|𝒢|. Then, for two proposals P_i ∈ 𝒫_i and P_j ∈ 𝒫_j with i ≠ j, we expect the overlap of the predicted boxes B^{P_i} and B^{P_j} to be as small as possible. The RepBox loss is therefore calculated as:

    L_RepBox = ( Σ_{i ≠ j} Smooth_ln( IoU(B^{P_i}, B^{P_j}) ) ) / ( Σ_{i ≠ j} 1[IoU(B^{P_i}, B^{P_j}) > 0] + ε )        (6)

where 1[·] is the indicator function and ε is a small constant to avoid division by zero. From Eqn. 6 we can see that, to minimize the RepBox loss, the IoU between two predicted boxes with different designated targets needs to be small. In other words, the RepBox loss reduces the probability that predicted bounding boxes with different regression targets are merged into one after NMS, making the detector more robust to crowd scenes.
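A matching sketch for the RepBox term, reusing iou_matrix and smooth_ln from the earlier snippets; pairs of predicted boxes with different designated targets are penalized for overlapping, and only overlapping pairs are counted in the denominator, as in Eqn. 6.

```python
def rep_box_loss(pred_boxes, pos_idx, best_gt, sigma=0.0, eps=1e-6):
    """Eqn. 6: repel predicted boxes whose designated targets differ."""
    boxes = pred_boxes[pos_idx]
    targets = best_gt[pos_idx]
    ious = iou_matrix(boxes, boxes)
    # keep each unordered pair once, only across different targets
    i, j = np.triu_indices(len(boxes), k=1)
    mask = targets[i] != targets[j]
    pair_ious = ious[i[mask], j[mask]]
    overlap = pair_ious > 0                    # the indicator 1[IoU > 0]
    return smooth_ln(pair_ious, sigma)[overlap].sum() / (overlap.sum() + eps)
```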

Figure 5: Curves of Smooth_ln under different smooth parameters σ. The smaller σ is, the less sensitive the loss is to outliers.

4.1 Discussion

Distance Metric. It is worth noting that we choose IoG or IoU, rather than the Smooth_L1 metric, to measure the distance between two bounding boxes in the repulsion terms. The reason is that the values of IoG and IoU are bounded in [0, 1] while the Smooth_L1 metric is boundless; if we used Smooth_L1 in a repulsion term, e.g., in the RepGT loss, it would push the predicted box as far as possible from its repulsion ground-truth object. In contrast, the IoG criterion only requires the predicted box to minimize its overlap with the repulsion ground-truth object, which better fits our motivation.

In addition, IoG is adopted in the RepGT loss rather than IoU because, with an IoU-based loss, the bounding box regressor could learn to minimize the loss by simply enlarging the predicted box to increase the denominator area(B^P ∪ G_Rep^P). We therefore choose IoG, whose denominator area(G_Rep^P) is a constant for a particular ground-truth object, so that the regressor minimizes the overlap area directly.
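A quick numeric check of this argument, using the iou_matrix and iog helpers sketched above: enlarging a predicted box around a fixed ground truth shrinks its IoU (the union grows), so an IoU-based repulsion penalty could be gamed by inflating the box, whereas IoG keeps penalizing the uncovered overlap.

```python
gt = np.array([[0.0, 0.0, 10.0, 20.0]])
small = np.array([[0.0, 0.0, 10.0, 10.0]])
big = np.array([[-10.0, -10.0, 20.0, 30.0]])  # enlarged prediction, covers gt

print(iou_matrix(small, gt), iog(small, gt))  # IoU = 0.5,    IoG = 0.5
print(iou_matrix(big, gt), iog(big, gt))      # IoU ~ 0.167,  IoG = 1.0
```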

Smooth Parameter σ. Compared to [29], which directly uses −ln(IoU) as the loss function, we introduce the smoothed ln function and a smooth parameter σ in both the RepGT and RepBox losses. As shown in Figure 5, σ adjusts the sensitivity of the repulsion loss to outliers (pairs of boxes with large overlap). Since predicted boxes are much denser than ground-truth boxes, a pair of predicted boxes is more likely to have a large overlap than a pair of one predicted box and one ground-truth box; there will thus be more outliers in RepBox than in RepGT. Intuitively, the RepBox loss should therefore be less sensitive to outliers (smaller σ) than the RepGT loss. More detailed studies of the smooth parameter σ and the auxiliary loss weights α and β are provided in Section 5.2.

5 Experiments

The experiment section is organized as follows: we first introduce the basic experiment settings as well as the implementation details of repulsion loss in Section 5.1; then the proposed RepGT Loss and RepBox Loss are evaluated and analyzed on the CityPersons [33] benchmark respectively in Section 5.2; finally, in Section 5.3, the detector with repulsion loss is compared with the state-of-the-art methods side-by-side on both CityPersons [33] and Caltech-USA [7].

5.1 Experiment Settings

Datasets. Besides the CityPersons [33] benchmark introduced in Section 3, we also carry out experiments on the Caltech-USA dataset [7]. As one of the predominant datasets and benchmarks for pedestrian detection, Caltech-USA has witnessed inspiring progress in this field. A total of 2.5 hours of video is divided into training and testing subsets. In [31], Zhang et al. provide refined annotations, in which the training data are refined automatically while the testing data are meticulously re-annotated by human annotators. We conduct all Caltech-USA experiments on the new annotations unless otherwise stated.

Training Details. Our framework is implemented on our self-built fast and flexible deep learning platform. The base learning rate is set to 0.016 and decreased by a factor of 10 partway through training, with separate iteration schedules for CityPersons and Caltech-USA. The Stochastic Gradient Descent (SGD) solver is adopted to optimize the network on 4 GPUs, with a mini-batch of 1 image per GPU. Weight decay is set to 0.0001 and momentum to 0.9. Multi-scale training/testing is not applied, to ensure fair comparisons with previous methods. For Caltech-USA, we use the 10x training set. Online Hard Example Mining (OHEM) [25] is used to accelerate convergence.

σ               | 0    | 0.5  | 1.0
RepGT (MR^-2)   | 14.3 | 14.5 | 13.7
Improvement     | +0.3 | +0.1 | +0.9
RepBox (MR^-2)  | 13.7 | 14.2 | 14.3
Improvement     | +0.9 | +0.4 | +0.3
Table 1: The MR^-2 of the RepGT and RepBox losses, and their improvements over the baseline, under different smooth parameters σ, on the CityPersons validation set.

α (RepGT)  | 0.3  | 0.4  | 0.5  | 0.6  | 0.7
β (RepBox) | 0.7  | 0.6  | 0.5  | 0.4  | 0.3
MR^-2      | 13.9 | 13.9 | 13.2 | 13.3 | 14.1
Table 2: Balancing the RepGT and RepBox losses by adjusting the weights α and β. Empirically, α = 0.5 and β = 0.5 yields the best performance. Results are obtained on the CityPersons validation subset.
Figure 6: Results with the RepBox loss across various NMS thresholds. The curve of RepBox is smoother than that of the baseline, indicating that it is less sensitive to the NMS threshold.
Method                            | Scale | Reasonable | Heavy | Partial | Bare
Zhang et al. [33]                 | ×1    | 15.4       | 55.0  | 18.9    | 9.3
Zhang et al. [33]                 | ×1.3  | 14.8       | -     | -       | -
Zhang et al. [33] (+Segmentation) | ×1.3  | 12.8       | -     | -       | -
Baseline                          | ×1    | 14.6       | 60.6  | 18.6    | 7.9
RepLoss (+RepGT)                  | ×1    | 13.7       | 57.5  | 17.3    | 7.2
RepLoss (+RepBox)                 | ×1    | 13.7       | 59.1  | 17.2    | 7.8
RepLoss (+RepGT & RepBox)         | ×1    | 13.2       | 56.9  | 16.8    | 7.6
RepLoss (+RepGT & RepBox)         | ×1.3  | 11.6       | 55.3  | 14.8    | 7.0
RepLoss (+RepGT & RepBox)         | ×1.5  | 10.9       | 52.9  | 13.4    | 6.3
Table 3: Pedestrian detection results (MR^-2) with RepLoss evaluated on CityPersons [33]. Models are trained on the train set and tested on the validation set, with ResNet-50 as the backbone architecture.

5.2 Ablation Study

RepGT Loss. In Table 1, we report the results of the RepGT loss with different values of the smooth parameter σ. When σ is set to 1, adding the RepGT loss yields the best performance of 13.7 MR^-2 under the reasonable evaluation setup, outperforming the baseline by 0.9 points. Setting σ to 1 means we directly sum over −ln(1 − IoG) with no smoothing at all, similar to the loss function used in IoU Loss [29].

We also compare RepGT with the baseline in terms of missed detections and false positives. In Figure 3(a), adding the RepGT loss effectively decreases the number of missed detections in the reasonable-crowd subset. The curve of RepGT is consistently below that of the baseline when the detection-score threshold is high, and the two curves converge at lower scores. Around the score threshold commonly used in real applications, RepGT reduces the number of missed detections by a sizable relative margin. In Figure 3(b), false positives caused by crowd occlusion account for a smaller proportion with the RepGT loss than with the baseline detector. This demonstrates that the RepGT loss is effective at reducing both missed detections and false positives in crowd scenes.

RepBox Loss. For the RepBox loss, we also experiment with different smooth parameters σ, reported in the fourth row of Table 1. Setting σ to 0 yields the best performance of 13.7 MR^-2, on par with RepGT at σ = 1. Setting σ to 0 means we completely smooth the ln function into a linear function and directly sum over IoU. We conjecture that the RepBox loss has more outliers than the RepGT loss, since predicted boxes are much denser than ground-truth boxes.

As mentioned in Section 1, detectors in crowd scenes are sensitive to the NMS threshold: a high NMS threshold may lead to more false positives, while a low one may lead to more missed detections. In Figure 6 we show the results with the RepBox loss across various NMS thresholds. In general, the performance of the detector with the RepBox loss varies more smoothly across thresholds than the baseline, and at low NMS thresholds the gap between the baseline and RepBox is substantial, indicating that the latter is less sensitive to the NMS threshold. Through the visualization in Figure 7, with RepBox there are fewer predictions lying between two adjacent ground truths, which is desirable in crowd scenes. More examples are shown in the supplementary material.
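For reference, a textbook greedy NMS in the same NumPy style; sweeping nms_thr over a grid and re-evaluating MR^-2 reproduces the kind of sensitivity curve shown in Figure 6. This is the standard algorithm, not a description of any particular framework's implementation.

```python
def nms(boxes, scores, nms_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping
    it with IoU above nms_thr, repeat. Returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        ious = iou_matrix(boxes[order[1:]], boxes[i:i + 1])[:, 0]
        order = order[1:][ious <= nms_thr]
    return keep
```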

Balance of RepGT and RepBox. The RepGT and RepBox losses each help detectors in crowd scenes when added alone, but we have not yet studied how to balance the two. Table 2 shows the results under different settings of α and β. Empirically, α = 0.5 and β = 0.5 yields the best performance.

Method              | Reasonable, IoU=0.5 | Reasonable, IoU=0.75
Zhang et al. [33]   | 5.8                 | 30.6
Mao et al. [21]     | 5.5                 | 43.4
Zhang et al. [33]*  | 5.1                 | 25.8
Baseline            | 5.6                 | 28.7
+RepGT              | 5.0                 | 27.1
+RepBox             | 5.3                 | 26.2
+RepGT & RepBox     | 5.0                 | 26.3
+RepGT & RepBox*    | 4.0                 | 23.0
Table 4: Results (MR^-2, reasonable setup) on the Caltech-USA test set, evaluated on the new annotations [31]. On a strong baseline, we further improve the state of the art to a remarkable 4.0 under the IoU = 0.5 threshold. The consistent gain when increasing the IoU threshold to 0.75 demonstrates the effectiveness of the repulsion loss. *: the network is pre-trained on the CityPersons dataset.

5.3 Comparisons with State-of-the-art Methods

To demonstrate our effectiveness under different occlusion levels, we divide the reasonable subset (occlusion ≤ 35%) into the reasonable-partial subset (10% < occlusion ≤ 35%), denoted as the Partial subset, and the reasonable-bare subset (occlusion ≤ 10%), denoted as the Bare subset. Annotations with occlusion above 35% (not in the reasonable set) are denoted as the Heavy subset. Table 3 summarizes our results on CityPersons. In general, the RepGT and RepBox losses bring improvements across all evaluation subsets. Combined, our proposed repulsion loss achieves 13.2 MR^-2, an absolute 1.4-point improvement over our baseline. In terms of occlusion levels, RepLoss boosts performance on the Heavy subset by a remarkably large margin of 3.7 points, and on the Partial subset by a relatively smaller margin of 1.8 points, while bringing only a slight improvement of 0.3 points on the Bare subset. This is in accordance with our intention: RepLoss is specifically designed to address the occlusion problem.

Figure 7: Visualized comparison of predicted bounding boxes (before NMS) from the baseline and from RepBox. With RepBox, fewer predictions lie between two adjacent ground truths, which is desirable in crowd scenes. More examples are shown in the supplementary material.

We also evaluate RepLoss on the new Caltech-USA annotations. Results are shown in Table 4. On a strong baseline, RepLoss achieves an MR^-2 of 4.0 at an IoU = 0.5 matching threshold and 23.0 at IoU = 0.75. The consistent, and even larger, gain when increasing the IoU threshold demonstrates the ability of our framework to handle occlusion, since occlusion errors are known to be more sensitive to a higher matching threshold. Result curves are shown in Figure 8.

Figure 8: Comparisons with state-of-the-art methods on the new Caltech test subset.

6 Extensions: General Object Detection

Our RepLoss is a generic loss function for object detection in crowd scenes and can be used in applications other than pedestrian detection. In this section, we apply the repulsion loss to general object detection.

We conduct our experiments on the PASCAL VOC dataset [8], a common evaluation benchmark for general object detection, covering 20 object categories. The standard evaluation metric for VOC is mean Average Precision (mAP) over all categories. We adopt the vanilla Faster R-CNN [24] framework, using ImageNet-pretrained ResNet-101 [12] as the backbone, with the standard NMS threshold. The model is trained on the train and validation subsets of PASCAL VOC 2007 and PASCAL VOC 2012, and is evaluated on the test subset of PASCAL VOC 2007. Our re-implemented baseline is better than the original one by 3.1 mAP (79.5 vs. 76.4; see Table 5).

Results are shown in Table 5. The gain over the entire dataset is not significant. Nevertheless, when evaluated on the crowd subset (objects having intra-class IoU greater than 0.1 with another object), RepLoss outperforms the baseline by 2.1 mAP. These results demonstrate that our method is generic and can be extended to general object detection.

Method               | mAP  | mAP on Crowd
Faster R-CNN [12]    | 76.4 | -
Faster R-CNN (ReIm)  | 79.5 | 38.7
+ RepGT              | 79.8 | 40.8
Table 5: General object detection results evaluated on the PASCAL VOC 2007 [8] benchmark. ReIm is our re-implemented Faster R-CNN. The crowd subset contains ground-truth objects that overlap with at least one other ground-truth object of the same category with IoU above 0.1. Our RepGT loss outperforms the baseline by 2.1 mAP on the crowd subset.

7 Conclusion

In this paper, we have carefully designed the repulsion loss (RepLoss) for pedestrian detection, which improves detection performance, particularly in crowd scenes. The main motivation of the repulsion loss is that the attraction-by-target loss alone may not be sufficient for training an optimal detector, and repulsion-by-surrounding can be very beneficial.

To implement the repulsion energy, we have introduced two types of repulsion losses. We achieve the best reported performance on two popular datasets: Caltech and CityPersons. Notably, our result on CityPersons, without using pixel annotations, outperforms the previous best result [33], which uses pixel annotations, by about 2%. Detailed experimental comparisons have demonstrated the value of the proposed RepLoss, which improves detection accuracy by a large margin in occlusion scenarios. Results on generic object detection (PASCAL VOC) further show its usefulness. We expect wide application of the proposed loss in many other object detection tasks.

References

  • [1] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos. A unified multi-scale deep convolutional neural network for fast object detection. In European Conference on Computer Vision, pages 354–370. Springer, 2016.
  • [2] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213–3223, 2016.
  • [3] J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. In Advances in neural information processing systems, pages 379–387, 2016.
  • [4] C. Desai, D. Ramanan, and C. C. Fowlkes. Discriminative models for multi-class object layout. International journal of computer vision, 95(1):1–12, 2011.
  • [5] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8):1532–1545, 2014.
  • [6] P. Dollár, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In British Machine Vision Conference, 2009.
  • [7] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: A benchmark. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 304–311. IEEE, 2009.
  • [8] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
  • [9] R. Girshick. Fast r-cnn. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  • [10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In The IEEE International Conference on Computer Vision (ICCV), 2017.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [13] J. Hosang, R. Benenson, and B. Schiele. Learning non-maximum suppression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [14] J. Hosang, M. Omran, R. Benenson, and B. Schiele. Taking a deeper look at pedestrians. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4073–4082, 2015.
  • [15] L. Huang, Y. Yang, Y. Deng, and Y. Yu. Densebox: Unifying landmark localization with end to end object detection. arXiv preprint arXiv:1509.04874, 2015.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [17] J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan. Scale-aware fast r-cnn for pedestrian detection. IEEE Transactions on Multimedia, 2017.
  • [18] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 2359–2367, 2017.
  • [19] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [20] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In The IEEE International Conference on Computer Vision (ICCV), 2017.
  • [21] J. Mao, T. Xiao, Y. Jiang, and Z. Cao. What can help pedestrian detection? In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [22] W. Nam, P. Dollár, and J. H. Han. Local decorrelation for improved detection. arXiv preprint arXiv:1406.1134, 2014.
  • [23] W. Ouyang and X. Wang. A discriminative deep model for pedestrian detection with occlusion handling. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3258–3265. IEEE, 2012.
  • [24] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.
  • [25] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 761–769, 2016.
  • [26] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [27] Y. Tian, P. Luo, X. Wang, and X. Tang. Deep learning strong parts for pedestrian detection. In Proceedings of the IEEE international conference on computer vision, pages 1904–1912, 2015.
  • [28] B. Yang, J. Yan, Z. Lei, and S. Z. Li. Convolutional channel features. In Proceedings of the IEEE international conference on computer vision, pages 82–90, 2015.
  • [29] J. Yu, Y. Jiang, Z. Wang, Z. Cao, and T. Huang. Unitbox: An advanced object detection network. In Proceedings of the 2016 ACM on Multimedia Conference, pages 516–520. ACM, 2016.
  • [30] L. Zhang, L. Lin, X. Liang, and K. He. Is faster r-cnn doing well for pedestrian detection? In European Conference on Computer Vision, pages 443–457. Springer, 2016.
  • [31] S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele. How far are we from solving pedestrian detection? In IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2016.
  • [32] S. Zhang, R. Benenson, and B. Schiele. Filtered channel features for pedestrian detection. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1751–1760. IEEE, 2015.
  • [33] S. Zhang, R. Benenson, and B. Schiele. Citypersons: A diverse dataset for pedestrian detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [34] C. Zhou and J. Yuan. Multi-label learning of part detectors for heavily occluded pedestrian detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3486–3495, 2017.
