Large-Scale Object Detection in the Wild from Imbalanced Multi-Labels

by   Junran Peng, et al.

Training with more data has always been the most stable and effective way of improving performance in the deep learning era. As the largest object detection dataset so far, Open Images brings great opportunities and challenges for object detection in general and sophisticated scenarios. However, owing to the semi-automatic collecting and labeling pipeline used to deal with the huge data scale, the Open Images dataset suffers from label-related problems: objects may explicitly or implicitly have multiple labels, and the label distribution is extremely imbalanced. In this work, we quantitatively analyze these label problems and provide a simple but effective solution. We design a concurrent softmax to handle the multi-label problems in object detection and propose a soft-balance sampling method with a hybrid training scheduler to deal with the label imbalance. Overall, our method yields a dramatic improvement of 3.34 points, leading to the best single model with 60.90 mAP on the public object detection test set of Open Images. Our ensemble result achieves 67.17 mAP, which is 4.29 points higher than the best result of the Open Images public test 2018.






1 Introduction

Data plays a primary and decisive role in deep learning. With the advent of the ImageNet dataset [8], deep neural networks [15] became well exploited for the first time, and an enormous number of deep learning works sprang up. Some recent works [24, 39] also show that larger quantities of data with labels of low quality (like hashtags) can surpass state-of-the-art methods by a large margin. Throughout the history of deep learning, it is easy to see that the development of an area is closely tied to its data.

In the past years, great progress has also been achieved in the field of object detection. Generic object detection datasets with high-quality annotations, like Pascal VOC [9] and MS COCO [21], greatly boosted the development of object detection and gave birth to plenty of remarkable methods [29, 28, 22, 20]. However, these datasets are quite small by today's standards, and have begun to limit the advancement of the object detection area to some degree. Attempts are frequently made to focus on atomic problems on these datasets instead of exploring object detection in harder scenarios.

Recently, the Open Images dataset was published, with 1.7 million images and 12.4 million boxes annotated over 500 categories. This unseals the limits of data-hungry methods and may stimulate research that brings object detection to more general and sophisticated situations. However, accurately annotating data at such scale is so labor intensive that fully manual labeling is almost infeasible. The annotation procedure of the Open Images dataset is completed with strong assistance from deep learning: candidate labels are generated by models and verified by humans. This inevitably weakens label quality, because of model uncertainty and the knowledge limitations of individual annotators, which leads to several major problems.

Objects in the Open Images dataset may explicitly or implicitly have multiple labels, which differs from traditional object detection. The object classes in Open Images form a hierarchy, so most objects may hold a leaf label and all the corresponding parent labels. However, due to annotation quality, there are cases where objects are labeled only with parent classes and the leaf classes are absent. Apart from hierarchical labels, objects in the Open Images dataset may also hold several leaf classes, like car and toy. Another annoying case is that objects of similar classes are frequently annotated as each other in both the training and validation sets, for example torch and flashlight, as shown in Figure 1.

As the images of the Open Images dataset are collected from open sources in the wild, the label distribution is extremely imbalanced, including both very frequent and infrequent classes. Hence, methods for balancing the label distribution are needed to train detectors well. Nevertheless, earlier methods like over-sampling tend to cause over-fitting on infrequent categories and fail to fully use the data of frequent categories.

In this work, we engage in solving these major problems in large-scale object detection. We design a concurrent softmax to deal with the explicit and implicit multi-label problems. We propose a soft-balance method together with a hybrid training scheduler to mitigate the over-fitting on infrequent categories and better exploit the data of frequent categories. Our methods yield a total gain of 3.34 points, leading to a 60.90 mAP single-model result on the public test-challenge set of Open Images, which is 5.09 points higher than the single-model result of last year's first-place method [1] on the same set. More importantly, our overall system achieves 67.17 mAP, which is 4.29 points higher than their ensemble result.

2 Related Works

Object Detection.

Generic object detection is a fundamental and challenging problem in computer vision, and plenty of works [29, 28, 22, 20, 7, 23, 3, 19, 27] in this area have appeared in recent years. Faster R-CNN [29] first proposed an end-to-end two-stage framework for object detection and laid a strong foundation for successive methods. In [7], deformable convolution is proposed to adaptively sample input features to deal with objects of various scales and shapes. [19, 27] utilize dilated convolutions to enlarge the effective receptive fields of detectors to better recognize large objects. [3, 23] focus on predicting boxes of higher quality to accommodate the COCO metric.

However, most works still experiment on datasets like Pascal VOC and MS COCO, which are small by modern standards. Only a few works [25, 38, 1, 10] deal with large-scale object detection datasets like Open Images [16]. Wu et al. [38] propose a soft box-sampling method to cope with the partial-label problem in large-scale datasets. In [25], a part-aware sampling method is designed to capture the relations between entities and parts to help recognize part objects.

Multi-Label Recognition.

There have been many amazing attempts [11, 34, 40, 18, 33, 37, 5] to solve the multi-label classification problem from different aspects. One simple and intuitive approach [2] is to transform the multi-label classification into multiple binary classification problems and fuse the results, but this neglects relationships between labels. Some works [13, 17, 33, 36] embed dependencies among labels with deep learning to improve the performance of multi-label recognition. In [18, 33, 37, 5], graph structures are utilized to model the label dependencies. Gong et al. [11] uses a ranking based learning strategy and reweights losses of different labels to achieve better accuracy. Wang et al. [34] proposes a CNN-RNN framework to embed labels into latent space to capture the correlation between them.

Imbalanced Label Distribution.

There have been many efforts to handle long-tailed label distributions through data-based resampling strategies [31, 10, 4, 26, 41, 24] or loss-based methods [6, 14, 35]. In [31, 10, 26], class-aware sampling is applied so that each mini-batch is filled as uniformly as possible with respect to different classes. [4, 41] expand the samples of minor classes by synthesizing new data. Mahajan et al. [24] compute a replication factor for each image based on the distribution of hashtags and duplicate images the prescribed number of times.

As for loss-based methods, loss weights are assigned to samples of different classes to match the imbalanced label distribution. In [14, 35], samples are re-weighted by inverse class frequency, while Cui et al. [6] calculate the effective number of each class to re-balance the loss. In OHEM [32] and focal loss [20], the difficulty of a sample is evaluated in terms of its loss, and hard samples are assigned higher loss weights.

Figure 2: The imbalance magnitude of Open Images and MS COCO dataset. Imbalance magnitude means the number of the images of the largest category divided by the smallest. (best viewed on high-resolution display)
(a) We select the most confused category pairs and show their concurrent rates.
(b) We show the ratio of parent annotations without leaf labels to total parent annotations.
Figure 3: Implicit multi-label problem caused by confused categories and absence of leaf classes.

3 Problem Setting

The Open Images dataset is, to the best of our knowledge, the largest released object detection dataset to date. It contains 12.4 million annotated instances for 500 categories in 1.7 million images. Given its size, manually annotating such a huge number of images over 500 categories is infeasible. Owing to its scale and annotation style, we argue that there are three major problems with this kind of dataset, apart from the missing-annotation problem which has been discussed in [38, 25].

Objects may explicitly have multiple labels. As objects in the physical world carry rich categorical attributes at different levels, the 500 categories in the Open Images dataset form a class hierarchy of 5 levels, with 443 of the nodes as leaf classes and 57 as parent classes. Some objects therefore hold multiple labels, including leaf labels and parent labels, like apple and fruit as shown in Figure 1(b). Another case is that an object can easily have multiple leaf labels; for instance, a car-toy is labeled as both toy and car, as shown in Figure 1(a). This happens frequently in the dataset, and all leaf labels are required to be predicted during evaluation. Unlike previous single-label object detection, handling multiple labels is one of the crucial factors for object detection on this dataset.

Objects may implicitly have multiple labels. Beyond the explicit multi-label problem, there is also an implicit multi-label problem caused by the limited and inconsistent knowledge of human annotators. Many pairs of leaf categories are hard to distinguish, and labels within these pairs are mixed up randomly. We analyze the proportion of objects of one leaf class that are labeled as another, and find at least 115 pairs of severely confused categories. We display the most confused pairs in Figure 3(a) and find many categories heavily confused; for instance, a large fraction of torches are labeled as flashlight, and many leopards are labeled as cheetah.

Besides, labels of leaf and parent classes are often incomplete, so that a large number of objects are annotated only with parent labels and no leaf label. As shown in Figure 1(c), an apple is sometimes labeled as apple and sometimes labeled only as fruit, without the leaf annotation. We show the ratio of parent annotations lacking leaf annotations in Figure 3(b). This implicit co-existence phenomenon also happens frequently and needs to be taken into account; otherwise the detectors may learn false signals.

Imbalanced label distribution. To build such a huge dataset, images are collected from open sources in the wild. As one can expect, the Open Images dataset suffers from an extremely imbalanced label distribution, containing both very frequent and infrequent categories. As shown in Figure 2, the most frequent category has orders of magnitude more training images than the most infrequent one. A naive re-balance strategy, such as the widely used class-aware sampling [10] which samples training images of different categories uniformly, cannot cope with such extreme imbalance and may lead to two consequences:

1) Frequent categories are not trained sufficiently, because most of their training samples are never seen and are wasted.

2) Infrequent categories are excessively over-sampled, which may cause severe over-fitting and degrade the generalization of recognition on these classes.

Once class-aware sampling is adopted, a category like person is extremely under-sampled, with a large share of its instances never seen, while a category like pressure cooker is immensely over-sampled, with each instance seen many times on average within an epoch.
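The magnitude of this effect can be sketched with a quick back-of-the-envelope calculation; the per-class image counts below are hypothetical stand-ins for a frequent and a rare category, not figures from the dataset.

```python
# Under class-aware sampling, one epoch of N draws is spread uniformly over
# C classes, so each image of a class with n_i images is seen about
# (N / C) / n_i times per epoch.  Class sizes here are hypothetical.
N, C = 1_700_000, 500
seen_per_epoch = {n_i: (N / C) / n_i for n_i in (800_000, 60)}
print(seen_per_epoch[800_000])  # ~0.004: most images of a frequent class are never drawn
print(seen_per_epoch[60])       # ~57: each image of a rare class is drawn dozens of times
```

The same arithmetic explains both failure modes at once: the frequent class wastes almost all of its data, while the rare class is replayed until the model over-fits it.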

4 Methodology

In this part, we explore methods to deal with the label related problems in large scale object detection. First, we design a concurrent softmax to handle both the explicit and implicit multi-label issues jointly. Second, we propose a soft-balance sampling together with a hybrid training scheduler to deal with the extremely imbalanced label distribution.

4.1 Multi-Label Object Detection

As one of the most widely used loss functions in deep learning, the softmax loss for a bounding box b is presented as follows:

L = -\sum_{j=1}^{C} y_j \log \sigma_j,  where  \sigma_j = e^{z_j} / \sum_{k=1}^{C} e^{z_k},

where z_j denotes the response of the j-th class, y_j denotes the label, and C denotes the number of categories. It behaves well in single-label recognition, where \sum_j y_j = 1. However, things are different when it comes to multi-label recognition.

In the conventional object detection training scheme, each bounding box is assigned only one label during training, ignoring the other ground-truth labels. If we instead assign all m (m > 1) ground-truth labels belonging to a bounding box b during training, the scores of the multiple labels restrain each other. The gradient with respect to each logit is

\partial L / \partial z_j = m \sigma_j - y_j.

When \sigma_j > 1/m, z_j is optimized to become lower even if j is one of the ground-truth labels, which is the wrong optimization direction.
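This wrong-direction effect can be checked numerically. The sketch below uses toy logits (not from the paper) and the multi-hot softmax gradient m·σ_j − y_j derived above.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Toy example: 4 classes; classes 0 and 1 are both ground truth (m = 2).
z = np.array([3.0, 1.0, 0.5, 0.0])
y = np.array([1.0, 1.0, 0.0, 0.0])
sigma = softmax(z)
m = y.sum()

# For L = -sum_j y_j * log(sigma_j), the gradient w.r.t. the logits is
# dL/dz_j = m * sigma_j - y_j.
grad = m * sigma - y

print(sigma[0] > 1.0 / m)  # True: class 0 already exceeds 1/m
print(grad[0] > 0)         # True: descent LOWERS z_0, a correct label
```

Because σ_0 > 1/m, gradient descent pushes the logit of correct class 0 down so that correct class 1 can rise, which is exactly the mutual suppression the concurrent softmax is designed to remove.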

4.1.1 Concurrent Softmax

The concurrent softmax is designed to help solve the problem of recognizing objects with multiple labels in object detection. During training, the concurrent softmax loss of a predicted box is presented as follows:

L_con = -\sum_{j=1}^{C} y_j \log \sigma'_j,

where y_j denotes the label of class j for the box, and r_{kj} denotes the concurrent rate of class k with respect to class j. The output of the concurrent softmax during training is defined as:

\sigma'_j = e^{z_j} / ( \sum_{k \neq j} (1 - y_k)(1 - r_{kj}) e^{z_k} + e^{z_j} ).

Unlike softmax, where the responses of the ground-truth categories suppress all the others, the concurrent softmax removes the suppression effects between explicitly coexisting categories. For instance, suppose a bounding box is assigned multiple ground-truth labels during training. When computing the score of class j, the influence of every other ground-truth class is neglected because of the (1 - y_k) term, so the score of each correct class is boosted. This avoids unnecessarily large losses due to the multi-label problem, and the gradients can focus on more valuable knowledge.

Apart from the explicit co-existence cases, implicit concurrent relationships remain to be settled. We define the concurrent rate r_{kj} as the probability that an object of class k is labeled as class j. It is calculated from the class annotations of the training set, and Figure 3(a) shows the concurrent rates of the confused pairs. For hierarchical relationships, the suppression weight (1 - r_{kj}) is set to 0 (i.e., r_{kj} = 1) when k is a leaf node with j as its parent, and vice versa. With the (1 - r_{kj}) term, the suppression effects between confusing pairs are weakened.

The influence of multi-label object detection is also prominent during inference. Unlike conventional multi-label recognition tasks, the evaluation metric of object detection is mean average precision (mAP). For each category, the detection results of all images are first collected and ranked by score to form a precision-recall curve, and the area under this curve gives the average precision. In this way, the absolute value of a box score matters, because it influences the rank of the predicted box over the entire dataset. Thus we also apply the concurrent softmax during inference:

\sigma''_j = e^{z_j} / ( \sum_{k \neq j} (1 - r_{kj}) e^{z_k} + e^{z_j} ),

where we abandon the (1 - y_k) term and keep the concurrent-rate term. Scores of categories within a hierarchy and scores of similar categories no longer suppress each other and are boosted effectively, which is desirable in the object detection task.
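As a concrete illustration, here is a minimal NumPy sketch of the training- and inference-time concurrent softmax described above; the logits, labels, and zero concurrent-rate matrix are invented for the example.

```python
import numpy as np

def concurrent_softmax(z, r, y=None):
    """Concurrent softmax for one box.

    z : (C,) logits.
    r : (C, C) concurrent-rate matrix, r[k, j] ~ P(an object of class k
        is labeled as class j); the diagonal is never used.
    y : (C,) multi-hot labels for training, or None for inference.
    """
    C = z.shape[0]
    e = np.exp(z - z.max())  # shift for numerical stability
    # training weight: (1 - y_k)(1 - r_kj); inference weight: (1 - r_kj)
    w = 1.0 - r if y is None else (1.0 - y)[:, None] * (1.0 - r)
    out = np.empty(C)
    for j in range(C):
        others = sum(w[k, j] * e[k] for k in range(C) if k != j)
        out[j] = e[j] / (e[j] + others)
    return out

z = np.array([3.0, 1.0, 0.0, 0.0])
y = np.array([1.0, 1.0, 0.0, 0.0])   # classes 0 and 1 both ground truth
plain = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
boosted = concurrent_softmax(z, np.zeros((4, 4)), y)
print(boosted[1] > plain[1])  # True: co-labeled class 0 no longer suppresses class 1
```

With y = None and an all-zero r, the function reduces exactly to the plain softmax, which is a useful sanity check on the weighting terms.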

4.1.2 Comparison with BCE Loss

BCE is a popular solution to multi-label recognition, but it does not work well on the multi-label detection task. We argue that the sigmoid function fails to normalize scores and weakens the suppression effect between categories, which is desired when evaluating with the mAP metric. We tried BCE loss and focal loss, but they yield much worse results than even the original softmax cross-entropy.

4.2 Soft-balance Sampling with Hybrid Training

As illuminated in Section 3, the Open Images dataset suffers from a severely imbalanced label distribution. We denote by C the number of categories, N the total number of training images, and n_i the number of images containing objects of the i-th class. Conventionally, images are sampled in sequence without replacement in each epoch, so the original probability of class i being sampled is roughly n_i / N, which may greatly degrade the model's recognition of infrequent classes. A widely used technique, class-aware sampling [31, 10, 26], is a naive solution to the class imbalance problem, in which categories are sampled uniformly in each batch; the class-aware sampling probability of class i becomes 1/C. Yet this may cause heavy over-fitting on infrequent categories and insufficient training on frequent categories, as aforementioned.
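As a minimal sketch of the class-aware sampling mechanics (toy class names and image lists, not the paper's pipeline):

```python
import random

def class_aware_epoch(images_by_class, epoch_size, rng=random.Random(0)):
    """Class-aware sampling sketch: pick a class uniformly at random,
    then a random image of that class (with replacement)."""
    classes = list(images_by_class)
    return [rng.choice(images_by_class[rng.choice(classes)])
            for _ in range(epoch_size)]

# Hypothetical toy data: one frequent class, one rare class.
images = {"car":   [f"car_{i}.jpg" for i in range(1000)],
          "torch": ["torch_0.jpg", "torch_1.jpg"]}
epoch = class_aware_epoch(images, epoch_size=200)
# Roughly half the draws come from the 2-image torch class, so each torch
# image is drawn many times while most car images are never seen.
print(len(epoch))  # 200
```

Even this toy run exhibits both pathologies described in the text: the rare class is replayed constantly while the frequent class's pool is barely touched.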

To alleviate the problems above, we first adjust the sampling probability based on the number of samples, which we call soft-balance sampling. Define \hat{p}_i = n_i / N as the approximation of the non-balance sampling probability. The class-aware balanced sampling probability can then be reformulated as:

p_i^{bal} = \hat{p}_i \cdot ( 1 / (C \hat{p}_i) ) = 1/C,

where 1 / (C \hat{p}_i) can be regarded as a balance factor that is inversely proportional to the number of categories and the original sampling probability.

To reconcile the frequent and infrequent categories, we introduce soft-balance sampling by adjusting the balance factor with a new hyper-parameter \lambda:

s_i = \hat{p}_i \cdot ( 1 / (C \hat{p}_i) )^{\lambda}.

Note that \lambda = 0 corresponds to non-balance sampling and \lambda = 1 corresponds to class-aware balance. The normalized probability is:

p_i^{soft} = s_i / \sum_{j=1}^{C} s_j.
This sampling strategy guarantees more sufficient training on dominant categories while decreasing the excessive sampling frequency of infrequent categories.
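The soft-balance probabilities can be sketched directly from the formulas above; the class counts below are hypothetical.

```python
import numpy as np

def soft_balance_probs(n, lam):
    """Soft-balance sampling probabilities from per-class image counts.

    lam = 0 reproduces non-balance sampling; lam = 1 is class-aware."""
    p_hat = n / n.sum()                       # non-balance probability
    C = len(n)
    s = p_hat * (1.0 / (C * p_hat)) ** lam    # softened balance factor
    return s / s.sum()                        # renormalize

counts = np.array([100_000, 1_000, 10])       # hypothetical class sizes
print(soft_balance_probs(counts, 0.0))        # proportional to counts
print(soft_balance_probs(counts, 1.0))        # uniform over the 3 classes
print(soft_balance_probs(counts, 0.7))        # in between
```

An intermediate \lambda lifts the rare class well above its raw frequency without flattening the frequent class all the way to uniform, which is the trade-off the method tunes.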

Even with the soft-balance method, many samples of the frequent categories are still never drawn. Thus we propose a hybrid training scheduler to further mitigate this problem. We first train the detector with the conventional strategy, sampling training images in sequence without replacement (equivalent sampling probability \hat{p}_i). Then we finetune the model with the soft-balance strategy to cover categories with very few samples. This hybrid training scheme exploits a model pretrained for object detection on Open Images itself rather than on ImageNet. It ensures that all images have been seen during training and endows the model with better generalization.

5 Experiments

5.1 Dataset

To analyze the proposed concurrent softmax loss and soft-balance sampling with hybrid training, we conduct experiments on the Open Images Challenge 2019 dataset. As an object detection dataset in the wild, its challenge split contains 1.7 million images with 12.4 million boxes of 500 categories. The number of training images is many times that of MS COCO [21] and several times that of the second largest object detection dataset, Objects365 [30].

Considering the huge size of the Open Images dataset, we split off a mini Open Images dataset for our ablation study. It contains 115K training images and 5K validation images, named mini-train and mini-val. All images are sampled from the Open Images Challenge 2019 dataset with the ratio of each category unchanged. Final results on full-val and the public test-challenge set of the Open Images Challenge 2019 dataset are also reported. We follow the metric used in the Open Images Challenge, a variant of mAP at IoU 0.5 in which all false positives of categories not present in the image-level labels are ignored. (In the Open Images dataset, image-level labels consist of verified-exist labels and verified-not-exist labels; unverified categories are ignored.)

5.2 Implementation Details

We train our detector with a ResNet-50 backbone armed with FPN. For the network configuration, we follow the settings in Detectron. We use SGD with momentum 0.9 and weight decay 0.0001 to optimize the parameters. The initial learning rate is set to 0.00125 × batch size, and is decreased by a factor of 10 at epochs 4 and 6 of a 7-epoch schedule. Input images are scaled so that the shorter edge is 800 pixels and the longer edge is at most 1333. Horizontal flipping is used for data augmentation, and sync-BN is adopted to speed up convergence.

5.3 Concurrent Softmax

We explore the influence of concurrent softmax in training and testing stage respectively in this ablation study. All models are trained with mini-train and evaluated on mini-val.

The impacts of concurrent softmax during training. Table 1 shows the results of the proposed concurrent softmax compared with the vanilla softmax and other existing methods during the training stage. Concurrent softmax outperforms softmax by 1.13 points with class-aware sampling and by 0.98 points with non-balance sampling. It is also found that sigmoid with BCE and focal loss behave poorly in this case; we conjecture that they are incompatible with the mAP metric in object detection, as discussed in 4.1.2. Our method also outperforms the dist-CE loss [24] and the Co-BCE loss [1].

Loss Type | Class-aware Sampling | mAP
Focal Loss [20] | yes | 50.18
BCE Loss | yes | 54.29
Co-BCE Loss [1] | yes | 55.74
dist-CE Loss [24] | yes | 55.90
Softmax Loss | no | 38.16
Concurrent Softmax Loss | no | 39.14
Softmax Loss | yes | 55.45
Concurrent Softmax Loss | yes | 56.58
Table 1: The comparison of different loss functions. Models are trained on mini-train and evaluated on mini-val.

The impacts of concurrent softmax during testing. We also show results demonstrating the effectiveness of concurrent softmax in the testing stage in Table 2. Solely applying concurrent softmax during inference brings a 0.36-point mAP improvement, while applying it in both the training and testing stages yields a total improvement of 1.50 points. This supports the view that suppression effects between leaf and parent categories, or between confusing categories, harm the performance of object detection on Open Images.

Train Method Test Method mAP
Softmax Softmax 55.45
Softmax Concurrent Softmax 55.77
Concurrent Softmax Softmax 56.58
Concurrent Softmax Concurrent Softmax 56.95

Table 2: The effectiveness of concurrent softmax during testing. Models are trained in mini-train and evaluated on mini-val.

5.4 Soft-balance Sampling

Method | λ | mAP
Non-balance | – | 38.16
Class-aware Sampling [10] | – | 55.45
Effective Number [6] | – | 45.72
Soft-balance | 0.3 | 50.69
Soft-balance | 0.5 | 56.19
Soft-balance | 0.7 | 57.04
Soft-balance | 1.0 | 55.45
Soft-balance | 1.5 | 52.41
Table 3: The comparison of different sampling methods. Models are trained on mini-train and evaluated on mini-val.
Figure 4: Training curves of the proposed soft-balance sampling. Soft-balance with λ = 0.7 achieves the best performance.
(a) Non-balance (blue) versus class-aware sampling (orange), sorted by number of images, for the 100 most frequent categories (left) and the 100 most infrequent categories (right).
(b) Class-aware sampling (blue) versus soft-balance with λ = 0.7 (orange), sorted by number of images, for the 100 most frequent categories (left) and the 100 most infrequent categories (right).
Figure 5: The comparison of sampling strategy among categories. (Best viewed on high-resolution display)

Results. Table 3 presents the results of the proposed soft-balance sampling and other balancing methods. As Open Images is a long-tailed dataset in which many categories have few samples, non-balance training achieves only 38.16 mAP. Class-aware sampling simply samples the data of all categories uniformly at random; it remedies the data imbalance to a great extent and boosts the performance to 55.45. The effective number [6] is used to re-weight the classification loss with the purpose of harmonizing the gradient contributions from different categories. Compared to the non-balance method, the effective number improves the result by 7.56 points but remains worse than class-aware sampling. We argue that this is because balancing at the data level is more effective than balancing at the loss level. Soft-balance with hyper-parameter λ lets us interpolate between non-balance (λ = 0) and class-aware sampling (λ = 1). Thus we can find a point at which enough infrequent-category data is sampled to train the model without yet triggering over-fitting. Soft-balance with λ = 0.7 outperforms class-aware sampling by 1.59 points.

The impacts of soft-balance during training. To investigate why soft-balance is better, we show the training curves of soft-balance with different λ in Figure 4. A small λ cannot reach good performance because of the data imbalance, but too large a λ also fails to reach the best performance compared with relatively smaller settings. Note that a larger λ attains much higher mAP than λ = 0.7 in the first learning-rate stage (before epoch 4), but the situation reverses in subsequent training. This comparison shows that a larger λ supplies more data of rare categories and achieves better performance at the beginning, yet runs into severe over-fitting toward convergence. The other λ settings validate this rule as well.

The impacts of soft-balance among categories. We further study the per-category performance of λ = 0, λ = 1, and λ = 0.7 in Figure 5, in which the challenge validation results of the 500 categories are sorted from many to few training images. As shown in Figure 5(a), λ = 1 (orange line) outperforms λ = 0 (blue line) on the latter half of the categories, which have few image samples. Although λ = 1 solves the data insufficiency of infrequent categories, it under-samples the frequent categories and causes the performance drop on the former half. Figure 5(b) shows that λ = 0.7 (orange line) alleviates the excessive under-sampling of the major categories compared to λ = 1 (blue line), while still mitigating the over-fitting on infrequent categories. Therefore, λ = 0.7 is almost always better than λ = 1 over the full category space.

5.5 Hybrid Training Scheduler

Method | Pretrain | Epochs | mAP
Non-balance | ImageNet | 7 | 56.06
Non-balance | ImageNet | 11 | 59.12
Non-balance | ImageNet | 14 | 59.85
Non-balance | ImageNet | 16 | 59.95
Non-balance | Scratch | 20 | 60.70
Class-aware Sampling | ImageNet | 7 | 64.68
Class-aware Sampling | ImageNet | 14 | 62.85
Class-aware Sampling | Non-balance I14 | 14+7 | 65.60
Class-aware Sampling | Non-balance S20 | 20+7 | 65.92
Soft-balance | Non-balance S20 | 20+7 | 67.09
Soft-balance* | Non-balance S20 | 20+7 | 68.23
Table 4: The effect of the training scheduler. The λ of soft-balance is set to 0.7. Non-balance I14 denotes the model at epoch 14 trained with the non-balance strategy from ImageNet pretraining; Non-balance S20 denotes the model at epoch 20 trained with the non-balance strategy from scratch. Soft-balance* means that concurrent softmax is adopted in both the training and testing stages. Models are trained on full-train and evaluated on full-val.

Table 4 summarizes the results of ResNeXt-152 on the Open Images Challenge dataset trained with different training schedulers. For the non-balance setting, the more epochs the model is trained, the better the performance. Moreover, training a model from scratch yields better results than finetuning from an ImageNet-pretrained model. These observations match similar conclusions in [12].

While class-aware sampling significantly boosts performance by 8.62 points over non-balance with ImageNet pretraining in the 7-epoch setting, it suffers from over-fitting: the mAP of the model trained for 14 epochs is lower than that at 7 epochs. And frequent categories are still intensely under-sampled even with balanced sampling. With hybrid training, class-aware sampling achieves better performance with both non-balance ImageNet pretraining and non-balance scratch pretraining. Note that these improvements are not simply due to more training epochs, since a longer schedule decreases performance with ImageNet pretraining alone. Further applying the soft-balance strategy improves the hybrid non-balance-scratch result from 65.92 to 67.09 mAP.

5.6 Extension Results on Test-challenge Set

Method | Ensemble | Public Test mAP
2018 1st [1] | no | 55.81
Ours | no | 60.90
2018 1st [1] | yes | 62.88
2018 2nd [10] | yes | 62.16
2018 3rd | yes | 61.70
Ours | yes | 67.17

Component | Public Test mAP
Baseline (ResNeXt-152) | 53.88
+Class-aware Sampling | 57.56
+Concurrent Softmax Loss | 58.60
+Soft-balance | 59.86
+Hybrid Training Scheduler | 60.90
+Other Tricks | 62.34
+Ensemble | 67.17
Table 5: Results with bells and whistles on the Open Images public test-challenge set.

With the proposed concurrent softmax, soft-balance sampling, and hybrid training scheduler, we achieve 67.17 mAP, a 4.29-point absolute improvement over the first-place entry on the public test-challenge set last year, as detailed in Table 5. We train a ResNeXt-152 FPN with multi-scale training and testing as our baseline, which achieves 53.88 mAP. With class-aware sampling, the performance is boosted to 57.56. With the proposed concurrent softmax, the model achieves 58.60 mAP. Soft-balance and the hybrid training scheduler bring further gains of 1.26 and 1.04 points, respectively. By further using other tricks, including data augmentation, loss-function search, and a heavier head, we achieve our best single model with 62.34 mAP. We use ResNeXt-101, ResNeXt-152, and EfficientNet-B7 with various tricks for model ensembling. The final mAP on the Open Images public test-challenge set is 67.17.

6 Conclusion

In this paper, we investigate the multi-label problem and the imbalanced label distribution problem in large-scale object detection datasets, and introduce a simple but powerful solution. We propose the concurrent softmax to deal with the explicit and implicit multi-label problems in both the training and testing stages. Our soft-balance method, together with the hybrid training scheduler, effectively deals with the extremely imbalanced label distribution.

7 Acknowledgements

This work was supported in part by the Major Project for New Generation of AI (No.2018AAA0100400), the National Natural Science Foundation of China (No.61836014, No.61761146004, No.61773375, No.61602481), the Key R&D Program of Shandong Province (Major Scientific and Technological Innovation Project) (NO.2019JZZY010119), and CAS-AIR. We also thank Changbao Wang, Cunjun Yu, Guoliang Cao and Buyu Li for their precious discussion and help.


  • [1] T. Akiba, T. Kerola, Y. Niitani, T. Ogawa, S. Sano, and S. Suzuki (2018) PFDet: 2nd place solution to open images challenge 2018 object detection track. arXiv preprint arXiv:1809.00778. Cited by: §1, §2, §5.3, Table 1, Table 5.
  • [2] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown (2004) Learning multi-label scene classification. Pattern recognition 37 (9), pp. 1757–1771. Cited by: §2.
  • [3] Z. Cai and N. Vasconcelos (2018) Cascade r-cnn: delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6154–6162. Cited by: §2.
  • [4] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer (2002) SMOTE: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, pp. 321–357. Cited by: §2.
  • [5] Z. Chen, X. Wei, P. Wang, and Y. Guo (2019) Multi-label image recognition with graph convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5177–5186. Cited by: §2.
  • [6] Y. Cui, M. Jia, T. Lin, Y. Song, and S. Belongie (2019) Class-balanced loss based on effective number of samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9268–9277. Cited by: §2, §2, §5.4, Table 3.
  • [7] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei (2017) Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision(ICCV), Cited by: §2.
  • [8] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §1.
  • [9] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §1.
  • [10] Y. Gao, X. Bu, Y. Hu, H. Shen, T. Bai, X. Li, and S. Wen (2018) Solution for large-scale hierarchical object detection datasets with incomplete annotation and data imbalance. arXiv preprint arXiv:1810.06208. Cited by: §2, §2, §3, §4.2, Table 3, Table 5.
  • [11] Y. Gong, Y. Jia, T. Leung, A. Toshev, and S. Ioffe (2013) Deep convolutional ranking for multilabel image annotation. arXiv preprint arXiv:1312.4894. Cited by: §2.
  • [12] K. He, R. Girshick, and P. Dollár (2018) Rethinking imagenet pre-training. arXiv preprint arXiv:1811.08883. Cited by: §5.5.
  • [13] H. Hu, G. Zhou, Z. Deng, Z. Liao, and G. Mori (2016) Learning structured inference neural networks with label relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2960–2968. Cited by: §2.
  • [14] C. Huang, Y. Li, C. Change Loy, and X. Tang (2016) Learning deep representation for imbalanced classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5375–5384. Cited by: §2, §2.
  • [15] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [16] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, T. Duerig, et al. (2018) The open images dataset v4: unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982. Cited by: §2.
  • [17] Q. Li, M. Qiao, W. Bian, and D. Tao (2016) Conditional graphical lasso for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2977–2986. Cited by: §2.
  • [18] X. Li, F. Zhao, and Y. Guo (2014) Multi-label image classification with a probabilistic label enhancement model.. In UAI, Vol. 1, pp. 3. Cited by: §2.
  • [19] Y. Li, Y. Chen, N. Wang, and Z. Zhang (2019) Scale-aware trident networks for object detection. arXiv preprint arXiv:1901.01892. Cited by: §2.
  • [20] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision(ICCV), Cited by: §1, §2, §2, Table 1.
  • [21] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1, §5.1.
  • [22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In Proceedings of the European conference on computer vision(ECCV), Cited by: §1, §2.
  • [23] X. Lu, B. Li, Y. Yue, Q. Li, and J. Yan (2019) Grid r-cnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7363–7372. Cited by: §2.
  • [24] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten (2018) Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 181–196. Cited by: §1, §2, §5.3, Table 1.
  • [25] Y. Niitani, T. Akiba, T. Kerola, T. Ogawa, S. Sano, and S. Suzuki (2019) Sampling techniques for large-scale object detection from sparsely annotated objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6510–6518. Cited by: §2, §3.
  • [26] W. Ouyang, X. Wang, C. Zhang, and X. Yang (2016) Factors in finetuning deep model for object detection with long-tail distribution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 864–873. Cited by: §2, §4.2.
  • [27] J. Peng, M. Sun, Z. Zhang, T. Tan, and J. Yan (2019) POD: practical object detection with scale-sensitive network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9607–9616. Cited by: §2.
  • [28] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1, §2.
  • [29] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §2.
  • [30] S. Shao, Z. Li, T. Zhang, C. Peng, G. Yu, X. Zhang, J. Li, and J. Sun (2019) Objects365: a large-scale, high-quality dataset for object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8430–8439. Cited by: §5.1.
  • [31] L. Shen, Z. Lin, and Q. Huang (2016) Relay backpropagation for effective learning of deep convolutional neural networks. In European conference on computer vision, pp. 467–482. Cited by: §2, §4.2.
  • [32] A. Shrivastava, A. Gupta, and R. Girshick (2016) Training region-based object detectors with online hard example mining. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 761–769. Cited by: §2.
  • [33] M. Tan, Q. Shi, A. van den Hengel, C. Shen, J. Gao, F. Hu, and Z. Zhang (2015) Learning graph structure for multi-label image classification via clique generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4100–4109. Cited by: §2.
  • [34] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu (2016) Cnn-rnn: a unified framework for multi-label image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2285–2294. Cited by: §2.
  • [35] Y. Wang, D. Ramanan, and M. Hebert (2017) Learning to model the tail. In Advances in Neural Information Processing Systems, pp. 7029–7039. Cited by: §2, §2.
  • [36] Z. Wang, T. Chen, G. Li, R. Xu, and L. Lin (2017) Multi-label image recognition by recurrently discovering attentional regions. In Proceedings of the IEEE international conference on computer vision, pp. 464–472. Cited by: §2.
  • [37] J. Wu, A. Guo, V. S. Sheng, P. Zhao, Z. Cui, and H. Li (2017) Adaptive low-rank multi-label active learning for image classification. In Proceedings of the 25th ACM international conference on Multimedia, pp. 1336–1344. Cited by: §2.
  • [38] Z. Wu, N. Bodla, B. Singh, M. Najibi, R. Chellappa, and L. S. Davis (2018) Soft sampling for robust object detection. arXiv preprint arXiv:1806.06986. Cited by: §2, §3.
  • [39] Q. Xie, E. Hovy, M. Luong, and Q. V. Le (2019) Self-training with noisy student improves imagenet classification. arXiv preprint arXiv:1911.04252. Cited by: §1.
  • [40] J. Zhang, Q. Wu, C. Shen, J. Zhang, and J. Lu (2018) Multilabel image classification with regional latent semantic dependencies. IEEE Transactions on Multimedia 20 (10), pp. 2801–2813. Cited by: §2.
  • [41] Y. Zou, Z. Yu, B. Vijaya Kumar, and J. Wang (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 289–305. Cited by: §2.