Many-shot from Low-shot: Learning to Annotate using Mixed Supervision for Object Detection

by Carlo Biffi, et al.

Object detection has witnessed significant progress by relying on large, manually annotated datasets. Annotating such datasets is highly time consuming and expensive, which motivates the development of weakly supervised and few-shot object detection methods. However, these methods largely underperform with respect to their strongly supervised counterparts, as weak training signals often result in partial or oversized detections. Towards solving this problem we introduce, for the first time, an online annotation module (OAM) that learns to generate a many-shot set of reliable annotations from a larger volume of weakly labelled images. Our OAM can be jointly trained with any fully supervised two-stage object detection method, providing additional training annotations on the fly. This results in a fully end-to-end strategy that only requires a low-shot set of fully annotated images. The integration of the OAM with Fast(er) R-CNN improves their performance by 17% mAP and 9% AP50 on the PASCAL VOC 2007 and MS-COCO benchmarks, and significantly outperforms competing methods using mixed supervision.




1 Introduction

Object detection is an essential building block of many computer vision systems [27]. State-of-the-art (SOTA) methods mainly rely on large scale datasets with manually annotated bounding boxes to train fully supervised CNN-based models [8, 18, 17, 13, 3]. However, the prohibitive cost and time requirements associated with data annotation reduce the applicability of SOTA detection models in real life scenarios. This has motivated research on object detection strategies with reduced data annotation requirements. Amongst the most popular low data regimes, we distinguish Weakly Supervised Object Detection (WSOD), which aims to train object detectors using only image-level annotations [2, 19, 22, 1, 26], and Few-Shot or Low-Shot Object Detection (FSOD/LSOD), which trains supervised models with only a handful of training examples on all classes (LSOD) or only a subset of novel test classes (FSOD) [11, 25, 5]. FSOD and, in particular, WSOD have been the focus of a large body of work, with innovative strategies obtaining promising performance. Nonetheless, these models typically fall far short of their strongly supervised counterparts. The performance gap is attributed to the low quality of the bounding-box annotations produced, e.g. by WSOD methods, which often manifest as partial or oversized boxes. Such results are not reliable enough for use in real-world scenarios and can be observed to deteriorate detection performance when used to train fully supervised models. This can be attributed to weak training signals requiring very large and curated datasets (WSOD) or very representative and carefully selected annotated examples (FSOD).

Figure 1: Weak and low data detection strategies and our proposed mixed supervision-based setting. First row: Weakly Supervised Object Detection (WSOD) models learn to annotate images using image-level annotations, which are then used to train fully supervised models. WSOD annotations are often partial or oversized, resulting in poor detector performance. Second row: Few-shot or low-shot object detection (LSOD) trains models on only a handful of training examples. Research mainly focuses on situations where only a specific subset of novel classes has limited training data. Bottom row: Mixed Supervision for Object Detection (MSOD) combines a low-shot set of images containing object annotations with a large volume of images comprising only image-level annotations. We train an online annotation module to generate a many-shot set which, at the same time, is used to train a fully supervised model.

To address the aforementioned challenges, we focus on a recent training paradigm relying on Mixed Supervision for Object Detection (MSOD) [15, 7]. The distinction between this protocol and the previously introduced weak and low data settings is illustrated in Fig. 1. The objective of MSOD is to exploit and combine the complementary advantages provided by WSOD and LSOD; weak (image-level) supervision affords the construction of large databases with minimal effort, while low-shot supervision provides information rich, fully annotated ground truth examples. The MSOD paradigm has, only very recently, been initially investigated in two related works. Fang et al. [7] propose a cascaded architecture yielding performance competitive with fully supervised counterparts, yet using a significant fraction of the full training data to achieve comparable performance. Pan et al. [15] use low-shot examples to refine bounding box annotations obtained from a pre-trained WSOD model [20], resulting in a method intrinsically linked to the performance and drawbacks of WSOD techniques.

In this work, we approach the MSOD scenario from a different angle. Due to the sparsity of the rich training information provided, we expect an MSOD model to output annotations of variable quality, especially for images containing crowded scenes or objects with appearance substantially dissimilar to the training data. In contrast to existing MSOD models, we introduce an Online Annotation Module (OAM), trained with mixed supervision, that can be used in conjunction with any two-stage fully supervised object detection method (e.g. the Fast(er) R-CNN family [8, 18]) to improve its performance. Our OAM generates, on the fly, additional reliable automated annotations obtained from a larger set of weakly annotated images (containing only image-level class labels). Furthermore, we exploit prediction stability to reason about annotation reliability, resulting in associated confidence scores. Generated annotations are used to train, concurrently to the OAM, a fully supervised detector that shares the same encoding features. This produces an intrinsic training curriculum for the standard detector model; only simple images, labelled with high confidence, will be presented to the model at the outset. Compared to previous MSOD work, our OAM strategy provides increased robustness against mislabelled crowded and ambiguous training images, as only confident OAM annotations are exploited for fully supervised training. Furthermore, our joint MSOD and fully supervised training provides intrinsic regularisation for both tasks, allowing the learning of higher quality and more discriminative feature extractors.

Experiments show that our strategy allows effective training of standard detection algorithms with only minimal annotation requirements and significantly outperforms WSOD and competitive MSOD approaches on PASCAL VOC 2007 and MS-COCO benchmarks. Additionally, we report competitive performance in comparison to fully supervised alternatives, illustrating the ability of our OAM to annotate a many-shot set of (weakly labelled) images that can be leveraged to improve the fully supervised model performance.

In summary, we propose a new direction using Mixed Supervision for Object Detection (MSOD). Our main contributions are the following:

  • We introduce a novel Online Annotation Module (OAM), trained using mixed supervision. This module allows expansion of the low-shot training set of fully annotated images by generating reliable annotations from a larger volume of weakly labelled images.

  • Training our OAM concurrently with any two-stage object detection model introduces a strategy for improving object detection performance via the generated annotations. We report on the benefits of intrinsic regularisation afforded to both tasks when common encoding features are shared.

  • The integration of the OAM with Fast(er) R-CNN improves their performance by 17% mAP and 9% AP50 on the PASCAL VOC 2007 and MS-COCO benchmarks, and significantly outperforms competing MSOD approaches.

2 Related work

Weakly Supervised Object Detection. A large body of recent WSOD work couples CNN feature extractors with Multiple Instance Learning (MIL) frameworks, thus casting weakly supervised object detection as a multi-label classification problem. Each image is typically represented as a bag of pre-computed proposals (e.g. Selective Search [21], Edge Boxes [28]) and the objective is to identify the proposals most relevant for bag classification [2, 19, 22, 26]. Being framed as a classification task, MIL WSOD models typically focus on proposals that comprise either the most discriminative object parts or image regions that define the presence of an object category. They therefore struggle to detect the full extent of objects (e.g. human faces in contrast to an entire human body) or group multiple instances of the same object within a single bounding box [26, 15]. To address this issue, recent work has focused on bounding-box refinement strategies using cascaded refinements of MIL classifications [20, 19], saliency maps [24, 26], continuation strategies [22, 23] and uncertainty modelling [1]. However, the ill-posed nature of the WSOD problem and the insufficient statistics provided by the PASCAL VOC dataset (on which these approaches are usually evaluated) have led to the development of ad-hoc training strategies and parameter-sensitive methods to cope with the weak training signal, which substantially reduce generalisability across datasets. In this paper, we argue that including a handful of labelled samples yields model accuracy and stability improvements at only minimal annotation cost. Usually, all the images annotated by MIL WSOD methods are used, in a second step, to train fully supervised models [19, 22, 26]. Further previous work has also focused on alternating between pseudo-labelling images and training a fully supervised model [10, 5].
In this work, we generate bounding box annotations on the fly from mixed supervision and concurrently train a fully supervised detector only on the images annotated with high confidence.

Few-Shot and Mixed Supervision Object Detection. Few-Shot Object Detection (FSOD) considers a fully supervised training set and aims to achieve strong performance on a set of novel classes comprising only a handful of annotated training images per class. To date, only a handful of works have focused on FSOD [11, 25, 12, 4]. Such approaches typically adapt few-shot classification techniques to the object detection setting, exploring metric learning [12] or meta-learning [25] strategies. Mixed Supervision for Object Detection (MSOD) enhances a WSOD training set containing only image-level labels with a small set of fully annotated (strong) images (e.g. a few images per class, analogous to an FSOD scenario) and aims to achieve strong performance on all the training classes. Pan et al. recently proposed BCNet [15], which learns to refine the output of a pre-trained WSOD model using a small set of strong images. The definition of a small set explored in their work ranges from a few shots per class to a percentage of the entire PASCAL VOC 2007 training set. This approach provides a strong performance increase with respect to WSOD methods, but remains highly dependent on the original WSOD model detections as input: if detections are originally missed by the pre-trained model, the approach cannot recover. Moreover, BCNet requires the training of two independent models, which makes the adaptation of WSOD parameters, i.e. training for new datasets, challenging. In this work, we instead propose a one-stage approach relying on an adaptive pool of annotations, updated dynamically as training progresses. EHSOD [7] and BAOD [16] focus on larger data regimes and aim to reduce the data required to reach fully supervised performance using a cascaded MIL model and a student-teacher setup trained on weak and strong annotations, respectively.
In contrast to all outlined methods, we propose instead to learn, and annotate on the fly, only a subset of weak images that can be labelled with high confidence. These additional samples are then used together with strong images to train an object detector and thus improve performance.

3 Method

We consider a set of training images annotated with image-level supervision. Under our mixed supervision paradigm, a small subset of these images is further annotated with bounding boxes. We refer to the images in this subset as strong training images, while the remaining images, which have only image-level annotations, are referred to as weak training images. An overview of our proposed method is reported in Fig. 2. Our model comprises two branches with a shared encoder backbone and employs an ROI pooling layer to compute a fixed-length feature representation for each bounding box proposal. The first branch of our model employs both weak and strong training images to learn an Online Annotation Module (OAM). The OAM generates bounding box annotations, with associated confidence scores, on the fly, for every weak training image. Annotated weak images are added to a third set of images if they have been annotated with high confidence, and can subsequently be removed if their annotation confidence drops during training. Images contained in this third set are referred to as semi-strong training images throughout the paper. The second branch of our model is designed as a standard fully supervised component and trained in parallel, in an end-to-end manner, using strong and semi-strong images. At testing, only the fully supervised model is used for object detection.

Given an input image, we first compute a set of candidate proposals, using either an unsupervised method (e.g. Selective Search [21] or Edge Boxes [28]) or a Region Proposal Network (RPN) [18], together with their associated feature vectors. These feature vectors are obtained using a standard CNN backbone and an ROI pooling layer, and provide a common input to both of our model branches: the OAM and the fully supervised branch.
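To make the role of the ROI pooling layer concrete, the following is a minimal pure-Python sketch of max-based ROI pooling on a single-channel feature map. It is a simplification of the standard operator (no channels, no spatial scale); function and variable names are ours, not the paper's.

```python
def roi_pool(feature_map, box, output_size=2):
    """Max-pool the region of `feature_map` covered by `box` into a fixed
    `output_size` x `output_size` grid (single channel, for clarity)."""
    x0, y0, x1, y1 = box  # region in feature-map coordinates
    h, w = y1 - y0, x1 - x0
    pooled = []
    for gy in range(output_size):
        row = []
        for gx in range(output_size):
            # Integer sub-window boundaries for this grid cell.
            ys = y0 + (gy * h) // output_size
            ye = y0 + ((gy + 1) * h) // output_size
            xs = x0 + (gx * w) // output_size
            xe = x0 + ((gx + 1) * w) // output_size
            cell = [feature_map[y][x] for y in range(ys, max(ye, ys + 1))
                                      for x in range(xs, max(xe, xs + 1))]
            row.append(max(cell))
        pooled.append(row)
    return pooled

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(roi_pool(fmap, (0, 0, 4, 4)))  # -> [[6, 8], [14, 16]]
```

Whatever the proposal size, the output grid has a fixed shape, which is what allows proposals of arbitrary size to feed fixed-size fully connected layers.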

Figure 2: Architecture of the proposed approach. Our model comprises two branches with a shared encoder. Our Online Annotation Module (OAM) is trained on weakly and strongly annotated images to generate, on the fly, confident annotations for weak images, which are added to a pool of semi-strong images. The model’s second branch uses strong and semi-strong images to train a standard fully supervised detection model.

3.1 Online Annotation Module

Our OAM is designed to jointly exploit weak and strong supervision in an efficient manner. It comprises three main components: 1) a joint detection module exploiting weak and strong labels in a single, common architecture to predict bounding boxes and their classes, 2) an online bounding box augmentation step that generates refined bounding box proposals, 3) a supervision generator, identifying confident annotations to be used as supervision. We next describe all three components in detail.

Joint detection module. Similarly to the strategy proposed in [7], we combine a multiple instance learning (MIL) type image-level classification task with a fully supervised joint classification and regression task. Our joint detection module hence comprises three parallel, fully connected layers focusing on three different subtasks: proposal scoring, classification and regression (Fig. 2, online annotation module block). Proposal scoring and classification outputs are obtained by applying the softmax function to the outputs of their layers along the two dimensions independently (over classes for the classification stream, over proposals for the scoring stream). After this operation, σ_cls[r, c] represents the probability that the r-th proposal belongs to class c, while σ_prop[r, c] represents the proportional contribution that proposal r provides to the image being classified as class c. Following [2], these layers are trained by exploiting the image-level supervision of both strong and weak images. In particular, a per-class proposal score is obtained by combining them as s = σ_cls ⊙ σ_prop, where ⊙ is the Hadamard product. Then, summing these scores over proposals, φ_c = Σ_r s[r, c], enables the use of a binary cross-entropy loss as the image-level loss function:

L_img = − Σ_c [ y_c log φ_c + (1 − y_c) log(1 − φ_c) ],

where y_c ∈ {0, 1} is the label indicating the presence or absence of class c in an image.
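As an illustration, the two softmax normalisations, the Hadamard combination and the image-level binary cross-entropy can be sketched in a few lines of pure Python (matrix shapes and variable names are our own; real implementations operate on batched tensors):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def image_level_loss(logits_cls, logits_prop, labels):
    """logits_cls / logits_prop: R x C matrices (R proposals, C classes).
    labels: length-C binary vector of image-level labels."""
    R, C = len(logits_cls), len(labels)
    # Softmax over classes (per proposal) for the classification stream.
    sigma_cls = [softmax(row) for row in logits_cls]
    # Softmax over proposals (per class) for the scoring stream.
    cols = [softmax([logits_prop[r][c] for r in range(R)]) for c in range(C)]
    sigma_prop = [[cols[c][r] for c in range(C)] for r in range(R)]
    # Hadamard product, then sum over proposals -> per-class image score.
    phi = [sum(sigma_cls[r][c] * sigma_prop[r][c] for r in range(R))
           for c in range(C)]
    # Binary cross-entropy against the image-level labels.
    eps = 1e-8
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(labels, phi))
```

Since each per-class column of the scoring stream sums to one over proposals, the per-class image score φ_c stays in (0, 1), which is what makes the binary cross-entropy well defined.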

Similar to traditional object detectors, we use strong images to compute bounding box regression and classification via the corresponding fully connected layers. We therefore combine weak and strong supervision by providing direct supervision to the proposal-level class predictions. For regression, each bounding box is parametrised as a four-tuple (x, y, h, w) that specifies its centre coordinates (x, y) and its height h and width w. For each proposal classified as foreground in a strong image, this regression branch predicts the offsets (δx, δy, δh, δw) of these coordinates. Hence, for every strong image, the following additional loss is computed on a batch of N proposals:

L_strong = (1/N) Σ_i [ L_cls(p_i, p*_i) + 1[p*_i ≥ 1] · L_reg(t_i, t*_i) ],

where p_i and p*_i constitute the predicted and target proposal classes respectively, t_i and t*_i the predicted and target bounding box offsets respectively, and L_reg is a smooth L1 loss function [8].

The loss function of the joint detection module is hence L_img + L_strong on strong images, while the loss function on weak images is L_img alone. Enforcing synergy between the two types of supervision regularises the low-shot task thanks to the statistical information provided by weak images. Moreover, due to the instance-level annotations provided by strong images, this also constrains the MIL task and the encoder to learn features that better discriminate between full- and partial-extent object proposals.
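A minimal sketch of the strong-image loss, assuming the standard Fast R-CNN smooth L1 penalty and a cross-entropy classification term (the exact weighting used in the paper may differ):

```python
import math

def smooth_l1(x, beta=1.0):
    """Smooth L1 (Huber) penalty used for box regression in Fast R-CNN [8]."""
    ax = abs(x)
    return 0.5 * ax * ax / beta if ax < beta else ax - 0.5 * beta

def strong_image_loss(cls_probs, cls_targets, box_preds, box_targets):
    """cls_probs: per-proposal class distributions; cls_targets: int labels
    (0 = background). Box terms only contribute for foreground proposals."""
    n = len(cls_probs)
    loss = 0.0
    for p, c, t, t_star in zip(cls_probs, cls_targets, box_preds, box_targets):
        loss += -math.log(p[c] + 1e-8)          # classification term
        if c > 0:                                # foreground only
            loss += sum(smooth_l1(a - b) for a, b in zip(t, t_star))
    return loss / n
```

The smooth L1 term is quadratic near zero and linear for large errors, which limits the influence of badly mislocalised proposals on the gradient.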

Online Bounding Box Augmentation Strategy. Learning to update and improve bounding box spatial regions via low-shot regression is highly challenging. When initial inference and ground-truth box overlap is small, large corrections (spatial offsets) are required. Previous work (BCNet [15]) actively elects to exclude such challenging samples, further reducing already highly limited data. We alternatively fully exploit available annotations and push our regression branch output through a second forward pass of our OAM (red arrow in Fig. 2).

More specifically, after the first forward pass we select the top-K scoring proposals per class, for each class present in the image-level ground truth. K is defined as half the size of the proposal batch used to train the strongly supervised component; this accounts for the presence of irrelevant background proposals and allows us to fix this hyperparameter. Once the regression branch offsets have been applied, our ROI pooling layer ingests the refined proposals and yields a new set of bounding box features. The loss functions are evaluated using the updated box features and combined with the first-pass loss. The overall loss function of our OAM branch is then:

L_OAM = L^(1) + L^(2),

where superscripts (1) and (2) indicate the first and second pass, respectively. At every iteration, a batch with the same number of weak and strong images is used.

Motivation for our second pass is two-fold. Firstly, augmentation is intrinsically provided, as new sets of proposal candidates are generated for training the regression and classification tasks. In contrast, pre-computed proposals (predominant in WSOD) lacking additional external augmentation strategies provide only static input, reducing sample variability during training. Secondly, our regression task is regularised, as any weak proposals receiving modifications that hinder correct image-level classification are penalised.
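For illustration, applying the predicted regression offsets to a proposal before the second pass can be sketched as follows, assuming the standard R-CNN box parametrisation (the paper does not spell out the exact decoding, so this is an assumption):

```python
import math

def apply_offsets(box, deltas):
    """Decode predicted offsets into a refined box, using the standard
    R-CNN parametrisation (an assumption on our part).
    box: (cx, cy, w, h); deltas: (dx, dy, dw, dh)."""
    cx, cy, w, h = box
    dx, dy, dw, dh = deltas
    return (cx + dx * w,        # shift centre proportionally to box size
            cy + dy * h,
            w * math.exp(dw),   # scale width/height in log-space
            h * math.exp(dh))

# Shift the centre by half a box width along x and double the height.
refined = apply_offsets((10.0, 10.0, 4.0, 4.0), (0.5, 0.0, 0.0, math.log(2)))
```

The refined boxes are then re-pooled into features for the second forward pass, so the network sees a fresh set of proposal crops at every iteration.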

Figure 3: Proposed online pseudo-supervision generation strategy. At each iteration t, a new set of bounding boxes B_t is computed via classification, regression and NMS applied to the features of the previous set B_{t−1}. If bounding box predictions converge at iteration T, and all proposal classes agree with the image-level label, the weak image is annotated.

Online Pseudo-Supervision Generation. The key objective of our OAM is to generate reliable annotations on a large set of weakly labelled images in order to guide the training of a fully supervised second branch. As the OAM is trained concurrently with the second branch, it is critical to identify and add only reliable annotations to the pool of training images. Our rationale is that only these images should be used to train the final supervised detection network, while images that the joint detection module struggles to annotate with high confidence should not be used for model training, as they may hurt the training process and deteriorate detector performance.

During the early stages of the training process, uncertainty regarding both the class of bounding box proposals and the related regression refinement of box coordinates will be high. As training progresses and model predictive quality improves, confidence, accuracy and stability will increase. This results in an increasingly difficult set of images being accurately annotated. We propose to exploit this behaviour by introducing a supervision generator that is able to reliably identify annotated images, creating a set we refer to as semi-strong images, that are used to train the fully supervised branch. Intuitively, this set will comprise “easy” images in early stages of training (e.g. single instances, uniform colour backgrounds) and sample diversity will progressively increase as the model becomes more accurate (examples of images annotated by our OAM at different training epochs are reported in Fig. 4).


Figure 4: Examples of semi-strong images. First row: annotated semi-strong images at epoch , with iterations required for convergence (see text for details). Second row: examples of semi-strong annotation at pairs of early and late epochs. Magenta color: OAM annotation (class, bounding box score). Yellow: OICR [20] annotation. The results are obtained from a model trained on PASCAL VOC 2007 with 10 shot strong supervision.

In order to build the set of semi-strong images, with bounding boxes and associated annotation confidence scores, we propose the following mechanism. Given a weak image, at every iteration t we obtain a set of bounding boxes B_t after Non-Maximum Suppression (NMS) is performed on the output of the joint detection module, where each box carries a class label and box coordinates, using the boxes of the previous iteration as input candidate proposals. More specifically, the bounding boxes obtained at the previous iteration are fed again to the RoI pooling layer, providing a new set of image features from which new proposal coordinates are computed. The process iterates until bounding box prediction stabilises, and is stopped when the box sets agree for three consecutive iterations, i.e. for each bounding box in B_t there exists a corresponding box in B_{t−1} with a sufficiently high intersection-over-union (IoU) and a matching class prediction (a standard criterion for characterising object equivalence in detection methods). We assign a global confidence weight per image based on T, where T is defined as the first of the three consecutive iterations in which the box sets agree. Pseudo-code for the OAM algorithm is found in Supplementary Materials A.

The set of proposals obtained at iteration T constitutes the final bounding box annotation. Each box is weighted (box-level confidence) by its average IoU with the best matching box at all subsequent iterations. Boxes absent at a given iteration (IoU below the matching threshold) are, by definition, down-weighted due to being assigned an overlap of zero at that iteration (Fig. 3 shows an example). Images that do not reach convergence within the maximum number of iterations, or that fail to find any foreground proposals at the first iteration, are considered to be annotated with low confidence and are not added to the semi-strong pool. We cap the maximum number of updates to prevent overly long iteration chains and observe that, in practice, large iteration counts would only occur during early-stage training. Finally, an image is only added to the semi-strong pool if the set of obtained annotations contains all classes pertaining to the image-level label. We highlight that images requiring a large iteration count for convergence are assigned low confidence scores by design and therefore have limited influence on the training procedure of the second branch. As weak images are annotated by the proposed OAM during training, the semi-strong set expands, while annotations and confidences are refined as the model improves. At a given training step, a weak image that is not successfully annotated, yet was present in the pool of semi-strong images, will be removed. In this way, the set of semi-strong images can both expand and contract during training.
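The per-iteration agreement test between consecutive box sets can be sketched as follows; the IoU threshold value is our assumption, as the paper leaves it to the implementation details:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def boxes_agree(prev, curr, iou_thr=0.5):
    """True if every (class, box) pair in `curr` has a matching-class box
    in `prev` with IoU above `iou_thr` (threshold value is our assumption)."""
    return all(any(c == pc and iou(b, pb) >= iou_thr for pc, pb in prev)
               for c, b in curr)
```

Convergence is then declared once `boxes_agree` holds for three consecutive iterations, at which point the image and its boxes enter the semi-strong pool.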

3.2 Fully Supervised Branch

Concurrently to OAM training, the obtained strong and semi-strong sets of images are used to train a fully supervised second branch, which comprises bounding box classification and regression modules applied to the proposal features, in a similar fashion to Fast(er) R-CNN [8] style methods. In particular, at every training iteration a batch with the same number of strong and semi-strong images is used. The loss function for this branch is:

L_det = L_wCE(p, c*) + L_reg(t, t*),

where p are the ROI class predictions, t is the predicted offset between ROIs and targets, c* is the class label and t* is the target offset. Only ROIs with foreground labels contribute to the regression loss L_reg. The loss L_wCE constitutes a weighted cross-entropy for each image:

L_wCE = − W · (1/N) Σ_i w_i log p_i[c*_i],

where the N proposals in each batch contributing to the loss are indexed by i, the confidence for the matched GT proposal is denoted w_i and the image-level annotation confidence score is denoted W. Strong images are assigned image- and proposal-level weights of 1. In the early stages of the training process, the semi-strong annotations present some localisation inaccuracies, but are nonetheless highly informative for learning foreground vs background proposals. As training progresses, our OAM improves annotation quality with tighter object coverage, and these additional high accuracy annotations will more often contain proposals of exactly full object extent. Such annotations reinforce and strengthen the base signal, provided by strong images alone, towards better bounding-box classification. We also explored utilising semi-strong images to improve bounding-box regression analogously. In practice, however, this produced slightly worse results. We hypothesise that the discrete problem, associated with the bounded classification loss, affords more robustness to (early-stage) imperfect semi-strong annotations, and we therefore compute bounding box regression on only strong images in our final model. Collecting the introduced components results in the complete loss function for our model: L = L_OAM + L_det. At testing, only this fully supervised model is deployed.
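A minimal sketch of the weighted cross-entropy, with per-proposal (box-level) and per-image confidence weights (names are ours; real implementations operate on batched tensors):

```python
import math

def weighted_ce(probs, targets, prop_weights, image_weight):
    """Weighted cross-entropy over a batch of proposals from one image.
    probs: per-proposal class distributions; targets: int class labels;
    prop_weights: per-proposal (box-level) confidences; image_weight: the
    image-level annotation confidence (1.0 for strong images)."""
    n = len(probs)
    loss = -sum(w * math.log(p[t] + 1e-8)
                for p, t, w in zip(probs, targets, prop_weights))
    return image_weight * loss / n

# A strong image (all weights 1.0) incurs the plain cross-entropy, while a
# low-confidence semi-strong image contributes a proportionally smaller loss.
```

This is what gives semi-strong images a graded influence on training: confidently annotated images behave almost like strong ones, while borderline annotations are heavily down-weighted.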

4 Results

4.1 Datasets and Implementation Details

We evaluate the performance of our proposed method on two common detection benchmarks: PASCAL VOC 2007 [6] and the MS-COCO dataset [14], referred to as VOC07 and COCO14. VOC07 provides training and testing sets spanning 20 categories; COCO14 provides training and testing sets spanning 80 categories. Following evaluation strategies used in the literature, we evaluate detection accuracy on VOC07 using mean Average Precision (mAP), while we employ the standard COCO metrics on the COCO dataset. In the reported experiments, a reference to a percentage of labelled images dictates that this fraction of all images have bounding box annotations while the remaining images have image-level labels. With reference to our “k-shot” experimental setup, we define each class to have access to k images possessing bounding box annotations. All the experiments on VOC07 use the same data splits provided by BCNet [15]; experiments on COCO14 use random selection.

method backbone aero bike bird boat bottle bus car cat chair cow table dog horse moto person plant sheep sofa train tv mAP(%)
10% images
Fast R-CNN VGG 47.9 62.9 45.5 34.2 23.0 54.6 70.8 65.5 27.2 61.1 39.8 60.6 70.0 63.3 64.2 14.7 52.9 43.0 55.7 49.5 50.3
BAOD VGG 51.6 50.7 52.6 41.7 36.0 52.9 63.7 69.7 34.4 65.4 22.1 66.1 63.9 53.5 59.8 24.5 60.2 43.3 59.7 46.0 50.9
BCNet VGG 64.7 73.1 55.2 37.0 39.1 73.3 74.0 75.4 35.9 69.8 56.3 74.7 77.6 71.6 66.9 25.4 61.0 61.4 73.8 69.3 61.8
Ours VGG 65.6 73.1 59.0 49.4 42.5 72.5 78.3 76.4 35.4 72.3 57.6 73.6 80.0 72.5 71.1 28.3 64.6 55.3 71.4 66.2 63.3
EHSOD ResNet 60.6 65.2 55.0 35.4 32.8 66.1 71.3 75.3 38.4 54.1 26.5 71.7 65.0 67.8 63.0 27.7 52.6 48.6 70.9 57.3 55.3
BCNet ResNet 68.3 72.0 61.2 48.1 40.8 73.3 73.4 77.8 37.0 69.7 58.3 78.2 80.0 67.5 70.5 27.4 62.9 63.6 73.4 63.6 63.4
Ours ResNet 62.3 73.2 61.8 56.2 44.3 75.4 76.7 80.5 39.5 73.7 61.7 78.8 82.8 71.5 74.3 27.0 67.4 62.7 71.2 64.4 65.3
10 shots
BCNet VGG 59.7 69.1 44.6 29.4 40.1 69.2 73.2 72.9 32.9 58.1 53.3 66.7 71.3 66.0 61.7 24.6 53.0 62.0 67.2 67.4 57.1
Ours VGG 60.2 71.6 51.5 45.6 43.5 71.1 75.8 72.2 33.8 62.9 54.0 70.0 72.9 67.5 67.4 23.6 61.5 59.1 63.6 66.7 59.7
BCNet ResNet 63.4 69.4 54.7 39.5 35.9 70.6 71.8 71.8 33.5 64.6 50.0 65.3 72.7 62.5 61.6 29.2 54.5 63.3 66.7 69.4 58.5
Ours ResNet 61.7 72.3 56.5 52.0 37.2 71.3 74.6 77.8 36.0 67.1 58.3 78.1 77.6 68.0 71.8 25.5 63.6 62.4 72.7 61.2 62.3
Table 1: Detailed detection performance (%) on the VOC07 dataset. In all settings, the same BCNet data splits were employed [15].

We employ the popular network backbones VGG16 and ResNet101 in our experiments to retain consistency with recent approaches. We combine our OAM with Fast R-CNN [8] (using Edge Boxes [28]) and with Faster R-CNN using a trainable RPN [18]. Optimisation of all models is performed using SGD with weight decay and momentum. For experiments on the VOC07 dataset, the initial learning rate is used for the first epochs of training and then reduced for the final epochs. Analogously, for MS-COCO experiments, models are trained for 12 epochs, with the initial learning rate used in the first 9 epochs and then reduced for the final 3 epochs. Remaining model hyper-parameters follow the values reported in [15]. For data augmentation, we apply the same strategy as BCNet [15] for fair comparison, i.e. we bilinearly resize images to induce a minimum side length and, for fully supervised training, uniformly crop image regions with a fixed-size window. All experiments are implemented in PyTorch using a single GeForce GTX 1080 GPU.

4.2 Comparisons with State-of-the-art

Baselines: We evaluate our model with respect to two SOTA WSOD methods, PCL [19] and WSOD [26], that were evaluated on both VOC07 and COCO14. We further compare to three MSOD approaches: the two-level approach of BCNet [15] and the end-to-end methods BAOD [16] and EHSOD [7]. To the best of our knowledge, these are the only three methods adopting mixed supervision; all three were evaluated on VOC07. Results for BCNet, the best performing baseline on VOC07, are not available for the COCO dataset: the approach requires training two models (OICR and BCNet) with two separate sets of parameters that need to be adapted to the new dataset, making a fair comparison highly challenging and time consuming to produce. Similarly, EHSOD was evaluated only on the COCO 2017 database with a much larger set of annotated training images, making its results not directly comparable to our experiments and different from the low-shot setting studied in this work. Finally, we compare our results with Fast R-CNN and Faster R-CNN trained with full supervision (our upper bounds) and with low-shot supervision (i.e. 10% and 10-shot training data), using the same augmentation strategy as all previous models.

Method type | Method | 10-shots/WSOD: person AP (%) | 10-shots/WSOD: mAP (%) | 10% images: person AP (%) | 10% images: mAP (%)
fully supervised Fast R-CNN 58.0 42.1 64.2 50.3
fully supervised Faster R-CNN 54.3 37.7 55.7 46.7
WSOD PCL 17.8 43.5 - -
WSOD PCL + Fast R-CNN 15.8 44.2 - -
WSOD WSOD 21.9 53.6 - -
MSOD BAOD - - 59.8 50.9
MSOD EHSOD (ResNet + FPN) - - 63.0 55.3
MSOD BCNet 61.7 57.1 66.9 61.8
MSOD Ours 67.4 59.7 71.1 63.3
MSOD Ours + RPN 64.3 54.6 68.9 60.5
fully supervised Fast R-CNN - 100% images (our upper bound): 76.8 (person), 71.6 (mAP)
fully supervised Faster R-CNN - 100% images (our + RPN upper bound): 75.6 (person), 67.0 (mAP)
Table 2: Comparison to SOTA on VOC07 dataset. A VGG backbone is used unless specified. Gray rows correspond to methods learning an RPN (vs methods using precomputed proposals).

PASCAL VOC 2007: We report detailed per-class results, compared to competing MSOD approaches, in Tab. 8 using 10% annotated training images and 10 shots. We consistently outperform all competing methods in terms of mAP, with improvements with respect to BCNet in the 10-shot scenario (ResNet) and with respect to EHSOD in the 10% images scenario. We further highlight that BCNet constitutes a two-level, WSOD-dependent method. The influence of the chosen WSOD component is clearly visible; the object classes where their method excels, and surpasses our per-class performance, are the same classes for which their adopted WSOD component (OICR) provides the best initial bounding box estimations [20]. In Tab. 2, we provide further comparisons in the 10-shot and 10% images scenarios using precomputed proposals (white rows) and an RPN [18] (grey rows). We highlight that we use an off-the-shelf RPN without parameter optimisation, and thus expect its performance to be worse and not directly comparable to strategies relying on precomputed proposals. We further compare with top performing WSOD methods and Fast(er) R-CNN approaches, and highlight our performance on the "person" class, often reported as one of the most challenging classes for WSOD methods due to the large intra-class variability in terms of appearance [15, 26]. We significantly outperform all SOTA methods, and substantially improve with respect to WSOD methods, in particular for the person class, with only minor additional labelling cost. Comparing to Fast(er) R-CNN methods, we highlight that our OAM improves upon models trained on 10% data and 10 shots by a large margin, reaching performance close to the fully supervised upper bound.

Method type Method AP@.50 AP@[.50,.95]
fully supervised Fast R-CNN - 10 shots 22.1 10.0
fully supervised Faster R-CNN - 10 shots 16.1 6.7
WSOD PCL 19.4 8.5
WSOD PCL+ Fast R-CNN 19.6 9.2
WSOD WSOD^2 22.7 10.8
MSOD Ours - 10 shots 31.2 14.9
MSOD Ours + RPN - 10 shots 24.9 10.2
fully supervised Fast R-CNN - 100% data 49.9 29.0
fully supervised Faster R-CNN - 100% data 42.1 20.5
Table 3: Comparison with the SOTA on MS-COCO14 with 10-shot training examples (VGG backbone). Gray rows correspond to methods learning an RPN (vs methods using precomputed proposals).

MS-COCO14: We provide further comparison on an additional benchmark dataset in order to highlight model generalisability. We note that contemporary WSOD methods mainly focus on detection datasets of modest size such as VOC07. COCO14 is significantly larger, and constitutes a more challenging dataset due to both its increased size and the variability expressed in image content. Tab. 3 reports comparisons between our method (precomputed and RPN proposals) and the WSOD approaches PCL and WSOD^2 on COCO14 using 10-shot labelled images. As we compare solely to WSOD methods, we limit our experiments to the 10-shot setting, as 10% annotated examples would provide a very significant advantage over WSOD methods. We additionally provide comparison to Fast(er) R-CNN methods trained on 10 shots, as well as their fully supervised equivalents trained on 100% of images. We highlight that our method maintains robust performance and significantly outperforms both the WSOD methods and the 10-shot Fast(er) R-CNN models. This provides evidence in support of our claim that the strategy of providing mixed supervision significantly improves generalisation ability in settings that entail more difficult tasks with higher variability.
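For reference, AP@.50 counts a detection as correct when its IoU with a ground-truth box is at least 0.5, while AP@[.50,.95] averages AP over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of both ingredients follows; the helper names are ours, not from the paper or pycocotools:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def coco_ap(ap_at_iou):
    """Average AP over the ten COCO IoU thresholds 0.50, 0.55, ..., 0.95.
    `ap_at_iou` maps a threshold to the AP computed at that threshold."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at_iou(t) for t in thresholds) / len(thresholds)
```

This is why AP@[.50,.95] is always lower than AP@.50 in Tab. 3: the stricter thresholds penalise loosely localised boxes.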

10 shots AP (%)
SE BBA OAM 1B 2B aero bike bird boat bottle bus car cat chair cow table dog horse moto person plant sheep sofa train tv mAP(%)
42.0 57.1 40.2 34.2 30.3 62.6 69.0 62.5 23.2 63.8 33.0 58.5 72.2 63.3 62.9 20.8 54.9 44.2 54.3 55.2 50.2
30.9 53.2 35.8 27.8 19.9 51.6 65.8 54.7 19.3 48.3 27.8 46.3 57.7 54.3 58.0 14.9 49.1 37.5 43.8 44.7 42.1
44.3 60.2 40.4 37.8 28.1 67.0 72.8 64.1 24.2 64.6 40.9 60.5 70.5 61.6 63.5 16.1 55.0 46.2 57.5 58.0 51.7
47.3 62.1 42.4 35.2 28.2 67.0 72.8 65.1 21.7 65.3 43.4 61.4 70.6 63.5 63.0 16.5 57.6 45.8 58.7 54.7 52.1
50.3 67.3 49.8 44.1 35.9 64.3 72.7 70.3 32.6 57.7 44.5 66.3 65.6 68.3 62.8 25.2 60.0 48.8 62.6 64.5 55.7
61.4 71.0 48.5 42.9 37.8 69.8 75.6 72.8 34.0 63.2 47.6 71.9 71.1 71.1 64.6 25.7 63.4 55.6 61.9 65.8 58.8
57.9 71.4 48.2 42.7 38.0 71.4 75.5 75.5 34.0 67.1 54.0 71.4 74.3 69.4 65.7 23.7 61.6 56.1 61.0 65.0 59.2
60.2 71.6 51.5 45.6 43.5 71.1 75.8 72.2 33.8 62.9 54.0 70.0 72.9 67.5 67.4 23.6 61.5 59.1 63.6 66.7 59.7
Table 4: Ablative analysis of our method on VOC07 for the 10 shot scenario. SE: shared encoder, OAM: second branch training also on OAM generated semi-strong images, BBA: bounding box augmentation strategy. 1B: first branch output, 2B: second branch output.

4.3 Ablation Studies

We conduct experiments to understand the contribution of, and assign credit to, each of our OAM components, using the VOC07 dataset and a VGG backbone. Tab. 4 shows ablative results for the 10-shot scenario, while additional results for the 10% images scenario are reported in the supplementary materials. The studied components are SE: shared encoder (i.e. no SE entails independent branch training); OAM: the fully supervised branch is also trained on semi-strong images generated by the OAM; BBA: online bounding box augmentation strategy. For each configuration, we report mAP with respect to the output of the OAM (first branch; 1B) as well as the output of the fully supervised branch (second branch; 2B). We experimentally verify the importance of each component; performance consistently improves as new components are integrated. We note that the shared encoder strongly improves the fully supervised branch, while the OAM, and communication between branches, afford mutual branch improvement. Both performance gains can be attributed to the more discriminative full vs. partial object proposal features learned by the shared encoder.

5 Conclusion

We have introduced a novel online annotation module (OAM), trained using mixed supervision, that learns to generate annotations on the fly and thus affords concurrent training for fully supervised object detection. The OAM can be combined with any two-stage object detector and provides an intrinsic curriculum to improve the training procedure. Extensive experiments on two popular benchmarks show SOTA performance in the mixed supervision scenario, and significant improvement of two-stage detection methods in low-shot settings. Moreover, our method has the potential to increase performance on rare, long tail classes that typically only possess a handful of annotated examples.

Appendix A Online Pseudo-Supervision Generation algorithm

1: Input: initial set of N detections, stopping criterion, image feature vector, OAM layer parameters.
2: Output: M output detections with confidence weights; number of iterations required for convergence.
3: Initialise variables.
4: for t = 1 to K:
5:     pool features for the current detections via RoIPooling
6:     update the detection set from the pooled features
7:     if the detection set is empty: return (no detections)
8:     terminate at the first of three consecutive iterations where the
9:     averageOverlap between successive detection sets satisfies the stopping criterion
Algorithm 1 Online Pseudo-Supervision Generation algorithm
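The loop above can be sketched in plain Python. This is our reading of the algorithm, not the authors' implementation: `refine` stands in for the OAM's RoIPooling-plus-classification step, `iou_fn` for a box-overlap measure, and the `tau`/`patience` values are illustrative placeholders for the elided stopping criterion:

```python
def average_overlap(prev, curr, iou_fn):
    """Mean overlap between corresponding detections (assumes aligned lists)."""
    return sum(iou_fn(p, c) for p, c in zip(prev, curr)) / len(prev)

def generate_pseudo_annotations(refine, boxes, iou_fn, max_iters=20,
                                tau=0.9, patience=3):
    """Iteratively refine `boxes` until they stabilise.

    Returns (detections, iterations); an empty detection list means the
    image is discarded (no confident pseudo-annotations)."""
    stable = 0
    for t in range(1, max_iters + 1):
        new_boxes = refine(boxes)
        if not new_boxes:
            return [], t  # no detections survive refinement
        if len(new_boxes) == len(boxes) and \
           average_overlap(boxes, new_boxes, iou_fn) >= tau:
            stable += 1  # another consecutive "stable" iteration
        else:
            stable = 0
        boxes = new_boxes
        if stable >= patience:
            return boxes, t  # converged: emit pseudo-annotations
    return [], max_iters  # never converged: treat as unreliable
```

With an idempotent `refine`, convergence is declared after `patience` iterations, matching the "three consecutive iterations" condition in the pseudocode.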

Appendix B Ablation study: 10% data scenario

In Tab. 5, we report ablation study results for the proposed model (VGG16 backbone) where 10% of images from VOC07 provide strong supervision. Results for the analogous 10-shot scenario were reported in the main paper, Sec. 4.3. The considered ablation components are SE: presence of the shared encoder (i.e. no SE entails independent branch training); OAM: the fully supervised branch is additionally trained on semi-strong images (generated by the OAM); BBA: online bounding box augmentation strategy. For each configuration, we report mAP with respect to the output of the OAM (first branch; 1B) as well as the output of the fully supervised branch (second branch; 2B).

As was also observed for the 10-shot scenario (reported in Sec. 4.3 of the main paper), performance increases as additional components are added, providing further evidence for component validity and contribution. The performance gaps between differing ablations are smaller than in our analogous main paper experiment, due to the increased strong supervision available in the current case. Congruent with the results reported in Sec. 4.3, this ablation highlights that the shared encoder strongly improves the fully supervised branch, while the OAM, and communication between branches, afford mutual branch improvement.

10 % AP (%)
SE BBA OAM 1B 2B aero bike bird boat bottle bus car cat chair cow table dog horse moto person plant sheep sofa train tv mAP(%)
56.7 69.9 52.5 42.7 36.7 72.9 76.4 70.6 31.8 72.6 48.2 66.9 77.7 68.9 67.1 22.9 59.9 55.5 62.8 63.2 58.8
47.9 62.9 45.5 34.2 23. 54.6 70.8 65.5 27.2 61.1 39.8 60.6 70. 63.3 64.2 14.7 52.9 43. 55.7 49.5 50.3
57.3 67.4 51.4 42. 37.2 72.2 77.2 72.5 31.7 69.5 52.8 71.1 76.5 67.8 67.4 21.8 57.7 54.6 64.5 62.3 58.7
57.5 68.2 53. 41.8 37.4 70.1 77.2 73.2 33. 69.3 54.8 71.8 78.4 69. 67.7 22.2 59.4 54.3 66.1 62.3 59.3
64.3 69.7 56.1 48.3 39.8 71.4 78.1 76.5 37.8 71.1 56.4 76.5 76.5 70.9 68.4 25.7 62.1 55.7 70.2 65.8 62.1
67.1 70.3 56.2 48.4 42.1 71.7 76.9 76.7 39.2 71.5 60.1 74.1 79.6 71.3 70.9 26.3 61.6 56.4 71.1 66.1 62.9
66.4 71.8 57.3 50.3 41.5 72.6 78.5 77.3 38.4 71.6 59.8 74.3 79.4 71.5 71.4 26.1 61.8 57.6 72.3 66.5 63.3
65.6 73.1 59. 49.4 42.5 72.5 78.3 76.4 35.4 72.3 57.6 73.6 80. 72.5 71.1 28.3 64.6 55.3 71.4 66.2 63.3
Table 5: Ablative analysis of our method using VOC07 in the 10% scenario. SE: shared encoder, OAM: second branch trained also using OAM generated semi-strong images, BBA: bounding box augmentation strategy. 1B: first branch output, 2B: second branch output.

Appendix C Sensitivity to the selected annotation

In order to test the sensitivity of our method with respect to annotated image-subset selection variance, we perform a five-fold experiment under the 10-shot scenario. We test using VOC07 and a standard VGG16 backbone architecture. This scenario represents the setting most susceptible to image subset selection, as the pool of strong images is the smallest among all considered scenarios (including the MS-COCO experiments). It can be observed in Table 6 that image selection variance is small. Varying the selected image subset has only a minor effect on the final mAP, providing evidence towards the robustness of our proposed approach. This variance intuitively reduces further in cases where the model is trained using a larger number of fully annotated images.

SPLIT aero bike bird boat bottle bus car cat chair cow table dog horse moto person plant sheep sofa train tv mAP (%)
1 64.1 73.7 53.0 49.2 46.8 73.4 75.1 70.5 33.1 73.4 46.9 75.1 72.4 69.8 63.8 31.0 62.6 52.2 69.2 62.5 60.9
2 60.2 71.6 51.5 45.6 43.5 71.1 75.8 72.2 33.8 62.9 54.0 70.0 72.9 67.5 67.4 23.6 61.5 59.1 63.6 66.7 59.7
3 62.5 73.9 60.1 42.0 40.0 74.1 74.7 75.2 33.7 74.5 51.4 71.4 79.9 71.9 64.6 30.3 63.6 55.8 64.8 66.8 61.6
4 62.2 75.1 56.1 42.7 38.9 73.4 75.3 75.0 32.1 68.1 46.3 69.6 75.3 71.1 62.5 26.4 59.3 54.3 69.4 63.4 59.8
5 64.0 73.5 60.1 50.6 38.9 72.6 75.6 70.3 32.7 70.1 55.4 73.9 75.1 70.2 64.3 25.6 62.6 49.2 67.9 65.3 60.8
mean 62.6 73.6 56.2 46.0 41.6 72.9 75.3 72.6 33.1 69.8 50.8 72.0 75.1 70.1 64.5 27.4 61.9 54.1 67.0 64.9 60.6
std 1.6 1.3 3.9 3.8 3.4 1.1 0.4 2.4 0.7 4.6 4.1 2.4 2.3 1.7 1.8 3.1 1.6 3.7 2.6 1.9 0.8
Table 6: Five-fold experiment for the 10-shot scenario using VOC07 and a standard VGG16 backbone. Fold mean and standard deviation statistics are reported in the final rows. The second split is the split used in [15], and the split used for all our remaining experiments.
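The mean and standard-deviation rows of Table 6 can be reproduced directly from the per-split mAPs; the reported 60.6 and 0.8 correspond to the sample standard deviation:

```python
import statistics

# Per-split mAP (%) from the final column of Table 6.
split_maps = [60.9, 59.7, 61.6, 59.8, 60.8]

mean_map = statistics.mean(split_maps)   # 60.56, reported as 60.6
std_map = statistics.stdev(split_maps)   # sample std (ddof=1), reported as 0.8
```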

Appendix D MS-COCO 2017 comparisons

The EHSOD [7] method reported results using the COCO17 dataset, corresponding to a 10% training data scenario. We thus report here a comparison between our method (considering both precomputed and RPN [18] proposal setups) and the EHSOD mixed supervision approach. We also provide additional comparison to both Fast and Faster R-CNN methods trained using the same 10% of COCO17 images, as well as their fully supervised equivalents using 100% of the training images. Results are found in Tab. 7. We note this setting corresponds to a much larger set of fully annotated images than the ones used in all other experiments.

It can be observed that, in this setting, our model performs on par with EHSOD when using RPN proposals, while significantly outperforming their approach when precomputed (Edge Boxes) proposals are employed. Moreover, we observe that our method also performs on par with the Fast(er) R-CNN baselines in the 10% images scenario. Interestingly, we note only a reasonably modest gap between Fast(er) R-CNN performance in the considered 10% and 100% settings. This suggests that the gap between the 10% and 100% settings can be closed by providing the network with images containing object class appearance outliers, or with images containing difficult, crowded scenes. As a consequence, the problem, in this setting, can be considered to have a greater affinity with a fully supervised task than with a low-shot setting. This observation provides some explanation towards why our method provides limited improvement here. Images required to improve detector performance (high information content) may not be annotated with high confidence, and are therefore not considered for object detector training. As highlighted in our future work discussion (main paper; Sec. 5), we believe active learning strategies may prove fruitful in such cases.

Method type Method AP@.50 AP@[.50,.95]
fully supervised Fast RCNN - 10% data 53.7 31.6
fully supervised Faster RCNN - 10% data 46.3 25.6
MSOD EHSOD - 10% data 46.8 -
MSOD Ours - 10% data 54.2 31.6
MSOD Ours + RPN - 10% data 46.0 25.4
fully supervised Fast RCNN - 100% data 61.6 48.0
fully supervised Faster RCNN - 100% data 51.1 28.8
Table 7: Comparison with the state of the art on COCO17. All models were trained with a ResNet101 backbone [9], while EHSOD uses a feature pyramid network (FPN). Gray rows correspond to methods learning an RPN [18] (vs. methods using precomputed proposals).

Appendix E Additional PASCAL VOC 07 results

We report here detailed per-class detection results, comparing competing MSOD approaches under both the 16% annotated training images and 20-shot scenarios. Results are found in Tab. 8. We consistently outperform all competing methods in terms of mAP, with an improvement with respect to BCNet in the 20-shot scenario (ResNet101 [9] backbone). We highlight that in the 16% training image scenario, we report EHSOD and BAOD results trained using a larger fraction of training images, as only these results were available. This highlights the ability of our method to outperform these competing models even in the case where we have access to fewer training examples.

method backbone aero bike bird boat bottle bus car cat chair cow table dog horse moto person plant sheep sofa train tv mAP(%)
BCNet ResNet101 66.5 67.6 56.7 40.5 40.4 72.8 71.3 76.6 39.4 65.0 54.1 71.4 72.9 66.6 66.0 26.1 59.0 65.5 67.7 67.6 60.7
Ours ResNet101 66.2 73.3 57.0 53.2 42.8 76.0 76.0 79.1 38.6 74.6 61.1 79.9 77.4 70.2 73.1 26.7 64.3 65.7 67.6 64.5 64.4
16% images
BCNet VGG16 63.7 77.2 62.9 48.0 39.7 73.3 76.0 78.0 39.4 72.9 56.1 75.4 79.9 69.5 70.2 31.0 60.6 62.2 75.0 68.6 64.0
Ours VGG16 66.5 76.2 59.1 53.0 49.2 77.1 79.4 76.9 41.4 75.4 63.7 80.2 80.9 71.6 73.0 35.7 67.5 64.0 73.5 68.9 66.7

ResNet101 57.0 62.2 60.0 46.6 46.7 60.0 70.8 74.4 40.5 71.9 30.2 72.7 73.8 64.7 69.8 37.2 62.9 48.4 64.1 59.1 58.6
BCNet ResNet101 67.3 74.2 65.2 51.7 40.8 74.1 72.7 77.2 39.2 70.3 59.9 77.2 78.5 69.9 68.6 30.6 60.0 68.2 75.9 66.8 64.4
ResNet101 65.5 72.3 66.7 45.6 50.8 72.2 77.8 82.2 44.3 73.1 44.8 79.3 76.0 73.0 73.8 35.5 63.0 62.1 74.0 65.5 64.9
Ours ResNet101 65.8 78.8 63.7 55.3 49.7 73.0 79.6 84.5 42.7 75.0 61.6 84.7 83.3 71.8 75.1 33.9 64.6 64.9 73.3 66.2 67.4
Table 8: Detailed per-class detection performance (%) on VOC07. For each instance of our model, identical data splits from the BCNet paper [15] were used. Marked method rows indicate models trained using a larger fraction of images, due to the availability of comparable results.

Appendix F Additional visual results

f.1 Annotated Semi-Strong Images

In Fig. 5 we provide additional examples of images annotated by our OAM, named semi-strong images, during progressive training epochs. These online annotations are obtained by our model using VOC07 data with 10-shot strong supervision (further examples of semi-strong images are reported in the main manuscript, Fig. 4). We observe that simple, uncomplicated images are typically labelled with high confidence when training begins, while during later training stages more complex images, with increased appearance diversity and multiple, overlapping object instances, are added to the pool by our OAM. In general, the number of iterations required for OAM convergence ranged from 1-10 (first 5 epochs) to 1-3 (end of training), and the semi-strong set contained approx. 10% (first epochs) to 45-60% (end of training) of the annotated weak images.

Furthermore, we compare the annotations obtained by our method (magenta) with annotations generated by a popular Weakly Supervised Object Detection (WSOD) approach, OICR [20] (yellow detections). We highlight that, from early epochs, our method provides better, more reliable annotations that are then employed for concurrent object detector training. Moreover, our annotations cover the full extent of the object of interest. This can be explained by the high quality information distilled from the low-shot fully annotated images (strong images), while the WSOD method annotations exhibit the well understood tendency to focus on object parts and on (only) the most discriminative object in the image.

Figure 5: Examples of semi-strong images across training epochs, with the number of iterations required for OAM convergence (definition in the main paper, Sec. 3). Magenta: our OAM annotations (class, bounding box score). Yellow: OICR [20] (WSOD) annotations. Results are obtained using a model trained on VOC07 with 10-shot strong supervision.

f.2 Examples of Detections

Further exemplar test-time detections, obtained by our method with 10-shot strong supervision, are shown in Fig. 6 and Fig. 7 for VOC07 and COCO14 test images respectively. Thanks to the low-shot set of fully annotated images leveraged by our model, the obtained detections cover the full object extent, even for classes typically difficult for WSOD (e.g. person). In comparison with WSOD approaches, our method avoids enclosing only the most discriminative object parts. Moreover, multiple instances of the same class within a single image can now be captured; this is usually problematic when training a model relying only on image-level supervision, as in WSOD.

Figure 6: Detection results on the VOC07 test set, obtained from a model trained on VOC07 with 10-shot strong supervision and a VGG16 backbone.
Figure 7: Detection results on the COCO14 test set, obtained from a model trained on COCO14 with 10-shot strong supervision and a VGG16 backbone.

Appendix G Common Modes of Failure

We conducted an additional investigation to identify instances of detection failures for our model trained with 10-shot supervision. For both datasets considered in our work (VOC07, COCO14), the most common mode of failure is multiple detections for a single object of interest. Given that the model is trained with only 10 shots, we partially attribute such failures to the (weakly-learned) bounding box regressor. In corroboration with competing work [15, 7], we note bounding box regression is an intrinsically difficult task, especially in cases where limited training data is available, or where substantial background pixels need to be included to provide an optimal object bounding box, such as for objects with elongated or articulated shapes. As discussed in the main paper (Sec. 5), future work may explore strengthening regression task performance.

Figure 8: Example detection failures obtained from our proposed model, trained on VOC07 (left-most two images) and on COCO14 (right-most three images) with 10-shot supervision.


  • [1] A. Arun, C. Jawahar, and M. P. Kumar (2019) Dissimilarity coefficient based weakly supervised object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9432–9441. Cited by: §1, §2.
  • [2] H. Bilen and A. Vedaldi (2016) Weakly supervised deep detection networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2846–2854. Cited by: §1, §2, §3.1.
  • [3] Z. Cai and N. Vasconcelos (2019) Cascade r-cnn: high quality object detection and instance segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1.
  • [4] H. Chen, Y. Wang, G. Wang, and Y. Qiao (2018) LSTD: a low-shot transfer detector for object detection. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §2.
  • [5] X. Dong, L. Zheng, F. Ma, Y. Yang, and D. Meng (2018) Few-example object detection with model communication. IEEE transactions on pattern analysis and machine intelligence 41 (7), pp. 1641–1654. Cited by: §1, §2.
  • [6] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2015) The pascal visual object classes challenge: a retrospective. International journal of computer vision 111 (1), pp. 98–136. Cited by: §4.1.
  • [7] L. Fang, H. Xu, Z. Liu, S. Parisot, and Z. Li (2020) EHSOD: CAM-Guided End-to-End Hybrid-Supervised Object Detection with cascade refinement. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, pp. xxx–yyy. Cited by: Appendix D, Appendix G, Appendices, §1, §2, §3.1, §4.2.
  • [8] R. Girshick (2015) Fast R-CNN. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §1, §1, §3.1, §3.2, §4.1.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Table 7, Appendix E.
  • [10] Z. Jie, Y. Wei, X. Jin, J. Feng, and W. Liu (2017) Deep self-taught learning for weakly supervised object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1377–1385. Cited by: §2.
  • [11] B. Kang, Z. Liu, X. Wang, F. Yu, J. Feng, and T. Darrell (2019) Few-shot object detection via feature reweighting. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8420–8429. Cited by: §1, §2.
  • [12] L. Karlinsky, J. Shtok, S. Harary, E. Schwartz, A. Aides, R. Feris, R. Giryes, and A. M. Bronstein (2019) RepMet: representative-based metric learning for classification and few-shot object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5197–5206. Cited by: §2.
  • [13] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §1.
  • [14] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §4.1.
  • [15] T. Pan, B. Wang, G. Ding, J. Han, and J. Yong (2019) Low shot box correction for weakly supervised object detection. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 890–896. Cited by: Table 6, Table 8, Appendix G, §1, §2, §2, §3.1, §4.1, §4.1, §4.2, §4.2, Table 1.
  • [16] A. Pardo, M. Xu, A. Thabet, P. Arbelaez, and B. Ghanem (2019) BAOD: budget-aware object detection. arXiv preprint arXiv:1904.05443. Cited by: §2, §4.2.
  • [17] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788. Cited by: §1.
  • [18] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: Table 7, Appendix D, §1, §1, §3, §4.1, §4.2.
  • [19] P. Tang, X. Wang, S. Bai, W. Shen, X. Bai, W. Liu, and A. L. Yuille (2018) Pcl: proposal cluster learning for weakly supervised object detection. IEEE transactions on pattern analysis and machine intelligence. Cited by: §1, §2, §4.2.
  • [20] P. Tang, X. Wang, X. Bai, and W. Liu (2017) Multiple instance detection network with online instance classifier refinement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2843–2851. Cited by: Figure 5, §F.1, §1, §2, Figure 4, §4.2.
  • [21] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders (2013) Selective search for object recognition. International journal of computer vision 104 (2), pp. 154–171. Cited by: §2, §3.
  • [22] F. Wan, C. Liu, W. Ke, X. Ji, J. Jiao, and Q. Ye (2019) C-mil: continuation multiple instance learning for weakly supervised object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2199–2208. Cited by: §1, §2.
  • [23] F. Wan, P. Wei, J. Jiao, Z. Han, and Q. Ye (2019) Min-entropy latent model for weakly supervised object detection. IEEE Trans. Pattern Anal. Mach. Intell.. Cited by: §2.
  • [24] Y. Wei, Z. Shen, B. Cheng, H. Shi, J. Xiong, J. Feng, and T. Huang (2018) Ts2c: tight box mining with surrounding segmentation context for weakly supervised object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 434–450. Cited by: §2.
  • [25] X. Yan, Z. Chen, A. Xu, X. Wang, X. Liang, and L. Lin (2019) Meta r-cnn: towards general solver for instance-level low-shot learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9577–9586. Cited by: §1, §2.
  • [26] Z. Zeng, B. Liu, J. Fu, H. Chao, and L. Zhang (2019) WSOD2: learning bottom-up and top-down objectness distillation for weakly-supervised object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8292–8300. Cited by: §1, §2, §4.2, §4.2.
  • [27] Z. Zhao, P. Zheng, S. Xu, and X. Wu (2019) Object detection with deep learning: a review. IEEE Transactions on Neural Networks and Learning Systems 30 (11), pp. 3212–3232. Cited by: §1.
  • [28] C. L. Zitnick and P. Dollár (2014) Edge boxes: locating object proposals from edges. In European conference on computer vision, pp. 391–405. Cited by: §2, §3, §4.1.