Few-shot Object Detection via Feature Reweighting

12/05/2018, by Bingyi Kang, et al.

This work aims to solve the challenging few-shot object detection problem, where only a few annotated examples are available for each object category to train a detection model. Such an ability to learn to detect an object from just a few examples is common for human vision systems but remains absent for computer vision systems. Though few-shot meta-learning offers a promising solution technique, previous works mostly target the task of image classification and are not directly applicable to the much more complicated object detection task. In this work, we propose a novel meta-learning based model with a carefully designed architecture, which consists of a meta-model and a base detection model. The base detection model is trained on several base classes with sufficient samples to offer basis features. The meta-model is trained to reweight the importance of features from the base detection model over the input image and adapt these features to assist novel object detection from a few examples. The meta-model is light-weight, end-to-end trainable, and able to quickly endow the base model with the ability to detect novel objects. Through experiments we demonstrate that our model outperforms baselines by a large margin for few-shot object detection, on multiple datasets and settings. Our model also exhibits fast adaptation speed to novel few-shot classes.

1 Introduction

The recent success of deep convolutional networks in computer vision [15, 25, 17] relies heavily on a huge amount of labeled training data [7]. When the labeled data are scarce, neural networks can severely overfit and fail to generalize well. In contrast, human vision systems tend to exhibit strong performance in such tasks: children can learn a visual concept quickly from very few given examples. Such an ability to learn from few examples is also important for computer vision systems, since for some object categories labels are naturally scarce (e.g., California Firetrucks, or endangered animals). More crucially, accurate labels are extremely expensive to obtain in some situations, for example in certain medical problems [37].

Few-shot learning [22, 20, 19] aims to solve this problem. Various methods have been explored for this research topic in prior works. For example, metric learning-based methods [19, 43] learn to compare a target image with a set of few-shot labeled images and output the class that gives the highest similarity score; meta-optimization based methods [32, 13] learn a model that can quickly adapt and converge for a new few-shot task. However, most prior works only consider the image classification task on small-scale datasets such as Omniglot [20] and MiniImageNet [43], and the considered few-shot tasks are often very simple, e.g., 5-way classification [43, 13].

Figure 1: We propose a novel few-shot detection model that, in the first phase, trains a base detection model and a meta-model on base classes with sufficiently many samples. In the second phase, the meta-model is trained to quickly adapt the detection model to detect novel objects from a few examples, by reweighting its intermediate features to be more discriminative and sensitive to the novel classes.

In this work, we consider a more challenging computer vision problem for real-world images, namely few-shot object detection, where only a few examples with annotated bounding boxes are provided. Object detection is much more difficult as it involves not only class predictions but also localization of the objects. Therefore, conventional methods for few-shot learning are not directly applicable. Other settings with limited labels have been explored for object detection, for example, the weakly-supervised setting [3, 8, 40] (only image-level labels) and the semi-supervised setting [28, 45, 9] (given a large amount of unlabeled images). However, these problem settings are not as challenging as the few-shot learning one, and sometimes even collecting that many labels can be unrealistic.

To solve the few-shot object detection problem, we propose a novel meta-learning based framework, as shown in Figure 1. The framework is motivated by fully exploiting the knowledge learned from base objects to help detect objects from novel categories with only a few examples. We find that when training a preliminary deep CNN based object detection model on base classes with abundant examples, the model learns intermediate features at its top layers that are specific to certain object attributes. These features can implicitly compose high-level representations for different objects. Therefore, instead of learning representations for novel objects from the lowest level, which consumes large amounts of data, our framework learns how to adjust the intermediate features obtained from base categories and detect novel objects accordingly.

Our proposed framework thus consists of a detection model that offers basic features and a light-weight meta-model that learns to adapt these features to novel objects with reference to a few examples (Figure 2). To make the adaptation fast, we only require the meta-model to learn to predict reweighting coefficients over the basic features and adjust their contributions to the final detection. The meta-model takes in examples from N classes and, through end-to-end training, predicts N feature reweightings, each responsible for detecting the corresponding class. With different feature reweightings, the confidence score and bounding box coordinates are adjusted accordingly. A final softmax layer is applied on the confidence scores for calibration. As illustrated in Figure 1, the whole few-shot detection model is trained in two phases: first learning representations on base classes and then fine-tuning for adapting to novel classes. The feature reweighting module is trained in both phases to meta-learn how to reweight the features for both base and novel classes. Our proposed few-shot detector outperforms competitive baseline methods on multiple datasets and various settings. Besides, our model also demonstrates good transferability from one dataset to another through the feature reweighting.

Our contributions can be summarized as follows: 1. We study the problem of few-shot object detection, which is of great practical value but a less explored task than image classification in the few-shot learning literature. 2. We design a meta-learning based model, where the feature importances in the detector are predicted by a meta-model based on the object category to detect. The model is simple and highly data-efficient. 3. We demonstrate through experiments that our model can outperform baseline methods by a large margin, especially when the number of labels is extremely low. Our model can also adapt to novel classes significantly faster than baselines. 4. Through ablation studies we analyze the effects of certain components of our proposed model and reveal some good practices towards solving the few-shot detection problem.

2 Related Work

General Object Detection

Deep learning based object detection systems can be divided into two categories: proposal-based and proposal-free. RCNN series detectors fall into the first category. RCNN [15] uses pre-trained convolutional networks to classify the region proposals generated by selective search [42]. SPP-Net [18] and Fast-RCNN [14] improve RCNN with a Region of Interest (RoI) pooling layer to extract features on the convolutional feature maps. Faster-RCNN [36] introduces a region proposal network (RPN) to improve the efficiency of generating proposals. In contrast, YOLO [33] is a proposal-free method, which uses a single fully-convolutional network to directly perform class and bounding box predictions. SSD [24] improves YOLO by using default boxes (anchors) to adjust to various object shapes. YOLOv2 [34] improves YOLO with a series of techniques, e.g., multi-scale training and a new network architecture (DarkNet-19). Compared with proposal-based methods, proposal-free methods do not require a per-region classifier, and thus are conceptually simpler and significantly faster. Our few-shot detector is built on the YOLOv2 architecture.

Few-shot Learning

Few-shot learning refers to learning from just a few training examples per class, a task that humans usually perform better than traditional machine learning algorithms. [22] uses Bayesian inference to generalize knowledge from a pre-trained model to tackle the one-shot learning problem. [21] proposes a Hierarchical Bayesian one-shot learning system that exploits compositionality and causality. [16, 44] introduce image hallucination techniques to augment the training data and improve generalization from base categories to few-shot novel categories. [26] considers the problem of adapting to novel classes in a new domain. [10] assumes abundant unlabeled images and adopts label propagation in a semi-supervised setting.

An increasingly popular solution for few-shot learning is meta-learning (also referred to as learning to learn), which can further be divided into three categories: a) Metric learning. Siamese Networks [19] rank feature similarities between inputs to make predictions on unknown classes. Matching Networks [43] learn the task of finding the most similar class for the target image among a small set of labeled images. Prototypical Networks [39] extend Matching Networks by producing a linear classifier instead of a weighted nearest neighbor for each class. Relation Networks [41] learn a distance metric to compare the target image to a few labeled images. b) Optimization for fast adaptation. [32] proposes an LSTM meta-learner that is trained to quickly converge a learner classifier on new few-shot tasks. Model-Agnostic Meta-Learning (MAML) [13] optimizes a task-agnostic network so that a few gradient updates on its parameters lead to good performance on new few-shot tasks. c) Parameter prediction. Learnet [2] dynamically learns the parameters of factorized weight layers based on a single example of each class to realize one-shot learning. Weight imprinting [30] sets weights for a new category using a scaled embedding of labeled examples, adapting to novel categories without training. These previous works only tackle the image classification task, while our work focuses on object detection, a more challenging and realistic problem in computer vision.

Object Detection with Limited Labels

There are a number of prior works on detection focusing on settings with limited labels. The weakly-supervised setting [3, 8, 40] considers the problem of training object detectors with only image-level labels, but without bounding box annotations, which are more expensive to obtain. Few-example object detection [28, 45, 9] assumes only a few labeled bounding boxes per class, but relies on abundant unlabeled images to generate trustworthy pseudo annotations for training. Zero-shot object detection [1, 31, 48] aims to detect previously unseen object categories, and thus usually requires external information such as relations between classes. Different from those settings, our few-shot detector uses very few bounding box annotations (1-10) for each novel class, without the need for unlabeled images or external knowledge. [4] studies a similar setting but only in a transfer learning context, where the target domain images contain only novel classes without base classes.

Figure 2: The architecture of our proposed few-shot detection model. It consists of a base feature extractor and a meta-model. The base model follows the one-stage detector architecture and directly regresses the objectness score (o), bounding box location (x, y, h, w) and classification score (c). The meta-model is trained to map N classes of inputs to N feature reweighting coefficient vectors, each responsible for adjusting the basis features from the DarkNet to detect the objects of the corresponding class. A softmax based classification score normalization is imposed on the final output.

3 Few-shot Detection via Feature Reweighting

In this section we describe the problem setup, our model design and training phases in detail.

3.1 Few-shot Detection Setting

In this work, we define a novel and realistic setting for few-shot object detection, in which there are two kinds of data available for training, i.e., the base classes and the novel classes. For the base classes, abundant annotated data are available, while only a few labeled samples are given for the novel classes [16]. This setting is worth exploring since it aligns well with a practical situation—one may expect to deploy a pre-trained detector for new classes with only a few labeled samples. More specifically, large-scale object detection datasets (e.g., VOC, COCO) are available to pre-train a detection model. However, the number of object categories therein is quite limited, especially compared to the vast number of object categories in the real world. Thus, solving this few-shot object detection problem is highly desirable.

To instantiate such a problem setting and effectively evaluate different solutions, we propose to split the objects provided by a detection dataset into base and novel classes. For base classes, we retain all the bounding box labels. For novel classes, we sample a set of images so that each class has exactly k bounding box annotations. The usage of the sampled data will become clear when we introduce the model training.
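A minimal sketch of one way to construct such a k-shot split is given below; the greedy, seed-controlled image sampling is only an illustrative assumption, not a prescribed protocol.

```python
import random
from collections import defaultdict

def sample_k_shot(annotations, novel_classes, k, seed=0):
    """Greedily pick images until every novel class has exactly k bounding boxes.

    annotations: list of (image_id, [class_id, ...]) pairs, one entry per image.
    Returns the set of selected image ids.
    """
    random.seed(seed)
    counts = defaultdict(int)
    selected = set()
    for image_id, classes in random.sample(annotations, len(annotations)):
        novel_boxes = [c for c in classes if c in novel_classes]
        # keep the image only if it does not push any novel class beyond k boxes
        if novel_boxes and all(counts[c] + novel_boxes.count(c) <= k for c in set(novel_boxes)):
            selected.add(image_id)
            for c in novel_boxes:
                counts[c] += 1
        if all(counts[c] >= k for c in novel_classes):
            break
    return selected
```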

3.2 Feature Reweighting for Detection

Our proposed few-shot detection model is based on the proposal-free detection framework YOLOv2 [34]. A typical proposal-free detector consists of a feature extractor that is shared among all classes and a final prediction layer that outputs object classification and bounding box coordinates. However, it is known that certain features are more important for certain classes [47, 46]. Moreover, some classes tend to share common features (e.g., cat and dog). For example, if our goal is to detect cats, the features for detecting dogs are probably more useful than those for detecting aeroplanes. This motivates us to enhance the reusability of features by utilizing the feature representation learned on base classes to help better detect the novel classes. Therefore, we propose to assign a weight to each of the extracted features while detecting a specific class of objects, such that the model can put more attention on features related to this class and ignore features that are irrelevant.

To this end, as shown in Figure 2, we carefully design a light-weight convolutional neural network, namely a meta-model, to generate such weight vectors of reweighting coefficients. Formally, we denote the meta-model as \mathcal{M}, and the reference images and their associated object bounding box annotations as I_i and M_i respectively, for class i, where i = 1, ..., N and N is the number of classes. The meta-model takes in one annotated sample from each of the N classes. Such annotated samples are used as references indicating the target classes to detect. From them, the meta-model learns to predict N sets of coefficients w_i, each responsible for adjusting the features to detect the corresponding class i. The reweighting vector is predicted as w_i = \mathcal{M}(I_i, M_i). The basis features are provided by a DarkNet-19 based feature extractor \mathcal{D} applied to the input image I: F = \mathcal{D}(I). The number of feature maps in F is equal to the number of weights in w_i. Our proposed model obtains the class-specific features F_i for class i by the following feature reweighting:

F_i = F \otimes w_i,        (1)

where \otimes denotes channel-wise multiplication. The reweighting operation can be easily implemented using a 1×1 depth-wise convolution.
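As a concrete illustration, the following PyTorch sketch (with illustrative tensor shapes; a 1024-channel 13×13 feature map is assumed) shows that the channel-wise multiplication in Equation 1 is exactly a 1×1 depth-wise convolution whose kernels are the predicted coefficients.

```python
import torch
import torch.nn.functional as F_nn

C, H, W = 1024, 13, 13          # number of basis feature maps and spatial size (illustrative)
feat = torch.randn(1, C, H, W)  # basis features F extracted by the backbone
w_i = torch.randn(C)            # reweighting vector w_i predicted by the meta-model for class i

# Channel-wise multiplication (Equation 1): each feature map is scaled by its coefficient.
feat_i = feat * w_i.view(1, C, 1, 1)

# Equivalent 1x1 depth-wise convolution: one 1x1 kernel per channel, groups=C.
kernel = w_i.view(C, 1, 1, 1)
feat_i_conv = F_nn.conv2d(feat, kernel, groups=C)

assert torch.allclose(feat_i, feat_i_conv, atol=1e-6)
```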

After acquiring the class-specific features F_i, we put a prediction layer \mathcal{P} on top of F_i to directly regress the detection-relevant output, following the common practice of current proposal-free detectors [33, 35, 34]. The output corresponds to the objectness score o, bounding box location offsets (x, y, h, w), and classification score c for each of a predefined set of anchors. The prediction layer is shared across different classes. The above feature reweighting (Equation 1) has adapted the features to the novel classes. These features are then fed into the prediction layer to generate detections for the target classes:

\{o_i, x_i, y_i, h_i, w_i, c_i\} = \mathcal{P}(F_i),        (2)

where c_i is a one-versus-all classification score indicating the probability that the corresponding object belongs to class i.
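The sketch below illustrates how a shared prediction layer \mathcal{P} can be applied on top of the reweighted features. The module structure and the per-anchor output layout (o, x, y, h, w, c with 5 anchors) are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class FewShotHead(nn.Module):
    """Shared prediction layer P applied to class-specific reweighted features."""
    def __init__(self, in_channels=1024, num_anchors=5):
        super().__init__()
        # Per anchor: objectness o, box offsets (x, y, h, w), one-versus-all score c.
        self.pred = nn.Conv2d(in_channels, num_anchors * 6, kernel_size=1)

    def forward(self, feat, class_weights):
        """feat: (B, C, H, W) basis features; class_weights: (N, C) meta-model outputs."""
        outputs = []
        for w_i in class_weights:                    # one reweighting vector per class
            feat_i = feat * w_i.view(1, -1, 1, 1)    # Equation 1: channel-wise reweighting
            outputs.append(self.pred(feat_i))        # Equation 2: shared detection head
        # (N, B, num_anchors*6, H, W): per-class objectness, box offsets and scores
        return torch.stack(outputs, dim=0)
```

The N per-class one-versus-all scores c_i collected this way are then calibrated with the softmax described in subsection 3.3.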

3.3 Training Details

Meta-model Input

The input of the meta-model should be the object of interest. However, in the object detection task, one image usually contains objects of multiple classes. To let the meta-model know what the target class is, in addition to the three RGB channels, we include an additional "mask" channel M_i that has only binary values: at the position of the object of interest the value is 1, otherwise it is 0 (see bottom-left of Figure 2). If multiple target objects are present in the image, only one object is used. This additional mask channel gives the meta-model the knowledge of which part of the image's information it should use and which part should be considered as "background".
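For clarity, a minimal sketch of assembling such a four-channel reference input is shown below; the function name, box format, and image size are assumptions for illustration.

```python
import torch

def make_meta_input(image, box, size=416):
    """Stack an RGB reference image with a binary mask marking one target object.

    image: float tensor (3, size, size), the resized reference image
    box:   (x1, y1, x2, y2) pixel coordinates of one annotated object of the target class
    """
    mask = torch.zeros(1, size, size)
    x1, y1, x2, y2 = [int(v) for v in box]
    mask[:, y1:y2, x1:x2] = 1.0             # 1 on the object of interest, 0 elsewhere
    return torch.cat([image, mask], dim=0)  # (4, size, size) input to the meta-model
```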

Loss Functions

To train the meta-model for feature reweighting, we need to carefully choose the loss functions, in particular for the class prediction, considering that the number of samples is very small. Given that the predictions are now made class-wise, it seems natural to use a binary cross-entropy loss, regressing 1 if the object is of the target class and 0 otherwise. However, we found this loss function yields a model that outputs redundant detections (e.g., detecting a train as a train, a bus and a car). This could be because, for a specific region, only one out of the N classes is truly positive, while the binary loss strives to produce balanced positive and negative predictions. Non-maximum suppression cannot help, since it only operates on predictions within each class.

To resolve this issue, we attach a softmax layer for calibrating the classification scores among the N classes and suppressing detections of wrong classes. The calibrated classification score for class i is given by

\hat{c}_i = e^{c_i} / \sum_{j=1}^{N} e^{c_j}.

Then, to better align training and prediction, a cross-entropy loss over the calibrated scores is adopted for optimization:

\mathcal{L}_c = -\sum_{i=1}^{N} \mathbb{1}(\cdot, i) \log(\hat{c}_i),        (3)

where \mathbb{1}(\cdot, i) is an indicator function for whether the current anchor box belongs to class i or not. After introducing the softmax, the summation of the calibrated classification scores for a specific anchor equals 1, and less probable class predictions are suppressed. This softmax loss will be shown to be superior to the binary loss in the following experiments. For bounding box and objectness regression, we adopt a loss function similar to that of YOLOv2 [34], but we balance the positive and negative samples by not computing some of the objectness losses on negative samples.
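In code, the calibrated classification loss of Equation 3 amounts to a standard softmax cross entropy over the N per-class scores gathered for each positive anchor. The sketch below assumes the scores have already been collected into an (anchors, N) tensor, which is one possible (not the only) way to organize the computation.

```python
import torch.nn.functional as F

def calibrated_cls_loss(cls_scores, targets):
    """Equation 3: softmax calibration over N classes followed by cross entropy.

    cls_scores: (num_pos_anchors, N) one-versus-all scores c_i gathered from the N
                reweighted outputs of the shared prediction layer.
    targets:    (num_pos_anchors,) ground-truth class index of each positive anchor.
    """
    # F.cross_entropy applies the softmax (the calibration step) and the
    # negative log-likelihood of the true class in one call.
    return F.cross_entropy(cls_scores, targets)
```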

Training

The training procedure consists of two phases. (1) Base training: given that we have abundant labeled data for the base classes, we first train our model on images with only base class annotations to learn the representation. In this phase, although abundant labels are available for each class so that a conventional detector would also learn good representations, we still jointly train the detector together with the meta-model. This is to make them coordinate in the desired way: the meta-model needs to identify the class to detect and predict the reweighting coefficients accordingly. In each iteration, the meta-model takes N (the number of base classes) images and masks as input and produces N sets of reweighting coefficients, each used for detecting one class of objects. (2) Few-shot fine-tuning: in this phase, we train the model on both base and novel classes. As only k-shot labels are available for the novel classes, to balance between classes we also include only k boxes for each base class. The training process is the same as in the first phase, except that it takes significantly fewer iterations for the model to converge.

In both training phases, the reweighting coefficients depend on the meta-model's input, which is randomly sampled from the available data in each iteration. After few-shot fine-tuning, we would like to obtain a model that does not depend on any meta-model input. This is achieved by setting the weights for a target class to the average of the weights predicted by the meta-model when taking the k-shot samples as input. After setting the weights, the meta-model can be completely detached, so at inference time our method adds negligible overhead to the original detector.
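A sketch of this inference-time procedure is given below, assuming a meta_model that maps a batch of four-channel references to a batch of reweighting vectors; all names are illustrative.

```python
import torch

@torch.no_grad()
def freeze_class_weights(meta_model, support_inputs):
    """Replace per-input reweighting vectors by their k-shot average, one per class.

    support_inputs: dict {class_id: list of k (4, H, W) reference tensors}
    Returns a dict {class_id: (C,) averaged reweighting vector}.
    """
    meta_model.eval()
    class_weights = {}
    for cls, refs in support_inputs.items():
        batch = torch.stack(refs, dim=0)      # (k, 4, H, W)
        w = meta_model(batch)                 # (k, C) predicted coefficients
        class_weights[cls] = w.mean(dim=0)    # average over the k shots
    return class_weights                      # the meta-model can now be detached
```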

4 Experimental Evaluations

In this section, we present our main experimental results. We also compare with various baseline methods and show that our method adapts to novel object detection significantly faster and more accurately. We use PyTorch [29] for implementation and YOLOv2 [34] as the base detector. More implementation details are given in the supplementary material. The code to reproduce the results will be made publicly available.

4.1 Datasets and Settings

We evaluate our method on widely-used object detection benchmarks, i.e., PASCAL VOC 2007 [12], 2012 [11], and MS-COCO [23] datasets.

VOC 07 and 12 each consist of training, validation, and test image sets. We follow the common practice [34, 36, 38, 6] of evaluating on the 07 test set while using the 07 and 12 train/val images for training. The images in VOC contain 20 object categories, from which we randomly select 5 categories as novel classes, with the remaining 15 as base classes. We evaluate with 3 different random seeds for the class set split. During base training, only annotations of the base classes are given. For few-shot fine-tuning, we use a very small set of training images to ensure that each category of objects has only k annotated bounding boxes, where k is set to 1, 2, 3, 5 and 10.

COCO is a larger-scale dataset containing 80k training images, 40k validation images and 20k test images. We use 5000 images from the validation set for evaluation and the remaining images in the train/val sets for training. COCO has a more diverse set of 80 object classes, from which we select the 20 classes that are also in PASCAL VOC as the novel classes, with the remaining 60 classes as base classes. In this way, we also consider learning base representations on the 60 COCO classes and detecting the 20 novel objects in PASCAL VOC. This is the cross-dataset setting denoted as COCO to PASCAL.

Baselines

We compare our method with three baselines, all based on the original YOLOv2 detector. The first jointly trains the original YOLOv2 detector on images from the abundantly-labeled base classes and the rarely-labeled novel classes. We term this baseline YOLO-joint and train it for the same total number of iterations as our proposed model. Besides joint training, the other two baselines use the same two training phases (base training + few-shot fine-tuning) as our model, but with the original YOLOv2 model. The base training phase is the same for these two baselines; for few-shot fine-tuning, we train the second baseline, YOLO-ft, for the same number of iterations as our method, and train the third baseline, YOLO-ft-full, to full convergence.

4.2 Results

PASCAL VOC

Before presenting the results on the target novel classes, we first analyze the first-stage representation learning on base classes. Ideally, a desirable low-shot detection system should perform just as well when data are abundant. We compare the mAP on base classes for models obtained after the first-stage base training, between our meta-learning model and the original YOLO detector (used in the latter two baselines). The results are shown in Table 1. Although our detector is designed for the few-shot scenario, it also has strong representation power and reaches performance comparable to the original YOLOv2 detector trained on abundant samples. This indicates that using class-specific, meta-model predicted weights, instead of sharing the weights across all classes, does not introduce additional difficulty in the optimization process. This lays the basis for solving the few-shot object detection problem.

Base Set 1 Base Set 2 Base Set 3
YOLO Baseline 70.3 72.2 70.6
Our model 69.7 72.0 70.8
Table 1: Detection performance (mAP) on base categories. We evaluate the vanilla YOLO detector and our proposed detection model on three different sets of base categories.
Novel Set 1 Novel Set 2 Novel Set 3
Method / Shot 1 2 3 5 10 1 2 3 5 10 1 2 3 5 10
YOLO-joint 0.0 0.0 1.8 1.8 1.8 0 0.1 0 1.8 0 1.8 1.8 1.8 3.6 3.9
YOLO-ft 3.2 6.5 6.4 7.5 12.3 8.2 3.8 3.5 3.5 7.8 8.1 7.4 7.6 9.5 10.5
YOLO-ft-full 6.6 10.7 12.5 24.8 38.6 12.5 4.2 11.6 16.1 33.9 13.0 15.9 15.0 32.2 38.4
Ours 14.8 15.5 26.7 33.9 47.2 15.7 15.3 22.7 30.1 39.2 19.2 21.7 25.7 40.6 41.3
Table 2: Few-shot detection performance (mAP) on the PASCAL VOC dataset. We evaluate the performance on three different sets of novel categories. Our model consistently outperforms baseline methods.
Novel Base
# Shots bird bus cow mbike sofa mean aero bike boat bottle car cat chair table dog horse person plant sheep train tv mean
3 YOLO-joint 0 0 0 0 9.1 1.8 78.0 77.2 61.2 45.6 81.6 83.7 51.7 73.4 80.7 79.6 75.0 45.5 65.6 83.1 72.7 70.3
YOLO-ft 10.9 5.5 15.3 0.2 0.1 6.4 76.7 77.0 60.4 46.9 78.8 84.9 51.0 68.3 79.6 78.7 73.1 44.5 67.6 83.6 72.4 69.6
YOLO-ft-full 21.0 22.0 19.1 0.5 0.0 12.5 73.4 67.5 56.8 41.2 77.1 81.6 45.5 62.1 74.6 78.9 67.9 37.8 54.1 76.4 71.9 64.4
Ours 26.1 19.1 40.7 20.4 27.1 26.7 73.6 73.1 56.7 41.6 76.1 78.7 42.6 66.8 72.0 77.7 68.5 42.0 57.1 74.7 70.7 64.8
10 YOLO-joint 0.0 0.0 0.0 0.0 9.1 1.8 76.9 77.1 62.2 47.3 79.4 85.1 51.3 70.1 78.6 78.0 75.2 47.4 63.9 85.0 72.3 70.0
YOLO-ft 11.4 28.4 8.9 4.8 7.8 12.2 77.4 76.9 60.9 44.8 78.3 83.2 48.5 68.9 78.5 78.9 72.6 44.8 67.3 82.7 69.3 68.9
YOLO-ft-full 22.3 53.9 32.9 40.8 43.2 38.6 71.9 69.8 57.1 41.0 76.9 81.7 43.6 65.3 77.3 79.2 70.1 41.5 63.7 76.9 69.1 65.7
Ours 30.0 62.7 43.2 60.6 39.6 47.2 65.3 73.5 54.7 39.5 75.7 81.1 35.3 62.5 72.8 78.8 68.6 41.5 59.2 76.2 69.2 63.6
Table 3: Detection performance (AP) for the base and novel categories on the PASCAL VOC dataset for the first base/novel split. We evaluate the performance for different numbers of training examples for the novel categories.

We present our main results on novel classes in Table 2. First, we note that our model significantly outperforms the baselines, especially when the labels are extremely scarce (1-3 shots). The improvements are also consistent across different base/novel class splits and numbers of shots. Second, we note that YOLO-ft/YOLO-ft-full also perform significantly better than YOLO-joint. This demonstrates the necessity of the two training phases employed in our model: it is better to first train a good knowledge representation on the base classes and then fine-tune with few-shot data; otherwise, joint training would bias the detector towards the base classes and it would learn nearly nothing about the novel classes.

More detailed results for each class are available in Table 3 (novel set 1). Compared with the results in Table 1, it can be seen that after few-shot fine-tuning, the accuracy on the base classes drops. This phenomenon holds for both our method (from 69.7% to 63.6%) and YOLO-ft/YOLO-ft-full (from 70.33% to 68.9%/65.7%). During few-shot fine-tuning the labels for the base classes are also scarce, which might explain why the accuracy on the base classes is hurt. However, this is a tradeoff we have to make. In practice, one can use the few-shot fine-tuned detector to detect novel objects and the original detector trained on abundant data to detect base class objects.

COCO

Average Precision Average Recall
# Shots 0.5:0.95 0.5 0.75 S M L 1 10 100 S M L
10 YOLO-ft 0.4 1.1 0.1 0.3 0.7 0.6 5.8 8.0 8.0 0.6 5.1 15.5
YOLO-ft-full 3.1 7.9 1.7 0.7 2.0 6.3 7.8 10.5 10.5 1.1 5.5 20
Ours 5.6 12.3 4.6 0.9 3.5 10.5 10.1 14.3 14.4 1.5 8.4 28.2
30 YOLO-ft 0.6 1.5 0.3 0.2 0.7 1.0 7.4 9.4 9.4 0.4 3.9 19.3
YOLO-ft-full 7.7 16.7 6.4 0.4 3.3 14.4 11.7 15.3 15.3 1.0 7.7 29.2
Ours 9.1 19.0 7.6 0.8 4.9 16.8 13.2 17.7 17.8 1.5 10.4 33.5
Table 4: Few-shot detection performance for the novel categories on the COCO dataset. We evaluate the performance for different numbers of training shots for the novel categories.

The results for the COCO dataset are shown in Table 4. We evaluate with 10 and 30 shots per novel class. In both cases, our model outperforms the YOLO baselines. When the baseline is trained with the same number of iterations as our model, it achieves an AP of less than 1%. We also observe that there is much room to improve the results obtained in this few-shot scenario. This is possibly due to the large amount of data in COCO, which makes few-shot detection on it quite challenging.

COCO to PASCAL

We evaluate this setting using 10-shot data of each class from PASCAL. The mAPs of YOLO-ft and YOLO-ft-full are 11.24% and 28.29% respectively, while our method achieves 32.29%. The performance on the PASCAL novel classes is worse than when the base classes come from the PASCAL dataset itself (usually around 40%), which could be explained by the domain shift, as images in COCO and PASCAL are of different complexities.

4.3 Adaptation Speed

Some previous works on meta-learning [32, 13] explicitly optimize the network model so that it can adapt to new few-shot classification tasks quickly. Here we show that, although our few-shot detection model does not explicitly consider adaptation speed in its optimization, it still exhibits surprisingly fast adaptation ability. Note that in the experiments of Table 2 and Table 3, YOLO-ft-full requires 25,000 iterations to fully converge, while our meta-learning model only requires 1,200 iterations to converge to a higher accuracy. When the baseline YOLO is trained for the same number of iterations as our method (YOLO-ft), its performance is far worse. In this section, we compare the full convergence behavior of YOLO-joint, YOLO-ft-full and our method in Figure 3. The AP values are normalized by the maximum value reached during the training of our method and the baseline together. This experiment is conducted on PASCAL VOC base/novel split 1, with 10-shot bounding box labels on the novel classes.

Figure 3: Adaptation speed comparison between our proposed few-shot detection model and the YOLO-ft-full baseline. We plot the AP (normalized by the converged value) against the number of training iterations. Our model shows a much faster adaptation speed.

As shown in Figure 3, our method (solid lines) converges significantly faster than the baseline YOLO detector (dashed lines), for each novel class as well as on average. For the class Sofa (orange line), although the baseline YOLO detector eventually slightly outperforms our method, it takes a large number of training iterations to catch up. This behavior makes our model a good few-shot detector in practice, where scarcely labeled novel classes may arrive at any time and a short adaptation time is desired to put the system into real use quickly. This also opens up our model's potential in a life-long learning setting [5], where the model accumulates the knowledge learned in the past and uses/adapts it for future prediction.

4.4 Analysis of Predicted Reweightings

Figure 4: (a) Visualization of the predicted weights (as row vectors) from the meta-model for each class. Columns correspond to feature maps, ranked by variance among classes. Due to space limit, we only plot 256 randomly sampled features. (b) t-SNE [27] embedding of the reweighting vectors. More visually similar classes tend to have closer embeddings.

The feature reweighting vector for a class is predicted by the meta-model and averaged over multiple inputs during testing. Therefore, in the final model, each class (both base and novel) corresponds to a unique weight vector that decides which features are important for detecting that class. A natural question is whether there exist patterns in those predicted weights. We present a detailed analysis of the predicted weights and discuss some interesting observations.

We first plot the reweighting coefficients for each class in Figure 4(a). In our architecture, the weights form a 1024-dimensional vector. In the figure, each row corresponds to a class and each column corresponds to a feature. The features are ranked by their variance among the 20 classes, from left to right. We observe that roughly half of the features (columns) have notable variance among different classes (multiple colors in a column), while the other half are insensitive to classes (roughly the same color in a column). This suggests that indeed only a portion of the features are used differently when detecting different classes, while the remaining features are shared across classes.

We further visualize the predicted weight vectors by t-SNE [27] in Figure 4(b). In this figure, we plot the weight vector generated by each input of the meta-model, along with their average for each class. We use the weights of the 10-shot trained model, so each class has 10 points plus one mean. The model is trained on base/novel split 1, and the novel classes are shown in bold. We observe that not only do weights of the same class tend to form clusters, but visually similar classes also tend to be close. The classes Cow, Horse, Sheep, Cat and Dog are all around the bottom-right corner, and they are all animals. Classes of transportation tools are at the top of the figure. Person and Bird are more visually different from the mentioned animals, but are still closer to them than to the transportation tools.
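Such an embedding can be produced with a few lines of scikit-learn; in the sketch below, the weight matrix is random placeholder data standing in for the meta-model's predicted vectors (20 classes × 10 shots × 1024 dimensions).

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the predicted reweighting vectors: one 1024-d vector per (class, shot) pair.
weights = np.random.randn(20 * 10, 1024).astype(np.float32)
labels = np.repeat(np.arange(20), 10)  # class index of each vector

embedded = TSNE(n_components=2, perplexity=15, init="pca").fit_transform(weights)
# embedded has shape (200, 2): 2-D coordinates for plotting; vectors of the same class
# are expected to cluster, with visually similar classes landing near each other.
```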

4.5 Ablation Studies

We analyze the effects of various components in our system, by comparing the performance on both base classes and novel classes. The experiments are on PASCAL VOC base/novel split 1, using 10-shot data on novel classes.

Which Layer Output to Reweight

In our experiments, we apply the reweighting technique to the output of the second-to-last layer (layer 21). This is the highest level of intermediate features we could use. However, other options could be considered as well. We experiment with reweighting layers 20 and 13, and also consider reweighting only half of the features in layer 21. The results are shown in Table 5. We can see that the reweighting technique is better suited to deeper layers, as applying it at earlier layers gives worse performance. Moreover, reweighting only half of the features does not hurt the performance much, which demonstrates that a significant portion of the features can be shared among classes, as analyzed in subsection 4.4.

Layer 13 Layer 20 Layer 21 Layer 21(half)
Base 69.6 69.2 69.7 69.2
Novel 40.7 43.6 47.2 46.9
Table 5: Performance comparison for the detection models trained with reweighting applied on different layers.

Loss Functions

As mentioned in subsection 3.3, for classification scores produced with different predicted reweightings, there are several options for the classification loss. Among them the binary loss is the most straightforward: the model predicts 1 if the inputs to the meta-model and the detector are from the same class, and 0 otherwise. This binary loss can be defined in two ways. The single-binary loss refers to the case where in each iteration the meta-model only takes one class of input and the detector regresses 0 or 1; the multi-binary loss refers to the case where per iteration the meta-model takes examples from N classes and N binary losses are computed in total. Prior works on Siamese Networks [19] and Learnet [2] use the single-binary loss. Instead, our model uses the softmax loss for calibrating the classification scores of the N classes. To investigate the effects of different loss functions, we compare model performance trained with the single-binary loss, the multi-binary loss, and our softmax loss in Table 6. We observe that the softmax loss significantly outperforms the binary losses. This is likely due to its effect in suppressing redundant detections, since the classification scores must sum to 1.

Single-binary Multi-binary Softmax
Base 49.1 64.1 69.7
Novel 14.8 41.6 47.2
Table 6: Performance comparison for the detection models trained with different loss functions.

Input Form of the Meta-model

In our experiments, we use an image of the target class with a binary mask channel indicating the position of the object as input to the meta-model. We examine the case where we only feed the image. From Table 7 we see that this gives lower performance, especially on the novel classes. A seemingly reasonable alternative is to feed the cropped target object together with the image. From Table 7, this solution is also slightly worse. The necessity of the mask may lie in that it provides precise information about the object's location as well as its context.

Image Mask Object Base Novel
69.5 43.3
69.7 47.2
69.2 45.8
69.4 46.8
Table 7: Performance comparison for different input forms to the meta-model. The shadowed line is the scheme we use in the main experiments.

We also analyze the meta-model's input sampling scheme for testing and the effect of sharing weights between the feature extractor and the meta-model. Due to space limits, we defer the results to the supplementary material.

5 Conclusion

This work is among the first to explore the practical and challenging few-shot object detection problem. It introduced a meta-model that learns to quickly adjust the contributions of the basic features to detect novel classes from a few examples. Experiments on realistic benchmark datasets clearly demonstrate the effectiveness of the proposed model. This work also compared the model's adaptation speed, analyzed the predicted feature weights and the contributions of each design component, to provide an in-depth understanding of the proposed model. Few-shot detection remains challenging, and we will further explore how to improve its performance for complex scenes.

References

  • [1] A. Bansal, K. Sikka, G. Sharma, R. Chellappa, and A. Divakaran. Zero-shot object detection. arXiv preprint arXiv:1804.04340, 2018.
  • [2] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In Advances in Neural Information Processing Systems, pages 523–531, 2016.
  • [3] H. Bilen and A. Vedaldi. Weakly supervised deep detection networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2846–2854, 2016.
  • [4] H. Chen, Y. Wang, G. Wang, and Y. Qiao. Lstd: A low-shot transfer detector for object detection. arXiv preprint arXiv:1803.01529, 2018.
  • [5] Z. Chen and B. Liu. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1–207, 2018.
  • [6] J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. In Advances in neural information processing systems, pages 379–387, 2016.
  • [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. Ieee, 2009.
  • [8] A. Diba, V. Sharma, A. M. Pazandeh, H. Pirsiavash, and L. Van Gool. Weakly supervised cascaded convolutional networks. In CVPR, 2017.
  • [9] X. Dong, L. Zheng, F. Ma, Y. Yang, and D. Meng. Few-example object detection with model communication. arXiv preprint arXiv:1706.08249, 2017.
  • [10] M. Douze, A. Szlam, B. Hariharan, and H. Jégou. Low-shot learning with large-scale diffusion. In Computer Vision and Pattern Recognition (CVPR), 2018.
  • [11] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International journal of computer vision, 111(1):98–136, 2015.
  • [12] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
  • [13] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. ICML, 2017.
  • [14] R. Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015.
  • [15] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
  • [16] B. Hariharan and R. Girshick. Low-shot visual recognition by shrinking and hallucinating features. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 3037–3046. IEEE, 2017.
  • [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980–2988. IEEE, 2017.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European conference on computer vision, pages 346–361. Springer, 2014.
  • [19] G. Koch. Siamese neural networks for one-shot image recognition. In ICML Workshop, 2015.
  • [20] B. Lake, R. Salakhutdinov, J. Gross, and J. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011.
  • [21] B. M. Lake, R. R. Salakhutdinov, and J. Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in neural information processing systems, pages 2526–2534, 2013.
  • [22] F.-F. Li, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006.
  • [23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • [24] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
  • [25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
  • [26] Z. Luo, Y. Zou, J. Hoffman, and L. F. Fei-Fei. Label efficient learning of transferable representations acrosss domains and tasks. In Advances in Neural Information Processing Systems, pages 165–177, 2017.
  • [27] L. v. d. Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.
  • [28] I. Misra, A. Shrivastava, and M. Hebert. Watch and learn: Semi-supervised learning for object detectors from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3593–3602, 2015.
  • [29] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
  • [30] H. Qi, M. Brown, and D. G. Lowe. Low-shot learning with imprinted weights. arXiv preprint arXiv:1712.07136, 2017.
  • [31] S. Rahman, S. Khan, and F. Porikli. Zero-shot object detection: Learning to simultaneously recognize and localize novel concepts. arXiv preprint arXiv:1803.06049, 2018.
  • [32] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
  • [33] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
  • [34] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6517–6525. IEEE, 2017.
  • [35] J. Redmon and A. Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
  • [36] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
  • [37] D. Shen, G. Wu, and H.-I. Suk. Deep learning in medical image analysis. Annual review of biomedical engineering, 19:221–248, 2017.
  • [38] Z. Shen, Z. Liu, J. Li, Y.-G. Jiang, Y. Chen, and X. Xue. Dsod: Learning deeply supervised object detectors from scratch. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 1937–1945. IEEE, 2017.
  • [39] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087, 2017.
  • [40] H. O. Song, Y. J. Lee, S. Jegelka, and T. Darrell. Weakly-supervised discovery of visual pattern configurations. In Advances in Neural Information Processing Systems, pages 1637–1645, 2014.
  • [41] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. CVPR, 2018.
  • [42] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. International journal of computer vision, 104(2):154–171, 2013.
  • [43] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
  • [44] Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan. Low-shot learning from imaginary data. In CVPR, 2018.
  • [45] Y.-X. Wang and M. Hebert. Model recommendation: Generating object detectors from few samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1619–1628, 2015.
  • [46] F. Yu, Z. Qin, and X. Chen. Distilling critical paths in convolutional neural networks. arXiv preprint arXiv:1811.02643, 2018.
  • [47] B. Zhou, Y. Sun, D. Bau, and A. Torralba. Revisiting the importance of individual units in cnns via ablation. arXiv preprint arXiv:1806.02891, 2018.
  • [48] P. Zhu, H. Wang, T. Bolukbasi, and V. Saligrama. Zero-shot detection. arXiv preprint arXiv:1803.07113, 2018.

Appendix

A. Implementation Details

All our models are trained using SGD with momentum 0.9 and weight decay 0.0005 (on both the detector and the meta-model). The batch size is set to 64. For base training we train for 80,000 iterations with a step-wise learning rate schedule: the learning rate is 10^-3, 10^-2, 10^-3 and 10^-4 in turn, with the changes happening at iterations 500, 40,000 and 60,000. For few-shot fine-tuning, we use a constant learning rate of 0.001 and train for 1,500 iterations. We use multi-scale training and evaluate the model at 416 × 416 resolution, as with the original YOLOv2.

B. Additional Ablation Studies

Sampling of Examples for Testing

During training, the meta-model takes a random input from the k-shot data of each of the classes. In testing, we take all k examples as the meta-model's input and use the average of their predicted weights for detecting the corresponding class. If we replace this averaging by randomly selecting the meta-model's input (as during training), the performance on base/novel classes drops significantly from 69.7%/47.2% to 63.9%/45.1%. This is similar to an ensembling effect, except that averaging over the reweighting coefficients does not require additional inference time, as normal ensembling would.

Sharing Weights Between Feature Extractor and Meta-model

The first few layers of the meta-model and the backbone feature extractor share the same architecture, so some weights could be shared between them. We evaluate this alternative and find that the performance on base/novel classes decreases from 69.7%/47.2% to 68.3%/44.8%. The reason could be that it imposes more constraints on the optimization process.

C. Complete Results on PASCAL VOC

Here we present the complete results for each class and number of shots on the PASCAL VOC dataset. The results for base/novel splits 1, 2 and 3 are shown in Tables 8, 9 and 10 respectively.

Novel Base
# Shots bird bus cow mbike sofa mean aero bike boat bottle car cat chair table dog horse person plant sheep train tv mean
1 YOLO-joint 0.0 0.0 0.0 0.0 0.0 0.0 78.4 76.9 61.5 48.7 79.8 84.5 51.0 72.7 79.0 77.6 74.9 48.2 62.8 84.8 73.1 70.2
YOLO-ft 6.8 0.0 9.1 0.0 0.0 3.2 77.1 78.2 61.7 46.7 79.4 82.7 51.0 69.0 78.3 79.5 74.2 42.7 68.3 84.1 72.9 69.7
YOLO-ft-full 11.4 17.6 3.8 0.0 0.0 6.6 75.8 77.3 63.1 45.9 78.7 84.1 52.3 66.5 79.3 77.2 73.7 44.0 66.0 84.2 72.2 69.4
Ours 13.5 10.6 31.5 13.8 4.3 14.8 75.1 70.7 57.0 41.6 76.6 81.7 46.6 72.4 73.8 76.9 68.8 43.1 63.0 78.8 69.9 66.4
2 YOLO-joint 0.0 0.0 0.0 0.0 0.0 0.0 77.6 77.6 60.4 48.1 81.5 82.6 51.5 72.0 79.2 78.8 75.2 47.0 65.2 86.0 72.7 70.4
YOLO-ft 11.5 5.8 7.6 0.1 7.5 6.5 77.9 75.0 58.5 45.7 77.6 84.0 50.4 68.5 79.2 79.7 73.8 44.0 66.0 77.5 72.9 68.7
YOLO-ft-full 16.6 9.7 12.4 0.1 14.5 10.7 76.4 70.2 56.9 43.3 77.5 83.8 47.8 70.7 79.1 77.6 71.7 39.6 61.4 77.0 70.3 66.9
Ours 21.2 12.0 16.8 17.9 9.6 15.5 74.6 74.9 56.3 38.5 75.5 68.0 43.2 69.3 66.2 42.4 68.1 41.8 59.4 76.4 70.3 61.7
3 YOLO-joint 0 0 0 0 9.1 1.8 78.0 77.2 61.2 45.6 81.6 83.7 51.7 73.4 80.7 79.6 75.0 45.5 65.6 83.1 72.7 70.3
YOLO-ft 10.9 5.5 15.3 0.2 0.1 6.4 76.7 77.0 60.4 46.9 78.8 84.9 51.0 68.3 79.6 78.7 73.1 44.5 67.6 83.6 72.4 69.6
YOLO-ft-full 21.0 22.0 19.1 0.5 0.0 12.5 73.4 67.5 56.8 41.2 77.1 81.6 45.5 62.1 74.6 78.9 67.9 37.8 54.1 76.4 71.9 64.4
Ours 26.1 19.1 40.7 20.4 27.1 26.7 73.6 73.1 56.7 41.6 76.1 78.7 42.6 66.8 72.0 77.7 68.5 42.0 57.1 74.7 70.7 64.8
5 YOLO-joint 0.0 0.0 0.0 0.0 9.1 1.8 77.8 76.4 65.7 45.9 79.5 82.3 50.4 72.5 79.1 79.0 75.5 47.9 67.2 83.0 72.5 70.3
YOLO-ft 11.6 7.1 10.7 2.1 6.0 7.5 76.5 76.4 61.0 45.5 78.7 84.5 49.2 68.7 78.5 78.1 73.7 45.4 66.8 85.3 70.0 69.2
YOLO-ft-full 20.2 20.0 22.4 36.4 24.8 24.8 72.0 70.6 60.7 42.0 76.8 84.2 47.7 63.7 76.9 78.8 72.1 42.2 61.1 80.8 69.9 66.6
Ours 31.5 21.1 39.8 40.0 37.0 33.9 69.3 57.5 56.8 37.8 74.8 82.8 41.2 67.3 74.0 77.4 70.9 40.9 57.3 73.5 69.3 63.4
10 YOLO-joint 0.0 0.0 0.0 0.0 9.1 1.8 76.9 77.1 62.2 47.3 79.4 85.1 51.3 70.1 78.6 78.0 75.2 47.4 63.9 85.0 72.3 70.0
YOLO-ft 11.4 28.4 8.9 4.8 7.8 12.2 77.4 76.9 60.9 44.8 78.3 83.2 48.5 68.9 78.5 78.9 72.6 44.8 67.3 82.7 69.3 68.9
YOLO-ft-full 22.3 53.9 32.9 40.8 43.2 38.6 71.9 69.8 57.1 41.0 76.9 81.7 43.6 65.3 77.3 79.2 70.1 41.5 63.7 76.9 69.1 65.7
Ours 30.0 62.7 43.2 60.6 39.6 47.2 65.3 73.5 54.7 39.5 75.7 81.1 35.3 62.5 72.8 78.8 68.6 41.5 59.2 76.2 69.2 63.6
Table 8: Detection performance (AP) for the base and novel categories on the PASCAL VOC dataset for the 1st base/novel split. We evaluate the performance for different numbers of training examples for the novel categories.
Novel Base
# Shots aero bottle cow horse sofa mean bike bird boat bus car cat chair table dog mbike person plant sheep train tv mean
1 YOLO-joint 0.0 0.0 0.0 0.0 0.0 0.0 78.8 73.2 63.6 79.0 79.7 87.2 51.5 71.2 81.1 78.1 75.4 47.7 65.9 84.0 73.7 72.7
YOLO-ft 0.4 0.2 10.3 29.8 0.0 8.2 77.9 70.2 62.2 79.8 79.4 86.6 51.9 72.3 77.1 78.1 73.9 44.1 66.6 83.4 74.0 71.8
YOLO-ft-full 0.6 9.1 11.2 41.6 0.0 12.5 74.9 67.2 60.1 78.8 79.0 83.8 50.6 72.7 75.5 74.8 71.7 43.9 62.5 81.8 72.6 70.0
Ours 11.8 9.1 15.6 23.7 18.2 15.7 77.6 62.7 54.2 75.3 79.0 80.0 49.6 70.3 78.3 78.2 68.5 42.2 58.2 78.5 70.4 68.2
2 YOLO-joint 0.0 0.6 0.0 0.0 0.0 0.1 78.4 69.7 64.5 78.3 79.7 86.1 52.2 72.6 81.2 78.6 75.2 50.3 66.1 85.3 74.0 72.8
YOLO-ft 0.2 0.2 17.2 1.2 0.0 3.8 78.1 70.0 60.6 79.8 79.4 87.1 49.7 70.3 80.4 78.8 73.7 44.2 62.2 82.4 74.9 71.4
YOLO-ft-full 1.8 1.8 15.5 1.9 0.0 4.2 76.4 69.7 58.0 80.0 79.0 86.9 44.8 68.2 75.2 77.4 72.2 40.3 59.1 81.6 73.4 69.5
Ours 28.6 0.9 27.6 0.0 19.5 15.3 75.8 67.4 52.4 74.8 76.6 82.5 44.5 66.0 79.4 76.2 68.2 42.3 53.8 76.6 71.0 67.2
3 YOLO-joint 0.0 0.0 0.0 0.0 0.0 0.0 77.6 72.2 61.2 77.9 79.8 85.8 49.9 73.2 80.0 77.9 75.3 50.8 64.3 84.2 72.6 72.2
YOLO-ft 4.9 0.0 11.2 1.2 0.0 3.5 78.7 71.6 62.4 77.4 80.4 87.5 49.5 70.8 79.7 79.5 72.6 44.3 60.0 83.0 75.2 71.5
YOLO-ft-full 10.7 4.6 12.9 29.7 0.0 11.6 74.9 69.2 60.4 79.4 79.1 87.3 43.4 69.7 75.8 75.2 70.5 39.4 52.9 80.8 73.4 68.8
Ours 29.4 4.6 34.9 6.8 37.9 22.7 62.6 64.7 55.2 76.6 77.1 82.7 46.7 65.4 75.4 78.3 69.2 42.8 45.2 77.9 69.6 66.0
5 YOLO-joint 0.0 0.0 0.0 0.0 9.1 1.8 78.0 71.5 62.9 81.7 79.7 86.8 50.0 72.3 81.7 77.9 75.6 48.4 65.4 83.2 73.6 72.6
YOLO-ft 0.8 0.2 11.3 5.2 0.0 3.5 78.6 72.4 61.5 79.4 81.0 87.8 48.6 72.1 81.0 79.6 73.6 44.9 61.4 83.9 74.7 72.0
YOLO-ft-full 10.3 9.1 17.4 43.5 0.0 16.0 76.4 69.6 59.1 80.3 78.5 87.8 42.1 72.1 76.6 77.1 70.7 43.1 58.0 82.4 72.6 69.8
Ours 33.1 9.4 38.4 25.4 44.0 30.1 73.2 65.6 52.9 75.9 77.5 80.0 43.7 65.0 73.8 78.4 68.9 39.2 56.4 78.0 70.8 66.6
10 YOLO-joint 0.0 0.0 0.0 0.0 0.0 0.0 77.4 71.5 61.1 78.8 82.7 87.1 52.5 74.6 80.8 79.3 75.4 46.1 64.2 85.2 73.6 72.7
YOLO-ft 3.8 0.0 18.3 17.0 0.0 7.8 79.3 72.8 61.6 78.5 81.4 87.1 46.9 73.3 79.8 79.0 73.1 44.6 65.9 83.4 73.7 72.0
YOLO-ft-full 41.7 9.5 34.5 45.1 38.4 33.9 75.5 69.4 60.0 78.3 78.8 86.8 44.9 68.4 75.8 76.9 70.7 44.0 64.1 81.6 71.1 69.8
Ours 43.2 13.9 41.5 58.1 39.2 39.2 74.1 63.8 52.0 75.5 77.6 81.8 35.6 57.9 68.2 77.6 68.0 37.9 62.4 76.9 71.3 65.4
Table 9: Detection performance (AP) for the base and novel categories on the PASCAL VOC dataset for the 2nd base/novel split. We evaluate the performance for different numbers of training examples for the novel categories.
Novel Base
# Shots boat cat mbike sheep sofa mean aero bike bird bottle bus car chair cow table dog horse person plant train tv mean
1 YOLO-joint 0.0 9.1 0.0 0.0 0.0 1.8 78.7 76.8 73.4 48.8 79.0 82.3 50.2 68.4 71.4 76.7 80.7 75.0 46.8 83.8 71.7 70.9
YOLO-ft 0.1 25.8 10.7 3.6 0.1 8.1 77.2 74.9 69.1 47.4 78.7 79.7 47.9 68.3 69.6 74.7 79.4 74.2 42.2 82.7 71.1 69.1
YOLO-ft-full 0.1 30.9 26.0 8.0 0.1 13.0 75.1 70.7 65.9 43.6 78.4 79.5 47.8 68.7 68.0 72.8 79.5 72.3 40.1 80.5 68.6 67.4
Ours 10.8 44.0 17.8 18.1 5.3 19.2 77.1 71.8 66.3 40.4 75.2 77.8 50.1 54.6 66.8 69.1 78.3 68.1 41.9 80.6 70.3 65.9
2 YOLO-joint 0.0 9.1 0.0 0.0 0.0 1.8 77.6 77.1 74.0 49.4 79.8 79.9 50.5 71.0 72.7 76.3 81.0 75.0 48.4 84.9 72.7 71.4
YOLO-ft 0.0 24.4 2.5 9.8 0.1 7.4 78.2 76.0 72.2 47.2 79.3 79.8 47.3 72.1 70.0 74.9 80.3 74.3 45.2 84.9 72.0 70.2
YOLO-ft-full 0.0 35.2 28.7 15.4 0.1 15.9 75.3 72.0 69.8 44.0 79.1 78.8 42.1 70.0 64.9 73.8 81.7 71.4 40.9 80.9 69.4 67.6
Ours 5.3 46.4 18.4 26.1 12.4 21.7 71.4 72.4 64.5 37.9 75.3 77.1 42.9 55.0 57.4 73.7 78.9 68.0 41.5 75.9 69.0 64.1
3 YOLO-joint 0.0 9.1 0.0 0.0 0.0 1.8 77.1 77.0 70.6 46.3 77.5 79.7 49.7 68.8 73.4 74.5 79.4 75.6 48.1 83.6 72.1 70.2
YOLO-ft 0.0 27.0 1.8 9.1 0.1 7.6 77.7 76.6 71.4 47.5 78.0 79.9 47.6 70.0 70.5 74.4 80.0 73.7 44.1 83.0 70.9 69.7
YOLO-ft-full 0.0 39.0 18.1 17.9 0.0 15.0 73.2 71.1 68.8 43.7 78.9 79.3 43.1 67.8 62.2 76.3 79.4 70.8 40.5 81.6 69.6 67.1
Ours 11.2 39.8 20.9 23.7 33.0 25.7 73.2 68.0 65.9 39.8 77.3 77.5 43.5 57.7 60.7 64.5 77.5 68.4 42.0 80.6 70.2 64.4
5 YOLO-joint 0.0 9.1 0.0 0.0 9.1 3.6 78.2 78.5 72.1 47.8 76.6 82.1 50.7 70.1 71.8 77.6 80.4 75.4 46.0 84.8 72.5 71.0
YOLO-ft 0.0 33.8 2.6 7.8 3.2 9.5 77.2 77.1 71.9 47.3 78.8 79.8 47.1 69.8 71.8 77.0 80.2 74.3 44.2 82.5 70.6 70.0
YOLO-ft-full 7.9 48.0 39.1 29.4 36.6 32.2 75.5 73.6 69.1 43.3 78.4 78.9 42.3 70.2 66.1 77.4 79.8 72.2 41.9 82.8 69.3 68.1
Ours 14.2 57.3 50.8 38.9 41.6 40.6 70.1 66.3 66.5 40.0 78.1 77.0 40.4 61.2 61.5 71.2 79.1 70.4 38.5 80.0 68.0 64.6
10 YOLO-joint 0.0 9.1 1.5 0.0 9.1 3.9 78.7 77.1 73.3 48.0 79.4 79.8 51.6 71.7 71.1 77.6 79.9 74.4 47.8 83.2 73.4 71.1
YOLO-ft 0.0 35.6 1.0 14.2 1.5 10.5 78.0 77.8 69.0 46.0 78.5 79.6 45.3 69.9 70.9 77.0 80.8 74.2 45.2 83.3 70.7 69.8
YOLO-ft-full 12.0 59.6 42.5 39.1 38.9 38.4 73.2 74.0 66.5 44.0 78.1 78.5 43.6 68.0 66.9 76.9 81.4 72.1 43.8 82.1 68.2 67.8
Ours 20.1 51.8 55.6 42.4 36.6 41.3 68.4 71.4 66.6 37.0 75.0 76.2 35.7 52.6 60.6 66.7 79.7 68.9 40.7 76.5 68.6 63.0
Table 10: Detection performance (AP) for the base and novel categories on the PASCAL VOC dataset for the 3rd base/novel split. We evaluate the performance for different numbers of training examples for the novel categories.