1 Introduction
Object detection is one of the longstanding and important problems in computer vision. Motivated by the recent success of deep learning
[27, 21, 3, 6, 28, 4, 36] on visual object recognition tasks [26, 41, 49, 42, 45], significant improvements have been made on the object detection problem [44, 11, 20]. Most notably, Girshick et al. [18] proposed the “regions with convolutional neural network” (R-CNN) framework for object detection and demonstrated state-of-the-art performance on standard detection benchmarks (e.g., PASCAL VOC [12, 13], ILSVRC [35]) with a large margin over the previous arts, which are mostly based on the deformable part model (DPM) [15].

There are two major keys to the success of the R-CNN. First, features matter [18]. In the R-CNN, low-level image features (e.g., HOG [8]) are replaced with CNN features, which are arguably more discriminative representations. One drawback of CNN features, however, is that they are expensive to compute. The R-CNN overcomes this issue by proposing a few hundred or thousand candidate bounding boxes via the selective search algorithm [47], which effectively reduces the computational cost of evaluating detection scores over all regions of an image.
Despite the success of the R-CNN, it has been pointed out through an error analysis [22] that inaccurate localization causes the most egregious errors in the R-CNN framework [18]. For example, if none of the bounding boxes proposed by selective search lies in close proximity to the ground truth, then no matter how good the features or classifiers are, there is no way to detect the correct bounding box of the object. Indeed, many applications require accurate localization of an object bounding box, such as detecting moving objects (e.g., cars, pedestrians, bicycles) for autonomous driving
[17], and detecting objects for robotic grasping or manipulation in robotic surgery or manufacturing [29], among many others.

In this work, we address the localization difficulty of the R-CNN detection framework with two ideas. First, we develop a fine-grained search algorithm that expands an initial set of bounding boxes by proposing new bounding boxes whose scores are likely to be higher than those of the initial ones. By doing so, even if the initial region proposals are poor, the algorithm can find a region that gets closer to the ground truth after a few iterations. We build our algorithm in the Bayesian optimization framework [32, 43], where evaluation of the complex detection function is replaced with queries from a probabilistic distribution of the function values defined by a computationally efficient surrogate model. Second, we train a CNN classifier with a structured SVM objective that aims at classification and localization simultaneously. We define the structured SVM objective function with a hinge loss that balances classification (i.e., determining whether an object exists) and localization (i.e., determining how much it overlaps with the ground truth), and use it as the last layer of the CNN.
In experiments, we evaluated our methods on the PASCAL VOC 2007 and 2012 detection tasks and compared them to other competing methods. We demonstrated significantly improved performance over the state-of-the-art at different levels of the intersection-over-union (IoU) criterion. In particular, our proposed method outperforms the prior art by a large margin at higher IoU criteria, which highlights the good localization ability of our method.
Overall, the contributions of this paper are as follows: 1) we develop a Bayesian optimization framework that can find more accurate object bounding boxes without significantly increasing the number of bounding box proposals; 2) we develop a structured SVM framework to train a CNN classifier for accurate localization; 3) the aforementioned methods are complementary and can be easily adapted to various CNN models; and finally, 4) we demonstrate significant improvements in detection performance over the R-CNN on both the PASCAL VOC 2007 and 2012 benchmarks.
2 Related work
The DPM [15] and its variants [33, 16] have been the dominant methods for object detection tasks for years. These methods use image descriptors such as HOG [8], SIFT [31], and LBP [1] as features and densely sweep through the entire image to find a maximum-response region. With the notable success of CNNs on large-scale object recognition [26], several detection methods based on CNNs have been proposed [41, 40, 44, 11, 18]. Following the traditional sliding-window method for region proposal, Sermanet et al. [41] proposed to search exhaustively over an entire image using CNNs, made efficient by convolving over the entire image at once at multiple scales. Departing from the sliding-window method, Szegedy et al. [44] used CNNs to regress the bounding boxes of objects in the image and used another CNN classifier to verify whether the predicted boxes contain objects. Girshick et al. [18] proposed the R-CNN following the “recognition using regions” paradigm [19], which also inspired several previous state-of-the-art methods [47, 48]. In this framework, a few hundred or thousand regions are proposed for an image via the selective search algorithm [47], and the CNN is fine-tuned with these region proposals. Our method is built upon the R-CNN framework using the CNN proposed in [42], but with 1) a novel method to propose extra bounding boxes in the case of poor localization, and 2) a classifier with improved localization sensitivity.
The structured SVM objective function in our work is inspired by Blaschko and Lampert [5], who trained a kernelized structured SVM on low-level visual features (i.e., HOG [8]) to predict the object location. Schulz and Behnke [39] integrated a structured objective with a deep neural network for object detection, but they adopted the branch-and-bound strategy for training as in [5]. In our work, we formulate a linear structured objective upon high-level features learned by deep CNN architectures, and our negative mining step is very efficient thanks to the region-based detection framework. We also present a gradient-based optimization method for training our architecture.
There has been other related work on accurate object localization. Fidler et al. [16] incorporated the geometric consistency of bounding boxes with bottom-up segmentation as auxiliary features into the DPM. Dai and Hoiem [7] used a structured SVM with color and edge features to refine the bounding box coordinates in the DPM framework. Schulter et al. [38] used the height prior of an object. Such auxiliary features for aiding object localization can be injected into our framework without modification.
Localization refinement can also be cast as a CNN regression problem. Girshick et al. [18] extracted middle-layer features and linearly regressed the initially proposed regions to better locations. Sermanet et al. [41] refined bounding boxes from a grid layout to flexible locations and sizes using the higher layers of a deep CNN architecture. Erhan et al. [11] jointly conducted classification and regression in a single architecture. Our method is different in that 1) it uses information from multiple existing regions instead of a single bounding box to predict a new candidate region, and 2) it focuses only on maximizing the localization ability of the CNN classifier instead of performing any regression from one bounding box to another.

3 Fine-grained search for bounding boxes via Bayesian optimization
Let $f(x, y)$ denote the detection score of an image $x$ at the region with box coordinates $y$. The object detection problem deals with finding the local maxima of $f(x, \cdot)$ with respect to $y$ for an unseen image $x$. (When multiple objects exist, including none, this involves finding the local maxima that exceed a certain threshold.) As this requires evaluating the score function at many possible regions, it is crucial to have an efficient algorithm to search for candidate bounding boxes.
The sliding-window method has been the dominant search algorithm [8, 15]; it exhaustively searches over an entire image with fixed-size windows at different scales to find a bounding box with a maximum score. However, evaluating the score function at all regions determined by the sliding-window approach is prohibitively expensive when CNN features are used as the image region descriptor. The problem becomes more severe when flexible aspect ratios are needed to handle object shape variations. Alternatively, the “recognition using regions” method [19, 18] has been proposed, which requires evaluating significantly fewer regions (e.g., a few hundred or thousand) with different scales and aspect ratios, and it can use state-of-the-art image features with high computational complexity, such as CNN features [10]. One potential issue of object detection pipelines based on region proposal (we refer to selective search as a representative method) is that a correct detection cannot happen when no region is proposed in the proximity of the ground truth bounding box. To resolve this issue, one could propose more bounding boxes to cover the entire image more densely, but this would significantly increase the computational cost. In this section, we develop a fine-grained search (FGS) algorithm based on Bayesian optimization that sequentially proposes a new bounding box with a higher expected detection score than previously proposed bounding boxes, without significantly increasing the number of region proposals. We first present the general Bayesian optimization framework (Section 3.1) and describe the FGS algorithm using a Gaussian process as the prior for the score function (Section 3.2). We then present the local FGS algorithm that searches over multiple local regions instead of a single global region (Section 3.3), and discuss the hyperparameter learning of our FGS algorithm (Section 3.4).

3.1 General Bayesian optimization framework
Let $\mathcal{Y}$ be the set of solutions (e.g., bounding boxes). In the Bayesian optimization framework, the score function $f$ is assumed to be drawn from a probabilistic model, conditioned on the set of observations

(1)  $\mathcal{D}_n = \{(y_j, f_j)\}_{j=1}^{n}$

where $y_j \in \mathcal{Y}$ and $f_j = f(x, y_j)$. Here, the goal is to find a new solution $y_{n+1}$ that maximizes the chance of improving the detection score, where the chance is often defined by an acquisition function $a(y \mid \mathcal{D}_n)$. The algorithm then proceeds by recursively sampling a new solution $y_{n+1} = \arg\max_{y} a(y \mid \mathcal{D}_n)$, and updating the observation set with the newly evaluated score to draw the next sample solution.
Bayesian optimization is efficient in terms of the number of function evaluations [25], and is particularly effective when $f$ is computationally expensive. When the acquisition function $a$ is much less expensive to evaluate than $f$, and its maximization requires only a few evaluations of $f$, we can efficiently find a solution that gets closer to the ground truth.
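The iterative procedure above can be sketched as follows; `score_fn`, `fit_surrogate`, and `argmax_acquisition` are hypothetical placeholders for the expensive detector, the cheap probabilistic surrogate, and the acquisition maximizer:

```python
# Generic Bayesian-optimization loop for region proposal (illustrative sketch).
# None of these function names come from the paper's implementation.

def bayes_opt_search(score_fn, initial_boxes, fit_surrogate, argmax_acquisition,
                     n_iters=5):
    # Observation set D = {(y_j, f_j)}: boxes and their detection scores.
    observations = [(y, score_fn(y)) for y in initial_boxes]
    for _ in range(n_iters):
        surrogate = fit_surrogate(observations)        # cheap probabilistic model
        y_new = argmax_acquisition(surrogate)          # most promising new box
        observations.append((y_new, score_fn(y_new)))  # one expensive evaluation
    # Return the best (box, score) pair found so far.
    return max(observations, key=lambda of: of[1])
```

The key property is that `score_fn` is called only once per iteration, however many candidates the acquisition maximizer considers internally.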
3.2 Efficient region proposal via GP regression
A Gaussian process (GP) defines a prior distribution over the function $f$. Owing to this property, a distribution over $f$ is fully specified by a mean function $m(\cdot)$ and a positive definite covariance kernel $k(\cdot, \cdot)$, i.e., $f \sim \mathcal{GP}(m, k)$. Specifically, for a finite set of points $\{y_1, \dots, y_n\}$, the random vector $\mathbf{f} = [f(y_1), \dots, f(y_n)]^\top$ follows a multivariate Gaussian distribution. In practice, random Gaussian noise with precision $\beta$ is usually added to each $f(y_j)$ independently. Here, we used a constant mean function and the squared exponential covariance kernel with automatic relevance determination (SEard),

$k(y, y') = \sigma_f^2 \exp\!\left(-\tfrac{1}{2}\,(\tilde{y} - \tilde{y}')^\top \Lambda^{-1} (\tilde{y} - \tilde{y}')\right),$

where $\Lambda$ is a diagonal matrix whose diagonal entries are the per-dimension length scales. Together with the constant mean, $\sigma_f$, and $\beta$, these form the GP hyperparameters to be learned from the training data. The mapping $\tilde{y} = \psi(y)$ transforms the bounding box coordinates into a new form:
(2)  $\tilde{y} = \left(\frac{c_x}{\gamma},\; \frac{c_y}{\gamma},\; \log\frac{w}{\gamma},\; \log\frac{h}{\gamma}\right)$

where $c_x$ and $c_y$ denote the center coordinates, $w$ denotes the width, and $h$ denotes the height of a bounding box. We introduce a latent variable $\gamma$ to make the covariance kernel scale-invariant. (If the image and the bounding boxes are scaled by a certain factor, we can keep $\tilde{y}$ invariant by properly setting $\gamma$.) We determine $\gamma$ in a data-driven manner by maximizing the marginal likelihood of the observations:

(3)  $\hat{\gamma} = \arg\max_{\gamma}\; \log p(\mathbf{f} \mid \mathcal{D}_n, \gamma)$
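As a concrete illustration, the following sketch implements a center/log-size transform rescaled by the latent scale. This exact parameterization is our assumption, chosen so that rescaling the image can be absorbed entirely by the latent scale:

```python
import math

def transform_box(x1, y1, x2, y2, gamma=1.0):
    """Map corner coordinates to a (center, log-size) form rescaled by a
    latent scale gamma. NOTE: this specific parameterization is an assumed
    illustration of the transform described in the text, not the paper's
    exact equation."""
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1
    return (cx / gamma, cy / gamma, math.log(w / gamma), math.log(h / gamma))
```

With this form, scaling every box coordinate by a factor s while multiplying gamma by the same s leaves the transformed vector unchanged, which is the scale-invariance property the latent variable is meant to provide.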
The GP regression (GPR) problem seeks a new argument $y_{n+1}$, given the observations $\mathcal{D}_n$, that maximizes the value of the acquisition function, which in our case is defined by the expected improvement (EI):

(4)  $a_{\mathrm{EI}}(y \mid \mathcal{D}_n) = \mathbb{E}\big[\max\big(0,\, f(y) - \hat{f}\big) \,\big|\, \mathcal{D}_n\big]$

where $\hat{f} = \max_j f_j$ is the best score observed so far. The posterior of $f(y)$ given $\mathcal{D}_n$ follows a Gaussian distribution:

(5)  $f(y) \mid \mathcal{D}_n \sim \mathcal{N}\big(\mu(y), \sigma^2(y)\big)$

with the following mean function and covariance:

$\mu(y) = m(y) + \mathbf{k}(y)^\top \big(K + \beta^{-1} I\big)^{-1} (\mathbf{f} - \mathbf{m}), \qquad \sigma^2(y) = k(y, y) - \mathbf{k}(y)^\top \big(K + \beta^{-1} I\big)^{-1} \mathbf{k}(y),$

where $\mathbf{k}(y) = [k(y, y_1), \dots, k(y, y_n)]^\top$. We refer to [34] for a detailed derivation. Plugging (5) into (4) gives

(6)  $a_{\mathrm{EI}}(y \mid \mathcal{D}_n) = \sigma(y)\,\big(z\,\Phi(z) + \phi(z)\big)$

where $z = \big(\mu(y) - \hat{f}\big)/\sigma(y)$, $\Phi$ is the cumulative distribution function of the standard normal distribution, and $\phi$ is its density.
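The posterior in (5) and the closed-form EI in (6) can be sketched with NumPy as follows. This is a generic illustration of standard GP regression and EI, not the paper's GPML-based implementation:

```python
import numpy as np

def gp_posterior(K, k_star, k_ss, f, mean=0.0, noise=1e-6):
    """GP posterior mean/variance at one query point.
    K: n x n kernel matrix of observed points, k_star: n-vector of
    cross-covariances with the query, k_ss: prior variance at the query,
    f: observed scores. A constant prior mean is assumed for simplicity."""
    Kn = K + noise * np.eye(len(f))
    alpha = np.linalg.solve(Kn, np.asarray(f) - mean)
    mu = mean + k_star @ alpha
    var = k_ss - k_star @ np.linalg.solve(Kn, k_star)
    return mu, max(float(var), 1e-12)

def expected_improvement(mu, var, f_best):
    """Closed-form EI: sigma * (z * Phi(z) + phi(z)), z = (mu - f_best)/sigma."""
    from math import erf, exp, pi, sqrt
    sigma = sqrt(var)
    z = (mu - f_best) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return sigma * (z * Phi + phi)
```

Maximizing `expected_improvement` over candidate boxes only requires evaluations of the cheap surrogate, never of the CNN detector itself.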
3.3 Local fine-grained search
In this section, we extend the GPR-based algorithm from global maximum search to local fine-grained search (FGS).
The local FGS steps are described in Figure 1. We perform FGS by pruning out easy negatives with low classification scores from the set of regions proposed by the selective search algorithm, and sorting out a few bounding boxes with maximum scores in local regions. Then, for each local optimum (red boxes in Figure 1), we propose a new candidate bounding box (green boxes in Figure 1). Specifically, we initialize a set of local observations for each local optimum from the set given by the selective search algorithm, where localness is measured by the IoU between the local optimum and the region proposals (yellow boxes in Figure 1; in practice, the local search region associated with a local optimum is not a rectangular region around it, since we use IoU to determine it). This local observation set is used to fit a GP model, and the procedure is iterated for each local optimum at different levels of IoU until there is no more acceptable proposal. We provide pseudocode for local FGS, together with its parameter settings, in Algorithm 1.
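A minimal sketch of how the local observation set for one local optimum could be gathered; the IoU threshold value and the data layout here are our assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def local_observations(local_opt, proposals, scores, iou_thresh=0.3):
    """Collect the observation set for one local optimum: every proposal
    whose IoU with it exceeds a threshold (threshold value assumed)."""
    return [(y, f) for y, f in zip(proposals, scores)
            if iou(local_opt, y) >= iou_thresh]
```

Because membership is defined by IoU rather than by a rectangle around the local optimum, the resulting search region is irregular, as the footnote in the text points out.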
In addition to its capability of handling multiple objects in a single image, better computational efficiency is another factor that makes local FGS preferable to global search. As a kernel method, the computational complexity of GPR grows cubically with the number of observations. By restricting the observation set to the nearby region of a local optimum, GP fitting and proposal can be performed efficiently. In practice, FGS introduces only modest computational overhead compared to the original R-CNN. Please see the appendices, which are also available in our technical report [50], for more details on its practical efficiency (Appendix A4).
3.4 Learning the GP hyperparameters
Since we perform FGS locally, the GP hyperparameters also need to be trained with observations in the vicinity of ground truth objects. To this end, for each annotated object in the training set, we form a set of observations with the structured labels and corresponding classification scores of the bounding boxes that are close to the ground truth bounding box. Such an observation set is composed of bounding boxes (given by selective search and random selection) whose IoU with the ground truth exceeds a certain threshold. Finally, we fit a GP model by maximizing the joint likelihood of these observations:

$\hat{\theta} = \arg\max_{\theta}\; \sum_{i \in \mathcal{I}^+} \log p\big(\mathbf{f}_i \mid \mathcal{D}_i, \theta\big),$

where $\mathcal{I}^+$ is the index set of positive training samples (i.e., those with ground truth object annotations), and $\mathcal{D}_i$ is the observation set formed around the ground truth annotation $y_i^*$ of image $x_i$, with scores $\mathbf{f}_i$. (We assume one object per image; see Section 4.2 for handling multiple objects in training.) $\mathcal{D}_i$ consists of the bounding boxes given by selective search on $x_i$, together with a random subset of boxes, restricted to those whose overlap with $y_i^*$ exceeds the overlap threshold. The optimal solution can be obtained via L-BFGS. Our implementation relies on the GPML toolbox [34].
4 Learning the R-CNN with structured loss
This section describes a training algorithm for the R-CNN object detector using a structured loss. We first revisit the object detection framework with structured output regression introduced by Blaschko and Lampert [5] in Section 4.1, and extend it to the R-CNN pipeline, which allows training the network with a structured hinge loss, in Section 4.2.
4.1 Structured output regression for detection
Let $\{x_1, \dots, x_N\}$ be the set of training images and $\{y_1, \dots, y_N\}$ the set of corresponding structured labels. A structured label $y$ is composed of five elements $(t, c_1, c_2, c_3, c_4)$; when $t = 1$, $(c_1, c_2)$ and $(c_3, c_4)$ denote the top-left and bottom-right coordinates of the object, respectively, and when $t = 0$, there is no object in the image and the coordinate elements carry no meaning. Note that the definition of $y$ is extended from Section 3 to indicate the presence of an object as well as its location when one exists. When there are multiple objects in an image, we crop the image into multiple positive ($t = 1$) images, each of which contains a single object, and a negative ($t = 0$) image that does not contain any object. (We also perform the same procedure for images with a single object during training.) Let $\phi(x, y)$ represent the feature extracted from an image $x$ for a label $y$ with $t = 1$. In our case, $\phi(x, y)$ denotes the top-layer representation of the CNN (excluding the classification layer) at the location specified by $y$, which is fed into the classification layer. (Following [18], we crop and warp the image patch at the location given by $y$ to a fixed size, e.g., 224×224, to compute the CNN features.) The detection problem is to find the structured label with the highest score:

(7)  $\hat{y} = \arg\max_{y}\; F(x, y; w)$

where

(8)  $F(x, y; w) = w^\top \phi(x, y)$ if $t = 1$,
(9)  $F(x, y; w) = 0$ if $t = 0$.
Note that (9) includes a trick for setting the detection threshold to $0$: a region is reported as an object only when its score exceeds the score of the background label. The model parameter $w$ is trained to minimize the structured loss between the predicted label $\hat{y}_i$ and the ground-truth label $y_i$:

(10)  $w^* = \arg\min_{w}\; \sum_{i=1}^{N} \Delta\big(y_i, \hat{y}_i\big)$

For the detection problem, the structured loss is defined in terms of the intersection over union (IoU) of the two bounding boxes defined by $y_i$ and $\hat{y}_i$:

(11)  $\Delta(y_i, \hat{y}_i) = \begin{cases} 1 - \mathrm{IoU}(y_i, \hat{y}_i) & \text{if } t_i = \hat{t}_i = 1 \\ 1 & \text{if } t_i \neq \hat{t}_i \\ 0 & \text{if } t_i = \hat{t}_i = 0 \end{cases}$

where $\mathrm{IoU}$ denotes the area of the intersection of two boxes divided by the area of their union. In general, the optimization problem (10) is difficult to solve due to the complicated form of the structured loss. Instead, we formulate a surrogate objective in the structured SVM framework [46] as follows:

(12)  $\min_{w, \xi}\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i$
(13)  $\text{s.t.}\;\; F(x_i, y_i; w) - F(x_i, y; w) \geq \Delta(y_i, y) - \xi_i,\;\; \forall y,\; \forall i$
(14)  $\xi_i \geq 0,\;\; \forall i$
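The IoU-based structured loss can be sketched directly in code; the handling of the presence-mismatch cases follows our reading of the definition above:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    u = area(a) + area(b) - inter
    return inter / u if u > 0 else 0.0

def structured_loss(y_true, y_pred):
    """Delta(y*, y) for labels of the form (t, x1, y1, x2, y2):
    1 - IoU when both contain an object, 1 on a presence mismatch,
    0 when both are background."""
    t_true, t_pred = y_true[0], y_pred[0]
    if t_true == 1 and t_pred == 1:
        return 1.0 - iou(y_true[1:], y_pred[1:])
    return 0.0 if t_true == t_pred else 1.0
```

The loss is zero only for a perfectly localized detection or a correctly rejected background image, and grows smoothly as the predicted box drifts away from the ground truth.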
Using (9) and (11), the constraint (13) can be written as follows:

(15)  $w^\top \phi(x_i, y_i) - w^\top \phi(x_i, y) \geq 1 - \mathrm{IoU}(y_i, y) - \xi_i,\;\; \forall y \text{ with } t = 1,\; i \in \mathcal{I}^+$
(16)  $w^\top \phi(x_i, y_i) \geq 1 - \xi_i,\;\; i \in \mathcal{I}^+$
(17)  $-\,w^\top \phi(x_j, y) \geq 1 - \xi_j,\;\; \forall y \text{ with } t = 1,\; j \in \mathcal{I}^-$

where $\mathcal{I}^+$ and $\mathcal{I}^-$ denote the sets of indices for positive and negative training examples, respectively.
4.2 Gradient-based learning of the R-CNN with a structured SVM objective
To learn the R-CNN detector with a structured loss, we make several modifications to the original structured SVM formulation. First, we restrict the output space of the $i$th example to the regions proposed by selective search; every maximization over $y$ in (15) and (17) for the $i$th example is thus taken over its proposal set $\mathcal{Y}_i$. Second, the constraints (15, 16, 17) are transformed into hinge losses so that the gradient can be backpropagated to the lower layers of the CNN. Specifically, the objective function (12) is reformulated as follows:

(18)  $\min_{w}\; \frac{1}{2}\|w\|^2 + C^+ \sum_{i \in \mathcal{I}^+} \ell^+_i(w) + C^- \sum_{j \in \mathcal{I}^-} \ell^-_j(w)$

where the per-example hinge losses $\ell^+_i$ and $\ell^-_j$ are given as:

(19)  $\ell^+_i(w) = \max\Big(0,\; \max_{y \in \mathcal{Y}_i} \big[\Delta(y_i, y) + w^\top \phi(x_i, y)\big] - w^\top \phi(x_i, y_i),\; 1 - w^\top \phi(x_i, y_i)\Big)$
(20)  $\ell^-_j(w) = \max\Big(0,\; 1 + \max_{y \in \mathcal{Y}_j} w^\top \phi(x_j, y)\Big)$

Note that we use different trade-off values $C^+$ and $C^-$ for the positive and negative examples in our experiments.
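A minimal NumPy sketch of the per-example hinge losses. The exact form, including the background-margin term for positive images, is our reading of the objective rather than the paper's code:

```python
import numpy as np

def positive_hinge(scores, deltas, score_gt):
    """Hinge loss for a positive image (our reading of Eq. 19).
    scores: w^T phi(x, y) for each candidate region y in the proposal set,
    deltas: Delta(y*, y) for each candidate, score_gt: w^T phi(x, y*).
    The final term enforces the margin against the background label."""
    violations = np.asarray(deltas) + np.asarray(scores) - score_gt
    return max(0.0, float(np.max(violations)), 1.0 - score_gt)

def negative_hinge(scores):
    """Hinge loss for a negative image (our reading of Eq. 20): every
    candidate region should score below the zero threshold by a margin of 1."""
    return max(0.0, 1.0 + float(np.max(scores)))
```

Note that each loss is determined by a single maximally violating candidate, which is why hard negative mining (discussed next) is needed to make gradient updates effective.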
The structured SVM objective may cause slow convergence in parameter estimation, since it utilizes at most one instance among a large number of instances in the (restricted) output space, whose size varies from a few hundred to a few thousand. To overcome this issue, we alternately perform gradient-based parameter estimation and hard negative mining, which effectively adapts the number of training examples evaluated for updating the parameters (Appendix A2).

For model parameter estimation, we use L-BFGS to first learn the parameters of the classification layer only. We found that this alone already results in good detection performance. Then, we optionally use stochastic gradient descent to fine-tune the whole CNN (Appendix A1).

5 Experimental results
We applied our proposed methods to standard visual object detection tasks on PASCAL VOC 2007 [12] and 2012 [14]. In all experiments, we consider the R-CNN [18] as the baseline model. Following [18], we used CNN models pre-trained for object classification on the ImageNet database [9] [26, 42], and fine-tuned the whole network on the target database by replacing the existing softmax classification layer with a new one matching the number of classes in VOC 2007 and 2012. We provide the learning details in Appendix A3. Our implementation is based on the Caffe toolbox [23].

Setting the R-CNN as the baseline method, we compared the detection performance of our proposed methods: R-CNN with FGS (R-CNN + FGS), R-CNN trained with the structured SVM objective (R-CNN + StructObj), and their combination (R-CNN + StructObj + FGS). Since our goal is to localize bounding boxes more accurately, we also adopt a stricter IoU threshold as an evaluation criterion, which counts a detection as correct only when the overlap between the predicted bounding box and the ground truth exceeds that higher threshold. This is more challenging than the common practice (IoU ≥ 0.5), but is a good indicator of better localization of an object bounding box.
Model  BBoxReg  aero  bike  bird  boat  bottle  bus  car  cat  chair  cow  table  dog  horse  mbike  person  plant  sheep  sofa  train  tv  mAP 

RCNN (AlexNet)  No  64.2  69.7  50.0  41.9  32.0  62.6  71.0  60.7  32.7  58.5  46.5  56.1  60.6  66.8  54.2  31.5  52.8  48.9  57.9  64.7  54.2 
RCNN (VGG)  No  68.5  74.5  61.0  37.9  40.6  69.2  73.7  69.9  37.2  68.6  56.8  70.6  69.0  67.1  59.6  33.4  63.9  58.9  62.6  68.5  60.6 
+ StructObj  No  68.7  73.5  62.6  40.6  41.5  69.6  73.5  71.1  39.9  69.6  58.1  70.0  67.5  69.8  59.8  35.9  63.6  59.0  62.6  67.7  61.2 
+ StructObjFT  No  69.3  75.2  62.2  39.4  42.3  70.7  74.5  74.3  40.4  71.3  59.8  72.0  69.8  69.4  60.3  35.3  64.5  62.0  63.7  69.8  62.3 
+ FGS  No  70.6  78.4  65.7  46.2  48.8  74.6  77.0  74.3  42.7  70.8  60.9  75.1  75.8  70.7  66.3  37.1  66.3  57.6  66.6  71.0  64.8 
+ StructObj + FGS  No  73.4  80.9  64.5  46.7  49.1  73.9  78.2  76.8  44.8  75.3  63.0  75.3  74.2  72.7  68.5  37.0  67.5  58.1  66.9  70.5  65.9 
+ StructObjFT + FGS  No  72.5  78.8  67.0  45.2  51.0  73.8  78.7  78.3  46.7  73.8  61.5  77.1  76.4  73.9  66.5  39.2  69.7  59.4  66.8  72.9  66.5 
RCNN (AlexNet)  Yes  68.1  72.8  56.8  43.0  36.8  66.3  74.2  67.6  34.4  63.5  54.5  61.2  69.1  68.6  58.7  33.4  62.9  51.1  62.5  64.8  58.5 
RCNN (VGG)  Yes  70.8  77.1  69.4  45.8  48.4  74.0  77.0  75.0  42.2  72.5  61.5  75.6  77.7  66.6  65.3  39.1  65.8  64.2  68.6  71.5  65.4 
+ StructObj  Yes  73.1  77.5  69.2  47.6  47.6  74.5  78.2  75.4  44.5  76.3  64.9  76.7  76.3  69.9  68.1  39.4  67.0  65.6  68.7  70.9  66.6 
+ StructObjFT  Yes  72.6  79.4  69.4  45.2  47.8  74.4  77.8  76.5  45.4  76.3  61.4  80.2  77.1  73.8  66.8  41.1  67.8  64.7  67.9  72.3  66.9 
+ FGS  Yes  74.2  78.9  67.8  51.6  52.3  75.7  78.7  76.6  45.4  72.4  63.1  76.6  79.3  70.7  68.0  40.3  67.8  61.8  70.2  71.6  67.2 
+ StructObj + FGS  Yes  74.1  83.2  67.0  50.8  51.6  76.2  81.4  77.2  48.1  78.9  65.6  77.3  78.4  75.1  70.1  41.4  69.6  60.8  70.2  73.7  68.5 
+ StructObjFT + FGS  Yes  71.3  80.5  69.3  49.6  54.2  75.4  80.7  79.4  49.1  76.0  65.2  79.4  78.4  75.0  68.4  41.6  71.3  61.2  68.2  73.3  68.4 
Model  BBoxReg  aero  bike  bird  boat  bottle  bus  car  cat  chair  cow  table  dog  horse  mbike  person  plant  sheep  sofa  train  tv  mAP 

RCNN (AlexNet)  No  32.9  40.1  19.7  18.7  11.1  39.4  40.5  26.5  14.8  29.8  24.5  26.4  23.7  31.9  18.5  13.3  27.6  25.8  26.6  39.5  26.6 
RCNN (VGG)  No  40.2  43.3  23.4  14.4  13.3  48.2  44.5  36.4  17.1  34.0  27.9  36.3  26.8  28.2  21.2  10.3  33.7  36.6  31.6  48.9  30.8 
+ StructObj  No  42.5  44.4  24.5  17.8  15.3  46.8  46.4  37.9  17.6  33.4  26.6  36.8  24.3  31.5  21.3  10.4  30.0  36.1  30.6  46.3  31.0 
+ StructObjFT  No  44.1  47.1  23.4  16.6  16.4  50.1  48.7  39.7  18.4  39.4  28.6  38.6  27.5  32.4  23.6  11.1  33.1  41.0  34.3  49.6  33.2 
+ FGS  No  44.3  55.5  28.9  19.1  22.9  56.9  57.6  37.8  19.6  35.7  31.9  38.1  43.0  42.7  30.3  9.8  42.3  33.3  43.4  55.4  37.4 
+ StructObj + FGS  No  43.5  56.1  30.9  18.7  24.9  55.2  57.6  38.9  20.7  38.6  28.4  37.7  38.7  46.3  30.9  8.4  37.6  37.0  42.2  51.3  37.2 
+ StructObjFT + FGS  No  46.3  58.1  31.1  21.6  25.8  57.1  58.2  43.5  23.0  46.4  29.0  40.7  40.6  46.3  33.4  10.6  41.3  40.9  45.8  56.3  39.8 
RCNN (AlexNet)  Yes  47.6  48.7  25.3  25.0  17.3  53.4  54.6  36.8  16.7  42.3  31.6  35.8  38.0  41.8  24.5  14.3  38.8  28.9  34.0  49.0  35.2 
RCNN (VGG)  Yes  45.1  48.6  26.0  18.2  21.2  57.2  52.4  37.3  20.1  33.7  31.9  38.8  39.6  36.3  26.5  9.2  37.8  33.4  39.4  50.7  35.2 
+ StructObj  Yes  49.4  56.5  36.5  21.3  23.3  61.0  58.1  44.3  20.8  47.4  33.3  39.8  40.7  45.9  31.0  14.7  39.6  42.9  45.7  56.9  40.5 
+ StructObjFT  Yes  49.3  58.1  35.4  23.3  24.4  62.3  60.1  45.8  21.8  48.7  32.4  41.8  43.2  45.7  32.0  14.4  44.6  45.1  48.6  59.8  41.8 
+ FGS  Yes  50.9  59.8  34.4  20.9  31.6  66.1  62.3  44.9  22.0  46.5  36.8  42.5  51.4  46.8  34.1  13.5  44.7  39.1  48.9  57.7  42.7 
+ StructObj + FGS  Yes  53.6  60.7  32.1  19.9  31.3  63.2  63.2  46.4  23.6  53.0  34.9  40.4  53.6  49.9  34.6  10.2  42.2  40.1  48.3  58.3  43.0 
+ StructObjFT + FGS  Yes  47.1  61.8  35.2  18.1  29.7  66.0  64.7  48.0  25.3  50.4  34.9  43.7  50.8  49.4  36.8  13.7  44.7  43.6  49.8  60.5  43.7 
Model  BBoxReg  aero  bike  bird  boat  bottle  bus  car  cat  chair  cow  table  dog  horse  mbike  person  plant  sheep  sofa  train  tv  mAP 

RCNN (AlexNet)  No  68.1  63.8  46.1  29.4  27.9  56.6  57.0  65.9  26.5  48.7  39.5  66.2  57.3  65.4  53.2  26.2  54.5  38.1  50.6  51.6  49.6 
RCNN (VGGNet)  No  76.3  69.8  57.9  40.2  37.2  64.0  63.7  80.2  36.1  63.6  47.3  81.1  71.2  73.8  59.5  30.9  64.2  52.2  62.4  58.7  59.5 
RCNN (AlexNet)  Yes  71.8  65.8  52.0  34.1  32.6  59.6  60.0  69.8  27.6  52.0  41.7  69.6  61.3  68.3  57.8  29.6  57.8  40.9  59.3  54.1  53.3 
RCNN (VGGNet)  Yes  79.2  72.3  62.9  43.7  45.1  67.7  66.7  83.0  39.3  66.2  51.7  82.2  73.2  76.5  64.2  33.7  66.7  56.1  68.3  61.0  63.0 
+ StructObj  Yes  80.9  74.8  62.7  42.6  46.2  70.2  68.6  84.0  42.2  68.2  54.1  82.2  74.2  79.8  66.6  39.3  67.6  61.0  71.3  65.2  65.1 
+ FGS  Yes  80.5  73.5  64.1  45.3  48.7  66.5  68.3  82.8  39.8  68.2  52.7  82.1  75.1  76.6  66.3  35.5  66.9  56.8  68.7  61.6  64.0 
+ StructObj + FGS  Yes  82.9  76.1  64.1  44.6  49.4  70.3  71.2  84.6  42.7  68.6  55.8  82.7  77.1  79.9  68.7  41.4  69.0  60.0  72.0  66.2  66.4 
NIN [30]    80.2  73.8  61.9  43.7  43.0  70.3  67.6  80.7  41.9  69.7  51.7  78.2  75.2  76.9  65.1  38.6  68.3  58.0  68.7  63.3  63.8 
5.1 FGS efficacy test with oracle detector
Before reporting the performance of the proposed methods in the R-CNN framework, we demonstrate the efficacy of the FGS algorithm using an oracle detector. We design a hypothetical oracle detector whose score function is defined as $f_{\mathrm{oracle}}(x, y) = \mathrm{IoU}(y, y^*)$, where $y^*$ is the ground truth annotation of image $x$. This score function is ideal in the sense that it outputs high scores for bounding boxes with high overlap with the ground truth and vice versa, overall achieving 100% mAP.
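A sketch of such an oracle score function, which simply returns the IoU of a candidate box with the ground truth:

```python
def oracle_score(box, gt_box):
    """Hypothetical oracle detector: the score of a candidate box is its IoU
    with the ground-truth box, so a perfectly localized box scores 1 and a
    non-overlapping box scores 0."""
    ix1, iy1 = max(box[0], gt_box[0]), max(box[1], gt_box[1])
    ix2, iy2 = min(box[2], gt_box[2]), min(box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(box) + area(gt_box) - inter
    return inter / union if union > 0 else 0.0
```

Running FGS against this detector isolates the quality of the region proposal scheme from the quality of the classifier: any residual localization error is attributable to the proposals alone.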
We summarize the results in Figure 2. We report the performance on the VOC 2007 test set at different levels of the IoU criterion for the baseline selective search (SS; “fast mode” in [47]), selective search with objectness [2] (SS + Objectness), selective search with extended superpixel similarity measurements (SS extended) [47], the “quality mode” of selective search (SS quality) [47], local random search (like local FGS, local random search first determines the local search regions by NMS; however, it randomly chooses a fixed number of bounding boxes in those regions rather than sequentially proposing new boxes in an informed way), and the proposed FGS method with the baseline selective search.

For low IoU thresholds, all methods using the oracle detector performed almost perfectly due to the ideal score function. However, the detection performance of the region proposal schemes other than our proposed FGS algorithm starts to break down at high IoU thresholds. For example, the performance of the SS, SS + Objectness, SS extended, and local random search methods, which use similar numbers of bounding boxes per image on average, dropped significantly at high IoU thresholds. The SS quality method kept pace with the FGS method up to a higher threshold, but its performance eventually started to drop as well.

On the other hand, the performance of FGS dropped only slightly in mAP even at high IoU thresholds, while introducing only a small number of new bounding boxes per image. Given the much larger number of region proposals that SS quality requires per image, our proposed FGS method is considerably more computationally efficient while localizing bounding boxes much more accurately. This suggests that, with an accurate detector, our Bayesian optimization framework can limit the number of bounding boxes to a manageable number (e.g., a few thousand per image on average) while achieving almost perfect detection results.
We also report similar experimental analysis for the real detector trained with the proposed structured objective in Appendix A6.
5.2 PASCAL VOC 2007
In this section, we demonstrate the performance of our proposed methods on the PASCAL VOC 2007 [12] detection task (comp4), a standard benchmark for the object detection problem. Similar to the training pipeline of the R-CNN [18], we fine-tuned the CNN models (with a softmax classification layer) pre-trained on the ImageNet database using images from both the train and validation sets of VOC 2007, and further trained the network with a linear SVM (baseline) or the proposed structured SVM objective. We evaluated on the test set using the proposed FGS algorithm. For post-processing, we performed NMS and bounding box regression [18].
Figure 3 shows representative examples of successful detection using our method. In these cases, our method localizes objects accurately even when the initial bounding box proposals do not overlap well with the ground truth. We show more examples (including failure cases) in Appendices A9, A10, and A11.
The summary results are in Table 1 for the standard IoU criterion and Table 2 for the stricter one. We report the performance with the AlexNet [26] and the VGGNet (16 layers) [42], a deeper CNN model than AlexNet that showed significantly better recognition performance and achieved the best performance on the object localization task in ILSVRC 2014. (The 16-layer VGGNet can be downloaded from https://gist.github.com/ksimonyan/211839e770f7b538e2d8.) First of all, we observed a significant performance improvement simply from using a better CNN model. Building upon the VGGNet, FGS improved the performance in mAP both without and with bounding box regression (Table 1). The gains become much more significant under the stricter IoU criterion (Table 2), improving substantially upon the baseline model both without and with bounding box regression. These results demonstrate that our FGS algorithm is effective at accurately localizing the bounding box of an object.
Further improvement was made by training the classifier with the structured SVM objective: with FGS and bounding box regression, training the classification layer only (“StructObj”) yields an mAP under the standard IoU criterion that, to our knowledge, is higher than the best published results, together with a corresponding improvement under the stricter criterion. By fine-tuning the whole CNN (“StructObjFT”), we observed extra improvement in most cases, most notably under the stricter criterion. However, for the IoU ≥ 0.5 criterion, the overall improvement due to fine-tuning was relatively small, especially when using bounding box regression. In this case, considering the high computational cost of fine-tuning, we found that training only the classification layer is practically sufficient for learning a good localization-aware classifier.
We provide in-depth analyses of our proposed methods in the appendices. Specifically, we report the precision-recall curves of different combinations of the proposed methods (Appendix A7), the performance of FGS with different numbers of GP iterations (Appendix A5), an analysis of localization accuracy (Appendix A8), and more detection examples.
5.3 PASCAL VOC 2012
We also evaluate the performance of the proposed methods on PASCAL VOC 2012 [14]. As the data statistics are similar to those of VOC 2007, we used the same hyperparameters as described in Section 5.2 for this experiment. We report the test set mAP over 20 object categories in Table 3. Our proposed method shows improvement with both R-CNN + StructObj and R-CNN + FGS over the baseline R-CNN using VGGNet. Finally, combining the two methods significantly improved upon the baseline R-CNN model and the previously published results on the leaderboard.
6 Conclusion
In this work, we proposed two complementary methods to improve the performance of object detection in the R-CNN framework: 1) a fine-grained search algorithm in a Bayesian optimization framework to refine region proposals, and 2) a CNN classifier trained with a structured SVM objective to improve localization. We demonstrated state-of-the-art detection performance on the PASCAL VOC 2007 and 2012 benchmarks under standard localization requirements. Our methods showed more significant improvement under higher IoU evaluation criteria, and hold promise for mission-critical applications that require highly precise localization, such as autonomous driving and robotic surgery and manipulation.
Acknowledgments
This work was supported by the Samsung Digital Media and Communication Lab, a Google Faculty Research Award, ONR grant N00014-13-1-0762, the China Scholarship Council, and a Rackham Merit Fellowship. We also acknowledge NVIDIA for the donation of GPUs. Finally, we thank Scott Reed, Brian Wang, Junhyuk Oh, Xinchen Yan, Ye Liu, Wenling Shang, and Roni Mittelman for helpful discussions.
References
 Ahonen et al. [2004] T. Ahonen, A. Hadid, and M. Pietikäinen. Face recognition with local binary patterns. In ECCV, 2004.
 Alexe et al. [2012] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2189–2202, Nov 2012.
 Bengio et al. [2007] Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, et al. Greedy layer-wise training of deep networks. In NIPS, 2007.
 Bengio et al. [2013] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, Aug 2013.
 Blaschko and Lampert [2008] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008.

 Boureau et al. [2008] Y.-L. Boureau, Y. L. Cun, et al. Sparse feature learning for deep belief networks. In NIPS, pages 1185–1192, 2008.
 Dai and Hoiem [2012] Q. Dai and D. Hoiem. Learning to localize detected objects. In CVPR, 2012.
 Dalal and Triggs [2005] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
 Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
 Donahue et al. [2013] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013.
 Erhan et al. [2014] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In CVPR, 2014.
 Everingham et al. [2007] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results, 2007.
 Everingham et al. [2010] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results, 2010.
 Everingham et al. [2012] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results, 2012.
 Felzenszwalb et al. [2010] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010.
 Fidler et al. [2013] S. Fidler, R. Mottaghi, A. Yuille, and R. Urtasun. Bottom-up segmentation for top-down detection. In CVPR, 2013.
 Geiger et al. [2012] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012.
 Girshick et al. [2014] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
 Gu et al. [2009] C. Gu, J. J. Lim, P. Arbelaez, and J. Malik. Recognition using regions. In CVPR, 2009.
 He et al. [2014] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. arXiv preprint arXiv:1406.4729, 2014.
 Hinton and Salakhutdinov [2006] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
 Hoiem et al. [2012] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, 2012.
 Jia et al. [2014] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. CoRR, abs/1408.5093, 2014.

 Joachims [2008] T. Joachims. SVMstruct: Support vector machine for complex outputs. http://www.cs.cornell.edu/people/tj/svm_light/svm_struct.html, 2008.
 Jones [2001] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
 Krizhevsky et al. [2012] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
 LeCun et al. [1989] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
 Lee et al. [2011] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Unsupervised learning of hierarchical representations with convolutional deep belief networks. Communications of the ACM, 54(10):95–103, 2011.
 Lenz et al. [2013] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. In Robotics Science and Systems, 2013.
 Lin et al. [2013] M. Lin, Q. Chen, and S. Yan. Network in network. CoRR, abs/1312.4400, 2013.
 Lowe [2004] D. G. Lowe. Distinctive image features from scaleinvariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
 Mockus et al. [1978] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117129):2, 1978.
 Pepikj et al. [2013] B. Pepikj, M. Stark, P. Gehler, and B. Schiele. Occlusion patterns for object class detection. In CVPR, 2013.

 Rasmussen and Williams [2006] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2006.
 Russakovsky et al. [2014] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014.
 Schmidhuber [2015] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
 [37] M. Schmidt. minFunc toolbox. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html.

 Schulter et al. [2014] S. Schulter, C. Leistner, P. Wohlhart, P. M. Roth, and H. Bischof. Accurate object detection with joint classification-regression random forests. In CVPR, 2014.
 Schulz and Behnke [2014] H. Schulz and S. Behnke. Structured prediction for object detection in deep neural networks. In ICANN, volume 8681, pages 395–402. 2014.
 Sermanet et al. [2013] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun. Pedestrian detection with unsupervised multi-stage feature learning. In CVPR, 2013.
 Sermanet et al. [2014] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
 Simonyan and Zisserman [2015] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
 Snoek et al. [2012] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
 Szegedy et al. [2013] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
 Szegedy et al. [2014] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
 Tsochantaridis et al. [2005] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, (6):1453–1484, 2005.
 Uijlings et al. [2013] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 104(2):154–171, 2013.
 Wang et al. [2013] X. Wang, M. Yang, S. Zhu, and Y. Lin. Regionlets for generic object detection. In ICCV, pages 17–24, Dec 2013.
 Zeiler and Fergus [2014] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833. Springer, 2014.
 Zhang et al. [2015] Y. Zhang, K. Sohn, R. Villegas, G. Pan, and H. Lee. Improving object detection with deep convolutional networks via Bayesian optimization and structured prediction. CoRR, abs/1504.03293, 2015. URL http://arxiv.org/abs/1504.03293.
Appendix A1 Parameter estimation for fine-tuning with structured SVM objective
The model parameters are updated via gradient descent. For example, the gradient with respect to the CNN parameters for positive examples is given as follows:
(A1)
Similarly, the gradient for negative examples can be computed as follows:
(A2)
The gradient with respect to the parameters of all layers of the CNN can be computed efficiently using backpropagation. When fine-tuning the entire network, the parameter update in the hard-mining procedure illustrated by Algorithm A1 is done by replacing the classifier weights with the parameters of the whole CNN.
Appendix A2 Details on hard negative data mining
The active set, consisting of the hard training instances, is updated in two steps during the iterative learning process. First, we include instances in the active set when they are likely to be active, i.e., to affect the gradient:
(A3)  
(A4) 
Second, once new parameters are estimated, we exclude instances from the current active set when they are likely to be inactive, i.e., have no effect on the gradient:
(A5)  
(A6) 
In our experiments, we set the thresholds to the same values as those used for the SVM training in R-CNN [18], and we did not observe noticeable performance fluctuation with different threshold values. Algorithm A1 summarizes the hard-mining procedure.
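The two-step active-set update described above can be sketched as follows. This is a minimal illustration for a binary scorer; the margin slacks `eps_add` and `eps_remove`, and running both steps back to back, are our simplifications (in the actual procedure, step 2 runs after the parameters are re-estimated):

```python
def update_active_set(active, pool, score, eps_add=0.0, eps_remove=0.0):
    """Two-step hard-mining update of the active set.

    active: current list of (x, y) training pairs, with label y in {+1, -1}.
    pool:   candidate pairs to consider adding.
    score:  current scoring function f(x).
    """
    # Step 1: add instances that (nearly) violate the margin,
    # i.e., that would contribute to the gradient.
    for x, y in pool:
        if y * score(x) < 1.0 + eps_add and (x, y) not in active:
            active.append((x, y))
    # Step 2: after re-estimating the parameters, drop instances safely
    # outside the margin (zero hinge loss), i.e., with no gradient effect.
    active[:] = [(x, y) for (x, y) in active if y * score(x) < 1.0 + eps_remove]
    return active
```

With a fixed scorer, only the pairs inside the margin survive a full update, which is what keeps the active set small relative to the full training pool.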
Appendix A3 Implementation details on model parameter estimation
For our experiments on PASCAL VOC 2007 and VOC 2012, we first fine-tune the CNN pretrained on ImageNet by stochastic gradient descent with a 21-way softmax classification layer, where 20 ways are for the 20 object categories of interest and the remaining way is for the background. In this step, the SGD learning rate starts at 0.0003 and decays by a factor of 0.3 every 15000 iterations, with a minibatch size of 48. We set the momentum to 0.9 and the weight decay to 0.0005 for all layers.
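The step-decay schedule above can be written as a one-line helper (the constants are taken from the text; the function name is ours):

```python
def learning_rate(step, base=3e-4, gamma=0.3, interval=15000):
    """Step-decay SGD schedule: start at 3e-4, multiply by 0.3 every 15000 iterations."""
    return base * gamma ** (step // interval)
```

For example, the rate stays at 3e-4 for the first 15000 iterations, then drops to 9e-5, then to 2.7e-5, and so on.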
After that, we replace the softmax loss layer with a 20-way structured loss layer, where each way is a binary classifier and the hinge losses for the different categories are simply summed.
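A minimal sketch of such a summed per-category hinge loss for a single box follows; note that the paper's localization-aware margin scaling is not reproduced here, only the plain sum of independent binary hinge losses:

```python
def summed_hinge_loss(scores, labels):
    """Sum of independent binary hinge losses, one per category.

    scores: per-category detection scores for one box.
    labels: per-category labels, +1 or -1.
    """
    return sum(max(0.0, 1.0 - y * s) for s, y in zip(scores, labels))
```

A confidently correct category (y * s >= 1) contributes nothing, so each of the 20 binary classifiers can be mined and updated independently.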
For classification-layer-only learning, we adopt L-BFGS, performing batch gradient descent over the single layer. Each category has an associated active set for hard negative mining. The classifier update happens independently for each category whenever a threshold number of new hard examples (as in Algorithm A1) have been added to the active set. It is worth mentioning that, at the beginning of hard negative mining, significantly more positive images are present than negative images, resulting in a serious imbalance of the active training samples. As a heuristic to avoid this problem, we limit the number of positive images to the number of active negative images when the classifier update happens in the first epoch. We run the hard negative mining for 2 epochs in total: the first epoch initializes the active set with the above heuristic, and the second learns with all the training data. Compared to the linear SVM training in R-CNN [18], our L-BFGS-based solution to the structured objective takes longer to train; however, it turns out to be significantly more efficient than SVMstruct [24]. For the entire-network fine-tuning, we initialize the structured loss layer with the weights obtained by the classification-layer-only learning. The whole network is then fine-tuned by backpropagating the gradient from the top layer with a small fixed SGD learning rate. For implementation simplicity, we keep updating the active sets until the end of an epoch and update the classifiers once per epoch (cf. Algorithm A1). As before, each category has its own active set; however, the network parameters (except for the classifiers) are shared across all categories so that the feature extraction time does not scale with the number of categories. In practice, we found one epoch was enough for both hard negative mining and SGD in the entire-network fine-tuning case; running more epochs did not yield a noticeable improvement in the final detection performance on the PASCAL VOC 2007 test set, but cost a significantly larger amount of training time.
Appendix A4 Efficiency of fine-grained search (FGS)
In this section, we provide more details on the local FGS presented in Algorithm 1 of the main text.
GPR practical efficiency: For the initial proposals given by selective search, the number of boxes per search region usually turns out to be 20 to 100, and lines 11 and 12 can be solved efficiently, in around 9 and 6 L-BFGS [37] iterations for StructObj, respectively.
GPU parallelism for CNN: One image can have multiple search regions (e.g., line 6), and the 20 object categories together yield even more regions. FGS proposes one extra box per iteration for every search region; these boxes are fed into the CNN together to exploit GPU parallelism. For VGGNet, we use a batch size of 8 for computing the CNN features within the FGS procedure.
Time overhead: For PASCAL VOC 2007, FGS induced only a small total overhead compared to the initial time cost, which mostly consists of CNN feature extraction from the bounding boxes proposed by selective search (SS). Most of the overhead is caused by CNN feature extraction from the newly proposed boxes (line 11); the rest is caused by GPR (lines 9, 10), NMS (line 5), and pruning (line 12). Each GP iteration (lines 2–16) accounts for a small fraction of the initial time cost, and a few GP iterations were sufficient for convergence. Figure A1 shows the trend of the accumulated time overhead introduced by FGS per iteration. The time overhead due to FGS may vary with different datasets (e.g., VOC 2012), but in general it remains small compared to the initial time cost.
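One FGS step can be sketched as a Gaussian-process regression plus an expected-improvement acquisition. The following is a minimal pure-Python illustration in one dimension; the RBF kernel, its length-scale, the noise level, and the 1-D box parametrization are all our simplifications of the actual multi-dimensional procedure:

```python
import math

def rbf(x1, x2, ell=0.2):
    # squared-exponential kernel; the length-scale is an arbitrary choice
    return math.exp(-((x1 - x2) ** 2) / (2.0 * ell * ell))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    # GP posterior mean and variance of the detection score at query point xq
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    k_star = [rbf(x, xq) for x in xs]
    mu = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)
    var = max(rbf(xq, xq) - sum(k_star[i] * v[i] for i in range(n)), 1e-12)
    return mu, var

def expected_improvement(mu, var, best):
    # EI acquisition for maximization of the detection score
    s = math.sqrt(var)
    z = (mu - best) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + s * pdf

def fgs_step(xs, ys, candidates):
    # propose the candidate box coordinate maximizing expected improvement
    best = max(ys)
    return max(candidates,
               key=lambda xq: expected_improvement(*gp_posterior(xs, ys, xq), best))
```

The proposed coordinate is then evaluated with the actual CNN detector, appended to (xs, ys), and the step repeats; only the cheap surrogate is queried densely, which is why the overhead per GP iteration stays small.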
Appendix A5 Stepwise performance of fine-grained search (FGS)
We evaluated the mAP at each GP iteration using R-CNN (VGG) + StructObj + FGS + BBoxReg. The mAPs from 0 to 8 GP iterations are reported in Table A1. The mAP increases rapidly in the first 4 iterations and becomes stable in the following iterations.
# GP iterations | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
mAP (%) | 66.6 | 67.5 | 67.8 | 68.2 | 68.3 | 68.6 | 68.4 | 68.6 | 68.5
Appendix A6 Test set mAP on PASCAL VOC 2007 using VGGNet with different region proposal methods
In Figure 2 of the paper, we report and compare the test set mAPs on PASCAL VOC 2007 with different region proposal algorithms (e.g., selective search (SS) [47] in different modes, Objectness [2]) using an oracle detector. In this section, we performed similar experiments using a real detector trained with the structured SVM objective on VGGNet features. The summary results are given in Tables A2 and A3.
Region proposal method | mAP at each IoU threshold
SS (2000 boxes per image) |
SS + Objectness (3000 boxes per image) |
SS extended (3500 boxes per image) |
SS quality (10000 boxes per image) |
SS + FGS (2150 boxes per image) |

Region proposal method | mAP at each IoU threshold
SS (2000 boxes per image) |
SS + Objectness (3000 boxes per image) |
SS extended (3500 boxes per image) |
SS quality (10000 boxes per image) |
SS + FGS (2150 boxes per image) |
For both cases (with and without bounding box regression), FGS showed improved performance over the other region proposal methods while using a smaller number of region proposals. In particular, the SS + FGS method (row 5 in Tables A2 and A3) even outperformed the SS “quality” mode [47], which requires substantially more computation than our proposed method to evaluate CNN-based detection scores for its bounding box proposals.
Although the current state-of-the-art CNN-based detectors outperform other object detection methods [15, 33, 5] by a large margin, there still remains a significant gap to the hypothetical oracle detector. This motivates further research on improving the quality of CNNs for better visual object recognition performance.
Appendix A7 Precisionrecall curves on PASCAL VOC 2007
In this section, we present the precision-recall curves for four different models. Specifically, we show results for VGGNet, VGGNet trained with the structured SVM objective (VGGNet + StructObj), VGGNet with FGS (VGGNet + FGS), and VGGNet with both (VGGNet + StructObj + FGS) in Figure A2. In general, the improvement from the structured SVM objective is more significant in the high-recall range than in the low-recall range, except for the “sheep” class. FGS usually improves the precision for most object categories.
Appendix A8 Localization accuracy on PASCAL VOC 2007
In this section, we analyze the localization behavior of the different methods. For each method, we find the predicted bounding boxes that most accurately localize the ground truth by picking, for each ground-truth bounding box, the detected box with the highest overlap (i.e., IoU). We compare the different methods by estimating the distribution of these overlaps for every category. Our findings are shown in Figure A3 (without BBoxReg) and Figure A4 (with BBoxReg, plus the baseline without BBoxReg). Methods with better localization ability should have higher frequency at high IoU (i.e., IoU between 0.6 and 0.9) and lower frequency at low IoU (i.e., IoU between 0 and 0.4); curve peaks leaning to the right signify better localization.
As shown in Figures A3 and A4, individually applying FGS or StructObj results in better localization than the baseline R-CNN, regardless of whether bounding box regression is used. However, the two components improve performance in different ways. In general, we found that FGS pushes the distribution peak to the right, while StructObj pulls the distribution peak higher and pushes down the frequencies in the low-IoU range. This indicates that FGS can propose more accurately localized boxes when the original set of bounding boxes is reasonably well localized, and that StructObj makes the detection scores more faithful to localization quality (i.e., it gives low detection scores to boxes with low overlap and high scores to boxes with high overlap with the ground truth). Combining FGS and StructObj capitalizes on the advantages of both and leads to the best localization accuracy.
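The overlap distributions above amount to binning the per-ground-truth best IoU values; a minimal binning helper (the number of bins is our choice) could look like this:

```python
def overlap_histogram(best_ious, nbins=10):
    """Histogram of per-ground-truth best IoU values over equal-width bins in [0, 1]."""
    hist = [0] * nbins
    for ov in best_ious:
        # IoU of exactly 1.0 falls in the last bin
        hist[min(int(ov * nbins), nbins - 1)] += 1
    return hist
```

A method with good localization concentrates mass in the upper bins (0.6–0.9) and leaves the lower bins (below 0.4) nearly empty.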
Appendix A9 Examples with the largest improvement on PASCAL VOC 2007 test set
In this section, we show the examples with the largest improvement in localization accuracy using our best proposed method (VGGNet + StructObj + FGS) over the baseline detection method (VGGNet) on the PASCAL VOC 2007 test set. For each example in Figure A5, we show the category of interest in the bottom-left corner of the image, and draw the detection of our best proposed method (yellow box) that is best matched with the particular ground truth (green box), together with the best-matched detection of the baseline (red box). The number at the top right of each detected bounding box denotes its IoU with the ground truth.
Appendix A10 Topranked false positives on PASCAL VOC 2007 test set
In this section, we show examples of the top-ranked false positive detections (selected among the false positive bounding boxes with the highest detection scores; green boxes) of our best proposed method (VGGNet + StructObj + FGS) on the PASCAL VOC 2007 test set (Figure A6). We categorize the false positives into four categories as in [22]:

loc: poor localization,
sim: confusion with similar objects,
oth: confusion with other objects,
bg: confusion with background or unlabeled objects.
The overlap (ov) measured by the IoU between the false positive detection and its best matching ground truth bounding box is provided. For “loc” examples, the closest bounding box annotated with the same object category is provided as a ground truth (in yellow box). For “sim” or “oth” examples, the closest bounding box annotated with any object category is provided as a ground truth (in red box).
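The four-way categorization can be sketched as follows. This follows the common diagnosis protocol of [22], but the specific 0.1 and 0.5 thresholds are our assumptions here:

```python
def fp_category(ov_same, ov_similar, ov_other, t_fg=0.1, t_correct=0.5):
    """Categorize a false positive by its best IoU overlaps with ground-truth boxes.

    ov_same / ov_similar / ov_other: best IoU with a ground truth of the same
    category, a similar category, or any other category, respectively.
    """
    if t_fg <= ov_same < t_correct:
        return "loc"   # right category, poor localization
    if ov_similar >= t_fg:
        return "sim"   # confusion with a similar category
    if ov_other >= t_fg:
        return "oth"   # confusion with another category
    return "bg"        # background or unlabeled object
```

Note that an overlap at or above `t_correct` with a same-category ground truth would make the detection a true positive, so it never reaches this function.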
Appendix A11 Random detection examples on PASCAL VOC 2007 test set
Finally, we show randomly selected detection examples of our best proposed method (VGGNet + StructObj + FGS) on the PASCAL VOC 2007 test set. In Figure A7, we use bounding boxes of different colors for different categories. The category label with the detection score is displayed at the top-left corner of each bounding box. Detections with low scores are omitted.
[Figure A5 panels: examples with the largest improvement, one panel per PASCAL VOC category (aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, diningtable, dog, horse, motorbike, person, pottedplant, sheep, sofa, train, tvmonitor).]
[Figure A6 panels: top-ranked false positives, one panel per PASCAL VOC category (aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, diningtable, dog, horse, motorbike, person, pottedplant, sheep, sofa, train, tvmonitor).]