Mid-level Elements for Object Detection

04/27/2015 ∙ by Aayush Bansal, et al. ∙ Carnegie Mellon University

Building on the success of recent discriminative mid-level elements, we propose a surprisingly simple approach for object detection which performs comparably to the current state-of-the-art approaches on the PASCAL VOC comp-3 detection challenge (no external data). Through extensive experiments and ablation analysis, we show how our approach effectively improves upon HOG-based pipelines by adding an intermediate mid-level representation for the task of object detection. This representation is easily interpretable and allows us to visualize what our object detector "sees". We also discuss the insights our approach shares with CNN-based methods, such as the benefit of sharing representations across categories.




1 Introduction

How do we represent and recognize objects such as the dog or the car shown in Figure 1? Until recent years, the most popular way to represent objects was using low-level features such as HOG [11] or SIFT [37]. These low-level features were then used to train classifiers such as SVMs or random forests. Recently, several approaches have proposed discriminative mid-level visual elements as an intermediate image representation between the low-level features and the high-level semantic classes. While these approaches have shown strong results for a variety of tasks such as indoor scene classification [14], 3D scene understanding [24], video understanding [28] and even visual prediction [54], relatively little effort has been devoted toward adapting them for object detection (with the notable exception of [18], which, while providing a first step towards object detection on PASCAL [20], leaves room for improvement quantitatively).

Figure 1: Left: Input image and a visualization of what our object detector sees. Right: The average images of the mid-level elements which are most useful for detecting objects in input images.
Figure 2: Feature Representation: Given an image (top left), the region proposals are first extracted (top center). The mid-level elements are trained offline (bottom left), and then each region proposal is represented by convolving mid-level elements over a HOG feature pyramid extracted from the region (bottom center). The responses are max-pooled across different scales in a spatial pyramid pattern to construct the final feature, which is then fed into a linear SVM classifier. Refer to Section 3.2 for details.

In this paper, we build upon a recently-proposed mid-level representation framework [14] and adapt it for the task of object detection. Even though our mid-level representation uses a HOG-based pipeline, it still performs comparably to convolutional neural networks (CNNs) [1, 25] on the comp3 detection challenge (no external data allowed). However, when compared to other HOG-based approaches, it does provide a substantial boost. We believe this boost is significant since it points out the importance of having a mid-level representation in a recognition pipeline, and may guide research in designing mid-level features and their application in object detection.

Why mid-level representation? Over the years there has been a lot of research in low-level and high-level visual representation. Low-level representations are susceptible to small variations in style and pose. On the other hand, directly learning high-level representations requires millions of labeled images of objects in all possible configurations, and it is difficult to encode large intra-class variation. Therefore, what we need is a mid-level representation in an object-detection pipeline: a representation that is more adaptable to the appearance distributions in the real world than the low-level features, but does not require the semantic grounding of the high-level entities.

There have been efforts to include mid-level representations such as poselets [6] and object-parts [18], but none of these approaches has given a significant boost over latent SVM-based approaches [21]. On the other hand, CNN-based approaches for object detection [25] have outperformed classic object detection approaches [21]. We believe one of the reasons for the better performance of CNN-based approaches is the existence of a discriminatively-trained mid-level representation, which in this case consists of multiple layers of convolution. But these CNNs still require millions of images to train, and therefore, in the case of low data availability (the comp3 challenge in PASCAL [20]), they remain only comparable to existing approaches. In this paper, we explore the alternative mid-level representation proposed in [14], and study how including this mid-level representation can increase the performance of a classic HOG-based pipeline.

Contributions: Our paper is among the first to demonstrate how discriminative mid-level elements [15, 45] can be used effectively for the task of object detection. The goal of this paper is to analyze how mid-level representations can boost the performance of a HOG-based pipeline. Specifically, we show that “simple” HOG features have more power when a “shallow” mid-level visual element representation is used in the HOG pipeline. Using our approach, we achieve performance comparable to the state-of-the-art on the PASCAL [20] comp3 object detection challenge. But more importantly, we hope this paper will rekindle the discussion on mid-level representations and inspire more researchers to consider mid-level elements as an important component of an object detection pipeline.

2 Related Work

Over the past decade, object detection has been one of the most extensively studied problems in computer vision. One of the early advancements in statistical object detection came in 2005, when Dalal and Triggs [11] introduced the histogram of oriented gradients (HOG) descriptor to represent object templates and coupled it with an SVM. Consequently, much subsequent work focused on exploiting the HOG+SVM strategy, in conjunction with exhaustive sliding-window search. The most successful have been deformable part-based models (DPM) [21], which extended these HOG-based templates by adding part templates and allowing deformation between them. The emergence of DPM, and improvements in algorithms to train it, led to a brisk increase in performance on the PASCAL VOC object detection challenge. Later, numerous works focused on improving the parts themselves, from using strongly-supervised parts [6, 5, 19, 57] to using weak 3D supervision [44, 47, 43, 40].

Figure 3: Most informative elements by category (positive: two elements represent the category; negative: one element representing what it is not). Each row (in both columns) depicts the top three training-set detections for three of the most informative elements for one category. We measure an element’s informativeness by the weight of the respective dimension in the category-level SVM’s vector. Left two sets depict the two most positive-weighted elements; the right shows the most negative-weighted elements. The positive-weighted elements were all mined from the positive category (demonstrating the utility of discriminative training), and the negative ones often depict patterns easily confused with the category, or from objects that commonly appear in that category’s context.

An alternate direction for improvement in performance was to incorporate bottom-up segmentation priors for training DPMs [8, 23]. One such approach, SegDPM [23], augmented HOG features with simple segmentation-based features and respectably outperformed other DPM-style approaches. However, these approaches have a fundamental limitation – given the complexity of exhaustive search, they can only utilize simple features.

As a consequence, a major shift in the detection paradigm was to bypass exhaustive search entirely by generating category-independent candidates for object location and scale [3, 17, 50, 7, 4, 9, 58, 30]. Commonly-used methods propose around 1,000 regions using fast segmentation algorithms, which aim to discard windows that are unlikely to contain objects [56, 3, 50]. These object proposal methods have enabled the use of more sophisticated features [56, 23, 10, 51] and learning algorithms [53]. For example, [10, 51] use improved Fisher vectors over SIFT [37] and color descriptors; [52] uses color descriptors, feature encodings and spatial poolings; and [53] uses multiple kernel learning on top of a variety of appearance features with spatial pooling.

Concurrently, researchers have studied another important class of features derived from CNNs [33], especially the formulation proposed by [31]. Recently, CNNs have consistently shown state-of-the-art performance on image classification, motivating a number of researchers to apply CNNs to the task of object detection. One strategy has been to train similar networks directly for object detection; for example, [49] poses object localization as a regression problem, while [25, 1] train a CNN to directly classify region proposals. Methods using CNN-based features in the region proposal paradigm are currently the state-of-the-art (e.g., R-CNN [25]) on the PASCAL VOC detection challenge by a comfortable margin.

Mid-level visual elements: Mid-level visual elements, or mid-level discriminative patches, are similar to parts, but are generally not constrained to a particular location in an object template [15, 45]. While the locations of these discriminative patches within the dataset are generally not known beforehand, they can still be identified by measuring (1) how representative they are of a particular category, and (2) how informative they are with respect to identifying whatever categories they represent. Numerous works have shown strong performance on a wide variety of tasks, including scene classification [45, 14, 29, 35, 48, 55], visual data mining [15], video understanding [28], video-based prediction [54], 3D geometry [24], and even unsupervised object discovery [45]. Particularly relevant is the work of [18] applying mid-level elements to object detection; though the results were promising, they were well below the canonical HOG-based approaches [21]. The paradigm of using mid-level elements is similar to object bank [34], with the key difference being that visual elements often capture visual concepts of a smaller granularity, which makes them more shareable across categories, and more robust to large changes in object appearance.

In this paper, we propose a representation using HOG-based mid-level elements in the region proposal paradigm and achieve results comparable to the state-of-the-art on PASCAL VOC comp3 challenge (no external data).

3 Object Detection Pipeline

Figure 4: Pooling Scheme: (1) 5-region pooling (1×1 and 2×2 grids); (2) 7-region pooling (1×1, 1×3, and 3×1 grids).

Our object detection pipeline is similar to the recent work of Girshick et al. [25]. While their approach is built around CNN features, ours uses HOG-based mid-level visual elements. Our detection pipeline has three basic modules: (1) Region proposals: class-independent object hypotheses obtained via exhaustively searching over multiple segmentations of a given image; (2) Mid-level visual elements: a set of detectors, each of which corresponds to some discriminative part of a category, and whose responses within a given region proposal are aggregated into a representative feature for that proposal; (3) Class-specific classifiers: a class-specific classifier is used to classify whether a region proposal belongs to a particular class or not. Post-processing is then applied to the output of these classifiers to remove overlapping detections and to improve localization.
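The three modules above can be sketched as a minimal skeleton. All the callables here (the proposal generator, feature extractor, per-class classifiers, and post-processor) are hypothetical placeholders for illustration, not the paper's actual implementation:

```python
def detect(image, propose_regions, compute_feature, classifiers, postprocess):
    """Sketch of the three-module pipeline: region proposals, mid-level
    feature extraction, and class-specific scoring, followed by
    post-processing. All callables are hypothetical placeholders."""
    proposals = propose_regions(image)                          # module 1
    feats = [compute_feature(image, box) for box in proposals]  # module 2
    detections = []
    for cls, clf in classifiers.items():                        # module 3
        for box, f in zip(proposals, feats):
            detections.append((cls, box, clf(f)))
    return postprocess(detections)  # e.g., NMS + bounding-box regression
```

The point of the sketch is only that the modules are decoupled: any proposal method or classifier can be swapped in without touching the rest.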

3.1 Region proposals

Much recent work in computer vision has been devoted to proposing, within a given image, a set of regions that might correspond to objects. The idea is that these regions should provide high recall while minimizing the number of regions that need to be considered. This reduces the computational complexity during the detection stage, and biases the algorithm toward ‘object-like’ regions. Objectness [3], category-independent object proposals [17], randomized Prim [38], selective search [50], constrained parametric min-cuts [7], multi-scale combinatorial grouping [4], binarized normed gradients [9], edge boxes [58], and geodesic proposals [30] all provide different trade-offs of speed, recall, and the total number of object proposals returned. Our approach is agnostic to the kind of region proposals used. We extract about 2,000 region proposals per image using selective search [50]. This allows us to make a fair comparison with SS-SPM [50] and with R-CNN [25].

3.2 Mid-level visual element representation

Given a region, the next major challenge is to build a representation of its contents that can easily be classified as one of the object categories, or as background. Many hand-tuned low-level representations exist (e.g. HOG [11] and SIFT [37]), but these have limited invariance to the sort of deformations seen in objects. More complex representations like bag of visual words [46] and, more recently, improved Fisher vectors [41, 10], improve the invariance to deformation by ignoring the spatial position of each visual word within the region. However, the basic elements of these representations (e.g., SIFT [37]) have limited spatial extent and therefore capture relatively simple concepts. Furthermore, these features are generally not tuned to be discriminative with respect to the object categories of interest. On the other hand, DPMs [21] have large parts which are trained discriminatively, but are less flexible in other respects; for instance, it is more difficult to share parts across different views of a given object category.

Representations based on mid-level discriminative patches [45, 14, 15, 18, 28, 29, 35, 24, 48, 55, 54] have recently shown strong performance for many vision tasks, especially scene classification. The idea is to find patches which are frequent, i.e., they will occur many times in the category of interest; discriminative, i.e., easily recognizable; and informative, in that they occur in only one of the categories. Detectors for these patches are commonly implemented using medium-sized HOG templates, and are therefore similar to the “parts” of DPM. However, the training generally uses weaker supervision (e.g., image-level labels), and no spatial layout is assumed.

Mining Mid-level Elements: To discover mid-level elements, we use the formulation of [14], which is a discriminative extension of mean shift. It formalizes the idea of “frequent yet informative” by finding regions of patch feature space that satisfy two properties: (1) the region is populated by a reasonable number of patches; and (2) the ratio of positive to negative patches within the region is maximized. Essentially, this corresponds to finding local maxima of an estimate of the density ratio between positives and negatives.
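A toy illustration of this “frequent yet informative” criterion, assuming simple Euclidean patch features. The real method [14] uses an adaptive-bandwidth mean-shift formulation; this ball-counting sketch (with hypothetical names and parameters) only conveys the density-ratio idea:

```python
import numpy as np

def density_ratio_score(center, pos_feats, neg_feats, radius=1.0, min_count=5):
    """Toy stand-in for the discriminative mode-seeking criterion: score a
    region of patch-feature space (a ball around `center`) by the ratio of
    positive to negative patches it contains, provided it is populated by
    at least `min_count` positives (the "frequent" requirement)."""
    pos_in = np.sum(np.linalg.norm(pos_feats - center, axis=1) <= radius)
    neg_in = np.sum(np.linalg.norm(neg_feats - center, axis=1) <= radius)
    if pos_in < min_count:
        return 0.0  # not frequent enough to be a useful element
    return pos_in / (neg_in + 1.0)  # +1 smooths the empty-negative case
```

Local maxima of such a score over candidate centers would correspond to discriminative elements; the real formulation optimizes an analogous ratio with adaptive bandwidths rather than a fixed radius.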

We use this approach to mine a set of mid-level elements for each category. These elements are mined using the ground-truth training-set boxes (dilated by 25% of their size) as positives, and images not containing the object as negatives. To further improve localization and reduce confusion arising from sharing between similar categories, we also mine 50 elements per category such that they have an overlap (IoU) greater than 0.8 with the ground-truth boxes (see Table 2).
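The overlap criterion and the box dilation above can be made concrete with a standard IoU computation; the `dilate` helper name is our own illustrative choice:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def dilate(box, frac=0.25):
    """Dilate a box by a fraction of its width/height, mimicking the 25%
    dilation applied to ground-truth boxes when mining positives."""
    w, h = box[2] - box[0], box[3] - box[1]
    dw, dh = frac * w / 2, frac * h / 2
    return (box[0] - dw, box[1] - dh, box[2] + dw, box[3] + dh)
```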

Figure 5: Examples of detections in the PASCAL VOC 2007 test set (in each case, we show only the top detection in the image). Images outlined in yellow denote false positives.

Feature Representation: We now use these mid-level elements to generate a representation for region proposals. To construct the feature vector for a region proposal, a HOG pyramid is extracted for the region, and a sliding-window operation is performed within the pyramid using these mid-level elements (regardless of category). We then max-pool the responses of each element across different scales using a 2-level spatial pyramid (1×1 and 2×2 grids) [32, 26], as shown in Figure 2. These 5 pooling regions, multiplied by the number of elements per category and the number of categories, give the dimensionality of the final feature vector. We also experimented with another pooling scheme, where we pool in 7 regions (1×1, 1×3, and 3×1 grids), as shown in Figure 4.
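A minimal sketch of the pooling step, assuming the per-element response maps for a proposal (already max-pooled over scales) have been computed; the function name and the `(num_elements, H, W)` layout are our own assumptions:

```python
import numpy as np

def spatial_pyramid_max_pool(responses):
    """Max-pool per-element response maps over a 2-level spatial pyramid
    (whole region plus a 2x2 grid), giving 5 pooled values per element.
    `responses` has shape (num_elements, H, W): each element's detection
    scores within the region proposal."""
    n, h, w = responses.shape
    pooled = [responses.reshape(n, -1).max(axis=1)]  # level 1: whole region
    rows = [(0, h // 2), (h // 2, h)]                # level 2: 2x2 grid
    cols = [(0, w // 2), (w // 2, w)]
    for r0, r1 in rows:
        for c0, c1 in cols:
            cell = responses[:, r0:r1, c0:c1]
            pooled.append(cell.reshape(n, -1).max(axis=1))
    return np.concatenate(pooled)  # length 5 * num_elements
```

The 7-region variant would simply swap the 2×2 grid for 1×3 and 3×1 grids, appending 6 cell maxima instead of 4.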

Implementation Details: For speedy feature extraction, we construct a single feature pyramid for the entire image and then extract responses for each region proposal from this whole-image pyramid (cf. [26]). For the patch-level features, we follow [14], where each fixed-size pixel window is represented by a HOG descriptor concatenated with a color patch (the down-sampled ab channels of the corresponding Lab image). For the HOG feature pyramid, we use 4 scales per octave during training (for efficiency) and 8 scales per octave during testing (for accuracy) (cf. [21]). We up-sample images by a factor of 2 when evaluating proposals smaller than a minimum pixel size.
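The scale factors of such a pyramid follow directly from the scales-per-octave setting. A small helper (hypothetical, with an assumed number of octaves) illustrates the train-time vs. test-time trade-off:

```python
def pyramid_scales(scales_per_octave, num_octaves=3):
    """Scale factors for a feature pyramid: each octave halves the image
    size, with `scales_per_octave` levels per octave (4 at train time for
    speed, 8 at test time for accuracy, per the text)."""
    step = 2.0 ** (-1.0 / scales_per_octave)
    return [step ** i for i in range(scales_per_octave * num_octaves)]
```

With 4 scales per octave the image reaches half size after 4 levels; with 8 the sampling is twice as fine, doubling the sliding-window work.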

Figure 6: Reconstructing using mid-level representations. First, we compute average images for our elements. Then given the detections, we can visualize why the detections occurred. We display these element-level averages positioned over the element detections which contributed the most to the detection score (as measured by the detection score times the feature weight in the category SVM), weighted according to the contribution. This highlights which parts contributed the most in detecting a particular object.

3.3 Object detection using mid-level elements

Given a feature representation for a region proposal, we use class-specific classifiers [25, 50] to predict whether a proposal belongs to a particular category or not. We post-process the output of these classifiers to remove overlapping detections via non-max suppression (NMS) [21, 25] and improve localization via bounding box regression [25].

Class-specific classifiers: We train a simple 1-vs-all linear SVM [25, 50] for each category. During training, we use all ground-truth bounding boxes (and their flipped versions) as positives for their respective classifiers, and any window whose IoU with a ground-truth box for a given category falls below a threshold as a negative for that category (all other windows are discarded). We found that only one iteration of hard negative mining was sufficient for convergence.
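One round of hard negative mining can be sketched as follows, assuming an already-trained linear model `(w, b)`; the margin convention is the usual SVM one, and the function name is our own:

```python
import numpy as np

def mine_hard_negatives(w, b, neg_feats, margin=-1.0):
    """Select 'hard' negatives: windows the current 1-vs-all linear SVM
    (w, b) fails to score below the negative margin. These would be added
    to the training cache before retraining; the text found one such
    round sufficient for convergence."""
    scores = neg_feats @ w + b
    return np.flatnonzero(scores > margin)
```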

NMS: NMS [21, 25] works by iteratively selecting the highest-scoring proposal from the pool of candidates from an image, and then removing all candidates with IoU greater than a given threshold (0.3 in our case) with the selected proposal [25].
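This greedy procedure is standard and can be sketched directly:

```python
import numpy as np

def nms(boxes, scores, thresh=0.3):
    """Greedy non-max suppression: keep the highest-scoring box, drop
    every remaining box with IoU > `thresh` against it, and repeat.
    `boxes` is an (N, 4) array of (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]  # candidates, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the selected box against all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        overlap = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][overlap <= thresh]
    return keep
```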

Bounding Box Regression: The bounding box regression (BBReg) model [25] is a class-specific regressor which aims to improve localization. It learns a transformation that maps a proposal’s features to the associated ground-truth bounding box. The transformation is assumed to be a linear function of the proposal’s features, where the output is a 4-vector defining (1) x- and (2) y-translation of the bounding box’s upper-left corner (scaled by the input box’s width and height, respectively), as well as (3) x- and (4) y-scaling factors for the width and height of the bounding box, in log space. Our implementation follows [25], except that we replace CNN features with our mid-level features.
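Under the upper-left-corner parameterization described above, the target encoding and its inverse can be sketched as follows (note that R-CNN's published variant parameterizes the box center instead; we follow the text here, and the function names are ours):

```python
import numpy as np

def bbox_transform(proposal, gt):
    """Encode a ground-truth box relative to a proposal as the 4-vector
    described in the text: x/y translation of the upper-left corner
    scaled by the proposal's width/height, plus log-space w/h scaling.
    Boxes are (x1, y1, x2, y2)."""
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    tx = (gt[0] - proposal[0]) / pw
    ty = (gt[1] - proposal[1]) / ph
    tw, th = np.log(gw / pw), np.log(gh / ph)
    return np.array([tx, ty, tw, th])

def bbox_apply(proposal, t):
    """Invert the encoding: apply a predicted 4-vector to a proposal."""
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    x1 = proposal[0] + t[0] * pw
    y1 = proposal[1] + t[1] * ph
    w, h = pw * np.exp(t[2]), ph * np.exp(t[3])
    return np.array([x1, y1, x1 + w, y1 + h])
```

At training time the regressor is fit to predict these 4-vectors from proposal features; at test time `bbox_apply` turns each prediction back into a refined box.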

4 Experiments

We now discuss our experimental results on the standard PASCAL VOC-2007 and VOC-2010 [20] datasets for object detection. We also perform an extensive ablative analysis to understand how various design choices impact performance.

4.1 Performance on VOC-2007

First, we compare our approach with several baselines on the VOC-2007 comp3 challenge (no extra data) [20]. Compared to DPM [21] (33.7% mAP), which also uses HOG, our algorithm achieves an mAP of 41.9%, a boost of approximately 8% (absolute). This is a significant improvement, and clearly demonstrates the utility of a mid-level layer for object detection. Interestingly, our algorithm’s performance is comparable to the state-of-the-art, even though we do not use any segmentation (as used by segDPM [23]) or context [10]. We did, however, use bounding-box regression from the R-CNN [25] framework, which we found provides a 3% boost in mAP. We also found that the 7-region pooling works slightly better than the 5-region pooling (see Section 3.2), especially when fewer elements are used (e.g., with the top-100 elements, 5-region pooling gives 33% mAP while 7-region pooling gives 33.7% mAP).

Qualitative Analysis: Mid-level elements provide a number of convenient ways to understand the behavior of our algorithm. First, we examine which mid-level elements are useful for the task of detection. In Figure 3, we show the elements that received the highest (or lowest) weights in the final class-specific SVM: two elements with the highest positive weights, and one with the largest negative weight. Note, for example, that the most discriminative aspect of bicycles (as chosen by our mid-level representation) is their wheels, yet the SVM has a strong negative weight for bus wheels; this likely prevents bus wheels from being confused with bicycle wheels. Furthermore, dining tables receive a strong negative weight for people, probably because a person bounding box containing too much of a table is likely to result in poor localization.

VOC 2007 test aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP
DPM-v5 [21] 33.2 60.3 10.2 16.1 27.3 54.3 58.2 23.0 20.0 24.1 26.7 12.7 58.1 48.2 43.2 12.0 21.1 36.1 46.0 43.5 33.7
SS SPM [50] 43.5 46.5 10.4 12.0 9.3 49.4 53.7 39.4 12.5 36.9 42.2 26.4 47.0 52.4 23.5 12.1 29.9 36.3 42.2 48.8 33.7
RM[16] 37.7 61.4 12.7 17.6 29.9 55.1 56.3 29.5 24.6 28.2 30.7 21.2 59.5 51.5 40.3 14.3 23.9 41.6 49.2 46.0 36.6
[10] (w/o context) 52.6 52.6 19.2 25.4 18.7 47.3 56.9 42.1 16.6 41.4 41.9 27.7 47.9 51.5 29.9 20.0 41.1 36.4 48.6 53.2 38.5
Regionlets [56] 54.2 52.0 20.3 24.0 20.1 55.5 68.7 42.6 19.2 44.2 49.1 26.6 57.0 54.5 43.4 16.4 36.6 37.7 59.4 52.3 41.7
RCNN-Scratch [1] 49.9 60.6 24.7 23.7 20.3 52.5 64.8 32.9 20.4 43.5 34.2 29.9 49.0 60.4 47.5 28.0 42.3 28.6 51.2 50.0 40.7
5-Region Pooling 50.7 58.3 16.6 26.2 24.2 56.4 57.2 44.9 18.8 39.9 43.5 27.3 44.5 49.4 26.8 19.4 35.3 41.4 47.8 47.4 38.8
5-Region + BBReg 52.0 60.9 17.1 26.4 25.7 59.3 60.9 44.9 20.6 42.7 46.6 30.4 57.1 49.7 32.5 19.9 38.0 42.3 53.0 50.3 41.5
7-Region Pooling 49.2 58.3 16.4 25.6 22.5 55.2 57.6 47.0 19.3 39.9 44.8 28.2 44.5 50.6 31.1 21.1 35.6 35.8 47.0 48.8 38.9
7-Region + BBReg 51.7 61.5 17.9 27.0 24.0 57.5 60.2 47.9 21.1 42.2 48.9 29.8 58.3 51.9 34.3 22.2 36.8 40.2 54.3 50.9 41.9
Table 1: Results on VOC-2007: We use top- elements for our approach (last 4 rows).
VOC 2007 test aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP
top- 43.8 52.8 11.8 18.9 21.8 52.0 54.5 38.7 14.9 32.4 39.0 23.1 34.8 39.4 24.7 16.2 28.6 31.8 39.0 42.5 33.0
top- 45.8 54.3 13.1 21.7 22.1 53.2 55.9 39.6 15.6 33.3 41.7 23.0 37.9 42.4 26.7 16.5 34.1 33.9 42.0 46.4 35.0
top- 47.2 55.3 12.5 23.4 21.8 55.4 56.1 39.2 16.3 37.4 44.0 25.0 40.0 42.3 24.6 17.3 28.8 35.0 44.8 45.4 35.6
top- 48.1 56.1 13.2 23.1 22.4 54.8 57.1 40.3 17.4 39.9 42.2 24.6 41.3 45.9 25.8 17.4 31.0 36.6 46.4 45.8 36.5
top- 48.5 56.7 13.2 22.8 24.0 54.9 56.6 41.2 18.6 36.3 42.5 27.3 39.0 44.6 25.8 19.0 32.1 38.9 45.1 45.8 36.6
top- 49.7 56.5 14.1 24.3 24.1 56.1 56.7 42.4 18.6 39.5 43.2 28.7 42.1 48.9 26.7 19.8 33.4 39.7 47.6 46.8 37.9
top- 49.6 57.2 16.1 25.4 23.9 55.6 56.6 42.9 18.7 37.9 45.7 27.9 42.7 49.5 27.0 18.6 35.8 37.1 47.5 47.3 38.2
top- 50.7 58.3 16.6 26.2 24.2 56.4 57.2 44.9 18.8 39.9 43.5 27.3 44.5 49.4 26.8 19.4 35.3 41.4 47.8 47.4 38.8
top- [45] 38.2 52.0 5.8 15.9 17.5 46.1 53.2 36.3 12.5 30.3 35.3 19.2 32.4 40.9 22.6 13.7 19.4 26.7 36.7 35.9 29.5
Table 2: Ablation Analysis: We use 5-region pooling (1×1 and 2×2 grids) to analyze detection performance as a function of the number of mid-level elements. We also analyze the influence of adding elements with IoU > 0.8 with ground-truth boxes (Section 3.2).

We also show some representative detections in Figure 5. A predominant failure mode of our algorithm seems to be localization error, specifically where multiple instances of the same category (e.g., two birds, multiple people, or bottles) occur together. We attribute this to the relatively aggressive pooling scheme in our feature vector. One way to combat this kind of error would be to include more spatial information in the feature vector; however, we leave this investigation for future work.

Finally, we highlight the information captured by our representation for a few detected objects in Figure 6. For each element, we first average its top- detections from the training set to get a representative image. Then, for each detected object, we take the high-scoring mid-level elements and transfer their representative images to the locations where these elements were detected. We then take the weighted mean of these transfers to get the final visualization (Figure 6). Note, for example, how representative wheel elements are for vehicles, and face elements for cats and dogs (in sync with the observation of [39]).

VOC 2010 test aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP
DPM-v5 (w/o context) [21] 45.6 49.0 11.0 11.6 27.2 50.5 43.1 23.6 17.2 23.2 10.7 20.5 42.5 44.5 41.3 8.7 29 18.7 40.0 34.5 29.6
DPM-v5 [21] 49.2 53.8 13.1 15.3 35.5 53.4 49.7 27.0 17.2 28.8 14.7 17.8 46.4 51.2 47.7 10.8 34.2 20.7 43.8 38.3 33.4
SS SPM [50] 56.2 42.4 15.3 12.6 21.8 49.3 36.8 46.1 12.9 32.1 30.0 36.5 43.5 52.9 32.9 15.3 41.1 31.8 47.0 44.8 35.1
BCP [18] 44.3 35.2 9.7 10.1 15.1 44.6 32.0 35.3 4.4 17.5 15.0 27.6 36.2 42.1 30.0 5.0 13.7 18.8 34.4 28.6 25.0
Poselet [6] 33.2 51.9 8.5 8.2 34.8 39.0 48.8 22.2 - 20.6 - 18.5 48.2 44.1 48.5 9.1 28.0 13.0 22.5 33.0
RM[16] 49.8 50.6 15.1 15.5 28.5 51.1 42.2 30.5 17.3 28.3 12.4 26.0 45.6 51.8 41.4 12.6 30.4 26.1 44.0 37.6 32.8
[10] (w/o context) 61.3 46.4 21.1 21.0 18.1 49.3 45.0 46.9 12.8 29.2 26.1 38.9 40.4 53.1 31.9 13.3 39.9 33.4 43.0 45.3 35.8
SegDPM [23] 56.4 48.0 24.3 21.8 31.3 51.3 47.3 48.2 16.1 29.4 19.0 37.5 44.1 51.5 44.4 12.6 32.0 28.8 48.9 39.1 36.6
SegDPM+rescore [23] 58.7 51.4 25.3 24.1 33.8 52.5 49.2 48.8 11.7 30.4 21.6 37.7 46.0 53.1 46.0 13.1 35.7 29.4 52.5 41.8 38.1
Regionlets [56] 65.0 48.9 25.9 24.6 24.5 56.1 54.5 51.2 17.0 28.9 30.2 35.8 40.2 55.7 43.5 14.3 43.9 32.6 54.0 45.9 39.6
top-500 (5-Region) 55.1 50.8 16.7 18.3 22.6 50.4 44.9 48.3 10.3 27.7 25.6 35.8 43.3 49.9 27.6 14.3 34.2 31.4 43.8 41.7 34.6
top-500 (5-Region + BBReg) 60.8 52.4 17.7 18.9 25.2 51.6 47.6 49.1 11.5 32.1 27.7 36.9 46.2 53.6 30.9 16.5 36.2 31.2 51.4 43.3 37.1
Table 3: Results on VOC-2010: We use top-500 elements and 5-region pooling for this experiment (last 2 rows).

4.2 Ablative Analysis and Detection Diagnosis

We now perform ablative analysis to understand how different components influence the performance of our system. First, we investigate the effect of increasing the number of mid-level elements. For this experiment, we use the 5-region pooling scheme (Section 3.2). As can be seen from Table 2, the performance of our system increases consistently with the number of mid-level elements.

We also compared the performance of our approach when we use the mid-level elements generated by [45]. Our results indicate that the elements obtained using discriminative mode-seeking [14] are better suited for object detection.

Finally, we use the diagnostic framework from [27] to better understand the failure modes of our system (the full diagnostic report is available on the authors’ website). The key takeaway is that for the person category, localization error is quite significant; this is likely due to our detections encompassing multiple instances of the object (see Figure 5).

4.3 Performance on PASCAL VOC-2010

We now compare the performance of our approach on the VOC-2010 comp3 challenge (no extra data) [20] with several standard baselines, including the state-of-the-art (see Table 3). In this experiment, we used the top-500 elements per category and performed 5-region pooling for feature representation. Our approach achieves 37.1% mAP, and outperforms the standard HOG-based DPM [21] (without context) by more than 5% (absolute). (DPM [21] with BB-Reg and without context achieves 30.8% mAP as reported in [18], and DPM-v5 [21] with BB-Reg and context achieves 33.4% as reported on the authors’ website.) We also compared our approach to the Boosted Collection of Parts (BCP) [18] and to Poselets [6], which are based on similar ideas of using mid-level elements. Compared to [18], our approach yields a significant boost of 12% (absolute). The mAP over the 18 categories evaluated by Poselets [6] (chair and table were not available) is 29.6%, whereas our mAP on those categories is 38.9%. Note that our approach does not use any contextual re-scoring as done in SegDPM [23], but still achieves comparable results. Our approach is also comparable to Regionlets [56], which uses a combination of HOG, LBP and covariance features.

5 Discussion

Our work, even though focused on HOG-based mid-level elements, shares some insights with current CNN-based methods. [1] showed how learning from large amounts of data is one of the strengths of deep networks: when the convolutional network is pre-trained on ImageNet data (i.e., 1M images) [13, 31], the performance on PASCAL is significantly higher than when the same network is trained on PASCAL images only (54.2% vs. 40.7% mAP on VOC-2007). But it is interesting that the deep network trained only on PASCAL data still outperforms the canonical DPM [21] (33.7% mAP) by a reasonable margin (7% absolute). These multi-layer CNNs share data across categories to learn features. The simple mid-level representation we build and investigate in this paper also enables sharing between categories (which was conspicuously missing in most HOG-based pipelines) and allows for encoding loose spatial constraints. We believe these are the main reasons we are able to bridge the performance gap between CNN and HOG pipelines (even though our representation uses the same features as DPM).

A concurrent work [36] presented an approach to discover similar mid-level elements using CNN features, and achieved state-of-the-art performance on the task of scene classification. We believe that our work could also utilize these CNN-feature-based mid-level elements for object detection, and this would be an interesting direction for future work. Further, we hope that our work will inspire future research on combining mid-level elements [45, 14] with deep architectures (such as learning a hierarchy of mid-level representations).

The current mid-level discovery approaches [45, 14, 36, 18, 29] are not easily scalable to millions of images: the main bottleneck is dense sliding-window mining (detection in a HOG feature pyramid for [45, 14, 18, 29], and dense deep-feature extraction for [36]). We are optimistic that methods developed to scale dense sliding-window object detection [42, 2, 12, 22] will help scale up current mid-level approaches in the near future.

6 Conclusion

We have presented a surprisingly simple, yet effective, approach for object detection which builds upon the recent success of discriminative mid-level elements. This simple representation performs comparably to the state-of-the-art on the PASCAL VOC comp3 detection challenge. We also demonstrate that this representation is easily interpretable, in the sense that we can understand what the final classifier has learned, and visualize what the representation “sees” when it detects or mis-detects an object. We hope this will inspire further research on mid-level representations.

Acknowledgements: This work was partially supported by ONR MURI N000141010934. AS, CD and AG were partially supported by Microsoft Research PhD Fellowship, Google PhD Fellowship and Bosch Young Faculty Fellowship. The authors would like to thank Yahoo! for the cluster donation and Amazon for AWS grant.


  • [1] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV, 2014.
  • [2] E. Ahmed, G. Shakhnarovich, and S. Maji. Knowing a good HOG filter when you see it: Efficient selection of filters for detection. In ECCV, 2014.
  • [3] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. TPAMI, 2012.
  • [4] P. Arbeláez, J. Pont-Tuset, J. T. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014.
  • [5] H. Azizpour and I. Laptev. Object detection using strongly-supervised deformable part models. In ECCV, 2012.
  • [6] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3d human pose annotations. In ICCV, 2009.
  • [7] J. Carreira and C. Sminchisescu. Constrained parametric min-cuts for automatic object segmentation. In CVPR, 2010.
  • [8] X. Chen, A. Shrivastava, and A. Gupta. Enriching Visual Knowledge Bases via Object Discovery and Segmentation. In CVPR, 2014.
  • [9] M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. H. S. Torr. BING: Binarized normed gradients for objectness estimation at 300fps. In CVPR, 2014.
  • [10] R. G. Cinbis, J. Verbeek, and C. Schmid. Segmentation driven object detection with Fisher vectors. In ICCV, 2013.
  • [11] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
  • [12] T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In CVPR, 2013.
  • [13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
  • [14] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In NIPS, 2013.
  • [15] C. Doersch, S. Singh, A. Gupta, J. Sivic, and A. A. Efros. What makes Paris look like Paris? ACM TOG, 2012.
  • [16] A. Eigenstetter, M. Takami, and B. Ommer. Randomized max-margin compositions for visual recognition. In CVPR, June 2014.
  • [17] I. Endres and D. Hoiem. Category independent object proposals. In ECCV, 2010.
  • [18] I. Endres, K. Shih, J. Jiaa, and D. Hoiem. Learning collections of part models for object recognition. In CVPR, 2013.
  • [19] I. Endres, V. Srikumar, M.-W. Chang, and D. Hoiem. Learning shared body-plans. In CVPR, 2012.
  • [20] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. IJCV, 2010.
  • [21] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9), 2010.
  • [22] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part models. In CVPR, 2010.
  • [23] S. Fidler, R. Mottaghi, A. Yuille, and R. Urtasun. Bottom-up segmentation for top-down detection. In CVPR, 2013.
  • [24] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3D primitives for single image understanding. In ICCV, 2013.
  • [25] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
  • [26] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
  • [27] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, 2012.
  • [28] A. Jain, A. Gupta, M. Rodriguez, and L. Davis. Representing videos using mid-level discriminative patches. In CVPR, 2013.
  • [29] M. Juneja, A. Vedaldi, C. V. Jawahar, and A. Zisserman. Blocks that shout: Distinctive parts for scene classification. In CVPR, 2013.
  • [30] P. Krähenbühl and V. Koltun. Geodesic object proposals. In ECCV, 2014.
  • [31] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
  • [32] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
  • [33] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
  • [34] L.-J. Li, H. Su, L. Fei-Fei, and E. P. Xing. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In NIPS, 2010.
  • [35] Q. Li, J. Wu, and Z. Tu. Harvesting mid-level visual concepts from large-scale internet images. In CVPR, 2013.
  • [36] Y. Li, L. Liu, C. Shen, and A. van den Hengel. Mid-level deep pattern mining. CoRR, abs/1411.6382, 2014.
  • [37] D. G. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
  • [38] S. Manén, M. Guillaumin, and L. Van Gool. Prime Object Proposals with Randomized Prim’s Algorithm. In ICCV, 2013.
  • [39] O. M. Parkhi, A. Vedaldi, C. V. Jawahar, and A. Zisserman. The truth about cats and dogs. In ICCV, 2011.
  • [40] B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching 3D geometry to deformable part models. In CVPR, 2012.
  • [41] F. Perronnin, Y. Liu, J. Sánchez, and H. Poirier. Large-scale image retrieval with compressed Fisher vectors. In CVPR, 2010.
  • [42] M. A. Sadeghi and D. A. Forsyth. 30hz object detection with DPM V5. In ECCV, 2014.
  • [43] S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In ICCV, 2007.
  • [44] A. Shrivastava and A. Gupta. Building parts-based object detectors via 3d geometry. In ICCV, 2013.
  • [45] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012.
  • [46] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, 2003.
  • [47] M. Stark, M. Goesele, and B. Schiele. Back to the future: Learning shape models from 3D CAD data. In BMVC, 2010.
  • [48] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation. In ICCV, 2013.
  • [49] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
  • [50] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013.
  • [51] K. E. van de Sande, C. G. Snoek, and A. W. Smeulders. Fisher and VLAD with FLAIR. In CVPR, 2014.
  • [52] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek. Evaluating color descriptors for object and scene recognition. TPAMI, 2010.
  • [53] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV, 2009.
  • [54] J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014.
  • [55] X. Wang, B. Wang, X. Bai, W. Liu, and Z. Tu. Max-margin multiple-instance dictionary learning. In ICML, 2013.
  • [56] X. Wang, M. Yang, S. Zhu, and Y. Lin. Regionlets for generic object detection. In ICCV, 2013.
  • [57] Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, 2011.
  • [58] C. L. Zitnick and P. Dollar. Edge boxes: Locating object proposals from edges. In ECCV, 2014.