1 Introduction
Detecting the presence and position of objects in an image is a fundamental task in computer vision. For example, tracking humans in video or performing scene understanding on a still image requires the ability to reason about the number and position of objects. While great progress has been made in recent years in terms of feature sets, the basic training procedure has remained the same. In this procedure, a set of positive and negative image windows is selected from training images and a binary classifier is trained on these windows. The classifier is then tested on images containing no targets of interest, and false alarm windows are identified and added into the training set. The classifier is retrained and, optionally, this process is iterated.
This approach does not make efficient use of the available training data since it trains on only a subset of image windows. Additionally, windows partially overlapping an object are a common source of false alarms, and this training procedure makes it difficult to incorporate them into the training set since they are neither fully a false alarm nor a true detection. Most importantly, the accuracy of the object detection system as a whole is not optimized. Instead, the accuracy of a binary classifier on the subsampled training set is used as a proxy.
In this work, we show how to address all of these issues. In particular, we will show how to design an optimizer that runs over all windows and optimizes the performance of an object detection system in terms of the number of missed detections and false alarms in the final system output. Moreover, our formulation leads to a convex optimization and we provide an algorithm which finds the globally optimal set of parameters. Finally, we test our method on three publicly available datasets and show that it substantially improves the accuracy of the learned detectors. Strikingly, we find that a single rigid HOG filter can outperform a state-of-the-art deformable part model if the HOG filter is learned via MMOD.
2 Related Work
In their seminal work, Dalal and Triggs introduced the Histogram of Oriented Gradients (HOG) feature for detecting pedestrians within a sliding window framework [3]. Subsequent object detection research has focused primarily on finding improved representations. Many recent approaches include features for part-based modeling, methods for combining local features, or dimensionality reduction [6, 20, 16, 4, 18]. All these methods employ some form of binary classifier trained on positive and negative image windows.
In contrast, Blaschko and Lampert's research into structured output regression is the most similar to our own [2]. As with our approach, they use a structural support vector machine formulation, which allows them to train on all window locations. However, their training procedure assumes an image contains either 0 or 1 objects. In the present work, we show how to treat object detection in the general setting where an image may contain any number of objects.
3 Problem Definition
In what follows, we will use $r$ to denote a rectangular area of an image. Additionally, let $\mathcal{R}$ denote the set of all rectangular areas scanned by our object detection system. To incorporate the common non-maximum suppression practice, we define a valid labeling of an image as a subset $y$ of $\mathcal{R}$ such that the elements of the labeling pairwise "do not overlap". We use the following popular definition of "does not overlap": rectangles $r_1$ and $r_2$ do not overlap if the ratio of their intersection area to total area covered is less than 0.5. That is,

$$\frac{\text{Area}(r_1 \cap r_2)}{\text{Area}(r_1 \cup r_2)} < 0.5. \qquad (1)$$
Finally, we use $\mathcal{Y}$ to denote the set of all valid labelings.
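The overlap test of Equation (1) is simple to implement. Below is a minimal Python sketch, assuming rectangles are represented as (left, top, right, bottom) tuples; the function and parameter names are our own rather than anything from the paper's implementation:

```python
def area(r):
    # r = (left, top, right, bottom), with right > left and bottom > top
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def intersection(r1, r2):
    # rectangle covering the common region (may be empty/degenerate)
    return (max(r1[0], r2[0]), max(r1[1], r2[1]),
            min(r1[2], r2[2]), min(r1[3], r2[3]))

def do_not_overlap(r1, r2):
    """Equation (1): rectangles do not overlap if the ratio of
    intersection area to total covered area is below 0.5."""
    inter = intersection(r1, r2)
    if inter[2] <= inter[0] or inter[3] <= inter[1]:
        return True  # disjoint rectangles trivially pass the test
    union = area(r1) + area(r2) - area(inter)
    return area(inter) / union < 0.5
```

Note that "total area covered" is the area of the union, so two identical rectangles have ratio 1 and overlap, while rectangles sharing a third of their union do not.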
Then, given an image $x$ and a window scoring function $f$, we can define the object detection procedure as

$$y^* = \operatorname*{argmax}_{y \in \mathcal{Y}} \sum_{r \in y} f(x, r). \qquad (2)$$
That is, find the set of sliding window positions which have the largest scores but simultaneously do not overlap. This is typically accomplished with the greedy peak sorting method shown in Algorithm 1. An ideal learning algorithm would find the window scoring function which jointly minimized the number of false alarms and missed detections produced when used in Algorithm 1.
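The greedy peak sorting strategy of Algorithm 1 can be sketched as follows. This is a hedged Python rendering, not the paper's pseudocode; the non-overlap predicate is passed in (e.g. the Equation (1) test) and the names are our own:

```python
def greedy_detect(windows, scores, threshold, do_not_overlap):
    """Greedy peak sorting: visit windows in descending score order and
    accept each one that clears the detection threshold and does not
    overlap any previously accepted window."""
    order = sorted(range(len(windows)), key=lambda i: scores[i], reverse=True)
    accepted = []
    for i in order:
        if scores[i] <= threshold:
            break  # all remaining windows score even lower
        if all(do_not_overlap(windows[i], windows[j]) for j in accepted):
            accepted.append(i)
    return [windows[i] for i in accepted]
```

Because each window is only checked against already accepted (higher scoring) windows, the method approximates the argmax in Equation (2) without enumerating all valid labelings.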
It should be noted that solving Equation (2) exactly is not computationally feasible. Thus, this algorithm does not always find the optimal solution to (2). An example which leads to suboptimal results is shown in Figure 1. However, as we will see, this suboptimal behavior does not lead to difficulties. Moreover, in the next section, we give an optimization algorithm capable of finding an appropriate window scoring function for use with Algorithm 1.
4 Max-Margin Object Detection
In this work, we consider only window scoring functions which are linear in their parameters. In particular, we use functions of the form
$$f(x, r) = \langle w, \phi(x, r) \rangle \qquad (3)$$
where $\phi$ extracts a feature vector from the sliding window location $r$ in image $x$, and $w$ is a parameter vector. If we denote the sum of window scores for a set of rectangles, $y$, as $F(x, y) = \sum_{r \in y} f(x, r)$, then Equation (2) becomes

$$y^* = \operatorname*{argmax}_{y \in \mathcal{Y}} F(x, y). \qquad (4)$$
We then seek a parameter vector $w$ which leads to the fewest possible detection mistakes. That is, given a randomly selected image and label pair $(x_i, y_i)$, we would like the score for the correct labeling $y_i$ of $x_i$ to be larger than the scores for all the incorrect labelings. Therefore,

$$F(x_i, y_i) > \max_{y \neq y_i} F(x_i, y) \qquad (5)$$

should be satisfied as often as possible.
4.1 The Objective Function for Max-Margin Object Detection
Our algorithm takes a set of images $\{x_1, \ldots, x_n\}$ and associated labels $\{y_1, \ldots, y_n\}$ and attempts to find a $w$ such that the detector makes the correct prediction on each training sample. We take a max-margin approach [10] and require that the label for each training sample is correctly predicted with a large margin. This leads to the following convex optimization problem:

$$\min_{w} \; \frac{1}{2} \|w\|^2 \qquad (6)$$
$$\text{s.t.} \quad F(x_i, y_i) \geq \max_{y \in \mathcal{Y}} \left[ F(x_i, y) + \Delta(y, y_i) \right], \quad \forall i$$
where $\Delta(y, y_i)$ denotes the loss for predicting a labeling of $y$ when the true labeling is $y_i$. In particular, we define the loss as

$$\Delta(y, y_i) = L_{\text{miss}} \cdot (\text{number of missed detections}) + L_{\text{fa}} \cdot (\text{number of false alarms}) \qquad (7)$$

where $L_{\text{miss}}$ and $L_{\text{fa}}$ control the relative importance of achieving high recall and high precision, respectively.
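Given a matching criterion between predicted and true rectangles, the loss of Equation (7) is straightforward to compute. A sketch follows; `hits` stands in for an Equation (1) style match test, and the argument names mirror the two trade-off constants above (the function itself is illustrative, not the paper's code):

```python
def detection_loss(pred, truth, hits, L_miss=1.0, L_fa=1.0):
    """Equation (7): L_miss per missed truth box plus L_fa per false alarm.
    hits(p, t) reports whether predicted rectangle p matches truth box t."""
    matched = set()
    false_alarms = 0
    for p in pred:
        # find the first unmatched truth box this prediction hits, if any
        hit = next((i for i, t in enumerate(truth)
                    if i not in matched and hits(p, t)), None)
        if hit is None:
            false_alarms += 1
        else:
            matched.add(hit)
    missed = len(truth) - len(matched)
    return L_miss * missed + L_fa * false_alarms
```

Setting $L_{\text{miss}} > L_{\text{fa}}$ penalizes missed detections more heavily, biasing the learned detector toward recall, and vice versa.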
Equation (6) is a hard-margin formulation of our learning problem. Since real world data is often noisy, not perfectly separable, or contains outliers, we extend this into the soft-margin setting. In doing so, we arrive at the defining optimization for Max-Margin Object Detection (MMOD):

$$\min_{w, \xi} \; \frac{1}{2} \|w\|^2 + \frac{C}{n} \sum_{i=1}^{n} \xi_i \qquad (8)$$
$$\text{s.t.} \quad F(x_i, y_i) \geq \max_{y \in \mathcal{Y}} \left[ F(x_i, y) + \Delta(y, y_i) \right] - \xi_i, \quad \xi_i \geq 0, \quad \forall i$$
In this setting, $C$ is analogous to the usual support vector machine parameter and controls the trade-off between fitting the training data and obtaining a large margin.
Insight into this formulation can be gained by noting that each $\xi_i$ is an upper bound on the loss incurred by training example $(x_i, y_i)$. This can be seen as follows (let $\hat{y}_i = \operatorname{argmax}_{y \in \mathcal{Y}} F(x_i, y)$ denote the output of the detector on $x_i$):

$$\xi_i \geq \max_{y \in \mathcal{Y}} \left[ F(x_i, y) + \Delta(y, y_i) \right] - F(x_i, y_i) \qquad (9)$$
$$\geq F(x_i, \hat{y}_i) + \Delta(\hat{y}_i, y_i) - F(x_i, y_i) \qquad (10)$$
$$\geq \Delta(\hat{y}_i, y_i). \qquad (11)$$

In the step from (9) to (10) we replace the max over $\mathcal{Y}$ with a particular element, $\hat{y}_i$; therefore, the inequality continues to hold. In going from (10) to (11) we note that $F(x_i, \hat{y}_i) - F(x_i, y_i) \geq 0$ since $\hat{y}_i$ is by definition the element of $\mathcal{Y}$ which maximizes $F(x_i, \cdot)$.
Therefore, the MMOD objective function defined by Equation (8) is a convex upper bound on the average loss per training image,

$$\frac{1}{n} \sum_{i=1}^{n} \Delta(\hat{y}_i, y_i). \qquad (12)$$

This means that, for example, if $\xi_i$ from Equation (8) is driven to zero then the detector is guaranteed to produce the correct output on the corresponding training example.
This type of max-margin approach has been used successfully in a number of other domains. An example is the Hidden Markov SVM [1], which gives state-of-the-art results on sequence labeling tasks. Other examples include multi-class SVMs and methods for learning probabilistic context free grammars [10].
4.2 Solving the MMOD Optimization Problem
We use the cutting plane method [9, 17] to solve the Max-Margin Object Detection optimization problem defined by Equation (8). Note that MMOD is equivalent to the following unconstrained problem

$$\min_{w} \; J(w) = \frac{1}{2} \|w\|^2 + C \cdot R(w) \qquad (13)$$

where $R(w)$ is

$$R(w) = \frac{1}{n} \sum_{i=1}^{n} \left[ \max_{y \in \mathcal{Y}} \left( F(x_i, y) + \Delta(y, y_i) \right) - F(x_i, y_i) \right]. \qquad (14)$$
Further, note that $R$ is a convex function of $w$ and is therefore lower bounded by any tangent plane. The cutting plane method exploits this to find the minimizer of $J$. It does this by building a progressively more accurate lower bounding approximation constructed from tangent planes. Each step of the algorithm finds a new $w$ minimizing this approximation. Then it obtains the tangent plane to $R$ at $w$, and incorporates this new plane into the lower bounding function, tightening the approximation. A sketch of the procedure is shown in Figure 2.
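The idea is easy to see in one dimension. The following toy sketch (not the MMOD solver; a coarse grid search stands in for the QP subproblem of Section 4.2.1, and all names are our own) minimizes $\frac{1}{2}w^2 + C \cdot R(w)$ by accumulating tangent planes to $R$:

```python
def cutting_plane_1d(R, subgrad, C=1.0, eps=1e-6, w0=0.0, max_iter=100):
    """Minimize 0.5*w**2 + C*R(w) for convex R via cutting planes."""
    planes = []  # list of (a, b) pairs with R(w) >= a*w + b for all w
    w = w0
    for _ in range(max_iter):
        a = subgrad(w)                 # tangent slope at current w
        b = R(w) - a * w               # tangent intercept
        planes.append((a, b))
        # minimize K(w) = 0.5*w^2 + C*max_i(a_i*w + b_i) by grid search
        candidates = [x / 1000.0 for x in range(-5000, 5001)]
        w = min(candidates,
                key=lambda x: 0.5 * x * x
                + C * max(ai * x + bi for ai, bi in planes))
        J = 0.5 * w * w + C * R(w)                              # true objective
        K = 0.5 * w * w + C * max(ai * w + bi for ai, bi in planes)  # lower bound
        if J - K < eps:                # gap test, as in Algorithm 2
            break
    return w

# Example: R(w) = |w - 1|; the minimizer of 0.5*w^2 + |w - 1| is w = 1
w_star = cutting_plane_1d(lambda w: abs(w - 1),
                          lambda w: 1.0 if w >= 1 else -1.0)
```

The termination test mirrors Algorithm 2: when the gap between the true objective $J$ and the piecewise lower bound $K$ falls below the tolerance, the current $w$ is provably near-optimal.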
Let $\partial R(w_0)$ denote a subgradient of $R$ at a point $w_0$. Then a tangent plane to $R$ at $w_0$ is given by

$$R(w) \geq \langle w, a \rangle + b \qquad (15)$$

where

$$a = \partial R(w_0) \qquad (16)$$
$$b = R(w_0) - \langle w_0, a \rangle. \qquad (17)$$
Given these considerations, the lower bounding approximation we use is

$$K(w) = \frac{1}{2} \|w\|^2 + C \cdot \max_{(a_i, b_i) \in P} \left[ \langle w, a_i \rangle + b_i \right] \qquad (18)$$

where $P$ is the set of lower bounding planes, i.e. the "cutting planes".
Pseudocode for this method is shown in Algorithm 2. It executes until the gap between the true MMOD objective function $J$ and the lower bound $K$ is less than $\varepsilon$. This guarantees convergence to the optimal $w^*$ to within $\varepsilon$. That is, upon termination of Algorithm 2 we will have

$$J(w_t) \leq J(w^*) + \varepsilon. \qquad (19)$$
4.2.1 Solving the Quadratic Programming Subproblem
A key step of Algorithm 2 is solving the argmin on step 7. This subproblem can be written as a quadratic program and solved efficiently using standard methods. Therefore, in this section we derive a simple quadratic program solver for this problem. We begin by writing step 7 as a quadratic program and obtain

$$\min_{w, \xi} \; \frac{1}{2} \|w\|^2 + C\xi \qquad (20)$$
$$\text{s.t.} \quad \xi \geq \langle w, a_i \rangle + b_i, \quad \forall (a_i, b_i) \in P.$$
The set of variables being optimized, $w$, will typically have many more dimensions than the number of constraints in the above problem. Therefore, it is more efficient to solve the dual problem. To do this, note that the Lagrangian is

$$L(w, \xi, \lambda) = \frac{1}{2} \|w\|^2 + C\xi - \sum_{i} \lambda_i \left( \xi - \langle w, a_i \rangle - b_i \right) \qquad (21)$$
and so the dual [5] of the quadratic program is

$$\max_{\lambda} \; \inf_{w, \xi} L(w, \xi, \lambda) \qquad (22)$$
$$\text{s.t.} \quad \lambda_i \geq 0, \; \forall i.$$
After a little algebra (the stationarity conditions give $w = -\sum_i \lambda_i a_i$ and $\sum_i \lambda_i = C$), the dual reduces to the following quadratic program,

$$\max_{\lambda} \; -\frac{1}{2} \lambda^T Q \lambda + \lambda^T b \qquad (23)$$
$$\text{s.t.} \quad \lambda_i \geq 0, \; \forall i, \quad \sum_i \lambda_i = C$$

where $\lambda$ and $b$ are column vectors of the variables $\lambda_i$ and $b_i$ respectively, and $Q_{ij} = \langle a_i, a_j \rangle$.
We use a simplified variant of Platt's sequential minimal optimization method to solve the dual quadratic program of Equation (23) [15]. Algorithm 3 contains the pseudocode. In each iteration, the pair of Lagrange multipliers ($\lambda_i$, $\lambda_j$) which most violates the KKT conditions is selected (lines 6-13). Then the selected pair is jointly optimized (lines 15-21). The solver terminates when the duality gap is less than a threshold.
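The pairwise update can be illustrated with a simplified SMO-style solver for Equation (23). This is a sketch in the spirit of Algorithm 3, not its exact pseudocode: the pair selection and termination test are simplified, and the names are our own assumptions:

```python
def smo_dual(Q, b, C, tol=1e-8, max_iter=1000):
    """Maximize -0.5*l'Ql + l'b subject to l >= 0, sum(l) = C by jointly
    optimizing one pair of multipliers at a time (sum stays fixed)."""
    n = len(b)
    lam = [C / n] * n  # feasible starting point on the simplex sum = C
    for _ in range(max_iter):
        # gradient of the dual objective: g_k = b_k - (Q lam)_k
        grad = [b[k] - sum(Q[k][m] * lam[m] for m in range(n))
                for k in range(n)]
        # most violating pair: largest gradient vs. smallest gradient
        # among multipliers that can still decrease (lam > 0)
        i = max(range(n), key=lambda k: grad[k])
        j = min((k for k in range(n) if lam[k] > 0), key=lambda k: grad[k])
        if grad[i] - grad[j] < tol:
            break  # KKT conditions approximately satisfied
        # optimal step moving delta from lam[j] to lam[i]
        curv = Q[i][i] + Q[j][j] - 2 * Q[i][j]
        delta = (grad[i] - grad[j]) / curv if curv > 0 else lam[j]
        delta = min(delta, lam[j])  # keep lam[j] - delta >= 0
        lam[i] += delta
        lam[j] -= delta
    return lam
```

Because each update shifts weight between exactly two multipliers, the equality constraint $\sum_i \lambda_i = C$ is preserved automatically, which is the key convenience of the SMO approach.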
Upon solving for the optimal $\lambda$, the $w$ needed by step 7 of Algorithm 2 is given by

$$w = -\sum_i \lambda_i a_i. \qquad (24)$$

The value of $\xi$ needed for the test for convergence can be conveniently computed as

$$\xi = \max_i \left( \langle w, a_i \rangle + b_i \right). \qquad (25)$$
Additionally, there are a number of nonessential but useful implementation tricks. In particular, the starting $\lambda$ should be initialized using the $\lambda$ from the previous iteration of the MMOD optimizer. Also, cutting planes typically become inactive after a small number of iterations and can be safely removed. A cutting plane is inactive if its associated Lagrange multiplier is 0. Our implementation removes a cutting plane if it has been inactive for 20 iterations.
4.2.2 Computing $R$ and its Subgradient
The final component of our algorithm is a method for computing $R$ and an element of its subgradient. Recall that $R$ and $F$ are

$$R(w) = \frac{1}{n} \sum_{i=1}^{n} \left[ \max_{y \in \mathcal{Y}} \left( F(x_i, y) + \Delta(y, y_i) \right) - F(x_i, y_i) \right] \qquad (26)$$
$$F(x, y) = \sum_{r \in y} \langle w, \phi(x, r) \rangle. \qquad (27)$$

Then an element of the subgradient of $R$ is

$$\partial R(w) = \frac{1}{n} \sum_{i=1}^{n} \left[ \sum_{r \in y_i^*} \phi(x_i, r) - \sum_{r \in y_i} \phi(x_i, r) \right] \qquad (28)$$

where

$$y_i^* = \operatorname*{argmax}_{y \in \mathcal{Y}} \left[ F(x_i, y) + \Delta(y, y_i) \right]. \qquad (29)$$
Our method for computing $y_i^*$ is shown in Algorithm 4. It is a modification of the normal object detection procedure from Algorithm 1 to solve Equation (29) rather than (2). Therefore, the task of Algorithm 4 is to find the set of rectangles which jointly maximizes the total detection score plus loss.
There are two cases to consider. First, if a rectangle does not hit any truth rectangles, then it contributes positively to the argmax in Equation (29) whenever its score plus the loss per false alarm ($L_{\text{fa}}$) is positive. Second, if a rectangle hits a truth rectangle then we reason as follows: if we reject the first rectangle which matches a truth rectangle then, since the rectangles are sorted in descending order of score, we will reject all others which match it as well. This outcome results in a single loss of $L_{\text{miss}}$. Alternatively, if we accept the first rectangle which matches a truth rectangle then we gain its detection score. Additionally, we may also obtain additional scores from subsequent duplicate detections, each of which contributes the value of its window scoring function plus $L_{\text{fa}}$. Therefore, Algorithm 4 computes the total score for the accept case, checks it against the value of the reject case, and selects the result with the largest value. In the pseudocode, these scores are accumulated in dedicated score variables.
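The accept/reject reasoning above can be sketched as follows. This is a simplified illustration, not Algorithm 4 itself: it decides each truth rectangle's fate at its first (highest scoring) match and omits the accumulation of duplicate-detection scores that the full algorithm performs; `hits` is an assumed name for an Equation (1) style match test:

```python
def loss_augmented_detect(windows, scores, truth, hits,
                          L_miss=1.0, L_fa=1.0):
    """Greedy sketch of the Equation (29) argmax: select the rectangles
    that jointly maximize total detection score plus loss."""
    order = sorted(range(len(windows)), key=lambda k: scores[k], reverse=True)
    selected = []
    claimed = set()  # truth boxes whose first match has been decided
    for k in order:
        t = next((i for i, tr in enumerate(truth)
                  if hits(windows[k], tr)), None)
        if t is None:
            # a pure false alarm helps the argmax when score + L_fa > 0
            if scores[k] + L_fa > 0:
                selected.append(windows[k])
        elif t not in claimed:
            claimed.add(t)
            # accepting the first match trades the miss loss L_miss
            # for the window's detection score
            if scores[k] > -L_miss:
                selected.append(windows[k])
    return selected
```

Even in this simplified form, the key property is visible: windows that would be ambiguous under binary classification (low-scoring false alarms, borderline matches) are handled by comparing their scores directly against the loss constants.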
This algorithm is greedy and thus may fail to find the optimal $y_i^*$ according to Equation (29). However, it is greedy in much the same way as the detection method of Algorithm 1. Moreover, since our goal from the outset is to find a set of parameters which makes Algorithm 1 perform well, we should use a training procedure which respects the properties of the algorithm being optimized. For example, if the correct output in the case of Figure 1 was to select the two boxes on the sides, then Algorithm 1 would make a mistake while an optimal method would not. Therefore, it is important for the learning procedure to account for this and learn that, in such situations, if Algorithm 1 is to produce the correct output, the side rectangles need a larger score than the middle rectangle. Ultimately, it is only necessary for Algorithm 4 to give a value of $R$ which upper bounds the average loss per training image. In our experiments, we always observed this to be the case.
5 Experimental Results
To test the effectiveness of MMOD, we evaluate it on the TU Darmstadt cows [14], INRIA pedestrians [3], and FDDB [8] datasets. When evaluating on the first two datasets, we use the same feature extraction ($\phi$) and parameter settings ($C$, $\varepsilon$, $\varepsilon_{qp}$, $L_{\text{miss}}$, and $L_{\text{fa}}$). The value of $\varepsilon$ means the optimization runs until the potential improvement in average loss per training example falls below $\varepsilon$. For the QP subproblem, we set $\varepsilon_{qp}$ to allow the accuracy with which we solve the subproblem to vary as the overall optimization progresses.

For feature extraction, we use the popular spatial pyramid bag-of-visual-words model [13]. In our implementation, each window is divided into a 6x6 grid. Within each grid location we extract a 2,048 bin histogram of visual words. The visual-word histograms are computed by extracting 36-dimensional HOG [3] descriptors from each pixel location, determining which histogram bin the feature is closest to, and adding 1 to that visual word's bin count. Next, the visual-word histograms are concatenated to form the feature vector for the sliding window. Finally, we add a constant term which serves as a threshold for detection. Therefore, $\phi$ produces 73,729 dimensional feature vectors.
The local HOG descriptors are 36 dimensional and are extracted from 10x10 grayscale pixel blocks of four 5x5 pixel cells. Each cell contains 9 unsigned orientation bins. Bilinear interpolation is used for assigning votes to orientation bins but not for spatial bins.
To determine which visual word a HOG descriptor corresponds to, many researchers compute its distance to an exemplar for each bin and assign the vector to the nearest bin. However, this is computationally expensive, so we use a fast approximate method to determine bin assignment. In particular, we use a random projection based locality sensitive hash [7]. This is accomplished using 11 random planes. A HOG vector is hashed by recording the bit pattern describing which side of each plane it falls on. This 11-bit number then indicates the visual word's bin.
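This hashing scheme is simple to sketch. In the version below the plane directions are drawn from a Gaussian, which is one common choice for random projection hashing rather than necessarily what [7] or our implementation uses; the function names are our own:

```python
import random

def make_hasher(dim=36, bits=11, seed=0):
    """Random projection locality sensitive hash: `bits` random hyperplanes
    map a `dim`-dimensional descriptor to one of 2**bits bins (2048 here)."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(bits)]

    def hash_descriptor(v):
        code = 0
        for p in planes:
            # record which side of the hyperplane the vector falls on
            dot = sum(pi * vi for pi, vi in zip(p, v))
            code = (code << 1) | (1 if dot >= 0 else 0)
        return code  # integer in [0, 2**bits)

    return hash_descriptor
```

Each hash costs only 11 dot products, versus 2,048 distance computations for exact nearest-exemplar assignment, which is the source of the speedup.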
Finally, the sliding window classification can be implemented efficiently using a set of integral images. We also scan the sliding window over every location in an image pyramid, where each layer is a fixed downsampling of the previous one. To decide if two detections overlap for the purposes of non-max suppression we use Equation (1). Similarly, we use Equation (1) to determine if a detection hits a truth box. All experiments were run on a single desktop workstation.
5.1 TU Darmstadt Cows
We performed 10-fold cross-validation on the TU Darmstadt cows [14] dataset, which contains 112 images, each containing a side view of a cow, and obtained perfect detection results with no false alarms. The best previous results on this dataset achieve an accuracy of 98.2% at equal error rate [2].
In this test the sliding window was 174 pixels wide and 90 tall. Training on the entire cows dataset finishes in 49 iterations and takes 70 seconds.
5.2 INRIA Pedestrians
We also tested MMOD on the INRIA pedestrian dataset and followed the testing methodology used by Dalal and Triggs [3]. This dataset has 2,416 cropped images of people for training as well as 912 negative images. For testing it has 1,132 people images and 300 negative images.
The negative testing images have an average of 199,834 pixels per image. We scan our detector over an image pyramid which downsamples at a rate of 4/5 and stop when the smallest pyramid layer contains 17,000 pixels. Therefore, MMOD scans approximately 930,000 windows per negative image on the testing data.
We use a sliding window 56 pixels wide and 120 tall. The entire optimization takes 116 minutes and runs for 220 iterations. Our results are compared to previous methods in Figure 3. The detection trade-off curve shows that our method achieves superior performance even though we use a basic bag-of-visual-words feature set while more recent work has invested heavily in improved feature representations. Therefore, we attribute the improvement to our training procedure.
5.3 FDDB
Finally, we evaluate our method on the Face Detection Data Set and Benchmark (FDDB) challenge. This challenging dataset contains images of human faces in multiple poses captured in indoor and outdoor settings. To test MMOD, we used it to learn a basic HOG sliding window classifier. Therefore, the feature extractor $\phi$ takes in a window and outputs a HOG vector describing the entire window, as was done in Dalal and Triggs's seminal paper [3]. To illustrate the learned model, the HOG filter resulting from the first FDDB fold is visualized in Figure 4.
We upsampled each image by a factor of two so that smaller faces could be detected. Since our HOG filter box is 80x80 pixels in size, this upsampling allows us to detect faces that are larger than about 40x40 pixels in the original image. Additionally, we mirrored the dataset, effectively doubling the number of training images. This leads to training set sizes of about 5,000 images per fold, and our optimizer requires approximately 25 minutes per fold.
To perform detection, this single HOG filter is scanned over the image at each level of an image pyramid, and any windows which pass a threshold test are output after non-max suppression is performed. A ROC curve comparing this learned HOG filter against other methods is created by sweeping this threshold and can be seen in Figure 5. To create the ROC curve we followed the FDDB evaluation protocol of performing 10-fold cross-validation and combining the results into a single ROC curve using the provided FDDB evaluation software. Example images with detection outputs are shown in Figure 6.
In Figure 5 we see that the HOG filter learned via MMOD substantially outperforms a HOG filter learned with the typical linear SVM "hard negative mining" approach [12] as well as the classic Viola-Jones method [19]. Moreover, our single HOG filter learned via MMOD gives slightly better accuracy than the complex deformable part model of Zhu and Ramanan [22].
6 Conclusion
We introduced a new method for learning to detect objects in images. In particular, our method leads to a convex optimization, and we provided an efficient algorithm for its solution. We tested our approach on three publicly available datasets, the INRIA person dataset, TU Darmstadt cows, and FDDB, using two feature representations. On all datasets, using MMOD to find the parameters of the detector led to substantial improvements.
Our results on FDDB are most striking, as we showed that a single rigid HOG filter can beat a state-of-the-art deformable part model when the HOG filter is learned via MMOD. We attribute our success to the learning method's ability to make full use of the data. In particular, on FDDB, our method can efficiently make use of all 300 million sliding window positions during training. Moreover, MMOD optimizes the overall accuracy of the entire detector, taking into account information which is typically ignored when training a detector. This includes windows which partially overlap target windows as well as the non-maximum suppression strategy used in the final detector.
Our method currently uses a linear window scoring function. Future research will focus on extending this method to use more complex scoring functions, possibly by using kernels. The work of Yu and Joachims is a good starting point [21]. Additionally, while our approach was introduced for 2D sliding window problems, it may also be useful for 1D sliding window detection applications, such as those appearing in the speech and natural language processing domains. Finally, to encourage future research, we have made a careful and thoroughly documented implementation of our method available as part of the open source dlib machine learning toolbox [11] (software available at http://dlib.net/ml.html#structural_object_detection_trainer).

References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In International Conference on Machine Learning (ICML), 2003.
[2] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In European Conference on Computer Vision (ECCV), 2008.
[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition (CVPR), pages 886-893, 2005.
[4] G. Duan, C. Huang, H. Ai, and S. Lao. Boosting associated pairing comparison features for pedestrian detection. In Ninth IEEE International Workshop on Visual Surveillance, 2009.
[5] R. Fletcher. Practical Methods of Optimization, Second Edition. John Wiley & Sons, 1987.
[6] C. Huang and R. Nevatia. High performance object detection by collaborative learning of joint ranking of granules features. In Computer Vision and Pattern Recognition (CVPR), 2010.
[7] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In 30th Ann. ACM Symp. on Theory of Computing, 1998.
[8] V. Jain and E. Learned-Miller. FDDB: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
[9] T. Joachims, T. Finley, and C. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[10] T. Joachims, T. Hofmann, Y. Yue, and C. J. Yu. Predicting structured objects with support vector machines. Communications of the ACM, Research Highlight, 52(11):97-104, November 2009.
[11] D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755-1758, 2009.
[12] M. Köstinger, P. Wohlhart, P. M. Roth, and H. Bischof. Robust face detection by simple means. In DAGM 2012 CVAW Workshop.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition (CVPR), 2006.
[14] D. R. Magee and R. D. Boyle. Detecting lameness using 're-sampling condensation' and 'multi-stream cyclic hidden Markov models'. Image and Vision Computing, 20, 2002.
[15] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods — Support Vector Learning, pages 185-208, Cambridge, MA, 1998. MIT Press.
[16] W. R. Schwartz, A. Kembhavi, D. Harwood, and L. S. Davis. Human detection using partial least squares analysis. In International Conference on Computer Vision (ICCV), 2009.
[17] C. H. Teo, S. Vishwanthan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311-365, March 2010.
[18] O. Tuzel, F. Porikli, and P. Meer. Human detection via classification on Riemannian manifolds. In Computer Vision and Pattern Recognition (CVPR), 2007.
[19] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition (CVPR), 2001.
[20] B. Wu and R. Nevatia. Segmentation of multiple, partially occluded objects by grouping, merging, assigning part detection responses. In Computer Vision and Pattern Recognition (CVPR), 2008.
[21] C. J. Yu and T. Joachims. Training structural SVMs with kernels using sampled cuts. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2008.
[22] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012.