LACBoost and FisherBoost: Optimally Building Cascade Classifiers

05/22/2010 · Chunhua Shen et al.

Object detection is one of the key tasks in computer vision. The cascade framework of Viola and Jones has become the de facto standard. A classifier in each node of the cascade is required to achieve an extremely high detection rate, instead of a low overall classification error. Although a few reported methods address this requirement in the context of object detection, there is no principled feature selection method that explicitly takes this asymmetric node learning objective into account. We provide such a boosting algorithm in this work. It is inspired by the linear asymmetric classifier (LAC) of Wu et al. in that our boosting algorithm optimizes a similar cost function. The new totally-corrective boosting algorithm is implemented by the column generation technique in convex optimization. Experimental results on face detection suggest that our proposed boosting algorithms can improve the state-of-the-art methods in detection performance.




1 Introduction

Real-time detection of various categories of objects in images is one of the key tasks in computer vision. This topic has been extensively studied in the past few years due to its important applications in surveillance, intelligent video analysis, etc. Viola and Jones proffered the first real-time face detector [2, 3]. To date, it is still considered state-of-the-art, and their framework is the basis of much incremental work since. Object detection is a highly asymmetric classification problem: an exhaustive scanning-window search is used to locate the target in an image, and only a few of the millions of scanned patches are true target objects. Cascade classifiers have been proposed for efficient detection, taking this asymmetric structure into consideration. Under the assumption that each node of the cascade classifier makes independent classification errors, the detection rate and false positive rate of the entire cascade are $D = \prod_{t=1}^{N} d_t$ and $F = \prod_{t=1}^{N} f_t$, respectively, where $d_t$ and $f_t$ are the detection and false positive rates of the $t$-th node. As pointed out in [2, 1], these two equations suggest a node learning objective: each node should have an extremely high detection rate $d_t$ and only a moderate false positive rate $f_t$. With such values of $d_t$ and $f_t$, a cascade of $N$ nodes achieves a very high overall detection rate $D$ together with a very low overall false positive rate $F$, which is usually the design goal.
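Under the independence assumption, the overall cascade rates are simple products of the per-node rates; a minimal sketch (the per-node rates below are illustrative values, not the ones used in the paper):

```python
# Overall cascade rates under the node-independence assumption:
# detection rate D = d^N, false positive rate F = f^N for N identical nodes.
def cascade_rates(d, f, n_nodes):
    return d ** n_nodes, f ** n_nodes

# Illustrative per-node rates: very high detection, moderate false positives.
D, F = cascade_rates(0.995, 0.5, 20)
```

This is why a node can afford a moderate false positive rate: twenty such nodes still drive the overall false positive rate below one in a million while keeping the overall detection rate high.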

A drawback of standard boosting, such as AdaBoost, is that it does not take advantage of the cascade structure. AdaBoost only minimizes the overall classification error and does not minimize the number of false negatives. In this sense, the features selected are not optimal for the purpose of rejecting negative examples. At the feature selection and classifier training level, Viola and Jones leveraged the asymmetry property, to some extent, by replacing AdaBoost with AsymBoost [3]. AsymBoost incurs more loss for misclassifying a positive example by simply modifying AdaBoost's exponential loss. Better detection rates were observed over standard AdaBoost. Nevertheless, AsymBoost addresses the node learning goal indirectly and may still not be the optimal solution. Wu et al. explicitly studied the node learning goal and proposed to use a linear asymmetric classifier (LAC) or Fisher linear discriminant analysis (LDA) to adjust the linear coefficients of the selected weak classifiers [1, 4]. Their experiments indicated that with this post-processing technique the node learning objective can be better met, which translates into improved detection rates. In Viola and Jones' framework, boosting is used both to select features and to train a strong classifier. Wu et al.'s work separates these two tasks: they still use AdaBoost or AsymBoost to select features, and in a second step they build a strong classifier using LAC or LDA. Because of these two steps, in Wu et al.'s work [1, 4] the node learning objective is only considered at the second step; at the first step—feature selection—the node learning objective is not explicitly considered. We conjecture that further improvement may be gained if the node learning objective is explicitly taken into account at both steps. We design new boosting algorithms to implement this idea and verify this conjecture. Our major contributions are as follows.

  1. We develop new boosting-like algorithms via directly minimizing the objective function of the linear asymmetric classifier; we term them LACBoost (and FisherBoost, from Fisher LDA). Both can be used to select features that are optimal for achieving the node learning goal when training a cascade classifier. To our knowledge, this is the first attempt to design such a feature selection method.

  2. LACBoost and FisherBoost share similarities with LPBoost [5] in the sense that both use column generation—a technique originally proposed for large-scale linear programming (LP). Typically, the Lagrange dual problem is solved at each iteration of column generation. We instead solve the primal quadratic programming (QP) problem, which has a special structure that allows entropic gradient (EG) descent to solve it very efficiently. Compared with general interior-point based QP solvers, EG is much faster. Considering that one needs to solve QP problems a few thousand times to train a complete cascade detector, the efficiency improvement is enormous. The time needed to train LACBoost (or FisherBoost) is comparable to that of an AdaBoost based cascade detector, because in both cases the majority of the time is spent on weak classifier training and bootstrapping.

  3. We apply LACBoost and FisherBoost to face detection and observe better performance than the state-of-the-art methods [1, 4]. The results confirm our conjecture and show the effectiveness of LACBoost and FisherBoost. LACBoost can be immediately applied to other asymmetric classification problems.

  4. We also analyze the conditions under which LAC is valid, and show that the multi-exit cascade might be more suitable than the standard Viola-Jones cascade for applying the LAC learning of [1, 4] (and our LACBoost).

Besides these, the LACBoost/FisherBoost algorithm differs from traditional boosting algorithms in that it does not minimize a loss function. This opens new possibilities for designing boosting algorithms for special purposes. We have also extended column generation to nonlinear optimization problems. Next we review the related work that is closest to ours.

Related work There is a large body of previous work on object detection [6, 7]; of particular relevance to ours is boosting-based object detection originating from Viola and Jones' framework. Three important components make Viola and Jones' framework tremendously successful: (1) the cascade classifier, which efficiently filters out most negative patches in early nodes and also helps the final classifier achieve a very high detection rate; (2) AdaBoost, which selects informative features and at the same time trains a strong classifier; (3) integral images, which make the computation of Haar features extremely fast. Most later work improves one or more of these three components. For the cascade classifier, a few different approaches have been proposed, such as the soft cascade [8], the dynamic cascade [9], and the multi-exit cascade [10]. We use the multi-exit cascade in this work. The multi-exit cascade improves classification performance by using all the selected weak classifiers for each node: the $k$-th strong classifier (node) uses the weak classifiers in this node as well as those in the previous $k-1$ nodes. We show that LAC post-processing can enhance the multi-exit cascade. More importantly, we show that the multi-exit cascade better meets LAC's requirement that the data be Gaussian distributed.
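The multi-exit idea described above can be sketched as follows; the function name, exit indices, and thresholds are hypothetical, and a real implementation would use the trained weak-classifier scores of each node:

```python
# Sketch of a multi-exit cascade decision for one scanning window.
# At exit k, the accumulated score of ALL weak classifiers selected so far
# (this node's and all previous nodes') is compared against a threshold.
def multi_exit_predict(scores, exits, thresholds):
    """scores: per-weak-classifier responses for one window;
    exits: index just past the last weak classifier of each node;
    thresholds: per-exit rejection thresholds."""
    total, start = 0.0, 0
    for end, b in zip(exits, thresholds):
        total += sum(scores[start:end])  # accumulate, never discard
        start = end
        if total < b:
            return -1                    # rejected at this exit
    return +1                            # survived every exit
```

Unlike the standard cascade, the score carried into node $k$ includes the evidence accumulated by all earlier nodes.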

The second research topic is the learning algorithm for constructing a classifier. Wu et al. use fast forward feature selection to accelerate the training procedure [7]. They have also proposed LAC to learn a better strong classifier [1]. Pham and Cham recently proposed online asymmetric boosting, with considerable improvement in training time [6]. By exploiting feature statistics, they have also designed a fast method to train weak classifiers [11]. Li et al. advocated FloatBoost, which discards some redundant weak classifiers during AdaBoost's greedy selection procedure [12]. Liu and Shum proposed KLBoost to select features and train a strong classifier [13]. Other variants of boosting have also been applied to detection.

Notation The following notation is used. A matrix is denoted by a bold upper-case letter ($\mathbf{A}$); a column vector is denoted by a bold lower-case letter ($\mathbf{a}$). The $i$-th row of $\mathbf{A}$ is denoted by $\mathbf{A}_{i:}$ and the $j$-th column by $\mathbf{A}_{:j}$. The identity matrix is $\mathbf{I}$, and its size should be clear from the context. $\mathbf{1}$ and $\mathbf{0}$ are column vectors of $1$'s and $0$'s, respectively. We use $\succcurlyeq$ and $\preccurlyeq$ to denote component-wise inequalities.

Let $\{(\mathbf{x}_i, y_i)\}_{i=1,\dots,m}$ be the set of training data, where $\mathbf{x}_i \in \mathcal{X}$ and $y_i \in \{-1, +1\}$, $\forall i$. The training set consists of $m_1$ positive training points and $m_2$ negative ones; $m_1 + m_2 = m$. Let $h(\cdot) \in \mathcal{H}$ be a weak classifier that projects an input vector $\mathbf{x}$ into $\{-1, +1\}$. Here we only consider discrete classifier outputs. We assume that the set $\mathcal{H}$ is finite and that we have $n$ possible weak classifiers. Let the matrix $\mathbf{H} \in \mathbb{R}^{m \times n}$ be such that its $(i, j)$ entry, $H_{ij} = h_j(\mathbf{x}_i)$, is the label predicted by weak classifier $h_j(\cdot)$ on the training datum $\mathbf{x}_i$. We define a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ such that its $(i, j)$ entry is $A_{ij} = y_i h_j(\mathbf{x}_i)$.
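As a concrete illustration of this notation, the following sketch builds the weak-classifier output matrix and its label-weighted counterpart (here called `H` and `A` as assumed symbol names; the decision stumps and data are made up):

```python
import numpy as np

def stump(threshold):
    # A decision stump on a 1-D feature with outputs in {-1, +1}.
    return lambda x: 1 if x > threshold else -1

X = np.array([0.2, 0.8, 0.5, 0.9])       # toy 1-D inputs
y = np.array([-1, 1, -1, 1])             # labels
weak = [stump(0.3), stump(0.6)]          # a tiny pool of weak classifiers

# H[i, j] = h_j(x_i): raw weak-classifier predictions.
H = np.array([[h(x) for h in weak] for x in X])
# A[i, j] = y_i * h_j(x_i): +1 where h_j is correct on x_i, -1 where wrong.
A = y[:, None] * H
```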

2 Linear Asymmetric Classification

Before we propose our LACBoost and FisherBoost, we briefly review the concept of LAC. Wu et al. [4] proposed linear asymmetric classification (LAC) as a post-processing step for training nodes in the cascade framework. LAC is guaranteed to reach an optimal solution under the assumption of Gaussian data distributions.

Suppose that we have a linear classifier $f(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^\top \mathbf{x} - b)$. We want to find a pair $\{\mathbf{w}, b\}$ with a very high accuracy on the positive data $\mathbf{x}_1$ and a moderate accuracy on the negative data $\mathbf{x}_2$, which can be expressed as the following problem:

$$\max_{\mathbf{w} \neq \mathbf{0},\, b}\ \Pr_{\mathbf{x}_1 \sim (\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)}\{\mathbf{w}^\top \mathbf{x}_1 \geq b\}, \quad \text{s.t.}\ \Pr_{\mathbf{x}_2 \sim (\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)}\{\mathbf{w}^\top \mathbf{x}_2 \leq b\} = \lambda, \tag{1}$$

where $\mathbf{x} \sim (\boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes a symmetric distribution with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$. If we prescribe $\lambda$ to $0.5$ and assume that, for any $\mathbf{w}$, $\mathbf{w}^\top \mathbf{x}_1$ is Gaussian and $\mathbf{w}^\top \mathbf{x}_2$ is symmetric, then (1) can be approximated by

$$\max_{\mathbf{w} \neq \mathbf{0}}\ \frac{\mathbf{w}^\top (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)}{\sqrt{\mathbf{w}^\top \boldsymbol{\Sigma}_1 \mathbf{w}}}. \tag{2}$$
(2) is similar to LDA's optimization problem

$$\max_{\mathbf{w} \neq \mathbf{0}}\ \frac{\mathbf{w}^\top (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)}{\sqrt{\mathbf{w}^\top (\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2)\, \mathbf{w}}}. \tag{3}$$
(2) can be solved by eigen-decomposition, and a closed-form solution can be derived:

$$\mathbf{w}^\star = \boldsymbol{\Sigma}_1^{-1} (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2), \qquad b^\star = {\mathbf{w}^\star}^\top \boldsymbol{\mu}_2. \tag{4}$$
On the other hand, each node in a cascaded boosting classifier has the following form:

$$f(\mathbf{x}) = \mathrm{sign}\bigl(\mathbf{w}^\top \mathbf{h}(\mathbf{x}) - b\bigr). \tag{5}$$

We override the symbol $\mathbf{h}(\mathbf{x})$ here to denote the output vector of all weak classifiers on the datum $\mathbf{x}$: we can cast each node as a linear classifier over the feature space constructed by the binary outputs of all weak classifiers. For each node in the cascade classifier, we wish to make the detection rate as high as possible while keeping the false positive rate at a moderate level. That is to say, problem (1) expresses the node learning goal. Therefore, we can use boosting algorithms (e.g., AdaBoost) as feature selection methods, and then use LAC to learn a linear classifier over those binary features chosen by boosting. The advantage is that LAC considers the asymmetric node learning objective explicitly.
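A minimal sketch of the LAC step on synthetic data, assuming the closed-form solution $\mathbf{w}^\star = \boldsymbol{\Sigma}_1^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$ of Wu et al. [1, 4] with the threshold placed at the projected negative mean (an assumption made for this toy example); in a detector, the rows would be binary weak-classifier output vectors rather than raw Gaussian features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D "feature" data: positives shifted away from negatives.
pos = rng.normal(loc=[2.0, 1.0], scale=1.0, size=(500, 2))
neg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))

mu1, mu2 = pos.mean(axis=0), neg.mean(axis=0)
sigma1 = np.cov(pos, rowvar=False)       # only the positive covariance is used

w = np.linalg.solve(sigma1, mu1 - mu2)   # LAC direction: Sigma1^{-1}(mu1 - mu2)
b = w @ mu2                              # threshold at the projected negative mean

detection_rate = float(((pos @ w) > b).mean())
false_positive_rate = float(((neg @ w) > b).mean())
```

Thresholding at the projected negative mean corresponds to the moderate (roughly one-half) false positive rate that the node learning objective tolerates, while the detection rate on the positives stays very high.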

However, there is a precondition for LAC's validity: for any $\mathbf{w}$, $\mathbf{w}^\top \mathbf{x}_1$ is Gaussian and $\mathbf{w}^\top \mathbf{x}_2$ is symmetric. In the case of boosting classifiers, $\mathbf{w}^\top \mathbf{x}_1$ and $\mathbf{w}^\top \mathbf{x}_2$ can be expressed as the margins of the positive and negative data. Empirically, Wu et al. [4] verified that $\mathbf{w}^\top \mathbf{x}_1$ is approximately Gaussian for a cascade face detector. We discuss this issue in more detail in the experiment section.

3 Constructing Boosting Algorithms from LDA and LAC

In kernel methods, the original data are nonlinearly mapped to a feature space by a mapping function $\Phi(\cdot)$ that is usually not explicitly available; one works instead through the inner products $\Phi(\mathbf{x}_i)^\top \Phi(\mathbf{x}_j)$. In boosting [14], the mapping function can be seen as explicitly known: $\Phi(\mathbf{x}) = [h_1(\mathbf{x}), \dots, h_n(\mathbf{x})]^\top$. Let us consider the Fisher LDA case first, because the solution to LDA generalizes to LAC straightforwardly, given the similarity between (2) and (3).

Fisher LDA maximizes the between-class variance and minimizes the within-class variance. In the binary-class case, we can equivalently rewrite (3) into

$$\max_{\mathbf{w}}\ \frac{(\mu_1 - \mu_2)^2}{\mathbf{w}^\top \mathbf{C}_w \mathbf{w}} = \frac{\mathbf{w}^\top \mathbf{C}_b \mathbf{w}}{\mathbf{w}^\top \mathbf{C}_w \mathbf{w}}, \tag{6}$$

where $\mathbf{C}_b$ and $\mathbf{C}_w$ are the between-class and within-class scatter matrices, and $\mu_1$ and $\mu_2$ are the projected centers of the two classes. The above problem can be equivalently reformulated as

$$\min_{\mathbf{w}}\ \mathbf{w}^\top \mathbf{C}_w \mathbf{w} - \theta (\mu_1 - \mu_2) \tag{7}$$

for some certain constant $\theta$ and under the assumption that $\mu_1 - \mu_2 \geq 0$.¹ (¹ In our face detection experiment, we found that this assumption could always be satisfied.) Now in the feature space, our data are $\mathbf{h}(\mathbf{x}_i)$, $i = 1, \dots, m$. We have

$$\mu_1 - \mu_2 = \frac{1}{m_1} \sum_{y_i = +1} \mathbf{A}_{i:}\, \mathbf{w} + \frac{1}{m_2} \sum_{y_i = -1} \mathbf{A}_{i:}\, \mathbf{w} = \mathbf{e}^\top \mathbf{A} \mathbf{w}, \tag{8}$$

where $\mathbf{A}_{i:}$ is the $i$-th row of $\mathbf{A}$.


Here the $i$-th entry of $\mathbf{e}$ is defined as $e_i = 1/m_1$ if $y_i = +1$, and $e_i = 1/m_2$ otherwise. We also define the margin vector $\boldsymbol{\rho} = \mathbf{A}\mathbf{w}$. For ease of exposition, we order the training data according to their labels. So the vector

$$\boldsymbol{\rho} = [\boldsymbol{\rho}_1^\top,\ \boldsymbol{\rho}_2^\top]^\top, \tag{9}$$

and the first $m_1$ components of $\boldsymbol{\rho}$ correspond to the positive training data and the remaining $m_2$ ones correspond to the negative data. So we have $\mu_1 - \mu_2 = \mathbf{e}^\top \boldsymbol{\rho}$, with $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ the covariance matrices of the two classes. By noticing that the within-class term can be written as $\mathbf{w}^\top \mathbf{C}_w \mathbf{w} = \boldsymbol{\rho}^\top \mathbf{Q} \boldsymbol{\rho}$, we can easily rewrite the original problem into:

$$\min_{\mathbf{w},\, \boldsymbol{\rho}}\ \tfrac{1}{2}\, \boldsymbol{\rho}^\top \mathbf{Q} \boldsymbol{\rho} - \theta\, \mathbf{e}^\top \boldsymbol{\rho}, \quad \text{s.t.}\ \mathbf{w} \succcurlyeq \mathbf{0},\ \mathbf{1}^\top \mathbf{w} = 1,\ \rho_i = \mathbf{A}_{i:}\, \mathbf{w},\ i = 1, \dots, m. \tag{11}$$

Here $\mathbf{Q}$ is a block-diagonal matrix,

$$\mathbf{Q} = \begin{bmatrix} \mathbf{Q}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{Q}_2 \end{bmatrix},$$

where $\mathbf{Q}_1$ encodes the covariance of the positive margins $\boldsymbol{\rho}_1$, and $\mathbf{Q}_2$ is similarly defined by replacing $m_1$ with $m_2$ in $\mathbf{Q}_1$. Also note that we have introduced a constant $\tfrac{1}{2}$ before the quadratic term for convenience. The normalization constraint $\mathbf{1}^\top \mathbf{w} = 1$ removes the scale ambiguity of $\mathbf{w}$; otherwise the problem is ill-posed.

In the case of LAC, the covariance matrix of the negative data is not involved; this corresponds to setting the block $\mathbf{Q}_2$ to zero. So we can simply set $\mathbf{Q} = \begin{bmatrix} \mathbf{Q}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$, and (11) becomes the optimization problem of LAC.

At this stage, it remains unclear how to solve problem (11), because we do not know all the weak classifiers in advance. The number of possible weak classifiers could be infinite, in which case the dimension of the optimization variable $\mathbf{w}$ is infinite; so (11) is a semi-infinite quadratic program (SIQP). We show how column generation can be used to solve this problem. To make column generation applicable, we need to derive a specific Lagrange dual of the primal problem.

The Lagrange dual problem We now derive the Lagrange dual of the quadratic problem (11). Although we are only interested in the variable $\mathbf{w}$, we need to keep the auxiliary variable $\boldsymbol{\rho}$ in order to obtain a meaningful dual problem. Introducing dual variables $\mathbf{u} \in \mathbb{R}^m$ (for the constraints $\rho_i = \mathbf{A}_{i:}\mathbf{w}$) and $r$ (for $\mathbf{1}^\top \mathbf{w} = 1$), minimizing the Lagrangian of (11) over the primal variables gives the following Lagrange dual:

$$\max_{\mathbf{u},\, r}\ -r - \tfrac{1}{2} (\mathbf{u} - \theta \mathbf{e})^\top \mathbf{Q}^{-1} (\mathbf{u} - \theta \mathbf{e}), \quad \text{s.t.}\ \sum_{i=1}^{m} u_i A_{ij} \leq r,\ j = 1, \dots, n. \tag{12}$$

In our case, $\mathbf{Q}$ is rank-deficient and its inverse does not exist (for both LDA and LAC). We can simply regularize $\mathbf{Q}$ with $\mathbf{Q} + \delta \mathbf{I}$, where $\delta$ is a very small positive constant. One of the KKT optimality conditions between the dual and primal, $\mathbf{u}^\star = \theta \mathbf{e} - \mathbf{Q} \boldsymbol{\rho}^\star$, can be used to establish the connection between the dual optimum and the primal optimum. It is obtained from the fact that the gradient of the Lagrangian w.r.t. $\boldsymbol{\rho}$ must vanish at the optimum.

Problem (12) can be viewed as a regularized LPBoost problem. Compared with hard-margin LPBoost [5], the only difference is the quadratic regularization term in the cost function. The duality gap between the primal (11) and the dual (12) is zero; in other words, the solutions of (11) and (12) coincide. Instead of solving (11) directly, one iteratively calculates the most violated constraint of (12) for the current solution and adds this constraint to the optimization problem. In theory, any column that violates dual feasibility can be added. To speed up the convergence, we add the most violated constraint by solving the following problem:

$$h'(\cdot) = \operatorname*{argmax}_{h(\cdot) \in \mathcal{H}}\ \sum_{i=1}^{m} u_i\, y_i\, h(\mathbf{x}_i). \tag{13}$$
This is exactly the same problem that standard AdaBoost and LPBoost solve to produce the best weak classifier; that is, to find the weak classifier with the minimum weighted training error. We summarize the LACBoost/FisherBoost algorithm in Algorithm 1. By simply changing $\mathbf{Q}_2$, Algorithm 1 can be used to train either LACBoost or FisherBoost. Note that to obtain an actual strong classifier, one may need to include an offset $b$, i.e., the final classifier is $\mathrm{sign}\bigl(\sum_j w_j h_j(\mathbf{x}) - b\bigr)$, because the cost function (7) of our algorithm does not itself minimize any classification error; it only finds a projection direction in which the data can be maximally separated. A simple line search can find an optimal $b$. Moreover, when training a cascade, we need to tune this offset anyway, as shown in (5).
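The base-learner oracle in (13) simply returns the weak classifier with the largest edge under the current dual weights; a sketch over a hypothetical pool of decision stumps (function name and data are illustrative):

```python
import numpy as np

def best_stump(X, y, u, thresholds):
    # Return the stump threshold with the largest edge sum_i u_i * y_i * h(x_i),
    # i.e. the most violated constraint in the dual (equivalently, the weak
    # classifier with minimum weighted training error).
    best_t, best_edge = None, -np.inf
    for t in thresholds:
        h = np.where(X > t, 1, -1)       # stump outputs in {-1, +1}
        edge = float(np.sum(u * y * h))
        if edge > best_edge:
            best_t, best_edge = t, edge
    return best_t, best_edge

X = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([-1, -1, 1, 1])
u = np.full(4, 0.25)                     # uniform dual weights
t_star, edge = best_stump(X, y, u, thresholds=[0.2, 0.5, 0.8])
```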

The convergence of Algorithm 1 is guaranteed by general column generation, or cutting-plane, algorithms and is easy to establish: when a new $h'(\cdot)$ that violates dual feasibility is added, the optimal value of the dual problem (a maximization) decreases; accordingly, the optimal value of the primal problem decreases too, because they share the same optimal value due to the zero duality gap. Moreover, the primal cost function is convex and bounded below, so in the end the procedure converges to the global minimum.

Input: Labeled training data $(\mathbf{x}_i, y_i)$, $i = 1, \dots, m$; termination threshold $\varepsilon > 0$; regularization parameter $\theta$; maximum number of iterations $n_{\max}$. Initialization: no weak classifier selected; uniform weights $u_i$ on the training data. for $k = 1, \dots, n_{\max}$ do   Find a new weak classifier $h'(\cdot)$ by solving (13); check for the optimality:
if $h'(\cdot)$ violates dual feasibility by less than $\varepsilon$,
break; the problem is solved;
Add $h'(\cdot)$ to the restricted master problem, which corresponds to a new constraint in the dual; Solve the dual problem (12) (or the primal problem (11)) and update $\mathbf{u}$ and $r$ (and $\mathbf{w}$). Increment the number of weak classifiers, $n \leftarrow n + 1$.
Output: The selected weak classifiers $h_1, \dots, h_n$. The final strong classifier is $f(\mathbf{x}) = \mathrm{sign}\bigl(\sum_{j=1}^{n} w_j h_j(\mathbf{x}) - b\bigr)$. Here the offset $b$ can be learned by a simple line search.
Algorithm 1 Column generation for QP.

At each iteration of column generation, in theory, we can solve either the dual (12) or the primal problem (11). In practice, however, it can be much faster to solve the primal problem because: (i) the primal problem generally has a smaller size and is hence faster to solve; the number of variables of (12) is $m$ (the number of training examples) at each iteration, while the number of variables of the primal problem is the number of weak classifiers selected so far, which is typically far smaller; (ii) the dual problem is a standard QP with no special structure to exploit, whereas, as we will show, the primal problem belongs to a special class of problems that can be efficiently solved using entropic/exponentiated gradient descent (EG) [15, 16]. A fast QP solver is extremely important for training an object detector, because we need to solve a few thousand QP problems.

We can recover both of the dual variables easily from the primal solution:

$$\mathbf{u}^\star = \theta \mathbf{e} - \mathbf{Q} \boldsymbol{\rho}^\star, \tag{14}$$

$$r^\star = \max_{j = 1, \dots, n}\ \bigl\{ (\mathbf{A}_{:j})^\top \mathbf{u}^\star \bigr\}. \tag{15}$$

The second equation is obtained from the fact that, at optimum, at least one of the dual problem's constraints must hold with equality; that is to say, $r^\star$ is the largest edge over all weak classifiers.

We give a brief introduction to the EG algorithm before we proceed. Let us first define the unit simplex $\Delta_n = \{\mathbf{w} \in \mathbb{R}^n : \mathbf{1}^\top \mathbf{w} = 1,\ \mathbf{w} \succcurlyeq \mathbf{0}\}$. EG efficiently solves the convex optimization problem

$$\min_{\mathbf{w}}\ f(\mathbf{w}), \quad \text{s.t.}\ \mathbf{w} \in \Delta_n, \tag{16}$$

under the assumption that the objective function $f(\cdot)$ is a convex Lipschitz continuous function with Lipschitz constant $L_f$ w.r.t. a fixed given norm $\|\cdot\|$. The mathematical definition of $L_f$ is that $|f(\mathbf{w}) - f(\mathbf{z})| \leq L_f \|\mathbf{w} - \mathbf{z}\|$ holds for any $\mathbf{w}, \mathbf{z}$ in the domain of $f(\cdot)$. The EG algorithm is very simple:

  1. Initialize with $\mathbf{w}^0$ in the interior of $\Delta_n$;

  2. Generate the sequence $\{\mathbf{w}^k\}$, $k = 1, 2, \dots$, with

$$w_i^k = \frac{w_i^{k-1} \exp\bigl(-\tau_k\, [\nabla f(\mathbf{w}^{k-1})]_i\bigr)}{\sum_{j=1}^{n} w_j^{k-1} \exp\bigl(-\tau_k\, [\nabla f(\mathbf{w}^{k-1})]_j\bigr)}. \tag{17}$$

    Here $\tau_k$ is the step-size and $\nabla f(\cdot)$ is the gradient of $f(\cdot)$;

  3. Stop if some stopping criterion is met.

The learning step-size can be determined by following [15]. In [16], the authors used a simpler strategy to set the learning rate.

EG is a very useful tool for solving large-scale convex minimization problems over the unit simplex. Compared with standard QP solvers like Mosek [17], EG is much faster. EG makes it possible to train a detector in almost the same amount of time as standard AdaBoost, since the majority of the time is spent on weak classifier training and bootstrapping.
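A minimal sketch of the EG update on the unit simplex, minimizing a toy quadratic whose simplex-constrained minimizer is known; the fixed step size is an assumption made for this example (see [15, 16] for principled choices):

```python
import numpy as np

def eg_minimize(grad, n, steps=300, tau=0.5):
    # Entropic/exponentiated gradient descent over the unit simplex.
    w = np.full(n, 1.0 / n)              # start at the simplex centre
    for _ in range(steps):
        w = w * np.exp(-tau * grad(w))   # multiplicative (entropic) update
        w /= w.sum()                     # renormalise onto the simplex
    return w

# Toy objective f(w) = 0.5 * ||w - c||^2 with c already on the simplex,
# so the constrained minimizer is w = c.
c = np.array([0.7, 0.2, 0.1])
w = eg_minimize(lambda w: w - c, n=3)
```

Because the update is multiplicative and followed by normalization, every iterate stays on the simplex, so no explicit projection step is needed.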

For LACBoost, the block $\mathbf{Q}_2$ is zero, so the quadratic term involves only the positive-data block $\mathbf{Q}_1$; a similar simplification applies to FisherBoost. Under these conditions the auxiliary variable $\boldsymbol{\rho} = \mathbf{A}\mathbf{w}$ can be eliminated, and the primal problem (11) reduces to a QP over the unit simplex $\Delta_n$:

$$\min_{\mathbf{w} \in \Delta_n}\ \tfrac{1}{2}\, (\mathbf{A}\mathbf{w})^\top \mathbf{Q}\, (\mathbf{A}\mathbf{w}) - \theta\, \mathbf{e}^\top \mathbf{A} \mathbf{w}. \tag{19}$$

We can efficiently solve (19) using the EG method. In EG there is an important parameter $L_f$, which is used to determine the step-size; it can be bounded using the norm of the gradient $\nabla f(\mathbf{w})$, which in our case is a linear function of $\mathbf{w}$ and is trivial to compute. The convergence of EG is guaranteed; see [15] for details.

In summary, when using EG to solve the primal problem, the corresponding line of Algorithm 1 becomes: solve the primal problem (19) using EG, and update the dual variables $\mathbf{u}$ with (14) and $r$ with (15).

4 Applications to Face Detection

Figure 1: Decision boundaries of AdaBoost (left) and FisherBoost (right) on 2D artificial data (positive and negative examples are shown with different markers). Weak classifiers are decision stumps. FisherBoost tends to correctly classify more positive data in this case.

First, let us show a simple example on a synthetic dataset (with more negative data than positive data) to illustrate the difference between FisherBoost and AdaBoost. Fig. 1 demonstrates the subtle difference between the classification boundaries obtained by AdaBoost and FisherBoost. We can see that FisherBoost focuses more on correctly classifying positive data points, likely because AdaBoost only optimizes the overall classification accuracy. This finding is consistent with the results in [18].

Figure 2: Normality test (normal probability plot) for the face data's margin distribution of the first three nodes, which contain increasing numbers of weak classifiers. Curves close to a straight line indicate a distribution close to a Gaussian.

Face detection In this section, we compare our algorithms with other state-of-the-art face detectors. We first show some results on the validity of LAC (or Fisher LDA) post-processing for improving node learning in object detection. Fig. 2 illustrates the normal probability plot of the margins of the positive training data for the first three nodes in the multi-exit LAC cascade. Clearly, the more weak classifiers used, the more closely the margin follows a Gaussian distribution. In other words, LAC may achieve better performance when a larger number of weak classifiers are used, and its performance could be poor with too few weak classifiers. The same statement applies to Fisher LDA, and to LACBoost and FisherBoost, too. Therefore, we do not apply LAC/LDA in the first eight nodes, where the margin distribution could be far from Gaussian. Because the later nodes of a multi-exit cascade contain more weak classifiers, we conjecture that the multi-exit cascade might meet the Gaussianity requirement better. We have compared multi-exit cascades with LDA/LAC post-processing against standard cascades with LDA/LAC post-processing in [4], and slightly improved performance was obtained.

Six methods are evaluated within the multi-exit cascade framework [10]: AdaBoost with LAC or LDA post-processing, AsymBoost with LAC or LDA post-processing [4], and our FisherBoost and LACBoost. We have also implemented Viola-Jones' face detector as the baseline [2]. As in [2], five basic types of Haar-like features are calculated, which form an over-complete feature set on each training image. To speed up weak classifier training, as in [4], we uniformly sample a subset of the features for training the weak classifiers (decision stumps). The training data are mirrored face images (split into training and validation sets) and a large pool of background images, the same as in [4].

Multi-exit cascades are trained with the various methods. For fair comparison, we use the same cascade structure and the same number of weak classifiers for all compared learning methods. The indexes of the exits are pre-set to simplify the training procedure. For our FisherBoost and LACBoost there is an important parameter $\theta$, which is chosen from a small discrete set of candidate values. We have not carefully tuned this parameter using cross-validation; instead, we train a short cascade for each candidate $\theta$ and choose the one with the best training accuracy.² (² Training a complete cascade for each candidate and choosing the best $\theta$ on cross-validation data may give better detection rates.) At each exit, negative examples misclassified by the current cascade are discarded, and new negative examples are bootstrapped from the background image pool. In total, billions of negative examples are extracted from the pool. The positive training data and validation data remain unchanged during the training process.

Our experiments are performed on a workstation with Intel Xeon CPUs. Training a complete multi-exit cascade with FisherBoost or LACBoost takes a comparable number of hours to training with AdaBoost or AsymBoost.³ (³ Our implementation is in C++, and only the weak classifier training part is parallelized using OpenMP.) In other words, our EG algorithm accounts for only a small fraction of the total training time, even though we need to solve a QP at each iteration. A rough estimate of the computational complexity is as follows: let $m$ be the number of training examples and $n$ the number of weak classifiers. At each iteration of cascade training, the complexity of solving the primal QP with EG scales with $m$, $n$, and the number of iterations needed for EG's convergence, while the complexity of weak classifier training additionally scales with the number of all Haar-feature patterns, which is far larger in our experiments. So the majority of the training computation is spent on weak classifier training.

We have also experimentally measured the speedup of EG over standard QP solvers by solving the primal QP (19) with both EG and Mosek [17]. With the same accuracy tolerance (Mosek's primal-dual gap and EG's convergence tolerance set to the same value), EG is substantially faster than Mosek. Moreover, at each iteration of cascade training, EG can take advantage of the previous iteration's solution by starting from a small perturbation of it. Such a warm start yields a further speedup in our experiments, while no off-the-shelf warm-start QP solvers are available yet.

We evaluate detection performance on the MIT+CMU frontal face test set. Two performance metrics are used: one for each node and one for the entire cascade. The node metric measures how well the classifiers meet the node learning objective, and provides useful information about each method's capability to achieve that goal. The cascade metric uses receiver operating characteristic (ROC) curves to compare the performance of the entire cascade. Multiple factors affect the cascade's performance: the classifiers, the cascade structure, bootstrapping, etc.

We show the node comparison results in Fig. 3. The node performances of FisherBoost and LACBoost are very similar. From Fig. 3, as reported in [4], LDA or LAC post-processing can considerably reduce the false negative rates; as expected, our proposed FisherBoost and LACBoost further reduce the false negative rates significantly. This verifies the advantage of selecting features with the node learning goal taken into consideration.

From the ROC curves in Fig. 4, we can see that FisherBoost and LACBoost outperform all the other methods. In contrast to the per-node detection results, LACBoost is slightly worse than FisherBoost in some cases. That might be because many factors affect the final detection result; in particular, LAC assumes Gaussian and symmetric data distributions, which may not hold well in the early nodes. This could explain why LACBoost does not always perform best. Wu et al. observed the same phenomenon: LAC post-processing does not outperform LDA post-processing in a few cases. However, we believe that for harder detection tasks the benefits of LACBoost would be more pronounced.

The error reduction of FisherBoost and LACBoost in Fig. 4 is not as great as that in Fig. 3. This might be explained by the fact that the cascade structure and negative-data bootstrapping dilute the error-reducing effects to some extent. We have also compared our methods with the boosted greedy sparse LDA (BGSLDA) of [18], which is considered among the state-of-the-art; the ROC curves are provided in the supplementary package. Both of our methods outperform BGSLDA with AdaBoost/AsymBoost in detection rate. Note that BGSLDA uses the standard cascade, so besides the benefits of FisherBoost/LACBoost, the multi-exit cascade also contributes to the improvement.

Figure 3: Node performances on the validation data. “Ada” means that features are selected using AdaBoost; “Asym” means that features are selected using AsymBoost.
Figure 4: Cascade performances using ROC curves (number of false positives versus detection rate) on the MIT+CMU test data. “Ada” means that features are selected using AdaBoost. Viola-Jones cascade is the method in [2]. “Asym” means that features are selected using AsymBoost.

5 Conclusion

By explicitly taking the node learning goal of cascade classifiers into account, we have designed new boosting algorithms for more effective object detection. Experiments validate the superiority of our FisherBoost and LACBoost. We have also proposed using entropic gradient descent to efficiently implement FisherBoost and LACBoost. The proposed algorithms are easy to implement and can be applied to other asymmetric classification tasks in computer vision. We are also exploring new asymmetric boosting algorithms inspired by asymmetric kernel classification methods.


  • [1] Wu, J., Mullin, M.D., Rehg, J.M.: Linear asymmetric classifier for cascade detectors. In: Proc. Int. Conf. Mach. Learn., Bonn, Germany (2005) 988–995
  • [2] Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comp. Vis. 57(2) (2004) 137–154
  • [3] Viola, P., Jones, M.: Fast and robust classification using asymmetric AdaBoost and a detector cascade. In: Proc. Adv. Neural Inf. Process. Syst., MIT Press (2002) 1311–1318
  • [4] Wu, J., Brubaker, S.C., Mullin, M.D., Rehg, J.M.: Fast asymmetric learning for cascade face detection. IEEE Trans. Pattern Anal. Mach. Intell. 30(3) (2008) 369–382
  • [5] Demiriz, A., Bennett, K., Shawe-Taylor, J.: Linear programming boosting via column generation. Mach. Learn. 46(1-3) (2002) 225–254
  • [6] Pham, M.T., Cham, T.J.: Online learning asymmetric boosted classifiers for object detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn., Minneapolis, MN (2007)
  • [7] Wu, J., Rehg, J.M., Mullin, M.D.: Learning a rare event detection cascade by direct feature selection. In: Proc. Adv. Neural Inf. Process. Syst., Vancouver (2003) 1523–1530
  • [8] Bourdev, L., Brandt, J.: Robust object detection via soft cascade. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn., San Diego, CA, US (2005) 236–243
  • [9] Xiao, R., Zhu, H., Sun, H., Tang, X.: Dynamic cascades for face detection. In: Proc. IEEE Int. Conf. Comp. Vis., Rio de Janeiro, Brazil (2007)
  • [10] Pham, M.T., Hoang, V.D.D., Cham, T.J.: Detection with multi-exit asymmetric boosting. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn., Anchorage, Alaska (2008)
  • [11] Pham, M.T., Cham, T.J.: Fast training and selection of Haar features using statistics in boosting-based face detection. In: Proc. IEEE Int. Conf. Comp. Vis., Rio de Janeiro, Brazil (2007)
  • [12] Li, S.Z., Zhang, Z.: FloatBoost learning and statistical face detection. IEEE Trans. Pattern Anal. Mach. Intell. 26(9) (2004) 1112–1123
  • [13] Liu, C., Shum, H.Y.: Kullback-Leibler boosting. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. Volume 1., Madison, Wisconsin (June 2003) 587–594
  • [14] Rätsch, G., Mika, S., Schölkopf, B., Müller, K.R.: Constructing boosting algorithms from SVMs: An application to one-class classification. IEEE Trans. Pattern Anal. Mach. Intell. 24(9) (2002) 1184–1199
  • [15] Beck, A., Teboulle, M.: Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett. 31(3) (2003) 167–175
  • [16] Collins, M., Globerson, A., Koo, T., Carreras, X., Bartlett, P.L.: Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. J. Mach. Learn. Res. (2008) 1775–1822
  • [17] MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual, version 5.0, revision 93 (2008) http://www.mosek.com/
  • [18] Paisitkriangkrai, S., Shen, C., Zhang, J.: Efficiently training a better visual detector with sparse eigenvectors. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn., Miami, Florida, US (June 2009)