Fast Training of Effective Multi-class Boosting Using Coordinate Descent Optimization

11/23/2013 ∙ by Guosheng Lin, et al.

We present a novel column generation based boosting method for multi-class classification. Our multi-class boosting is formulated in a single optimization problem as in Shen and Hao (2011). Different from most existing multi-class boosting methods, which use the same set of weak learners for all the classes, we train class-specified weak learners (i.e., each class has a different set of weak learners). We show that using separate weak learner sets for each class leads to fast convergence, without introducing additional computational overhead in the training procedure. To further make the training more efficient and scalable, we also propose a fast coordinate descent method for solving the optimization problem at each boosting iteration. The proposed coordinate descent method is conceptually simple and easy to implement in that it uses a closed-form solution for each coordinate update. Experimental results on a variety of datasets show that, compared to a range of existing multi-class boosting methods, the proposed method has a much faster convergence rate and better generalization performance in most cases. We also empirically show that the proposed fast coordinate descent algorithm needs less training time than the MultiBoost algorithm in Shen and Hao (2011).




1 Introduction

Boosting methods combine a set of weak classifiers (weak learners) to form a strong classifier. Boosting has been extensively studied [3, 4] and applied to a wide range of applications due to its robustness and efficiency (e.g., real-time object detection [5, 6, 7]). Despite the fact that most classification tasks are inherently multi-class problems, the majority of boosting algorithms are designed for binary classification. A popular approach to multi-class boosting is to split the multi-class problem into a set of binary classification problems; a simple example is the one-vs-all approach. The well-known error correcting output coding (ECOC) methods [8] belong to this category: AdaBoost.ECC [9], AdaBoost.MH [10] and AdaBoost.MO [10] can all be viewed as examples of the ECOC approach. The second approach is to directly formulate multi-class classification as a single learning task, based on pairwise model comparisons between different classes. Shen and Hao's direct formulation for multi-class boosting (referred to as MultiBoost) is such an example [1]. From the perspective of optimization, MultiBoost can be seen as an extension of the binary column generation boosting framework [11, 4] to the multi-class case. Our work here builds upon MultiBoost. As in most existing multi-class boosting methods, in MultiBoost [1] different classes share the same set of weak learners, which leads to a sparse solution of the model parameters and hence slow convergence. To address this problem, in this work we propose a novel formulation (referred to as MultiBoost^cw) for multi-class boosting that uses separate weak learner sets: each class uses its own set of weak learners. Compared to MultiBoost, MultiBoost^cw converges much faster, generally has better generalization performance, and does not introduce additional training time. Note that AdaBoost.MO proposed in [10] also uses a different set of weak classifiers for each class. However, AdaBoost.MO is based on ECOC, and its code matrix is specified before learning.
Therefore, in contrast to AdaBoost.ECC, the underlying dependence between the fixed code matrix and the generated binary classifiers is not explicitly taken into consideration. Our MultiBoost^cw, in contrast, is based on the direct formulation of multi-class boosting, which leads to fundamentally different optimization strategies. More importantly, as shown in our experiments, our MultiBoost^cw is much more scalable than AdaBoost.MO, although both enjoy faster convergence than most other multi-class boosting methods.

In MultiBoost [1], sophisticated optimization tools such as Mosek or LBFGS-B [12] are needed to solve the resulting optimization problem at each boosting iteration, which is not very scalable. Here we propose a fast coordinate descent algorithm (FCD) for optimizing the resulting problem at each boosting iteration of MultiBoost^cw. FCD chooses one variable at a time and efficiently solves the single-variable sub-problem. Coordinate descent (CD) has been applied to many large-scale optimization problems. For example, Yuan et al. [13] made comprehensive empirical comparisons of ℓ1-regularized linear classification algorithms and concluded that CD methods are very competitive for solving large-scale problems. In the formulation of MultiBoost (and also in our MultiBoost^cw), the number of variables is the product of the number of classes and the number of weak learners, which can be very large (especially when the number of classes is large); therefore CD methods may be a better choice for fast optimization of multi-class boosting. Our FCD method is specially tailored to the optimization of MultiBoost^cw: we obtain a closed-form solution for each variable update, so the optimization can be extremely fast. The proposed FCD is easy to implement, and no sophisticated optimization toolbox is required.

Main contributions 1) We propose a novel multi-class boosting method (MultiBoost^cw) that uses class-specified weak learners. Unlike MultiBoost, which shares a single set of weak learners across different classes, our method uses a separate set of weak learners for each class. We generate k (the number of classes) weak learners in each boosting iteration, one weak learner for each class. With this mechanism, we are able to achieve much faster convergence. 2) Similar to MultiBoost [1], we employ column generation to implement the boosting training. We derive the Lagrange dual problem of the new multi-class boosting formulation, which enables us to design fully corrective multi-class algorithms using the primal-dual optimization technique. 3) We propose an FCD method for fast training of MultiBoost^cw. We obtain an analytical solution for each variable update in coordinate descent. We use the Karush-Kuhn-Tucker (KKT) conditions to derive effective stopping criteria and to construct working sets of violated variables for faster optimization. We show that FCD can be applied to fully corrective optimization (updating all variables) in multi-class boosting, as well as to fast stage-wise optimization (updating only the newly added variables), as in standard AdaBoost.

Notation Let us assume that we have k classes. A weak learner is a function that maps an example x to a binary output. We denote each weak learner by h(·), with h ∈ H and h(x) ∈ {−1, +1}, where H is the space of all the weak learners and n is the number of generated weak learners. We define the column vector H_r(x) as the outputs of the weak learners associated with the r-th class on example x, and let w_r denote the weak learners' coefficients for class r. Then the strong classifier for class r is F_r(x) = w_r^T H_r(x). We need to learn k strong classifiers, one for each class. Given a test example x, the classification rule is y* = argmax_r F_r(x). 1 denotes a vector with all elements being one; its dimension should be clear from the context.
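The notation above can be made concrete with a small sketch; k, n, the weak learner outputs and the coefficients are all toy values chosen only for illustration:

```python
import numpy as np

# Toy setup: k = 3 classes, each with its own set of n = 4 binary weak
# learners. H[r] holds the +1/-1 outputs of class r's weak learners on one
# example; w[r] holds the (non-negative) coefficients for class r.
rng = np.random.default_rng(0)
k, n = 3, 4
H = rng.choice([-1.0, 1.0], size=(k, n))   # per-class weak learner outputs
w = rng.uniform(0.0, 1.0, size=(k, n))     # per-class coefficients

# Strong classifier for class r: F_r(x) = w_r^T H_r(x), computed for all r.
scores = np.einsum('rn,rn->r', w, H)

# Classification rule: predict the class with the largest confidence.
y_star = int(np.argmax(scores))
```

Each class scoring only its own weak learner outputs is what distinguishes the class-wise model from a shared-weak-learner model, where all rows of H would be identical.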

2 Our Approach

We show how to formulate the multi-class boosting problem in the large margin learning framework. Analogous to MultiBoost, we can define the multi-class margins associated with a training example (x_i, y_i) as

ρ(x_i; y_i, y) = F_{y_i}(x_i) − F_y(x_i),  (1)

for y ∈ {1, …, k} \ {y_i}. Intuitively, ρ(x_i; y_i, y) is the difference between the classification score of the correct model and that of a "wrong" model; we want to make this margin as large as possible. MultiBoost^cw with the exponential loss can be formulated as:

min_w Σ_i Σ_{y ≠ y_i} exp(−ρ(x_i; y_i, y)) + ν 1^T w,  s.t. w ≥ 0.  (2)

Here ρ is defined in (1). We have also introduced the shorthand w = [w_1^T, …, w_k^T]^T for the stacked coefficient vector. The parameter ν controls the complexity of the learned model. The model parameter to learn is w.

Minimizing (2) encourages the confidence score of the correct label of a training example to be larger than the confidences of the other labels. We define Y = {1, …, k} as the set of class labels. The discriminant functions we need to learn are F_r(x), r ∈ Y. The class label prediction for an unknown example x is obtained by maximizing F_r(x) over r ∈ Y, i.e., by finding the class label with the largest confidence. MultiBoost^cw is an extension of MultiBoost [1] for multi-class classification. The only difference is that in MultiBoost different classes share the same set of weak learners, whereas in MultiBoost^cw each class is associated with a separate set of weak learners. We show that MultiBoost^cw learns a more compact model than MultiBoost.
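As a sketch of the quantity being minimized, the following computes the margins of (1) and an exponential-loss objective of the assumed form above, i.e. a sum of exp(−margin) terms plus an ℓ1 penalty ν·Σw with w ≥ 0; the paper's exact (2) may differ in detail:

```python
import numpy as np

def objective(scores, labels, w, nu):
    """scores: (m, k) matrix with scores[i, r] = F_r(x_i);
    labels: length-m array of true classes; w: stacked coefficients."""
    m, k = scores.shape
    loss = 0.0
    for i in range(m):
        for y in range(k):
            if y == labels[i]:
                continue
            # rho(x_i; y_i, y): correct-class score minus wrong-class score.
            margin = scores[i, labels[i]] - scores[i, y]
            loss += np.exp(-margin)
    return loss + nu * w.sum()          # exp loss plus l1 regularizer

scores = np.array([[2.0, 0.5, -1.0],
                   [0.0, 1.5,  0.2]])
labels = np.array([0, 1])
w = np.array([0.3, 0.7, 0.0, 0.5])
val = objective(scores, labels, w, nu=0.1)
```

Larger margins shrink the exp terms, which is exactly the "make this margin as large as possible" intuition above.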

1: Input: training examples (x_1, y_1), …, (x_m, y_m); regularization parameter ν; termination threshold and the maximum iteration number.

2: Initialize: working weak learner sets W_r = ∅ (r = 1, …, k); initialize w = 0.

3: Repeat

4: Solve the sub-problem (4) to find k weak learners (one for each class), and add them to the working weak learner sets.

5: Solve the primal problem (2) on the current working weak learner sets to obtain w (we use the coordinate descent method of Algorithm 2).

6: Update the dual variables in (5) using the primal solution and the KKT conditions.

7: Until the relative change of the primal objective function value is smaller than the prescribed tolerance, or the maximum iteration number is reached.

8: Output: the discriminant functions F_r(x) = w_r^T H_r(x), r = 1, …, k.

Algorithm 1 CG: Column generation for MultiBoost^cw
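The loop of Algorithm 1 can be sketched structurally as follows. Both inner steps are illustrative stand-ins: the search for the most violated weak learners is replaced by a random stump per class, and the restricted master problem by a mock objective, so only the control flow mirrors the algorithm:

```python
import numpy as np

def solve_restricted_primal(X, y, learner_sets):
    # Placeholder "objective": just counts the generated columns.
    return float(sum(len(s) for s in learner_sets))

def column_generation(X, y, k, max_iter=5, tol=1e-6, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    learner_sets = [[] for _ in range(k)]     # one working set per class
    prev_obj = np.inf
    for it in range(max_iter):
        # Step 4 (stub): add one new "weak learner" per class, here a
        # randomly chosen decision stump (feature index, threshold).
        for r in range(k):
            feat = int(rng.integers(X.shape[1]))
            thresh = float(np.median(X[:, feat]))
            learner_sets[r].append((feat, thresh))
        # Step 5 (stub): re-solve the restricted master problem.
        obj = solve_restricted_primal(X, y, learner_sets)
        # Step 7: stop when the relative objective change is small.
        if np.isfinite(prev_obj) and abs(prev_obj - obj) <= tol * max(1.0, abs(prev_obj)):
            break
        prev_obj = obj
    return learner_sets

X = np.arange(12.0).reshape(6, 2)
y = np.array([0, 1, 2, 0, 1, 2])
sets = column_generation(X, y, k=3)
```

Note that k columns are generated per iteration, one per class, which is the structural difference from MultiBoost's single shared column.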

Column generation for MultiBoost^cw To implement boosting, we need to derive the dual problem of (2). Similar to [1], the dual problem of (2) can be written as (3), in which y indexes the class labels and each dual variable is associated with one constraint in (2).


Following the idea of column generation [4], we divide the original problem (2) into a master problem and a sub-problem, and solve them alternately. The master problem is a restricted version of (2) that considers only the generated weak learners. The sub-problem generates k weak learners (one for each class) by finding the most violated constraint of each class in the dual form (3), and adds them to the master problem at each iteration. The sub-problem for finding the most violated constraints is given in (4).


The column generation procedure for MultiBoost^cw is described in Algorithm 1. Essentially, we repeat the following two steps until convergence. 1) We solve the master problem (2) on the working sets W_1, …, W_k to obtain the primal solution w, where W_r is the working set of generated weak learners associated with the r-th class. We obtain the dual solution from the primal solution using the KKT conditions, as given in (5).


2) With the dual solution, we solve the sub-problem (4) to generate k weak learners and add them to the working weak learner sets. In MultiBoost^cw, k weak learners are generated (one per class) in each iteration, while in MultiBoost only one weak learner is generated at each column generation iteration and shared by all classes. As shown in [1] for MultiBoost, the corresponding sub-problem for finding the most violated constraint in the dual form is given in (6).


At each column generation iteration of MultiBoost, (6) is solved to generate one weak learner. Note that solving (6) involves searching over all classes to find the single best weak learner; thus the computational cost is the same as that of our sub-problem (4). This is the reason why MultiBoost^cw does not introduce additional training cost compared to MultiBoost. In general, the solution w of MultiBoost is highly sparse [1], which can also be observed in our empirical study. The weak learner generated by solving (6) is actually targeted at one class, so using this weak learner across all classes in MultiBoost leads to a very sparse solution. The sparsity of w indicates that one weak learner is usually useful only for the prediction of very few classes (typically only one), and useless for most other classes. In this sense, forcing different classes to use the same set of weak learners may not be necessary, and it usually leads to slow convergence. In contrast, by using separate weak learner sets for each class, MultiBoost^cw tends to have a dense solution of w. With k weak learners generated at each iteration, MultiBoost^cw converges much faster.
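To make the weak learner search concrete, here is a hedged sketch of picking the most violated decision stump for one class r, assuming the score of a candidate h is the dual-weighted edge Σ_i d_i c_i h(x_i), with c_i = +1 if y_i = r and −1 otherwise, and d_i aggregating example i's dual variables; the paper's exact criterion in (4) may differ:

```python
import numpy as np

def best_stump_for_class(X, y, d, r):
    """Exhaustively score decision stumps h(x) = sign * sign(x[feat] > thresh)
    and return the one with the largest dual-weighted edge for class r."""
    c = np.where(y == r, 1.0, -1.0)            # +1 for class r, -1 otherwise
    best = (None, None, None, -np.inf)         # (feat, thresh, sign, edge)
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            h = np.where(X[:, feat] > thresh, 1.0, -1.0)
            for sign in (1.0, -1.0):
                edge = float(np.sum(d * c * sign * h))
                if edge > best[3]:
                    best = (feat, float(thresh), sign, edge)
    return best

X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.array([0, 0, 1, 1])
d = np.ones(4)                                 # uniform "dual" weights
feat, thresh, sign, edge = best_stump_for_class(X, y, d, r=1)
```

Running this per class r = 1, …, k yields the k class-specific columns added in each iteration.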

Fast coordinate descent To further speed up training, we propose a fast coordinate descent method (FCD) for solving the primal MultiBoost^cw problem at each column generation iteration. The details of FCD are presented in Algorithm 2. The high-level idea is simple. FCD works iteratively; at each iteration (working set iteration), we compute the violated value of the KKT conditions for each variable in w, construct a working set of violated variables (denoted U), and then pick variables from U for update, one variable at a time. We also use the violated values to define the stopping criterion. FCD is a mix of sequential and stochastic coordinate descent: in the first working set iteration, variables are picked sequentially for update (cyclic CD); in later working set iterations, variables are picked randomly (stochastic CD). In the sequel, we present the details of FCD. First, we describe how to update one variable of w by solving a single-variable sub-problem. For notational simplicity, we define an orthogonal label coding vector for each label, whose entries are given by an indicator function that returns 1 if its argument holds and 0 otherwise.

Here ⊗ denotes the tensor product. The MultiBoost^cw problem (2) can then be equivalently rewritten as (7).


We assume that binary weak learners are used here: h(x) ∈ {−1, +1}. w_j denotes the j-th variable of w; the remaining variables of w are held fixed during a coordinate update of w_j. For the j-th dimension, the relevant combined output in each constraint takes only three possible values, {−1, 0, +1}; accordingly, the constraint indices can be partitioned into three sets by this value. Let F be the objective function of the optimization (7); F can be decomposed as in (8).


The quantities appearing in (8) are defined in (9a) and (9b).


In the variable update step, one variable is picked at a time for update while the other variables are fixed; thus we need to minimize F in (8) with respect to w_j, which is a single-variable minimization problem. It can be written as (10).


The derivative of the objective function in (10) with respect to w_j is given in (11).


By solving (11) together with the bound constraint w_j ≥ 0, we obtain the analytical solution (12) of the optimization in (10).


When ν is large, (12) can be approximately simplified to (13).


With the analytical solution in (12), the update of each dimension of w can be performed extremely efficiently. The main requirement for obtaining the closed-form solution is the use of discrete weak learners.
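To illustrate why binary weak learners admit a closed form, assume the single-variable sub-problem reduces to minimizing A·e^{−t} + B·e^{t} + ν·t over t ≥ 0, with constants A, B > 0 collected from the fixed variables. This is a plausible shape for (10) under the exponential loss, not necessarily the paper's exact (12):

```python
import numpy as np

def coordinate_update(A, B, nu):
    """Minimize  f(t) = A*exp(-t) + B*exp(t) + nu*t  over t >= 0 (A, B > 0).

    Setting f'(t) = -A*exp(-t) + B*exp(t) + nu = 0 and substituting
    z = exp(t) gives the quadratic B*z**2 + nu*z - A = 0, whose positive
    root is the unconstrained minimizer (f is convex); clipping the log of
    that root at 0 enforces the constraint t >= 0.
    """
    z = (-nu + np.sqrt(nu**2 + 4.0 * A * B)) / (2.0 * B)
    return max(0.0, float(np.log(z)))

t = coordinate_update(A=4.0, B=1.0, nu=0.0)   # stationary point at log(2)
```

Because each update is a square root, a logarithm and a clip, a full pass over the variables costs essentially one vector operation per coordinate, which is why no optimization toolbox is needed.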

We use the KKT conditions to construct a set of violated variables and to derive meaningful stopping criteria. For the optimization of MultiBoost^cw in (7), the KKT conditions are necessary and also sufficient for optimality, since the objective is convex and the constraints are linear. Introducing a multiplier for each non-negativity constraint gives the Lagrangian of (7); according to the KKT conditions, w is optimal for (7) if and only if it is feasible and satisfies the stationarity and complementary slackness conditions.

Considering the complementary slackness condition: if w_j > 0, the corresponding multiplier is zero and hence ∂F/∂w_j = 0; if w_j = 0, we have ∂F/∂w_j ≥ 0. The optimality conditions can therefore be written as (14).


For notational simplicity, we define a column vector d as in (15). With the optimality conditions (14), we define in (16) the violated value of the j-th variable of the solution w.


At each working set iteration of FCD, we compute the violated values and construct a working set U of violated variables; we then randomly (except in the first iteration) pick one variable from U for update, repeating the picking |U| times, where |U| is the number of elements of U. U is defined as


where ε is a tolerance parameter. Analogous to [14] and [13], with the definition of the variable violated values in (16), we can define the stopping criterion as


where ε can be the same tolerance parameter as in the working set definition (17). The stopping condition (18) states that if the largest violated value is smaller than the threshold, FCD terminates. Note that using the KKT conditions here amounts to using gradient information. An inexact solution of w is acceptable at each column generation iteration, so we place a maximum iteration number (τ in Algorithm 2) on FCD to prevent unnecessary computation. We need to compute d before obtaining the violated values, but computing d in (15) directly is expensive. Fortunately, we are able to efficiently update d after each single-variable update, avoiding the re-computation of (15). d in (15) can be equivalently written as (19).


The update of d is then given by (20).


With the definition of d in (19), the quantities needed for a one-variable update can be computed efficiently using d, avoiding the expensive computations in (9a) and (9b); they can be equivalently defined as in (21).
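The incremental update can be sketched as follows, assuming d stores the exponentiated negative margins exp(−ρ_c) of the constraints and g_j ∈ {−1, 0, +1} is the contribution pattern of variable j; these are hypothetical names, and the exact bookkeeping of (20) may differ:

```python
import numpy as np

def refresh_d(d, g_j, delta):
    """After w_j changes by delta, each margin rho_c changes by
    delta * g_j[c], so d_c = exp(-rho_c) is rescaled elementwise:
    d_c *= exp(-delta * g_j[c]). Entries with g_j[c] == 0 are untouched."""
    return d * np.exp(-delta * g_j)

rho = np.array([0.5, -0.2, 1.0])   # current margins of three constraints
d = np.exp(-rho)                   # cached exponentiated negative margins
g_j = np.array([1.0, 0.0, -1.0])   # contribution of variable j (in {-1,0,1})
delta = 0.3                        # change applied to w_j
d_new = refresh_d(d, g_j, delta)
```

This costs one multiply per constraint instead of recomputing every margin from scratch, which is the whole point of maintaining d.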


Some discussion on FCD (Algorithm 2) follows. 1) Stage-wise optimization is a special case of FCD. Compared to totally corrective optimization, which considers all variables of w for update, stage-wise optimization considers only the newly added variables. We initialize the working set with the newly added variables, and in the first working set iteration we update them sequentially. If the maximum number of working set iterations is set to one (τ = 1 in Algorithm 2), FCD becomes a stage-wise algorithm. FCD is thus a generalized algorithm with totally corrective and stage-wise updates as special cases. In the stage-wise setting, a large regularization parameter ν is usually implicitly enforced, so we can use the analytical solution in (13) for the variable update.

2) Randomly picking one variable for update without any guidance leads to slow local convergence: when the solution gets close to optimality, usually only very few variables need updating, and most picks do not "hit". In column generation (CG), w is initialized with the solution of the previous CG iteration, which is already fairly close to optimality; therefore the slow local convergence of stochastic coordinate descent (CD) is more pronounced in column generation based boosting. Here we use the KKT conditions to iteratively construct a working set of violated variables, and only the variables in the working set need updating. This strategy leads to faster CD convergence.

1: Input: training examples; regularization parameter ν; tolerance ε; working weak learner sets; initial value of w; maximum number of working set iterations τ.

2: Initialize: the variable working set U with the indices of the variables in w that correspond to the newly added weak learners; initialize d as in (15); working set iteration index t = 0.

3: Repeat (working set iteration)

4: t ← t + 1; reset the inner loop index: s ← 0.

5: While s < |U| (|U| is the size of U)

6: Pick one variable index j from U: if t = 1, pick sequentially; otherwise, pick randomly.

7: Compute the quantities in (21) using d.

8: Update the variable w_j by (12) using these quantities.

9: Update d by (20) using the updated w_j; s ← s + 1.

10: End While

11: Compute the violated values in (16) for all variables.

12: Re-construct the variable working set U as in (17) using the violated values.

13: Until the stopping condition (18) is satisfied, or the maximum number of working set iterations is reached: t > τ.

14: Output: w.

Algorithm 2 FCD: Fast coordinate descent for MultiBoost^cw
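An end-to-end sketch of the working-set logic of Algorithm 2, run on a stand-in objective rather than (7): the violated values follow the KKT logic described above (|∂F/∂w_j| when w_j > 0, max(0, −∂F/∂w_j) when w_j = 0), the first pass is sequential, later passes are random, and the loop stops when no variable is violated beyond ε or τ iterations are used:

```python
import numpy as np

def fcd(grad, step, w, eps=1e-6, tau=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)

    def violation(g, w):
        # KKT-based violated values for min F(w) s.t. w >= 0, as in (16).
        return np.where(w > 0, np.abs(g), np.maximum(0.0, -g))

    U = list(range(len(w)))                    # stand-in initial working set
    for t in range(tau):
        for s in range(len(U)):
            # First pass: sequential picks; later passes: random picks.
            j = U[s] if t == 0 else U[int(rng.integers(len(U)))]
            w[j] = max(0.0, w[j] - step * grad(w)[j])   # projected CD step
        v = violation(grad(w), w)              # violated values for all vars
        U = [j for j in range(len(w)) if v[j] > eps]    # working set, as in (17)
        if len(U) == 0:                        # stopping condition, as in (18)
            break
    return w

# Stand-in problem: min 0.5 * ||w - a||^2  s.t. w >= 0; solution max(a, 0).
a = np.array([1.0, -2.0, 0.5])
w_opt = fcd(lambda w: w - a, step=0.5, w=np.zeros(3))
```

In the boosting setting the projected gradient step would be replaced by the closed-form update (12) and the gradient by the cached quantities in (21); only the working-set control flow is shown here.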

3 Experiments

We evaluate our method MultiBoost^cw on several UCI datasets and a variety of multi-class image classification applications, including digit recognition, scene recognition and traffic sign recognition. We compare MultiBoost^cw against MultiBoost [1] with the exponential loss, and three other popular multi-class boosting algorithms: AdaBoost.ECC [9], AdaBoost.MH [10] and AdaBoost.MO [10]. We use FCD as the solver for MultiBoost^cw, and LBFGS-B [12] for MultiBoost. We also perform further experiments to evaluate FCD in detail. For all experiments, the best regularization parameter ν for MultiBoost^cw and MultiBoost is selected from a set of candidate values; the tolerance parameter ε in FCD is set to a small constant. We use MultiBoost^cw-1 to denote MultiBoost^cw with the stage-wise setting of FCD, which uses only one working set iteration (τ = 1 in Algorithm 2). In MultiBoost^cw-1, we fix ν to a large value.

All experiments are run 5 times. We compare the testing error, the total training time and the solver time on all datasets. The results show that our MultiBoost^cw and MultiBoost^cw-1 converge much faster than the other methods, use less training time than MultiBoost, and achieve the best testing error on most datasets.

AdaBoost.MO [10] (Ada.MO) has a convergence rate similar to our method's, but it is much slower and becomes intractable for large-scale datasets. We run Ada.MO on some UCI datasets and MNIST; results are shown in Fig. 1 and Fig. 2. We set a maximum training time (1000 seconds) for Ada.MO; all other methods stay below this maximum on those datasets. If the maximum time is reached, we report the results of the finished iterations.

Figure 1:

Results on 2 UCI datasets: VOWEL and ISOLET. CW and CW-1 are our methods (MultiBoost^cw and MultiBoost^cw-1); CW-1 uses the stage-wise setting. The number after each method name is the mean value, with standard deviation, at the last iteration. Our methods converge much faster and achieve competitive test accuracy. The total training time and the solver time of our methods are both less than those of MultiBoost.


UCI datasets: we use 2 UCI multi-class datasets: VOWEL and ISOLET. For each dataset, we randomly select 75% data for training and the rest for testing. Results are shown in Fig. 1.

Figure 2: Experiments on 3 handwritten digit recognition datasets: USPS, PENDIGITS and MNIST. CW and CW-1 are our methods; CW-1 uses the stage-wise setting. Our methods converge much faster, achieve the best test error and use less training time. Ada.MO has a convergence rate similar to ours, but requires much more training time; with a maximum training time of 1000 seconds, Ada.MO failed to finish 500 iterations on all 3 datasets.

Handwritten digit recognition: we use 3 handwritten datasets: MNIST, USPS and PENDIGITS. For MNIST, we randomly sample 1000 examples from each class, and use the original test set of 10,000 examples. For USPS and PENDIGITS, we randomly select 75% for training, the rest for testing. Results are shown in Fig. 2.

Figure 3: Experiments on 3 image datasets: PASCAL07, LabelMe and CIFAR10. CW and CW-1 are our methods; CW-1 uses the stage-wise setting. Our methods converge much faster, achieve the best test error and use less training time.

Image datasets: PASCAL07, LabelMe and CIFAR10: For PASCAL07, we use the 5 types of features provided in [15]. For LabelMe, we use the subset LabelMe-12-50k and generate GIST features. For these two datasets, we use only those images that have a single class label, with 70% of the data for training and the rest for testing. For CIFAR10, we construct 2 datasets: one uses GIST features and the other uses raw pixel values. We use the provided test set and the 5 provided training sets for the 5 runs. Results are shown in Fig. 3.

Figure 4: Experiments on 2 scene recognition datasets: SCENE15 and a subset of SUN. CW and CW-1 are our methods; CW-1 uses the stage-wise setting. Our methods converge much faster, achieve the best test error and use less training time.

Scene recognition: we use 2 scene image datasets: Scene15 [16] and SUN [17]. For Scene15, we randomly select 100 images per class for training and use the rest for testing. We generate histograms of code words as features, with a codebook size of 200. Each image is divided into 31 sub-windows in a spatially hierarchical manner; we generate a histogram for each sub-window, so the histogram feature dimension is 6200. For the SUN dataset, we construct a subset of the original dataset containing 25 categories. For each category, we use the top 200 images, randomly selecting 80% of the data for training and using the rest for testing. We use the HOG features described in [17]. Results are shown in Fig. 4.
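The Scene15 feature dimension quoted above follows directly from the layout, a 200-word codebook pooled over 31 spatial sub-windows:

```python
# One 200-bin histogram per each of the 31 spatial sub-windows,
# concatenated into a single feature vector.
codebook_size = 200
num_subwindows = 31
feature_dim = codebook_size * num_subwindows  # 6200, as stated in the text
```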

Figure 5: Results on the traffic sign dataset GTSRB. CW and CW-1 (stage-wise setting) are our methods. Our methods converge much faster, achieve the best test error and use less training time.

Traffic sign recognition: we use the GTSRB traffic sign dataset. There are 43 classes and more than 50,000 images. We use the 3 provided types of HOG features, giving 6052 features in total. We randomly select 100 examples per class for training and use the original test set. Results are shown in Fig. 5.

3.1 FCD evaluation

We perform further experiments to evaluate FCD under different parameter settings, and compare it with the LBFGS-B [12] solver. We use 3 datasets in this section: VOWEL, USPS and SCENE15. We run FCD with different settings of the maximum number of working set iterations (τ in Algorithm 2) to evaluate how this setting affects the performance of FCD. We also run the LBFGS-B [12] solver on the same optimization problem (2) as FCD, with the same parameters in all cases. Results are shown in Fig. 6. For LBFGS-B, we use the default convergence setting to obtain a moderately accurate solution. The number after "FCD" in the figure is the setting of τ in Algorithm 2. The results show that the stage-wise case (τ = 1) of FCD is the fastest, as expected. With a moderate setting of τ, the objective value of the optimization (2) converges much faster with our method than with LBFGS-B. Thus a moderate τ is sufficient to achieve a very accurate solution, while retaining faster convergence and less running time than LBFGS-B.

Figure 6: Solver comparison between FCD with different parameter settings and LBFGS-B [12]. One column per dataset. The number after "FCD" is the setting of the maximum number of working set iterations (τ) of FCD. The stage-wise setting of FCD is the fastest. See the text for details.

4 Conclusion

In this work, we have presented a novel multi-class boosting method. Based on the dual problem, boosting is implemented using the column generation technique. Different from most existing multi-class boosting methods, we train a separate weak learner set for each class, which results in much faster convergence.

A wide range of experiments on several datasets demonstrates that the proposed multi-class boosting achieves competitive test accuracy compared with other existing multi-class boosting methods, yet converges much faster; moreover, owing to the proposed efficient coordinate descent method, its training is much faster than that of MultiBoost [1].

Acknowledgement. This work was supported by ARC grants LP120200485 and FT120100969.


  • [1] Shen, C., Hao, Z.: A direct formulation for totally-corrective multi-class boosting. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2011)
  • [2] Paisitkriangkrai, S., Shen, C., van den Hengel, A.: Sharing features in multi-class boosting via group sparsity. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2012)
  • [3] Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics 26 (1998) 1651–1686
  • [4] Shen, C., Li, H.: On the dual formulation of boosting algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 32 (2010) 2216–2231
  • [5] Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vision 57 (2004) 137–154
  • [6] Wang, P., Shen, C., Barnes, N., Zheng, H.: Fast and robust object detection using asymmetric totally-corrective boosting. IEEE Trans. Neural Networks & Learn. Syst. 23 (2012) 33–46
  • [7] Paisitkriangkrai, S., Shen, C., Zhang, J.: Fast pedestrian detection using a cascade of boosted covariance features. IEEE Trans. Circuits & Syst. for Video Tech. 18 (2008) 1140–1151
  • [8] Dietterich, T.G., Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. J. Artif. Int. Res. 2 (1995) 263–286
  • [9] Guruswami, V., Sahai, A.: Multiclass learning, boosting, and error-correcting codes. In: Proc. Annual Conf. Computational Learning Theory, New York, NY, USA, ACM (1999) 145–155

  • [10] Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Machine Learn. (1999) 80–91

  • [11] Demiriz, A., Bennett, K.P., Shawe-Taylor, J.: Linear programming boosting via column generation. Mach. Learn. 46 (2002) 225–254
  • [12] Zhu, C., Byrd, R.H., Lu, P., Nocedal, J.: Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization. ACM Trans. Math. Software (1994)
  • [13] Yuan, G.X., Chang, K.W., Hsieh, C.J., Lin, C.J.: A comparison of optimization methods and software for large-scale l1-regularized linear classification. J. Mach. Learn. Res. (2010) 3183–3234
  • [14] Fan, R.E., Chang, K.W., Hsieh, C.J., Wang, X.R., Lin, C.J.: LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res. 9 (2008) 1871–1874
  • [15] Guillaumin, M., Verbeek, J., Schmid, C.: Multimodal semi-supervised learning for image classification. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2010)
  • [16] Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. Volume 2. (2006) 2169–2178
  • [17] Xiao, J., Hays, J., Ehinger, K., Oliva, A., Torralba, A.: SUN database: Large-scale scene recognition from abbey to zoo. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2010) 3485 –3492