1 Introduction
Multiclass classification problems – problems with more than two classes – are commonplace in real-world scenarios. Some learning methods, such as decision tree inducers, can handle multiclass problems inherently, but others may require a different approach. Even techniques such as decision tree inducers may benefit from methods that decompose a multiclass problem in some manner. Typically, a collection of binary classifiers is trained and combined in some way to produce a multiclass classification. This process is called binarization. Popular techniques for adapting binary classifiers to multiclass problems include pairwise classification
[12], one-vs-all classification [15], and error-correcting output codes [6]. Ensembles of nested dichotomies [9] have been shown to be an effective alternative to these methods. Depending on the base classifier used, they can outperform both pairwise classification and error-correcting output codes [9].
In a nested dichotomy, the set of classes is split into two subsets recursively until there is only one class in each subset. Nested dichotomies are represented as binary tree structures (Fig. 1). At each node of a nested dichotomy, a binary classifier is learned to classify instances as belonging to one of the two subsets of classes. A nice feature of nested dichotomies is that class probability estimates can be computed in a natural way if the binary classifier used at each node can output two-class probability estimates.
The number of nested dichotomies for an $n$-class problem increases exponentially with the number of classes. One approach is to sample nested dichotomies at random to form an ensemble [9]. However, this may result in binary problems that are difficult for the base classifier to learn.
This paper is founded on the observation that some classes are generally easier to separate than others. For example, in a dataset of images of handwritten digits, the digits ‘5’ and ‘6’ are much more difficult to distinguish than the digits ‘0’ and ‘1’. This means that if ‘5’ and ‘6’ were put into opposite class subsets, the base classifier would have a more difficult task discriminating the two subsets than if they were grouped together. Moreover, if the base classifier assigns high probability to an incorrect branch when classifying a test instance, it is unlikely that the final prediction will be correct. Therefore, we should try to group similar classes into the same class subsets whenever possible, and separate them in lower levels of the tree near the leaf nodes.
In this paper, we propose a method for semi-random class subset selection, which we call “random-pair selection”, that attempts to group similar classes together for as long as possible. This means that the binary classifiers close to the root of the tree of classes can learn to distinguish higher-level features, while the ones close to the leaf nodes can focus on the more fine-grained distinctions between similar classes. We evaluate this method against other published class subset selection strategies.
This paper is structured as follows. In Section 2, we review other adaptations of ensembles of nested dichotomies. In Section 3, we describe the random-pair selection strategy and give an overview of how it works. We also cover theoretical advantages of our method over other methods, and analyse how this strategy affects the space of possible nested dichotomy trees to sample from. In Section 4, we evaluate these methods and compare them to other class subset selection techniques.
2 Related Work
The original framework of ensembles of nested dichotomies was proposed by Frank and Kramer in 2004 [9]. In this framework, a binary tree is sampled randomly from the set of possible trees, based on the assumption that each nested dichotomy is equally likely to be useful a priori. By building an ensemble of nested dichotomies in this manner, Frank and Kramer achieved results that are competitive with other binarization techniques, using decision trees and logistic regression as the two-class models for each node.
There have been a number of adaptations of ensembles of nested dichotomies since, mainly focusing on different class subset selection techniques. Dong et al. propose restricting the space of nested dichotomies to structures with balanced splits [7]. Doing so regulates the depth of the trees, which can reduce the size of the training data for each binary classifier and thus has a positive effect on the runtime. It was shown empirically that this method has little effect on accuracy. Dong et al. also consider nested dichotomies where the number of instances per subset, rather than the number of classes, is approximately balanced at each split. This also reduces the runtime, but can adversely affect the accuracy in rare cases.
The original framework of ensembles of nested dichotomies uses randomization to build an ensemble, i.e., the structure of each nested dichotomy in the ensemble is randomly selected, but each is built from the same data. Rodríguez et al. explore the use of other ensemble techniques in conjunction with nested dichotomies [16]. The authors found that improvements in accuracy can be achieved by using bagging [4], AdaBoost [10] and MultiBoost [17] with random nested dichotomies as the base learner, compared to solely randomizing the structure of the nested dichotomies. The authors also experimented with different base classifiers for the nested dichotomies, and found that using ensembles of decision trees as base classifiers yielded favourable results compared to individual decision trees.
Duarte-Villaseñor et al. propose splitting the classes more intelligently than randomly by using various clustering techniques [8]. They first compute the centroid of each class. Then, at each node of a nested dichotomy, they select the two classes with the furthest centroids as initial classes for the two subsets. Once these two classes have been picked, the remaining classes are assigned to one of the two subsets based on the distance of their centroids to the centroids of the initial classes. Duarte-Villaseñor et al. evaluate three different distance measures for determining the furthest centroids, taking into account the position of the centroids, the radius of the clusters, and the average distance of each instance from the centroid. They found that these class subset selection methods gave superior accuracy to the random methods previously proposed when the nested dichotomies were used for boosting.
3 Random-Pair Selection
We present a class selection strategy for choosing subsets in a nested dichotomy called random-pair selection. It has the same intention as the centroid-based methods proposed by Duarte-Villaseñor et al. [8]. Our method differs in that it takes a more direct approach to discovering similar classes by using the actual base classifier to decide which classes are more easily separable. Moreover, it incorporates an aspect of randomization.
3.1 The Algorithm
The process for constructing a nested dichotomy with random-pair selection is as follows:

1. Create a root node for the tree.

2. If the class set $C$ at the current node contains only one class, create a leaf node.

3. Otherwise, split $C$ into two subsets as follows:

(a) Select a pair of classes $(c_1, c_2)$ at random, where $c_1, c_2 \in C$ and $C$ is the set of all classes present at the current node.

(b) Train a binary classifier using only these two classes as training data. Then, use the remaining classes as test data, and observe which of the two initial classes the majority of instances of each test class are classified as.^1

(c) Create two subsets from the initial classes, $C_1 = \{c_1\}$ and $C_2 = \{c_2\}$. Each test class $c$ is added to $C_1$ or $C_2$ based on whether $c$ is more likely to be classified as $c_1$ or $c_2$.

(d) Train a new binary model using the full data at the node, using the new class labels $C_1$ and $C_2$ for each instance.

4. Create new nodes for both $C_1$ and $C_2$, and recurse for each child node from Step 2.

^1 When the dataset is large, it may be sensible to subsample the training data at each node when performing this step.
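To make the steps above concrete, the construction procedure can be sketched as follows. This is a minimal illustration, not the authors' WEKA implementation: the nested-tuple tree representation, the function names, and the nearest-centroid stand-in for the base classifier are all assumptions made for this example; any two-class learner with the same interface could be plugged in.

```python
import random
from statistics import mean

def train_centroid_binary(X, y01):
    """Stand-in base learner (hypothetical): nearest-centroid binary classifier.
    Returns a function mapping an instance to label 0 or 1."""
    def centroid(label):
        pts = [x for x, lab in zip(X, y01) if lab == label]
        return [sum(col) / len(pts) for col in zip(*pts)]
    c0, c1 = centroid(0), centroid(1)
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return lambda x: 0 if dist2(x, c0) <= dist2(x, c1) else 1

def build_nd(X, y, train_binary, rng=None):
    """Build a nested dichotomy with random-pair selection.
    Leaves are class labels; internal nodes are (model, left, right)."""
    rng = rng or random.Random(1)
    classes = sorted(set(y))
    if len(classes) == 1:                                  # Step 2: leaf node
        return classes[0]
    c1, c2 = rng.sample(classes, 2)                        # Step 3a: random pair
    pair = [(x, 0 if lab == c1 else 1) for x, lab in zip(X, y) if lab in (c1, c2)]
    pair_model = train_binary([x for x, _ in pair],        # Step 3b: pair classifier
                              [l for _, l in pair])
    subset1, subset2 = {c1}, {c2}
    for c in classes:                                      # Step 3c: route test classes
        if c not in (c1, c2):
            votes = [pair_model(x) for x, lab in zip(X, y) if lab == c]
            (subset1 if mean(votes) < 0.5 else subset2).add(c)
    y01 = [0 if lab in subset1 else 1 for lab in y]        # Step 3d: retrain on all data
    model = train_binary(X, y01)
    def subtree(subset):                                   # Step 4: recurse per subset
        keep = [(x, lab) for x, lab in zip(X, y) if lab in subset]
        return build_nd([x for x, _ in keep], [lab for _, lab in keep],
                        train_binary, rng)
    return (model, subtree(subset1), subtree(subset2))
```

Each recursive call receives strictly fewer classes than its parent, so the recursion terminates with one leaf per class.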
This selection algorithm is illustrated in Fig. 2. The process for making predictions with this class selection method is identical to that for the original ensembles of nested dichotomies. Assuming that the base classifier can produce class probability estimates, the probability of an instance belonging to a class is the product of the estimates given by the binary classifiers on the path from the root to the leaf node corresponding to that class.
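As a sketch of this prediction rule, assume a nested dichotomy is stored as a nested tuple: a leaf is a class label, and an internal node is (model, left, right), where model(x) returns the estimated probability that x belongs to the right subset. These representation details are our own assumptions for illustration; the path-product rule itself is as described above.

```python
def class_probabilities(tree, x):
    """Estimate P(class | x) for every class in a nested dichotomy.
    A leaf is a class label; an internal node is (model, left, right),
    where model(x) gives the estimated probability of the right subset."""
    probs = {}
    def walk(node, p):
        if not isinstance(node, tuple):   # leaf: store accumulated path product
            probs[node] = p
        else:
            model, left, right = node
            p_right = model(x)
            walk(left, p * (1.0 - p_right))
            walk(right, p * p_right)
    walk(tree, 1.0)
    return probs
```

For example, a three-class tree whose root assigns 0.8 to the right branch and whose right child assigns 0.5 to each side yields probabilities 0.2, 0.4 and 0.4. These sum to one by construction, since the two estimates at every internal node sum to one.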
3.2 Analysis of the Space of Nested Dichotomies
To build an ensemble of nested dichotomies, a set of nested dichotomies needs to be sampled from the space of all nested dichotomies. The size of this space grows very quickly as the number of classes increases. Frank and Kramer calculate that the number of potential nested dichotomies for an $n$-class problem is $(2n-3)!!$ [9]. For a 10-class problem, this equates to 34,459,425 distinct nested dichotomies. Using a class-balanced class-subset selection strategy reduces this number to
$$T(n) = \begin{cases} \frac{1}{2}\binom{n}{n/2}\,T\!\left(\frac{n}{2}\right)^{2} & \text{if } n \text{ is even} \\ \binom{n}{\frac{n+1}{2}}\,T\!\left(\frac{n+1}{2}\right)T\!\left(\frac{n-1}{2}\right) & \text{if } n \text{ is odd} \end{cases} \qquad (1)$$
where $T(1) = T(2) = 1$ [7]. The number of class-balanced nested dichotomies is still very large, giving 113,400 possible nested dichotomies for a 10-class problem. The subset selection method based on clustering [8] takes this idea to the extreme, and gives only a single nested dichotomy for any given number of classes because the class subset selection is deterministic. Even though the nested dichotomy produced by this subset selection strategy is likely to be a useful one, it does not lend itself well to ensemble methods.
The size of the space of nested dichotomies that we sample using the random-pair selection method varies for each dataset, and is dependent on the base classifier. The upper bound for the number of possible binary problems at each node is the number of ways to select two classes at random from an $n$-class dataset, i.e., $\binom{n}{2} = n(n-1)/2$. In practice, many of these randomly chosen pairs are likely to produce the same class subsets under our method, so the number of possible class splits is likely to be lower than this value. For illustrative purposes, we empirically estimate this value for the logistic regression base learner. We enumerate and count the number of possible class splits for our splitting method at each node of a nested dichotomy for a number of datasets, and plot this number against the number of classes at the corresponding node (Fig. 3a). We also show a similar plot for the case where C4.5 is used as the base classifier (Fig. 3b). Fitting a second-degree polynomial to the data for logistic regression yields
$$f(n) = \alpha n^{2} + \beta n + \gamma \qquad (2)$$
where the coefficients $\alpha$, $\beta$ and $\gamma$ are obtained from the fit shown in Fig. 3a.
Assuming we apply logistic regression, we can use this expression to estimate the number of possible class splits for an arbitrary number of classes, given a rough estimate of the distribution of classes at each node. Nested dichotomies constructed with random-pair selection are not guaranteed to be balanced, so we average the class-subset proportions over a large sample of nested dichotomies on different datasets to obtain the average fractions of the classes that fall into each of the two subsets. Given this information, we can estimate the number of possible nested dichotomies with logistic regression by the recurrence relation
$$T_{RP}(n) = f(n)\,T_{RP}(n_1)\,T_{RP}(n_2) \qquad (3)$$
with $n_1$ and $n_2$ the average sizes of the two class subsets,
where $T_{RP}(n) = 1$ when $n \le 2$. Table 1 shows the number of distinct nested dichotomies that can be created for up to 12 classes under random-pair selection (using this estimate), class-balanced selection, and completely random selection.






Table 1. Number of distinct nested dichotomies under each selection strategy.

Classes   Completely random   Class-balanced   Random-pair (estimated)
2         1                   1                1
3         3                   3                1
4         15                  3                5
5         105                 30               15
6         945                 90               36
7         10,395              315              182
8         135,135             315              470
9         2,027,025           11,340           1,254
10        34,459,425          113,400          7,002
11        654,729,075         1,247,400        28,189
12        13,749,310,575      3,742,200        81,451
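The first two columns of Table 1 can be reproduced from closed-form counts: $(2n-3)!!$ for completely random selection [9], and the class-balanced recurrence of Dong et al. [7]. The following sketch (function names are ours) computes both:

```python
from functools import lru_cache
from math import comb

def count_random(n):
    """Number of distinct nested dichotomies for n classes: (2n - 3)!! [9]."""
    total = 1
    for k in range(2 * n - 3, 1, -2):   # product of odd numbers down to 3
        total *= k
    return total

@lru_cache(maxsize=None)
def count_class_balanced(n):
    """Number of class-balanced nested dichotomies [7]."""
    if n <= 2:
        return 1
    if n % 2 == 0:                      # split n classes into two halves of n/2
        return comb(n, n // 2) * count_class_balanced(n // 2) ** 2 // 2
    return (comb(n, (n + 1) // 2)       # split into (n+1)/2 and (n-1)/2
            * count_class_balanced((n + 1) // 2)
            * count_class_balanced((n - 1) // 2))
```

For a 10-class problem, count_random(10) returns 34,459,425 and count_class_balanced(10) returns 113,400, matching the corresponding row of Table 1.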
3.3 Advantages Over Centroid Methods
Random-pair selection has two theoretical advantages compared to the centroid-based methods proposed by the authors of [8]: (a) an element of randomness makes it more suitable for ensemble learning, and (b) it adapts to the base classifier that is used.
In the centroid-based methods, each class split is deterministically chosen based on some distance metric. This means that the structure of every nested dichotomy in an ensemble will be the same. This matters less in ensemble techniques that alter the dataset or the weights inside the dataset (e.g., bagging or boosting). However, an additional element of randomization in ensembles is typically beneficial. When random-pair selection is employed, the two initial classes are randomly selected in all nested dichotomies, increasing the total number of nested dichotomies that can be constructed, as discussed in the previous section.
Centroid-based methods assume that a smaller distance between two class centroids is indicative of class similarity. While this is often the case, sometimes the centroids can be relatively meaningless. An example is the CIFAR-10 dataset, a collection of small natural images of various categories such as cats, dogs and trucks [13]. The classes are naturally divided into two subsets – animals and vehicles. Fig. 4 shows an image representation of the centroid of each class, with a sample image from the respective class below it. It is clear that most of these class centroids do not contain much useful information for discriminating between the classes.
This effect is clearer when evaluating a simple classifier that assigns each instance to the class with the closest centroid in the training data. For illustrative purposes, see the confusion matrix of such a classifier when trained on the CIFAR-10 dataset (Fig. 5). It is clear from the confusion matrix that the centroids cannot be relied upon to produce meaningful predictions in all cases for this data.
A disadvantage of random-pair selection compared to centroid-based methods is an increase in runtime. Under our method, we need to train additional base classifiers during the class subset selection process. However, the extra base classifiers are only trained on a subset of the data at a node, i.e., only two of the classes, and we can subsample this data during this step if we need to improve the runtime further.
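The nearest-centroid baseline behind Fig. 5 can be sketched as follows. This is a generic reimplementation under our own naming conventions, not the code used for the paper's experiments:

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid per class from the training data."""
    labels = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in labels])
    return labels, centroids

def predict_nearest_centroid(X, labels, centroids):
    """Assign each instance to the class whose centroid is closest (Euclidean)."""
    # Broadcast to an (instances x classes) distance matrix, then take the argmin.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[dists.argmin(axis=1)]
```

On well-separated low-dimensional data this classifier is reasonable; on image data such as CIFAR-10, where the per-class mean image is nearly featureless, it confuses many class pairs, which is the failure mode the confusion matrix in Fig. 5 illustrates.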
4 Experimental Results
We present an evaluation of the random-pair selection method on 18 datasets from the UCI repository [1]. Table 2 lists and describes the datasets we used. We specifically selected datasets with at least five classes, as our method will not have a large impact on datasets with few classes: the number of possible nested dichotomies is relatively small when the number of classes is small.
4.1 Experimental Setup
All experiments were conducted in WEKA [11] and performed with 10 times 10-fold cross-validation.^2 The default settings in WEKA for the base learners and ensemble methods were used in our evaluation. We compared our class subset selection method (RPND) to nested dichotomies based on clustering (NDBC) [8], class-balanced nested dichotomies (CBND) [7], and completely random selection (ND) [9]. We did not compare against other variants of nested dichotomies, such as data-balanced nested dichotomies [7], nested dichotomies based on clustering with radius [8], and nested dichotomies based on clustering with average radius [8], because they were found to have the same or worse performance on average in [7] and [8], respectively. We used logistic regression and C4.5 as the base learners for our experiments, as they occupy opposite ends of the bias-variance spectrum. In our results tables, a bullet (•) indicates a statistically significant accuracy gain, and an open circle (◦) indicates a statistically significant accuracy reduction, obtained by using the random-pair method compared with another method. To establish significance, we used the corrected resampled paired t-test [14].

^2 Our implementations can be found in the ensemblesOfNestedDichotomies package in WEKA.

Dataset              Classes  Instances  Attributes    Dataset        Classes  Instances  Attributes

audiology            24    226    70      optdigits      10    5620    65
krkopt               18    28056  7       page-blocks    5     5473    11
LED-24               10    5000   25      pendigits      10    10992   17
letter               26    20000  17      segment        7     2310    20
mfeat-factors        10    2000   217     shuttle        7     58000   10
mfeat-fourier        10    2000   77      usps           10    9298    257
mfeat-karhunen       10    2000   65      vowel          11    990     14
mfeat-morphological  10    2000   7       yeast          10    1484    9
mfeat-pixel          10    2000   241     zoo            7     101     18
4.2 Single Nested Dichotomy
We expect that intelligent class subset selection will have a larger impact in small ensembles of nested dichotomies, because as ensembles grow larger, the worse-performing ensemble members have less influence over the final predictions. Therefore, we first compare a single nested dichotomy using random-pair selection to a single nested dichotomy obtained with other class selection methods.
Table 3 shows the classification accuracy and standard deviations of each method when training a single nested dichotomy. When logistic regression is used as the base learner, compared to the random methods (CBND and ND), we obtain a significant accuracy gain in most cases, and comparable accuracy in all others. When using C4.5 as the base learner, our method is preferable to the random methods in some cases, with all other datasets showing comparable accuracy.
In comparison to NDBC, our method gives similar accuracy overall, with three significantly better results, four significantly worse results, and the rest comparable over both base learners. It is to be expected that NDBC sometimes performs better than our method when only a single nested dichotomy is built, because NDBC deterministically selects the class split that is likely to be the most easily separable. Our method attempts to produce an easily separable class subset selection from a pool of possible options, where each option is as likely as any other to be chosen.
4.3 Ensembles of Nested Dichotomies
Ensembles of nested dichotomies typically outperform single nested dichotomies. The original method for creating an ensemble of nested dichotomies is a randomization approach, but it was later found that better performance can be obtained by bagging and boosting nested dichotomies [16]. For this reason, we consider three types of ensembles of nested dichotomies in our experiments: bagged, boosted with AdaBoost and boosted with MultiBoost (the latter two applied with resampling based on instance weights). We built ensembles of 10 nested dichotomies for these experiments.
Bagging.
Table 4 shows the results of using bagging to construct an ensemble of nested dichotomies for each method and for both base learners. When logistic regression is used as a base learner, our method outperforms all other methods in many cases. When C4.5 is used as a base learner, our method compares favourably with NDBC and achieves comparable accuracy to the random methods. Our method is better in a bagging scenario than NDBC because of the first problem highlighted in Section 3.3, i.e., using the furthest centroids to select a class split results in a deterministic class split. Evidently, with bagged datasets, this method of class subset selection is too stable to be utilized effectively. Our method, on the other hand, is sufficiently unstable to be useful in a bagged ensemble.
AdaBoost.
Table 5 shows the results of using AdaBoost to build an ensemble of nested dichotomies for each method and for both base learners. When comparing with the random methods, we observe a similar result to the bagged ensembles. When using logistic regression, we see a significant improvement in accuracy in many cases, and when C4.5 is used, we typically see comparable results, with a small number of significant accuracy gains. When comparing with NDBC, we see a small improvement for the vast majority of datasets, but these differences are almost never individually significant. In one instance (krkopt with C4.5 as the base learner), we achieve a significant accuracy gain using our method.
MultiBoost.
Table 6 shows the results of using MultiBoost to build an ensemble of nested dichotomies for each method and for both base learners. Compared to the random methods, again we see similar results to the other ensemble methods – using logistic regression as the base learner results in many significant improvements, and using C4.5 as the base learner typically produces comparable results, with few significant improvements. In comparison to NDBC, we see many small (although statistically insignificant) improvements across both base learners, with some significant gains in accuracy on some datasets.
4.4 Training Time
Fig. 6 shows the training time in milliseconds for training a single RPND and a single NDBC, with logistic regression and C4.5 as the base learners, for each of the datasets used in this evaluation. As can be seen from the plots, there is a computational cost to building an RPND over an NDBC, which is to be expected as an additional classifier is trained and tested at each split node of the tree. The gradient of both plots is approximately one, which indicates that our method does not increase the computational complexity of the problem. The runtime overhead is comparatively larger for logistic regression than for C4.5.
4.5 Case Study: CIFAR-10
To test how well our method adapts to other base learners, we trained nested dichotomies with convolutional networks as the base learners to classify the CIFAR-10 dataset [13]. Convolutional networks learn features from the data automatically, and perform well on high-dimensional, highly correlated data such as images. We implemented the nested dichotomies and convolutional networks in Python using Lasagne [5], a wrapper for Theano [2, 3]. The convolutional network that we used as the base learner is relatively simple; it has two convolutional layers with 32 filters each, a max-pooling layer after each convolutional layer, and a fully connected layer before a softmax layer.
As discussed in Section 3.3, the centroids for a dataset like CIFAR-10 appear not to be very descriptive, and as such, we expect NDBC with convolutional networks as the base learner to produce class splits that are not as well founded as those in RPND. We present a visualisation of the NDBC produced from the CIFAR-10 dataset, and an example of a nested dichotomy built with random-pair selection (Fig. 7). Both methods produce a reasonable dichotomy structure, but there are some cases in which the random-pair method results in more intuitive splits. For example, the root node of the RPND splits the full set of classes into the two natural subsets (vehicles and animals), whereas the NDBC omits the ‘car’ class from the left-hand subset. Two pairs of similar classes in the animal subset – ‘deer’ and ‘horse’, and ‘cat’ and ‘dog’ – are kept together until near the leaves in the RPND, but are split up relatively early in the NDBC. Despite this, the accuracy and runtime of both methods were comparable. Of course, the quality of the nested dichotomy under random-pair selection depends on the initial pair of classes that is selected: if two similar classes are chosen as the initial random pair, the tree can end up with splits that make less intuitive sense.
5 Conclusion
In this paper, we have proposed a semi-random method of class subset selection for ensembles of nested dichotomies, where the class selection is directly based on the ability of the base classifier to separate classes. Our method non-deterministically produces an easily separable class split, which improves accuracy over random methods not only for a single classifier, but also for ensembles of nested dichotomies. Our method also outperforms other non-random methods when nested dichotomies are used in a bagged ensemble or an ensemble boosted with MultiBoost, and otherwise gives comparable results.
In the future, it would be interesting to explore selecting several random pairs of classes at each node, and choosing the best of the pairs to create the final class subsets. This will obviously increase the runtime, but may help to produce more accurate individual classifiers and small ensembles. We also wish to explore the use of convolutional networks in nested dichotomies further.
Acknowledgements.
This research was supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand. The authors also thank NVIDIA for donating a K40c GPU to support this research.
References

[1]
Asuncion, A., Newman, D.: UCI machine learning repository (2007)

[2]
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I.J., Bergeron, A., Bouchard, N., Bengio, Y.: Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop (2012)
 [3] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., Bengio, Y.: Theano: a CPU and GPU math expression compiler. In: Proceedings of the Python for Scientific Computing Conference (SciPy) (Jun 2010), Oral Presentation
 [4] Breiman, L.: Bagging predictors. Machine learning 24(2), 123–140 (1996)
 [5] Dieleman, S., Schlüter, J., Raffel, C., Olson, E., Sønderby, S.K., Nouri, D., Maturana, D., Thoma, M., Battenberg, E., Kelly, J., Fauw, J.D., Heilman, M., diogo149, McFee, B., Weideman, H., takacsg84, peterderivaz, Jon, instagibbs, Rasul, D.K., CongLiu, Britefury, Degrave, J.: Lasagne: First release. (Aug 2015), http://dx.doi.org/10.5281/zenodo.27878

[6]
Dietterich, T.G., Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research pp. 263–286 (1995)
 [7] Dong, L., Frank, E., Kramer, S.: Ensembles of balanced nested dichotomies for multiclass problems. In: Knowledge Discovery in Databases: PKDD 2005, pp. 84–95. Springer (2005)

[8]
DuarteVillaseñor, M.M., CarrascoOchoa, J.A., MartínezTrinidad, J.F., FloresGarrido, M.: Nested dichotomies based on clustering. In: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, pp. 162–169. Springer (2012)
 [9] Frank, E., Kramer, S.: Ensembles of nested dichotomies for multi-class problems. In: Proceedings of the Twenty-First International Conference on Machine Learning. p. 39. ACM (2004)

[10]
Freund, Y., Schapire, R.E.: Game theory, online prediction and boosting. In: Proceedings of the ninth annual conference on Computational learning theory. pp. 325–332. ACM (1996)
 [11] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter 11(1), 10–18 (2009)
 [12] Hastie, T., Tibshirani, R., et al.: Classification by pairwise coupling. The annals of statistics 26(2), 451–471 (1998)
 [13] Krizhevsky, A.: Learning Multiple Layers of Features from Tiny Images. Master’s thesis, University of Toronto, Toronto (2009)
 [14] Nadeau, C., Bengio, Y.: Inference for the generalization error. Machine Learning 52(3), 239–281 (2003)
 [15] Rifkin, R., Klautau, A.: In defense of one-vs-all classification. The Journal of Machine Learning Research 5, 101–141 (2004)
 [16] Rodríguez, J.J., García-Osorio, C., Maudes, J.: Forests of nested dichotomies. Pattern Recognition Letters 31(2), 125–132 (2010)
 [17] Webb, G.I.: MultiBoosting: A technique for combining boosting and wagging. Machine learning 40(2), 159–196 (2000)