 # Better Multi-class Probability Estimates for Small Data Sets

Many classification applications require accurate probability estimates in addition to good class separation, but classifiers are often designed with a focus on the latter only. Calibration is the process of improving probability estimates by post-processing, but commonly used calibration algorithms work poorly on small data sets and assume the classification task to be binary. Both of these restrictions limit their real-world applicability. The previously introduced Data Generation and Grouping algorithm alleviates the problem posed by small data sets, and in this article we demonstrate that it can also be applied to multi-class problems, which removes the other limitation. Our experiments show that calibration error can be decreased using the proposed approach and that the additional computational cost is acceptable.


## 1 Introduction

In classification, an object of interest is predicted to belong to one of a set of discrete, predefined categories called classes. An example of a classification problem is recognizing handwritten digits. In many applications it is also important to quantify the uncertainty of these predictions: in the handwritten digits example, how certain can we be that the digit is a one and not a seven or any other digit? If the results of a classifier are used as input for decision making, or if there are costs involved in the classification decision, then it is important that, in addition to good classification accuracy, the probabilities predicted by the classifier are accurate. A classifier is said to be well calibrated if the predicted probability of an event is close to the proportion of such events among a group of similar predictions [Dawid1982]. However, the main objective in classifier design is often good class separation, not accurate probability estimation, and therefore many commonly used classifiers are not well calibrated. The process of improving a classifier's probability estimates by post-processing is called calibration. Most commonly used calibration algorithms work only on binary problems and need a fair amount of data, separate from the training and testing data to avoid bias, which severely restricts their applicability to real-world problems.

To tackle these two limitations, we will demonstrate two ways to generalize a binary calibration method, previously shown to work on small data sets, to multi-class problems. Using the proposed calibration approach leads to statistically significant improvements in calibration error metrics. The rest of this article is structured as follows. Section 2 briefly reviews relevant literature on the topic, Section 3 explains the experiments used for testing the proposed approaches, and the results from those experiments are presented in Section 4. The results are discussed in Section 5 and Section 6 concludes the article.

## 2 Background

Calibration algorithms need training data, and to avoid bias this data needs to be separate from the data used for training the classifier. Depending on the learning algorithm, a minimum of about 1000 to 2000 samples is needed for the calibration data set to avoid overfitting. Non-parametric calibration algorithms are particularly prone to overfitting on small data sets, and their performance seems to keep improving with increasing calibration data set size [NiculescuMizil2005ICML]. This means that the total amount of training data needs to be large so that enough data can be set aside for calibration. In addition, a separate data set needs to be held out for testing. However, relatively small data sets are quite common in many real-world modelling tasks.

It has been previously shown that calibrating binary classifiers with the traditional calibration approach does not work very well when the available data is limited. However, it is possible to solve the problem, at least partially, by generating more calibration data with a Monte Carlo cross validation approach [Alasalmi2018, Alasalmi2020] using the isotonic regression (IR) [NiculescuMizil2005ICML] or ensemble of near isotonic regression models (ENIR) [Naeini2018] calibration algorithms. Many classification problems are not binary; instead, the task often is to classify the data into multiple classes ($K > 2$), but most calibration algorithms work on binary ($K = 2$) classification problems only. This is also true for the above-mentioned solution that uses the Data Generation and Grouping (DGG) algorithm [Alasalmi2020], which works with binary calibration algorithms only.

A solution to this problem is to break the multi-class problem into several binary problems, solve each binary classification and calibration problem independently, and combine the results into multi-class probability estimates [Zadrozny2002]. The premise, obviously, is that better calibrated binary probabilities result in better calibrated multi-class probabilities. The question then becomes how to divide the problem into binary problems and how to combine the results. Two intuitive ways to break the multi-class problem into binary problems are one-vs-rest and all pairs.

In the one-vs-rest approach, one of the classes is treated as the positive class while the rest are collectively treated as the negative class, and this is repeated for each class. The number of binary problems then equals the number of classes $K$. Probability estimates from the one-vs-rest approach can be combined by simply normalizing the binary probabilities for each class linearly so that they sum up to one. This results in error rates comparable to combining the probabilities with least squares or coupling algorithms [Zadrozny2002]. Using one class as the positive class and the rest of the data as the negative class leads to class imbalance, which becomes more pronounced as the number of classes grows. However, the number of binary problems in this approach remains reasonable.
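The normalization step described above can be sketched in a few lines of Python. This is an illustrative sketch, not the article's implementation; the function name and the fallback for an all-zero input are our own additions.

```python
def one_vs_rest_probs(binary_probs):
    """Combine one-vs-rest probabilities by linear normalization.

    binary_probs: for a single observation, the positive-class
    probability produced by each of the K one-vs-rest binary
    classifiers. Returns estimates that sum to one.
    """
    total = sum(binary_probs)
    if total == 0:  # degenerate case: fall back to a uniform estimate
        return [1.0 / len(binary_probs)] * len(binary_probs)
    return [p / total for p in binary_probs]

# Example: three one-vs-rest classifiers score one observation
print(one_vs_rest_probs([0.6, 0.3, 0.3]))  # -> [0.5, 0.25, 0.25]
```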

In the all pairs approach, all possible pairs of classes are enumerated and one class in each pair is selected as the positive class while the other serves as the negative class. There are $K(K-1)/2$ possible pairs of classes in this approach, meaning that the number of binary problems is larger than with the one-vs-rest approach when $K > 3$, as can be seen from Table 1. However, the binary problems are faster to learn in the all pairs approach as only instances from the two classes are included in each. The binary problems are also more balanced in the all pairs approach. After learning and calibrating the binary classifiers, the probabilities for the multi-class problem can be combined with pairwise coupling, which was originally developed by Hastie and Tibshirani [Hastie1998] and later improved by Wu et al. [Wu2004].
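The coupling step can be sketched as follows. This is a pure-Python rendition of the second method of Wu et al. [Wu2004], written from the published description as a fixed-point iteration over the pairwise probabilities; treat it as an illustrative sketch rather than the exact implementation used in the article.

```python
def pairwise_coupling(r, iters=200):
    """Combine pairwise probabilities into multi-class estimates.

    r[i][j] is the estimated P(class i | class i or j) from the binary
    classifier trained on the pair (i, j); r[j][i] = 1 - r[i][j].
    Minimizes sum over i != j of (r_ji * p_i - r_ij * p_j)^2 subject
    to sum(p) = 1, via the fixed-point iteration of Wu et al.
    """
    k = len(r)
    # Q matrix of the quadratic objective
    q = [[0.0] * k for _ in range(k)]
    for t in range(k):
        q[t][t] = sum(r[j][t] ** 2 for j in range(k) if j != t)
        for j in range(k):
            if j != t:
                q[t][j] = -r[j][t] * r[t][j]
    p = [1.0 / k] * k  # start from the uniform distribution
    for _ in range(iters):
        for t in range(k):
            # p^T Q p with the current estimate
            qp = sum(q[i][j] * p[i] * p[j] for i in range(k) for j in range(k))
            p[t] = (qp - sum(q[t][j] * p[j] for j in range(k) if j != t)) / q[t][t]
            s = sum(p)  # re-normalize after each component update
            p = [x / s for x in p]
    return p
```

As a sanity check, feeding in pairwise probabilities that are consistent with a known distribution, e.g. r[i][j] = p[i] / (p[i] + p[j]) for p = (0.5, 0.3, 0.2), recovers that distribution.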

The two above mentioned intuitive ways for breaking up the multi-class problem are two special cases of a more general idea that uses so called error correcting output coding (ECOC) matrices [Allwein2000]. ECOC matrices can be either complete or sparse. However, the number of binary problems grows exponentially as the number of classes grows when using complete ECOC matrices and there are computational problems with sparse ECOC matrices making both infeasible in practice [Gebel2009].

## 3 Experiments

In this study, the feasibility of the DGG data generation algorithm for calibrating multi-class classification problems was tested. The one-vs-rest approach with normalization and the all pairs approach with pairwise coupling were compared when using the DGG algorithm along with ENIR calibration. The procedure in the context of binary calibration is described more thoroughly in [Alasalmi2020]. Calibration error was quantified with logarithmic loss (LL) and mean squared error (MSE). LL is defined in Equation 1 and MSE in Equation 2. In the equations, $N$ stands for the number of observations, $K$ stands for the number of class labels, $\log$ is the natural logarithm, $y_{i,j}$ equals $1$ if observation $i$ belongs to class $j$ and $0$ otherwise, and $p_{i,j}$ stands for the predicted probability that observation $i$ belongs to class $j$. A smaller value of each metric indicates better calibration.

$$\mathrm{LL} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{K} y_{i,j}\,\log(p_{i,j}) \tag{1}$$

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{K} \left(y_{i,j} - p_{i,j}\right)^2 \tag{2}$$
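Both error metrics can be computed directly from their definitions in Equations 1 and 2. A minimal sketch, assuming one-hot label matrices; note that a practical implementation would clip probabilities away from zero before taking the logarithm:

```python
import math

def log_loss(y, p):
    """LL of Equation 1. y and p are N x K lists; y is one-hot.

    Assumes p[i][j] > 0 wherever y[i][j] == 1; real implementations
    clip probabilities to avoid log(0).
    """
    n = len(y)
    return -sum(y[i][j] * math.log(p[i][j])
                for i in range(n) for j in range(len(y[i]))) / n

def mean_squared_error(y, p):
    """MSE of Equation 2: squared error summed over classes,
    averaged over observations."""
    n = len(y)
    return sum((y[i][j] - p[i][j]) ** 2
               for i in range(n) for j in range(len(y[i]))) / n

y = [[1, 0, 0], [0, 1, 0]]
p = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]
print(log_loss(y, p))            # -> approximately 0.2899
print(mean_squared_error(y, p))  # -> approximately 0.10
```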

A stratified 10-fold cross validation was used to create data samples, and Student's paired t-test with unequal variance assumption and the Welch modification to the degrees of freedom [welch1947] was used to determine if there was a statistically significant difference between calibration scenarios.
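The Welch modification amounts to using the two samples' own variances in the test statistic and the Welch-Satterthwaite approximation for the degrees of freedom. A minimal sketch of those two quantities (the p-value lookup against the t distribution is omitted; in practice a statistics library handles it, and the article additionally pairs the per-fold scores):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two score samples, e.g. per-fold error metrics from two
    calibration scenarios."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb          # squared standard error of the difference
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```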

and the task is to classify each protein to the correct location based on the analysis results. The Development index data set is available from Kaggle data sets. The rest of the data sets used in the experiments are freely available from the UCI machine learning repository [Lichman:2013].

DGG data generation with ENIR calibration has been shown to work well especially with naive Bayes (NB) and random forest (RF) classifiers on binary problems and they are both capable of producing multi-class probability estimates without modification so they were selected as the base classifiers for our experiments. A total of five different calibration scenarios were compared in this study: multi-class uncalibrated probabilities (Multi-class Raw), one-vs-rest with either uncalibrated (One-vs-rest Raw) or calibrated (One-vs-rest DGG + ENIR) probabilities, and all pairs with either uncalibrated (All pairs Raw) or calibrated (All pairs DGG + ENIR) probabilities. In addition to calibration error metrics, computation times were recorded on a computational server (Intel Xeon E5-2650 v2 @ 2.60GHz, 196GB RAM) for each calibration scenario.

## 4 Results

Results of the experiments are summarized in Table 3 which shows how many of the data sets had statistically significant changes in calibration performance after our calibration treatment on the data sets grouped by the classifier used, the approach to form the binary problems, and by the number of classes. Full results, MSEs and LLs, for each of the tested data sets in each calibration scenario are reported in Tables 4 and 5 for naive Bayes and random forest, respectively.

Breaking up the multi-class problem into one-vs-rest binary problems and combining the results by normalization improved the calibration of naive Bayes on almost all data sets even without calibrating the binary classifier probabilities. The same was not true for the all pairs approach, which performs worse on some data sets and achieves approximately the same level of performance as uncalibrated multi-class classification on others. Calibrating the binary naive Bayes classifiers in the one-vs-rest approach improved the error metrics on ten of the twelve data sets compared to both the uncalibrated multi-class and the uncalibrated one-vs-rest scenarios. One exception was the Waveform data set, where LL was not significantly different from the uncalibrated one-vs-rest scenario even though MSE was. Calibration did, however, improve both MSE and LL on that data set compared to uncalibrated multi-class classification.

Calibrating the binary naive Bayes classifiers in the all pairs approach improved calibration on seven of the twelve data sets compared to uncalibrated multi-class classification. On two data sets MSE increased while LL decreased and on one of the data sets the treatment increased both MSE and LL.

Overall the one-vs-rest approach with DGG + ENIR calibration coupled with normalization was the best performing calibration scenario for naive Bayes. One-vs-rest calibration performed better than all pairs on five data sets, there was no statistically significant difference on six data sets, and all pairs was better on one data set.

With the random forest classifier, breaking up the multi-class problem into binary problems increased calibration error metrics on four data sets with the one-vs-rest approach and on eight data sets with the all pairs approach. After calibrating the binary problems, calibration improved on six data sets with the one-vs-rest approach and on five data sets with the all pairs approach compared to the corresponding uncalibrated scenario. Compared to the uncalibrated multi-class scenario, calibration performance with the one-vs-rest approach improved on four data sets while being similar on the other eight data sets. The calibrated all pairs was able to improve calibration only on three data sets, was neutral on two data sets, and decreased calibration performance on seven data sets compared to the uncalibrated multi-class scenario.

As with naive Bayes, the one-vs-rest approach fared better than the all pairs approach overall. On seven data sets the one-vs-rest approach did better than the all pairs approach, on four data sets there was no difference, and on one data set the all pairs approach resulted in lower calibration error.

Average computation times for training and calibrating the classifiers were recorded and the results are shown in Table 6. For the one-vs-rest and the all pairs approaches the calibration times are presented as time consumed for each binary problem to make the numbers comparable when taking into account the number of binary problems on each data set. Naive Bayes was extremely fast to train and although breaking up the classification problem into several binary problems increased the computation times this increase was negligible in practice.

For random forest, too, the multi-class classifier was clearly faster to train than either the all pairs or the one-vs-rest. The all pairs classifier was, however, clearly faster to train than the one-vs-rest classifier but with such small data sets this difference is still not very meaningful in practice.

DGG data generation and ENIR calibration took approximately the same time for each binary problem for both the one-vs-rest and the all pairs approaches as the number of generated calibration data points is the same in both approaches. What was a bit surprising was that there was no difference in calibration times, per binary problem, between the classifiers. The overall calibration time then depends mostly on the number of binary problems.

## 5 Discussion

Naive Bayes is known to be poorly calibrated because its assumptions about feature independence rarely hold. It is not a big surprise that calibration improves its performance but it is surprising that using the one-vs-rest approach can improve its calibration even without calibrating the binary classifiers. Calibrating the binary naive Bayes classifiers works for both one-vs-rest and all pairs approaches. The calibrated one-vs-rest approach seems to be better suited for naive Bayes than the all pairs and the difference is often statistically significant.

Random forest classifier is not as poorly calibrated as naive Bayes but has still been shown to improve with calibration on some binary problems even with small data sets by using DGG for generating the calibration data set. It is clear from our experiments that the one-vs-rest approach works better with random forest than the all pairs approach does. As the all pairs approach actually decreases calibration performance on some data sets, especially if the number of classes is high, the one-vs-rest is the recommended approach for random forest.

Computation time grew linearly as a function of the number of binary problems because the complexity of DGG data generation depends mainly on the amount of data to be generated, which was held constant for each scenario. This indicates that as the number of classes grows, so does the calibration time. This might become more of an issue with the all pairs approach than with the one-vs-rest approach. However, the training times for calibration were only a few seconds per binary problem, and the prediction times are negligible. In addition, a parallel implementation would be trivial and would decrease computation time considerably.

Comparison of the proposed method with calibration approaches that can directly calibrate multi-class probabilities is left for future work.

## 6 Conclusions

Data Generation and Grouping with IR or ENIR calibration can be generalized to multi-class problems, as we have shown in this work using ENIR calibration. Using our proposed approach, calibration error can be decreased on many classification problems, as demonstrated by our experiments. This is an important finding as traditional calibration algorithms perform poorly on small data sets and not all classification problems are binary. DGG data generation adds computational complexity which grows linearly as a function of the number of binary problems. As the number of binary problems grows more rapidly with the all pairs approach, the one-vs-rest approach has an advantage as the number of classes grows. More importantly, the one-vs-rest approach performed better than the all pairs approach in many cases and did not increase calibration error on any of the tested data sets, whereas the all pairs approach did on some of them. The computation times for training the calibration algorithm were merely seconds per binary problem on the tested data sets, which should not discourage the use of this algorithm when good calibration is needed, especially with a parallel implementation.

###### Acknowledgements.
The authors would like to thank Infotech Oulu, Jenny and Antti Wihuri Foundation, Tauno Tönning Foundation, and Walter Ahlström Foundation for financial support of this work.