Binary Classifier Calibration: Non-parametric approach

01/14/2014
by Mahdi Pakdaman Naeini, et al.
University of Pittsburgh


Abstract

Accurate calibration of probabilistic predictive models learned from data is critical for many practical prediction and decision-making tasks. There are two main categories of methods for building calibrated classifiers. One approach is to develop methods for learning probabilistic models that are well-calibrated, ab initio. The other approach is to use post-processing methods to transform the output of a classifier so that it is well calibrated, for example histogram binning, Platt scaling, and isotonic regression. One advantage of the post-processing approach is that it can be applied to any existing probabilistic classification model that was constructed using any machine-learning method.

In this paper, we first introduce two measures for evaluating how well a classifier is calibrated. We prove three theorems showing that, using a simple histogram binning post-processing method, it is possible to make a classifier well calibrated while retaining its discrimination capability. Also, by casting the histogram binning method as a density-based non-parametric binary classifier, we extend it using two simple non-parametric density estimation methods. We demonstrate the performance of the proposed calibration methods on synthetic and real datasets. Experimental results show that the proposed methods either outperform or are comparable to existing calibration methods.

1 Introduction

The development of accurate probabilistic prediction models from data is critical for many practical prediction and decision-making tasks. Unfortunately, the majority of existing machine learning and data mining models and algorithms are not optimized for this task, and the predictions they produce may be miscalibrated.

Generally, a set of predictions of a binary outcome is well calibrated if the outcomes predicted to occur with probability $p$ do occur about a fraction $p$ of the time, for each probability $p$ that is predicted. This concept can be readily generalized to outcomes with more than two values. Figure 1 shows a hypothetical example of a reliability curve degroot1983comparison ; niculescu2005predicting , which displays the calibration performance of a prediction method. The curve shows, for example, that when the method assigns a given probability to the outcome, the outcome tends to occur in a somewhat higher fraction of the instances (cases). The curve indicates that the method is fairly well calibrated, but it tends to assign probabilities that are too low. In general, perfect calibration corresponds to a straight line from $(0,0)$ to $(1,1)$. The closer a calibration curve is to this line, the better calibrated is the associated prediction method.

Figure 1: The solid line shows a calibration (reliability) curve for a prediction method. The dotted line is the ideal calibration curve.
Figure 2: Scatter plot of the non-linearly separable simulated data
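
A reliability curve like the one in Figure 1 can be computed directly from a set of predictions and outcomes. The following minimal sketch (not taken from the paper; the function name and the use of equal-width bins are illustrative assumptions) returns the points of such a curve:

```python
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Mean predicted probability vs. observed fraction of positives, per bin.

    Equal-width bins on [0, 1]; empty bins are skipped.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mean_pred, frac_pos = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (y_prob >= lo) & ((y_prob < hi) | (hi == 1.0))  # include 1.0 in the last bin
        if in_bin.any():
            mean_pred.append(y_prob[in_bin].mean())   # x-coordinate of the curve
            frac_pos.append(y_true[in_bin].mean())    # y-coordinate of the curve
    return np.array(mean_pred), np.array(frac_pos)
```

Plotting frac_pos against mean_pred and comparing with the diagonal gives exactly the kind of diagnostic shown in Figure 1: points above the diagonal correspond to predicted probabilities that are too low, and points below it to probabilities that are too high.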

If uncertainty is represented using probabilities, then optimal decision making under uncertainty requires having models that are well calibrated. Producing well calibrated probabilistic predictions is critical in many areas of science (e.g., determining which experiments to perform), medicine (e.g., deciding which therapy to give a patient), business (e.g., making investment decisions), and many other areas. At the same time, calibration has not been studied nearly as extensively as discrimination (e.g., ROC curve analysis) in machine learning and other fields that research probabilistic modeling.

One approach to achieve a high level of calibration is to develop methods for learning probabilistic models that are well-calibrated, ab initio. However, data mining and machine learning research has traditionally focused on the development of methods and models for improving discrimination, rather than on methods for improving calibration. As a result, existing methods have the potential to produce models that are not well calibrated. The miscalibration problem can be aggravated when models are learned from small-sample data or when the models make additional simplifying assumptions (such as linearity or independence).

Another approach is to apply post-processing methods (e.g., histogram binning, Platt scaling, or isotonic regression) to the output of classifiers to improve their calibration. The post-processing step can be seen as a function that maps the output of a prediction model to probabilities that are intended to be well calibrated. Figure 1 illustrates such a mapping. This approach frees the designer of the machine learning model from the need to add calibration measures and terms into the objective function used to learn the model. The advantage of this approach is that it can be used with any existing classification method, since calibration is performed solely as a post-processing step.

The objective of the current paper is to show that the post-processing approach to calibrating binary classifiers is theoretically justified. In particular, we show that, in the large-sample limit, post-processing will produce a perfectly calibrated classifier whose discrimination performance (in terms of area under the ROC curve) is at least as good as that of the original classifier. In the current paper we also introduce two simple but effective methods that address the miscalibration problem.

Existing post-processing calibration methods can be divided into parametric and non-parametric methods. An example of a parametric method is Platt's method, which applies a sigmoidal transformation that maps the output of a predictive model to a calibrated probability platt1999probabilistic . The parameters of the sigmoidal transformation function are learned using a maximum-likelihood estimation framework. The key limitation of the approach is the (sigmoidal) form of the transformation function, which only rarely fits the true distribution of predictions.

The above problem can be alleviated using non-parametric methods. The most common non-parametric methods are based either on binning zadrozny2001obtaining or on isotonic regression ayer1955empirical . In the histogram binning approach, the raw predictions of a binary classifier are first sorted and then partitioned into subsets of equal size, called bins. Given a prediction $z$, the method finds the bin containing that prediction and returns the fraction of positive outcomes in that bin as the calibrated estimate.
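
The method is simple enough to sketch in a few lines. The following illustrative implementation (the class name and the bin-boundary convention are our own choices, not the authors' code) fits equal-frequency bins on a calibration set and maps each new prediction to the fraction of positive outcomes in its bin:

```python
import numpy as np

class HistogramBinningCalibrator:
    """Equal-frequency histogram binning, as described above (illustrative sketch)."""

    def __init__(self, n_bins=10):
        self.n_bins = n_bins

    def fit(self, z, y):
        z, y = np.asarray(z, dtype=float), np.asarray(y, dtype=float)
        order = np.argsort(z)
        z, y = z[order], y[order]
        # Split the sorted predictions into n_bins groups of (near-)equal size.
        groups = np.array_split(np.arange(len(z)), self.n_bins)
        # Bin boundaries: midpoints between adjacent groups.
        self.edges_ = np.array(
            [-np.inf]
            + [(z[g[-1]] + z[groups[i + 1][0]]) / 2 for i, g in enumerate(groups[:-1])]
            + [np.inf])
        # Calibrated value of each bin = its fraction of positive outcomes.
        self.bin_prob_ = np.array([y[g].mean() for g in groups])
        return self

    def predict_proba(self, z):
        idx = np.searchsorted(self.edges_, np.asarray(z, dtype=float), side="right") - 1
        return self.bin_prob_[np.clip(idx, 0, self.n_bins - 1)]
```

Fitting on a calibration set and calling predict_proba on new scores yields post-processed probabilities of the kind analyzed in the rest of this paper.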

Zadrozny and Elkan zadrozny2002transforming developed a calibration method that is based on isotonic regression. This method only requires that the mapping function be isotonic (monotonically increasing) niculescu2005predicting . The pair-adjacent violators (PAV) algorithm is one instance of an isotonic regression algorithm ayer1955empirical . The isotonic calibration method based on the PAV algorithm can be viewed as a binning algorithm in which the positions of the bin boundaries and the sizes of the bins are selected according to how well the classifier ranks the examples in the training data zadrozny2002transforming . Recently, a variation of the isotonic-regression-based calibration method was described for predicting accurate probabilities with a ranking loss menon2012predicting .
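
For comparison, isotonic-regression calibration can be sketched with an off-the-shelf PAV implementation (here scikit-learn's IsotonicRegression; this is an illustrative usage, not the authors' code, and the toy data are ours):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Toy calibration set: scores z and 0/1 outcomes y (deliberately miscalibrated).
rng = np.random.default_rng(0)
z = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < z ** 2).astype(int)

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(z, y)                    # PAV: monotone least-squares fit of y on z
z_calibrated = iso.predict(z)    # piecewise-constant, monotone calibrated probabilities
```

The fitted map is piecewise constant and monotone, which is exactly the view of isotonic regression as binning with data-driven bin boundaries and sizes described above.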

In this paper, Section 2 introduces two measures, maximum calibration error (MCE) and expected calibration error (ECE), for evaluating how well a classifier is calibrated. In Section 3 we prove three theorems showing that, by using a simple histogram-binning calibration method, it is possible to improve the calibration of a classifier, measured in terms of MCE and ECE, without sacrificing its discrimination capability, measured in terms of the area under the ROC curve (AUC). Section 4 introduces two simple extensions of the histogram binning method, obtained by casting the method as a simple density-based non-parametric binary classification problem. The results of experiments that evaluate various calibration methods are presented in Section 5. Finally, Section 6 states conclusions and describes several areas for future work.

2 Notations and Assumptions

This section presents the notation and assumptions we use to formalize the problem of calibrating a binary classifier. We also define two measures for assessing the calibration of such classifiers.

Assume a binary classifier is defined as a mapping $\phi: \mathcal{X} \to [0,1]$, so that for every input instance $x$ the output of the classifier is $z = \phi(x)$, where $0 \le z \le 1$. For calibrating the classifier we assume there is a training set $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the $i$'th instance and $y_i \in \{0,1\}$ is the true class of the $i$'th instance. Also, we define $\hat{z}$ as the probability estimate for an instance obtained by using the histogram binning calibration method, which is intended to yield a more calibrated estimate than $z$. In addition, we use the following notation and assumptions in the remainder of the paper:

  • $N$ : the total number of instances

  • $N^{+}$ : the total number of positive instances

  • $N^{-}$ : the total number of negative instances

  • $\mathcal{Z}$ : the space of uncalibrated probabilities, defined by the classifier output $z$

  • $\hat{\mathcal{Z}}$ : the space of transformed probability estimates obtained using histogram binning

  • $B$ : the total number of bins defined on $\mathcal{Z}$ in the histogram binning model

  • $B_i$ : the $i$'th bin defined on $\mathcal{Z}$

  • $n_i$ : the total number of instances whose predicted value falls inside $B_i$

  • $n_i^{+}$ : the number of positive instances whose predicted value falls inside $B_i$

  • $n_i^{-}$ : the number of negative instances whose predicted value falls inside $B_i$

  • $\hat{p}_i = n_i^{+}/n_i$ : an empirical estimate of the fraction of positive instances in $B_i$

  • $p_i$ : the value of $\hat{p}_i$ as $N$ goes to infinity

  • $\hat{\pi}_i = n_i/N$ : an empirical estimate of the probability that a prediction falls inside $B_i$

  • $\pi_i$ : the value of $\hat{\pi}_i$ as $N$ goes to infinity

2.1 Calibration Measures

In order to evaluate the calibration capability of a classifier, we use two simple statistics that measure calibration relative to the ideal reliability diagram degroot1983comparison ; niculescu2005predicting (Figure 1 shows an example of a reliability diagram). These measures are called Expected Calibration Error (ECE) and Maximum Calibration Error (MCE). In computing these measures, the predictions are sorted and partitioned into ten bins. The predicted value of each test instance falls into one of the bins. ECE calculates the expected calibration error over the bins, and MCE calculates the maximum calibration error among the bins, using empirical estimates as follows:

$$ECE = \sum_{i=1}^{10} P(i)\,\bigl|o_i - e_i\bigr|, \qquad MCE = \max_{i \in \{1,\dots,10\}} \bigl|o_i - e_i\bigr|,$$

where $o_i$ is the true fraction of positive instances in bin $i$, $e_i$ is the mean of the post-calibrated probabilities for the instances in bin $i$, and $P(i)$ is the empirical probability (fraction) of all instances that fall into bin $i$. The lower the values of ECE and MCE, the better is the calibration of a model.
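
Both measures are straightforward to compute from held-out predictions; the sketch below (illustrative names; it partitions the sorted predictions into equal-frequency bins, which is one reading of the description above) follows the definitions directly:

```python
import numpy as np

def ece_mce(y_true, y_prob, n_bins=10):
    """Expected and Maximum Calibration Error over n_bins bins of sorted predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    bins = np.array_split(np.argsort(y_prob), n_bins)   # equal-frequency bins
    n = len(y_prob)
    ece, mce = 0.0, 0.0
    for b in bins:
        if len(b) == 0:
            continue
        o = y_true[b].mean()          # observed fraction of positives in the bin
        e = y_prob[b].mean()          # mean post-calibrated probability in the bin
        gap = abs(o - e)
        ece += (len(b) / n) * gap     # P(i) * |o_i - e_i|
        mce = max(mce, gap)           # max_i |o_i - e_i|
    return ece, mce
```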

3 Calibration Theorems

In this section we study the properties of the histogram-binning calibration method. We prove three theorems that show that this method can improve the calibration power of a classifier without sacrificing its discrimination capability.

The first theorem shows that the MCE of the histogram binning method is concentrated around zero:

Theorem 3.1.

Using histogram binning calibration, with probability at least $1-\delta$ the MCE is bounded above by a quantity that converges to zero as the number of calibration instances $N$ grows.

Proof.

To prove this theorem, we first use a concentration result for the per-bin estimates $\hat{p}_i$. Since $\hat{p}_i$ is an average of $n_i$ binary outcomes with limiting mean $p_i$, Hoeffding's inequality gives the following:

$$P\bigl(|\hat{p}_i - p_i| \ge \varepsilon\bigr) \;\le\; 2\,e^{-2 n_i \varepsilon^{2}} \tag{1}$$

Let us assume $\hat{b}$ is a bin defined on the space of transformed probabilities $\hat{\mathcal{Z}}$, used for calculating the MCE of the histogram binning method. Assume that, after using histogram binning over $\mathcal{Z}$ (the space of uncalibrated probabilities generated by the classifier $\phi$), the bins $B_{i_1}, \dots, B_{i_k}$ are mapped into $\hat{b}$. We define $o_{\hat{b}}$ as the true fraction of positive instances in bin $\hat{b}$, and $e_{\hat{b}}$ as the mean of the post-calibrated probabilities for the instances in bin $\hat{b}$. Using the notation defined in Section 2, both $o_{\hat{b}}$ and $e_{\hat{b}}$ can be written as weighted averages over the bins $B_{i_1}, \dots, B_{i_k}$, with weights proportional to the bin counts $n_{i_j}$: the former averages the limiting fractions $p_{i_j}$ and the latter averages the empirical fractions $\hat{p}_{i_j}$.

By taking the difference of these two weighted averages and using the triangle inequality, we have that:

$$\bigl|o_{\hat{b}} - e_{\hat{b}}\bigr| \;\le\; \max_{j}\;\bigl|\hat{p}_{i_j} - p_{i_j}\bigr| \tag{2}$$

Using the above result and the concentration inequality (1) for each $\hat{p}_{i_j}$, we can conclude:

$$P\bigl(\bigl|o_{\hat{b}} - e_{\hat{b}}\bigr| \ge \varepsilon\bigr) \;\le\; 2\,k\,e^{-2\,(\min_j n_{i_j})\,\varepsilon^{2}}, \tag{3}$$

where the last part is obtained by using a union bound and $k$ is the number of bins on the space $\mathcal{Z}$ whose calibrated probability estimates are mapped into the bin $\hat{b}$.

Using a union bound again over the different bins $\hat{b}$ defined on the space $\hat{\mathcal{Z}}$, we obtain a probability bound for the MCE over the space of calibrated estimates $\hat{\mathcal{Z}}$. By setting the resulting tail probability equal to $\delta$ and solving for $\varepsilon$, we can show that with probability at least $1-\delta$ the bound stated in the theorem holds. ∎

Corollary 3.2.

Using the histogram binning calibration method, MCE converges to zero as the number of calibration instances $N$ goes to infinity.

Next, we prove a theorem for bounding the ECE of the histogram-binning calibration method as follows:

Theorem 3.3.

Using the histogram binning calibration method, ECE converges to zero as the number of calibration instances $N$ goes to infinity.

Proof.

The proof of this theorem uses the concentration inequality (3). Due to space limitations, the details of the proof are given in the appendix (Section 7.1). ∎

The above two theorems show that we can bound the calibration error of a binary classifier, measured in terms of MCE and ECE, by using a histogram-binning post-processing method. We next show that, in addition to gaining calibration power, by using histogram binning we are guaranteed not to sacrifice the discrimination capability of the base classifier, measured in terms of AUC. Recall that $z$ denotes the probability prediction of the base classifier for an input instance $x$, and $\hat{z}$ denotes the transformed estimate for that instance obtained using the histogram-binning calibration method.

We can define the AUC loss of the histogram-binning calibration method as:

  • ($AUC_{loss}$) is the difference between the AUC of the base classifier estimates and the AUC of the transformed estimates obtained using the histogram-binning calibration method. Using the notation in Section 2, it is defined as $AUC_{loss} = AUC(z) - AUC(\hat{z})$.

Using the above definition, our third theorem bounds the $AUC_{loss}$ of the histogram binning classifier as follows:

Theorem 3.4.

Using the histogram-binning calibration method, the worst-case $AUC_{loss}$ is upper bounded by a quantity that converges to zero as the number of calibration instances $N$ grows.

Proof.

Due to space limitations, the proof of this theorem is given in the appendix (Section 7.2). ∎

Using the above theorems, we can conclude that by using the histogram-binning calibration method we can improve the calibration performance of a classifier, measured in terms of MCE and ECE, without losing the discrimination performance of the base classifier, measured in terms of AUC.

We will show in Section 4 that the histogram binning calibration method is simply a non-parametric plug-in classifier. By casting histogram binning as a non-parametric histogram binary classifier, we can draw on results showing that the histogram classifier is a mini-max rate classifier for Lipschitz Bayes decision boundaries devroye1996probabilistic . Although those results are valid for histogram classifiers with fixed bin size, our experiments show that fixed-bin-size and fixed-frequency histogram classifiers behave quite similarly. We conjecture that a histogram classifier with equal-frequency binning is also a mini-max (or near mini-max) rate classifier scott2003near ; klemela2009multivariate ; this is an interesting open problem that we intend to study in the future. These results make histogram binning a reasonable choice for binary classifier calibration, provided that the number of bins $B$ and the number of instances per bin $N/B$ both grow as $N \to \infty$. This can be achieved, for example, by letting $B$ grow on the order of $N^{1/3}$, which yields the optimal convergence-rate results for the non-parametric histogram classifier devroye1996probabilistic .
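
As a quick empirical illustration of the message of these theorems (a synthetic sanity check of our own, not one of the paper's experiments; the data generator, sample sizes, and bin count are illustrative assumptions), the sketch below calibrates a deliberately miscalibrated score with equal-frequency histogram binning and compares AUC and ECE before and after:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000
y = rng.integers(0, 2, size=n)
# A discriminative but deliberately miscalibrated score (illustrative, not the paper's data).
raw = np.where(y == 1, rng.beta(4, 2, size=n), rng.beta(2, 4, size=n))
z = 0.5 + 0.25 * (raw - 0.5)                      # scores squashed toward 0.5

z_cal, y_cal, z_te, y_te = z[: n // 2], y[: n // 2], z[n // 2:], y[n // 2:]

# Equal-frequency histogram binning fit on the calibration half.
n_bins = max(10, round(len(z_cal) ** (1 / 3)))    # one choice discussed above: B ~ N^(1/3)
order = np.argsort(z_cal)
groups = np.array_split(order, n_bins)
edges = np.array([z_cal[g].max() for g in groups[:-1]])
bin_prob = np.array([y_cal[g].mean() for g in groups])
z_hat = bin_prob[np.searchsorted(edges, z_te, side="left")]

def ece(y_true, p, k=10):
    """Expected calibration error with k equal-frequency bins (cf. Section 2.1)."""
    parts = np.array_split(np.argsort(p), k)
    return sum(len(b) / len(p) * abs(y_true[b].mean() - p[b].mean()) for b in parts)

print("AUC before/after:", roc_auc_score(y_te, z_te), roc_auc_score(y_te, z_hat))
print("ECE before/after:", ece(y_te, z_te), ece(y_te, z_hat))
```

On runs like this, the ECE should drop sharply after binning while the AUC remains essentially unchanged, which is the behavior the theorems describe.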

4 Calibration Methods

In this section, we show that the histogram-binning calibration method zadrozny2001obtaining is a simple non-parametric plug-in classifier. In the calibration problem, given an uncalibrated probability estimate $z$, one way of finding the calibrated estimate is to apply Bayes' rule as follows:

$$P(y = 1 \mid z) \;=\; \frac{P(z \mid y = 1)\,P(y = 1)}{P(z \mid y = 0)\,P(y = 0) \;+\; P(z \mid y = 1)\,P(y = 1)}, \tag{4}$$

where $P(y=1)$ and $P(y=0)$ are the priors of class $1$ and class $0$, estimated from the training dataset, and $P(z \mid y=1)$ and $P(z \mid y=0)$ are predictive likelihood terms. If we use the histogram density estimation method for estimating the predictive likelihood terms in the Bayes rule equation (4), we obtain, for $z \in B_i$, the estimates $P(z \mid y=1) \approx n_i^{+} / (N^{+}\,|B_i|)$ and $P(z \mid y=0) \approx n_i^{-} / (N^{-}\,|B_i|)$, where $|B_i|$ denotes the width of bin $B_i$ and the counts are the empirical quantities defined in Section 2. Now, let us assume $z \in B_i$; using the assumptions in Section 2, by substituting the empirical estimates of the priors, $P(y=1) = N^{+}/N$ and $P(y=0) = N^{-}/N$, and the likelihood estimates above into equation (4) and performing some basic algebra, we obtain the calibrated estimate $P(y=1 \mid z) = n_i^{+} / (n_i^{+} + n_i^{-})$, where $n_i^{+}$ and $n_i^{-}$ are the number of positive and negative examples in bin $B_i$.

The above computation shows that the histogram-binning calibration method is actually a simple plug-in classifier in which the histogram density method is used to estimate the predictive likelihood terms in Bayes' rule, as given by equation (4). By casting histogram binning as a plug-in method for classification, it is possible to use more advanced frequentist methods for density estimation rather than simple histogram-based density estimation. For example, if we use kernel density estimation (KDE) for estimating the predictive likelihood terms, the resulting calibrated probability is as follows:

$$P(y = 1 \mid z) \;=\; \frac{\dfrac{1}{h_1}\displaystyle\sum_{j:\,y_j = 1} K\!\left(\dfrac{z - z_j}{h_1}\right)}{\dfrac{1}{h_1}\displaystyle\sum_{j:\,y_j = 1} K\!\left(\dfrac{z - z_j}{h_1}\right) \;+\; \dfrac{1}{h_0}\displaystyle\sum_{j:\,y_j = 0} K\!\left(\dfrac{z - z_j}{h_0}\right)}, \tag{5}$$

where the $z_j$ are the uncalibrated predictions for the training instances, $K(\cdot)$ is a kernel function, and $N^{+}$ and $N^{-}$ are respectively the number of positive and negative examples in the training data (after substituting the class priors $N^{+}/N$ and $N^{-}/N$, these counts cancel out of the expression). Also, $h_1$ and $h_0$ are the bandwidths of the predictive likelihoods for class $1$ and class $0$. The bandwidth parameters can be optimized using cross-validation techniques. However, in this paper we used Silverman's rule of thumb silverman1986density for setting the bandwidth to $h = 1.06\,\hat{\sigma}\,n^{-1/5}$, where $\hat{\sigma}^{2}$ is the empirical unbiased estimate of the variance. It is possible to use the same bandwidth for both class $1$ and class $0$, which leads to the Nadaraya-Watson kernel estimator that we use in our experiments. However, we noticed that there are some cases for which KDE with different bandwidths performs better.

There are different types of smoothing kernel functions, such as the Gaussian, boxcar, Epanechnikov, and tricube functions. Due to the similarity of the results we obtained with the different types of kernels, we report here only the results for the simplest one, the boxcar kernel.

It has been shown in wasserman2006all that kernel density estimators are mini-max rate estimators: under the squared-error ($L_2$) loss, the risk of the estimator converges to zero at the rate $O\!\bigl(N^{-2\beta/(2\beta + d)}\bigr)$, where $\beta$ is a measure of the smoothness of the target density and $d$ is the dimensionality of the input data. From this convergence rate, we can infer that the application of kernel density estimation is likely to be practical when $d$ is low. Fortunately, for the binary classifier calibration problem, the input space of the model is the space of uncalibrated predictions, which is one-dimensional. This justifies the application of KDE to the classifier calibration problem.
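
A minimal sketch of this KDE plug-in calibrator is given below, under the stated assumptions (illustrative names; a Gaussian kernel with a per-class Silverman bandwidth is used for concreteness, whereas the results reported later use a boxcar kernel, and using a single shared bandwidth gives the Nadaraya-Watson variant mentioned above):

```python
import numpy as np

def kde_calibrate(z_train, y_train, z_new):
    """Plug-in calibrator: class-conditional Gaussian KDEs on the scores,
    combined through Bayes' rule (a sketch of the idea behind equation (5))."""
    z_train = np.asarray(z_train, dtype=float)
    y_train = np.asarray(y_train)
    z_new = np.asarray(z_new, dtype=float)
    weighted = {}
    for c in (0, 1):
        zc = z_train[y_train == c]
        # Silverman's rule of thumb for the class-conditional bandwidth.
        h = 1.06 * zc.std(ddof=1) * len(zc) ** (-1 / 5)
        # Kernel sums scaled by 1/h; after substituting the class priors N^+/N and
        # N^-/N, the per-class sample sizes cancel, so this is all that remains.
        weighted[c] = np.exp(-0.5 * ((z_new[:, None] - zc[None, :]) / h) ** 2).sum(axis=1) / h
    return weighted[1] / (weighted[0] + weighted[1])
```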

The KDE approach presented above is a non-parametric frequentist approach to estimating the likelihood terms of equation (4). Instead of the frequentist approach, we can use Bayesian methods for modeling the density functions. The Dirichlet process mixture (DPM) model is a well-known Bayesian approach to density estimation antoniak1974mixtures ; ferguson1973bayesian ; escobar1995bayesian ; maceachern1998estimating . For building a Bayesian calibration model, we model the predictive likelihood terms $P(z \mid y=0)$ and $P(z \mid y=1)$ in equation (4) using the DPM method. Due to a lack of space, we do not present the details of the DPM model here, but instead refer the reader to antoniak1974mixtures ; ferguson1973bayesian ; escobar1995bayesian ; maceachern1998estimating .

There are different ways of performing inference in a DPM model: one can use Gibbs sampling (collapsed or non-collapsed) or variational inference, for example. In implementing our calibration model, we use the variational inference method described in kurihara2007accelerated , which we chose because of its fast convergence. We will refer to the resulting calibration method as DPM.
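
The authors' implementation relies on the accelerated variational method of kurihara2007accelerated . As a rough, hedged stand-in for that machinery, a truncated variational Dirichlet-process Gaussian mixture (as available in scikit-learn) can model the two class-conditional score densities, which are then combined through Bayes' rule exactly as in equation (4); this is an illustrative sketch, not the authors' code:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpm_calibrate(z_train, y_train, z_new, max_components=10):
    """Bayes-rule calibrator with truncated variational DP mixture densities per class
    (a stand-in for the accelerated variational DPM used in the paper)."""
    z_train = np.asarray(z_train, dtype=float).reshape(-1, 1)
    z_new = np.asarray(z_new, dtype=float).reshape(-1, 1)
    y_train = np.asarray(y_train)
    log_joint = []
    for c in (0, 1):
        zc = z_train[y_train == c]
        dpm = BayesianGaussianMixture(
            n_components=max_components,
            weight_concentration_prior_type="dirichlet_process",
            max_iter=500, random_state=0).fit(zc)
        prior_c = len(zc) / len(z_train)               # empirical class prior
        log_joint.append(np.log(prior_c) + dpm.score_samples(z_new))
    log_joint = np.stack(log_joint)                    # shape (2, n_new)
    # Normalize P(z | y=c) P(y=c) over the two classes to obtain P(y=1 | z).
    return np.exp(log_joint[1] - np.logaddexp(log_joint[0], log_joint[1]))
```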

5 Empirical Results

SVM Hist Platt IsoReg KDE DPM
RMSE 0.50 0.39 0.50 0.46 0.38 0.39
AUC 0.50 0.84 0.50 0.65 0.85 0.85
ACC 0.48 0.78 0.52 0.64 0.78 0.78
MCE 0.52 0.19 0.54 0.58 0.09 0.16
ECE 0.28 0.07 0.28 0.35 0.03 0.07
(a) SVM Linear
SVM Hist Platt IsoReg KDE DPM
RMSE 0.21 0.09 0.19 0.08 0.09 0.08
AUC 1.00 1.00 1.00 1.00 1.00 1.00
ACC 0.99 0.99 0.99 0.99 0.99 0.99
MCE 0.35 0.04 0.32 0.03 0.07 0.03
ECE 0.14 0.01 0.15 0.00 0.01 0.00
(b) SVM Quadratic Kernel
Table 1: Experimental results on the simulated dataset

This section describes the experiments that we performed to evaluate the performance of the calibration methods described above. We ran experiments on both simulated and real data. To evaluate the calibration methods, we used five measures. The first two, accuracy (Acc) and area under the ROC curve (AUC), measure discrimination. The other three, root mean squared error (RMSE), expected calibration error (ECE), and maximum calibration error (MCE), measure calibration.

Simulated data. For the simulated-data experiments, we used a binary classification dataset in which the outcomes were not linearly separable. The scatter plot of the simulated dataset is shown in Figure 2. The data were divided into one set of instances used for training and calibrating the prediction models and a separate set of instances used for testing the models.

To conduct the experiments on the simulated dataset, we used two contrasting classifiers: support vector machines (SVMs) with linear and quadratic kernels. The choice of an SVM with a linear kernel allows us to see how the calibration methods perform when the classification model makes an oversimplifying (linear) assumption. Also, to achieve good discrimination on the data in Figure 2, an SVM with a quadratic kernel is intuitively an ideal choice, so the experiment with the quadratic-kernel SVM allows us to see how well the different calibration methods perform when we use an ideal learner for the classification problem, in terms of discrimination.
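
The sketch below shows one way to produce such uncalibrated SVM scores for the two kernels (the data generator, sample sizes, and the min-max rescaling of the decision values are our own illustrative choices, not the paper's exact setup):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Illustrative non-linearly separable data (two concentric regions); not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 2))
y = (np.linalg.norm(X, axis=1) > 1.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

scores = {}
for name, kernel, degree in [("SVM linear", "linear", 3), ("SVM quadratic", "poly", 2)]:
    svm = SVC(kernel=kernel, degree=degree, random_state=0).fit(X_tr, y_tr)
    s = svm.decision_function(X_te)
    # Min-max rescaling to [0, 1]: one simple way to obtain uncalibrated "probabilities"
    # that the post-processing methods of Section 4 can then calibrate.
    scores[name] = (s - s.min()) / (s.max() - s.min())
# scores[...] are the uncalibrated outputs that Hist, Platt, IsoReg, KDE, or DPM would post-process.
```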

Base SVM
AUC 0.82 0.84 0.85 0.85 0.85 0.49
MCE 0.40 0.15 0.07 0.05 0.03 0.52
ECE 0.14 0.05 0.03 0.02 0.01 0.28
(a) SVM Linear
Base SVM
AUC 0.99 1.00 1.00 1.00 1.00 1.00
MCE 0.14 0.09 0.03 0.01 0.01 0.36
ECE 0.03 0.01 0.00 0.00 0.00 0.15
(b) SVM Quadratic Kernel
Table 2: Experimental results for different sizes of the calibration dataset. The unlabeled columns correspond to increasing calibration-set sizes; the Base SVM column gives the uncalibrated SVM as a reference.

As seen in Table 1, the KDE- and DPM-based calibration methods performed better than Platt scaling and isotonic regression on the simulated dataset, especially when the linear SVM is used as the base learner. The poor performance of Platt scaling is not surprising given its simplicity: it is a parametric model with only two parameters. Isotonic regression, however, is a non-parametric model that makes only a monotonicity assumption on the output of the base classifier. When we use a linear-kernel SVM, this assumption is violated because of the non-linearity of the data. As a result, isotonic regression performs relatively poorly in terms of improving the discrimination and calibration capability of the base classifier. The violation of this assumption can happen in real data as well. To mitigate this pitfall, Menon et al. menon2012predicting proposed combining the optimization of a ranking loss with isotonic regression to build a ranking model. However, this is counter to our goal of developing post-processing methods that can be used with any existing classification model. As shown in Table 1(b), even if we use an ideal SVM classifier for this linearly non-separable dataset, our proposed methods perform as well as or better than isotonic-regression-based calibration.

As can be seen in Table 1(b), although the SVM base learner performs very well in terms of discrimination, as measured by AUC and Acc, it performs poorly in terms of calibration, as measured by RMSE, MCE, and ECE. Moreover, all of the calibration methods retain the discrimination performance that was obtained prior to post-processing, while improving calibration.

Table 2 shows the results of using the histogram-binning calibration method with different sizes of calibration sets on the simulated data, for the linear and quadratic kernels. In these experiments we fixed the sizes of the training and test sets, varied the size of the calibration set over a range of values, ran the experiment multiple times for each calibration-set size, and averaged the results. As seen in Table 2, as more calibration data are used, the MCE and ECE errors steadily decrease.

Real data. For the real-data experiments, we used the KDD-98 dataset, which is available at the UCI KDD repository. The dataset contains information about people who donated to a particular charity. Here the decision-making task is to decide whether a solicitation letter should be mailed to a person; mailing a letter costs $0.68. The training set includes instances for which it is known whether a person made a donation and, if so, how much the person donated; only about 5% of the training cases were responders. The validation set includes instances from the same donation campaign, with a similar fraction of responders.

Following the procedure in zadrozny2001obtaining ; zadrozny2002transforming , we build two models: a response model for predicting the probability that a person responds to a solicitation, and an amount model for predicting the amount that the person would donate. The optimal mailing policy is to send a letter to those people for whom the expected donation return (the predicted response probability multiplied by the predicted donation amount) is greater than the cost of mailing the letter. Since in this paper we are not concerned with feature selection, our choice of attributes for building the response and amount prediction models is based on mayer2003experimental . Following the approach in zadrozny2001learning , we build the amount model on the positive cases in the training data, removing the cases with the largest donation amounts as outliers. Following their construction, we also provide the output of the response model as an augmented feature to the amount model.
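
The resulting decision rule is simple to state in code. The sketch below is illustrative (function names are ours; the $0.68 mailing cost is the commonly quoted KDD-98 figure and should be treated here as an assumption):

```python
import numpy as np

def mailing_decision(p_respond, predicted_amount, mail_cost=0.68):
    """Mail a solicitation when the expected return exceeds the mailing cost.

    p_respond: calibrated response probabilities; predicted_amount: predicted
    donation amounts; mail_cost: assumed per-letter cost (KDD-98 convention).
    """
    expected_return = np.asarray(p_respond) * np.asarray(predicted_amount)
    return expected_return > mail_cost

def realized_profit(mail, donated_amount, mail_cost=0.68):
    """Net gain of a mailing policy on held-out data with known donation amounts."""
    mail = np.asarray(mail, dtype=bool)
    return float(np.sum(np.asarray(donated_amount)[mail]) - mail_cost * mail.sum())
```

Better-calibrated response probabilities move the comparisons in this threshold rule closer to their correct values, which is why improved calibration can translate directly into higher profit in Table 3.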

In our experiments, we used three different classifiers to build the response model: logistic regression, naïve Bayes, and a linear SVM. For building the amount model, we used a support vector regression model. For implementing these models we used the LIBLINEAR package REF08a . The results of the experiment are shown in Table 3. In addition to the previous comparison measures, we also show the amount of profit obtained when using the different methods. As seen in these tables, the application of calibration methods results in a higher expected net gain from sending solicitations: roughly $3,000 or more with the best-performing calibration methods.

LR Hist Platt IsoReg KDE DPM
RMSE 0.500 0.218 0.218 0.218 0.218 0.219
AUC 0.613 0.610 0.613 0.612 0.611 0.613
ACC 0.56 0.95 0.95 0.95 0.95 0.95
MCE 0.454 0.020 0.013 0.030 0.004 0.017
ECE 0.449 0.007 0.004 0.013 0.002 0.003
Profit 10560 13183 13444 13690 12998 13696
(a) Logistic Regression
NB Hist Platt IsoReg KDE DPM
RMSE 0.514 0.218 0.218 0.218 0.218 0.218
AUC 0.603 0.600 0.603 0.602 0.602 0.603
ACC 0.622 0.949 0.949 0.949 0.949 0.949
MCE 0.850 0.008 0.008 0.046 0.005 0.010
ECE 0.390 0.004 0.004 0.023 0.002 0.003
Profit 7885 11631 10259 10816 12037 12631
(b) Naïve Bayes
SVM Hist Platt IsoReg KDE DPM
RMSE 0.696 0.218 0.218 0.219 0.218 0.218
AUC 0.615 0.614 0.615 0.500 0.614 0.615
ACC 0.95 0.95 0.95 0.95 0.95 0.95
MCE 0.694 0.011 0.013 0.454 0.003 0.019
ECE 0.660 0.004 0.004 0.091 0.002 0.004
Profit 10560 13480 13080 11771 13118 13544
(c) SVM Linear
Table 3: Experimental results on the KDD-98 dataset

5.1 The Calibration Dataset

In all of our experiments, we used the same training data for model calibration as we used for model construction. In doing so, we did not notice any over-fitting. However, to be completely sure of not over-fitting on the training data, we can do one of the following:

  • Data Partitioning: This approach uses different data sets for model training and model calibration. The amount of data needed to calibrate models is generally much less than the amount needed to train them, because the calibration feature space has a single dimension. We observed that a relatively small number of calibration instances is sufficient for obtaining well-calibrated models, as is seen in Table 2.

  • Leave-one-out: If the amount of available training data is small and it is not possible to do data partitioning, we can use a leave-one-out (or k-fold) scheme for building the calibration dataset. In this approach we learn a model on all but one instance, test it on the one remaining instance, and save the resulting calibration instance $(z_i, y_i)$, where $z_i$ is the predicted value for the held-out instance produced by the model trained on the remaining data points. Repeating the process for all instances yields the calibration dataset; a minimal sketch of the k-fold variant follows this list.
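
A k-fold version of this scheme takes only a few lines with standard tooling; the sketch below (illustrative model and dataset, not the paper's setup) builds a calibration dataset of (score, label) pairs in which every score comes from a model that never saw the corresponding instance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import make_classification

# k-fold variant of the leave-one-out scheme described above.
X, y = make_classification(n_samples=2000, random_state=0)
clf = LogisticRegression(max_iter=1000)
z = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
calibration_set = np.column_stack([z, y])      # rows of (z_i, y_i) for the calibrator
```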

6 Conclusion

In this paper, we described two measures for evaluating the calibration capability of a binary classifier: maximum calibration error (MCE) and expected calibration error (ECE). We also proved three theorems that justify post-processing as an approach for calibrating binary classifiers. Specifically, we showed that by using a simple histogram-binning calibration method we can improve the calibration of a binary classifier, in terms of MCE and ECE, without sacrificing the discrimination performance of the classifier, as measured by AUC. The other contribution of this paper is the introduction of two extensions of the histogram-binning method, based on kernel density estimation and on the Dirichlet process mixture model. Our experiments on simulated and real datasets showed that the proposed methods perform well and are promising when compared with two popular existing calibration methods.

In future work, we plan to investigate the conjecture that histogram binning with equal-frequency bins is a mini-max (or near mini-max) rate classifier, as equal-width binning is known to be. Our extensive experimental studies comparing histogram binning with equal-frequency and equal-width bins provide support for this conjecture. We would also like to establish similar theoretical guarantees for the kernel-density-estimation-based method. Another direction for future research is to extend the methods described in this paper to multi-class calibration problems.

7 Appendix A

In this appendix, we sketch the proofs of the ECE and AUC bound theorems stated in Section 3 (Calibration Theorems). It may be helpful to review Section 2 (Notations and Assumptions) before reading the proofs.

7.1 ECE Bound Proof

Here we show that, using the histogram binning calibration method, ECE converges to zero as the number of calibration instances grows. Let us define $ECE_i$ as the expected calibration loss on bin $B_i$ for the histogram binning method; following the assumptions used for the MCE bound theorem in Section 3, $ECE_i = E\bigl[\,|o_i - e_i|\,\bigr]$. Also, using the definition of ECE and the notation in Section 2, we can rewrite ECE as a convex combination of the $ECE_i$'s. As a result, in order to bound ECE it suffices to show that each of its components is bounded. Recalling the concentration result proved in the MCE bound theorem, we have:

$$P\bigl(|o_i - e_i| \ge \varepsilon\bigr) \;\le\; 2\,e^{-2 n_i \varepsilon^{2}} \tag{6}$$

Let us also recall the following two identities:

Lemma 7.1.

If $X$ is a positive random variable, then $E[X] = \int_0^{\infty} P(X > t)\,dt$.

Lemma 7.2.

For any constant $c > 0$, $\int_0^{\infty} e^{-c\,t^{2}}\,dt = \tfrac{1}{2}\sqrt{\pi / c}$.

Now, using the concentration result in equation (6) and applying the two identities above, we can bound each component as $ECE_i = E\bigl[\,|o_i - e_i|\,\bigr] \le \int_0^{\infty} 2\,e^{-2 n_i t^{2}}\,dt = \sqrt{\pi/(2 n_i)} \le C/\sqrt{n_i}$, where $C$ is a constant. Finally, since ECE is a convex combination of the $ECE_i$'s, we can conclude that, using the histogram binning method, ECE converges to zero as the number of instances per bin (and hence the total number of calibration instances) grows.

7.2 AUC Bound Proof

Here we show that the worst-case AUC loss of the histogram binning calibration method converges to zero as the number of calibration instances grows. To prove the theorem, let us first recall the concentration results for $\hat{p}_i$ and $\hat{\pi}_i$. Using Hoeffding's inequality we have the following:

$$P\bigl(|\hat{p}_i - p_i| \ge \varepsilon\bigr) \;\le\; 2\,e^{-2 n_i \varepsilon^{2}} \tag{7}$$
$$P\bigl(|\hat{\pi}_i - \pi_i| \ge \varepsilon\bigr) \;\le\; 2\,e^{-2 N \varepsilon^{2}} \tag{8}$$

The above concentration inequalities show that, for each bin $B_i$, each of the following inequalities holds with probability at least $1-\delta$:

$$|\hat{p}_i - p_i| \;\le\; \sqrt{\frac{\ln(2/\delta)}{2\,n_i}} \tag{9}$$
$$|\hat{\pi}_i - \pi_i| \;\le\; \sqrt{\frac{\ln(2/\delta)}{2\,N}} \tag{10}$$

The above results show that, when a large amount of data is available, $\hat{p}_i$ is concentrated around $p_i$ and $\hat{\pi}_i$ is concentrated around $\pi_i$ with high probability.

Based on agarwal2006generalization , the empirical AUC of a classifier is defined as follows:

$$\widehat{AUC} \;=\; \frac{1}{N^{+} N^{-}} \sum_{i:\,y_i = 1}\;\sum_{j:\,y_j = 0} \Bigl( \mathbf{1}\bigl[z_i > z_j\bigr] + \tfrac{1}{2}\,\mathbf{1}\bigl[z_i = z_j\bigr] \Bigr), \tag{11}$$

where $N^{+}$ and $N^{-}$, as defined in Section 2 (Notations and Assumptions), are respectively the total number of positive and negative examples. Computing the expectation of equation (11) gives the actual AUC as follows:

$$AUC \;=\; P\bigl(z_i > z_j \mid y_i = 1,\, y_j = 0\bigr) \;+\; \tfrac{1}{2}\, P\bigl(z_i = z_j \mid y_i = 1,\, y_j = 0\bigr). \tag{12}$$

We note that, using McDiarmid's concentration inequality, it is also possible to show that the empirical AUC is highly concentrated around the true AUC agarwal2006generalization .

Recall that $\mathcal{Z}$ is the space of outputs $z$ of the base classifier, and $\hat{\mathcal{Z}}$ is the space of transformed probability estimates obtained using histogram binning. Assume $B_1, \dots, B_B$ are the non-overlapping bins defined on $\mathcal{Z}$ in the histogram binning approach, ordered from left to right. Also, assume $z_i$ and $z_j$ are the base classifier outputs for two different instances $x_i$ and $x_j$, where $y_i = 1$ and $y_j = 0$. In addition, assume $\hat{z}_i$ and $\hat{z}_j$ are, respectively, the transformed probability estimates for $z_i$ and $z_j$ obtained using the histogram binning method.

Now, using the above assumptions, we can write the AUC loss of the histogram binning method as follows:

$$AUC_{loss} \;=\; \Bigl[P\bigl(z_i > z_j\bigr) + \tfrac{1}{2}P\bigl(z_i = z_j\bigr)\Bigr] \;-\; \Bigl[P\bigl(\hat{z}_i > \hat{z}_j\bigr) + \tfrac{1}{2}P\bigl(\hat{z}_i = \hat{z}_j\bigr)\Bigr], \tag{13}$$

where all probabilities are conditioned on $y_i = 1$ and $y_j = 0$.

By partitioning the space of uncalibrated estimates according to the bins into which $z_i$ and $z_j$ fall, one can write $AUC_{loss}$ as

$$AUC_{loss} \;=\; \underbrace{\sum_{k \ne l} \Delta_{k,l}}_{\text{pairs falling in two different bins}} \;+\; \underbrace{\sum_{k} \Delta_{k,k}}_{\text{pairs falling in the same bin}}, \tag{14}$$

where $\Delta_{k,l}$ denotes the contribution to equation (13) from pairs with $z_i \in B_k$ and $z_j \in B_l$. Here we make the following reasonable assumption, which simplifies our calculations:

  • Assumption: $\hat{p}_k \ge \hat{p}_l$ whenever $k > l$; that is, the per-bin calibrated estimates are non-decreasing across the ordered bins.

Now we will show that the first summation part in equation (14) is less than or equal to zero, and that the second summation part goes to zero as the number of calibration instances grows.

First Summation Part

Recall that in the histogram binning method the calibrated estimate is $\hat{z} = \hat{p}_k$ whenever $z \in B_k$. Also, notice that if $z_i \in B_k$ and $z_j \in B_l$ with $k > l$, then under the assumption above we have $\hat{z}_i \ge \hat{z}_j$ for sure. Using these facts, we can rewrite the first summation part of equation (14) as a sum of per-bin-pair terms.

Next, by using Bayes' rule and omitting the common denominators among the terms, each per-bin-pair term reduces to an expression inside parentheses that involves only the quantities $p$, $\hat{p}$, $\pi$, and $\hat{\pi}$ defined in Section 2.

We next show that this term inside the parentheses is less than or equal to zero, using the i.i.d. assumption and the notation of Section 2. If $\hat{p}_k > \hat{p}_l$, the term is exactly zero; if $\hat{p}_k = \hat{p}_l$, the term inside the parentheses is bounded as follows:

(20)

where the last inequality holds with high probability, which follows from the concentration results for $\hat{p}$ and $\hat{\pi}$ in equations (7) and (8).

Second Summation Part

In the second summation part, $z_i$ and $z_j$ fall into the same bin and therefore receive the same calibrated estimate, $\hat{z}_i = \hat{z}_j$. Using this fact, the second summation part can be rewritten in terms of the per-bin quantities defined in Section 2.

Using Bayes' rule and the i.i.d. assumption on the data, we can rewrite this expression as follows:

(22)

where the last equality comes from the fact that the $\pi_i$'s are concentrated around their empirical estimates $\hat{\pi}_i$, which are equal to $1/B$ by construction (we build our histogram model based on equal-frequency bins).

Using the i.i.d. assumption about the calibration samples, we can rewrite equation (22) as follows:

(23)

where the last inequality follows because the two sequences of terms involved appear in completely reversed order with respect to each other, so that Chebyshev's sum inequality applies.

Theorem 7.1.

(Chebyshev's sum inequality) If $a_1 \ge a_2 \ge \dots \ge a_n$ and $b_1 \le b_2 \le \dots \le b_n$, then $\frac{1}{n}\sum_{i=1}^{n} a_i b_i \;\le\; \Bigl(\frac{1}{n}\sum_{i=1}^{n} a_i\Bigr)\Bigl(\frac{1}{n}\sum_{i=1}^{n} b_i\Bigr)$.

The facts proved above about the two summation parts, in equations (20) and (23), show that the worst-case $AUC_{loss}$ of the histogram binning calibration method is bounded above by a quantity that vanishes as the number of calibration instances grows.

  • It should be noted that the above proof shows that the worst-case AUC loss is bounded when a large number of calibration instances is available. However, it is even possible to gain AUC by using the histogram binning calibration method, as happened when we applied calibration to the linear SVM model on our simulated dataset.

References

  • (1) Shivani Agarwal, Thore Graepel, Ralf Herbrich, Sariel Har-Peled, and Dan Roth. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6(1):393, 2006.
  • (2) C.E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, pages 1152–1174, 1974.
  • (3) M. Ayer, H.D. Brunk, G.M. Ewing, W.T. Reid, and E. Silverman. An empirical distribution function for sampling with incomplete information. The Annals of Mathematical Statistics, pages 641–647, 1955.
  • (4) M.H. DeGroot and S.E. Fienberg. The comparison and evaluation of forecasters. The Statistician, pages 12–22, 1983.
  • (5) Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. New York: Springer, 1996.
  • (6) M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, pages 577–588, 1995.
  • (7) Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
  • (8) T.S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, pages 209–230, 1973.
  • (9) Jussi Klemela. Multivariate histograms with data-dependent partitions. Statistica Sinica, 19(1):159, 2009.
  • (10) K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational Dirichlet process mixtures. Advances in Neural Information Processing Systems, 19:761, 2007.
  • (11) S.N. MacEachern and P. Muller. Estimating mixture of Dirichlet process models. Journal of Computational and Graphical Statistics, pages 223–238, 1998.
  • (12) Uwe F Mayer and Armand Sarkissian. Experimental design for solicitation campaigns. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 717–722. ACM, 2003.
  • (13) Aditya Menon, Xiaoqian Jiang, Shankar Vembu, Charles Elkan, and Lucila Ohno-Machado. Predicting accurate probabilities with a ranking loss. arXiv preprint arXiv:1206.4661, 2012.
  • (14) A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 625–632, 2005.
  • (15) J. Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
  • (16) Clayton Scott and Robert Nowak. Near-minimax optimal classification with dyadic classification trees. Advances in neural information processing systems, 16, 2003.
  • (17) Bernard W Silverman. Density estimation for statistics and data analysis, volume 26. Chapman & Hall/CRC, 1986.
  • (18) L. Wasserman. All of nonparametric statistics. Springer, 2006.
  • (19) B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the 18th International Conference on Machine Learning, pages 609–616, 2001.
  • (20) B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699, 2002.
  • (21) Bianca Zadrozny and Charles Elkan. Learning and making decisions when costs and probabilities are both unknown. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 204–213. ACM, 2001.