THORS: An Efficient Approach for Making Classifiers Cost-sensitive

11/07/2018 ∙ by Ye Tian, et al.

In this paper, we propose an effective THresholding method based on ORder Statistics, called THORS, to convert an arbitrary scoring-type classifier (one that induces a continuous cumulative distribution function of the score) into a cost-sensitive one. The procedure uses order statistics to find an optimal classification threshold and requires almost no knowledge of the classifier itself. Unlike common data-driven methods, THORS comes with theoretical guarantees: we analytically derive bounds for its costs and show that it has lower time complexity than competing approaches. Coupled with empirical results on several real-world data sets, we argue that THORS is a preferred cost-sensitive technique.




1 Introduction

Classification is one of the most important tasks in machine learning and data mining. A classifier is usually trained from a set of training instances with discrete and finite class labels to predict the class labels of new instances. Many effective classification algorithms have been developed, such as linear algorithms (Balakrishnama and Ganapathiraju, 1998), neural networks (Krizhevsky et al., 2012), Bayesian classifiers (McCallum et al., 1998), decision trees (Safavian and Landgrebe, 1991), and instance-based classifiers (Sheng and Ling, 2009). However, most currently available algorithms implicitly assume that all errors are equally costly, which may be inadequate for problems with varying misclassification costs (Domingos, 1999). In many KDD applications, different types of errors carry different costs. For example, in fraud detection, undetected frauds with high transaction amounts are obviously more costly (Fan et al., 1999; Zonneveldt et al., 2010). Likewise, in medical diagnosis, it is far more serious to diagnose someone with a life-threatening disease as healthy than to diagnose someone healthy as ill (Tong et al., 2018; Viaene and Dedene, 2005). As a result, much recent work on cost-sensitive learning seeks to minimize total misclassification cost rather than error rate. Sheng and Ling (2009) divide the existing cost-sensitive algorithms into two categories: one designs classifiers that are cost-sensitive in themselves (Chai et al., 2004; Drummond and Holte, 2000; Turney, 1994); the other designs a "wrapper" that converts any existing cost-insensitive classifier into a cost-sensitive one, called cost-sensitive meta-learning or the wrapper method (Domingos, 1999; Elkan, 2001; Fan et al., 1999; Sheng and Ling, 2006; Sun et al., 2007; Ting, 1998; Witten et al., 2016; Zadrozny and Elkan, 2001; Zhao, 2008). Our work belongs to the second category.

The wrapper method can be further categorized into thresholding, sampling, and weighting (Sheng and Ling, 2006). Thresholding finds the probability (or other score) that minimizes the total misclassification cost on the training instances and uses it as the threshold to predict the class labels of test instances. Metacost (Domingos, 1999) is a thresholding method. It first learns a classifier on each of multiple bootstrap replicates of the training set to obtain reliable probability estimates of the training instances by voting, then relabels each training instance with its estimated minimal-cost class, and finally uses the relabeled training instances to build a cost-sensitive classifier. Metacost can be applied to multi-class problems and to arbitrary cost matrices. Instead of relabeling the training instances, the Cost Sensitive Classifier (CSC) (Witten et al., 2016) relabels the test instances. Elkan (2001) derives the theoretical threshold for optimal cost-sensitive classification in the two-class case, and argues that changing the balance of negative and positive training instances has little effect on the classifier learned by standard decision tree learning methods. Noting that accurate probability estimation is crucial in thresholding-based meta-learning, Zadrozny and Elkan (2001) propose several methods to improve the calibration of probability estimates, while Sheng and Ling (2006) develop an empirical thresholding method that does not require accurate probability estimates. Alternatively, sampling methods modify the class distribution of the training data and then apply a cost-insensitive classifier to the sampled data directly. Weighting (Ting, 1998) can be viewed as a sampling method in which the different types of instances in the training data are weighted according to the misclassification costs during classifier learning, so that the classifier strives to make fewer errors of the more costly type, resulting in lower overall cost. Other weighting-based methods include Sun et al. (2007); Zadrozny et al. (2003); Zhao (2008).
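As an illustration of how weighting relates to sampling, the cost-proportionate rejection sampling step of Zadrozny et al. (2003) can be sketched in a few lines. This is a minimal sketch rather than the original implementation; the function name and the per-example cost encoding are our own.

```python
import random

def cost_proportionate_rejection_sample(examples, costs, seed=0):
    """Keep each training example independently with probability cost / Z,
    where Z is the maximum per-example cost. Training a standard
    cost-insensitive learner on the accepted examples then mimics
    training on a cost-weighted distribution."""
    rng = random.Random(seed)
    z = max(costs)
    return [ex for ex, c in zip(examples, costs) if rng.random() < c / z]
```

Examples whose cost equals Z are always kept, so the costly minority class survives intact while cheap majority-class examples are thinned out.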

While a plethora of cost-sensitive methods have been investigated, several issues remain. First, the existing algorithms are typically supported by empirical comparisons rather than theoretical properties; theoretical error bounds for the costs are rarely established for cost-sensitive classifiers. Moreover, common practices that directly constrain the empirical false negative rate to a specified level often produce classifiers whose true false negative rate is much larger. Second, time complexity is an important concern, since existing re-sampling and weighting methods are computationally involved. We propose a THresholding method based on ORder Statistics, named THORS, to convert an arbitrary scoring-type classifier, one that induces a continuous cumulative distribution function of scores, into a cost-sensitive one. It uses the order statistics of the classification scores on a validation set to build an optimal classification threshold, instead of estimating an optimal threshold probability. Because THORS does not need to re-sample the training instances, we can analytically establish the existence of the optimal threshold and derive error bounds for the costs. THORS also has lower time complexity than existing popular cost-sensitive classifiers. It usually yields smaller total cost than empirical approaches, Metacost, and Cost-proportionate Rejection Sampling (CRS), even on heavily imbalanced data sets.

The remainder of the paper is organized as follows. In Sections 2 and 3, we describe the THORS algorithm in detail and establish its theoretical properties. In Section 4, we evaluate our method on three real data sets against existing methods, including theoretical thresholding, empirical thresholding, and the meta-learning algorithms Metacost and CRS. We conclude by summarizing the main findings and outlining future research in Section 5.

2 The THORS Algorithm

The theory of cost-sensitive learning presented by Elkan (2001) and Sheng and Ling (2009) describes how the misclassification cost plays its role in various cost-sensitive algorithms. Without loss of generality, we assume binary classification (i.e., positive and negative classes) in this paper, where the objective is to predict the value of a binary dependent variable, referred to as the class, based on a vector of independent variables (also called attributes or features). In THORS, the full data set is divided into a training set and a validation set, which are used to train the classifier and to find the optimal threshold, respectively. The scoring function is trained on the training set and assigns a classification score to each observation; the class label is predicted by whether the score exceeds a threshold. Most popular classification methods are of this type, including SVMs, Naïve Bayes, logistic regression, and neural networks. The classification scores can be strict probabilities or uncalibrated numeric values, as long as a higher score indicates a higher probability that an observation belongs to the positive class. The optimal threshold is selected as the minimizer of the estimated expected misclassification cost in (2) on the validation set, whereas empirical methods usually use cross-validation to search for the best threshold value on the training data.

2.1 Cost Matrix

In cost-sensitive learning, the costs of a false positive (actual negative but predicted positive; denoted FP), a false negative (FN), a true positive (TP), and a true negative (TN) can be given in a cost matrix, as shown in Table 1, where c(i, j) denotes the cost of classifying an instance into class i when it actually belongs to class j (i, j ∈ {0, 1}). The cost of a TP or TN is usually regarded as a "benefit" (i.e., a negated cost), since the instance is predicted correctly.

Actual \ Predicted    Negative        Positive
Negative              c(0,0) (TN)     c(1,0) (FP)
Positive              c(0,1) (FN)     c(1,1) (TP)
Table 1: Cost Matrix of All the Instances in a Binary Classification

Usually, the minority class is regarded as the positive class, and it is often more expensive to misclassify an actual positive instance as negative than an actual negative instance as positive. That is, the FN cost c(0,1) is usually larger than the FP cost c(1,0). This holds for the fraud detection example mentioned earlier (fraudulent transactions are rare, but predicting an actual fraud as negative is usually more costly) and for the medical diagnosis example. Without loss of generality, we consider the case where c(0,0) = c(1,1) = 0 and c(0,1) > c(1,0) > 0. Under this setting, the expected cost of classifying an instance is

EC = c(1,0) · π0 · FPR + c(0,1) · π1 · FNR,    (1)
where π0 and π1 are the marginal probabilities of class 0 and class 1, FPR is the probability of classifying an instance into class 1 when it actually belongs to class 0 (the False Positive Rate), and FNR is the probability of classifying an instance into class 0 when it actually belongs to class 1 (the False Negative Rate). The classification cost is thus a weighted sum of FNR and FPR. In practice, both FPR and FNR are unknown; they are estimated on the validation set and plugged into (1). With the corresponding estimates π̂0, π̂1, FPR̂, and FNR̂, we obtain the estimated expected cost

ÊC = c(1,0) · π̂0 · FPR̂ + c(0,1) · π̂1 · FNR̂.    (2)
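In code, the estimated expected cost is simply a cost- and prior-weighted combination of the two empirical error rates; a minimal helper (the function name is illustrative) makes this explicit.

```python
def expected_cost(fpr, fnr, pi0, pi1, c_fp, c_fn):
    """Expected per-instance misclassification cost as in (1)/(2):
    the FPR is weighted by the class-0 prior and the FP cost, and the
    FNR by the class-1 prior and the FN cost."""
    return c_fp * pi0 * fpr + c_fn * pi1 * fnr
```

For instance, with class priors 0.9/0.1, costs 1 and 500, FPR 5%, and FNR 10%, the expected cost per instance is 1·0.9·0.05 + 500·0.1·0.10 = 5.045.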
2.2 Thresholding on Order Statistic

First we split the full data into a training set and a validation set. The training set is used to train the classifier, and the validation set is used to find the optimal threshold. Intuitively, the optimal threshold splits the classification scores into two parts and yields the minimal estimated expected misclassification cost (2) on the validation set. Our proposed THORS algorithm picks one of the validation scores, an order statistic of the scores, as the optimal threshold so as to minimize the total misclassification cost on the validation set. Recently, Tong et al. (2018) showed that a binary classifier that chooses an order statistic as its threshold guarantees the desired high-probability control of the type I error. We prove that the error rates and total misclassification cost of the THORS algorithm are similarly bounded.

For a given binary classifier, a new instance with predictor vector x is predicted as positive (class 1) if its score s(x) exceeds the threshold t*, and otherwise is classified as negative (class 0), that is,

ŷ(x) = 1 if s(x) > t*, and ŷ(x) = 0 otherwise,    (3)

where t* is the optimal threshold to be learned from the validation set, and the score function s is learned from the training set; s(x) is the score of the instance with feature vector x under the classifier. Applying the classifier to a validation set of size n containing n0 class-0 and n1 class-1 instances, we obtain the sorted scores s(1) ≤ s(2) ≤ … ≤ s(n); the scores of the class-0 and class-1 instances are denoted analogously. Then we have the following theorem for selecting the optimal threshold.

Theorem 2.1.

Let π̂0 = n0/n and π̂1 = n1/n; π̂0 and π̂1 are estimates of the marginal probabilities of class 0 and class 1 in the population. Then the optimal threshold can be decided by minimizing the estimated expected cost (2) on the validation set over the order statistics s(k) of the scores,

ÊC(k) = c(1,0) · π̂0 · FPR̂(s(k)) + c(0,1) · π̂1 · FNR̂(s(k)).    (4)

That is, the optimal threshold is t* = s(k̂), where

k̂ = argmin over k ∈ {1, …, n} of ÊC(k).    (5)
It is worth noting that Theorem 2.1 does not rely on any distributional assumptions or on characteristics of the base algorithm. Moreover, Theorem 2.1 shows that once the rank k is fixed, the empirical error rates are fixed as well. We can therefore evaluate (4) for each choice of k on the validation set and choose the rank that minimizes (5) in nearly linear time, which is much more efficient than most empirical methods.

We summarize the THORS method as Algorithm 1.

Input:
      D_val: Validation set of size n, containing n0 class-0 and n1 class-1 instances
      s(·): Classification score function learned on the training set
Output:
      t*: Estimated optimal threshold
Other parameters:
      x_i: Feature vector of instance i
      x_(k): Feature vector corresponding to the k-th order statistic of the scores
      y_(k): True class of the k-th order statistic

 1:  compute the score s(x_i) for each instance in D_val
 2:  sort the scores to obtain s(1) ≤ s(2) ≤ … ≤ s(n)
 3:  initialize the FP count to n0 and the FN count to 0
 4:  for k in 1, …, n do
 5:     if y_(k) = 0 then
 6:        decrement the FP count
 7:     else
 8:        increment the FN count
 9:     end if
10:     record the estimated expected cost ÊC(k) as in (4)
11:  end for
12:  k̂ ← argmin over k of ÊC(k)
13:  threshold t* ← s(k̂)
14:  return t*
Algorithm 1 THORS algorithm
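A compact Python sketch of Algorithm 1 (assuming NumPy; the function name and interface are ours, not the paper's): the validation scores are sorted once, and the empirical FP and FN counts are updated incrementally while the sorted scores are swept as candidate thresholds, so the whole search runs in O(n log n).

```python
import numpy as np

def thors_threshold(scores, labels, c_fp, c_fn):
    """Select a classification threshold as an order statistic of the
    validation scores, minimizing the estimated expected cost (2).
    Instances with score <= threshold are predicted negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n = len(scores)
    n0 = int(np.sum(labels == 0))
    n1 = n - n0
    pi0, pi1 = n0 / n, n1 / n

    order = np.argsort(scores)
    sorted_scores = scores[order]
    sorted_labels = labels[order]

    best_cost, best_t = float("inf"), sorted_scores[0]
    fp = n0        # threshold below all scores: every class-0 instance is a FP
    fn = 0         # ...and no class-1 instance is a FN
    for k in range(n):
        # Raise the threshold to the k-th order statistic.
        if sorted_labels[k] == 0:
            fp -= 1
        else:
            fn += 1
        cost = c_fp * pi0 * (fp / n0) + c_fn * pi1 * (fn / n1)
        if cost < best_cost:
            best_cost, best_t = cost, sorted_scores[k]
    return best_t, best_cost
```

On a toy validation set whose class-1 scores all exceed the class-0 scores, the returned threshold separates the two classes at zero estimated cost.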

3 Properties

For the THORS method described in the previous section, we can show that its FPR, FNR, and expected misclassification cost are bounded, as stated in the following theorems. These properties hold for any scoring-type classifier that induces a continuous cumulative distribution function of the score; in the discontinuous case, the properties hold approximately.

3.1 Theoretical Upper Bound

Under Algorithm 1, let s(x) be the score of a new instance with class label y and features x under the classifier, with corresponding FNR and FPR. Because the threshold we choose is itself an order statistic of the validation scores, both error rates are random variables. Noting that the validation set is fixed in the computation of the conditional probabilities, we have the following result on the cumulative distribution functions of FNR and FPR.

Theorem 3.1.

Let FNR and FPR be the false negative and false positive rates of classifying a new instance with true class label y using the classifier produced by the THORS algorithm. When the distribution of the score is continuous, for any ε > 0, the cumulative distribution functions of FNR and FPR satisfy distribution-free bounds,

where the numbers of class 0 and class 1 instances in the validation set are denoted n0 and n1, and π̂0, π̂1 are defined as in Theorem 2.1. That is, the bounds of the cumulative distribution functions of FNR and FPR depend only on n0 and n1, which are fixed for a given validation set.
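The distribution-free character of these bounds can be illustrated with a small Monte Carlo experiment. For simplicity we threshold at the k-th order statistic of the class-0 scores alone (a simplification for illustration, not the full THORS rule). By the probability integral transform, the resulting FPR then has a Beta(n0 − k + 1, k) distribution for any continuous score CDF, so its mean (n0 − k + 1)/(n0 + 1) depends only on n0 and k.

```python
import random

def simulate_fpr(n0, k, trials=5000, seed=0):
    """Monte Carlo mean of the FPR when the threshold is the k-th order
    statistic of n0 i.i.d. continuous class-0 scores. By the probability
    integral transform the scores may be taken Uniform(0, 1), in which
    case FPR = 1 - t, with t the k-th smallest score."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        scores = sorted(rng.random() for _ in range(n0))
        total += 1.0 - scores[k - 1]  # FPR = P(score > t) under Uniform(0, 1)
    return total / trials
```

With n0 = 99 and k = 90, the simulated mean FPR is close to (99 − 90 + 1)/100 = 0.1, whatever continuous distribution generated the scores.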

According to Theorem 3.1, we have the following high probability upper bound for the expected cost on new instances.

Theorem 3.2.

Let C be the expected misclassification cost of the THORS algorithm on new instances. Then there exist constants such that

where the constants satisfy

Here m denotes the number of new instances, and n0, n1 are defined as in Theorem 3.1.

3.2 Time Complexity

For the empirical method, the time complexity scales with the ratio of the search range (the interval searched for the optimal threshold) to the search precision; it also depends on a constant related to the type of classifier and on the size of the validation set used to estimate the threshold (Sheng and Ling, 2006). The time complexity of Metacost (Domingos, 1999) depends on the number of resampled instances to generate, the size of the training set, and a constant related to the classifier and the sampling algorithm. For Cost-proportionate Rejection Sampling (CRS) (Zadrozny et al., 2003), the complexity depends on the acceptance probability and on constants related to the performance of the classifier itself; to obtain good performance, these quantities cannot be chosen too small, leading to high time complexity. Our numerical studies in the next section show that THORS finds the optimal threshold much faster than the empirical method and Metacost. Although the time complexity of CRS is also small, its performance on real data is much worse than that of THORS, as the following section will show.

Theorem 3.3.

The time complexity of the THORS algorithm is O(c1 · n log n + c2 · n), where c1 and c2 are constants related to the sorting method and to the classifier itself, respectively (c1 is very small), and n is the size of the validation set.

3.3 Short Theoretical Bounds of Misclassification Cost Expectation

We will see that, with high probability, the expected misclassification cost lies in a short interval when the validation set is large enough.

Theorem 3.4.

The expected misclassification cost is bounded by constants; that is,

In particular, the cost is upper-bounded:

where m is the size of the test set.

When m is large enough, the slack term can be taken small, so the length of the interval is relatively small and the upper bound approaches the least upper bound.
Furthermore, the following theorem is useful for estimating the size of a validation set to control expected cost with given precision.

Theorem 3.5.

Let n0 and n1 be the numbers of class 0 and class 1 instances in the validation set, let ÊC(t) and EC(t) be the empirical expected cost per sample on the validation set and the expected cost per sample in the population when the threshold is t, respectively, and denote by FPR̂, FNR̂ the empirical FPR and FNR on the validation set and by FPR, FNR their population counterparts, based on the optimal threshold obtained by THORS. If the following four assumptions hold:
(A1) the distribution of the score is continuous;
(A2) ÊC(t) → EC(t) uniformly in t for fixed costs, as n → ∞;
(A3) FPR̂ → FPR and FNR̂ → FNR, as n → ∞;
(A4) EC(t) has a unique minimal point at which it reaches its minimum,
where π̂0 and π̂1 are defined as in Theorem 2.1, then

as n → ∞.

It is easy to see that assumption (A1) always holds for a scoring-type classifier with a continuous score distribution. Moreover, it is natural to assume that the empirical cost and error rates on the validation set converge in probability to their population counterparts, so assumptions (A2) and (A3) are reasonable. Assumption (A4) depends on the problem itself; most well-defined problems satisfy it, and it guarantees that our solution is stable. In (18) and (19), the class proportions n0/n and n1/n can be regarded as constant as n goes to infinity if these assumptions hold. Then, given the desired bound, the required validation set size n can be solved from (13) by setting (13) equal to a fixed number between 0 and 1. This size is a conservative estimate of the minimal validation set size needed to control the cost with a fixed probability. We will compute it for specific data sets in the following section.

4 Case Studies

In this section, we apply THORS to three real data sets from the UCI Machine Learning Repository, whose imbalance rates decrease from 59:1 to 1.84:1, and compare the results with other thresholding and meta-learning methods, reporting both the total cost on the test set and the average running time on an 8 GB RAM laptop with an Intel® Core™ i5-6300U CPU. The results show that THORS outperforms the alternatives even when the data set is heavily imbalanced.

4.1 Scania Trucks Data

We implement THORS on the Scania trucks data set from the UCI Machine Learning Repository. It records 60,000 component failures, of which 1,000 concern a specific component of the APS system and 59,000 are unrelated to the APS. To formulate a cost-sensitive classification problem, we denote APS-related failures as class 1 (positive) and unrelated ones as class 0 (negative). In this case, an FP incurs the cost of an unnecessary check by a mechanic at a workshop, while an FN incurs the cost of missing a faulty truck, which may cause a breakdown. The costs for FN and FP are set to 500 and 1, respectively. The imbalance rate here is 59:1, a heavily imbalanced case. There are 171 attributes for each observation. We pre-process the original data before classification, selecting the 10 most prominent attributes for training the classifier via the ANOVA F-value. The data is then divided into three parts: training set, validation set, and test set. The base classifiers we choose for this problem are logistic regression with cost weighting (Logit), decision tree (DT, combined with AdaBoost), Naïve Bayes (NB), and linear discriminant analysis (LDA). Among the 60,000 instances, 24,000 observations are used to train the base classifier and 24,000 to choose the optimal threshold; the remaining 12,000 observations form the test set. For comparison, besides our algorithm we also run two other thresholding methods, the empirical method (Sheng and Ling, 2006) and theoretical thresholding (Elkan, 2001), as well as the meta-learning methods Metacost (Domingos, 1999) and Cost-proportionate Rejection Sampling (CRS) (Zadrozny et al., 2003). The null model (default base classifier) is used as the baseline. Each algorithm is run 20 times, and we report average costs with standard deviations, performance comparisons, and running time for each algorithm on each classifier.

Table 2: Average Costs and Standard Deviations for Each Algorithm on Each Classifier for Trucks Data (columns: THORS, Null, Theoretical, Empirical, Metacost, CRS)

Figure 1: Box-plots of Costs on Test Data by Different Methods for Trucks Data

(An entry w/l means THORS wins w times and loses l times)

Base Classifier   Theoretical   Null   Empirical   Metacost   CRS
Logit             20/0          20/0   15/5        20/0       20/0
DT                20/0          20/0   10/10       20/0       20/0
NB                20/0          20/0   20/0        8/12       20/0
LDA               20/0          20/0   20/0        20/0       20/0

Table 3: Summary of the Experimental Results for Trucks Data

We list the average costs and their standard deviations for each algorithm on each classifier in Table 2, from which we can see that for the Logit, DT, and LDA classifiers, THORS attains the lowest average cost with small deviation among the methods. Figure 1 presents box-plots of the cost for each approach on the different classifiers. The box-plot of THORS is consistently at the bottom, meaning that THORS is always among the best algorithms across classifiers. Table 3 reports the detailed comparison results: THORS wins at least half of the 20 rounds in almost all cases (except against Metacost on the NB classifier), beating the other strategies.

Base Classifier   THORS   Empirical   Metacost   CRS
Logit             1.60    17.61       6.92       0.27
DT                2.93    45.60       33.06      0.94
NB                1.33    11.77       0.71       0.07
LDA               1.60    13.25       1.98       0.10
Table 4: Average Running Time of Some Methods for Trucks Data (unit: s)
Figure 2: Size of Validation Set and Upper Bound of Cost for Trucks Data

The average running time for one round of each algorithm is listed in Table 4, and the relationship between the estimated minimal validation set size (after a logarithmic transform) and the 95% upper bound of the cost is shown in Figure 2. From Table 4, THORS is consistently more time-economical than Metacost and the empirical method. Although CRS always takes the least time due to its simple re-sampling procedure, its performance is always much worse than that of THORS. Finally, Figure 2 shows that the 95% upper bound can be controlled for all four classifiers using the present 24,000 instances in the validation set. The four curves are approximately straight lines, indicating an exponential relation between the upper bound and the validation set size, and revealing a trade-off between increasing sample size and a tighter upper bound. Under the same upper bound, there is no obvious difference among the four classifiers in the validation set size that THORS requires.

4.2 Income Data

The second data set is the Adult data set from the UCI Machine Learning Repository, containing 32,561 income observations. The samples belong to two classes: 24,720 people whose income is below 50,000 (class 0, negative) and 7,841 whose income is over 50,000 (class 1, positive). The imbalance rate here is 3.15:1, much smaller than in the first data set. We again select 10 of the 14 available attributes as predictors via the ANOVA F-value for training the base classifiers. To create a cost-sensitive problem, we set the costs for FN and FP to 100 and 10, respectively. Among the 32,561 observations, 13,025 instances are used to train the base models, 13,024 for thresholding, and the remaining 6,512 for evaluating the performance of the approaches. The base classifiers are logistic regression with cost weighting (Logit), Naïve Bayes (NB), linear discriminant analysis (LDA), and random forest (RF). We again compare the performance of THORS with the other thresholding schemes (the empirical thresholding method and theoretical thresholding) and the meta-learning strategies Metacost and CRS; each algorithm is run 20 times.

Table 5: Average Costs and Standard Deviations for Each Algorithm on Each Classifier for Income Data (columns: THORS, Null, Theoretical, Empirical, Metacost, CRS)
Figure 3: Box-plots of Costs by Different Thresholding Methods

(An entry w/l means our approach wins w times and loses l times)

Base Classifier   Theoretical   Null   Empirical   Metacost   CRS
Logit             20/0          12/8   20/0        18/2       20/0
LDA               18/2          20/0   20/0        20/0       20/0
NB                20/0          20/0   20/0        20/0       20/0
RF                17/3          20/0   13/7        11/9       20/0

Table 6: Summary of the Experimental Results for Income Data

Table 5 shows the average costs and corresponding standard deviations on the test set for the various thresholding and meta-learning approaches, and the win/loss comparisons are summarized in Table 6. THORS beats the other methods in more than half of the 20 rounds in every case, and even wins all 20 rounds against every other algorithm for the NB base classifier. For all four classifiers, THORS attains the minimal average cost with small deviation, showing its power. Box-plots for the different approaches are shown in Figure 3; the box-plot of THORS always stays near the bottom, indicating a low cost on the test set.

Base Classifier   THORS   Empirical   Metacost   CRS
Logit             1.01    15.88       41.51      0.66
LDA               0.98    13.11       27.88      0.46
NB                0.83    10.94       0.43       0.05
RF                1.32    54.39       13.84      0.53
Table 7: Average Running Time of Some Methods (unit: s)
Figure 4: Minimal Size of Validation Set under a Specific 95% Upper Bound

Table 7 gives the average running time of a single round for the various cost-sensitive algorithms, again showing that THORS is very efficient. Figure 4 presents the relation between the 95% upper bound and the conservative estimate of the minimal validation set size. We see the same approximately linear relation between the logarithmic size and the upper bound as in the previous case. The expected cost can be controlled under 1.3 with probability 95% for the present validation set size, and the required size grows rapidly as the upper bound decreases. This again illustrates the trade-off noted for the trucks data set.

4.3 Telescope Data

In this case, we investigate the MAGIC gamma telescope data from the UCI Machine Learning Repository. The 19,020 instances are divided into two classes: 12,332 class-0 samples and 6,688 class-1 samples, for an imbalance rate of 1.84:1. Besides the class label, there are 10 attributes, which serve as predictors in the models. To formulate a cost-sensitive problem, we set the costs to 100 for a false negative and 20 for a false positive. Among the 19,020 instances, 7,608 observations are used to train the base classifier (the training set), 7,608 to choose the optimal threshold (the validation set), and the remaining 3,804 form the test set. The base classifiers include logistic regression with cost weighting (Logit), linear discriminant analysis (LDA), Naïve Bayes (NB), and random forest (RF). As with the previous two data sets, we compare the results of THORS with the other thresholding methods (the null model, the theoretical method, and the empirical method) and with the meta-learning approaches Metacost and CRS; each algorithm is run for 20 rounds.

Table 8: Average Costs and Standard Deviations for Each Algorithm on Each Classifier for Telescope Data (columns: THORS, Null, Theoretical, Empirical, Metacost, CRS)
Figure 5: Box-plots of Costs by Different Thresholding Methods for Telescope Data

(An entry w/l means our approach wins w times and loses l times)

Base Classifier   Theoretical   Null   Empirical   Metacost   CRS
Logit             20/0          13/7   20/0        20/0       20/0
LDA               9/11          20/0   16/4        19/1       16/4
NB                20/0          20/0   13/7        20/0       20/0
RF                7/13          20/0   20/0        20/0       19/1

Table 9: Summary of the Experimental Results for Telescope Data

Table 9 shows that in almost all cases THORS defeats the other methods in more than half of the 20 rounds (the exceptions are the theoretical method with the LDA and RF classifiers). From Table 8, the difference in average cost between THORS and the theoretical method for the LDA and RF classifiers is very small. Figure 5 likewise shows that the box-plot of THORS always stays at the bottom of the figure.

Base Classifier   THORS   Empirical   Metacost   CRS
Logit             0.55    11.45       14.74      0.56
LDA               0.50    9.51        9.93       0.39
NB                0.48    9.10        7.52       0.30
RF                0.77    37.98       24.85      0.34
Table 10: Average Running Time of Some Methods (unit: s)
Figure 6: Minimal Size of Validation Set under a Specific 95% Upper Bound

Table 10 lists the average running times of each algorithm, from which we again observe that THORS is very efficient. Figure 6 shows that the expected cost can be controlled with probability 95% for the present validation set size. Under the same ratio between the upper bound and the cost, the RF classifier appears to need somewhat more validation samples than the other three base classifiers.

5 Discussion

We have proposed an effective and efficient thresholding algorithm, THORS, which converts an arbitrary scoring-type classifier with a continuous score distribution into a cost-sensitive one. The idea of classifying with an order-statistic threshold is intuitive and simple, and THORS delivers excellent performance in terms of both cost and time complexity, typically achieving the lowest cost in a short time among the algorithms compared. Moreover, unlike other popular cost-sensitive methods, THORS comes with several theoretical properties, including bounds on the expected cost and an asymptotic bound, which can also be used to estimate the validation set size needed to control the expected cost with a specified probability. Finally, THORS achieves drastic savings in computational resources owing to its low time complexity, which is desirable for applications involving massive amounts of data.

One direction for future work is the application of THORS to multiclass cost-sensitive problems. Here we sketch a simple idea for extending THORS to multiclass cases. Suppose we have K classes with decreasing misclassification costs; we then seek an optimal threshold vector of length K − 1. The full data is again divided into a training set and a validation set, used to train the base models and to find the optimal thresholds, respectively. First, we sort the classes by decreasing misclassification cost. Next, we apply THORS on the validation set for the most costly class; the estimated expected cost on the validation set is defined analogously to (2), and the order statistic of the scores that minimizes the empirical expected cost is picked as the first threshold. We then apply THORS again to the remaining unlabeled observations in the validation set to get the second threshold, and so on until all K − 1 thresholds are determined. With this threshold vector, we construct the following cost-sensitive classifier: given a new data point, we first decide whether it belongs to the most costly class on the basis of the first threshold. If it does not, we compute the score estimating its probability of belonging to the next class and assign it to that class if the score exceeds the corresponding threshold. The process continues until the instance is labeled. Another interesting problem is the case where the cumulative distribution function of the score is not continuous; other arguments may yield comparable conclusions in the discontinuous case. Moreover, THORS can be combined with other approaches such as resampling, which may lead to better performance on imbalanced problems.
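The sequential extension sketched above can be prototyped as follows. This is a hedged illustration under simplifying assumptions: one-vs-rest scores per class, a unit cost for wrongly capturing an instance, and illustrative function names; none of these details is specified in the paper.

```python
import numpy as np

def one_vs_rest_threshold(scores, is_class, c_miss, c_false=1.0):
    """Pick the score (an order statistic) minimizing an empirical cost:
    c_miss per class member with score <= t, c_false per non-member with
    score > t."""
    best_cost, best_t = float("inf"), -np.inf
    for t in np.sort(scores):
        miss = np.sum(is_class & (scores <= t))
        false = np.sum(~is_class & (scores > t))
        cost = c_miss * miss + c_false * false
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t

def sequential_thresholds(scores_by_class, y_val, costs_desc):
    """Process classes in decreasing cost order; at each step, threshold
    the one-vs-rest scores of the instances still unlabeled, then remove
    the instances captured by the new threshold. Returns K - 1 thresholds
    as (class, threshold) pairs."""
    remaining = np.ones(len(y_val), dtype=bool)
    thresholds = []
    for cls, c_miss in costs_desc[:-1]:
        s = scores_by_class[cls][remaining]
        t = one_vs_rest_threshold(s, y_val[remaining] == cls, c_miss)
        thresholds.append((cls, t))
        idx = np.where(remaining)[0]
        remaining[idx[s > t]] = False  # captured instances are labeled cls
    return thresholds
```

On a toy three-class problem with well-separated one-vs-rest scores, the two returned thresholds capture the costly classes first; labeling a new point then proceeds class by class, exactly as in the textual description above.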


Acknowledgments

This work is supported by the National Key Research and Development Plan (No. 2016YFC0800100) and the NSF of China (No. 11671374, 71771203).


  • Balakrishnama and Ganapathiraju (1998) Balakrishnama, S. and Ganapathiraju, A. (1998). Linear discriminant analysis-a brief tutorial. Institute for Signal and information Processing, 18:1–8.
  • Bennett (1962) Bennett, G. (1962). Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45.
  • Chai et al. (2004) Chai, X., Deng, L., Yang, Q., and Ling, C. X. (2004). Test-cost sensitive naive bayes classification. In Proceedings of the Fourth IEEE International Conference on Data Mining (ICDM), pages 51–58. IEEE.
  • Domingos (1999) Domingos, P. (1999). Metacost: A general method for making classifiers cost-sensitive. In Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 155–164. ACM.
  • Drummond and Holte (2000) Drummond, C. and Holte, R. C. (2000). Exploiting the cost (in) sensitivity of decision tree splitting criteria. In ICML, volume 1.
  • Elkan (2001) Elkan, C. (2001). The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, volume 17, pages 973–978. Lawrence Erlbaum Associates Ltd.
  • Fan et al. (1999) Fan, W., Stolfo, S. J., Zhang, J., and Chan, P. K. (1999). Adacost: misclassification cost-sensitive boosting. In Icml, pages 97–105.
  • Hoeffding (1963) Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American statistical association, 58(301):13–30.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105.
  • McCallum et al. (1998) McCallum, A., Nigam, K., et al. (1998). A comparison of event models for naive bayes text classification. In AAAI-98 workshop on learning for text categorization, volume 752, pages 41–48. Citeseer.
  • Safavian and Landgrebe (1991) Safavian, S. R. and Landgrebe, D. (1991). A survey of decision tree classifier methodology. IEEE transactions on systems, man, and cybernetics, 21(3):660–674.
  • Sheng and Ling (2006) Sheng, V. S. and Ling, C. X. (2006). Thresholding for making classifiers cost-sensitive. In AAAI, pages 476–481.
  • Sheng and Ling (2009) Sheng, V. S. and Ling, C. X. (2009). Cost-sensitive learning. In Encyclopedia of Data Warehousing and Mining, Second Edition, pages 339–345. IGI Global.
  • Sun et al. (2007) Sun, Y., Kamel, M. S., Wong, A. K., and Wang, Y. (2007). Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12):3358–3378.
  • Ting (1998) Ting, K. M. (1998). Inducing cost-sensitive trees via instance weighting. In European Symposium on Principles of Data Mining and Knowledge Discovery, pages 139–147. Springer.
  • Tong et al. (2018) Tong, X., Feng, Y., and Li, J. J. (2018). Neyman-pearson classification algorithms and np receiver operating characteristics. Science advances, 4(2):eaao1659.
  • Turney (1994) Turney, P. D. (1994). Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm. Journal of artificial intelligence research, 2:369–409.
  • Viaene and Dedene (2005) Viaene, S. and Dedene, G. (2005). Cost-sensitive learning and decision making revisited. European journal of operational research, 166(1):212–220.
  • Witten et al. (2016) Witten, I. H., Frank, E., Hall, M. A., and Pal, C. J. (2016). Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann.
  • Wu (2005) Wu, S. (2005). Some results on extending and sharpening the weierstrass product inequalities. Journal of mathematical analysis and applications, 308(2):689–702.
  • Zadrozny and Elkan (2001) Zadrozny, B. and Elkan, C. (2001). Learning and making decisions when costs and probabilities are both unknown. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 204–213. ACM.
  • Zadrozny et al. (2003) Zadrozny, B., Langford, J., and Abe, N. (2003). Cost-sensitive learning by cost-proportionate example weighting. In Data Mining, 2003. ICDM 2003. Third IEEE International Conference on, pages 435–442. IEEE.
  • Zhao (2008) Zhao, H. (2008). Instance weighting versus threshold adjusting for cost-sensitive classification. Knowledge and Information Systems, 15(3):321–334.
  • Zonneveldt et al. (2010) Zonneveldt, S., Korb, K., and Nicholson, A. (2010). Bayesian network classifiers for the german credit data. Technical report 2010/1, Bayesian Intelligence. http://www.bayesian-intelligence.com/publications.php.

Appendix: Proofs

Proof of Theorem 1


By (3) and the relation between the threshold and the order statistic illustrated in Theorem 2.1, if we choose the order statistic as the threshold, class 0 observations scored above it will be classified into class 1, and class 1 observations scored below it will be classified into class 0. Then the empirical False Negative Rate (FNR) and the empirical False Positive Rate (FPR) on the validation set satisfy
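The displayed equation was lost in extraction; a standard form of these empirical rates (with hypothetical symbols $s_i$ for the score, $y_i$ for the label, $n_0$, $n_1$ for the class counts, and $t$ for the threshold, none of which are necessarily the paper's notation) would read:

```latex
\[
  \widehat{\mathrm{FPR}}(t) = \frac{1}{n_0}\sum_{i\,:\,y_i=0}\mathbf{1}\{s_i > t\},
  \qquad
  \widehat{\mathrm{FNR}}(t) = \frac{1}{n_1}\sum_{i\,:\,y_i=1}\mathbf{1}\{s_i \le t\}.
\]
```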


For each instance, the cost can be expressed as


The marginal probabilities of classes 0 and 1 are estimated by their empirical frequencies on the validation set. Thus, by (2), the expected cost on the validation set satisfies


which means that we only need to minimize the former quantity in order to minimize the latter. This completes the proof. ∎
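For the reader's convenience, the standard expected-cost decomposition on which a proof of this kind relies can be written as follows; the symbols $c_{10}$, $c_{01}$, $\hat{\pi}_0$, $\hat{\pi}_1$ are our own notation for the two misclassification costs and the estimated class priors, not necessarily the paper's:

```latex
\[
  \widehat{EC}(t) \;=\; c_{10}\,\hat{\pi}_0\,\widehat{\mathrm{FPR}}(t)
  \;+\; c_{01}\,\hat{\pi}_1\,\widehat{\mathrm{FNR}}(t),
\]
```

so that, for fixed costs and priors, minimizing this weighted sum of the empirical error rates over the candidate thresholds minimizes the empirical expected cost.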

Proof of Theorem 2


In what follows, the notation ":=" represents "defined as". By the definitions in Theorem 2.1, we know that the threshold satisfies


indicating that the FPR satisfies


and also


Similarly, for the FNR, we have


Combining the above, the two types of error can be controlled as


Due to the randomness of the order statistic, the quantities above are all random variables rather than constants. We now investigate their distributions. First, we introduce notation for the conditional cumulative distribution functions of the score given each class. Hence,


Similarly, we have


Combining (A.10) and (A.11) with the distributions of the order statistics, (6) and (7) hold. This completes the proof. ∎
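The distributional step here rests on a classical fact about order statistics: if the score CDF $F$ is continuous, then $F$ applied to the $k$-th of $n$ i.i.d. order statistics follows a Beta law. In our own notation (with $T_{(k)}$ for the $k$-th order statistic):

```latex
\[
  F\bigl(T_{(k)}\bigr) \sim \mathrm{Beta}(k,\, n-k+1),
  \qquad
  \Pr\bigl(F(T_{(k)}) \le u\bigr)
  = \sum_{j=k}^{n} \binom{n}{j}\, u^{j} (1-u)^{n-j}.
\]
```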

Proof of Theorem 3


The expression of the expected cost on the test set can be derived as follows. Denoting the cost, true class, and predicted class of each instance, we have




Then we introduce notation for these quantities and for the cumulative distribution function of the score, respectively. Their expectations can be derived as