Classification is a fundamental data analysis procedure that is ubiquitously used across different fields. Thousands of classification algorithms (classifiers) have been developed during the past decades. These classifiers range from simple models such as k-nearest neighbors (k-NN) to more sophisticated models such as the support vector machine (SVM) and random forests (RF).
Despite the advances in the development of new classifiers, no single classification algorithm can always achieve the best performance on all data sets. This indicates that different classifiers are complementary to each other in different contexts. Therefore, it is still necessary to develop new and alternative classifiers based on principles that remain unexplored.
The motivation behind this research is based on the following observations. First, existing non-lazy classifiers typically formulate the classification problem as an optimization problem. Such optimization-based learning strategies can always generate the target classifiers, regardless of the statistical significance of the learnt models. Second, classifiers such as logistic regression are able to provide probability values for categorizing an unknown test instance. However, it is not an easy task to determine a universal probability threshold that ensures the classification of the test instance into the corresponding class is statistically significant. Last but not least, existing classifiers cannot control the number of misclassified test instances in terms of metrics such as the false discovery rate (FDR). Such a capability is quite important in biological data analysis, in which the prediction results will be further validated by wet-lab experiments that can be costly and time-consuming. Thus, we need to add some notion of statistical significance to classifiers.
In fact, the classification problem has already been formulated as a hypothesis testing issue by Liao & Akritas. More recently, several research efforts have further extended this initial formulation from different aspects. However, the following observations motivate this research. First of all, existing testing-based classification methods suffer from certain theoretical drawbacks, as discussed and summarized in Section 2. Second, only simulation data sets and several small real data sets have been empirically tested, making it difficult to demonstrate the practical utility of such a testing-based formulation. Third, the connection between this new formulation and existing classification methods has never been discussed. Finally, the potential benefits of the testing-based classification model remain unexplored.
Based on the above observations, we present a new testing-based classification formulation, in which the null hypothesis is, informally, that the test instance does not belong to any class. To precisely define the null hypothesis, we focus on the classification problem in a two-class setting. First, we calculate the distance between the test instance and each training instance in the training data set. In this way, we generate two sets of distances for one test instance that needs to be classified. Then, the hypothesis testing issue can be cast as a two-sample testing problem, in which each sample corresponds to a set of distances. In this formulation, the null hypothesis is that the two sets of distances are drawn from the same cumulative distribution.
Two-sample testing is a fundamental problem in statistics. We employ the classical Wilcoxon-Mann-Whitney (WMW) test for quantifying the statistical significance in terms of p-values. To alleviate the effect of outlying and irrelevant training instances, we further apply the WMW test to two distance sets that are generated from k-NNs of the test instance.
The testing-based classification formulation has several salient features. First of all, it provides p-values for each test instance to quantify the statistical significance of classifying the instance into each class. Accordingly, we can detect outlying test instances that do not belong to any class when the p-values with respect to all classes are larger than the significance level threshold. Second, we can control the FDR of test instances assigned to each class based on their p-values.
We evaluate our method on forty data sets from the UCI repository and the KEEL-dataset repository with respect to the standard classification task. The experimental results show that our method is able to achieve the same level of performance as state-of-the-art classifiers. Meanwhile, it can handle outlying test instances and control the FDR of test instances assigned to each class in a natural manner.
The main contributions of this paper can be summarized as follows.
(1) The binary classification issue is formulated as a two-sample testing problem. Since two-sample testing is a fundamental problem in statistics and many well-known tests are available in the literature, it can be expected that we may introduce many effective testing-based classifiers in the near future.
(2) The classification model that integrates hypothesis testing and the k-NN method is presented. This formulation can alleviate the effect of outlying and irrelevant training instances to improve the classification accuracy significantly.
(3) A comprehensive performance comparison over 40 real data sets is conducted. The experimental results demonstrate that the testing-based classifier is able to achieve the same level of performance as standard classifiers such as SVM and decision tree.
(4) Some interesting connections between our testing-based classifiers and existing classification methods are presented.
The rest of this paper is organized as follows. Section 2 discusses some previous works that are related to our method. Section 3 presents the details of our method. Section 4 reports experimental results on 40 real data sets. Section 5 discusses the relationship between our method and other approaches. Finally, Section 6 concludes this paper.
2 Related Work
2.1 Instance-based learning
Instance-based learning is a lazy learning scheme in which the training instances are simply stored. When a new instance is encountered, a set of similar training instances is retrieved to classify the unknown test instance. The most basic instance-based method is the k-nearest neighbor (k-NN) algorithm, which assigns a new instance to the most common class among its k-NNs in the training instances.
Essentially, our method can be considered an instance-based learning approach since the two-sample test is conducted on the distance sets generated from all training instances or from k-NNs. This indicates that it is feasible to apply techniques developed for instance-based learning during the past decades (e.g., instance reduction, prototype selection, and fuzzy nearest neighbor methods) to further improve our method.
2.2 Classification based on hypothesis testing
Liao & Akritas introduce a classification method based on hypothesis testing, which is abbreviated as TBC. Suppose there are two classes (positive vs. negative) in the training set, i.e., a binary classification problem; the issue is to allocate a new instance x to one of the two classes. The basic idea of TBC is that, if x is placed into the wrong class, then the difference between the two samples will be blurred. To implement this idea, two tests with respect to the equality of the means of the two samples are conducted, in which x is placed into the set of positive instances and into the set of negative instances, respectively. Accordingly, we obtain two p-values p+ and p-, where p+ (p-) is generated from the test in which x is assumed to belong to the positive (negative) class. If p+ < p-, then x is classified as a positive instance. Otherwise, x will be classified as a negative instance. This method works well when the theoretical p-values can be computed and compared. However, TBC has two problems. First, when the number of features of the data set is larger than the sample size of one class, the p-values cannot be computed at all because of the singularity of the sample covariance matrix. Second, when the instances from the two classes are well separated, both p-values will equal zero.
Ghimire & Wang improve the TBC method by introducing a minimum distance into the method, yielding a new classifier for image pixels. Their method works well in the context of image pixel classification.
Modarres studies the properties of squared Euclidean interpoint distances (IPDs) between samples taken from multivariate Bernoulli, multivariate Poisson and multinomial distributions. He also discusses some applications based on IPDs within one sample and across two samples under different distributions.
Afterwards, Guo & Modarres develop a classification method based on hypothesis testing, which is abbreviated as IDC. It is capable of classifying high-dimensional instances by employing testing methods based on the IPDs between instances. Several different test statistics based on IPDs have been discussed in their work, and we take the Baringhaus and Franz (BF) statistic as an example. Given two sets of training instances, i.e., one positive set T+ and one negative set T-, IDC first computes the average IPDs within T+, within T- and between T+ and T-. Then, it calculates the BF statistic from these average IPDs. Similarly, two new BF values can be obtained by placing the test instance x into T+ and into T-, respectively. The resulting change in the value of BF measures the effect of assigning x to T+ (T-). Therefore, if the change is smaller when x is placed into T+, x is classified as a positive instance; otherwise, x will be labelled as a negative instance.
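To make this decision rule concrete, the following is our own rough sketch (not Guo & Modarres's implementation): a BF-style statistic is computed from average interpoint distances, and x is assigned to the class whose augmented set changes the statistic the least. The function names and the within-set averaging (which keeps the zero diagonal for simplicity) are our choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

def bf_statistic(A, B):
    """BF-style statistic from average interpoint distances:
    2 * mean(d(A, B)) - mean(d(A, A)) - mean(d(B, B))."""
    return 2 * cdist(A, B).mean() - cdist(A, A).mean() - cdist(B, B).mean()

def idc_like_classify(x, pos, neg):
    """Assign x to the class whose augmented set perturbs the statistic less."""
    base = bf_statistic(pos, neg)
    delta_pos = abs(bf_statistic(np.vstack([pos, x[None]]), neg) - base)
    delta_neg = abs(bf_statistic(pos, np.vstack([neg, x[None]])) - base)
    return 1 if delta_pos < delta_neg else -1
```

Intuitively, adding x to its true class barely perturbs the statistic, while adding it to the wrong class blurs the between-class separation.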
2.3 Asymmetric classification error control
In binary classification, most classifiers are constructed to minimize the overall classification error, which is a weighted sum of the type I error (misclassifying a negative instance as a positive one) and the type II error (misclassifying a positive instance as a negative one). However, in many realistic applications, the two types of errors are asymmetric: they have different costs and need to be treated with different weights.
The cost-sensitive classification (CSC) method can solve this problem to some extent. It takes the misclassification costs into consideration and aims to minimize the total cost of both errors. Another method is Neyman-Pearson (NP) classification, which is inspired by classical NP hypothesis testing. It is a statistical framework for handling asymmetric type I/II error priorities that seeks a classifier minimizing the type II error while maintaining the type I error below a user-specified level. CSC and NP classification are fundamentally different approaches that have their own pros and cons. A main advantage of NP classification is that it is a general framework that allows users to control the type I classification error under a user-specified level with high probability.
It is easy to control the type I error in terms of FDR in our formulation since the p-values of each test instance with respect to different classes are generated in the classification phase. In other words, the testing-based classification formulation provides a unified framework for controlling asymmetric classification errors in a natural way.
3 The Proposed Method
3.1 Two-sample testing
Given two independent random samples X = (X_1, ..., X_m) and Y = (Y_1, ..., Y_n), where X is drawn from the population with cumulative distribution function F_X and Y is drawn from the population with cumulative distribution function F_Y, the general two-sample testing problem is concerned with the null hypothesis that the two samples are drawn from identical populations:

H0: F_X(t) = F_Y(t) for all t,

where F_X and F_Y are the cumulative distribution functions of the X population and the Y population, respectively.
3.2 Problem formulation
We consider the binary classification problem, in which the training set is composed of two disjoint sets T+ and T-, called the positive training set and the negative training set, respectively. Given a test instance x, the classification task is to decide its class label (positive vs. negative).
We formulate the binary classification problem as a two-sample testing problem. In this formulation, the first sample D+ is the set of distances between the test instance x and the training instances in T+, i.e., D+ = {d(x, t) : t in T+}. Similarly, the second sample D- is the set of distances between x and the training instances in T-, i.e., D- = {d(x, t) : t in T-}.
To conduct the standard classification task, we test the null hypothesis H0 (the two distance samples come from the same distribution) against two one-sided alternative hypotheses: H+ (the distances in D+ tend to be smaller than those in D-) and H- (the reverse), obtaining two one-sided p-values p+ and p-. If p+ < p-, we label x as a positive instance. Otherwise, we classify x as a negative instance.
To handle the multi-class classification problem with Q classes (Q > 2), we can explore the one-vs-rest strategy by regarding the set of instances from one class as the positive training set and the set of instances from the remaining classes as the negative training set. For each of the Q binary classification problems, we first conduct the two-sample testing to generate a one-sided p-value for the corresponding class. Then, we assign the test instance to the class that has the smallest p-value.
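The binary formulation can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation), assuming Euclidean distance and using SciPy's `mannwhitneyu` as the two-sample test; the function name `ibt_classify` is ours.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def ibt_classify(x, pos, neg):
    """Classify x by two-sample testing on distance sets (illustrative sketch).

    pos, neg: 2-D arrays of positive/negative training instances (one per row).
    Returns (label, p_plus, p_minus), where label is +1 (positive) or -1.
    """
    d_pos = np.linalg.norm(pos - x, axis=1)  # distances from x to T+
    d_neg = np.linalg.norm(neg - x, axis=1)  # distances from x to T-
    # p_plus is small when distances to the positive class tend to be smaller,
    # i.e. when x looks like a positive instance (alternative H+).
    p_plus = mannwhitneyu(d_pos, d_neg, alternative="less").pvalue
    p_minus = mannwhitneyu(d_pos, d_neg, alternative="greater").pvalue
    return (1, p_plus, p_minus) if p_plus < p_minus else (-1, p_plus, p_minus)
```

For the multi-class case, the same routine can be run once per class in a one-vs-rest fashion and the class with the smallest p-value selected.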
3.3 K-NN variants
In the above problem formulation, the distances to all training instances are utilized in the hypothesis testing. However, the existence of outlying and irrelevant training instances may decrease the classification accuracy. To alleviate this issue, we can conduct the hypothesis testing on two samples that are derived from the k-NNs of the test instance.
Accordingly, two natural k-NN variants can be formulated. Similar to the k-NN classifier, the first variant directly takes the k-NNs of the test instance to generate the two samples: the distances from the test instance to these k nearest training instances are divided into two groups according to the class label, where each group corresponds to one sample in our scenario. The second variant takes the k+ nearest instances from T+ and the k- nearest instances from T- to generate the two distance sets, where k+ + k- = k. The rationale behind the second variant is that, if the null hypothesis is true, then the number of k-NNs from each class is proportional to the number of training instances in that class. Since k+ = k- when |T+| = |T-|, we can take the same number of k-NNs from each class in this case.
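The two variants can be made concrete as follows; this is our own sketch (the `variant="D"`/`"S"` names are hypothetical), with the separate variant simplified to take the same number k of neighbors from each class.

```python
import numpy as np

def knn_distance_sets(x, pos, neg, k, variant="D"):
    """Generate the two distance samples from k-NNs of x (illustrative sketch).

    variant "D": take the k nearest training instances overall and split
                 their distances by class label.
    variant "S": take the k nearest instances from each class separately.
    """
    d_pos = np.sort(np.linalg.norm(pos - x, axis=1))
    d_neg = np.sort(np.linalg.norm(neg - x, axis=1))
    if variant == "S":
        return d_pos[:k], d_neg[:k]
    # variant "D": keep the k smallest pooled distances, grouped by class
    pooled = np.concatenate([d_pos, d_neg])
    labels = np.concatenate([np.zeros(len(d_pos)), np.ones(len(d_neg))])
    keep = np.argsort(pooled)[:k]
    return pooled[keep][labels[keep] == 0], pooled[keep][labels[keep] == 1]
```

Either pair of distance sets can then be fed to the two-sample test in place of the full distance sets.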
3.4 The choice of testing methods
The testing of two-sample differences has been extensively investigated in the literature. One widely used test for this problem is the WMW test, which is also called the Mann-Whitney U test or Wilcoxon rank-sum test. To obtain the test statistic of the WMW test, D+ (of size m) and D- (of size n) are merged to form a combined sample Z of size N = m + n. Then, the observations in Z are ordered from smallest to largest.
According to the ordered list, R_i is defined as the rank of the i-th observation of D+ in Z, and the rank-sum statistic is W = R_1 + ... + R_m. If the null hypothesis is true, then W is approximately normally distributed with

E[W] = m(N + 1)/2,  Var(W) = mn(N + 1)/12.

Based on the above normal approximation, we can calculate the one-sided p-value for testing H0 against H+ (or H-).
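Under the normal approximation, the one-sided p-value can be computed directly from the rank sum. A minimal sketch, assuming no ties (the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def wmw_pvalue_less(d_pos, d_neg):
    """One-sided WMW p-value via the normal approximation (no tie handling).

    Small values support the alternative that d_pos tends to be smaller."""
    m, n = len(d_pos), len(d_neg)
    combined = np.concatenate([d_pos, d_neg])
    ranks = combined.argsort().argsort() + 1  # ranks 1..m+n (assumes no ties)
    w = ranks[:m].sum()                       # rank sum of the first sample
    mean = m * (m + n + 1) / 2.0              # E[W] under H0
    var = m * n * (m + n + 1) / 12.0          # Var(W) under H0
    return norm.cdf((w - mean) / np.sqrt(var))
```

In practice a tie correction and continuity correction are usually added, as library implementations of the WMW test do.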
In our classification model, the choice of testing method is very flexible since the samples to be tested are univariate. That is, we can use any univariate two-sample test in our classifier. Therefore, we can also employ testing methods such as the pooled t-test, the two-sample Kolmogorov-Smirnov test and the precedence test instead of the WMW test. In Section 5, we will show that the use of different testing methods establishes connections between our formulation and existing classification models.
3.5 Handling outliers and FDR control
As we have argued, the testing-based classification model has the advantage of controlling the FDR of classified test instances and handling outlying instances under the same framework. In general, we assign the test instance to the class with the smallest p-value among its Q p-values, where Q is the number of classes. However, it is inappropriate to do so when none of the Q p-values is significant. Fortunately, we can use the FDR procedure to tackle this problem. Since our method returns Q p-values for every test instance, we obtain Q sets of p-values from all test instances. Each p-value set is first sorted in non-descending order: p(1) <= p(2) <= ... <= p(M), where M is the number of test instances. Given a significance level alpha, let k be the largest index for which

p(k) <= (k/M) * alpha.

If the p-value of a test instance satisfies p <= p(k), then this test instance will be assigned to the current class. After conducting FDR control on all Q p-value sets, we label the test instances that are not classified to any class as outliers.
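The Benjamini-Hochberg step-up rule described above is easy to implement; a minimal sketch:

```python
import numpy as np

def bh_reject(pvalues, alpha=0.05):
    """Benjamini-Hochberg FDR control: boolean mask of rejected hypotheses."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)                      # indices sorting p ascending
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest index meeting the bound
        reject[order[:k + 1]] = True           # reject all hypotheses up to p(k)
    return reject
```

In the classification setting, each of the Q p-value sets is processed this way; a test instance rejected in no set is reported as an outlier.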
4 Experimental Results
4.1 Data sets and experimental settings
We conducted experiments on 40 data sets from the UCI repository and the KEEL-dataset repository. Among these data sets, the number of instances ranges from 80 to 10092 and the number of features varies from 2 to 90. Most data sets have fewer than 10 classes and only six of them have more than 10 classes. The detailed characteristics of these data sets are given in Appendix A. Moreover, instances with missing values are discarded and numeric feature values are normalized into the interval [0, 1] during pre-processing.
In the experiments, we perform 10-fold cross-validation (CV) and count the number of correctly classified instances to compute a classification accuracy value. For every data set, we repeat the 10-fold CV experiment 10 times and report the average and standard deviation of the 10 accuracy values as the final results.
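The evaluation protocol can be expressed compactly. A sketch under our own naming, where `classify(train_X, train_y, test_X)` is any classifier returning predicted labels:

```python
import numpy as np

def repeated_cv_accuracy(X, y, classify, n_folds=10, n_repeats=10, seed=0):
    """Repeat n_folds-fold CV n_repeats times; return mean/std of accuracies."""
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_repeats):
        folds = np.array_split(rng.permutation(len(y)), n_folds)
        correct = 0
        for i, test in enumerate(folds):
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            correct += np.sum(classify(X[train], y[train], X[test]) == y[test])
        accuracies.append(correct / len(y))
    return float(np.mean(accuracies)), float(np.std(accuracies))
```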
4.2 All instances vs. k-NNs
In the first experiment, we compare several variants of our formulation to check which one is better in practice. Since our method is a classifier that combines instance-based learning and hypothesis testing, we will use the abbreviation IBT to denote such a classification model. To distinguish different variants, IBT-U is used to denote the classification model when the Mann-Whitney U test is applied to the distance sets derived from all training instances. Similarly, IBT-U-K is used to denote the classification model in which the distance sets are generated according to k-NNs of the test instance. Furthermore, two k-NN variants are denoted by IBT-U-K-D (k-NNs are obtained Directly without considering the class label) and IBT-U-K-S (k-NNs are obtained Separately from different classes), respectively.
Additionally, the parameter k for the two k-NN variants is set to 3, 5, 7 and 9, respectively. The detailed experimental results for these three variants are given in Appendices B, C and D, and their average accuracies are summarized in Table 1 and Table 2.
As shown in Table 1, the performance of IBT-U is much worse than that of the two k-NN variants. This indicates that it is beneficial to explore the k-NN strategy in the testing-based classification model. As shown in Table 2, the average classification accuracies of the two k-NN variants are quite similar when k is varied from 3 to 9. In the forthcoming sections, we will use IBT-U-K-D (k=3) as the representative of our classifiers in the performance comparison.
4.3 Our method vs. Other testing-based classifiers
In the second experiment, we compare our method with two previous methods, TBC  and IDC , which also use hypothesis testing to solve a classification problem. The detailed experimental results are given in Appendix E and their average accuracies are presented in Table 3.
In the implementation of TBC, we employ Hotelling's T^2 test as the testing method, as utilized in the original work. We use the Hotelling's T^2 statistics instead of p-values in the classification since the generated p-values are often zeros. In the implementation of IDC, we use the Baringhaus and Franz (BF) statistic as the test statistic and assume equal prior probabilities despite unequal sample sizes.
For TBC, the classification accuracies on five data sets (Cleveland, Dermatology, Hepatitis, Movement_libras and Winequality-red) are 0 because the number of features of these data sets is larger than the sample size of one class, so we only use the remaining 35 data sets to compute the average classification accuracy. IDC can be applied to all data sets, so we simply compute the average of the 40 accuracy values. According to the comparison results, our method performs significantly better than TBC and IDC.
Among the three methods, ours achieves the best performance for the following reasons. First, our method considers only the k-NNs of the test instance, while TBC and IDC utilize all training instances without considering the existence of outlying and irrelevant ones. Second, our method employs a hypothesis testing strategy that is totally different from those used in TBC and IDC.
4.4 Our method vs. Classic classifiers
In the third experiment, we compare our method with three classic classifiers: k-NN, support vector machine (SVM) and decision tree (DT). The detailed experimental results are given in Appendix F and G and their average accuracies are presented in Table 4.
For SVM, k-NN and DT, we use the functions fitcecoc, fitcknn and fitctree with their default parameter settings in Matlab 2018b, respectively. The reason for using fitcecoc function is that it can generate a multi-class model for SVM.
As shown in Table 4, our method is able to achieve the same level of performance as these classic classifiers. Concretely, among the 40 data sets there are 13, 19 and 18 data sets on which our method produces higher classification accuracies than k-NN, SVM and DT, respectively. In summary, our method is competitive with these classic classifiers with respect to overall performance.
4.5 Handling outliers through FDR control
In the last experiment, we investigate the potential of our method for outlier detection and FDR control. The balance data set from UCI is used as an example; it has 625 instances and three classes (L, B and R), with 288, 49 and 288 instances respectively, as shown in Table 5. If we take a subset of the 576 (288+288) instances from classes L and R as training instances and use the 49 instances from class B as test instances, then it is obvious that all test instances should be considered outliers.
We randomly take 80 percent of the instances from classes L and R to compose the training set. To obtain the average performance, 10 different random training sets are generated. We use IBT-U as the classifier and the significance level for FDR control is set to 0.05. The experimental results show that 48 of the 49 test instances are labelled as outliers on average. Specifically, there are at most 2 test instances that cannot be labelled as outliers, and they usually differ across training sets. Therefore, our method is able to recognize outliers and control the FDR of classification results at the same time.
5 Relationship to Other Approaches
Our classification method is a two-phase approach: two distance sets are first generated and then the two-sample test is conducted. As we have discussed, we may use different significance testing methods in the second phase. In this section, we will show that the use of different testing methods leads to different classifiers that have close relationships with existing classification models.
5.1 Connection to Nearest Centroid Classifier
The nearest centroid (mean) classifier is one of the most widely used instance-based classification models. In the training phase, only the centroid of each class is calculated and stored. In the classification phase, the distance between an unknown instance and each centroid is calculated to find the nearest centroid. Then, the test instance is assigned to the class of its nearest centroid.
If the pooled t-test is employed as the significance testing procedure in our model, then we can reveal some interesting connections between our method and the nearest centroid classifier. To simplify the analysis, we first consider the scenario of univariate data set and then discuss the case of multivariate data set.
Given two one-dimensional sets T+ and T-, their centroids (means) c+ and c- can be easily computed by averaging the instances in each set. Given an unknown instance x, the distances between x and the two centroids are |x - c+| and |x - c-|. The nearest centroid classification method assigns x to the positive or the negative class according to whether |x - c+| < |x - c-|.
In our method, two samples D+ = {|x - t| : t in T+} and D- = {|x - t| : t in T-} are obtained, and their means are denoted by m+ and m-. Then, we test the null hypothesis H0 against the two alternative hypotheses H+ and H- on the two samples to obtain two one-sided p-values p+ and p-. Finally, our method assigns x to the positive (negative) class if p+ < p- (p+ > p-).
Note that when the pooled t-test is employed in our method, we will obtain two t statistics t+ and t- with t+ = -t-, where, with m+ and m- denoting the means of the two distance samples of sizes n+ and n-,

t+ = (m+ - m-) / (s_p * sqrt(1/n+ + 1/n-))

and s_p is the pooled standard deviation. Hence p+ < p- if and only if t+ < 0, i.e., m+ < m-. Similarly, we can get p+ > p- if and only if m+ > m-. Therefore, our method will assign x to the positive class if m+ < m-. Otherwise, we will label x as a negative instance.
According to the triangle inequality, the mean distance m+ to the positive class satisfies

m+ = (1/n+) * sum over t in T+ of |x - t| >= |x - c+|,

in which equality holds if and only if x <= min(T+) or x >= max(T+). Similarly, we can get m- >= |x - c-|, in which equality holds if and only if x <= min(T-) or x >= max(T-).
When m+ = |x - c+| and m- = |x - c-|, our method assigns the test instance the same class label as the nearest centroid classification method. The above analysis thus establishes the equivalence between our method and the nearest centroid classifier under very strict constraints: (1) a one-dimensional data set; (2) the test instance is no less (or no more) than all training instances in each class.
For the multivariate case, it is very difficult to analyze the relationship in a quantitative manner. One naive connection is that if the mean distance from x to each class equals the distance from x to the corresponding centroid, then our method and the nearest centroid classification method will produce the same classification result.
5.2 Connection to k-NN Classifier
The k-NN classifier is one of the most popular classification methods in the literature. In our formulation, if the precedence test is employed as the significance testing method, then we may uncover some interesting connections between our method and the k-NN classifier.
We still consider the binary classification problem in which the training data is composed of n+ positive instances from T+ and n- negative instances from T-. Given an unknown instance x, the k-NN classification method finds its k nearest neighbors (k-NNs) to conduct the classification. These k-NNs can be divided into two groups: k+ positive instances from T+ and k- instances from T-, where k+ + k- = k. If k+ > k-, then x is classified as a positive instance. Otherwise, x is assigned to the negative class.
The precedence test is a two-sample test based on the order of early failures. Given two independent samples X and Y, let x(1) <= ... <= x(m) and y(1) <= ... <= y(n) denote their order statistics. The precedence test is based on the number of observations from one sample that precede some threshold specified by the other sample. More precisely, one test statistic is the number of observations in Y that precede the r-th order statistic x(r) of X. Alternatively, one can use the number of observations in X that exceed the s-th order statistic y(s) of Y as the test statistic. Large values of these test statistics lead to the rejection of the null hypothesis that the two distributions are equal.
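The counting statistic itself is simple to compute; a minimal sketch with 1-based r (the function name is ours):

```python
import numpy as np

def precedence_count(x_sample, y_sample, r):
    """Number of observations in y_sample preceding (strictly smaller than)
    the r-th order statistic of x_sample (r is 1-based)."""
    threshold = np.sort(np.asarray(x_sample))[r - 1]
    return int(np.sum(np.asarray(y_sample) < threshold))
```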
In our problem formulation, D+ (D-) is the set of distances between x and the instances in T+ (T-), and the k smallest values in their union are the distances between x and its k-NNs. Let k+ (k-) denote the number of these k-NNs coming from T+ (T-). If we use the precedence test as the significance testing method, we can set r = k+ to obtain the corresponding test statistic for testing the null hypothesis H0 against the alternative H+. Alternatively, letting s = k-, we can obtain another test statistic for testing H0 against the alternative H-. We thus get two p-values, p+ and p-, and x is assigned to the positive (negative) class if the former (latter) is smaller.
If we further assume that the positive training set and the negative training set have the same size, i.e., n+ = n-, then the two p-values are totally determined by the two test statistics, and comparing the p-values reduces to comparing k+ with k-. Therefore, our method and the k-NN classifier generate the same classification result under the above assumptions. From this aspect, we may regard our method equipped with the precedence test as a generalized "statistical" k-NN classifier.
6 Conclusion
Due to the importance of the classification problem, many effective classification algorithms have been proposed in different communities. However, most work on classification does not address the issue of statistical significance. Towards this direction, several initial research efforts have investigated the feasibility of constructing a classifier through significance testing. Unfortunately, this interesting idea has not received much attention during the past 10 years, mainly for the following reasons: (1) there are still no testing-based classifiers that can achieve the same level of performance as state-of-the-art methods on real data sets; (2) the potential benefit of deploying such testing-based classifiers is still not clear.
Based on the above observations, this paper takes one step further towards this direction by formulating the classification problem as a two-sample testing problem. This new formulation enables us to generate several testing-based classifiers that have comparable performance with standard classifiers such as SVM. In addition, we show that it is quite easy to handle outlying test instances and control the FDR of classification results based on the p-values associated with each test instance.
We believe this paper will contribute to the development of the testing-based classification model, which may become a promising new classifier family. As the study of testing-based classification models is still in its infancy, many research issues remain unexplored and should be investigated in future work. For example, since all existing testing-based classifiers are based on the idea of instance-based learning, how to build a non-lazy testing-based classifier will be an interesting and challenging issue.
Appendix A
The detailed characteristics of the forty data sets are given in Table 5.
| ID | Name | Instances | Features | Classes | Class Distribution | Download Link |
Appendix B
The detailed experimental results of IBT-U are given in Table 6.
Appendix C
The detailed experimental results of IBT-U-K-D are given in Table 7.
Appendix D
The detailed experimental results of IBT-U-K-S are given in Table 8.
Appendix E
The detailed experimental results of TBC and IDC are given in Table 9.
Appendix F
The detailed experimental results of k-NN are given in Table 10.
Appendix G
The detailed experimental results of SVM and DT are given in Table 11.
Acknowledgments
This work was partially supported by the Natural Science Foundation of China (Nos. 61572094, 61771331) and the Fundamental Research Funds for the Central Universities (No. DUT2017TB02).
References
-  M. F. Delgado, E. Cernadas, S. Barro, and D. G. Amorim, “Do we need hundreds of classifiers to solve real world classification problems?” Journal of Machine Learning Research, vol. 15, no. 1, pp. 3133–3181, 2014.
-  T. M. Cover and P. E. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.
-  C. Cortes and V. Vapnik, “Support vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
-  L. Breiman, “Random forests,” Machine Learning, vol. 45, pp. 5–32, 2001.
-  O. Wagih, J. Reimand, and G. D. Bader, “MIMP: Predicting the impact of mutations on kinase-substrate phosphorylation,” Nature Methods, vol. 12, no. 6, pp. 531–533, 2015.
-  S.-M. Liao and M. Akritas, “Test-based classification: A linkage between classification and statistical testing,” Statistics & probability letters, vol. 77, no. 12, pp. 1269–1281, 2007.
-  S. Ghimire and H. Wang, “Classification of image pixels based on minimum distance and hypothesis testing,” Computational Statistics & Data Analysis, vol. 56, no. 7, pp. 2273–2287, 2012.
-  L. Guo and R. Modarres, “Interpoint distance classification of high dimensional discrete observations,” International Statistical Review, 2018.
-  J. D. Gibbons and S. Chakraborti, Nonparametric statistical inference, 5th ed. CRC Press, 2011.
-  D. Dheeru and E. Karra Taniskidou, “UCI machine learning repository.” 2017. [Online]. Available: http://archive.ics.uci.edu/ml
-  J. Alcalá-Fdez, A. Fernández, J. Luengo, J. Derrac, and S. García, “Keel data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework,” Journal of Multiple-Valued Logic & Soft Computing, vol. 17, pp. 255–287, 2011.
-  T. M. Mitchell, Machine Learning, 1st ed. New York, NY, USA: McGraw-Hill, Inc., 1997.
-  D. R. Wilson and T. R. Martinez, “Reduction techniques for instance-based learning algorithms,” Machine learning, vol. 38, no. 3, pp. 257–286, 2000.
-  S. Garcia, J. Derrac, J. Cano, and F. Herrera, “Prototype selection for nearest neighbor classification: Taxonomy and empirical study,” IEEE transactions on pattern analysis and machine intelligence, vol. 34, no. 3, pp. 417–435, 2012.
-  J. Derrac, S. García, and F. Herrera, “Fuzzy nearest neighbor algorithms: Taxonomy, experimental analysis and prospects,” Information Sciences, vol. 260, pp. 98–119, 2014.
-  R. Modarres, “On the interpoint distances of Bernoulli vectors,” Statistics & Probability Letters, vol. 84, pp. 215–222, 2014.
-  ——, “Multivariate Poisson interpoint distances,” Statistics & Probability Letters, vol. 112, pp. 113–123, 2016.
-  ——, “Multinomial interpoint distances,” Statistical Papers, vol. 59, no. 1, pp. 341–360, 2018.
-  C. Elkan, “The foundations of cost-sensitive learning,” in International Joint Conference on Artificial Intelligence, vol. 17, no. 1. Lawrence Erlbaum Associates Ltd, 2001, pp. 973–978.
-  B. Zadrozny, J. Langford, and N. Abe, “Cost-sensitive learning by cost-proportionate example weighting,” in Proceedings of the Third IEEE International Conference on Data Mining (ICDM). IEEE, 2003, pp. 435–442.
-  C. Scott and R. Nowak, “A Neyman-Pearson approach to statistical learning,” IEEE Transactions on Information Theory, vol. 51, no. 11, pp. 3806–3819, 2005.
-  X. Tong, Y. Feng, and A. Zhao, “A survey on Neyman-Pearson classification and suggestions for future research,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 8, no. 2, pp. 64–81, 2016.
-  X. Tong, Y. Feng, and J. J. Li, “Neyman-Pearson classification algorithms and NP receiver operating characteristics,” Science Advances, vol. 4, no. 2, 2018. [Online]. Available: http://advances.sciencemag.org/content/4/2/eaao1659
-  H. B. Mann and D. R. Whitney, “On a test of whether one of two random variables is stochastically larger than the other,” Annals of Mathematical Statistics, vol. 18, no. 1, pp. 50–60, 1947.
-  J. Wang, W. W. Tsang, and G. Marsaglia, “Evaluating Kolmogorov’s distribution,” Journal of Statistical Software, vol. 8, no. 18, 2003.
-  Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: A practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society, vol. 57, no. 1, pp. 289–300, 1995.
-  J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning. New York, NY, USA: Springer, 2001.
-  X. Wu, V. Kumar, J. R. Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan, A. Ng, B. Liu, S. Y. Philip et al., “Top 10 algorithms in data mining,” Knowledge and information systems, vol. 14, no. 1, pp. 1–37, 2008.
-  N. Balakrishnan and H. T. Ng, Precedence-type tests and applications. John Wiley & Sons, Hoboken, NJ, 2006.