Improving Positive Unlabeled Learning: Practical AUL Estimation and New Training Method for Extremely Imbalanced Data Sets

04/21/2020, by Liwei Jiang et al.

Positive Unlabeled (PU) learning is widely used in many applications, where a binary classifier is trained on data sets consisting of only positive and unlabeled samples. In this paper, we improve PU learning over the state of the art in two aspects. Firstly, existing model evaluation methods for PU learning require the ground truth of unlabeled samples, which is unlikely to be obtained in practice. To remove this restriction, we propose an asymptotically unbiased, practical AUL (area under the lift curve) estimation method, which makes use of raw PU data without prior knowledge of the unlabeled samples. Secondly, we propose ProbTagging, a new training method for extremely imbalanced data sets, where the number of unlabeled samples is hundreds or thousands of times that of positive samples. ProbTagging introduces probability into the aggregation method. Specifically, each unlabeled sample is tagged positive or negative with a probability calculated from its similarity to its positive neighbors. Based on this, multiple data sets are generated to train different models, which are then combined into an ensemble model. Compared to state-of-the-art work, the experimental results show that ProbTagging can increase the AUC by up to 10%, based on three industrial and two artificial PU data sets.


1 Introduction

Positive unlabeled (PU) learning is a variant of positive negative (PN) learning and has a wide range of applications where negative samples are unavailable. For example, in credit card fraud detection (Li et al., 2009), a bank can obtain only some fraud cases through user complaints, but the unlabeled users may also include fraudulent ones; in e-commerce product recommendation (Lee et al., 2012), a user's favorite products are known from the user's shopping cart, but it is difficult to obtain the products that the user does not like; in network operation and maintenance, only a few abnormal events can be observed, while many others remain undiscovered. In these cases, traditional semi-supervised learning algorithms (Kingma et al., 2014; Miyato et al., 2018; Oliver et al., 2018; Zhu & Ghahramani, 2002) and supervised learning algorithms (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Chollet, 2017) cannot be applied because of the absence of labeled negative samples.

In this work, we improve PU learning over the state of the art in two aspects, namely model evaluation and model training. We not only propose a practical AUL estimation method that does not require prior knowledge of unlabeled samples, but also design a new method, called ProbTagging, to improve the performance of extremely imbalanced PU learning.

Model evaluation plays an important role in PU learning: it allows us to validate a model and to choose the best one among candidate models. However, existing evaluation methods either require class prior knowledge of the unlabeled samples (Kiryo et al., 2017; Sakai et al., 2018) or directly use PN labeled data as the test set (Elkan & Noto, 2008; Liu et al., 2003; Mordelet & Vert, 2014). In practice, it is very difficult, if not impossible, to obtain either the ground truth of unlabeled samples or PN labeled data. To overcome this problem, we attempt to evaluate PU models using raw PU data sets only (without any class prior knowledge of the unlabeled samples). The major challenge is that the commonly used evaluation metrics are computed on fully labeled PN data sets rather than on PU data sets. To solve this issue, we propose a practical AUL estimation method for PU learning, where only raw PU data sets are required. We show that the AUL computed on a PU data set is an asymptotically unbiased estimate of that computed on the corresponding PN data set.

In many practical PU learning applications, positive and unlabeled samples are extremely imbalanced; say, the number of positive samples is less than 5% of the total. Examples include credit card fraud detection and e-commerce recommendation. There is a large body of research on PU learning (Elkan & Noto, 2008; Liu et al., 2003; Mordelet & Vert, 2014; Kiryo et al., 2017; Sakai et al., 2018). Most of it either uses unlabeled samples to generate negative samples (Elkan & Noto, 2008; Liu et al., 2003; Mordelet & Vert, 2014), or uses unlabeled samples to estimate the risk of negative samples (Kiryo et al., 2017; Sakai et al., 2018). However, PU learning under extremely imbalanced data has received little serious attention. To this end, we design a new training method, called ProbTagging.

ProbTagging is designed for PU learning with extremely imbalanced data sets. Its key idea is to find as many positive samples as possible among the unlabeled samples so as to mitigate the data imbalance. ProbTagging generates multiple PN data sets (even if not completely reliable) from a PU data set by labeling each unlabeled sample positive or negative with a certain probability, according to its similarity to the labeled positive samples. The generated PN data sets contain more positive samples than the raw PU data set. Several models are then trained on the generated PN data sets and aggregated into a final model. Compared with state-of-the-art work, the experimental results show that ProbTagging can increase the AUC by up to 10%, based on three industrial and two artificial PU data sets.

The rest of this paper is organized as follows: Section 2 discusses the background and related work. Section 3 presents the AUL estimation method and its theoretical proofs. Sections 4 and 5 describe the ProbTagging training method and the comparison with baseline solutions, respectively. Finally, Section 6 concludes the paper.

2 Background and Related Work

In this section, we formulate the problem setting, introduce notations used in this paper, and review related work of PN learning and PU learning.

2.1 Problem Settings

Let x ∈ ℝ^d and y ∈ {+1, −1} be random variables from some distribution p(x, y), where d denotes the dimension of x. Let p(x, y) be the joint probability density of (x, y), and p(x) the marginal probability density of x. Let π_p = P(y = +1) and π_n = P(y = −1) be the class prior probabilities for the positive and negative classes, with π_p + π_n = 1. Moreover, we denote by p_p(x) = p(x | y = +1) and p_n(x) = p(x | y = −1) the distributions of positive and negative samples, respectively. A raw set of n samples D = {(x_i, y_i)}_{i=1}^n is drawn independently and identically distributed from p(x, y).

Let g be an arbitrary real-valued decision function for binary classification, which induces the cumulative function of g(x) with x drawn from a distribution p:

F_p(t) = P_{x∼p}(g(x) < t).

The empirical cumulative function of g(x), with x uniformly selected from a set D of samples, is denoted by

F_D(t) = |{x ∈ D : g(x) < t}| / |D|.

In binary classification, a threshold t is set to distinguish negative and positive samples. More precisely, the samples with g(x) ≥ t are classified by g as positive. For α ∈ (0, 1), the threshold is chosen to be

t_α = inf{t : F_D(t) ≥ 1 − α},

which is also known as the (1 − α)-th quantile of g on D. The choice of t_α means that there are α·|D| samples with g(x) greater than or equal to t_α; therefore, samples with a ratio of α are classified by g as positive.
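As a concrete illustration, such a quantile threshold can be computed by sorting the scores; the function below is our own sketch, and the helper name and toy scores are illustrative, not from the paper:

```python
def quantile_threshold(scores, alpha):
    """Return a threshold t such that a fraction alpha of the samples
    satisfy g(x) >= t, i.e. the (1 - alpha)-quantile of the scores."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(alpha * len(ranked)))  # number of samples classified positive
    return ranked[k - 1]

scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.01]
t = quantile_threshold(scores, alpha=0.3)       # top 30% of 10 samples -> 3
positives = [s for s in scores if s >= t]       # exactly 3 samples pass
```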

2.2 PN Learning

In PN learning, two sets D_P and D_N of data are obtained from the raw set D by D_P = {x : (x, y) ∈ D, y = +1} and D_N = {x : (x, y) ∈ D, y = −1}, respectively. The raw set D is called a PN data set in the sense of PN learning. The sensitivity of g on a raw set D of data is denoted as

r_D(α) = |{x ∈ D_P : g(x) ≥ t_α}| / |D_P|,

i.e. the recall of the positive class when a ratio α of the samples is classified positive (with t_α the threshold at ratio α).

Basically, all supervised learning in the case of binary classification is PN learning, e.g., logistic regression (Tolles & Meurer, 2016), SVM (support vector machine) (Tax & Duin, 2004), and deep neural networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Chollet, 2017).

2.3 PU Learning

PU learning is a variant of classification learning for the case in which we have only unlabeled samples and some identified positive samples.

In PU learning, the raw set D of data is observed as an observed set D′, and two sets D′_P and D′_U of data, called the positive (P) and unlabeled (U) data, are obtained from the observed set by collecting the samples observed to be positive and the remaining samples, respectively. The raw set is called a PU data set in the sense of PU learning.

In our case, only part of the positive samples are observed to be positive, while the rest of the samples (the remaining positive samples and all negative samples) remain unknown. We use σ(x) to denote the observed tag for every x, with σ(x) = 1 for observed and σ(x) = 0 for unknown. We assume that

  1. A positive sample is independently observed to be positive with probability c and remains unknown with probability 1 − c, i.e. P(σ(x) = 1 | y = +1) = c and P(σ(x) = 0 | y = +1) = 1 − c.

  2. All negative samples remain unknown with certainty, i.e. P(σ(x) = 0 | y = −1) = 1.
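The two assumptions above describe a "selected completely at random" observation model, which can be simulated directly. The sketch below is ours; the probability c = 0.4, the seed, and the helper name are illustrative choices:

```python
import random

def observe_pu(labels, c, seed=0):
    """Apply the PU observation model: a positive sample (y = +1) is
    observed (tag = 1) with probability c; negative samples (y = -1)
    are never observed (tag = 0)."""
    rng = random.Random(seed)
    return [1 if y == 1 and rng.random() < c else 0 for y in labels]

labels = [1] * 1000 + [-1] * 9000       # latent PN labels, heavily imbalanced
tags = observe_pu(labels, c=0.4)
# every observed sample is truly positive; roughly 40% of positives observed
```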

The sensitivity of g on an observed set D′ of data is denoted as r_{D′}(α), defined in the same way as above with the observed positive set D′_P in place of D_P.

Practical PU learning includes two aspects: model evaluation and model training.

In many scenarios of PU model evaluation (Elkan & Noto, 2008; Liu et al., 2003; Mordelet & Vert, 2014; Li & Hua, 2014), there is a lack of PN data sets for testing, in which case we have only an observed data set rather than a ground-truth data set to evaluate a model. Some risk estimators (Kiryo et al., 2017; Sakai et al., 2018) can be applied to evaluate models using PU test sets; however, class prior knowledge is still required. (Xie & Li, 2018) conducts unbiased AUC risk estimation for semi-supervised learning, but not for PU learning.

In the field of PU model training, some work (Lee & Liu, 2003; Elkan & Noto, 2008) directly uses unlabeled samples as weighted negative samples and trains them together with positive samples to get a binary classifier. The work in (Li & Liu, 2003; Liu et al., 2003; Zhang & Zuo, 2009) uses the two-step method: it first identifies negative examples among the unlabeled examples, and then iteratively builds a classifier to classify the rest of the unlabeled samples. These two-step methods differ mainly in the algorithm used in the first step. The BaggingPU method (Mordelet & Vert, 2014) uses bagging classifiers for PU learning: several binary classifiers are trained with all positive samples (as positives) and samples randomly drawn from the unlabeled set (as negatives), and are then aggregated into the final model. (Du Plessis et al., 2014) first shows that PU learning can be solved by cost-sensitive learning, and that the intrinsic bias can be eliminated by certain non-convex loss functions. Later, a convex formulation for PU classification proposed by (Du Plessis et al., 2015) can also cancel the bias. The work in (Kiryo et al., 2017) improves these unbiased risk estimators (Du Plessis et al., 2014, 2015) and proposes nnPU (non-negative PU learning). In addition, (Sakai et al., 2018) first proposes a risk estimator for AUC optimization on PU data. The state-of-the-art risk estimators (Kiryo et al., 2017; Sakai et al., 2018) have a solid theoretical foundation, but when observed positive samples are rare, they may not estimate the risk of negative samples accurately.

3 AUL Estimation without Class Prior Knowledge

Below we provide a theorem that establishes the relationship between the observed sensitivity (computed on the PU data) and the latent ground-truth sensitivity (defined on the underlying PN data). Based on this, we give an AUL estimation method for PU testing.

The lift curve (Tufféry, 2011) is a variant of the ROC curve. For a lift curve of g on D, the abscissa is the ratio α of samples classified positive, and the ordinate is the sensitivity r_D(α). The area under the lift curve is called the AUL. For a PN data set it can be calculated directly, and (Tufféry, 2011) proves a monotone relationship between AUL and AUC, which implies that AUL can also decide whether one model is better than another. In PU testing, we approximate the AUL of g on D by measuring the AUL of g on the observed set D′, and then evaluate models. Theorem 3.1 shows that the gap between the two becomes very small when the number of samples is sufficiently large.
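Concretely, the empirical AUL of a scoring function on a labeled set can be computed by ranking the samples by score and integrating the recall against the fraction of samples ranked above the threshold. The sketch below is ours, not the paper's code:

```python
def aul(scores, labels):
    """Empirical area under the lift curve: x = fraction of samples ranked
    above a threshold, y = recall among those samples (trapezoidal rule)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos, n = sum(labels), len(scores)
    area, prev, tp = 0.0, 0.0, 0
    for i in order:
        tp += labels[i]
        rec = tp / n_pos
        area += (prev + rec) / 2 / n   # trapezoid of width 1/n
        prev = rec
    return area

scores = [0.9, 0.8, 0.2, 0.1]
labels = [1, 1, 0, 0]                  # a perfectly ranked toy set
print(aul(scores, labels))             # 0.75
```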

Theorem 3.1.

For every , let . If is the unique solution of , and is continuous at , then for every , we have

(1)

where for some with .

Moreover, if and are strictly increasing with respect to , then

(2)
Proof.

It can be easily seen that Eq (1) implies Eq (2), and we only prove Eq (1) here.

We bound the left-hand side by the sum of four terms, i.e.

where

and .

We select for example. We introduce a trade-off on . It is not difficult to see that

According to the property of the distribution and sample -th quantiles (Serfling, 2009), we have

where

(3)

By Hoeffding’s Theorem (Serfling, 2009), we have

where

(4)

Finally, we have

Similarly for , after introducing trade-off on , we have

where and are defined for by Eq (3) and (4) replacing the indices. Again by Hoeffding’s Theorem and similar techniques, we can furthermore obtain that

Therefore, the proof is completed by combining with the above inequalities with , and such that . ∎

We note that Theorem 3.1 implies that, in the general case, the values of the observed and latent ground-truth sensitivities are expected to coincide when the number of samples is sufficiently large. In other words, the observed sensitivity is an asymptotically unbiased estimate of the latent one, and so the AUL on the observed set is for the AUL on the raw set.

A raw PU test set can be directly used to calculate the observed sensitivity and AUL from the prediction probabilities given by the model. In Theorem 3.1, we prove an upper bound on the estimation error of the observed sensitivity that decays exponentially in the number of samples. As the number of samples goes to infinity, the observed and latent sensitivities coincide with probability 1. This shows that using the observed quantity as an estimate is feasible.

According to this relationship and Theorem 3.1, AUL estimated on PU data is a valid replacement for AUC in PU model evaluation. No knowledge of the class prior probability is necessary when choosing the best among candidate models.
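As an informal sanity check of this claim, one can simulate the observation model and compare the AUL computed with the true labels against the AUL computed with the observed tags on the same scores. The data-generating choices below (sample size, prior, observation probability c, Gaussian scores) are ours:

```python
import random

def aul(scores, labels):
    """Empirical area under the lift curve (trapezoidal rule)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos, n = sum(labels), len(scores)
    area, prev, tp = 0.0, 0.0, 0
    for i in order:
        tp += labels[i]
        rec = tp / n_pos
        area += (prev + rec) / 2 / n
        prev = rec
    return area

rng = random.Random(1)
n, pi, c = 50000, 0.1, 0.5
y = [1 if rng.random() < pi else 0 for _ in range(n)]       # latent PN labels
scores = [yi + rng.gauss(0, 1) for yi in y]                 # noisy model scores
s = [1 if yi == 1 and rng.random() < c else 0 for yi in y]  # observed PU tags
gap = abs(aul(scores, y) - aul(scores, s))
# the gap shrinks as n grows, in line with Theorem 3.1
```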

4 Design of ProbTagging

In this section, we introduce the idea of ProbTagging, and present a specific algorithm of ProbTagging.

4.1 Overview

ProbTagging is designed for PU learning with extremely imbalanced data sets, where the number of positive samples is less than 5% of the overall samples. Such cases arise widely in application scenarios such as credit card fraud detection and e-commerce product recommendation. The key idea of ProbTagging is to find as many positive samples as possible among the unlabeled samples to mitigate the data imbalance; thus ProbTagging neither treats unlabeled samples as weighted negative samples (Elkan & Noto, 2008), nor directly generates new data sets treated as negative by sampling uniformly from the unlabeled samples (Mordelet & Vert, 2014).

Specifically, ProbTagging takes the following ideas.

  1. For each unlabeled sample, calculate its similarity to the positive samples, and tag it positive or negative with a probability given by that similarity;

  2. Repeat Step 1 m times, obtaining m PN data sets (even if not completely reliable);

  3. Train m models using the PN data sets obtained in Step 2, and then aggregate them into a final model.

Figure 1 shows the schematic diagram of ProbTagging. We can see that the new PN data sets have more positive samples; in this sense ProbTagging provides a sample enhancement technique for extremely imbalanced PU learning. ProbTagging is an abstract training framework: one can choose different similarity measures in Step 1 and different models as base classifiers in Step 3.

Figure 1: Schematic diagram of ProbTagging. Blue circles indicate positive samples, orange triangles indicate unlabeled samples, and green squares indicate negative samples.

4.2 Algorithm

This part provides a specific algorithm of ProbTagging. We suggest that the similarity between unlabeled samples and positive samples be calculated by the k-NN algorithm, and give a strategy for selecting the parameter k that exploits the characteristics of ProbTagging.

4.2.1 Three Steps of ProbTagging

We describe the algorithm for each of the three steps of ProbTagging.

Step 1. We calculate the k-nearest neighbors, in Euclidean distance, of each sample in the observed set D′. Let N_k(x) be the set of k-nearest neighbors in D′ of sample x. For each unlabeled x, the similarity s(x) is selected to be the proportion of positive samples in N_k(x), i.e.

s(x) = |N_k(x) ∩ D′_P| / k.

We tag the unlabeled samples according to s(x) via the uniform distribution. More precisely, for each unlabeled x, let u(x) be drawn uniformly at random from U(0, 1), where U(a, b) denotes the uniform distribution on the range (a, b); x is tagged positive if u(x) < s(x), and negative otherwise.

After tagging all unlabeled samples, we obtain a PN data set. Two data sets of positive and negative samples are obtained from this PN data set by collecting the samples tagged positive and negative, respectively.

Step 2. Repeat Step 1 m times. Let D_i be the i-th PN data set we obtain. In this implementation, the base classifier is selected to be GBDT (Friedman, 2002). We denote by g_i the model trained by GBDT on D_i.

Step 3. The final model is defined to be the average of g_1, …, g_m, i.e.

g(x) = (1/m) · Σ_{i=1}^m g_i(x).
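The three steps above can be sketched as follows. We use k-NN similarity as in Step 1, but, to keep the example dependency-free, we substitute a trivial centroid-distance scorer for the GBDT base classifier; that substitution, all names, and the toy data are our assumptions for illustration:

```python
import math, random

def knn_similarity(X, labels, k):
    """Step 1a: s(x) = fraction of observed-positive samples among the
    k nearest neighbours of x in Euclidean distance."""
    sims = []
    for i, x in enumerate(X):
        dists = sorted((math.dist(x, X[j]), j)
                       for j in range(len(X)) if j != i)
        sims.append(sum(labels[j] for _, j in dists[:k]) / k)
    return sims

def prob_tag(labels, sims, rng):
    """Step 1b: keep observed positives; tag each unlabeled sample
    positive with probability s(x), negative otherwise."""
    return [1 if l == 1 or rng.random() < s else 0
            for l, s in zip(labels, sims)]

def centroid_scorer(X, y):
    """Stand-in base classifier (instead of GBDT): score a point by
    minus its distance to the centroid of the tagged-positive samples."""
    pos = [x for x, yi in zip(X, y) if yi == 1]
    cen = [sum(col) / len(pos) for col in zip(*pos)]
    return lambda x: -math.dist(x, cen)

def prob_tagging(X, labels, k=5, m=10, seed=0):
    """Steps 1-3: generate m tagged PN sets, train m base models,
    and average them into the final model."""
    rng = random.Random(seed)
    sims = knn_similarity(X, labels, k)
    models = [centroid_scorer(X, prob_tag(labels, sims, rng))
              for _ in range(m)]
    return lambda x: sum(mdl(x) for mdl in models) / m

# toy data: positives cluster near (2, 2); only 8 of the 40 are observed
rng = random.Random(42)
pos = [(rng.gauss(2, 0.5), rng.gauss(2, 0.5)) for _ in range(40)]
neg = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(160)]
X = pos + neg
labels = [1] * 8 + [0] * 192
model = prob_tagging(X, labels)
# points near the positive cluster should score higher than distant ones
```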

4.2.2 Parameter Selection

For selecting a suitable parameter k, let f(k) be the expected number of unlabeled samples that are tagged positive. That is,

f(k) = Σ_{x ∈ D′_U} s(x),

where s(x) is the k-NN similarity defined above. We choose the Credit Card Fraud data set in Table 1 as an illustrative example; the graph of f(k) with respect to k is shown in Figure 2. (The information on all data sets is described later in Section 5.1.1.)

As we can see, the value of f(k) changes in two stages as k increases.

  1. Initial stage: f(k) increases significantly as k increases. In this stage, if the value of k slightly increases, a remarkable number of unlabeled samples will be tagged positive, which makes good use of the few observed positive samples.

  2. Stable stage: f(k) changes little, and decreases slightly, as k increases. In this stage, increasing k brings few benefits. As a trade-off, a small k does not make full use of the positive samples, while a large k requires too much computation.

Therefore, suitable values of k are chosen around the interchange of the two stages. For example, the value of k can be between 16 and 20 in Figure 2.

Figure 2: The graph of f(k) with respect to k, induced by a training set of one fold of the Credit Card Fraud data set.
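The quantity f(k) is simply the sum of k-NN similarities over the unlabeled samples, so it can be computed directly to scan candidate values of k. The function below is our illustrative sketch, and the tiny data set is ours:

```python
import math

def expected_tagged_positive(X, labels, k):
    """f(k): the expected number of unlabeled samples tagged positive,
    i.e. the sum of the k-NN similarities s(x) over the unlabeled samples."""
    total = 0.0
    for i, x in enumerate(X):
        if labels[i] == 1:          # skip observed positives
            continue
        dists = sorted((math.dist(x, X[j]), j)
                       for j in range(len(X)) if j != i)
        total += sum(labels[j] for _, j in dists[:k]) / k
    return total

# tiny example: one observed positive next to two unlabeled twins,
# plus a far-away unlabeled cluster that contributes nothing
X = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
labels = [1, 0, 0, 0, 0, 0]
f2 = expected_tagged_positive(X, labels, 2)   # 0.5 + 0.5 = 1.0
```

In practice one would evaluate f(k) over a range of k and pick a value near where the curve flattens, as described above.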

5 Experimental Evaluation

In this section, we first introduce the preparation of the data sets and the evaluation method of the experiments. We then show the performance of ProbTagging on different data sets compared to several other PU learning methods.

5.1 Experimental Setting

5.1.1 Data Sets

Five data sets are used in our experiments: three are real-world raw PU data sets from the bank scenario, and the other two are from the UCI and Kaggle repositories. See Table 1 for more information about the data sets. As we can see in Table 1, these data sets are all extremely imbalanced.

The real-world data sets Bank1, Bank2 and Bank3 from the bank scenario are used to identify credit card customers who have a demand for savings cards. The acquisition process of the data sets is set up in two stages. In the first stage, the bank obtains the features of the credit card customers; in the second stage, the credit card customers who obtain a savings card are considered positive samples. The remaining credit card customers do not hold a savings card during the designated observation period (the second stage), but this does not mean that these customers do not need savings cards at all; therefore, their labels remain unknown, and we regard them as unlabeled samples. PU learning thus fits the bank scenario.

The data sets APS from UCI (Dua & Graff, 2016) and Credit Card Fraud from Kaggle (machine learning group ULB, 2017) are PN data sets for binary classification. They need to be manually manipulated into PU data sets by selecting a proportional parameter c, which indicates that a fraction c of the positive samples are treated as observed positive samples, while the remaining positive samples and all negative samples are treated as unlabeled. Different data sets can be generated as the value of c varies.

In the previous work (Elkan & Noto, 2008; Liu et al., 2003; Mordelet & Vert, 2014; Kiryo et al., 2017; Sakai et al., 2018), the experimental data were all PN data from public databases. But in real PU learning scenarios, negative samples are absent. Therefore, in order to make the research work closer to the real PU learning scenario, we use raw PU data sets such as Bank1, Bank2 and Bank3 to perform model training and evaluation for the first time.

Data sets #Ins. #Attr. #Positive ins.
Bank1 1365550 30 5173
Bank2 1365550 30 4989
Bank3 1365550 30 5154
APS 16000 169 375
Credit Card Fraud 284807 30 492
Table 1: Data sets description.

5.1.2 Evaluation Method

We use different methods to evaluate the models for data sets from different scenes.

For a PU data set (Bank1, Bank2 and Bank3), we divide it into a training set and a test set; the PU model is trained on the training set and evaluated on the test set by the AUL estimation method. We recall that, as proved in Section 3, the AUL computed on a PU test set is a reasonable metric for evaluating PU models.

For a PN data set (APS and Credit Card Fraud), we divide it into a PN training set and a PN test set. The PN training set is manually manipulated into a PU training set by selecting the parameter c. The model is then trained with the PU training set, and AUC is computed using the PN test set. In order to compare the two evaluation metrics, AUC and AUL, the PN test set is manually manipulated into a PU test set with the same parameter c, and the PU model is then evaluated on the PU test set by the AUL estimation method.

We compare our method against five PU learning methods: Elkan’s method (Elkan & Noto, 2008), Liu’s method (Liu et al., 2003), BaggingPU (Mordelet & Vert, 2014), nnPU (Kiryo et al., 2017) and PU-AUC (Sakai et al., 2018).

For fair comparison, we use GBDT as the classifier for Elkan's method, Liu's method, BaggingPU, and ProbTagging in the experiments. To accelerate training, we use the GPU version of XGBoost (Chen & Guestrin, 2016) for Bank1, Bank2 and Bank3, and LightGBM (Ke et al., 2017) for APS and Credit Card Fraud, as the specific implementations of GBDT. Both the GPU version of XGBoost and LightGBM use their default parameters.

The implementation of ProbTagging is described in Section 4.2.1; the parameter k is selected according to Section 4.2.2. ProbTagging and BaggingPU use the same number of base classifiers. We use the open-source code of (Kiryo et al., 2017) to implement nnPU. The open-source code offers three versions of nnPU: a linear model, a 3-layer fully-connected neural network, and a multi-layer fully-connected neural network. We directly use the network structures and parameters given in the code and choose the best result of the three versions as the final result. In addition, we follow the implementation details of (Sakai et al., 2018) to implement PU-AUC.

Class prior knowledge is required by nnPU and PU-AUC. For APS and Credit Card Fraud, the class prior probabilities are set to their actual values in the original PN data. For Bank1, Bank2 and Bank3, since class prior knowledge is unknown and impossible to obtain, it is reasonable to set the positive-class prior probability to the proportion of observed positive samples in the original PU data sets.

We use cross-validation for each data set, and take the average result of each fold as the final result. For Bank1, Bank2, Bank3 and Credit Card Fraud, we use 3-fold cross-validation; for APS, we use 5-fold cross-validation.

5.2 Results

5.2.1 Results of Real-world Data Sets

The three data sets Bank1, Bank2 and Bank3 are real-world industrial PU data sets. Therefore, no PN labels are available for reference. The AUL estimation method proposed in Section 3 is used to evaluate models.

The results of real-world bank data sets are shown in Table 2. We see that ProbTagging performs best on Bank1 and Bank3 and slightly worse than BaggingPU on Bank2. In summary, ProbTagging performs well on real-world extremely imbalanced PU data sets.

AUL Bank1 Bank2 Bank3
ProbTagging 0.6082 0.5971 0.6099
Elkan’s method 0.6052 0.5943 0.6063
Liu’s method 0.5884 0.5776 0.5849
BaggingPU 0.6072 0.5976 0.6086
nnPU 0.5286 0.5186 0.5202
PU-AUC 0.5455 0.5484 0.5533
Table 2: AUL estimation results of real-world bank data sets.

5.2.2 Results of Public Data Sets

For a PN data set, we manipulate it into a PU training set according to the parameter c. Table 3 shows the AUC results and AUL estimation results for each data set under the different methods with c = 0.5. In this case, ProbTagging performs very well on both APS and Credit Card Fraud.

It is worth mentioning that in Table 3, the AUL estimation on the PU test sets gives the same verdict as the AUC on the PN test sets about which model is better. We see that ProbTagging not only performs well under ordinary circumstances but also shows robustness in the case of high noise.

APS Credit Card Fraud
AUC AUL AUC AUL
ProbTagging 0.9921 0.9878 0.9757 0.9794
Elkan’s method 0.9896 0.9848 0.9612 0.9681
Liu’s method 0.9850 0.9806 0.9652 0.9703
BaggingPU 0.9883 0.9837 0.9682 0.9730
nnPU 0.9233 0.9271 0.8916 0.8858
PU-AUC 0.9660 0.9653 0.8768 0.8778
Table 3: AUC and AUL estimation results of public data sets with c = 0.5.

Figures 3 and 4 present the results of the different methods on Credit Card Fraud and APS, respectively, as the value of c changes. We note that a large value of 1 − c indicates low quality of the data sets: we have many unlabeled samples but few observed positive samples. As 1 − c increases, the results of each method worsen due to the decreasing number of observed positive samples. However, ProbTagging does not show a noticeable decline even when 1 − c is particularly large. We see that ProbTagging not only performs well in ordinary cases but also shows robustness in extremely imbalanced situations.

Figure 3: AUC results of Credit Card Fraud data sets.
Figure 4: AUC results of APS data sets.

5.2.3 Discussions

Learning from extremely imbalanced PU data sets is very challenging. On the real-world industrial PU data sets in Table 2, no method performs very well, because industrial data sets tend to be very noisy. Nevertheless, the improvements from ProbTagging still bring real benefits in actual industrial scenarios.

We can see that nnPU and PU-AUC do not perform very well on extremely imbalanced data sets, where positive samples are rare (just a few percent or even a few thousandths of the unlabeled samples). This is because, when there are very few positive samples, their contribution to the risk estimators of nnPU and PU-AUC, which is intended to be significant, becomes very small, and thus the risk of negative samples cannot be estimated correctly.

Each base classifier of BaggingPU uses the same positive samples, so the base classifiers remain strongly correlated and the variance of the final model is not effectively reduced. In the experiments, BaggingPU is accordingly worse than ProbTagging.

The work of (Elkan & Noto, 2008) proposes that, under the assumption that the observed positive examples are selected randomly from all positive samples, a PU model predicts probabilities that differ by only a constant factor from the true conditional probabilities of being positive. Because AUC depends only on the ordering of the test samples, the constant factor does not affect the value of AUC. Therefore, when a data set satisfies the assumption, the PU model trained by Elkan's method is close to the corresponding PN model in terms of AUC. We can see that Elkan's method sometimes performs better than BaggingPU in the experiments.

Liu's method faces the challenge that the initial model needs to be strong enough, because the initial and final models are highly correlated, and it is difficult to find pure negative or positive samples in the first step. In the experiments, the performance of Liu's method is slightly worse than that of BaggingPU and Elkan's method.

6 Conclusion

In this paper, we improve PU learning over the state of the art in two aspects: model evaluation and model training. We propose a practical AUL estimation method that requires no class prior knowledge. The AUL estimation method not only presents a new idea for model evaluation, but could also be helpful for other learning or mathematical problems in the future.

In addition, we design a new training method called ProbTagging to improve the performance of extremely imbalanced PU learning. Note that ProbTagging is an abstract training framework, in which the similarity calculation and the base classifier vary with the learning task. The specific algorithm in this paper uses k-NN for the similarity calculation and GBDT as the base classifier. Compared with state-of-the-art work, the experimental results show that ProbTagging can increase the AUC by up to 10%. ProbTagging also provides a sample enhancement technique, so we can consider applying it to fields beyond PU learning, such as multi-class semi-supervised learning.

References

  • Chen & Guestrin (2016) Chen, T. and Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 785–794, 2016.
  • Chollet (2017) Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1251–1258, 2017.
  • Du Plessis et al. (2015) Du Plessis, M., Niu, G., and Sugiyama, M. Convex formulation for learning from positive and unlabeled data. In International Conference on Machine Learning, pp. 1386–1394, 2015.
  • Du Plessis et al. (2014) Du Plessis, M. C., Niu, G., and Sugiyama, M. Analysis of learning from positive and unlabeled data. In Advances in neural information processing systems, pp. 703–711, 2014.
  • Dua & Graff (2016) Dua, D. and Graff, C. Aps failure at scania trucks data set, 2016. URL https://archive.ics.uci.edu/ml/datasets/APS+Failure+at+Scania+Trucks.
  • Elkan & Noto (2008) Elkan, C. and Noto, K. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 213–220. ACM, 2008.
  • Friedman (2002) Friedman, J. H. Stochastic gradient boosting. Computational statistics & data analysis, 38(4):367–378, 2002.
  • Ke et al. (2017) Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in neural information processing systems, pp. 3146–3154, 2017.
  • Kingma et al. (2014) Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pp. 3581–3589, 2014.
  • Kiryo et al. (2017) Kiryo, R., Niu, G., du Plessis, M. C., and Sugiyama, M. Positive unlabeled learning with non-negative risk estimator. In Advances in neural information processing systems, pp. 1675–1685, 2017.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
  • Lee & Liu (2003) Lee, W. S. and Liu, B. Learning with positive and unlabeled examples using weighted logistic regression. In Proceedings of the Twentieth International Conference on Machine Learning, volume 3, pp. 448–455, 2003.
  • Lee et al. (2012) Lee, Y., Hu, P. J., Cheng, T., and Hsieh, Y. A cost-sensitive technique for positive-example learning supporting content-based product recommendations in b-to-c e-commerce. Decision Support Systems, 53(1):245–256, 2012.
  • Li & Hua (2014) Li, C. and Hua, X. Towards positive unlabeled learning for parallel data mining: a random forest framework. In International Conference on Advanced Data Mining and Applications, pp. 573–587. Springer, 2014.
  • Li & Liu (2003) Li, X. and Liu, B. Learning to classify texts using positive and unlabeled data. In International Joint Conference on Artificial Intelligence, volume 3, pp. 587–592, 2003.
  • Li et al. (2009) Li, X., Yu, P. S., Liu, B., and Ng, S. Positive unlabeled learning for data stream classification. In Proceedings of the 2009 SIAM International Conference on Data Mining, pp. 259–270. SIAM, 2009.
  • Liu et al. (2003) Liu, B., Dai, Y., Li, X., Lee, W. S., and Philip, S. Y. Building text classifiers using positive and unlabeled examples. In IEEE International Conference on Data Mining, volume 3, pp. 179–188. Citeseer, 2003.
  • machine learning group ULB (2017) machine learning group ULB. Credit card fraud detection data set, 2017. URL https://www.kaggle.com/mlg-ulb/creditcardfraud.
  • Miyato et al. (2018) Miyato, T., Maeda, S., Koyama, M., and Ishii, S. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018.
  • Mordelet & Vert (2014) Mordelet, F. and Vert, J. A bagging svm to learn from positive and unlabeled examples. Pattern Recognition Letters, 37:201–209, 2014.
  • Oliver et al. (2018) Oliver, A., Odena, A., Raffel, C. A., Cubuk, E. D., and Goodfellow, I. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, pp. 3235–3246, 2018.
  • Sakai et al. (2018) Sakai, T., Niu, G., and Sugiyama, M. Semi-supervised auc optimization based on positive-unlabeled learning. Machine Learning, 107(4):767–794, 2018.
  • Serfling (2009) Serfling, R. J. Approximation theorems of mathematical statistics. John Wiley & Sons, 2009.
  • Simonyan & Zisserman (2015) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • Tax & Duin (2004) Tax, D. M. and Duin, R. P. Support vector data description. Machine learning, 54(1):45–66, 2004.
  • Tolles & Meurer (2016) Tolles, J. and Meurer, W. J. Logistic regression: relating patient characteristics to outcomes. Jama, 316(5):533–534, 2016.
  • Tufféry (2011) Tufféry, S. Data mining and statistics for decision making. John Wiley & Sons, 2011.
  • Xie & Li (2018) Xie, Z. and Li, M. Semi-supervised auc optimization without guessing labels of unlabeled data. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Zhang & Zuo (2009) Zhang, B. and Zuo, W. Reliable negative extracting based on knn for learning from positive and unlabeled examples. Journal of Computers, 4(1):94–101, 2009.
  • Zhu & Ghahramani (2002) Zhu, X. and Ghahramani, Z. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, 2002.