In machine learning, there are situations where only positively labeled data can be collected, together with a large amount of unlabeled data. Learning from such data is called positive-unlabeled (PU) learning. Since PU learning can be seen as learning with clean P data and noisy negative (N) data, it is also related to learning with noisy labels [2, 3] and UU learning , although in the latter two learning paradigms no clean P data is available.
Depending on the data generation process, the problem settings of PU learning can be divided into two categories: censoring PU learning , where U data is collected first and then some of the P data within it is labeled, and case-control PU learning , where P and U data are collected separately. Case-control PU learning is slightly more general than censoring PU learning , and many recent efforts have been devoted to solving it [8, 9, 10, 11]. In this paper, we also focus on case-control PU learning.
Many PU learning methods have been proposed, and they can generally be divided into generative and discriminative methods. Discriminative methods form the majority; few generative algorithms exist except , which relies on the restrictive cluster assumption . In the early development of discriminative methods, the sample selection approach was proposed: it selects N data from U data and performs ordinary PN learning afterwards. Various heuristics were then designed for selecting reliable N data, including ,  and . However, this trend was soon halted by  and , which proposed the importance reweighting approach, treating U data directly as N data but with reduced importance weights. This approach was compared extensively to various sample selection strategies , and experimental results showed that it is significantly better. The belief that the importance reweighting approach is superior to the sample selection approach was further strengthened by . From then on, importance reweighting methods for PU learning have continued to be developed. Representative methods are based on unbiased risk estimation [8, 9], which rewrites the classification risk into an equivalent form depending only on PU data. Among them, unbiased PU (uPU)  achieved relatively better empirical results and is accompanied by strong theoretical guarantees .
In the original uPU paper , simple models with only a small number of parameters were used. However, if complex models such as deep networks are used, serious overfitting occurs; a breakthrough for this problem is . It recognized that the overfitting of uPU is due to the rewriting of the risk to include a subtraction. When minimizing the empirical risk during training, because of the strong capacity of complex models, the risk will reach a minimum far below zero, while it should be no less than zero. To deal with this problem,  proposed the non-negative PU (nnPU) method, which performs regular gradient descent if the training loss is above zero, and gradient ascent otherwise. This non-negative idea has recently been generalized to broader scenarios .
Note that nnPU is still an importance reweighting method. The aforementioned belief, dating back to 2008, was based on simple models, and it may not hold for the complex models of today. Since complex models are now widely used , we wonder whether the sample selection approach can succeed in this deep learning age.
In this paper, we answer this question affirmatively by:
What to select? We were inspired by the theoretical result that using simple models, P data will benefit PU learning more than PN learning . We empirically verify the importance of P data for PU learning using complex models and propose to select P.
How to select? We were inspired by recent works on learning with noisy labels that treat small-loss data as clean data. According to them, large-loss data would be P data in PU learning. This assumption is confirmed empirically. We also discuss the choice of surrogate loss functions to make the selected P data cleaner.
How to use the selected data? We show that the selected data is usually biased, and using it directly causes serious underestimation of the empirical risk. We then carefully design a new learning objective so that the selected, biased data can be used as P data.
We call our method adaptively augmented PU (aaPU). aaPU automatically identifies P data from the set of U data during the training process. In each epoch, P data is selected and used for further training. Experimental results show that aaPU is superior.
Our contribution lies in three aspects.
data; more importantly, classical methods cannot work with complex models, because they select samples by training a PN classifier that naively treats U data as N data. As we will show in Figure 4, when complex models are used, all U data will be classified as N after training, even if we do optimal reweighting based on unbiased risk estimators.
One important direction in semi-supervised learning is self-training . Such an approach requires labeled data from all classes to begin with, whereas in many weakly supervised problems , we only have labeled data from some classes. To the best of our knowledge, this is the first work to explore this direction.
The rest of the paper is organized as follows. In Sec. 2, we give the formulation. In Sec. 3, we answer what to select, how to select and how to use the selected data, ending with the algorithm framework. We give the experimental results in Sec. 4 and conclude in Sec. 5 with future work.
2 Formulation and Review
In this section, we introduce the formulation, as well as the risk used in PU learning.
Let $x \in \mathbb{R}^d$ be the input random variable and denote the output random variable by $y \in \{+1, -1\}$. In PU learning, the training set is composed of two parts, positive data $\mathcal{X}_p$ and unlabeled data $\mathcal{X}_u$. $\mathcal{X}_p = \{x_i^p\}_{i=1}^{n_p}$ contains $n_p$ instances sampled from $p_p(x) = p(x \mid y = +1)$. $\mathcal{X}_u = \{x_i^u\}_{i=1}^{n_u}$ contains $n_u$ instances sampled from the marginal density $p(x)$. Denote by $\pi_p$ the class-prior probability, i.e., $\pi_p = p(y = +1)$. Same as , we assume that $\pi_p$ is known throughout the paper. In practice, it can be estimated from the given PU data .
Let the parameters of the classification model be $\theta$, and let $g: \mathbb{R}^d \to \mathbb{R}$ be the decision function. To learn a classifier, we need to minimize the risk
$$R(g) = \mathbb{E}_{(x, y) \sim p(x, y)}[\ell(y g(x))], \qquad (1)$$
where $\ell$ is any trainable surrogate loss of the zero-one loss. Among the loss functions in , the most popular one is the sigmoid loss, which is defined as
$$\ell_{\mathrm{sig}}(z) = \frac{1}{1 + \exp(z)}. \qquad (2)$$
There are also other surrogate loss functions, such as the logistic loss defined as
$$\ell_{\mathrm{log}}(z) = \ln(1 + \exp(-z)). \qquad (3)$$
We will discuss the choice of surrogate loss in Sec. 3.2.
For PN learning, the risk can be written as
$$R_{\mathrm{pn}}(g) = \pi_p \mathbb{E}_{p}[\ell(g(x))] + \pi_n \mathbb{E}_{n}[\ell(-g(x))], \qquad (4)$$
where $\pi_n = 1 - \pi_p$ and $\mathbb{E}_p$, $\mathbb{E}_n$ denote expectations over $p_p(x)$ and $p_n(x) = p(x \mid y = -1)$, and its empirical estimation would be
$$\widehat{R}_{\mathrm{pn}}(g) = \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(g(x_i^p)) + \frac{\pi_n}{n_n} \sum_{i=1}^{n_n} \ell(-g(x_i^n)), \qquad (5)$$
where $n_n$ is the number of N data.
In PU learning, because we do not have negative data, with
$$\pi_n p_n(x) = p(x) - \pi_p p_p(x), \qquad (6)$$
we can replace the negative loss part in Eq. (4) by
$$\pi_n \mathbb{E}_n[\ell(-g(x))] = \mathbb{E}_{p(x)}[\ell(-g(x))] - \pi_p \mathbb{E}_p[\ell(-g(x))].$$
To simplify our description in the following, we denote
$$R_p^+(g) = \mathbb{E}_p[\ell(g(x))], \quad R_p^-(g) = \mathbb{E}_p[\ell(-g(x))], \quad R_u^-(g) = \mathbb{E}_{p(x)}[\ell(-g(x))].$$
In this way, the expected risk for PU learning is written as
$$R_{\mathrm{pu}}(g) = \pi_p R_p^+(g) + R_u^-(g) - \pi_p R_p^-(g). \qquad (7)$$
Given the empirical estimation of the risk in Eq. (7), which is
$$\widehat{R}_{\mathrm{pu}}(g) = \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(g(x_i^p))$$
$$+ \frac{1}{n_u} \sum_{i=1}^{n_u} \ell(-g(x_i^u)) - \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(-g(x_i^p)), \qquad (8)$$
and minimizing it, we obtain the uPU method .
For nnPU , it recognized that when minimizing Eq. (8) in uPU using flexible enough deep networks, the second line of Eq. (8), which should stay non-negative, goes negative. Thus it proposed optimizing a non-negative version of Eq. (8), which is
$$\widetilde{R}_{\mathrm{pu}}(g) = \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(g(x_i^p)) + \max\left\{0,\; \frac{1}{n_u} \sum_{i=1}^{n_u} \ell(-g(x_i^u)) - \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(-g(x_i^p))\right\}. \qquad (9)$$
In the practical implementation, nnPU checks whether the loss is larger than zero. If it is, gradient descent is performed; otherwise, nnPU performs gradient ascent to correct the overfitting effect.
3 Our Method
In this section, we discuss what to select, how to select and how to use the selected data. Finally, we summarize the proposed algorithm.
3.1 What to Select
Considering what to select, we are motivated by a conclusion in . Let the minimizer of $\widehat{R}_{\mathrm{pn}}$ in Eq. (5) be $\widehat{g}_{\mathrm{pn}}$ and the minimizer of $\widehat{R}_{\mathrm{pu}}$ in Eq. (8) be $\widehat{g}_{\mathrm{pu}}$. Additionally, let $g^*$ optimize $R(g)$ in Eq. (1). In , Corollary 5 states that, under conditions on the functional class and with probability at least $1 - \delta$, the estimation error bounds of $\widehat{g}_{\mathrm{pu}}$ and $\widehat{g}_{\mathrm{pn}}$ can be compared directly in terms of $\pi_p$ and the sample sizes.
An interesting conclusion we can derive from the theoretical result above is that adding more P data on top of the original P data will benefit PU learning more than PN learning, since the weight of the empirical average over P data is $2\pi_p$ in PU learning (the P sample average appears in both the positive part and the subtracted negative part of Eq. (8)), which is twice the weight $\pi_p$ in PN learning . Although the theoretical results are based on linear-in-parameter models, due to the difficulty of deriving the Rademacher complexity  of complex deep networks, we were motivated by this conclusion and performed some empirical studies. In Figure 1, we show the zero-one test loss on the CIFAR-10 data , comparing two methods, nnPU  and nnPU with more positive data (nnPU+P), using a deep neural network as the classification model. We used the same network structure and settings as in Sec. 4.2. From the results, we can see that for deep networks, adding positive data benefits PU learning. Thus, in the following, we will consider selecting P data from U data, instead of N data.
3.2 How to Select
Regarding how to select, we were motivated by recent work about the memorization properties of deep neural networks , showing that during stochastic gradient descent, deep neural networks tend not to memorize all data at the same time, but memorize frequent patterns first and irregular patterns later. This effect is verified in  through empirical studies on the problem of learning with noisy labels [31, 7, 32, 33]. We re-implemented part of the result in Figure 2. From this, we can see that the validation accuracies increase in the first few epochs on both clean and noisy data, as the networks fit the frequent patterns. However, on noisy data the accuracies suddenly drop after the first few epochs, showing that the networks begin to fit the noise, i.e., the irregular patterns.
Researchers in the community of learning with noisy labels have been using such memorization properties to differentiate noisy from non-noisy data by large and small loss, respectively [23, 24]. Such works give us good intuition on how to identify P data from U data, as U data in PU learning can be seen as noisy N data. However, we do not know whether this “large-small-loss trick” still works for PU learning. We expect that PU learning algorithms based on deep networks will memorize N data first, and P data later.
To confirm this expectation empirically, after finishing a particular epoch, we fed all U data forward through the current neural network. Since these data do not have labels, we calculate their losses by assuming their labels are negative. We then plot the histogram of these losses to show their distribution. Specifically, we divide the range of these losses equally into bins and count the number of losses falling into each bin. The histograms at epochs , , and  for nnPU are shown in Figure 3, where the true N data is marked blue and the true P data is marked green. Obviously, most green P data have large losses. Besides this phenomenon, we can see the following tendencies.
The “large-small loss” trick also works in PU learning. We can see in Figure 3 that, as the training process evolves from epoch  to epoch , the majority of large-loss data are P data. Thus such data can be identified by splitting U data according to some threshold on the loss. However, when training continues until epoch , the large-loss data selected as positive contain more N data, i.e., more noise. This shows that we should select positive data at appropriate epochs, and eventually we could reduce the number of data selected.
For nnPU, although we can get a reasonable amount of positive data, the large-loss data are not purely positive, as shown in Figure 3. By selecting such positive data, we may pick up some noise. During training, the effect of this noise can accumulate and eventually deteriorate the performance of the algorithm.
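The diagnostic described above is easy to reproduce for any scorer: treat every unlabeled point as negative, record its loss, and bin the losses. A minimal pure-Python sketch (the helper names are ours, not the paper's):

```python
import math

def logistic_loss(z):
    # Logistic surrogate loss: ln(1 + exp(-z)).
    return math.log(1.0 + math.exp(-z))

def u_losses_as_negative(g, xs_u):
    # Loss of each unlabeled point when its label is assumed negative:
    # a large loss means the current model scores it strongly positive.
    return [logistic_loss(-g(x)) for x in xs_u]

def loss_histogram(losses, n_bins=10):
    # Equal-width binning of the losses, as used for Figures 3-5.
    lo, hi = min(losses), max(losses)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for v in losses:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    return counts
```

Plotting `counts` per epoch, with true-P and true-N points colored separately, reproduces the kind of histograms shown in Figures 3-5.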
Our next task is to select P data that is as pure as possible. First, we note that nnPU performs gradient ascent during training to prevent overfitting, which tends to classify some N data as P. We therefore try the uPU method with complex models: if uPU can give us a reasonable number of pure P data, then, even though uPU overfits, we can still use it as a sample selection approach. The histograms of losses of U data at epochs , , and  for uPU are shown in Figure 4. From Figure 4, we can see that although the large-loss part is purely “positive”, these data lie in the overlap of the P and U data and are thus useless. In the following, we will therefore do sample selection with nnPU.
We consider another way to get pure P data: choosing a better surrogate loss function. Previous works such as nnPU  used the sigmoid loss in Eq. (2) as the surrogate loss. Although this surrogate loss was useful in previous works, it has a drawback in our context: its values always lie within the range $(0, 1)$. Note that in our work, we need the “large-small loss” trick to identify large-loss data in the set of U data as P. However, as the sigmoid loss is bounded from above, the losses of positive data are also upper-bounded; even if these positive data could potentially reach larger losses, they are capped. Among all the surrogate loss functions listed in , we found that the logistic loss in Eq. (3) is not upper bounded. It also shares other merits of the sigmoid loss, such as being Lipschitz continuous and differentiable everywhere. We thus use the logistic loss instead of the sigmoid loss.
We also plot the histogram of the logistic loss calculated on U data at epochs , , and , as shown in Figure 5. Comparing Figures 3 and 5, we can see that at an appropriate epoch, such as epoch , there is a loss range from which we can select pure positive data if the threshold is set appropriately. Hence, we use the logistic loss as the surrogate loss to perform sample selection.
3.3 How to Use the Selected Data
With the selected data, one trivial way to use them is to add them directly into the set of P data. However, we found that this simple operation does not work well. In this section, we discuss this problem.
In Eq. (6), one principle is to use the available P data to calculate their risk when they are treated as N, i.e.,
$$\widehat{R}_p^-(g) = \frac{1}{n_p} \sum_{i=1}^{n_p} \ell(-g(x_i^p)) \qquad (10)$$
is an unbiased estimation of $R_p^-(g)$ only when the P data are not biased and can represent the true distribution $p_p(x)$. However, we performed some experiments on synthetic data using the setting of Sec. 4.1 and found that the selected data cannot represent the true distribution of P data. More specifically, we selected the  to  largest-loss data at epoch , and plotted them against the decision boundary, as they are 2-dimensional synthetic data. The results are shown in Figure 6 (a) (b) (c). We can see that, to keep the sampled data from containing many negative data, we should sample only a safe amount of them; otherwise, sampling more large-loss data easily brings in many false positives. To keep the selected data clean, we thus need to sample only a safe amount of large-loss data, which may not cover the whole positive data space and therefore has a strong tendency to be biased.

Note that in PU learning, we need to calculate two terms, $R_p^+(g)$ and $R_p^-(g)$, as shown in Eq. (7). There is almost no problem with calculating $R_p^+(g)$ by adding these small-margined P data. However, when calculating $R_p^-(g)$, the P data are treated as negative, and the small-margined P data cause a serious problem. Specifically, when the small-margined P data are treated as negative, their empirical surrogate losses in Eq. (10) can be very large, as they lie in the positive area far away from the decision boundary. Subtracting this term in Eq. (8) then leads to an underestimation of the empirical risk, which diverges greatly from the unbiased risk in this situation.
To address this problem, we design a new learning objective in the following. Given the P data $\mathcal{X}_p$, the U data $\mathcal{X}_u$ and the selected positive instances $\mathcal{X}_s$ (with $n_s = |\mathcal{X}_s|$), we have
$$\widehat{R}_{\mathrm{aapu}}(g) = \frac{\pi_p}{n_p + n_s} \sum_{x \in \mathcal{X}_p \cup \mathcal{X}_s} \ell(g(x)) + \max\left\{0,\; \frac{1}{n_u} \sum_{i=1}^{n_u} \ell(-g(x_i^u)) - \frac{\pi_p}{n_p} \sum_{i=1}^{n_p} \ell(-g(x_i^p))\right\}.$$
Here, the main difference between this objective and Eq. (9) is that, instead of treating the selected data simply as P data and using them everywhere in Eq. (9), we only use them to estimate the positive risk $R_p^+(g)$, not the negative risk $R_p^-(g)$. This difference frees us from the impact of the bias in the selected data.
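Under our reading of the objective above, the selected instances enter only the positive part of the nnPU estimator of Eq. (9). The sketch below illustrates this; the exact weighting of the pooled positive set is our assumption, not taken from the paper:

```python
import math

def logistic_loss(z):
    # Logistic surrogate loss: ln(1 + exp(-z)).
    return math.log(1.0 + math.exp(-z))

def aapu_risk(g, xs_p, xs_u, xs_sel, prior, loss=logistic_loss):
    # Positive risk: original P data *plus* the selected large-loss U data.
    pos_pool = list(xs_p) + list(xs_sel)
    r_pos = sum(loss(g(x)) for x in pos_pool) / len(pos_pool)
    # Negative correction term: only the original P data enter it; the
    # selected (possibly biased) data are deliberately kept out.
    r_p_neg = sum(loss(-g(x)) for x in xs_p) / len(xs_p)
    r_u_neg = sum(loss(-g(x)) for x in xs_u) / len(xs_u)
    return prior * r_pos + max(0.0, r_u_neg - prior * r_p_neg)
```

With `xs_sel` empty, this reduces to the nnPU estimate; adding confidently positive instances lowers only the positive risk, leaving the correction term untouched.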
To summarize, we call our method adaptively augmented PU learning (aaPU). For existing PU learning methods based on complex models, aaPU works as a wrapper that automatically identifies P data from the set of U data during the training process. In each epoch, P data is selected from U data and used in future training. The framework of our proposed algorithm is shown in Algorithm 1.
We first initialize the parameters  and  (Line ). In each epoch, we shuffle the current P data, U data and $\mathcal{X}_s$, dividing them into mini-batches (Lines  to ). Note that we keep the ratio between P and U data in each mini-batch the same as that of the full P and U data. We then use these mini-batches to perform stochastic gradient descent (Lines  to ). After updating the network with all mini-batches in one epoch, we feed all U data forward through the current neural network and calculate their surrogate losses using the negative label as the ground truth (Line ), obtaining $\ell(-g(x_i^u))$ for all $x_i^u \in \mathcal{X}_u$. Finally, we select the  instances with the largest surrogate losses that are not yet in $\mathcal{X}_s$ (Line ) and put these selected P data into $\mathcal{X}_s$ (Line ), which will be used together with the original P data to estimate the positive risk $R_p^+(g)$. Through this procedure, aaPU adds positive data adaptively and permanently as the training of the neural network evolves.
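The per-epoch selection step amounts to ranking U data by their as-negative loss and taking the largest-loss ones not selected before; a schematic version (function and parameter names are ours):

```python
import math

def logistic_loss(z):
    # Logistic surrogate loss: ln(1 + exp(-z)).
    return math.log(1.0 + math.exp(-z))

def select_new_positives(g, xs_u, already_selected, k):
    # Rank unlabeled points by their loss when treated as negative, and
    # keep the k largest-loss ones not selected in earlier epochs.
    candidates = [x for x in xs_u if x not in already_selected]
    candidates.sort(key=lambda x: logistic_loss(-g(x)), reverse=True)
    return candidates[:k]
```

The returned instances would then be appended to the selected set and reused, as positives, in all subsequent epochs.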
4 Experiments

In this section, we give the experimental results of aaPU on both synthetic and real data. The code is implemented in Python and all deep learning models are implemented with PyTorch (https://pytorch.org/).
4.1 Experiments on Synthetic Data
We first run experiments on 2-dimensional synthetic data. We randomly sample  uniformly from  and  uniformly from . The decision boundary is . All  with  are P; otherwise, they are N. We set  if , i.e., ; and  otherwise, to make it easy to visualize the decision boundary. In this way we construct a training set with  P data and  U data. We also construct a test set with  instances, including  P and  N. The class prior $\pi_p$ is thus set as .
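Since the concrete sampling ranges and boundary are elided in the text above, here is a purely illustrative generator for a 2-dimensional PU training set; the boundary `h` and every constant in it are made-up stand-ins, not the paper's values:

```python
import random

def make_synthetic_pu(n_p, n_u, seed=0):
    """Toy 2-d PU data: n_p positives plus n_u marginal (unlabeled) draws."""
    rng = random.Random(seed)

    def h(x1):
        # Hypothetical nonlinear boundary: positive iff x2 > x1^2 - 0.5.
        return x1 ** 2 - 0.5

    def draw():
        x1, x2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
        return (x1, x2), (+1 if x2 > h(x1) else -1)

    xs_p = []
    while len(xs_p) < n_p:                     # rejection-sample positives
        x, y = draw()
        if y == +1:
            xs_p.append(x)
    xs_u = [draw()[0] for _ in range(n_u)]     # unlabeled: marginal draws
    return xs_p, xs_u
```

This mirrors the case-control setting of Sec. 1: P data is drawn from the positive class-conditional, while U data is drawn from the marginal.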
We then compare aaPU with the state-of-the-art deep PU method nnPU , which has already been shown to be superior to uPU . Since the decision boundary cannot be approximated by a simple linear model, we use a neural network with three layers: one input layer of  dimensions, a fully connected layer of  neurons, another fully connected layer of  neurons, and finally the output layer. Each layer uses ReLU . We use the logistic loss as the surrogate loss. The optimizer is Adam  with the default parameters in PyTorch, and we do not use any dropout  for this simple task. The weight decay parameter is set as . We first use  as the learning rate, and lower it to  after the th epoch. The batch size is set as , and we run  epochs in total. For aaPU, we begin to select positive data from the th epoch; in each epoch, one instance with the largest loss is added. For nnPU, we use the same hyperparameter setting as the original paper, i.e.,  and .
We show the performance of nnPU and the proposed aaPU in Figure 7. From the results, we can see that when we start to select positive data at epoch  and use them from then on, the proposed aaPU quickly achieves better results than nnPU. As learning continues and more data are selected, an obvious gap appears between nnPU and aaPU, and the performance of aaPU becomes better and better.
Table 1: Data set | #p train | #u train | #p test | #n test
4.2 Experiments on Real Data
In this section, we show the results on real data. We use three datasets: CIFAR-10 , CIFAR-100  and 20News . CIFAR-10 is an image data set with three channels. CIFAR-100 is also an image data set, similar to but larger in scale than CIFAR-10. 20News is a text data set. CIFAR-10 and 20News were used in previous PU works , and we use CIFAR-100 to show the scalability of aaPU.
All these datasets are multi-class datasets. To make them suitable for binary classification, we adopt the same method as previous works  to make 20News into binary data. For CIFAR-100, we adopt the same method as for CIFAR-10, dividing the data into animal/non-animal (Positive: aquatic_mammals, fish, insects, large_carnivores, large_omnivores_and_herbivores, medium_mammals, non-insect_invertebrates, people, reptiles, small_mammals; Negative: flowers, food_containers, fruit_and_vegetables, household_electrical_devices, household_furniture, large_man-made_outdoor_things, large_natural_outdoor_scenes, trees, vehicles_1, vehicles_2). For each data set, we randomly sample  positive data according to  to form P, and  data according to  to form U. The detailed statistics for each dataset can be found in Table 1.
For CIFAR-10, we use the same network structure as . We add dropout  on the last three fully connected layers with parameter . The weight decay parameter is set as . We use Adam as the SGD optimizer with learning rate  in the first  epochs,  from epoch  to , and  after epoch . The batch size is set as , and we run  epochs in total. For aaPU,  instances are added in each epoch from the th epoch.
For CIFAR-100, we use the same network structure as , with the dropout parameter set as . The weight decay parameter is set as . We use Adam as the SGD optimizer with learning rate  in the first  epochs and  from epoch . The batch size is set as , and we run  epochs in total. For aaPU,  instances are added in each epoch from the th epoch.
For 20News, we adopt the same network structure as , without dropout. The weight decay parameter is set as . We use Adam as the SGD optimizer with learning rate  in the first  epochs and  after that. The batch size is set as , and we run  epochs in total. For aaPU, the weight decay and learning rate parameters are set the same as for nnPU. We begin to add positive data from the th epoch, and in each epoch we add  instances.
We repeat each experiment  times and report the average results as well as the variance. The results are shown in Figure 8. We can see that, on all three datasets, our proposed aaPU gets better results. Specifically, aaPU is better on CIFAR-10, and much better on CIFAR-100 and 20News. The superiority of aaPU begins to appear tens of epochs after P data selection starts.
In aaPU, one important factor is deciding when to start adding selected data from the set of U data, and how much to add each time. For the first question, we usually begin to add P data when the validation loss becomes stable. In practice, we find that the validation loss usually becomes stable  epochs after reducing the learning rate. For the second question of how much to add, we cannot add too much data, since too much will bring more false positives. But if we add only a small amount of data, such data are far away from the decision boundary and will contribute little to training the classifier. In practice, we select the number of added positive data from .
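The "start when the validation loss is stable" rule can be operationalized in many ways; one simple heuristic (our own illustration, not the paper's criterion) watches a sliding window of validation losses:

```python
def validation_stable(history, window=5, tol=1e-3):
    # Heuristic trigger: begin selecting P data once the validation loss
    # has moved by no more than `tol` over the last `window` epochs.
    if len(history) < window + 1:
        return False
    recent = history[-(window + 1):]
    return max(recent) - min(recent) <= tol
```

In a training loop, one would call this at the end of each epoch and switch on the selection step only once it first returns True.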
In this paper, to answer whether a sample selection method can outperform the latest importance reweighting method in the deep learning age, we propose aaPU. aaPU automatically selects P data from U data during training and uses these selected data in future epochs. Specifically, aaPU uses deep networks as the classification model. After each training epoch, all U data are fed forward through the neural network, and their logistic losses are calculated using the negative label as the ground truth. Data with large losses are then selected to estimate the positive risk, which is part of the risk minimized in further training. Experiments validate that the proposed aaPU works better than nnPU  on both synthetic and real data. Thus, the sample selection approach can outperform the latest importance reweighting approach in the deep learning age.
In the future, we plan to combine aaPU with , which studies the learning problem when P data, U data and biased N data are present. Although  does not target data selection, the techniques proposed there for dealing with biased N data are complementary to our work and may further improve aaPU.
We want to thank Yu-Guan Hsieh for helpful suggestions on the experiments. MS was supported by JST CREST JPMJCR1403.
-  F. Denis, “PAC learning from positive statistical queries,” in ALT, 1998.
-  D. Angluin and P. D. Laird, “Learning from noisy examples,” Machine Learning, vol. 2, no. 4, pp. 343–370, 1988.
-  C. Scott, G. Blanchard, and G. Handy, “Classification with asymmetric label noise: Consistency and maximal denoising,” in COLT, 2013.
-  N. Lu, G. Niu, A. K. Menon, and M. Sugiyama, “On the minimal supervision for training any binary classifier from only unlabeled data,” in ICLR, 2019.
-  C. Elkan and K. Noto, “Learning classifiers from only positive and unlabeled data,” in KDD, 2008.
-  G. Ward, T. Hastie, S. Barry, J. Elith, and J. R. Leathwick, “Presence-only data and the em algorithm,” Biometrics, vol. 65, no. 2, pp. 554–563, 2009.
-  A. K. Menon, B. van Rooyen, C. S. Ong, and B. Williamson, “Learning from corrupted binary labels via class-probability estimation,” in ICML, 2015.
-  M. C. du Plessis, G. Niu, and M. Sugiyama, “Analysis of learning from positive and unlabeled data,” in NeurIPS, 2014.
-  M. C. du Plessis, G. Niu, and M. Sugiyama, “Convex formulation for learning from positive and unlabeled data,” in ICML, 2015.
-  T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama, “Semi-supervised classification based on classification from positive and unlabeled data,” in ICML, 2017.
-  R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama, “Positive-unlabeled learning with non-negative risk estimator,” in NeurIPS, 2017.
-  M. Hou, B. Chaib-draa, C. Li, and Q. Zhao, “Generative adversarial positive-unlabelled learning,” in IJCAI, 2018.
-  O. Chapelle, J. Weston, and B. Schölkopf, “Cluster kernels for semi-supervised learning,” in NeurIPS, 2002.
-  X. Li and B. Liu, “Learning to classify texts using positive and unlabeled data,” in IJCAI, 2003.
-  B. Liu, W. S. Lee, P. S. Yu, and X. Li, “Partially supervised classification of text documents,” in ICML, 2002.
-  H. Yu, J. Han, and K. C. Chang, “PEBL: positive example based learning for web page classification using SVM,” in KDD, 2002.
-  W. S. Lee and B. Liu, “Learning with positive and unlabeled examples using weighted logistic regression,” in ICML, 2003.
-  B. Liu, Y. Dai, X. Li, W. S. Lee, and P. S. Yu, “Building text classifiers using positive and unlabeled examples,” in ICDM, 2003.
-  G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama, “Theoretical comparisons of positive-unlabeled learning against positive-negative learning,” in NeurIPS, 2016.
-  B. Han, G. Niu, J. Yao, X. Yu, M. Xu, I. W. Tsang, and M. Sugiyama, “Pumpout: A meta approach for robustly training deep neural networks with noisy labels,” arXiv, vol. 1809.11008, 2018.
-  I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
-  D. Arpit, S. K. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. C. Courville, Y. Bengio, and S. Lacoste-Julien, “A closer look at memorization in deep networks,” in ICML, 2017.
-  L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei, “Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels,” in ICML, 2018.
-  B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama, “Co-teaching: Robust training deep neural networks with extremely noisy labels,” in NeurIPS, 2018.
-  X. Zhu, “Semi-supervised learning,” in Encyclopedia of Machine Learning and Data Mining, pp. 1142–1147, 2017.
-  H. J. Scudder III, “Probability of error of some adaptive pattern-recognition machines,” IEEE Transactions on Information Theory, vol. 11, no. 3, pp. 363–371, 1965.
-  Z.-H. Zhou, “A brief introduction to weakly supervised learning,” National Science Review, vol. 5, no. 1, pp. 44–53, 2018.
-  H. G. Ramaswamy, C. Scott, and A. Tewari, “Mixture proportion estimation via kernel embeddings of distributions,” in ICML, 2016.
-  P. L. Bartlett and S. Mendelson, “Rademacher and gaussian complexities: Risk bounds and structural results,” Journal of Machine Learning Research, vol. 3, pp. 463–482, 2002.
-  A. Krizhevsky, “Learning multiple layers of features from tiny images,” tech. rep., University of Toronto, 2009.
-  N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari, “Learning with noisy labels,” in NeurIPS, 2013.
-  T. Liu and D. Tao, “Classification with noisy labels by importance reweighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 447–461, 2016.
-  X. Ma, Y. Wang, M. E. Houle, S. Zhou, S. M. Erfani, S. Xia, S. N. R. Wijewickrema, and J. Bailey, “Dimensionality-driven learning with noisy labels,” in ICML, 2018.
-  X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in AISTATS, 2011.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in ICML, 2015.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv, vol. 1412.6980, 2014.
-  N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  K. Lang, “Newsweeder: Learning to filter netnews,” in ICML, 1995.
-  Y. Hsieh, G. Niu, and M. Sugiyama, “Classification from positive, unlabeled and biased negative data,” arXiv, vol. 1810.00846, 2018.