In ordinary supervised classification, each training pattern is equipped with a label specifying the class it belongs to. Although supervised classifier training is effective, labeling training patterns is often expensive and time-consuming. For this reason, learning from less expensive data has been studied extensively over the last decades, including, but not limited to, semi-supervised learning [semi; zhu03icml; zhou03nips; grandvalet04nips; belkin06jmlr; mann07icml; niu13icml; li15tpami; yang16icml; kipf17iclr; laine17iclr], learning from pairwise/triple-wise constraints [pairwise_constraints; goldberger04nips; davis07icml; weinberger09jmlr; NC:Niu+etal:2014], and positive-unlabeled learning [denis98alt; elkan08kdd; ward09biometrics; blanchard10jmlr; punips; puicml; NIPS:Niu+etal:2016; kiryo17nips].
In this paper, we consider another weakly supervised classification scenario with less expensive data: instead of an ordinary class label, only a complementary label, which specifies a class that the pattern does not belong to, is available. If the number of classes is large, choosing the correct class label from many candidates is laborious, whereas choosing one of the incorrect class labels is much easier and thus less costly. In the binary classification setup, learning with complementary labels is equivalent to learning with ordinary labels, because a complementary label (i.e., "not this class") immediately determines the ordinary label as the other class. On the other hand, in $K$-class classification for $K \ge 3$, complementary labels are less informative than ordinary labels, because complementary label $\bar{y}$ only means one of the $K-1$ ordinary labels other than $\bar{y}$.
The complementary classification problem may be solved by the method of learning from partial labels [partial], where multiple candidate class labels are provided for each training pattern: complementary label $\bar{y}$ can be regarded as an extreme case of partial labels, with the candidate set consisting of all classes other than class $\bar{y}$. Another possibility is to consider a multi-label setup [multilabel2], where each pattern can belong to multiple classes: complementary label $\bar{y}$ is translated into a negative label for class $\bar{y}$ and positive labels for the other $K-1$ classes.
Our contribution in this paper is a direct risk minimization framework for the complementary classification problem. More specifically, we consider a complementary loss that incurs a large loss if a predicted complementary label is not correct. We then show that the classification risk can be estimated empirically in an unbiased fashion if the complementary loss satisfies a certain symmetric condition; the sigmoid loss and the ramp loss (see Figure 1) are shown to satisfy this symmetric condition. Theoretically, we establish estimation error bounds for the proposed method, showing that learning from complementary labels is also consistent; the order of these bounds achieves the optimal parametric rate $\mathcal{O}_p(1/\sqrt{n})$, where $\mathcal{O}_p$ denotes the order in probability and $n$ is the number of complementarily labeled data.
We further show that our proposed complementary classification can be easily combined with ordinary classification, providing a highly data-efficient classification method. This combination is particularly useful, e.g., when labels are collected through crowdsourcing [crowdsourcing]: usually, crowdworkers are asked to label a pattern by selecting the correct class from the list of all candidate classes, which is highly time-consuming when the number of classes is large. We may instead choose one of the classes randomly and ask crowdworkers whether the pattern belongs to the chosen class or not. Such a yes/no question can be answered much more easily and quickly than selecting the correct class out of a long list of candidates. Then the pattern is treated as ordinarily labeled if the answer is yes; otherwise, it is regarded as complementarily labeled.
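The yes/no labeling protocol just described can be sketched in a few lines; the function name and the class encoding (integers 0 to K−1) are our own illustrative assumptions, not part of the paper:

```python
import random

def elicit_label(true_label, num_classes, rng=random):
    """Ask a yes/no question about one randomly chosen class.

    Returns ("ordinary", y) if the worker confirms the chosen class,
    otherwise ("complementary", ybar) for the rejected class.
    """
    candidate = rng.randrange(num_classes)
    if candidate == true_label:          # worker answers "yes"
        return ("ordinary", candidate)
    return ("complementary", candidate)  # worker answers "no"
```

With K candidate classes, a uniformly chosen question yields an ordinary label with probability 1/K, so most answers produce complementary labels when K is large.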
Finally, we demonstrate the practical usefulness of the proposed methods through experiments.
2 Review of ordinary multi-class classification
Suppose that $d$-dimensional pattern $x \in \mathbb{R}^d$ and its class label $y \in \{1, \ldots, K\}$ are sampled independently from an unknown probability distribution with density $p(x, y)$. The goal of ordinary multi-class classification is to learn a classifier $f(x)$ that minimizes the classification risk with multi-class loss $\mathcal{L}(f(x), y)$:

R(f) = \mathbb{E}_{p(x,y)}[\mathcal{L}(f(x), y)],  (1)

where $\mathbb{E}_{p(x,y)}$ denotes the expectation over $p(x, y)$. Typically, a classifier is assumed to take the following form:

f(x) = \arg\max_{y \in \{1, \ldots, K\}} g_y(x),  (2)

where $g_y(x)$ is a binary classifier for class $y$ versus the rest. Then, together with a binary loss $\ell(z)$ that incurs a large loss for small $z$, the one-versus-all (OVA) loss or the pairwise-comparison (PC) loss defined as follows are used as the multi-class loss [ova]:

\mathcal{L}_{\mathrm{OVA}}(f(x), y) = \ell(g_y(x)) + \frac{1}{K-1} \sum_{y' \neq y} \ell(-g_{y'}(x)),  (3)

\mathcal{L}_{\mathrm{PC}}(f(x), y) = \sum_{y' \neq y} \ell(g_y(x) - g_{y'}(x)).  (4)

(Footnote 1: We normalize the "rest" loss in the OVA loss by $1/(K-1)$ to be consistent with the discussion in the following sections.)
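As a concrete sketch of the OVA and PC constructions (assuming the sigmoid binary loss $\ell(z) = 1/(1+e^z)$, per-class scores $g_y(x)$, and the $1/(K-1)$ normalization of the "rest" term; class indices 0 to K−1 are our own convention):

```python
import numpy as np

def sigmoid_loss(z):
    # binary loss that is large for small margins z
    return 1.0 / (1.0 + np.exp(z))

def ova_loss(g, y):
    """One-versus-all loss for score vector g (length K) and true class y,
    with the 'rest' term normalized by 1/(K-1)."""
    K = len(g)
    rest = sum(sigmoid_loss(-g[yp]) for yp in range(K) if yp != y)
    return sigmoid_loss(g[y]) + rest / (K - 1)

def pc_loss(g, y):
    """Pairwise-comparison loss: penalize classes whose scores are close to
    or above the true class's score."""
    K = len(g)
    return sum(sigmoid_loss(g[y] - g[yp]) for yp in range(K) if yp != y)
```

Both losses are small when the true class's score dominates and grow as other classes catch up.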
Finally, the expectation over the unknown density $p(x, y)$ in Eq.(1) is empirically approximated using training samples $\{(x_i, y_i)\}_{i=1}^{n}$ to give a practical classification formulation.
3 Classification from complementary labels
In this section, we formulate the problem of complementary classification and propose a risk minimization framework.
We consider the situation where, instead of ordinary class label $y$, we are given only complementary label $\bar{y}$, which specifies a class that pattern $x$ does not belong to. Our goal is still to learn a classifier that minimizes the classification risk (1), but only from complementarily labeled training samples $\{(x_i, \bar{y}_i)\}_{i=1}^{n}$. We assume that $\{(x_i, \bar{y}_i)\}_{i=1}^{n}$ are drawn independently from an unknown probability distribution with density

\bar{p}(x, \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} p(x, y).  (5)

(Footnote 2: The coefficient $1/(K-1)$ is for the normalization purpose: it would be natural to assume that all ordinary labels $y \neq \bar{y}$ contribute equally to $\bar{p}(x, \bar{y})$; in order to ensure that $\bar{p}(x, \bar{y})$ is a valid joint density summing to one, we must take the coefficient $1/(K-1)$.)
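Under this sampling assumption, complementary labels are effectively drawn uniformly from the K−1 incorrect classes, so complementarily labeled data can be simulated from ordinarily labeled data; a minimal sketch (class indices 0 to K−1 are our own convention):

```python
import random

def complementary_label(y, num_classes, rng=random):
    """Draw a complementary label uniformly from the K-1 classes
    other than the true class y."""
    ybar = rng.randrange(num_classes - 1)
    return ybar if ybar < y else ybar + 1  # skip over the true class
```

This is also how complementary labels are generated in the experiments later in the paper (by randomly selecting one of the complementary classes).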
Let us consider a complementary loss $\bar{\mathcal{L}}(f(x), \bar{y})$ for a complementarily labeled sample $(x, \bar{y})$. Then we have the following theorem, which allows unbiased estimation of the classification risk from complementarily labeled samples.

Theorem 1. The classification risk (1) can be expressed as

R(f) = (K-1)\,\mathbb{E}_{\bar{p}(x,\bar{y})}[\bar{\mathcal{L}}(f(x), \bar{y})] - M_1 + M_2,  (6)

if there exist constants $M_1, M_2 \ge 0$ such that for all $x$ and $y$, the complementary loss satisfies

\sum_{\bar{y}=1}^{K} \bar{\mathcal{L}}(f(x), \bar{y}) = M_1 \quad \text{and} \quad \mathcal{L}(f(x), y) + \bar{\mathcal{L}}(f(x), y) = M_2.  (7)

Replacing the expectation in (6) with the average over the complementarily labeled samples yields an unbiased empirical risk estimator:

\hat{R}(f) = (K-1)\,\frac{1}{n} \sum_{i=1}^{n} \bar{\mathcal{L}}(f(x_i), \bar{y}_i) - M_1 + M_2.  (8)
The first constraint in (7) can be regarded as a multi-class loss version of the symmetric constraint that we later use in Theorem 2. The second constraint in (7) means that the smaller $\mathcal{L}(f(x), y)$ is, the larger $\bar{\mathcal{L}}(f(x), y)$ should be; i.e., if "pattern $x$ belongs to class $y$" is correct, then "pattern $x$ does not belong to class $y$" should be incorrect.
Let us define the complementary losses corresponding to the OVA loss and the PC loss as

\bar{\mathcal{L}}_{\mathrm{OVA}}(f(x), \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} \ell(g_y(x)) + \ell(-g_{\bar{y}}(x)),  (9)

\bar{\mathcal{L}}_{\mathrm{PC}}(f(x), \bar{y}) = \sum_{y \neq \bar{y}} \ell(g_y(x) - g_{\bar{y}}(x)).  (10)

Then we have the following theorem (its proof is given in Appendix A):

Theorem 2. The complementary losses (9) and (10) satisfy the conditions of Theorem 1 if the binary loss $\ell(z)$ satisfies the symmetric condition

\ell(z) + \ell(-z) = 1.  (11)

For example, the sigmoid loss (13) and the ramp loss (14),

\ell_{\mathrm{S}}(z) = \frac{1}{1 + e^{z}},  (13)

\ell_{\mathrm{R}}(z) = \frac{1}{2} \max\bigl(0, \min(2, 1 - z)\bigr),  (14)

both satisfy the symmetric condition (11).
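As a numerical sanity check of the kind of symmetry required, under two explicitly assumed forms (the sigmoid loss $\ell(z) = 1/(1+e^z)$, which satisfies $\ell(z)+\ell(-z)=1$, and a PC-style complementary loss summing $\ell(g_y - g_{\bar{y}})$ over $y \neq \bar{y}$): the complementary loss summed over all K complementary labels equals the constant K(K−1)/2 for any score vector, because each unordered pair of classes contributes exactly 1.

```python
import numpy as np

def sigmoid_loss(z):
    # satisfies sigmoid_loss(z) + sigmoid_loss(-z) == 1
    return 1.0 / (1.0 + np.exp(z))

def pc_comp_loss(g, ybar):
    """PC-style complementary loss for score vector g and complementary
    label ybar (assumed form, not a verbatim quote of the paper)."""
    return sum(sigmoid_loss(g[y] - g[ybar]) for y in range(len(g)) if y != ybar)

K = 4
g = np.random.default_rng(0).normal(size=K)
total = sum(pc_comp_loss(g, ybar) for ybar in range(K))
# Each unordered class pair contributes sigmoid_loss(t) + sigmoid_loss(-t) = 1,
# so total == K * (K - 1) / 2 independently of g.
```

This is exactly the kind of "sums to a constant" behavior that the symmetric constraint demands.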
4 Estimation error bounds
In this section, we establish the estimation error bounds for the proposed method.
Let $\mathcal{G}$ be a function class for empirical risk minimization and $\sigma_1, \ldots, \sigma_n$ be Rademacher variables; then the Rademacher complexity of $\mathcal{G}$ for samples of size $n$ drawn from $p(x, y)$ is defined as [mohri12FML]

\mathfrak{R}_n(\mathcal{G}) = \mathbb{E} \sup_{g \in \mathcal{G}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i g(x_i, y_i).

Similarly, define the Rademacher complexity of $\mathcal{G}$ for samples of size $n$ drawn from $\bar{p}(x, \bar{y})$ as

\bar{\mathfrak{R}}_n(\mathcal{G}) = \mathbb{E} \sup_{g \in \mathcal{G}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i g(x_i, \bar{y}_i).

Note that the marginal densities coincide, $\bar{p}(x) = p(x)$, and thus the two complexities coincide for function classes depending only on $x$, which enables us to express the obtained theoretical results using the standard Rademacher complexity $\mathfrak{R}_n$.
To begin with, let $\tilde{\ell}$ be the shifted loss such that $\tilde{\ell}(0) = 0$ (in order to apply Talagrand's contraction lemma [ledoux91PBS] later), and let $\tilde{\bar{\mathcal{L}}}_{\mathrm{OVA}}$ and $\tilde{\bar{\mathcal{L}}}_{\mathrm{PC}}$ be losses defined following (9) and (10) but with $\tilde{\ell}$ instead of $\ell$; let $L_\ell$ be any (not necessarily the best) Lipschitz constant of $\ell$. Define the corresponding loss-composed function classes accordingly.
Let $\bar{\mathfrak{R}}_n$ denote the Rademacher complexity of the loss-composed function class for the OVA case, for samples of size $n$ drawn from $\bar{p}(x, \bar{y})$, and let its PC counterpart be defined similarly. Then, for any $\delta > 0$, with probability at least $1 - \delta$, a uniform deviation bound holds between the complementary OVA risk and its empirical counterpart (with expectation taken with respect to $\bar{p}(x, \bar{y})$), and likewise between the complementary PC risk and its empirical counterpart.
Let $f^*$ be the true risk minimizer and $\hat{f}$ be the empirical risk minimizer, i.e.,

f^* = \arg\min_{f \in \mathcal{F}} R(f) \quad \text{and} \quad \hat{f} = \arg\min_{f \in \mathcal{F}} \hat{R}(f).
Finally, based on Lemma 5, we can establish the estimation error bounds as follows:
For any $\delta > 0$, with probability at least $1 - \delta$, an estimation error bound of the corresponding order holds if $\hat{f}$ is trained by minimizing the empirical risk with the complementary OVA loss $\bar{\mathcal{L}}_{\mathrm{OVA}}$, and likewise if $\hat{f}$ is trained by minimizing the empirical risk with the complementary PC loss $\bar{\mathcal{L}}_{\mathrm{PC}}$.
Based on Lemma 5, the estimation error bounds can be proven through

R(\hat{f}) - R(f^*) = \bigl(R(\hat{f}) - \hat{R}(\hat{f})\bigr) + \bigl(\hat{R}(\hat{f}) - \hat{R}(f^*)\bigr) + \bigl(\hat{R}(f^*) - R(f^*)\bigr) \le 2 \sup_{f \in \mathcal{F}} \bigl|R(f) - \hat{R}(f)\bigr|,

where we used that $\hat{R}(\hat{f}) \le \hat{R}(f^*)$ by the definition of $\hat{f}$. ∎
Theorem 6 also guarantees that learning from complementary labels is consistent: as $n \to \infty$, $R(\hat{f}) \to R(f^*)$. Consider a linear-in-parameter model defined by

\mathcal{F} = \bigl\{ f(x) = \langle w, \phi(x) \rangle \ \big|\ \|w\| \le C_w, \ \sup_x \|\phi(x)\| \le C_\phi \bigr\},

where $\mathcal{H}$ is a Hilbert space with an inner product $\langle \cdot, \cdot \rangle$, $w \in \mathcal{H}$ is a normal vector, $\phi : \mathbb{R}^d \to \mathcal{H}$ is a feature map, and $C_w > 0$ and $C_\phi > 0$ are constants [scholkopf01LK]. It is known that $\mathfrak{R}_n(\mathcal{F}) \le C_w C_\phi / \sqrt{n}$ [mohri12FML], and thus $R(\hat{f}) - R(f^*) = \mathcal{O}_p(1/\sqrt{n})$ if this $\mathcal{F}$ is used, where $\mathcal{O}_p$ denotes the order in probability. This order is already the optimal parametric rate and cannot be improved without additional strong assumptions on the distribution, the loss, and $\mathcal{F}$ jointly.
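The $C_w C_\phi/\sqrt{n}$ bound on the Rademacher complexity of a norm-bounded linear class follows from a standard three-step argument (Cauchy–Schwarz, Jensen's inequality, and independence of the Rademacher variables); a sketch:

```latex
\mathfrak{R}_n(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{\|w\|\le C_w}
      \left\langle w,\ \frac{1}{n}\sum_{i=1}^{n}\sigma_i\phi(x_i)\right\rangle\right]
  \le \frac{C_w}{n}\,\mathbb{E}_{\sigma}\!\left\|\sum_{i=1}^{n}\sigma_i\phi(x_i)\right\|
  \le \frac{C_w}{n}\sqrt{\mathbb{E}_{\sigma}\!\left\|\sum_{i=1}^{n}\sigma_i\phi(x_i)\right\|^{2}}
  = \frac{C_w}{n}\sqrt{\sum_{i=1}^{n}\|\phi(x_i)\|^{2}}
  \le \frac{C_w C_\phi}{\sqrt{n}}.
```

The second-to-last equality uses $\mathbb{E}[\sigma_i \sigma_j] = 0$ for $i \neq j$, so all cross terms vanish.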
5 Incorporation of ordinary labels
In many practical situations, we may also have ordinarily labeled data in addition to complementarily labeled data. In such cases, we want to leverage both kinds of labeled data to obtain more accurate classifiers. To this end, motivated by sakai17icml, let us consider a convex combination of the classification risks derived from ordinarily labeled data and complementarily labeled data:

R_\alpha(f) = \alpha R(f) + (1 - \alpha) \bar{R}(f),  (15)

where $\alpha \in [0, 1]$ is a hyper-parameter that interpolates between the two risks, and $\bar{R}(f)$ denotes the complementary-label expression of the risk from Theorem 1. The combined risk (15) can be naively approximated by the sample averages as

\hat{R}_\alpha(f) = \alpha \hat{R}(f) + (1 - \alpha) \hat{\bar{R}}(f),  (16)

where $\hat{R}(f)$ is computed from ordinarily labeled data $\{(x_i, y_i)\}_{i=1}^{m}$ and $\hat{\bar{R}}(f)$ from complementarily labeled data $\{(\bar{x}_j, \bar{y}_j)\}_{j=1}^{n}$.
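The convex combination in (16) is straightforward to compute once per-sample losses are available; a minimal sketch, assuming the complementary per-sample terms have already been rescaled into unbiased risk contributions (the function name and argument layout are ours):

```python
import numpy as np

def combined_risk(loss_ord, loss_comp, alpha):
    """Convex combination of the ordinary empirical risk and the
    (unbiased) complementary empirical risk.

    loss_ord  -- per-sample ordinary losses L(f(x_i), y_i)
    loss_comp -- per-sample unbiased complementary risk contributions
    alpha     -- interpolation hyper-parameter in [0, 1]
    """
    r_ord = float(np.mean(loss_ord)) if len(loss_ord) else 0.0
    r_comp = float(np.mean(loss_comp)) if len(loss_comp) else 0.0
    return alpha * r_ord + (1.0 - alpha) * r_comp
```

Setting alpha to 1 recovers purely ordinary training and 0 purely complementary training, matching the OL and CL baselines compared in the experiments.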
As explained in the introduction, we can naturally obtain both ordinarily and complementarily labeled data through crowdsourcing [crowdsourcing]. Our risk estimator (16) can utilize both kinds of labeled data to obtain better classifiers. (Footnote 3: Note that when a pattern has already been equipped with an ordinary label, giving it a complementary label does not bring any additional information, unless the ordinary label is noisy.) We will experimentally demonstrate the usefulness of this combination method in Section 6.
6 Experiments

In this section, we experimentally evaluate the performance of the proposed methods.
6.1 Comparison of different losses
Here we first compare the performance among four variations of the proposed method with different loss functions: OVA (9) and PC (10), each with the sigmoid loss (13) and the ramp loss (14). We used the MNIST hand-written digit dataset, downloaded from the website of the late Sam Roweis (http://cs.nyu.edu/~roweis/data.html), with all patterns standardized to have zero mean and unit variance, and with different numbers of classes: from 3 classes (digits "1" to "3") to 10 classes (digits "1" to "9" and "0"). From each class, we randomly sampled 500 data for training and 500 data for testing, and generated complementary labels by randomly selecting one of the complementary classes. From the training dataset, we left out 25% of the data for validating hyperparameters based on (8), with the zero-one loss plugged into (9) or (10).
For all the methods, we used a linear-in-input model $g_y(x) = w_y^\top x + b_y$ as the binary classifier, where $\top$ denotes the transpose, $w_y$ is the weight parameter, and $b_y$ is the bias parameter for class $y$. We added an $\ell_2$-regularization term, with the regularization parameter chosen from candidate values by validation. Adam [adam] was used for optimization with 5,000 iterations and mini-batch size 100. We reported the test accuracy of the model with the best validation score out of all iterations. All experiments were carried out with Chainer [chainer].
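To make the pipeline concrete, the following self-contained sketch trains a linear model by minimizing an assumed-form complementary PC risk with the sigmoid loss on synthetic data. It deliberately deviates from the paper's setup: SciPy's generic BFGS optimizer replaces Adam, there is no regularization, and toy Gaussian data replaces MNIST, so it is an illustration rather than a reproduction.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # expit(t) = 1 / (1 + e^{-t})

rng = np.random.default_rng(0)
K, d, n = 3, 2, 300

# Well-separated synthetic classes; the learner sees only complementary labels.
centers = np.array([[4.0, 0.0], [-4.0, 4.0], [-4.0, -4.0]])
y_true = rng.integers(0, K, size=n)
X = centers[y_true] + rng.normal(size=(n, d))
ybar = (y_true + rng.integers(1, K, size=n)) % K  # uniform over wrong classes

def comp_risk(theta):
    """Empirical complementary PC risk with the sigmoid loss (assumed form).
    Minimizing it is, up to an affine rescaling, minimizing the unbiased
    estimate of the classification risk."""
    W, b = theta[: d * K].reshape(d, K), theta[d * K :]
    g = X @ W + b                               # (n, K) class scores
    diffs = g - g[np.arange(n), ybar][:, None]  # g_y - g_{ybar}
    loss = expit(-diffs)                        # sigmoid loss of each margin
    loss[np.arange(n), ybar] = 0.0              # exclude y == ybar
    return loss.sum(axis=1).mean()

theta0 = rng.normal(scale=0.1, size=d * K + K)
theta = minimize(comp_risk, theta0, method="BFGS").x
W, b = theta[: d * K].reshape(d, K), theta[d * K :]
accuracy = np.mean(np.argmax(X @ W + b, axis=1) == y_true)
```

On this toy data the learned classifier typically performs well above the 1/3 chance level, even though training never sees an ordinary label.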
We reported means and standard deviations of the classification accuracy over five trials in Table 1. From the results, we can see that the performance of all four methods deteriorates as the number of classes increases. This is intuitive, because the supervised information that complementary labels contain becomes weaker with more classes.
The table also shows that there is no significant difference in classification accuracy among the four losses. Since the PC formulation is regarded as a more direct approach for classification vapnik (it takes the sign of the difference of the classifiers, instead of the sign of each classifier as in OVA) and the sigmoid loss is smooth, we use PC with the sigmoid loss as a representative of our proposed method in the following experiments.
| Method | 3 cls | 4 cls | 5 cls | 6 cls | 7 cls | 8 cls | 9 cls | 10 cls |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OVA Sigmoid | 95.2 (0.9) | 91.4 (0.5) | 87.5 (2.2) | 82.0 (1.3) | 74.5 (2.9) | 73.9 (1.2) | 63.6 (4.0) | 57.2 (1.6) |
| OVA Ramp | 95.1 (0.9) | 90.8 (1.0) | 86.5 (1.8) | 79.4 (2.6) | 73.9 (3.9) | 71.4 (4.0) | 66.1 (2.1) | 56.1 (3.6) |
| PC Sigmoid | 94.9 (0.5) | 90.9 (0.8) | 88.1 (1.8) | 80.3 (2.5) | 75.8 (2.5) | 72.9 (3.0) | 65.0 (3.5) | 58.9 (3.9) |
| PC Ramp | 94.5 (0.7) | 90.8 (0.5) | 88.0 (2.2) | 81.0 (2.2) | 74.0 (2.3) | 71.4 (2.4) | 69.0 (2.8) | 57.3 (2.0) |
Table 1: Means and standard deviations of the classification accuracy (%) over five trials. Best and equivalent methods (according to a t-test at the 5% significance level) are highlighted in boldface.
| Dataset | Class | Dim | # train | # test | PC/S | PL | ML |
6.2 Benchmark experiments
Next, we compare our proposed method, PC with the sigmoid loss (PC/S), with two baseline methods. The first baseline is one of the state-of-the-art partial label (PL) methods [partial] with the squared hinge loss. (Footnote 5: We decided to use the squared hinge loss, which is convex, since it was reported to work well in the original paper [partial].)
The second baseline is a multi-label (ML) method [multilabel2], where every complementary label $\bar{y}$ is translated into a negative label for class $\bar{y}$ and positive labels for the other $K-1$ classes, which yields a corresponding multi-label loss; we used the same sigmoid loss as in the proposed method. We used a one-hidden-layer neural network with rectified linear units as activation functions, and the weight-decay parameter was chosen from candidate values by validation. Standardization, validation, and optimization details follow the previous experiments.
We evaluated the classification performance on the following benchmark datasets: WAVEFORM1, WAVEFORM2, SATIMAGE, PENDIGITS, DRIVE, LETTER, and USPS. USPS can be downloaded from the website of the late Sam Roweis (http://cs.nyu.edu/~roweis/data.html), and all other datasets can be downloaded from the UCI machine learning repository (http://archive.ics.uci.edu/ml/). We tested several different settings of class labels, with an equal number of data in each class.
In Table 2, we summarized the specification of the datasets and reported the means and standard deviations of the classification accuracy over 10 trials. From the results, we can see that the proposed method is either comparable to or better than the baseline methods on many of the datasets.
6.3 Combination of ordinary and complementary labels
| Dataset | Class | Dim | # train | # test | OL | CL | OL & CL |
Finally, we demonstrate the usefulness of combining ordinarily and complementarily labeled data. We used (16), with the hyperparameter $\alpha$ fixed for simplicity. We divided our training dataset into an ordinarily labeled subset and a complementarily labeled subset. (Footnote 8: We used more complementarily labeled data than ordinarily labeled data, since a single ordinary label corresponds to $K-1$ complementary labels.) From the training dataset, we left out 25% of the data for validating hyperparameters based on the zero-one loss version of (16). Other details, such as standardization, the model and optimization, and weight-decay candidates, follow the previous experiments.
We compared three methods: the ordinary label (OL) method corresponding to $\alpha = 1$, the complementary label (CL) method corresponding to $\alpha = 0$, and the combination (OL & CL) method with intermediate $\alpha$. The PC formulation with the sigmoid loss was commonly used for all methods.
We reported the means and standard deviations of the classification accuracy over 10 trials in Table 3. From the results, we can see that OL & CL tends to outperform both OL and CL, demonstrating the usefulness of combining ordinarily and complementarily labeled data.
7 Conclusions

We proposed a novel problem setting called learning from complementary labels, and showed that an unbiased estimator of the classification risk can be obtained only from complementarily labeled data if the loss function satisfies a certain symmetric condition. Our risk estimator can easily be minimized by any stochastic optimization algorithm such as Adam [adam], allowing large-scale training. We theoretically established estimation error bounds for the proposed method, and proved that the proposed method achieves the optimal parametric rate. We further showed that our proposed complementary classification can easily be combined with ordinary classification. Finally, we experimentally demonstrated the usefulness of the proposed methods.
The formulation of learning from complementary labels may also be useful in the context of privacy-aware machine learning [privacy]: a subject may hesitate to answer private questions directly, e.g., in psychological counseling. In such a situation, providing a complementary label, i.e., one of the incorrect answers to the question, would be mentally less demanding. We will investigate this issue in the future.
It is noteworthy that the symmetric condition (11), which the loss should satisfy in our complementary classification framework, also appears in other weakly supervised learning formulations, e.g., in positive-unlabeled learning punips . It would be interesting to more closely investigate the role of this symmetric condition to gain further insight into these different weakly supervised learning problems.
GN and MS were supported by JST CREST JPMJCR1403. We thank Ikko Yamane for helpful discussions.
-  M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
-  G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11:2973–3009, 2010.
-  M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
-  O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
-  T. Cour, B. Sapp, and B. Taskar. Learning from partial labels. Journal of Machine Learning Research, 12:1501–1536, 2011.
-  J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. In ICML, 2007.
-  F. Denis. PAC learning from positive statistical queries. In ALT, 1998.
-  M. C. du Plessis, G. Niu, and M. Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, 2014.
-  M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. In ICML, 2015.
-  C. Dwork. Differential privacy: A survey of results. In TAMC, 2008.
-  C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In KDD, 2008.
-  J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2004.
-  Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS, 2004.
-  J. Howe. Crowdsourcing: Why the power of the crowd is driving the future of business. Crown Publishing Group, 2009.
-  D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
-  T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
-  R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama. Positive-unlabeled learning with non-negative risk estimator. In NIPS, 2017.
-  S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.
-  M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
-  Y.-F. Li and Z.-H. Zhou. Towards making unlabeled data never hurt. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(1):175–188, 2015.
-  G. Mann and A. McCallum. Simple, robust, scalable semi-supervised learning via expectation regularization. In ICML, 2007.
-  C. McDiarmid. On the method of bounded differences. In J. Siemons, editor, Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
-  M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
-  V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
-  G. Niu, B. Dai, M. Yamada, and M. Sugiyama. Information-theoretic semi-supervised metric learning via entropy regularization. Neural Computation, 26(8):1717–1762, 2014.
-  G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama. Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In NIPS, 2016.
-  G. Niu, W. Jitkrittum, B. Dai, H. Hachiya, and M. Sugiyama. Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning. In ICML, 2013.
-  T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama. Semi-supervised classification based on classification from positive and unlabeled data. In ICML, 2017.
-  B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2001.
-  S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems in NIPS, 2015.
-  V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, 1998.
-  G. Ward, T. Hastie, S. Barry, J. Elith, and J. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
-  K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
-  E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS, 2002.
-  Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.
-  T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225–1251, 2004.
-  D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2003.
-  X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Appendix A Proof of Theorem 2
From Eq.(11), we have
Appendix B Proof of Lemma 3
By definition, we have the corresponding decomposition; after rewriting, we see that the deviation can be split into two supremum terms, due to the sub-additivity of the supremum.
The first term is independent of and thus