Related research:
- Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models
- Learning from Complementary Labels
- Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels
- Progressive Identification of True Labels for Partial-Label Learning
- Complementary-Label Learning for Arbitrary Losses and Models
- Learning from Similarity-Confidence Data
- Learning with Biased Complementary Labels
Learning from Multiple Complementary Labels
Complementary-label learning is a weakly supervised learning framework for the setting where each training example is supplied with a complementary label, which specifies only one of the classes that the example does not belong to. Although a few works have shown that an unbiased estimator of the original classification risk can be obtained from complementarily labeled data alone, they are all restricted to the case where each example carries exactly one complementary label. Learning from multiple complementary labels simultaneously would be more promising, since the supervision becomes richer as more complementary labels are provided. So far, however, it has remained unknown whether an unbiased risk estimator exists for this setting. In this paper, we give an affirmative answer by deriving the first unbiased risk estimator for learning from multiple complementary labels. We further analyze the estimation error bound of the proposed approach and show that it achieves the optimal parametric convergence rate. Finally, we experimentally demonstrate the effectiveness of the proposed approach.
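To make the notion of an unbiased risk estimator concrete, here is a minimal NumPy sketch of the known single-complementary-label estimator from prior work (Ishida et al.'s arbitrary-losses formulation), not the multi-label estimator derived in this paper. With K classes, a uniformly chosen complementary label ybar, and a base loss ell, the rewritten risk per example is sum_j ell(f(x), j) - (K-1) * ell(f(x), ybar). Function and variable names here are illustrative.

```python
import numpy as np

def complementary_risk(logits, comp_labels, num_classes):
    """Unbiased classification-risk estimate from singly
    complementarily labeled data, using cross-entropy as the base loss.

    Per example:  sum_j loss(f(x), j) - (K - 1) * loss(f(x), ybar),
    averaged over the batch. Note the estimate can be negative for a
    finite sample, even though the true risk is non-negative.
    """
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    losses = -np.log(probs + 1e-12)  # losses[i, j] = CE loss if j were the true label

    rows = np.arange(len(comp_labels))
    per_example = losses.sum(axis=1) - (num_classes - 1) * losses[rows, comp_labels]
    return per_example.mean()
```

As a sanity check, a model that puts high probability on its complementary (forbidden) class receives a higher estimated risk than one that avoids it, which is the behavior the rewriting is designed to penalize.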