Learning from Multiple Complementary Labels

12/30/2019 · by Lei Feng, et al.

Complementary-label learning is a weakly supervised learning framework in which each training example is supplied with a complementary label, which specifies only one of the classes that the example does not belong to. Although a few works have demonstrated that an unbiased estimator of the original classification risk can be obtained from complementarily labeled data alone, they are all restricted to the case where each example is associated with exactly one complementary label. It would be more promising to learn from multiple complementary labels simultaneously, since the supervision becomes richer as more complementary labels are provided. So far, however, it has been unknown whether an unbiased risk estimator exists for learning from multiple complementary labels. In this paper, we give an affirmative answer by deriving the first unbiased risk estimator for this setting. In addition, we theoretically analyze the estimation error bound of our proposed approach and show that the optimal parametric convergence rate is achieved. Finally, we experimentally demonstrate the effectiveness of the proposed approach.
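To make the setting concrete, here is a minimal sketch of a naive loss for multiple complementary labels: given a model's logits and a set of classes the example is known *not* to belong to, we penalize any probability mass placed on the ruled-out classes. Note that `complementary_loss` is a hypothetical illustration of the data format only, not the paper's unbiased risk estimator; an unbiased estimator of the ordinary classification risk requires the reweighting derived in the paper.

```python
import math

def complementary_loss(logits, comp_labels):
    """Illustrative loss for multiple complementary labels (NOT the
    paper's unbiased estimator): the negative log of the probability
    mass on the candidate classes, i.e. those not ruled out by any
    complementary label."""
    # Numerically stable softmax over the logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Candidate classes: everything not named by a complementary label.
    candidates = [k for k in range(len(logits)) if k not in comp_labels]
    return -math.log(sum(probs[k] for k in candidates))
```

Each additional complementary label shrinks the candidate set, so the loss grows whenever probability mass remains on a ruled-out class, which is why richer complementary supervision is more informative.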
