
Learning from Label Proportions by Learning with Label Noise

by Jianxin Zhang, et al.

Learning from label proportions (LLP) is a weakly supervised classification problem in which data points are grouped into bags and only the label proportions within each bag are observed, rather than the instance-level labels. The task is to learn a classifier that predicts the labels of future individual instances. Prior work on LLP for multi-class data has not produced a theoretically grounded algorithm. In this work, we provide a theoretically grounded approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of Patrini et al. We establish an excess risk bound and a generalization error analysis for our approach, and also extend the theory of the FC loss, which may be of independent interest. Empirically, our approach outperforms leading existing methods in deep learning settings across multiple datasets and architectures.
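As a rough illustration of the forward correction idea the abstract builds on, the loss pushes the model's clean-class probabilities through a known label-noise transition matrix before applying cross-entropy on the observed (noisy) labels. The sketch below is a minimal NumPy version under assumed conventions (row-stochastic `T` with `T[i, j] = P(noisy = j | clean = i)`); it is not the paper's implementation.

```python
import numpy as np

def forward_correction_loss(probs, noisy_labels, T):
    """Forward-corrected cross-entropy (sketch of the FC loss idea).

    probs:        (n, k) model probabilities over the k clean classes
    noisy_labels: (n,)   observed noisy class indices
    T:            (k, k) assumed transition matrix,
                  T[i, j] = P(noisy label j | clean label i)
    """
    # Predicted distribution over *noisy* labels: q_j = sum_i p_i * T[i, j]
    noisy_probs = probs @ T
    # Cross-entropy against the observed noisy labels
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12).mean()
```

With `T` equal to the identity (no noise), this reduces to ordinary cross-entropy, which is one sanity check for any implementation of a forward-corrected loss.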

