Losses over Labels: Weakly Supervised Learning via Direct Loss Construction

12/13/2022
by Dylan Sam, et al.

Owing to the prohibitive costs of generating large amounts of labeled data, programmatic weak supervision is a growing paradigm within machine learning. In this setting, users design heuristics that provide noisy labels for subsets of the data. These weak labels are combined (typically via a graphical model) to form pseudolabels, which are then used to train a downstream model. In this work, we question a foundational premise of the typical weakly supervised learning pipeline: given that the heuristic provides all “label” information, why do we need to generate pseudolabels at all? Instead, we propose to directly transform the heuristics themselves into corresponding loss functions that penalize differences between our model and the heuristic. By constructing losses directly from the heuristics, we can incorporate more information than is used in the standard weakly supervised pipeline, such as how the heuristics make their decisions, which explicitly informs feature selection during training. We call our method Losses over Labels (LoL) as it creates losses directly from heuristics without going through the intermediate step of a label. We show that LoL improves upon existing weak supervision methods on several benchmark text and image classification tasks and further demonstrate that incorporating gradient information leads to better performance on almost every task.
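The abstract stays high-level, so the sketch below is only a rough illustration of the core idea, not the paper's exact objective: it builds a training loss directly from a set of labeling heuristics instead of first collapsing them into pseudolabels. The function name lol_loss, the coverage-mask representation, and the simple per-heuristic cross-entropy term are assumptions made for illustration; the gradient information from the heuristics that the paper also exploits is omitted here.

```python
import torch
import torch.nn.functional as F

def lol_loss(model_logits, heuristic_labels, coverage_masks):
    """Illustrative loss built directly from heuristics (hypothetical sketch).

    model_logits:     (N, C) model outputs for a batch of N examples.
    heuristic_labels: list of (N,) LongTensors, one per heuristic, giving the
                      class each heuristic assigns (arbitrary where it abstains).
    coverage_masks:   list of (N,) float tensors, 1.0 where the heuristic
                      fires and 0.0 where it abstains.
    """
    total = model_logits.new_zeros(())
    for labels, mask in zip(heuristic_labels, coverage_masks):
        # Penalize disagreement with this heuristic only on the examples it
        # covers; each heuristic contributes its own loss term instead of
        # being merged into a single pseudolabel first.
        per_example = F.cross_entropy(model_logits, labels, reduction="none")
        total = total + (per_example * mask).sum() / mask.sum().clamp(min=1.0)
    return total
```

For contrast, the standard weak supervision pipeline would first aggregate the heuristics' votes into one pseudolabel per example (e.g., via a label model) and then train against that single target.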


