Learning from Multiple Noisy Partial Labelers

06/08/2021
by   Peilin Yu, et al.

Programmatic weak supervision creates models without hand-labeled training data by combining the outputs of noisy, user-written rules and other heuristic labelers. Existing frameworks make the restrictive assumption that labelers output a single class label. Enabling users to create partial labelers that output subsets of possible class labels would greatly expand the expressivity of programmatic weak supervision. We introduce this capability by defining a probabilistic generative model that can estimate the underlying accuracies of multiple noisy partial labelers without ground truth labels. We prove that this class of models is generically identifiable up to label swapping under mild conditions. We also show how to scale up learning to 100k examples in one minute, a 300X speedup compared to a naive implementation. We evaluate our framework on three text classification and six object classification tasks. On text tasks, adding partial labels increases average accuracy by 9.6 percentage points. On image tasks, we show that partial labels allow us to approach some zero-shot object classification problems with programmatic weak supervision by using class attributes as partial labelers. Our framework is able to achieve accuracy comparable to recent embedding-based zero-shot learning methods using only pre-trained attribute detectors.
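The paper's generative model is more involved than what fits in an abstract, but the core idea of estimating labeler accuracies without ground truth can be illustrated with a minimal EM sketch. The code below is an assumption of this write-up, not the authors' model: it treats labelers as conditionally independent given the true label, scores each partial labeler's output set by whether it contains the true label, and ignores set-size normalization.

```python
import numpy as np

def em_partial_labels(votes, n_classes, n_iters=50):
    """Jointly estimate labeler accuracies and posteriors over true labels.

    votes: list of examples; each example is a list of label sets,
    one per partial labeler, e.g. [{0}, {0, 2}] for two labelers.
    """
    n_labelers = len(votes[0])
    acc = np.full(n_labelers, 0.7)  # initial accuracy guesses
    posteriors = []
    for _ in range(n_iters):
        # E-step: posterior over the true label for each example
        posteriors = []
        for sets in votes:
            logp = np.zeros(n_classes)
            for j, s in enumerate(sets):
                in_set = np.array([c in s for c in range(n_classes)])
                logp += np.where(in_set, np.log(acc[j]), np.log(1.0 - acc[j]))
            p = np.exp(logp - logp.max())
            posteriors.append(p / p.sum())
        # M-step: a labeler's accuracy is the expected fraction of
        # examples whose true label falls inside its output set
        for j in range(n_labelers):
            mass = [p[list(sets[j])].sum() for sets, p in zip(votes, posteriors)]
            acc[j] = np.clip(np.mean(mass), 1e-3, 1.0 - 1e-3)
    return acc, posteriors
```

On synthetic votes where every labeler's set contains the true label, the posteriors concentrate on the correct classes, and the learned accuracies stay strictly inside (0, 1) thanks to the clipping in the M-step.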


Related research

05/04/2022  Language Models in the Loop: Incorporating Prompting into Weak Supervision
We propose a new strategy for applying large pre-trained language models...

10/07/2021  Creating Training Sets via Weak Indirect Supervision
Creating labeled training sets has become one of the major roadblocks in...

03/22/2017  Joint Intermodal and Intramodal Label Transfers for Extremely Rare or Unseen Classes
In this paper, we present a label transfer model from texts to images fo...

10/25/2016  Socratic Learning: Augmenting Generative Models to Incorporate Latent Subsets in Training Data
A challenge in training discriminative models like neural networks is ob...

08/30/2022  AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
Weak supervision (WS) is a powerful method to build labeled datasets for...

12/16/2021  Extreme Zero-Shot Learning for Extreme Text Classification
The eXtreme Multi-label text Classification (XMC) problem concerns findi...

03/30/2022  The Weak Supervision Landscape
Many ways of annotating a dataset for machine learning classification ta...
