Training Complex Models with Multi-Task Weak Supervision

10/05/2018 · by Alexander Ratner, et al.
As machine learning models continue to increase in complexity, collecting large hand-labeled training sets has become one of the biggest roadblocks in practice. Instead, weaker forms of supervision that provide noisier but cheaper labels are often used. However, these weak supervision sources have diverse and unknown accuracies, may output correlated labels, and may label different tasks or apply at different levels of granularity. We propose a framework for integrating and modeling such weak supervision sources by viewing them as labeling different related sub-tasks of a problem, which we refer to as the multi-task weak supervision setting. We show that by solving a matrix completion-style problem, we can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of unlabeled data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.
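The core idea — recovering source accuracies from unlabeled data alone — can be illustrated in the simplest setting: a single binary task with three conditionally independent weak sources. Under those assumptions, the expected pairwise agreement between sources factorizes into a product of per-source accuracy parameters, so each accuracy can be solved for from observed agreement rates. The sketch below is a minimal method-of-moments illustration of this principle, not the paper's full multi-task matrix completion algorithm; the source count, accuracies, and noise model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# True (hidden) labels in {-1, +1} and the true accuracies of three
# hypothetical weak sources (numbers chosen for illustration only).
y = rng.choice([-1, 1], size=n)
true_acc = np.array([0.85, 0.75, 0.65])

# Each source flips the true label independently with prob (1 - acc_i),
# giving a matrix L of shape (3, n) of weak labels.
flips = rng.random((3, n)) < (1 - true_acc)[:, None]
L = np.where(flips, -y, y)

# Under conditional independence, E[L_i * L_j] = a_i * a_j, where
# a_i = 2 * acc_i - 1. Solving these "triplet" moment equations
# recovers each source's accuracy without ever seeing y below.
O = (L @ L.T) / n  # observed second-moment (agreement) matrix
a = np.array([
    np.sqrt(O[0, 1] * O[0, 2] / O[1, 2]),
    np.sqrt(O[0, 1] * O[1, 2] / O[0, 2]),
    np.sqrt(O[0, 2] * O[1, 2] / O[0, 1]),
])
est_acc = (a + 1) / 2
print(np.round(est_acc, 3))  # close to [0.85, 0.75, 0.65]
```

With the estimated accuracies in hand, sources can be combined by a weighted vote instead of a simple majority vote, which is where the reported accuracy gains over the majority-vote baseline come from. The paper generalizes this estimation step to multiple related tasks with known dependency structure via a matrix completion-style problem.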

