Shoring Up the Foundations: Fusing Model Embeddings and Weak Supervision

03/24/2022
by Mayee F. Chen et al.

Foundation models offer an exciting new paradigm for constructing models with out-of-the-box embeddings and a few labeled examples. However, it is not clear how to best apply foundation models without labeled data. A potential approach is to fuse foundation models with weak supervision frameworks, which use weak label sources – pre-trained models, heuristics, crowd-workers – to construct pseudolabels. The challenge is building a combination that best exploits the signal available in both foundation models and weak sources. We propose Liger, a combination that uses foundation model embeddings to improve two crucial elements of existing weak supervision techniques. First, we produce finer estimates of weak source quality by partitioning the embedding space and learning per-part source accuracies. Second, we improve source coverage by extending source votes in embedding space. Despite the black-box nature of foundation models, we prove results characterizing how our approach improves performance and show that lift scales with the smoothness of label distributions in embedding space. On six benchmark NLP and video tasks, Liger outperforms vanilla weak supervision by 14.1 points, weakly-supervised kNN and adapters by 11.8 points, and kNN and adapters supervised by traditional hand labels by 7.2 points.
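
To make the two ingredients concrete, here is a minimal Python sketch of the approach the abstract describes; it is not the authors' released implementation. We assume binary labels in {-1, +1}, weak source votes in {-1, 0, +1} with 0 meaning abstain, and precomputed foundation-model embeddings. The function names (extend_votes, per_part_accuracies, pseudolabel) are illustrative, and the per-part accuracy estimate below uses agreement with the majority vote as a stand-in; Liger itself estimates source accuracies without reference labels via a latent variable model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors


def extend_votes(embeddings, votes, radius):
    """Idea 2: improve coverage by extending each source's votes through
    embedding space. An abstaining point inherits the vote of its nearest
    voted neighbor if that neighbor lies within `radius`."""
    extended = votes.copy()
    for j in range(votes.shape[1]):
        voted = np.flatnonzero(votes[:, j] != 0)
        if voted.size == 0:
            continue
        nn = NearestNeighbors(n_neighbors=1).fit(embeddings[voted])
        dist, idx = nn.kneighbors(embeddings)
        fill = (votes[:, j] == 0) & (dist[:, 0] <= radius)
        extended[fill, j] = votes[voted[idx[fill, 0]], j]
    return extended


def per_part_accuracies(embeddings, votes, n_parts, seed=0):
    """Idea 1: partition embedding space and estimate one accuracy per
    (part, source) pair instead of a single global accuracy per source.
    Agreement with the majority vote is a simplification of the paper's
    label-free estimator."""
    parts = KMeans(n_clusters=n_parts, n_init=10,
                   random_state=seed).fit_predict(embeddings)
    mv = np.sign(votes.sum(axis=1))                # crude reference label
    acc = np.full((n_parts, votes.shape[1]), 0.7)  # fallback prior
    for p in range(n_parts):
        for j in range(votes.shape[1]):
            mask = (parts == p) & (votes[:, j] != 0) & (mv != 0)
            if mask.sum() >= 10:                   # need enough evidence
                a = (votes[mask, j] == mv[mask]).mean()
                acc[p, j] = np.clip(a, 0.05, 0.95)
    return parts, acc


def pseudolabel(votes, parts, acc):
    """Combine: weight each source by the log-odds of its accuracy in the
    part containing the point; abstains (0 votes) contribute nothing."""
    w = np.log(acc / (1.0 - acc))             # shape (n_parts, n_sources)
    return np.sign((w[parts] * votes).sum(axis=1))
```

A plausible pipeline first extends votes, then fits part-specific accuracies on the extended label matrix, and finally combines them, e.g. `ext = extend_votes(emb, votes, radius=0.5)` followed by `y_hat = pseudolabel(ext, *per_part_accuracies(emb, ext, n_parts=5))`. The radius and the number of parts trade off coverage against noise in the accuracy estimates, echoing the paper's result that lift scales with how smoothly labels vary across embedding space.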
