Large Language Models as Annotators: Enhancing Generalization of NLP Models at Minimal Cost

06/27/2023
by Parikshit Bansal, et al.

State-of-the-art supervised NLP models achieve high accuracy but remain susceptible to failures on inputs from low-data regimes, such as domains that are not represented in the training data. As an approximation to collecting ground-truth labels for the specific domain, we study the use of large language models (LLMs) for annotating inputs and improving the generalization of NLP models. Specifically, given a budget for LLM annotations, we present an algorithm for sampling the most informative inputs to annotate and retrain the NLP model. We find that popular active learning strategies such as uncertainty-based sampling do not work well. Instead, exploiting the fact that most NLP models are finetuned from a base model, we propose a sampling strategy based on the difference in prediction scores between the base model and the finetuned model. Experiments on classification (semantic similarity) and ranking (semantic search) tasks show that our sampling strategy yields significant accuracy gains on both the training and target domains.
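The selection rule sketched in the abstract can be illustrated in a few lines. The snippet below is a minimal sketch of the idea, not the paper's implementation: it assumes we already have each model's prediction score for every input in an unlabeled pool, and the function name, score arrays, and budget value are all hypothetical. Inputs whose scores shifted most between the base model and the finetuned model are the ones sent to the LLM for annotation.

import numpy as np

def select_for_llm_annotation(base_scores, finetuned_scores, budget):
    """Pick the `budget` pool inputs whose prediction scores changed the
    most between the base model and the finetuned model."""
    # A larger |finetuned - base| gap is treated as more informative.
    diff = np.abs(np.asarray(finetuned_scores) - np.asarray(base_scores))
    # Indices of the `budget` largest score differences, most-changed first.
    return np.argsort(-diff)[:budget]

# Toy usage: a pool of five inputs and a budget of two LLM annotations.
base = [0.50, 0.42, 0.91, 0.30, 0.66]
finetuned = [0.52, 0.05, 0.93, 0.88, 0.60]
print(select_for_llm_annotation(base, finetuned, budget=2))  # -> [3 1]

Following the abstract, the selected inputs would then be labeled by the LLM, added to the training set, and the NLP model retrained on the augmented data.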


