Bayesian Active Learning with Pretrained Language Models

04/16/2021
by Katerina Margatina, et al.

Active Learning (AL) is a method for iteratively selecting data for annotation from a pool of unlabeled data, with the aim of achieving better model performance than random selection. Previous AL approaches in Natural Language Processing (NLP) have been limited either to task-specific models that are trained from scratch at each iteration using only the labeled data at hand, or to off-the-shelf pretrained language models (LMs) that are not adapted effectively to the downstream task. In this paper, we address these limitations by introducing BALM: Bayesian Active Learning with pretrained language Models. We first propose to adapt the pretrained LM to the downstream task by continuing its training on all the available unlabeled data, and then use it for AL. We also propose a simple yet effective fine-tuning method that ensures the adapted LM is trained properly in both low- and high-resource scenarios during AL. Finally, we apply Monte Carlo dropout to the downstream model to obtain well-calibrated confidence scores for data selection with uncertainty sampling. Our experiments on five standard natural language understanding tasks demonstrate that BALM provides substantial data-efficiency improvements over various combinations of acquisition functions, models, and fine-tuning methods proposed in recent AL literature.
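
The acquisition step described in the abstract, Monte Carlo dropout for uncertainty sampling, can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch, not the authors' released implementation: it assumes a HuggingFace-style classifier whose forward pass returns an object with a .logits attribute, and the names enable_mc_dropout, mc_dropout_entropy, pool_loader, and n_samples are hypothetical. It keeps dropout layers stochastic at inference time, averages the predicted class probabilities over several forward passes, and scores each unlabeled example by its predictive entropy; the highest-entropy examples are then sent for annotation.

    # Illustrative sketch of MC-dropout uncertainty sampling (not the paper's code).
    # Assumes a PyTorch/HuggingFace-style classifier whose call returns `.logits`.
    import torch

    def enable_mc_dropout(model: torch.nn.Module) -> None:
        # Put the model in eval mode (frozen batch-norm statistics, etc.),
        # then switch dropout layers back to train mode so they stay stochastic.
        model.eval()
        for module in model.modules():
            if isinstance(module, torch.nn.Dropout):
                module.train()

    @torch.no_grad()
    def mc_dropout_entropy(model, inputs, n_samples: int = 10) -> torch.Tensor:
        # Average the softmax outputs over `n_samples` stochastic forward passes,
        # then return the predictive entropy of each example in the batch
        # (higher entropy = more uncertain = better candidate for annotation).
        enable_mc_dropout(model)
        mean_probs = torch.stack(
            [torch.softmax(model(**inputs).logits, dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
        return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)

    # Usage: score the unlabeled pool batch by batch, then query the top-k examples.
    # scores = torch.cat([mc_dropout_entropy(model, batch) for batch in pool_loader])
    # query_indices = scores.topk(k=100).indices

In this sketch the LM-adaptation step (continued training on the unlabeled pool before AL begins) is assumed to have already happened; the scoring function only covers the data-selection half of the pipeline.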

Related research

12/16/2021
ATM: An Uncertainty-aware Active Self-training Framework for Label-efficient Text Classification
Despite the great success of pre-trained language models (LMs) in many n...

11/15/2022
An Efficient Active Learning Pipeline for Legal Text Classification
Active Learning (AL) is a powerful tool for learning with less labeled d...

09/12/2023
Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection
Search methods based on Pretrained Language Models (PLM) have demonstrat...

10/03/2018
Active Learning for New Domains in Natural Language Understanding
We explore active learning (AL) utterance selection for improving the ac...

10/06/2022
To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models
Despite achieving state-of-the-art results in nearly all Natural Languag...

10/31/2022
Active Learning of Non-semantic Speech Tasks with Pretrained Models
Pretraining neural networks with massive unlabeled datasets has become p...

08/18/2022
Active PETs: Active Data Annotation Prioritisation for Few-Shot Claim Verification with Pattern Exploiting Training
To mitigate the impact of data scarcity on fact-checking systems, we foc...
