
Bayesian Active Learning with Pretrained Language Models

by Katerina Margatina, et al.

Active Learning (AL) is a method for iteratively selecting data for annotation from a pool of unlabeled data, aiming to achieve better model performance than random selection. Previous AL approaches in Natural Language Processing (NLP) have been limited to either training task-specific models from scratch at each iteration using only the labeled data at hand, or using off-the-shelf pretrained language models (LMs) that are not adapted effectively to the downstream task. In this paper, we address these limitations by introducing BALM: Bayesian Active Learning with pretrained language Models. We first propose to adapt the pretrained LM to the downstream task by continuing training on all the available unlabeled data, and then use it for AL. We also propose a simple yet effective fine-tuning method that ensures the adapted LM is properly trained in both low- and high-resource scenarios during AL. We finally apply Monte Carlo dropout to the downstream model to obtain well-calibrated confidence scores for data selection with uncertainty sampling. Our experiments on five standard natural language understanding tasks demonstrate that BALM provides substantial data efficiency improvements compared to various combinations of acquisition functions, models, and fine-tuning methods proposed in recent AL literature.
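The final step described above — Monte Carlo dropout followed by uncertainty sampling — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the stochastic softmax outputs have already been collected by running T forward passes through the fine-tuned model with dropout kept active, and scores examples by predictive entropy of the averaged probabilities. The function names and the toy numbers are hypothetical.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive entropy from T MC-dropout forward passes.

    mc_probs: array of shape (T, N, C) -- softmax outputs from T
    stochastic passes over N unlabeled examples with C classes.
    """
    mean_probs = mc_probs.mean(axis=0)  # average over passes -> (N, C)
    # small epsilon guards against log(0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def select_for_annotation(mc_probs, k):
    """Return indices of the k most uncertain unlabeled examples."""
    scores = predictive_entropy(mc_probs)
    return np.argsort(-scores)[:k]  # highest entropy first

# Toy example (hypothetical numbers): 4 passes, 3 examples, 2 classes.
# Example 1 gets near-uniform averaged probabilities, so it is picked.
mc_probs = np.array([
    [[0.90, 0.10], [0.50, 0.50], [0.60, 0.40]],
    [[0.80, 0.20], [0.40, 0.60], [0.70, 0.30]],
    [[0.90, 0.10], [0.60, 0.40], [0.50, 0.50]],
    [[0.95, 0.05], [0.50, 0.50], [0.60, 0.40]],
])
picked = select_for_annotation(mc_probs, k=1)  # -> [1]
```

The selected indices would then be sent for annotation, added to the labeled set, and the model retrained for the next AL iteration.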



ATM: An Uncertainty-aware Active Self-training Framework for Label-efficient Text Classification

Despite the great success of pre-trained language models (LMs) in many n...

An Efficient Active Learning Pipeline for Legal Text Classification

Active Learning (AL) is a powerful tool for learning with less labeled d...

Active Learning for New Domains in Natural Language Understanding

We explore active learning (AL) utterance selection for improving the ac...

To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models

Despite achieving state-of-the-art results in nearly all Natural Languag...

Active PETs: Active Data Annotation Prioritisation for Few-Shot Claim Verification with Pattern Exploiting Training

To mitigate the impact of data scarcity on fact-checking systems, we foc...

Smooth Sailing: Improving Active Learning for Pre-trained Language Models with Representation Smoothness Analysis

Developed as a solution to a practical need, active learning (AL) method...

Active Learning Helps Pretrained Models Learn the Intended Task

Models can fail in unpredictable ways during deployment due to task ambi...