Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings

05/23/2023
by Josip Jukić, et al.

Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in addressing the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have shown notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We empirically investigate PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full fine-tuning (FFT) in low-resource settings and show that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, linking them to AL instance-selection behavior and to the stability of PEFT. Our research underscores the synergistic potential of AL, PEFT, and task-adaptive pre-training (TAPT) in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.
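
To make the setup concrete, below is a minimal sketch of the kind of pipeline the abstract describes: a pool-based active learning loop with entropy-based uncertainty sampling on top of a PEFT-adapted text classifier. The LoRA configuration (via the HuggingFace peft library), the bert-base-uncased backbone, the query sizes, and the train_fn training hook are illustrative assumptions, not the authors' exact experimental setup.

# Minimal sketch: pool-based active learning with a PEFT (LoRA) classifier.
# Illustrative only -- the backbone, hyperparameters, and the uncertainty
# criterion are assumptions, not the paper's exact configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "bert-base-uncased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def build_peft_classifier(num_labels: int = 2):
    """Wrap a fresh PLM classifier with LoRA adapters (one PEFT variant)."""
    base = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=num_labels
    )
    lora_cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=8,
                          lora_alpha=16, lora_dropout=0.1)
    # Only the adapter weights and the classification head remain trainable.
    return get_peft_model(base, lora_cfg)

@torch.no_grad()
def entropy_scores(model, texts, batch_size=32, device="cpu"):
    """Predictive-entropy uncertainty for each unlabeled text."""
    model.eval().to(device)
    scores = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt").to(device)
        probs = torch.softmax(model(**enc).logits, dim=-1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        scores.extend(ent.tolist())
    return scores

def active_learning_loop(pool_texts, pool_labels, oracle_budget=100,
                         query_size=20, train_fn=None):
    """Query the most uncertain pool instances, retrain, and repeat."""
    labeled_idx, unlabeled_idx = [], list(range(len(pool_texts)))
    model = build_peft_classifier()
    while len(labeled_idx) < oracle_budget and unlabeled_idx:
        scores = entropy_scores(model, [pool_texts[i] for i in unlabeled_idx])
        ranked = sorted(zip(scores, unlabeled_idx), reverse=True)
        picked = [i for _, i in ranked[:query_size]]
        labeled_idx += picked
        unlabeled_idx = [i for i in unlabeled_idx if i not in picked]
        model = build_peft_classifier()  # retrain from scratch each round
        if train_fn is not None:         # hypothetical user-supplied trainer
            train_fn(model, [pool_texts[i] for i in labeled_idx],
                     [pool_labels[i] for i in labeled_idx])
    return model, labeled_idx

Retraining the PEFT model from scratch in every round mirrors standard AL practice and avoids compounding forgetting effects across rounds; the actual training routine is left to the caller via the hypothetical train_fn hook.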

Related research:
- Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning (12/04/2020)
- PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models (09/21/2023)
- Smooth Sailing: Improving Active Learning for Pre-trained Language Models with Representation Smoothness Analysis (12/20/2022)
- Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes (11/24/2022)
- On Dataset Transferability in Active Learning for Transformers (05/16/2023)
- Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints (10/17/2022)
- Towards Robust Low-Resource Fine-Tuning with Multi-View Compressed Representations (11/16/2022)
