Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification

08/14/2023
by   Olesya Razuvayevskaya, et al.

Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient. Previous results have demonstrated that these methods can even improve performance on some classification tasks. This paper complements existing research by investigating how these techniques influence classification performance and computation costs, compared to full fine-tuning, on multilingual text classification tasks (genre, framing, and persuasion-technique detection) that differ in input length, number of predicted classes, and classification difficulty, some of which have limited training data. In addition, we conduct in-depth analyses of their efficacy across different training scenarios (training on the original multilingual data, on translations into English, and on a subset of English-only data) and across different languages. Our findings provide valuable insights into the applicability of parameter-efficient fine-tuning techniques, particularly for complex multilingual and multilabel classification tasks.
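The low-rank update at the heart of LoRA can be sketched in a few lines: a frozen pretrained weight W is augmented with a trainable product B·A of rank r, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The class below is a minimal illustrative sketch in NumPy; the names, initialization, and hyperparameters are ours for illustration, not the paper's implementation.

```python
import numpy as np

class LoRALinear:
    """Illustrative sketch of a LoRA-adapted linear layer.

    The pretrained weight W is frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) would be trained.
    """

    def __init__(self, weight, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = weight                              # frozen pretrained weight (d_out x d_in)
        d_out, d_in = weight.shape
        self.A = rng.normal(0.0, 0.01, (r, d_in))    # trainable down-projection
        self.B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
        self.scale = alpha / r                       # scaling factor, as in the LoRA paper

    def forward(self, x):
        # y = W x + (alpha/r) * B (A x); with B = 0 at init,
        # the adapted layer starts out identical to the frozen one.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        # Only A and B are updated during fine-tuning.
        return self.A.size + self.B.size
```

For a 768x768 weight matrix, full fine-tuning updates 589,824 parameters, while the r=8 low-rank update trains only 2 * 8 * 768 = 12,288, which is the efficiency gain the paper measures against full fine-tuning.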


Related research

09/12/2020
Improving Indonesian Text Classification Using Multilingual Language Model
Compared to English, the amount of labeled data for Indonesian text clas...

09/22/2021
Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages
We explore the impact of leveraging the relatedness of languages that be...

10/24/2022
Multilingual Multimodal Learning with Machine Translated Text
Most vision-and-language pretraining research focuses on English tasks. ...

04/26/2021
Morph Call: Probing Morphosyntactic Content of Multilingual Transformers
The outstanding performance of transformer-based language models on a gr...

04/24/2023
KInITVeraAI at SemEval-2023 Task 3: Simple yet Powerful Multilingual Fine-Tuning for Persuasion Techniques Detection
This paper presents the best-performing solution to the SemEval 2023 Tas...

12/12/2022
Searching for Effective Multilingual Fine-Tuning Methods: A Case Study in Summarization
Recently, a large number of tuning strategies have been proposed to adap...

09/16/2023
Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
Foundational large language models (LLMs) can be instruction-tuned to de...
