PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models

09/21/2023
by Zhou Mingjun, et al.

In the era of large language models (LLMs), training models from scratch has become impractical for most users and institutions, and efficient fine-tuning of these models is rapidly gaining popularity for high-resource languages. For low-resource languages such as Tibetan, however, it has barely been explored: research in Tibetan NLP is inherently scarce, and although no large language model for Tibetan exists yet due to its low-resource nature, one will inevitably arrive. Research on efficient fine-tuning for low-resource languages like Tibetan is therefore necessary groundwork, and our study can serve as a reference toward filling this gap. We conducted three types of efficient fine-tuning experiments on pre-trained language models (PLMs) using the publicly available TNCC-title dataset: prompt-tuning, Adapter lightweight fine-tuning, and combined prompt-tuning + Adapter fine-tuning. The experimental results show significant improvements from these methods, providing valuable insights for advancing Tibetan language applications in the context of pre-trained models.
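To make the fine-tuning settings concrete, the sketch below shows how prompt-tuning of a frozen PLM for Tibetan title classification might be set up with the Hugging Face peft library. This is a minimal illustration, not the authors' code: the checkpoint name, number of classes, and prompt length are placeholder assumptions.

```python
# Minimal sketch (not the authors' setup) of parameter-efficient prompt-tuning
# for Tibetan news-title classification with the Hugging Face `peft` library.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

model_name = "your-tibetan-plm-checkpoint"  # placeholder: any Tibetan PLM on the Hub
num_classes = 12                            # TNCC-title news categories (assumed)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=num_classes
)

# Prompt-tuning: learn a small number of virtual prompt tokens while the
# backbone PLM stays frozen, so only a tiny fraction of parameters is trained.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,  # hypothetical prompt length
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # reports the small trainable share vs. the backbone
```

The Adapter and combined settings would instead (or additionally) insert small bottleneck adapter modules into each transformer layer while keeping the backbone weights frozen, so in every case only a small fraction of parameters is updated.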

