Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding

07/15/2023
by Bolei Ma et al.

Multilingual pretrained language models (MPLMs) have demonstrated substantial performance improvements in zero-shot cross-lingual transfer across various natural language understanding tasks: the MPLM is finetuned on task-specific labelled data in a source language (e.g., English) and evaluated on a wide range of target languages. Recent studies show that prompt-based finetuning surpasses regular finetuning in few-shot scenarios, but the exploration of prompt-based learning in multilingual tasks remains limited. In this study, we propose the ProFiT pipeline to investigate the cross-lingual capabilities of Prompt-based Finetuning. We conduct comprehensive experiments on diverse cross-lingual language understanding tasks (sentiment classification, paraphrase identification, and natural language inference) and empirically analyze how the cross-lingual transfer performance of prompt-based finetuning varies across few-shot and full-data settings. Our results reveal the effectiveness and versatility of prompt-based finetuning in cross-lingual language understanding. Our findings indicate that prompt-based finetuning outperforms vanilla finetuning in full-data scenarios and exhibits even greater advantages in few-shot scenarios, with performance patterns that depend on the task type. Additionally, we analyze underlying factors, such as language similarity and pretraining data size, that impact the cross-lingual performance of prompt-based finetuning. Overall, our work provides valuable insights into the cross-lingual prowess of prompt-based finetuning.
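
To make the setup concrete, the sketch below illustrates the core mechanics of prompt-based finetuning for cross-lingual sentiment classification: the input is wrapped in a cloze-style template, and the MPLM's masked-language-modelling head scores a small set of verbalizer words at the mask position, which serve as class logits. Because the encoder is multilingual, a model finetuned this way on English source data can be applied zero-shot to target-language inputs with the same template and verbalizer. The model name, template, verbalizer words, and hyperparameters here are illustrative assumptions, not the exact ProFiT configuration from the paper.

```python
# A minimal sketch of prompt-based finetuning, assuming XLM-R and a
# hand-written English cloze template (illustrative, not the paper's setup).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "xlm-roberta-base"  # any MPLM with a masked-LM head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.train()

# Cloze template: the model fills the mask slot with a verbalizer word.
template = "{text} All in all, it was {mask}."

# Verbalizer: label 0 -> "terrible", label 1 -> "great"
# (assumed to tokenize to a single subword each).
verbalizer_ids = torch.tensor([
    tokenizer.encode(" terrible", add_special_tokens=False)[0],
    tokenizer.encode(" great", add_special_tokens=False)[0],
])

def prompt_logits(texts):
    """Read per-class logits off the MLM head at the mask position."""
    prompts = [template.format(text=t, mask=tokenizer.mask_token) for t in texts]
    batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    logits = model(**batch).logits                      # (batch, seq_len, vocab)
    mask_pos = batch["input_ids"] == tokenizer.mask_token_id
    mask_logits = logits[mask_pos]                      # (batch, vocab), one mask per prompt
    return mask_logits[:, verbalizer_ids]               # (batch, num_labels)

# One few-shot finetuning step on English source data; at inference the same
# template and verbalizer are reused zero-shot on target-language inputs.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
texts = ["The plot was gripping from start to finish."]
labels = torch.tensor([1])
loss = F.cross_entropy(prompt_logits(texts), labels)
loss.backward()
optimizer.step()
```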

Related research

XeroAlign: Zero-Shot Cross-lingual Transformer Alignment (05/06/2021)
The introduction of pretrained cross-lingual language models brought dec...

BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer (05/24/2023)
Despite remarkable advancements in few-shot generalization in natural la...

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too (05/26/2020)
Intermediate-task training has been shown to substantially improve pretr...

Multilingual Few-Shot Learning via Language Model Retrieval (06/19/2023)
Transformer-based language models have achieved remarkable success in fe...

Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer (05/22/2023)
Cross-lingual natural language inference is a fundamental problem in cro...

Training Dynamics for Curriculum Learning: A Study on Monolingual and Cross-lingual NLU (10/22/2022)
Curriculum Learning (CL) is a technique of training models via ranking e...

Enhancing Cross-lingual Prompting with Mask Token Augmentation (02/15/2022)
Prompting shows promising results in few-shot scenarios. However, its st...
