Discrete and Soft Prompting for Multilingual Models

09/08/2021, by Mengjie Zhao, et al.

It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy, barely surpassing the majority baseline (33.33%), whereas discrete and soft prompting outperform finetuning, achieving 36.43%. We also demonstrate good performance of prompting with training data in multiple languages other than English.
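As a rough illustration of the discrete-prompting side of this setup, the sketch below recasts a premise/hypothesis pair as a cloze question and lets a multilingual masked LM score label words at the mask position. The template, the "Yes"/"Maybe"/"No" verbalizers, and the choice of xlm-roberta-base are assumptions for illustration, not the paper's exact configuration; soft prompting would instead replace the fixed template tokens with trainable embeddings.

```python
# Minimal sketch of discrete (cloze-style) prompting for NLI with a
# multilingual masked LM. Assumptions: xlm-roberta-base as the PLM, an
# English template "<premise>? <mask>, <hypothesis>", and single-token
# verbalizers Yes/Maybe/No for entailment/neutral/contradiction.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Cloze template: the mask token stands in for the label word.
text = f"{premise}? {tokenizer.mask_token}, {hypothesis}"
inputs = tokenizer(text, return_tensors="pt")

# Verbalizers map NLI labels to words scored by the masked-LM head.
verbalizers = {"entailment": "Yes", "neutral": "Maybe", "contradiction": "No"}
label_token_ids = {
    label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
    for label, word in verbalizers.items()
}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Locate the mask position and compare verbalizer scores there.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
mask_logits = logits[0, mask_pos.item()]
scores = {label: mask_logits[tid].item() for label, tid in label_token_ids.items()}

print(max(scores, key=scores.get), scores)
```

In a few-shot setting, the same scoring function would be trained on the handful of labeled examples; in the soft-prompting variant, only the inserted prompt embeddings would be updated rather than the full model.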


Related research

Factual Consistency of Multilingual Pretrained Language Models (03/22/2022)
Pretrained language models can be queried for factual knowledge, with po...

Few-shot Learning with Multilingual Language Models (12/20/2021)
Large-scale autoregressive language models such as GPT-3 are few-shot le...

Crosslingual Generalization through Multitask Finetuning (11/03/2022)
Multitask prompted finetuning (MTF) has been shown to help large languag...

Wine is Not v i n. – On the Compatibility of Tokenizations Across Languages (09/13/2021)
The size of the vocabulary is a central design choice in large pretraine...

Scalar Adjective Identification and Multilingual Ranking (05/03/2021)
The intensity relationship that holds between scalar adjectives (e.g., n...

Language Versatilists vs. Specialists: An Empirical Revisiting on Multilingual Transfer Ability (06/11/2023)
Multilingual transfer ability, which reflects how well the models fine-t...

Local Structure Matters Most in Most Languages (11/09/2022)
Many recent perturbation studies have found unintuitive results on what ...
