A Closer Look at Few-Shot Crosslingual Transfer: Variance, Benchmarks and Baselines
We present a focused study of few-shot crosslingual transfer, a recently proposed NLP scenario: a pretrained multilingual encoder is first finetuned on many annotations in a high-resource language (typically English), and then finetuned on a few annotations (the "few shots") in a target language. Few-shot transfer brings large improvements over zero-shot transfer. However, we show that it inherently has large variance, and that results must be reported on multiple sets of few shots to be stable and to guarantee fair comparison of different algorithms. To address this problem, we publish our few-shot sets. Analyzing why few-shot learning outperforms zero-shot transfer, we show that large models rely heavily on lexical hints when finetuned on a few shots and then quickly overfit. We evaluate different methods that use the few-shot annotations but do not observe significant improvements over the baseline. This calls for better ways of utilizing the few-shot annotations.
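To make the two-stage setup concrete, below is a minimal sketch of the described procedure: finetune a pretrained multilingual encoder on many source-language annotations, then continue finetuning on a small sampled set of target-language shots. The model name, placeholder data, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of two-stage few-shot crosslingual finetuning.
# Model name, example data, and hyperparameters are assumptions for illustration.
import random
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # any pretrained multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def finetune(model, texts, labels, epochs=3, lr=2e-5, batch_size=8):
    """Generic finetuning loop shared by both stages."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    dataset = list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(labels)))
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Stage 1: finetune on many source-language (English) annotations.
english_texts = ["a great movie", "a terrible movie"] * 50   # placeholder data
english_labels = [1, 0] * 50
finetune(model, english_texts, english_labels)

# Stage 2: continue finetuning on a few target-language shots.
# Because results vary considerably across few-shot sets, this stage should be
# repeated with several independently sampled sets and all runs reported.
target_pool = [("una gran película", 1), ("una película terrible", 0)] * 20
few_shots = random.sample(target_pool, k=8)
finetune(model, [t for t, _ in few_shots], [y for _, y in few_shots], epochs=5)
```

The comment in stage 2 reflects the abstract's central caveat: a single sampled few-shot set gives an unstable estimate, so comparisons between methods should average over multiple sets.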