Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference

06/07/2021
by   Hai Hu, et al.

Multilingual transformers (XLM, mT5) have been shown to achieve remarkable transfer abilities in zero-shot settings. Most transfer studies, however, rely on automatically translated resources (XNLI, XQuAD), making it hard to discern the particular linguistic knowledge that is being transferred, and the role of expert-annotated monolingual datasets when developing task-specific models. We investigate the cross-lingual transfer abilities of XLM-R for Chinese and English natural language inference (NLI), with a focus on the recent large-scale Chinese dataset OCNLI. To better understand linguistic transfer, we create four categories of challenge and adversarial tasks (totaling 17 new datasets) for Chinese that build on several well-known resources for English (e.g., HANS, the NLI stress tests). We find that cross-lingual models trained on English NLI do transfer well across our Chinese tasks: in three of four challenge categories, they perform as well as or better than the best monolingual models, even on three of five uniquely Chinese linguistic phenomena such as idioms and pro-drop. These results, however, come with important caveats: cross-lingual models often perform best when trained on a mixture of English and high-quality monolingual NLI data (OCNLI), and are often hindered by automatically translated resources (XNLI-zh). For many phenomena, all models continue to struggle, highlighting the need for our new diagnostics to help benchmark Chinese and cross-lingual models. All new datasets and code are released at https://github.com/huhailinguist/ChineseNLIProbing.
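The evaluation setup described above, scoring a model's three-way NLI predictions separately for each challenge category, can be sketched in plain Python. The category names, labels, and predictions below are illustrative placeholders, not data from the paper:

```python
from collections import defaultdict

# Each item: (challenge category, gold label, model prediction),
# using the standard three-way NLI label scheme.
# All values here are hypothetical, for illustration only.
examples = [
    ("idioms", "entailment", "entailment"),
    ("idioms", "contradiction", "neutral"),
    ("pro-drop", "neutral", "neutral"),
    ("pro-drop", "entailment", "entailment"),
    ("distraction", "contradiction", "contradiction"),
]

def accuracy_by_category(items):
    """Return {category: accuracy} over (category, gold, pred) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for cat, gold, pred in items:
        total[cat] += 1
        correct[cat] += int(gold == pred)
    return {cat: correct[cat] / total[cat] for cat in total}

scores = accuracy_by_category(examples)
```

Reporting accuracy per category rather than a single aggregate score is what lets diagnostics like these isolate which linguistic phenomena (idioms, pro-drop, etc.) a cross-lingual model actually handles.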


