
Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment

06/11/2021
by Zewen Chi et al.

Cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences. In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task. Specifically, the model first self-labels word alignments for parallel sentences. Then we randomly mask tokens in a bitext pair. Given a masked token, the model uses a pointer network to predict the aligned token in the other language. We alternately perform the above two steps in an expectation-maximization manner. Experimental results show that our method improves cross-lingual transferability on various datasets, especially on token-level tasks such as question answering and structured prediction. Moreover, the model can serve as a pretrained word aligner, which achieves reasonably low error rates on alignment benchmarks. The code and pretrained parameters are available at https://github.com/CZWin32768/XLM-Align.
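The abstract describes an alternating procedure: self-label word alignments on a parallel pair, then train a pointer network to recover the aligned token for each masked position. Below is a minimal PyTorch sketch of that alternation. The dot-product pointer, the greedy argmax self-labeling (the paper extracts alignments more carefully), and the names `PointerWordAligner` and `denoising_alignment_step` are illustrative assumptions, not the released XLM-Align implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointerWordAligner(nn.Module):
    """Pointer network that scores, for every source token, each
    target-language position as its potential alignment."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)

    def forward(self, src_hidden, tgt_hidden):
        # src_hidden: (batch, src_len, hidden) encoder states, source side
        # tgt_hidden: (batch, tgt_len, hidden) encoder states, target side
        q = self.query(src_hidden)
        k = self.key(tgt_hidden)
        # (batch, src_len, tgt_len) pointer logits over target positions
        return q @ k.transpose(-1, -2)


def denoising_alignment_step(aligner, clean_src, clean_tgt,
                             masked_src, masked_tgt, mask_positions):
    """One EM-style alternation: self-label alignments on the unmasked
    pair (E-step), then train the pointer to recover those labels at
    the masked positions (M-step)."""
    with torch.no_grad():
        # E-step: greedy self-labeled alignments from the current model.
        pseudo_labels = aligner(clean_src, clean_tgt).argmax(dim=-1)
    # M-step: denoising word alignment loss at masked positions only.
    logits = aligner(masked_src, masked_tgt)
    return F.cross_entropy(logits[mask_positions],
                           pseudo_labels[mask_positions])
```

Here `mask_positions` would be a boolean tensor of shape (batch, src_len) marking which source tokens were replaced by the mask token, and the hidden states would come from the shared multilingual encoder being pretrained.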

