SLUA: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning

02/08/2021
by   Di Wu, et al.

Word alignment is essential for downstream cross-lingual language understanding and generation tasks. Recently, neural word alignment models have surpassed statistical models in performance, but they rely heavily on sophisticated translation models. In this study, we propose a super lightweight unsupervised word alignment (SLUA) model, in which bidirectional symmetric attention trained with a contrastive learning objective is introduced, and an agreement loss is employed to bind the two attention maps so that the alignments follow the mirror-symmetry hypothesis. Experimental results on several public benchmarks demonstrate that our model achieves competitive, if not better, performance than the state of the art in word alignment while significantly reducing training and decoding time on average. Further ablation analysis and case studies show the superiority of our proposed SLUA. Notably, we regard our model as a pioneering attempt to unify bilingual word embedding and word alignment. Encouragingly, our approach achieves a 16.4x speedup over GIZA++ and 50x parameter compression compared with Transformer-based alignment methods. We will release our code to facilitate the community.
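The core idea of binding bidirectional attention maps with an agreement loss can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the dot-product scoring, and the squared-difference form of the agreement penalty are assumptions for clarity. The intuition it shows is that if source-to-target attention aligns word i to word j, the target-to-source attention should align j back to i, so the transposed map should match.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def agreement_loss(src_emb, tgt_emb):
    """Mirror-symmetry agreement loss between bidirectional attention maps.

    src_emb: (S, d) source word embeddings; tgt_emb: (T, d) target embeddings.
    (Hypothetical helper; the paper's exact formulation may differ.)
    """
    scores = src_emb @ tgt_emb.T            # (S, T) similarity scores
    a_s2t = softmax(scores, axis=1)         # source-to-target attention map
    a_t2s = softmax(scores.T, axis=1)       # target-to-source attention map
    # Penalize disagreement: the two maps should be transposes of each other
    # when alignments are mirror-symmetric.
    return np.mean((a_s2t - a_t2s.T) ** 2)
```

With sharply diagonal embeddings (each source word matching exactly one target word), both maps approach the same permutation matrix and the loss approaches zero; random embeddings yield a positive penalty that training would push down.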


Related research:

- 01/06/2019: Improving Unsupervised Word-by-Word Translation with Language Model and Denoising Autoencoder. Unsupervised learning of cross-lingual word embedding offers elegant mat...
- 05/28/2021: Lightweight Cross-Lingual Sentence Representation Learning. Large-scale models for learning fixed-dimensional cross-lingual sentence...
- 02/26/2022: Multi-Level Contrastive Learning for Cross-Lingual Alignment. Cross-language pre-trained models such as multilingual BERT (mBERT) have...
- 10/07/2022: Robust Unsupervised Cross-Lingual Word Embedding using Domain Flow Interpolation. This paper investigates an unsupervised approach towards deriving a univ...
- 04/11/2019: Scalable Cross-Lingual Transfer of Neural Sentence Embeddings. We develop and investigate several cross-lingual alignment approaches fo...
- 09/12/2017: Cross-lingual Word Segmentation and Morpheme Segmentation as Sequence Labelling. This paper presents our segmentation system developed for the MLP 2017 s...
- 10/06/2020: Do Explicit Alignments Robustly Improve Multilingual Encoders? Multilingual BERT (mBERT), XLM-RoBERTa (XLMR) and other unsupervised mul...
