Informative Language Representation Learning for Massively Multilingual Neural Machine Translation

09/04/2022
by Renren Jin, et al.

In a multilingual neural machine translation model that fully shares parameters across all languages, an artificial language token is usually used to guide translation into the desired target language. However, recent studies show that prepending language tokens sometimes fails to steer the multilingual neural machine translation model toward the right translation directions, especially in zero-shot translation. To mitigate this issue, we propose two methods, language embedding embodiment and language-aware multi-head attention, to learn informative language representations that channel translation into the right directions. The former embodies language embeddings at different critical switching points along the information flow from the source to the target, aiming to amplify translation-direction-guiding signals. The latter exploits a matrix, instead of a vector, to represent a language in the continuous space. The matrix is chunked into multiple heads so as to learn language representations in multiple subspaces. Experimental results on two datasets for massively multilingual neural machine translation demonstrate that language-aware multi-head attention benefits both supervised and zero-shot translation and significantly alleviates the off-target translation issue. Further linguistic typology prediction experiments show that matrix-based language representations learned by our methods are capable of capturing rich linguistic typology features.
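The sketch below illustrates one plausible reading of language-aware multi-head attention, not the authors' implementation: each language is represented by a matrix (stored here as a d_model-sized embedding viewed as one vector per head), and the per-head language vector is exposed as an extra key/value slot so that every token can attend to a language-specific representation in each subspace. The module name, the use of a prepended key/value slot, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of language-aware multi-head attention (not the paper's code).
# A per-language matrix is chunked into per-head vectors; each head gets one extra
# key/value slot holding its chunk, so tokens can attend to a language-specific
# representation in every attention subspace.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LanguageAwareMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int, num_languages: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # One d_model-sized vector per language, viewed as a (num_heads, head_dim) matrix.
        self.lang_matrix = nn.Embedding(num_languages, d_model)

    def forward(self, x: torch.Tensor, lang_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); lang_id: (batch,) target-language IDs.
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        # Chunk the language matrix into one vector per head: (batch, heads, 1, head_dim).
        lang = self.lang_matrix(lang_id).view(b, self.num_heads, 1, self.head_dim)
        # Prepend the language slot to keys and values so every token can attend to it.
        k = torch.cat([lang, k], dim=2)
        v = torch.cat([lang, v], dim=2)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out_proj(out)


if __name__ == "__main__":
    layer = LanguageAwareMultiHeadAttention(d_model=512, num_heads=8, num_languages=100)
    tokens = torch.randn(2, 10, 512)        # toy batch of decoder/encoder states
    target_lang = torch.tensor([3, 7])      # one target-language ID per sentence
    print(layer(tokens, target_lang).shape)  # torch.Size([2, 10, 512])
```

Splitting the language matrix across heads, rather than adding a single language vector, lets each attention subspace learn its own view of the target language, which is the property the abstract attributes to matrix-based language representations.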


