Towards Continual Learning for Multilingual Machine Translation via Vocabulary Substitution

03/11/2021
by Xavier Garcia et al.

We propose a straightforward vocabulary adaptation scheme to extend the language capacity of multilingual machine translation models, paving the way towards efficient continual learning for multilingual machine translation. Our approach is suitable for large-scale datasets, applies to distant languages with unseen scripts, incurs only minor degradation in translation performance for the original language pairs, and remains competitive even when only monolingual data is available for the new languages.
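The abstract does not spell out the mechanics, but one common way to realize vocabulary substitution is to train a new subword vocabulary covering both the original and the new languages, then rebuild the model's embedding matrix so that tokens shared with the old vocabulary keep their trained vectors while tokens introduced for the new languages are freshly initialized. The sketch below illustrates that remapping step under those assumptions; the function name, the `init_std` parameter, and the use of NumPy are illustrative choices, not details taken from the paper.

```python
# A minimal sketch of the embedding-remapping step implied by vocabulary
# substitution: tokens shared between the old and new vocabularies reuse their
# trained embeddings, while tokens new to the model are initialized fresh.
# All names here are illustrative assumptions, not the paper's actual code.
import numpy as np

def substitute_vocabulary(old_embeddings: np.ndarray,
                          old_vocab: dict[str, int],
                          new_vocab: dict[str, int],
                          init_std: float = 0.02) -> np.ndarray:
    """Build an embedding matrix for `new_vocab`, copying rows from
    `old_embeddings` wherever a token also exists in `old_vocab`."""
    dim = old_embeddings.shape[1]
    # Fresh random initialization for every token, in the old weights' dtype.
    new_embeddings = np.random.normal(
        0.0, init_std, size=(len(new_vocab), dim)
    ).astype(old_embeddings.dtype)
    # Overwrite rows for tokens that survived the vocabulary change.
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_embeddings[new_id] = old_embeddings[old_id]
    return new_embeddings
```

Continued training would then resume from the remapped embeddings, which is consistent with the abstract's claim that the original language pairs suffer only minor degradation.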


Related research

09/17/2021 · Back-translation for Large-Scale Multilingual Machine Translation
This paper illustrates our approach to the shared task on large-scale mu...

10/20/2021 · Multilingual Unsupervised Neural Machine Translation with Denoising Adapters
We consider the problem of multilingual unsupervised machine translation...

02/07/2023 · Efficiently Upgrading Multilingual Machine Translation Models to Support More Languages
With multilingual machine translation (MMT) models continuing to grow in...

10/20/2021 · Continual Learning in Multilingual NMT via Language-Specific Embeddings
This paper proposes a technique for adding a new source or target langua...

04/16/2021 · Serial or Parallel? Plug-able Adapter for multilingual machine translation
Developing a unified multilingual translation model is a key topic in ma...

04/30/2020 · Bridging linguistic typology and multilingual machine translation with multi-view language representations
Sparse language vectors from linguistic typology databases and learned e...

09/09/2023 · MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
We introduce MADLAD-400, a manually audited, general domain 3T token mon...
