Collective Wisdom: Improving Low-resource Neural Machine Translation using Adaptive Knowledge Distillation

10/12/2020
by Fahimeh Saleh, et al.

Scarcity of parallel sentence pairs poses a significant hurdle for training high-quality Neural Machine Translation (NMT) models in bilingually low-resource scenarios. A standard approach is transfer learning, which involves taking a model trained on a high-resource language pair and fine-tuning it on the data of the low-resource MT condition of interest. However, it is generally not clear which high-resource language pair offers the best transfer for the target MT setting. Furthermore, different transferred models may have complementary semantic and/or syntactic strengths, so relying on only one of them may be sub-optimal. In this paper, we tackle this problem using knowledge distillation: we propose to distill the knowledge of an ensemble of teacher models into a single student model. As the quality of these teacher models varies, we propose an effective adaptive knowledge distillation approach that dynamically adjusts the contribution of each teacher during the distillation process. Experiments on transferring from a collection of six language pairs from IWSLT to five low-resource language pairs from TED Talks demonstrate the effectiveness of our approach, achieving improvements of up to +0.9 BLEU over strong baselines.
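To make the idea concrete, below is a minimal PyTorch-style sketch of multi-teacher distillation with per-batch adaptive weights. It assumes the weights are obtained from a softmax over each teacher's negated cross-entropy on the current batch (so a teacher that fits the batch better contributes more); the function and argument names are illustrative, and the paper's exact weighting scheme and loss mixing may differ.

```python
import torch
import torch.nn.functional as F


def adaptive_kd_loss(student_logits, teacher_logits_list, target_ids,
                     pad_id, temperature=1.0, alpha=0.5):
    """Hard-label NMT loss combined with an ensemble KD loss whose
    per-teacher weights are recomputed on every batch (illustrative sketch)."""
    vocab = student_logits.size(-1)
    flat_targets = target_ids.reshape(-1)

    # Standard cross-entropy against the reference translation (hard labels).
    nll = F.cross_entropy(student_logits.reshape(-1, vocab), flat_targets,
                          ignore_index=pad_id)

    # Assumed adaptive rule: score each (frozen) teacher on this batch and
    # turn the negative losses into weights, so better teachers count more.
    with torch.no_grad():
        teacher_losses = torch.stack([
            F.cross_entropy(t.reshape(-1, vocab), flat_targets,
                            ignore_index=pad_id)
            for t in teacher_logits_list
        ])
        weights = torch.softmax(-teacher_losses, dim=0)

    # Weighted KL divergence between the student and each teacher's
    # temperature-softened output distribution (padding positions are not
    # masked here, to keep the sketch short).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = sum(
        w * F.kl_div(log_p_student,
                     F.softmax(t / temperature, dim=-1),
                     reduction="batchmean")
        for w, t in zip(weights, teacher_logits_list)
    )

    # Mix the hard-label loss with the distillation loss.
    return alpha * nll + (1.0 - alpha) * (temperature ** 2) * kd
```

In training, student_logits and each entry of teacher_logits_list would be decoder outputs of shape (batch, target_length, vocab_size) computed on the same batch, with the teacher models kept frozen.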

Related research

08/17/2019 · Language Graph Distillation for Low-Resource Machine Translation
Neural machine translation on low-resource language is challenging due t...

10/15/2021 · Multilingual Neural Machine Translation: Can Linguistic Hierarchies Help?
Multilingual Neural Machine Translation (MNMT) trains a single NMT model...

03/02/2023 · Letz Translate: Low-Resource Machine Translation for Luxembourgish
Natural language processing of Low-Resource Languages (LRL) is often cha...

05/02/2017 · A Teacher-Student Framework for Zero-Resource Neural Machine Translation
While end-to-end neural machine translation (NMT) has made remarkable pr...

10/27/2022 · Too Brittle To Touch: Comparing the Stability of Quantization and Distillation Towards Developing Lightweight Low-Resource MT Models
Leveraging shared learning through Massively Multilingual Models, state-...

09/15/2023 · Multilingual Sentence-Level Semantic Search using Meta-Distillation Learning
Multilingual semantic search is the task of retrieving relevant contents...

04/19/2023 · An Empirical Study of Leveraging Knowledge Distillation for Compressing Multilingual Neural Machine Translation Models
Knowledge distillation (KD) is a well-known method for compressing neura...
