
Gradient-guided Loss Masking for Neural Machine Translation

02/26/2021
by Xinyi Wang, et al.

To mitigate the negative effect of low-quality training data on the performance of neural machine translation models, most existing strategies focus on filtering out harmful data before training starts. In this paper, we explore strategies that dynamically optimize data usage during the training process, using the model's gradients on a small set of clean data. At each training step, our algorithm calculates the gradient alignment between the training data and the clean data, and masks out data with negative alignment. Our method has a natural intuition: good training data should update the model parameters in a direction similar to that of the clean data. Experiments on three WMT language pairs show that our method brings significant improvement over strong baselines, and the improvements generalize across test data from different domains.
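
The masking rule described in the abstract can be sketched in a few lines. Below is a minimal, illustrative PyTorch version of a single training step; it is not the authors' released implementation, and the names (flat_grad, masked_training_step, clean_batch, loss_fn) are placeholders chosen for this sketch. It assumes loss_fn returns unreduced per-example losses.

import torch


def flat_grad(loss, params):
    """Flatten the gradient of a scalar loss w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def masked_training_step(model, optimizer, loss_fn, train_batch, clean_batch):
    """One update that keeps only training examples whose gradient has a
    positive dot product with the gradient computed on the clean data."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Reference direction: gradient of the average loss on the small clean set.
    clean_x, clean_y = clean_batch
    clean_grad = flat_grad(loss_fn(model(clean_x), clean_y).mean(), params)

    # Per-example losses on the (possibly noisy) training batch.
    train_x, train_y = train_batch
    per_example_loss = loss_fn(model(train_x), train_y)  # shape: [batch_size]

    # Mask out examples whose gradient points away from the clean gradient.
    keep = torch.zeros_like(per_example_loss)
    for i, loss_i in enumerate(per_example_loss):
        g_i = flat_grad(loss_i, params)
        keep[i] = (torch.dot(g_i, clean_grad) > 0).float()

    # Update the model using only the surviving examples.
    optimizer.zero_grad()
    masked_loss = (per_example_loss * keep.detach()).sum() / keep.sum().clamp(min=1.0)
    masked_loss.backward()
    optimizer.step()
    return keep

The per-example loop is written for clarity and costs one extra backward pass per training example; a practical implementation would compute the alignment more efficiently (e.g., per sentence or per group). loss_fn is assumed to be something like torch.nn.CrossEntropyLoss(reduction="none").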


Improving Neural Machine Translation by Bidirectional Training (09/16/2021)
We present a simple and effective pretraining strategy – bidirectional t...

Optimizing Data Usage via Differentiable Rewards (11/22/2019)
To acquire a new skill, humans learn better and faster if a tutor, based...

Robust Neural Machine Translation for Clean and Noisy Speech Transcripts (10/22/2019)
Neural machine translation models have shown to achieve high quality whe...

Masked Adversarial Generation for Neural Machine Translation (09/01/2021)
Attacking Neural Machine Translation models is an inherently combinatori...

LCP-dropout: Compression-based Multiple Subword Segmentation for Neural Machine Translation (02/28/2022)
In this study, we propose a simple and effective preprocessing method fo...