Sequence-Level Training for Non-Autoregressive Neural Machine Translation

06/15/2021
by Chenze Shao, et al.

In recent years, Neural Machine Translation (NMT) has achieved notable results on various translation tasks. However, the word-by-word generation imposed by the autoregressive mechanism leads to high translation latency and restricts the use of NMT in low-latency applications. Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup by generating target words independently and simultaneously. Nevertheless, NAT is still trained with the word-level cross-entropy loss, which is not optimal because, due to the multimodality problem, this loss cannot evaluate NAT outputs properly. In this paper, we propose using sequence-level training objectives to train NAT models, which evaluate the NAT outputs as a whole and correlate well with real translation quality. Firstly, we propose training NAT models to optimize sequence-level evaluation metrics (e.g., BLEU) with several novel reinforcement algorithms customized for NAT, which outperform the conventional method by reducing the variance of gradient estimation. Secondly, we introduce a novel training objective for NAT models that minimizes the Bag-of-Ngrams (BoN) difference between the model output and the reference sentence. The BoN training objective is differentiable and can be computed efficiently without any approximation. Finally, we apply a three-stage training strategy that combines the two methods to train the NAT model. We validate our approach on four translation tasks (WMT14 En↔De, WMT16 En↔Ro); the results show that it largely outperforms NAT baselines and achieves strong performance on all four tasks.
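To make the two objectives more concrete, the sketch below illustrates them in PyTorch at toy scale. The first function is a plain REINFORCE estimator with a mean baseline, not the paper's customized low-variance algorithms: translations are sampled from the NAT decoder's independent per-position distributions, scored with a sequence-level metric supplied as `reward_fn` (e.g., sentence-level BLEU), and the sampled log-probabilities are weighted by the centered reward. The second computes an expected bag-of-bigrams from the same distributions and an L1-style distance to the reference bag; the dense vocabulary-by-vocabulary count matrix is only feasible for toy vocabularies and conveys the idea rather than the paper's efficient computation. All function names and parameters here are illustrative assumptions.

```python
import torch

def reinforce_loss(probs, reference, reward_fn, n_samples=4):
    """Plain REINFORCE with a mean baseline (illustrative sketch, not the
    paper's customized estimators).

    probs:     (T, V) per-position word distributions of a NAT decoder.
    reference: list of reference token ids.
    reward_fn: sequence-level metric, e.g. sentence BLEU (assumed callable).
    """
    dist = torch.distributions.Categorical(probs=probs)
    samples = dist.sample((n_samples,))                      # (n_samples, T)
    rewards = torch.tensor(
        [reward_fn(s.tolist(), reference) for s in samples])
    baseline = rewards.mean()                                # variance reduction
    log_probs = dist.log_prob(samples).sum(dim=-1)           # (n_samples,)
    return -((rewards - baseline) * log_probs).mean()

def expected_bigram_counts(probs):
    """Expected bag-of-bigrams under a NAT model: since positions are
    predicted independently, the expected count of bigram (u, v) is
    sum_t probs[t, u] * probs[t + 1, v]."""
    return torch.einsum("tu,tv->uv", probs[:-1], probs[1:])  # (V, V)

def bon_l1_loss(probs, reference):
    """L1 distance between expected and reference bags of bigrams.
    Dense (V x V) counts: toy-scale illustration only."""
    vocab_size = probs.size(-1)
    ref_counts = torch.zeros(vocab_size, vocab_size)
    for u, v in zip(reference[:-1], reference[1:]):
        ref_counts[u, v] += 1.0
    return (expected_bigram_counts(probs) - ref_counts).abs().sum()

# Toy usage: 5 target positions, vocabulary of 8 tokens.
logits = torch.randn(5, 8, requires_grad=True)
probs = torch.softmax(logits, dim=-1)
reference = [1, 3, 3, 2, 7]
# Hypothetical stand-in reward: fraction of positions matching the reference.
overlap_reward = lambda hyp, ref: sum(h == r for h, r in zip(hyp, ref)) / len(ref)
loss = reinforce_loss(probs, reference, overlap_reward) + bon_l1_loss(probs, reference)
loss.backward()  # the BoN term is differentiable end to end
```

The paper combines the two objectives through a three-stage training strategy applied to a cross-entropy-trained NAT model; the sketch only shows how each loss term could be formed, not how training is scheduled.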

