Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models

09/20/2020
by Subhajit Naskar, et al.

The discrepancy between maximum likelihood estimation (MLE) and task measures such as the BLEU score has been studied before for autoregressive neural machine translation (NMT) and has motivated alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and the task measure, we observe that samples drawn from an MLE-trained NMT model support the desired distribution: some samples have a much higher BLEU score than the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU scores), which yields a re-ranking algorithm over the samples drawn from the NMT model: energy-based re-ranking (EBR). Our EBR consistently improves the performance of Transformer-based NMT: +3 BLEU points on Sinhala-English and +2.0 BLEU points on the IWSLT'17 French-English task.
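At a high level, EBR draws a set of candidate translations from the MLE-trained NMT model and keeps the candidate to which the energy model assigns the lowest energy. The sketch below illustrates this re-ranking loop under that reading of the abstract; sample_fn, energy_fn, and the toy stand-ins are hypothetical placeholders, not the authors' implementation.

# Minimal sketch of energy-based re-ranking (EBR): draw candidate translations
# from an NMT model, then keep the hypothesis with the lowest energy.
# sample_fn and energy_fn are hypothetical stand-ins, not the paper's models.
from typing import Callable, List

def energy_rerank(
    source: str,
    sample_fn: Callable[[str, int], List[str]],  # draws n hypotheses from the NMT model
    energy_fn: Callable[[str, str], float],      # lower energy should track higher BLEU
    num_samples: int = 100,
) -> str:
    """Return the sampled hypothesis with the lowest energy."""
    hypotheses = sample_fn(source, num_samples)
    return min(hypotheses, key=lambda hyp: energy_fn(source, hyp))

# Toy stand-ins so the sketch runs end to end (illustration only).
def toy_sample_fn(source: str, n: int) -> List[str]:
    return ["translation variant %d of: %s" % (i, source) for i in range(n)]

def toy_energy_fn(source: str, hyp: str) -> float:
    return float(len(hyp))  # pretend shorter hypotheses have lower energy

if __name__ == "__main__":
    print(energy_rerank("un exemple de phrase", toy_sample_fn, toy_energy_fn, num_samples=5))

Note that the energy model is trained so that its ranking mimics the task measure (BLEU) on training data; at test time, no reference is needed, since re-ranking uses only the energy scores of the sampled hypotheses.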
