Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses

10/15/2020
by Prathyusha Jwalapuram, et al.

Popular Neural Machine Translation training strategies such as back-translation improve BLEU scores but require large amounts of additional data and training. We introduce a class of conditional generative-discriminative hybrid losses that we use to fine-tune a trained machine translation model. Through a combination of targeted fine-tuning objectives and intuitive re-use of the training data the model has failed to adequately learn from, we improve the performance of both a sentence-level and a contextual model without using any additional data. We target the improvement of pronoun translations through our fine-tuning and evaluate our models on a pronoun benchmark test set. Our sentence-level model shows a 0.5 BLEU improvement on both the WMT14 and the IWSLT13 De-En test sets, while our contextual model achieves the best results, improving from 31.81 to 32 BLEU on the WMT14 De-En test set and from 32.10 to 33.13 on the IWSLT13 De-En test set, with corresponding improvements in pronoun translation. We further show the generalizability of our method by reproducing the improvements on two additional language pairs, Fr-En and Cs-En. Code is available at <https://github.com/ntunlp/pronoun-finetuning>.
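The abstract does not spell out the loss formulation, so the sketch below is only a rough illustration of what a generative-discriminative hybrid fine-tuning loss with failure-driven data re-use could look like in PyTorch. Everything here is an assumption: `hybrid_loss`, `select_hard_examples`, `PRONOUN_IDS`, and `lambda_disc` are hypothetical names and choices, not the paper's actual method (see the linked repository for that).

```python
# Hypothetical sketch, NOT the paper's implementation: a generative term
# (token-level cross-entropy over the full target) mixed with a
# discriminative term that concentrates extra penalty on pronoun positions.
import torch
import torch.nn.functional as F

PRONOUN_IDS = {17, 42, 99}  # hypothetical vocabulary ids of target-side pronouns


def hybrid_loss(logits, targets, pad_id=0, lambda_disc=1.0):
    """logits: (batch, seq_len, vocab); targets: (batch, seq_len)."""
    vocab = logits.size(-1)
    # Generative term: standard NLL over all non-pad target tokens.
    token_nll = F.cross_entropy(
        logits.view(-1, vocab), targets.view(-1),
        ignore_index=pad_id, reduction="none",
    ).view_as(targets)
    non_pad = (targets != pad_id).float()
    gen_loss = (token_nll * non_pad).sum() / non_pad.sum().clamp(min=1)

    # Discriminative term: the same NLL restricted to pronoun positions,
    # pushing extra probability mass toward the reference pronoun.
    pronoun_mask = torch.zeros_like(non_pad)
    for pid in PRONOUN_IDS:
        pronoun_mask += (targets == pid).float()
    disc_loss = (token_nll * pronoun_mask).sum() / pronoun_mask.sum().clamp(min=1)

    return gen_loss + lambda_disc * disc_loss


def select_hard_examples(per_sentence_nll, threshold):
    """Hypothetical re-selection step: keep the indices of training sentences
    the trained model still assigns high loss to, i.e. the data it has
    failed to adequately learn from, for another pass of fine-tuning."""
    return [i for i, nll in enumerate(per_sentence_nll) if nll > threshold]


if __name__ == "__main__":
    # Toy usage: random "decoder outputs" for a batch of 2 sentences.
    logits = torch.randn(2, 5, 1000, requires_grad=True)
    targets = torch.randint(1, 1000, (2, 5))  # ids > 0, so no pad tokens
    hybrid_loss(logits, targets).backward()
```

The design intuition, as described in the abstract, is that both ingredients re-use the existing training corpus: the discriminative weighting targets the pronoun errors, and the selection step recycles under-learned sentences, so no additional data is required.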
