Domain Adaptive Inference for Neural Machine Translation

06/02/2019
by Danielle Saunders, et al.

We investigate adaptive ensemble weighting for Neural Machine Translation (NMT), addressing the problem of improving performance on a new, potentially unknown domain without sacrificing performance on the original domain. We adapt sequentially across two Spanish-English and three English-German tasks, comparing unregularized fine-tuning, L2 regularization, and Elastic Weight Consolidation (EWC). We then present a novel scheme for adaptive NMT ensemble decoding that extends Bayesian Interpolation with source information, and show strong improvements across test domains without access to the domain label.
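The abstract contrasts unregularized fine-tuning with L2 and EWC regularization but gives no formulas. As a minimal sketch (not the paper's implementation), EWC adds a quadratic penalty anchoring each parameter to its pre-adaptation value, weighted by a diagonal Fisher information estimate; the function and variable names below are hypothetical:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam):
    """EWC-style regularizer: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    params, old_params, fisher: dicts mapping parameter names to arrays,
    where old_params holds the pre-adaptation (original-domain) values and
    fisher holds a diagonal Fisher information estimate per parameter.
    """
    total = 0.0
    for name in params:
        diff = params[name] - old_params[name]
        total += np.sum(fisher[name] * diff ** 2)
    return 0.5 * lam * total
```

Setting every Fisher entry to 1 recovers plain L2 regularization toward the original parameters, which is one way to view the comparison the abstract describes.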
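The adaptive ensemble decoding scheme is only named, not specified, in the abstract. A minimal sketch of the Bayesian-interpolation idea, assuming K domain-specific models: at each decoding step, mix the models' next-token distributions under a posterior over models, then update that posterior from the likelihood each model assigned to the emitted token. All names here are illustrative, not the paper's code:

```python
import numpy as np

def adaptive_ensemble_step(model_probs, log_weights):
    """Mix K next-token distributions under the current model posterior.

    model_probs: (K, V) array; row k is model k's next-token distribution.
    log_weights: (K,) unnormalized log posterior over models/domains.
    Returns the (V,) combined distribution and the normalized weights.
    """
    w = np.exp(log_weights - log_weights.max())  # stable softmax
    w /= w.sum()
    return w @ model_probs, w

def update_log_weights(log_weights, model_probs, token):
    """Bayesian update: posterior is proportional to prior times the
    likelihood each model gave the token actually emitted."""
    return log_weights + np.log(model_probs[:, token] + 1e-12)
```

As decoding proceeds, models that explain the observed tokens well gain weight, so the ensemble adapts to the (unlabeled) test domain token by token.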

