Compact Personalized Models for Neural Machine Translation

11/05/2018
by Joern Wuebker, et al.

We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models. We demonstrate that a large proportion of model parameters can be frozen during adaptation with minimal or no reduction in translation quality by encouraging structured sparsity in the set of offset tensors during learning via group lasso regularization. We evaluate this technique for both batch and incremental adaptation across multiple data sets and language pairs. Our system architecture - combining a state-of-the-art self-attentive model with compact domain adaptation - provides high quality personalized machine translation that is both space and time efficient.
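The sketch below is a minimal, hypothetical illustration of the mechanism the abstract describes, not the authors' implementation: base model weights are frozen, each adapted layer learns an additive offset tensor, and a group lasso penalty (one group per offset tensor) pushes whole offsets toward zero so they can be dropped, leaving a compact per-domain model. The `OffsetLinear` class, the toy data, and the hyperparameters are assumptions for illustration.

```python
# Hypothetical sketch of parameter-offset adaptation with group lasso
# regularization: frozen base weights + trainable per-domain offset tensors.
import torch
import torch.nn as nn


class OffsetLinear(nn.Module):
    """Linear layer with a frozen base weight and a trainable additive offset."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.weight = base.weight.detach()                     # frozen base parameters
        self.bias = base.bias.detach() if base.bias is not None else None
        self.offset = nn.Parameter(torch.zeros_like(self.weight))  # adapted part

    def forward(self, x):
        return nn.functional.linear(x, self.weight + self.offset, self.bias)


def group_lasso(offsets, lam=1e-3):
    """Group lasso penalty: sum of L2 norms, one group per offset tensor."""
    return lam * sum(o.norm(p=2) for o in offsets)


# Toy usage with a single hypothetical layer and random in-domain batch.
layer = OffsetLinear(nn.Linear(512, 512))
x, y = torch.randn(8, 512), torch.randn(8, 512)

opt = torch.optim.SGD([layer.offset], lr=0.1)
loss = nn.functional.mse_loss(layer(x), y) + group_lasso([layer.offset])
loss.backward()
opt.step()
```

After adaptation, offset tensors whose norm is at or near zero can be pruned, so only a small set of surviving offsets needs to be stored per domain alongside the shared base model, which is the source of the space efficiency claimed above.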


