Off-the-Shelf Unsupervised NMT

11/06/2018
by Chris Hokamp et al.

We frame unsupervised machine translation (MT) in the context of multi-task learning (MTL), combining insights from both lines of research. We leverage off-the-shelf neural MT architectures to train unsupervised MT models with no parallel data and show that such models can achieve reasonably good performance, competitive with models purpose-built for unsupervised MT. Finally, we propose improvements that allow us to apply our models to English-Turkish, a truly low-resource language pair.
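
The abstract describes the approach only at a high level, but the standard way to cast unsupervised MT as multi-task learning is to train one shared sequence-to-sequence model on several monolingual objectives at once: denoising autoencoding in each language plus on-the-fly back-translation, with a target-language tag telling the decoder which task it is performing. The sketch below illustrates that recipe in PyTorch; the toy corpora, the tiny GRU model standing in for an "off-the-shelf" NMT architecture, the noise function, and all hyperparameters are illustrative assumptions, not the paper's released code.

```python
# A minimal, hypothetical sketch of unsupervised MT as multi-task learning:
# one shared seq2seq model, monolingual data only, trained on denoising
# autoencoding in each language plus on-the-fly back-translation.
import random

import torch
import torch.nn as nn

PAD, EOS = 0, 1
TAGS = {"en": 2, "tr": 3}  # target-language tags steer the shared decoder
VOCAB = 50

def noisy(tokens, drop=0.1, window=3):
    """Standard denoising-AE corruption: word dropout plus local shuffling."""
    kept = [t for t in tokens if random.random() > drop] or tokens[:1]
    keys = [i + random.uniform(0, window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

def pad_batch(seqs):
    m = max(len(s) for s in seqs)
    return torch.tensor([s + [PAD] * (m - len(s)) for s in seqs])

class Seq2Seq(nn.Module):
    """Toy stand-in for any off-the-shelf encoder-decoder NMT architecture."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim, padding_idx=PAD)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.emb(src))            # encode the source sentence
        dec_out, _ = self.dec(self.emb(tgt_in), h)
        return self.out(dec_out)

    @torch.no_grad()
    def translate(self, src, tag, max_len=8):
        """Greedy decoding; synthesizes sources for back-translation."""
        _, h = self.enc(self.emb(src))
        tok = torch.full((src.size(0), 1), tag, dtype=torch.long)
        steps = []
        for _ in range(max_len):
            dec_out, h = self.dec(self.emb(tok), h)
            tok = self.out(dec_out).argmax(-1)
            steps.append(tok)
        return torch.cat(steps, dim=1)

def task_loss(model, loss_fn, src_seqs, tgt_seqs, tag):
    """One supervised-style objective built from monolingual data only."""
    logits = model(pad_batch(src_seqs), pad_batch([[tag] + t for t in tgt_seqs]))
    gold = pad_batch([t + [EOS] for t in tgt_seqs])
    return loss_fn(logits.reshape(-1, VOCAB), gold.reshape(-1))

random.seed(0); torch.manual_seed(0)
# Disjoint token ranges crudely mimic two separate monolingual corpora.
mono = {"en": [[random.randrange(4, 27) for _ in range(6)] for _ in range(64)],
        "tr": [[random.randrange(27, 50) for _ in range(6)] for _ in range(64)]}
model = Seq2Seq(VOCAB)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for it in range(100):
    opt.zero_grad()
    total = 0.0
    for lang, other in (("en", "tr"), ("tr", "en")):
        batch = random.sample(mono[lang], 8)
        # Task 1: denoising autoencoding -- reconstruct from a corrupted copy.
        total = total + task_loss(model, loss_fn,
                                  [noisy(s) for s in batch], batch, TAGS[lang])
        # Task 2: back-translation -- translate into the other language with
        # the current model, then learn to map the synthetic text back.
        synth = model.translate(pad_batch(batch), TAGS[other]).tolist()
        total = total + task_loss(model, loss_fn, synth, batch, TAGS[lang])
    total.backward()
    opt.step()
    if it % 20 == 0:
        print(f"iter {it:3d}  multi-task loss {total.item():.3f}")
```

Because every objective reduces to ordinary sequence-to-sequence training, an off-the-shelf NMT toolkit can in principle be reused unchanged; only the data pipeline (corruption and synthetic back-translation) differs from supervised training.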


