Building Machine Translation Systems for the Next Thousand Languages

05/09/2022
by Ankur Bapna et al.

In this paper we share findings from our effort to build practical machine translation (MT) systems capable of translating between more than one thousand languages. We describe results in three research domains: (i) building clean, web-mined datasets for 1500+ languages by leveraging semi-supervised pre-training for language identification and developing data-driven filtering techniques; (ii) developing practical MT models for under-served languages by leveraging massively multilingual models trained with supervised parallel data for over 100 high-resource languages and monolingual datasets for an additional 1000+ languages; and (iii) studying the limitations of evaluation metrics for these languages and conducting qualitative analysis of the outputs of our MT models, highlighting several frequent error modes of such models. We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can address the weaknesses of massively multilingual models in data-sparse settings.
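
As a concrete illustration of the data-cleaning step in (i), the sketch below shows a confidence-thresholded language-identification filter applied to web-crawled text. This is a minimal sketch, not the paper's actual pipeline: filter_corpus, its parameters, and the stub predictor are illustrative names, and a real system would plug a trained (e.g. semi-supervised) LangID classifier into the predict argument.

    from typing import Callable, Iterable, Iterator, Tuple

    def filter_corpus(
        sentences: Iterable[str],
        target_lang: str,
        predict: Callable[[str], Tuple[str, float]],  # text -> (lang code, confidence)
        min_confidence: float = 0.9,
        min_length: int = 20,
    ) -> Iterator[str]:
        """Yield deduplicated sentences that the LangID model assigns to
        target_lang with confidence >= min_confidence."""
        seen = set()
        for line in sentences:
            line = line.strip()
            # Cheap filters first: drop very short lines and exact duplicates.
            if len(line) < min_length or line in seen:
                continue
            lang, confidence = predict(line)
            if lang == target_lang and confidence >= min_confidence:
                seen.add(line)
                yield line

    if __name__ == "__main__":
        # Stand-in predictor for demonstration only; a real pipeline would
        # call a trained LangID model here.
        stub = lambda text: ("gn", 0.95)
        crawled = [
            "Mba'éichapa, che réra Ana.",
            "short",
            "Mba'éichapa, che réra Ana.",  # exact duplicate, filtered out
        ]
        print(list(filter_corpus(crawled, "gn", stub)))

Thresholding on classifier confidence rather than on the top label alone is what makes such a filter tunable per language; low-resource languages, which LangID models often confuse with related higher-resource languages, typically call for stricter thresholds plus the additional data-driven filters the paper develops.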

Related research

10/15/2021  Breaking Down Multilingual Machine Translation
06/11/2023  Neural Machine Translation for the Indigenous Languages of the Americas: An Introduction
12/20/2022  IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for Indian Languages
05/13/2022  Beyond General Purpose Machine Translation: The Need for Context-specific Empirical Research to Design for Appropriate User Trust
05/17/2023  A Survey on Zero Pronoun Translation
10/27/2022  Too Brittle To Touch: Comparing the Stability of Quantization and Distillation Towards Developing Lightweight Low-Resource MT Models
