OCR Improves Machine Translation for Low-Resource Languages

02/27/2022
by Oana Ignat, et al.

We investigate the performance of current OCR systems on low-resource languages and low-resource scripts. We introduce and make publicly available a novel benchmark, OCR4MT, consisting of real and synthetic data, enriched with noise, for 60 low-resource languages in low-resource scripts. We evaluate state-of-the-art OCR systems on our benchmark and analyse the most common errors. We show that OCR-derived monolingual data is a valuable resource that can improve the performance of machine translation models when used in backtranslation. We then perform an ablation study to investigate how OCR errors affect machine translation performance and to determine the minimum level of OCR quality needed for the monolingual data to be useful for machine translation.
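The pipeline described above hinges on deciding which OCR'd sentences are clean enough to feed into backtranslation. A minimal sketch of one such quality gate is below; it is not the paper's code, and the character-error-rate (CER) threshold of 0.10, the function names, and the toy data are all illustrative assumptions rather than values from the ablation study.

```python
# Sketch (not the paper's implementation): filter OCR output by
# character error rate (CER) against a clean reference before using
# it as monolingual data for backtranslation. The 0.10 threshold is
# an assumed value for illustration only.

def char_error_rate(hyp: str, ref: str) -> float:
    """Levenshtein edit distance between hypothesis and reference,
    normalised by reference length (standard CER)."""
    m, n = len(hyp), len(ref)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / max(n, 1)

def filter_for_backtranslation(ocr_pairs, max_cer=0.10):
    """Keep OCR hypotheses whose CER against the reference is low
    enough to serve as backtranslation monolingual data."""
    return [hyp for hyp, ref in ocr_pairs
            if char_error_rate(hyp, ref) <= max_cer]

pairs = [
    ("the quick brown fox", "the quick brown fox"),  # clean OCR, kept
    ("the qu1ck br0wn f0x", "the quick brown fox"),  # noisy OCR, dropped
]
print(filter_for_backtranslation(pairs))  # → ['the quick brown fox']
```

In practice the reference transcription is unavailable for real scans, so a deployed filter would substitute a proxy signal such as OCR confidence scores or language-model perplexity; the CER formulation here simply makes the quality-threshold idea concrete.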


