Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature

10/25/2022
by Katherine Thai, et al.

Literary translation is a culturally significant task, but it is bottlenecked by the small number of qualified literary translators relative to the many untranslated works published around the world. Machine translation (MT) holds potential to complement the work of human translators by improving both training procedures and their overall efficiency. Literary translation is less constrained than more traditional MT settings since translators must balance meaning equivalence, readability, and critical interpretability in the target language. This property, along with the complex discourse-level context present in literary texts, also makes literary MT more challenging to computationally model and evaluate. To explore this task, we collect a dataset (Par3) of non-English language novels in the public domain, each aligned at the paragraph level to both human and automatic English translations. Using Par3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84%, while existing automatic MT metrics do not correlate with those preferences. The experts note that MT outputs contain not only mistranslations, but also discourse-disrupting errors and stylistic inconsistencies. To address these problems, we train a post-editing model whose output is preferred over normal MT output at a rate of 69%. We publicly release our dataset and code at https://github.com/katherinethai/par3/ to spur future research into literary MT.

