A Test Suite and Manual Evaluation of Document-Level NMT at WMT19

08/08/2019
by   Kateřina Rysová, et al.

As the quality of machine translation rises and neural machine translation (NMT) moves from sentence-level to document-level translation, evaluating the output of translation systems is becoming increasingly difficult. We provide a test suite for WMT19 aimed at assessing discourse phenomena in MT systems participating in the News Translation Task. We have manually checked the outputs and identified the types of translation errors that are relevant to document-level translation.


Related research:

- Discourse Cohesion Evaluation for Document-Level Neural Machine Translation (08/19/2022)
- A Survey on Document-level Machine Translation: Methods and Evaluation (12/18/2019)
- How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs (12/14/2016)
- SAO WMT19 Test Suite: Machine Translation of Audit Reports (09/04/2019)
- Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite (08/31/2019)
- Analysing Coreference in Transformer Outputs (11/04/2019)
- Improving Long Context Document-Level Machine Translation (06/08/2023)
