
Detecting over/under-translation errors for determining adequacy in human translations

by Prabhakar Gupta et al.

We present a novel approach to detecting over- and under-translations (OT/UT) as part of adequacy error checks in translation evaluation. We do not restrict ourselves to machine translation (MT) outputs and specifically target applications with a human-generated translation pipeline. The goal of our system is to identify OT/UT errors in human-translated video subtitles with high error recall. We achieve this without reference translations by learning a model on synthesized training data. We compare various classification networks trained on embeddings from a pre-trained language model, with our best hybrid network, a GRU + CNN, achieving 89.3% recall on human-annotated evaluation data in 8 languages.
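The abstract names a hybrid GRU + CNN classifier operating on pre-trained embeddings of a source/translation pair, but gives no architectural details. The sketch below is only an illustration of what such a hybrid can look like: all dimensions, the pooling scheme, and the random weights are assumptions, not the paper's actual model. It runs a minimal GRU over each side's token embeddings, extracts convolutional n-gram features, and combines both into one adequacy score.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, F, K = 16, 8, 6, 3  # embed dim, GRU hidden size, conv filters, kernel width (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_last_state(X, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a minimal GRU over token embeddings X of shape (T, D); return final hidden state."""
    h = np.zeros(H)
    for x in X:
        z = sigmoid(Wz @ x + Uz @ h)          # update gate
        r = sigmoid(Wr @ x + Ur @ h)          # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
    return h

def conv_max_pool(X, filters):
    """1-D convolution over time with ReLU, then max-pool over positions."""
    T = X.shape[0]
    windows = np.stack([X[t:t + K].ravel() for t in range(T - K + 1)])  # (T-K+1, K*D)
    acts = np.maximum(windows @ filters.T, 0.0)                         # (T-K+1, F)
    return acts.max(axis=0)

def make_gru_params():
    return dict(
        Wz=rng.normal(size=(H, D)), Uz=rng.normal(size=(H, H)),
        Wr=rng.normal(size=(H, D)), Ur=rng.normal(size=(H, H)),
        Wh=rng.normal(size=(H, D)), Uh=rng.normal(size=(H, H)),
    )

def score_pair(src_emb, tgt_emb, gru_p, conv_f, w, b):
    """Concatenate GRU + CNN features of both sides; logistic score for an OT/UT error."""
    feats = np.concatenate([
        gru_last_state(src_emb, **gru_p), conv_max_pool(src_emb, conv_f),
        gru_last_state(tgt_emb, **gru_p), conv_max_pool(tgt_emb, conv_f),
    ])
    return sigmoid(w @ feats + b)

# Toy usage: random stand-ins for pre-trained token embeddings of one subtitle pair.
src = rng.normal(size=(7, D))   # 7 source tokens
tgt = rng.normal(size=(4, D))   # 4 target tokens (shorter target: a possible under-translation)
gru_p = make_gru_params()
conv_f = rng.normal(size=(F, K * D))
w = rng.normal(size=2 * (H + F))
score = score_pair(src, tgt, gru_p, conv_f, w, 0.0)
```

In the real system these weights would be trained on the synthesized OT/UT data the abstract describes, and the threshold on the score tuned for high error recall.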


Analysing Coreference in Transformer Outputs

We analyse coreference phenomena in three neural machine translation sys...

Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature

Literary translation is a culturally significant task, but it is bottlen...

Automatic Correction of Human Translations

We introduce translation error correction (TEC), the task of automatical...

An Overview on Machine Translation Evaluation

Since the 1950s, machine translation (MT) has become one of the importan...

Removing Biases from Trainable MT Metrics by Using Self-Training

Most trainable machine translation (MT) metrics train their weights on h...

DeepSubQE: Quality estimation for subtitle translations

Quality estimation (QE) for tasks involving language data is hard owing ...

Automated Evaluation of Out-of-Context Errors

We present a new approach to evaluate computational models for the task ...