DeepAI

Detecting over/under-translation errors for determining adequacy in human translations

04/01/2021
by   Prabhakar Gupta, et al.

We present a novel approach to detecting over- and under-translations (OT/UT) as part of adequacy error checks in translation evaluation. We do not restrict ourselves to machine translation (MT) outputs and specifically target applications with a human-generated translation pipeline. The goal of our system is to identify OT/UT errors in human-translated video subtitles with high error recall. We achieve this without reference translations by learning a model on synthesized training data. We compare various classification networks trained on embeddings from a pre-trained language model, with our best hybrid network of GRU + CNN achieving 89.3% recall on human-annotated evaluation data in 8 languages.
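Since the system learns without reference translations, training examples must be synthesized from clean parallel subtitle pairs. The sketch below is an assumed illustration of this idea, not the authors' exact recipe: it fabricates an under-translation (UT) by deleting a span of the target and an over-translation (OT) by duplicating a span, yielding labeled pairs a classifier could train on.

```python
import random

def synthesize_ot_ut(src: str, tgt: str, rng: random.Random):
    """Given a clean (source, target) subtitle pair, return labeled
    examples: the original pair ("OK"), an under-translation ("UT")
    made by dropping a contiguous chunk of target words, and an
    over-translation ("OT") made by duplicating a chunk.

    Hypothetical helper for illustration only; the paper does not
    publish its synthesis procedure in this abstract.
    """
    words = tgt.split()
    cut = max(1, len(words) // 3)              # size of span to perturb
    start = rng.randrange(0, len(words) - cut + 1)
    # Under-translation: part of the source content goes untranslated.
    ut = " ".join(words[:start] + words[start + cut:])
    # Over-translation: the target contains repeated/extra content.
    ot = " ".join(words + words[start:start + cut])
    return [(src, tgt, "OK"), (src, ut, "UT"), (src, ot, "OT")]

examples = synthesize_ot_ut(
    "the quick brown fox jumps over the lazy dog",
    "le renard brun rapide saute par-dessus le chien paresseux",
    random.Random(0),
)
for s, t, label in examples:
    print(label, "->", t)
```

A real pipeline would then embed each (source, target-variant) pair with a pre-trained multilingual language model and feed the embeddings to the classifier (here, the hybrid GRU + CNN network).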

