Semi-automatic Simultaneous Interpreting Quality Evaluation

11/12/2016
by   Xiaojun Zhang, et al.

The growing demand for interpreting calls for a more objective and automatic quality measurement. We hold the basic idea that 'translating means translating meaning', so we can assess interpretation quality by comparing the meaning of the interpreting output with that of the source input. Specifically, we propose a translation unit, a 'chunk' called a Frame, drawn from frame semantics, together with its components, Frame Elements (FEs), drawn from FrameNet, and measure the matching rate of frames and FEs between target and source texts. A case study in this paper verifies the usability of this semi-automatic graded semantic-scoring measurement for human simultaneous interpreting and shows how to use frame and FE matches to score. Experimental results show that the semantic-scoring metrics have a significant correlation coefficient with human judgment.
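To illustrate the matching idea described above, here is a minimal sketch in Python. It assumes frame annotations are already available as a mapping from frame names to their FE sets; the equal weighting of frame and FE match rates is an illustrative assumption, not the paper's actual scoring formula.

```python
# Hypothetical sketch: score an interpretation by the proportion of
# source frames and frame elements (FEs) preserved in the target.
# Inputs are assumed pre-annotated; weights are illustrative only.

def semantic_score(source_frames, target_frames, frame_weight=0.5):
    """source_frames / target_frames: dict mapping frame name -> set of FEs."""
    if not source_frames:
        return 0.0
    # Frame match rate: fraction of source frames found in the target.
    matched_frames = set(source_frames) & set(target_frames)
    frame_rate = len(matched_frames) / len(source_frames)
    # FE match rate: fraction of source FEs preserved within matched frames.
    total_fes = sum(len(fes) for fes in source_frames.values())
    matched_fes = sum(
        len(source_frames[f] & target_frames[f]) for f in matched_frames
    )
    fe_rate = matched_fes / total_fes if total_fes else 0.0
    # Combine the two rates into a single graded score.
    return frame_weight * frame_rate + (1 - frame_weight) * fe_rate

src = {"Motion": {"Theme", "Goal"}, "Statement": {"Speaker", "Message"}}
tgt = {"Motion": {"Theme"}, "Request": {"Speaker"}}
print(semantic_score(src, tgt))  # prints 0.375
```

Here one of two source frames is matched (rate 0.5) and one of four source FEs is preserved (rate 0.25), so the combined score is 0.375.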


