Enhanced Bilingual Evaluation Understudy

09/30/2015
by Krzysztof Wołk, et al.

Our research extends the Bilingual Evaluation Understudy (BLEU) technique for evaluating statistical machine translation, making it more adjustable and robust, with the goal of bringing it closer to human evaluation. We perform experiments comparing our technique against the primary existing evaluation methods, and we describe the improvements it makes over them as well as its correlation with them. When human translators translate a text, they often use synonyms, different word order or style, and other similar variations. We propose an SMT evaluation technique that enhances the BLEU metric to account for such variations.
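The abstract does not spell out the enhancement itself, so as a rough illustration of the idea only, the sketch below adds a synonym-normalization step in front of an ordinary sentence-level BLEU computation. The SYNONYMS table, the function names, and the normalization strategy are assumptions made for this demo, not the authors' actual method.

```python
from collections import Counter
import math

# Tiny illustrative synonym table: each word maps to a canonical form so
# that synonyms count as n-gram matches. A real system would draw on a
# resource such as WordNet; this table is only an assumption for the demo.
SYNONYMS = {
    "car": "automobile",
    "auto": "automobile",
    "quick": "fast",
    "rapid": "fast",
}

def normalize(tokens, use_synonyms=True):
    """Lowercase tokens and optionally map them to synonym-class representatives."""
    tokens = [t.lower() for t in tokens]
    if use_synonyms:
        tokens = [SYNONYMS.get(t, t) for t in tokens]
    return tokens

def clipped_precision(candidate, reference, n, use_synonyms):
    """Standard BLEU clipped n-gram precision, optionally synonym-normalized."""
    cand = normalize(candidate, use_synonyms)
    ref = normalize(reference, use_synonyms)
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    total = sum(cand_ngrams.values())
    if total == 0:
        return 0.0
    clipped = sum(min(count, ref_ngrams[ng]) for ng, count in cand_ngrams.items())
    return clipped / total

def bleu(candidate, reference, max_n=4, use_synonyms=True):
    """Sentence-level BLEU: geometric mean of 1..max_n precisions times brevity penalty."""
    precisions = [clipped_precision(candidate, reference, n, use_synonyms)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity_penalty = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return brevity_penalty * geo_mean

if __name__ == "__main__":
    reference = "the quick car crossed the old bridge".split()
    hypothesis = "the rapid auto crossed the old bridge".split()
    print(f"plain BLEU:         {bleu(hypothesis, reference, use_synonyms=False):.3f}")
    print(f"synonym-aware BLEU: {bleu(hypothesis, reference, use_synonyms=True):.3f}")
```

In this toy example the hypothesis differs from the reference only by synonyms, so plain BLEU penalizes it (roughly 0.43) while the synonym-normalized variant scores a perfect match. Handling the word-order and style variation mentioned in the abstract would need further extensions, for example paraphrase tables or order-tolerant matching.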
