Unbabel's Participation in the WMT20 Metrics Shared Task

10/29/2020
by Ricardo Rei, et al.

We present the contribution of the Unbabel team to the WMT 2020 Shared Task on Metrics. We intend to participate in the segment-level, document-level and system-level tracks for all language pairs, as well as the 'QE as a Metric' track. Accordingly, we illustrate the results of our models on these tracks using the test sets from the previous year. Our submissions build upon the recently proposed COMET framework: we train several estimator models to regress on different human-generated quality scores, as well as a novel ranking model trained on relative ranks obtained from Direct Assessments. We also propose a simple technique for converting segment-level predictions into a document-level score. Overall, our systems achieve strong results for all language pairs on the previous test sets and, in many cases, set a new state of the art.
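The abstract does not spell out how segment-level predictions are turned into a document-level score. A minimal sketch of one plausible approach, assuming the document score is a length-weighted average of the segment scores (the weighting scheme, the function name `document_score`, and its parameters are illustrative assumptions, not the paper's exact method):

```python
def document_score(segment_scores, segment_lengths):
    """Aggregate segment-level MT quality scores into one
    document-level score via a length-weighted average, so that
    longer segments contribute proportionally more."""
    if len(segment_scores) != len(segment_lengths):
        raise ValueError("need one length per segment score")
    total = sum(segment_lengths)
    if total == 0:
        raise ValueError("total segment length must be positive")
    return sum(s * n for s, n in zip(segment_scores, segment_lengths)) / total


# Example: three translated segments of 10, 5 and 25 tokens.
print(document_score([0.8, 0.2, 0.6], [10, 5, 25]))  # 0.6
```

A plain (unweighted) mean would give short and long segments equal influence; weighting by token count keeps the document score consistent with corpus-level averaging.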


