Evaluation Discrepancy Discovery: A Sentence Compression Case-study

01/22/2021
by   Yevgeniy Puzikov, et al.

Reliable evaluation protocols are of utmost importance for reproducible NLP research. In this work, we show that sometimes neither automatic metrics nor conventional human evaluation is sufficient to draw conclusions about system performance. Using sentence compression as an example task, we demonstrate how a system can game a well-established dataset to achieve state-of-the-art results. In contrast with previous work, which reported a correlation between human judgements and metric scores, our manual analysis of state-of-the-art system outputs shows that high metric scores may indicate only a better fit to the data, not better outputs as perceived by humans.


Related Research

12/20/2022
BMX: Boosting Machine Translation Metrics with Explainability
State-of-the-art machine translation evaluation metrics are based on bla...

02/01/2019
Human acceptability judgements for extractive sentence compression
Recent approaches to English-language sentence compression rely on paral...

09/04/2017
Learning Neural Word Salience Scores
Measuring the salience of a word is an essential step in numerous NLP ta...

06/29/2021
Scientific Credibility of Machine Translation Research: A Meta-Evaluation of 769 Papers
This paper presents the first large-scale meta-evaluation of machine tra...

07/08/2019
Barriers towards no-reference metrics application to compressed video quality analysis: on the example of no-reference metric NIQE
This paper analyses the application of no-reference metric NIQE to the t...

08/12/2016
Measuring the State of the Art of Automated Pathway Curation Using Graph Algorithms - A Case Study of the mTOR Pathway
This paper evaluates the difference between human pathway curation and c...

10/12/2022
Better Smatch = Better Parser? AMR evaluation is not so simple anymore
Recently, astonishing advances have been observed in AMR parsing, as mea...
