Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text

02/14/2022
by Sebastian Gehrmann, et al.

Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where their outputs can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences to assess how well they already follow these suggestions, and identify which areas require more drastic changes to the status quo.

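The abstract's point about surface-level features is easiest to see with a toy metric. Classic automatic scores such as BLEU and ROUGE reduce quality to n-gram overlap with a reference, a signal that stops discriminating between systems once their outputs are uniformly fluent. The Python sketch below is purely illustrative and not code from the paper; the function name and example strings are invented for the purpose of this example.

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    """Fraction of candidate n-grams that also occur in the reference.

    Illustrative only: this is the kind of surface-level overlap that
    BLEU/ROUGE-style metrics build on, not the paper's own code.
    """
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clipped counts: each candidate n-gram is credited at most as often
    # as it appears in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# A meaning-preserving paraphrase scores far below an exact copy,
# even though both are fluent and adequate outputs.
print(ngram_precision("the cat sat on the mat", "the cat sat on the mat"))        # 1.0
print(ngram_precision("a cat was sitting on the mat", "the cat sat on the mat"))  # ~0.33
```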
Related research

Evaluation of Text Generation: A Survey (06/26/2020)
The paper surveys evaluation methods of natural language generation (NLG...

Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation (03/29/2017)
This paper surveys the current state of the art in Natural Language Gene...

All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text (06/30/2021)
Human evaluations are typically considered the gold standard in natural ...

Cluster-based Evaluation of Automatically Generated Text (05/31/2022)
While probabilistic language generators have improved dramatically over ...

A Major Obstacle for NLP Research: Let's Talk about Time Allocation! (11/30/2022)
The field of natural language processing (NLP) has grown over the last f...

Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics (03/29/2022)
Current practices in metric evaluation focus on one single dataset, e.g....

Not All Claims are Created Equal: Choosing the Right Approach to Assess Your Hypotheses (11/10/2019)
Empirical research in Natural Language Processing (NLP) has adopted a na...
