All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text

06/30/2021
by Elizabeth Clark, et al.

Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing non-experts' ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes). We find that, without training, evaluators distinguished between GPT3- and human-authored text at random chance level. We explore three approaches for quickly training evaluators to better identify GPT3-authored text (detailed instructions, annotated examples, and paired examples) and find that while evaluators' accuracy improved up to 55%, it did not significantly improve across the three text domains. Given the inconsistent results across text domains and the often contradictory reasons evaluators gave for their judgments, we examine the role untrained human evaluations play in NLG evaluation and provide recommendations to NLG researchers for improving human evaluations of text generated from state-of-the-art models.

Related research

11/08/2020
A Gold Standard Methodology for Evaluating Accuracy in Data-To-Text Systems
Most Natural Language Generation systems need to produce accurate texts....

10/06/2020
RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text
In recent years, large neural networks for natural language generation (...

07/06/2023
Style Over Substance: Evaluation Biases for Large Language Models
As large language models (LLMs) continue to advance, accurately and comp...

02/14/2022
Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text
Evaluation practices in natural language generation (NLG) have many know...

10/07/2020
What Can We Learn from Collective Human Opinions on Natural Language Inference Data?
Despite the subjective nature of many NLP tasks, most NLU evaluations ha...

07/28/2021
Investigating Text Simplification Evaluation
Modern text simplification (TS) heavily relies on the availability of go...

03/08/2022
Reproducible Subjective Evaluation
Human perceptual studies are the gold standard for the evaluation of man...
