The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation

09/14/2021
by   Marzena Karpinska, et al.

Recent text generation research has increasingly focused on open-ended domains such as story and poetry generation. Because models built for such tasks are difficult to evaluate automatically, most researchers in the space justify their modeling choices by collecting crowdsourced human judgments of text quality (e.g., Likert scores of coherence or grammaticality) from Amazon Mechanical Turk (AMT). In this paper, we first conduct a survey of 45 open-ended text generation papers and find that the vast majority of them fail to report crucial details about their AMT tasks, hindering reproducibility. We then run a series of story evaluation experiments with both AMT workers and English teachers and discover that even with strict qualification filters, AMT workers (unlike teachers) fail to distinguish between model-generated text and human-generated references. We show that AMT worker judgments improve when they are shown model-generated output alongside human-generated references, which enables the workers to better calibrate their ratings. Finally, interviews with the English teachers provide deeper insights into the challenges of the evaluation process, particularly when rating model-generated text.
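The paper's core comparison — whether raters assign different Likert scores to model-generated text and human-written references — can be illustrated with a minimal sketch. The data and the permutation test below are hypothetical and for illustration only; they are not the paper's actual ratings or analysis method.

```python
import random
import statistics

# Hypothetical Likert coherence ratings (1-5) for ten stories:
# one set for human-written references, one for model generations.
# These numbers are invented for illustration, not data from the paper.
human_ref_scores = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
model_gen_scores = [4, 4, 3, 4, 4, 4, 3, 4, 4, 4]

def mean_diff(a, b):
    """Difference in mean Likert rating between the two conditions."""
    return statistics.mean(a) - statistics.mean(b)

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of means.

    Returns an approximate p-value: the fraction of random relabelings
    whose absolute mean difference is at least as large as the observed one.
    A high p-value would mean the raters do not reliably separate the
    two conditions -- the failure mode reported for AMT workers.
    """
    rng = random.Random(seed)
    observed = abs(mean_diff(a, b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean_diff(pooled[:len(a)], pooled[len(a):])) >= observed:
            count += 1
    return count / n_iter

if __name__ == "__main__":
    print(f"mean diff: {mean_diff(human_ref_scores, model_gen_scores):.2f}")
    print(f"p-value:   {permutation_test(human_ref_scores, model_gen_scores):.3f}")
```

A side-by-side setup, as the paper recommends, would instead show each worker the model output next to the human reference, so the two ratings share a common anchor.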
