Do Massively Pretrained Language Models Make Better Storytellers?

09/24/2019
by   Abigail See, et al.

Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks. However, the strength of these models as Natural Language Generators is less clear. Though anecdotal evidence suggests that these models generate better quality text, there has been no detailed study characterizing their generation abilities. In this work, we compare the performance of an extensively pretrained model, OpenAI GPT2-117 (Radford et al., 2019), to a state-of-the-art neural story generation model (Fan et al., 2018). By evaluating the generated text across a wide variety of automatic metrics, we characterize the ways in which pretrained models do, and do not, make better storytellers. We find that although GPT2-117 conditions more strongly on context, is more sensitive to ordering of events, and uses more unusual words, it is just as likely to produce repetitive and under-diverse text when using likelihood-maximizing decoding algorithms.
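The abstract's finding that likelihood-maximizing decoding produces repetitive text can be illustrated with a minimal sketch. The bigram probabilities below are hypothetical toy values (not the paper's models or data): greedy argmax decoding falls into a high-probability loop, while sampling from the full distribution can escape it.

```python
import random

# Hypothetical toy next-token distributions, for illustration only.
# The most probable continuations form a cycle: "the" -> "cat" -> "the" -> ...
bigram = {
    "<s>": {"the": 0.6, "a": 0.4},
    "a":   {"dog": 1.0},
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"the": 0.7, "sat": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def greedy_decode(start="<s>", max_len=8):
    """Likelihood-maximizing decoding: always pick the argmax next token."""
    out, tok = [], start
    for _ in range(max_len):
        tok = max(bigram[tok], key=bigram[tok].get)
        if tok == "<end>":
            break
        out.append(tok)
    return out

def sample_decode(start="<s>", max_len=8, seed=0):
    """Ancestral sampling: draw the next token from the full distribution."""
    rng = random.Random(seed)
    out, tok = [], start
    for _ in range(max_len):
        dist = bigram[tok]
        tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if tok == "<end>":
            break
        out.append(tok)
    return out

print(greedy_decode())  # repeats: ['the', 'cat', 'the', 'cat', ...]
print(sample_decode())  # sampling can break the cycle
```

The same contrast motivates the decoding algorithms the paper compares: greedy and beam search maximize likelihood and degenerate into repetition, whereas sampling-based methods (e.g. top-k) trade some likelihood for diversity.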
