Uniform Complexity for Text Generation

04/11/2022
by   Joseph Marvin Imperial, et al.

Powerful language models such as GPT-2 have shown promising results in tasks such as narrative generation, which can be useful in educational settings. These models, however, should remain consistent with the linguistic properties of the prompts that trigger them. For example, if the reading level of an input prompt is appropriate for lower-level learners (e.g., A2 in the CEFR), then the generated continuation should also reflect that level. We therefore propose the task of uniform complexity for text generation, which serves as a call to make existing language generators uniformly complex with respect to the prompts they are given. Our study surveyed over 160 linguistic properties for evaluating text complexity and found that both humans and GPT-2 models struggle to preserve the complexity of prompts in a narrative generation setting.
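The paper's full set of 160+ linguistic features is not reproduced here, but the core evaluation idea, comparing the complexity of a prompt against that of its generated continuation, can be sketched with a single classic surface-level readability formula. The function names below (`flesch_kincaid_grade`, `complexity_gap`) and the syllable heuristic are illustrative assumptions, not the authors' implementation:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a text.

    Uses a rough vowel-group heuristic for syllable counting;
    one of many possible surface-level complexity measures.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Count runs of vowels as syllables (crude but dependency-free).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)

def complexity_gap(prompt: str, continuation: str) -> float:
    """Absolute grade-level gap; 0.0 would mean perfectly uniform complexity."""
    return abs(flesch_kincaid_grade(prompt) - flesch_kincaid_grade(continuation))
```

A generator that preserved the prompt's reading level would keep `complexity_gap` near zero; the paper's finding is that, under a much richer feature set, both humans and GPT-2 drift away from the prompt's level.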


