
An Empirical Study of Extrapolation in Text Generation with Scalar Control

04/16/2021
by Aashi Jain, et al.

We conduct an empirical evaluation of extrapolation performance when conditioning on scalar control inputs like desired output length, desired edit from an input sentence, and desired sentiment across three text generation tasks. Specifically, we examine a zero-shot setting where models are asked to generalize to ranges of control values not seen during training. We focus on evaluating popular embedding methods for scalar inputs, including both learnable and sinusoidal embeddings, as well as simpler approaches. Surprisingly, our findings indicate that the simplest strategy of using scalar inputs directly, without further encoding, most reliably allows for successful extrapolation.
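To make the compared encodings concrete, here is a minimal sketch of two of the strategies described above: a Transformer-style sinusoidal embedding of a scalar control value versus passing the scalar through directly. The dimension, base, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import math

def sinusoidal_embedding(value, dim=8, base=10000.0):
    """Transformer-style sinusoidal encoding of a scalar control value.

    Each pair of dimensions uses a different frequency, as in positional
    encodings; `dim` and `base` are illustrative choices.
    """
    emb = []
    for i in range(dim // 2):
        freq = 1.0 / (base ** (2 * i / dim))
        emb.append(math.sin(value * freq))
        emb.append(math.cos(value * freq))
    return emb

def raw_scalar(value):
    """The simplest strategy: feed the scalar directly as a single
    input feature, with no further encoding."""
    return [float(value)]
```

For example, `sinusoidal_embedding(20)` maps a desired output length of 20 to an 8-dimensional vector, while `raw_scalar(20)` leaves it as the single feature `[20.0]`; the study's finding is that the latter extrapolates most reliably to control values outside the training range.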

