On Decoding Strategies for Neural Text Generators

03/29/2022
by Gian Wiher, et al.

When generating text from probabilistic models, the chosen decoding strategy has a profound effect on the resulting text. Yet the properties elicited by various decoding strategies do not always transfer across natural language generation tasks. For example, while mode-seeking methods like beam search perform remarkably well for machine translation, they have been observed to produce incoherent and repetitive text in story generation. Despite such observations, the effectiveness of decoding strategies is often assessed only with respect to a single task. In contrast, this work provides a comprehensive analysis of the interaction between language generation tasks and decoding strategies. Specifically, we measure changes in attributes of generated text as a function of both decoding strategy and task, using human and automatic evaluation. Our results reveal both previously observed and surprising findings; for example, the nature of the diversity-quality trade-off in language generation is highly task-specific, and the length bias often attributed to beam search is not constant across tasks.
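To make the contrast between decoding strategies concrete, here is a minimal, self-contained sketch (not from the paper) comparing mode-seeking beam search with ancestral sampling over a toy next-token distribution. The vocabulary, probability table, and all function names are invented for illustration; a real setup would use a neural language model's softmax output in place of the lookup table.

```python
import math
import random

# Hypothetical toy "language model": a fixed next-token distribution.
VOCAB = ["the", "cat", "sat", "mat", "<eos>"]

def next_token_probs(prefix):
    """Return P(next token | last token); a real model conditions on the full prefix."""
    last = prefix[-1] if prefix else "<bos>"
    table = {
        "<bos>": [0.70, 0.15, 0.05, 0.05, 0.05],
        "the":   [0.05, 0.45, 0.05, 0.40, 0.05],
        "cat":   [0.10, 0.05, 0.60, 0.05, 0.20],
        "sat":   [0.55, 0.05, 0.05, 0.05, 0.30],
        "mat":   [0.05, 0.05, 0.05, 0.05, 0.80],
    }
    return dict(zip(VOCAB, table[last]))

def beam_search(beam_size=2, max_len=6):
    """Mode-seeking: keep the `beam_size` highest log-probability prefixes each step."""
    beams = [([], 0.0)]  # (tokens, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "<eos>":
                # Finished hypotheses are carried over unchanged.
                candidates.append((tokens, score))
                continue
            for tok, p in next_token_probs(tokens).items():
                candidates.append((tokens + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams[0]

def ancestral_sample(max_len=6, temperature=1.0):
    """Stochastic: draw each token from the (temperature-scaled) distribution."""
    tokens = []
    for _ in range(max_len):
        probs = next_token_probs(tokens)
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        tokens.append(random.choices(list(probs), weights=weights, k=1)[0])
        if tokens[-1] == "<eos>":
            break
    return tokens

if __name__ == "__main__":
    print("beam search :", beam_search())
    print("sampled     :", ancestral_sample())
```

Running this, beam search deterministically returns the single highest-scoring sequence, and because finished hypotheses stop accumulating negative log-probability terms, it tends to favor shorter outputs, a toy version of the length bias mentioned above. Repeated sampling calls yield varied outputs, illustrating the diversity end of the trade-off.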

Related research

10/06/2020 · If beam search is the answer, what was the question?
Quite surprisingly, exact maximum a posteriori (MAP) decoding of neural ...

03/31/2022 · On the probability-quality paradox in language generation
When generating natural language from neural probabilistic models, high ...

09/17/2019 · BSDAR: Beam Search Decoding with Attention Reward in Neural Keyphrase Generation
This study mainly investigates two decoding problems in neural keyphrase...

11/09/2019 · How Decoding Strategies Affect the Verifiability of Generated Text
Language models are of considerable importance. They are used for pretra...

10/25/2022 · Information Filter upon Diversity-Improved Decoding for Diversity-Faithfulness Tradeoff in NLG
Some Natural Language Generation (NLG) tasks require both faithfulness a...

10/13/2022 · Language Model Decoding as Likelihood-Utility Alignment
A critical component of a successful language generation pipeline is the...

02/14/2023 · The Stable Entropy Hypothesis and Entropy-Aware Decoding: An Analysis and Algorithm for Robust Natural Language Generation
State-of-the-art language generation models can degenerate when applied ...
