How Decoding Strategies Affect the Verifiability of Generated Text

11/09/2019
by Luca Massarelli, et al.

Language models are of considerable importance. They are used for pre-training, fine-tuning, and rescoring in downstream applications, and, as is, as a test-bed and benchmark for progress in natural language understanding. One fundamental question concerns how we should generate text from a language model. It is well known that different decoding strategies can have a dramatic impact on the quality of the generated text: using the most likely sequence under the model distribution, e.g., via beam search, generally leads to degenerate and repetitive outputs. While generation strategies such as top-k and nucleus sampling produce more natural and less repetitive text, the true cost of avoiding the highest-scoring solution is hard to quantify. In this paper, we argue that verifiability, i.e., the consistency of the generated text with factual knowledge, is a suitable metric for measuring this cost. We use an automatic fact-checking system to calculate new metrics based on the number of supported claims per sentence, and find that sampling-based generation strategies, such as top-k, indeed lead to less verifiable text. This finding holds across various dimensions, such as model size, training data size, and the parameters of the generation strategy. Based on this finding, we introduce a simple and effective generation strategy that produces non-repetitive text that is more verifiable than that of other methods.
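
The abstract contrasts likelihood-maximizing decoding (beam search) with truncated sampling (top-k and nucleus sampling). As a rough illustration of what these samplers do to the next-token distribution, here is a minimal NumPy sketch; the function names and the toy interface are our own, not from the paper:

```python
import numpy as np

def top_k_filter(logits, k):
    """Keep only the k highest-scoring tokens; everything else gets -inf."""
    cutoff = np.sort(logits)[-k]                  # k-th largest logit
    return np.where(logits >= cutoff, logits, -np.inf)

def nucleus_filter(logits, p):
    """Keep the smallest set of top tokens whose cumulative probability
    reaches p (nucleus / top-p sampling)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]               # tokens by descending prob.
    cumulative = np.cumsum(probs[order])
    keep = cumulative - probs[order] < p          # mass *before* token < p
    filtered = np.full_like(logits, -np.inf)
    filtered[order[keep]] = logits[order[keep]]
    return filtered

def sample_token(logits, rng):
    """Sample one token id from (possibly filtered) logits."""
    probs = np.exp(logits - np.max(logits))       # -inf logits -> probability 0
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = rng.normal(size=50)                      # toy vocabulary of 50 tokens
print(sample_token(top_k_filter(logits, k=10), rng))
print(sample_token(nucleus_filter(logits, p=0.9), rng))
```

Beam search would instead keep the highest-probability partial sequences at every step; the paper's finding is that choosing the sampling side of this trade-off, while reducing repetition, costs verifiability.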

Related research

03/29/2022 · On Decoding Strategies for Neural Text Generators
When generating text from probabilistic models, the chosen decoding stra...

03/28/2022 · A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation
We propose Composition Sampling, a simple but effective method to genera...

07/07/2023 · On the Efficacy of Sampling Adapters
Sampling is a common strategy for generating text from probabilistic mod...

09/09/2023 · Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System
Neural language models are increasingly deployed into APIs and websites ...

01/04/2023 · Text sampling strategies for predicting missing bibliographic links
The paper proposes various strategies for sampling text data when perfor...

04/22/2019 · The Curious Case of Neural Text Degeneration
Despite considerable advancements with deep neural language models, the ...

10/22/2021 · Lightweight Decoding Strategies for Increasing Specificity
Language models are known to produce vague and generic outputs. We propo...
