Typical Decoding for Natural Language Generation

02/01/2022
by Clara Meister, et al.

Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (à la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in an efficient yet error-minimizing manner, choosing each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution, which have a low Shannon information content, we sample from the set of words with an information content close to its expected value, i.e., close to the conditional entropy of our model. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions.
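
As a rough illustration of this decision criterion, the sketch below applies typicality-based truncation to a single next-token distribution: it ranks tokens by how far their information content (negative log-probability) falls from the conditional entropy, then keeps the smallest such set covering a chosen probability mass. This is a minimal NumPy sketch, not the paper's reference implementation; the function name typical_filter and the mass argument are illustrative assumptions rather than the authors' naming.

```python
import numpy as np

def typical_filter(logits, mass=0.9):
    """Truncate a next-token distribution to the tokens whose information
    content is closest to the conditional entropy (a sketch of typical sampling)."""
    # Softmax over the logits (assumes a 1-D array over the vocabulary).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Shannon information content of each candidate token: -log p(token).
    info = -np.log(probs + 1e-12)
    # Conditional entropy of the model's next-token distribution.
    entropy = float(np.sum(probs * info))
    # Rank tokens by how close their information content is to the entropy.
    order = np.argsort(np.abs(info - entropy))
    # Keep the smallest such set whose cumulative probability reaches `mass`.
    cum = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cum, mass)) + 1]
    # Renormalize over the kept tokens.
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Toy usage: sample one next-token id from a random logit vector.
rng = np.random.default_rng(0)
p = typical_filter(rng.normal(size=50), mass=0.9)
next_token = rng.choice(len(p), p=p)
```

In an autoregressive loop, this filtering step would be applied to the model's logits at every generation step before sampling the next token, just as with nucleus or top-k truncation.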


