Lightweight Decoding Strategies for Increasing Specificity

10/22/2021
by Katy Ilonka Gero, et al.

Language models are known to produce vague and generic outputs. We propose two unsupervised decoding strategies based on either word-frequency or point-wise mutual information to increase the specificity of any model that outputs a probability distribution over its vocabulary at generation time. We test the strategies in a prompt completion task; with human evaluations, we find that both strategies increase the specificity of outputs with only modest decreases in sensibility. We also briefly present a summarization use case, where these strategies can produce more specific summaries.
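
The abstract does not spell out the scoring functions, but the word-frequency idea can be illustrated with a short sketch: re-score the model's next-token distribution so that rarer (more specific) tokens rank higher, then renormalize before sampling. The scoring rule, the alpha parameter, and the toy frequencies below are assumptions for illustration only, not the authors' published formulation.

import math

def specificity_reweight(token_probs, corpus_freqs, alpha=0.5, eps=1e-12):
    """Re-score next-token probabilities so rarer (more specific) tokens rank higher.

    Assumed illustrative rule: score(t) = log p(t) - alpha * log f(t),
    followed by a softmax back into a probability distribution.
    """
    scores = {
        tok: math.log(p + eps) - alpha * math.log(corpus_freqs.get(tok, eps) + eps)
        for tok, p in token_probs.items()
    }
    # Softmax with max-subtraction for numerical stability.
    m = max(scores.values())
    exp_scores = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exp_scores.values())
    return {tok: v / total for tok, v in exp_scores.items()}

if __name__ == "__main__":
    # Toy example: "canine" is rarer than "dog", so its probability is boosted.
    probs = {"dog": 0.60, "canine": 0.30, "the": 0.10}
    freqs = {"the": 0.06, "dog": 0.01, "canine": 0.0005}  # made-up corpus frequencies
    print(specificity_reweight(probs, freqs))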


Related research

- 10/24/2022: Mutual Information Alleviates Hallucinations in Abstractive Summarization
- 06/14/2019: Comparison of Diverse Decoding Methods from Conditional Language Models
- 03/06/2023: Faithfulness-Aware Decoding Strategies for Abstractive Summarization
- 11/09/2019: How Decoding Strategies Affect the Verifiability of Generated Text
- 12/15/2021: Mask-combine Decoding and Classification Approach for Punctuation Prediction with real-time Inference Constraints
- 05/24/2023: KNN-LM Does Not Improve Open-ended Text Generation
- 07/09/2020: Automation Strategies for Unconstrained Crossword Puzzle Generation
