Reflective Decoding: Unsupervised Paraphrasing and Abductive Reasoning

10/16/2020
by Peter West, et al.

Pretrained Language Models (LMs) generate text with remarkable quality, novelty, and coherence. Yet applying LMs to the problems of paraphrasing and infilling currently requires direct supervision, since these tasks break the left-to-right generation setup of pretrained LMs. We present Reflective Decoding, a novel unsupervised approach for applying the capabilities of pretrained LMs to non-sequential tasks. Our approach is general and applicable to two distant tasks: paraphrasing and abductive reasoning. It requires no supervision or parallel corpora, only two pretrained language models: forward and backward. Reflective Decoding operates in two intuitive steps. In the contextualization step, we use the LMs to generate many left and right contexts which collectively capture the meaning of the input sentence. Then, in the reflection step, we decode in the semantic neighborhood of the input, conditioning on an ensemble of the generated contexts with the reverse-direction LM. We reflect through the generated contexts, effectively using them as an intermediate meaning representation from which to generate conditional output. Empirical results demonstrate that Reflective Decoding outperforms strong unsupervised baselines on both paraphrasing and abductive text infilling, significantly narrowing the gap between unsupervised and supervised methods. Reflective Decoding introduces the concept of using generated contexts to represent meaning, opening up new possibilities for unsupervised conditional text generation.
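
The two steps described in the abstract can be illustrated with a rough sketch. The code below is not the authors' released implementation: it assumes HuggingFace transformers with a standard "gpt2" checkpoint, stubs out the backward (right-to-left) LM because no public checkpoint is assumed, mirrors the direction so the ensemble decode can run left-to-right with the forward LM alone, and simplifies the reflection step to an unweighted token-level average over per-context next-token distributions with greedy selection.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
fwd_lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sample_left_contexts(source, n=4):
    """Contextualization (stubbed): the full method samples left contexts for
    `source` from a backward, right-to-left LM; here the source itself stands
    in as a placeholder context."""
    return [source] * n

@torch.no_grad()
def reflective_decode(contexts, max_new_tokens=20):
    """Reflection (simplified): decode left-to-right while averaging next-token
    log-probabilities across all generated contexts, keeping the output in
    their shared semantic neighborhood. The paper uses a learned per-context
    weighting and sampling; this sketch uses a plain mean and greedy choice."""
    prefixes = [tok(c + " ", return_tensors="pt").input_ids for c in contexts]
    generated = []
    for _ in range(max_new_tokens):
        logps = []
        for ids in prefixes:
            if generated:
                ids = torch.cat([ids, torch.tensor([generated])], dim=1)
            logits = fwd_lm(ids).logits[0, -1]
            logps.append(F.log_softmax(logits, dim=-1))
        avg = torch.stack(logps).mean(dim=0)  # ensemble over contexts
        next_id = int(avg.argmax())
        if next_id == tok.eos_token_id:
            break
        generated.append(next_id)
    return tok.decode(generated, skip_special_tokens=True)

print(reflective_decode(sample_left_contexts("The cat sat quietly on the mat.")))
```

With a genuine backward LM supplying diverse left contexts, the averaged distribution pulls the decoded text toward wording that is compatible with all of them, which is how the generated contexts act as an intermediate meaning representation.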


