Joint Retrieval and Generation Training for Grounded Text Generation

by Yizhe Zhang et al.

Recent advances in large-scale pre-training, such as GPT-3, allow seemingly high-quality text to be generated from a given prompt. However, such generation systems often suffer from hallucinated facts and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer remedies, but their training typically relies on rarely available parallel data, where corresponding documents are provided as context. We propose a framework that alleviates this data constraint by jointly training a grounded generator and a document retriever on the language-model signal. The model learns to retrieve the documents with the highest utility for generation and attentively combines them in the output. We demonstrate that, by taking advantage of external references, our approach produces more informative and interesting text in both prose and dialogue generation.
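The joint training idea can be sketched numerically: the retriever assigns a probability to each candidate document, the generator assigns a likelihood to the target text given each document, and the training loss is the negative log of the marginal likelihood, so the language-model signal flows back into the retriever. The sketch below uses toy NumPy vectors in place of learned encoders; the embeddings and per-document generation likelihoods are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical query and document embeddings (stand-ins for learned encoders).
query = np.array([1.0, 0.0])
docs = np.array([[0.9, 0.1],
                 [0.1, 0.9],
                 [0.5, 0.5]])

# Retriever: p(d | x) from dot-product relevance scores.
retrieval_probs = softmax(docs @ query)

# Generator: assumed likelihoods p(y | x, d) of the target text under each
# retrieved document (stand-ins for a seq2seq model's output probabilities).
gen_likelihoods = np.array([0.30, 0.05, 0.10])

# Joint objective: marginal likelihood p(y | x) = sum_d p(d | x) p(y | x, d).
# Minimizing -log p(y | x) trains retriever and generator together, since the
# gradient rewards documents with high utility for generating y.
marginal = float(retrieval_probs @ gen_likelihoods)
loss = -np.log(marginal)
```

Because the document most useful for generating the target also carries the highest generation likelihood here, gradient descent on this loss would push the retriever to rank it even higher, which is the intuition behind training on the language-model signal alone.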


Focused Attention Improves Document-Grounded Generation

Document grounded generation is the task of using the information provid...

Hindsight: Posterior-guided training of retrievers for improved open-ended generation

Many text generation systems benefit from using a retriever to retrieve ...

KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

Data-to-text generation has recently attracted substantial interest due...

Pragmatically Informative Text Generation

We improve the informativeness of models for conditional text generation...

MixingBoard: a Knowledgeable Stylized Integrated Text Generation Platform

We present MixingBoard, a platform for quickly building demos with a foc...

Retrieve and Refine: Improved Sequence Generation Models For Dialogue

Sequence generation models for dialogue are known to have several proble...

Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts

Dialogue participants often refer to entities or situations repeatedly w...