Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

05/22/2020
by Patrick Lewis, et al.

Large pre-trained language models have been shown to store factual knowledge in their parameters and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence their performance on knowledge-intensive tasks lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome these issues, but have so far only been investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) – models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations: one conditions on the same retrieved passages across the whole generated sequence, while the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open-domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
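As a concrete illustration of the recipe described in the abstract (a pre-trained seq2seq generator combined with a dense Wikipedia index queried by a neural retriever), below is a minimal sketch using the Hugging Face transformers implementation of RAG. The facebook/rag-sequence-nq checkpoint, the dummy retrieval index, and the example question are assumptions chosen for illustration, not details taken from the paper.

# Minimal RAG sketch (assumes the `transformers` and `datasets` packages are
# installed and the "facebook/rag-sequence-nq" checkpoint is acceptable).
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

# The RAG tokenizer bundles the question-encoder tokenizer and the generator tokenizer.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

# The retriever queries a dense vector index of Wikipedia passages; a small dummy
# index is used here so the sketch runs without downloading the full index.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)

# RAG-Sequence: the same retrieved passages condition the whole generated sequence.
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))

The difference between the two formulations compared in the paper shows up here only as a class swap: RagSequenceForGeneration marginalizes over the retrieved passages once per output sequence, while RagTokenForGeneration (with the corresponding facebook/rag-token-nq checkpoint) can marginalize over different passages at each generated token.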


