Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study

04/13/2023
by Boxin Wang, et al.

The perplexity of large decoder-only language models (LMs) can be substantially improved by retrieval (e.g., RETRO), but the impact of retrieval on text generation quality and downstream task accuracy remains unclear. Thus, it is still an open question: shall we pretrain large autoregressive LMs with retrieval? To answer it, we perform a comprehensive study of a scalable pretrained retrieval-augmented LM (i.e., RETRO) compared with standard GPT and with retrieval-augmented GPT where retrieval is incorporated at the fine-tuning or inference stage. We first provide the recipe to reproduce RETRO at up to 9.5B parameters while retrieving from a text corpus of 330B tokens. Based on that, we report the following novel findings: i) RETRO outperforms GPT on text generation with much less degeneration (i.e., repetition), moderately higher factual accuracy, and slightly lower toxicity when using a nontoxic retrieval database. ii) On the LM Evaluation Harness benchmark, RETRO largely outperforms GPT on knowledge-intensive tasks, but is on par with GPT on other tasks. Furthermore, we introduce a simple variant of the model, RETRO++, which largely improves the open-domain QA results of the original RETRO (e.g., EM score +8.6 on Natural Questions) and significantly outperforms retrieval-augmented GPT across different model sizes. Our findings highlight the promising direction of pretraining autoregressive LMs with retrieval as future foundation models. We release our implementation at: https://github.com/NVIDIA/Megatron-LM#retro
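To make the retrieval-augmented setup the abstract refers to more concrete, below is a minimal, self-contained Python sketch of per-chunk nearest-neighbor retrieval, the lookup pattern that RETRO-style models condition on. It is not the Megatron-LM/RETRO implementation: the hash-based toy embeddings, the tiny in-memory database, and helper names such as retrieve_neighbors are illustrative assumptions; a real system would use a frozen neural retriever over a multi-hundred-billion-token corpus and feed the neighbors to the decoder through chunked cross-attention.

import numpy as np

CHUNK_SIZE = 4   # tokens per input chunk (RETRO uses 64-token chunks; tiny here for clarity)
EMBED_DIM = 16   # toy embedding width
TOP_K = 2        # neighbors returned per chunk

def token_vec(token):
    # Deterministic (within one run) pseudo-embedding per token; a real system
    # would use a frozen retriever such as a BERT encoder instead.
    seed = abs(hash(token)) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def embed(tokens):
    # Mean-pool token vectors into a single chunk embedding.
    return np.mean([token_vec(t) for t in tokens], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Hypothetical in-memory retrieval database of (embedding, chunk text) pairs;
# RETRO retrieves from a corpus of hundreds of billions of tokens instead.
DATABASE = [
    "the eiffel tower is located in paris france",
    "water boils at 100 degrees celsius at sea level",
    "retrieval augmented models attend to external text chunks",
]
DB_INDEX = [(embed(text.split()), text) for text in DATABASE]

def retrieve_neighbors(chunk_tokens, k=TOP_K):
    # Rank database chunks by cosine similarity to the query chunk.
    q = embed(chunk_tokens)
    ranked = sorted(DB_INDEX, key=lambda item: cosine(q, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Split the input into fixed-size chunks and fetch neighbors for each one;
# a RETRO-style decoder would condition on these neighbors via chunked
# cross-attention while generating the next chunk.
input_tokens = "where is the eiffel tower located".split()
for start in range(0, len(input_tokens), CHUNK_SIZE):
    chunk = input_tokens[start:start + CHUNK_SIZE]
    print(chunk, "->", retrieve_neighbors(chunk))

Running the sketch prints the top neighbors for each 4-token chunk of the query, which is the information a RETRO-style decoder would attend to during generation.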

