On-The-Fly Information Retrieval Augmentation for Language Models

07/03/2020
by Hai Wang, et al.

Here we experiment with the use of information retrieval as an augmentation for pre-trained language models. The text corpus used in information retrieval can be viewed as a form of episodic memory which grows over time. By augmenting GPT 2.0 with information retrieval, we achieve a zero-shot 15% reduction in perplexity on the Gigaword corpus without any re-training. We also validate our IR augmentation on an event co-reference task.
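The core idea lends itself to a short sketch. Below is a minimal, hypothetical illustration, not the authors' actual pipeline: a BM25 retriever pulls the most relevant passage from a small stand-in corpus, and that passage is prepended as context before GPT-2 scores the target text, so only the target tokens contribute to the perplexity. The `transformers` and `rank_bm25` packages, the toy `corpus`, and the `perplexity` helper are illustrative assumptions; in the paper's setting the retrieval collection plays the role of a growing episodic memory.

```python
# Minimal sketch of on-the-fly IR augmentation (assumed setup, not the
# authors' exact method): retrieve a related passage with BM25 and
# prepend it to the context before scoring with GPT-2, zero-shot.
import torch
from rank_bm25 import BM25Okapi
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Toy stand-in for the retrieval corpus / episodic memory.
corpus = [
    "The Dow Jones industrial average closed higher on Monday.",
    "Researchers released a new pre-trained language model today.",
    "Heavy rain flooded several streets in the city center.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str, context: str = "") -> float:
    """Perplexity of `text`, optionally conditioned on `context`."""
    ctx_ids = tokenizer(context).input_ids if context else []
    txt_ids = tokenizer(text).input_ids
    input_ids = torch.tensor([ctx_ids + txt_ids])
    # Mask the retrieved context with -100 so it conditions the model
    # but is excluded from the loss (and hence from the perplexity).
    labels = input_ids.clone()
    labels[0, : len(ctx_ids)] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return float(torch.exp(loss))

query = "Stocks rallied as the Dow Jones average rose."
scores = bm25.get_scores(query.lower().split())
retrieved = corpus[int(scores.argmax())]

print("plain     :", perplexity(query))
print("augmented :", perplexity(query, context=retrieved))
```

Because retrieval happens at inference time, no re-training is involved; the model simply conditions on whatever the retriever returns, which is what makes the perplexity reduction zero-shot.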


Related research:

02/10/2022 · InPars: Data Augmentation for Information Retrieval using Large Language Models
The information retrieval community has recently witnessed a revolution ...

05/24/2023 · Referral Augmentation for Zero-Shot Information Retrieval
We propose Referral-Augmented Retrieval (RAR), a simple technique that c...

05/05/2021 · Rethinking Search: Making Experts out of Dilettantes
When experiencing an information need, users want to engage with an expe...

02/07/2023 · Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories
In this paper we improve the zero-shot generalization ability of languag...

09/14/2023 · Zero-shot Audio Topic Reranking using Large Language Models
The Multimodal Video Search by Examples (MVSE) project investigates usin...

08/04/2023 · ChatGPT for GTFS: From Words to Information
The General Transit Feed Specification (GTFS) standard for publishing tr...

05/10/2020 · How Context Affects Language Models' Factual Predictions
When pre-trained on large unsupervised textual corpora, language models ...
