Surface-Based Retrieval Reduces Perplexity of Retrieval-Augmented Language Models

05/25/2023
by Ehsan Doostmohammadi, et al.

Augmenting language models with a retrieval mechanism has been shown to significantly improve their performance while keeping the number of parameters low. Retrieval-augmented models commonly rely on a semantic retrieval mechanism based on the similarity between dense representations of the query chunk and potential neighbors. In this paper, we study the state-of-the-art Retro model and observe that its performance gain is better explained by surface-level similarities, such as token overlap. Inspired by this, we replace the semantic retrieval in Retro with a surface-level method based on BM25, obtaining a significant reduction in perplexity. As full BM25 retrieval can be computationally costly for large datasets, we also apply it in a re-ranking scenario, gaining part of the perplexity reduction with minimal computational overhead.
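The surface-level method the abstract describes can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a self-contained Okapi BM25 scorer over tokenized chunks, plus a hypothetical `rerank` helper showing the re-ranking scenario, where candidate neighbors proposed by dense retrieval are re-ordered by token overlap with the query chunk (parameter defaults `k1=1.2`, `b=0.75` are common conventions, not values from the paper):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus, k1=1.2, b=0.75):
    """Score each tokenized document in `corpus` against the query
    using Okapi BM25, a surface-level (token-overlap) measure."""
    N = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / N
    # document frequency of each query term
    df = {t: sum(1 for doc in corpus if t in doc) for t in set(query_tokens)}
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for t in query_tokens:
            if df.get(t, 0) == 0:
                continue  # term absent from the corpus: contributes nothing
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

def rerank(query_tokens, candidates):
    """Re-order dense-retrieval candidates by BM25 score (hypothetical helper)."""
    scores = bm25_scores(query_tokens, candidates)
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [candidates[i] for i in order]
```

In the cheap re-ranking setting, `candidates` would be the small neighbor set returned by the dense index, so BM25 only scores a handful of chunks per query instead of the full datastore.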

