Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories

02/07/2023
by   Suyu Ge, et al.

In this paper, we improve the zero-shot generalization ability of language models via Mixture-Of-Memory Augmentation (MoMA), a mechanism that retrieves augmentation documents from multiple information corpora ("external memories"), with the option to "plug in" new memory at inference time. We develop a joint learning mechanism that trains the augmentation component with latent labels derived from the end retrieval task, paired with hard negatives from the memory mixture. We instantiate the model in a zero-shot dense retrieval setting by augmenting a strong T5-based retriever with MoMA. Our model, MoMA, obtains strong zero-shot retrieval accuracy on the eighteen tasks included in the standard BEIR benchmark. It outperforms systems that seek generalization from increased model parameters and computation steps. Our analysis further illustrates the necessity of augmenting with a mixture of memories for robust generalization, the benefits of augmentation learning, and how MoMA utilizes the plug-in memory at inference time without changing its parameters. We plan to open-source our code.
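To make the mixture-of-memory idea concrete, here is a minimal sketch of retrieving augmentation documents from several corpora, with a new memory plugged in at inference time without changing any model parameters. The class name `MemoryMixture`, the `plug_in` method, and the toy bag-of-words scorer are illustrative assumptions, not the paper's actual T5-based dense encoder or training procedure.

```python
# Illustrative sketch only: a bag-of-words counter stands in for a learned
# dense encoder, and dot-product scoring stands in for dense similarity.
from collections import Counter
from typing import Dict, List, Tuple

def embed(text: str) -> Counter:
    # Stand-in for a frozen learned encoder.
    return Counter(text.lower().split())

def score(q: Counter, d: Counter) -> float:
    # Sparse dot product as a proxy for dense vector similarity.
    return float(sum(q[t] * d[t] for t in q))

class MemoryMixture:
    """Holds several document corpora ("external memories"); new memories
    can be plugged in at inference time without retraining the encoder."""

    def __init__(self, memories: Dict[str, List[str]]):
        self.memories = {name: [(doc, embed(doc)) for doc in docs]
                         for name, docs in memories.items()}

    def plug_in(self, name: str, docs: List[str]) -> None:
        # Plug-in memory: index new documents with the same frozen encoder.
        self.memories[name] = [(doc, embed(doc)) for doc in docs]

    def retrieve(self, query: str, k: int = 2) -> List[Tuple[str, str]]:
        # Return the top-k augmentation documents across all memories.
        q = embed(query)
        scored = [(score(q, vec), name, doc)
                  for name, docs in self.memories.items()
                  for doc, vec in docs]
        scored.sort(key=lambda t: t[0], reverse=True)
        return [(name, doc) for s, name, doc in scored[:k] if s > 0]

mm = MemoryMixture({
    "wiki": ["dense retrieval maps queries and documents to vectors"],
    "news": ["stock markets fell sharply on friday"],
})
# A domain-specific memory added at inference time, no parameter updates.
mm.plug_in("medical", ["zero shot retrieval of clinical trial documents"])
print(mm.retrieve("zero shot dense retrieval", k=2))
```

In the paper, the augmentation component is additionally trained end-to-end with latent labels and hard negatives from the memory mixture; this sketch only shows the inference-time retrieval and plug-in mechanics.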


Related research

On-The-Fly Information Retrieval Augmentation for Language Models (07/03/2020)
Here we experiment with the use of information retrieval as an augmentat...

Nearest Neighbor Zero-Shot Inference (05/27/2022)
We introduce kNN-Prompt, a simple and effective technique to use k-neare...

Retrieval Augmentation for T5 Re-ranker using External Sources (10/11/2022)
Retrieval augmentation has shown promising improvements in different tas...

Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In (05/27/2023)
Retrieval augmentation can aid language models (LMs) in knowledge-intens...

Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization (10/06/2022)
During zero-shot inference with language models (LMs), using hard prompt...

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning (05/29/2022)
Prompt learning approaches have made waves in natural language processin...

How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval (02/15/2023)
Various techniques have been developed in recent years to improve dense ...
