Retrieval Augmentation for T5 Re-ranker using External Sources

10/11/2022
by Kai Hui, et al.

Retrieval augmentation has shown promising improvements across a range of tasks. However, whether such augmentation can assist a re-ranker based on a large language model remains unclear. We investigate how to augment T5-based re-rankers using high-quality information retrieved from two external corpora: a commercial web search engine and Wikipedia. We empirically demonstrate that retrieval augmentation can substantially improve the effectiveness of T5-based re-rankers on both in-domain and zero-shot out-of-domain re-ranking tasks.
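To make the idea concrete, here is a minimal sketch of how retrieved external text might be folded into the input of a monoT5-style pointwise re-ranker. The "Query: ... Document: ... Relevant:" template follows monoT5; the `Context:` field, the helper name, and the placement of the retrieved snippets are illustrative assumptions, not the paper's exact recipe.

```python
def augmented_rerank_input(query, document, retrieved_passages, max_passages=2):
    """Build a retrieval-augmented input string for a T5-based re-ranker.

    retrieved_passages: high-quality snippets from an external corpus
    (e.g. a web search engine or Wikipedia). Only the top few are kept,
    since T5's input length is limited. The Context: field is a
    hypothetical extension of the monoT5 template.
    """
    context = " ".join(retrieved_passages[:max_passages])
    return f"Query: {query} Context: {context} Document: {document} Relevant:"


example = augmented_rerank_input(
    "who wrote hamlet",
    "Hamlet is a tragedy written by William Shakespeare.",
    ["William Shakespeare wrote Hamlet around 1600."],
)
```

At inference time, such a string would be fed to the seq2seq model, and the probability assigned to the "true" token would serve as the relevance score, as in standard monoT5 re-ranking.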


Related research

10/01/2022  Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks
  Although large language models have achieved impressive zero-shot abilit...

02/07/2023  Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories
  In this paper we improve the zero-shot generalization ability of languag...

05/18/2023  The Web Can Be Your Oyster for Improving Large Language Models
  Large language models (LLMs) encode a large amount of world knowledge. H...

05/27/2023  Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In
  Retrieval augmentation can aid language models (LMs) in knowledge-intens...

05/03/2023  Zero-Shot Listwise Document Reranking with a Large Language Model
  Supervised ranking methods based on bi-encoder or cross-encoder architec...

05/27/2022  Nearest Neighbor Zero-Shot Inference
  We introduce kNN-Prompt, a simple and effective technique to use k-neare...

09/06/2022  Segment Augmentation and Differentiable Ranking for Logo Retrieval
  Logo retrieval is a challenging problem since the definition of similari...
