GRM: Generative Relevance Modeling Using Relevance-Aware Sample Estimation for Document Retrieval

06/16/2023
by Iain Mackie et al.

Recent studies show that Generative Relevance Feedback (GRF), using text generated by Large Language Models (LLMs), can enhance the effectiveness of query expansion. However, LLMs can generate irrelevant information that harms retrieval effectiveness. To address this, we propose Generative Relevance Modeling (GRM), which uses Relevance-Aware Sample Estimation (RASE) to weight expansion terms more accurately. Specifically, for each generated document we identify similar real documents and use a neural re-ranker to estimate their relevance. Experiments on three standard document ranking benchmarks show that GRM improves MAP by 6-9%.
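The abstract only sketches the method, so the snippet below is a minimal, hedged illustration of the core idea: estimate how relevant each LLM-generated document is by re-ranking real documents similar to it, then weight its expansion terms by that estimate. The use of rank_bm25 for finding similar documents, the cross-encoder model name, the mean-score aggregation, and the RM3-style term weighting are illustrative assumptions, not the authors' exact implementation.

```python
from collections import Counter

from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder


def estimate_relevance(query, generated_doc, corpus, reranker, top_k=5):
    """Estimate the relevance of one LLM-generated document by scoring the
    real corpus documents most similar to it with a neural re-ranker."""
    tokenized_corpus = [doc.lower().split() for doc in corpus]
    bm25 = BM25Okapi(tokenized_corpus)
    # Retrieve real documents that resemble the generated document.
    similar_docs = bm25.get_top_n(generated_doc.lower().split(), corpus, n=top_k)
    # Score (query, real document) pairs; the mean score stands in for the
    # relevance of the generated document itself.
    scores = reranker.predict([(query, doc) for doc in similar_docs])
    return float(sum(scores) / len(scores))


def weighted_expansion_terms(query, generated_docs, corpus, n_terms=20):
    """Weight expansion terms by the estimated relevance of the generated
    document they appear in (an RM3-style weighted term distribution)."""
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    term_weights = Counter()
    for gen_doc in generated_docs:
        relevance = estimate_relevance(query, gen_doc, corpus, reranker)
        tokens = gen_doc.lower().split()
        counts = Counter(tokens)
        for term, count in counts.items():
            # Terms from generated documents judged more relevant get more weight.
            term_weights[term] += relevance * count / len(tokens)
    return term_weights.most_common(n_terms)
```

In practice one would index the corpus once rather than rebuilding BM25 per generated document, and interpolate the weighted expansion terms with the original query before the final retrieval pass; the sketch above keeps everything inline for readability.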

Related research

04/25/2023  Generative Relevance Feedback with Large Language Models
Current query expansion models use pseudo-relevance feedback to improve ...

10/13/2022  Query Expansion Using Contextual Clue Sampling with Language Models
Query expansion is an effective approach for mitigating vocabulary misma...

08/08/2019  Neural Document Expansion with User Feedback
This paper presents a neural document expansion approach (NeuDEF) that e...

02/22/2023  One-Shot Labeling for Automatic Relevance Estimation
Dealing with unjudged documents ("holes") in relevance assessments is a ...

01/21/2022  Less is Less: When Are Snippets Insufficient for Human vs Machine Relevance Estimation?
Traditional information retrieval (IR) ranking models process the full t...

05/19/2023  Exploring the Viability of Synthetic Query Generation for Relevance Prediction
Query-document relevance prediction is a critical problem in Information...

04/20/2018  The Role-Relevance Model for Enhanced Semantic Targeting in Unstructured Text
Personalized search provides a potentially powerful tool, however, it is...
