Adapting Learned Sparse Retrieval for Long Documents

05/29/2023
by Thong Nguyen, et al.

Learned sparse retrieval (LSR) is a family of neural retrieval methods that transform queries and documents into sparse weight vectors aligned with a vocabulary. While LSR approaches like Splade work well for short passages, it is unclear how well they handle longer documents. We investigate existing aggregation approaches for adapting LSR to longer documents and find that proximal scoring is crucial for handling them effectively. To leverage this property, we propose two adaptations of the Sequential Dependence Model (SDM) to LSR: ExactSDM and SoftSDM. ExactSDM models dependence only between exact query terms, while SoftSDM uses potential functions that model the dependence of query terms and their expansion terms (i.e., terms identified using a transformer's masked language modeling head). Experiments on the MSMARCO Document and TREC Robust04 datasets demonstrate that both ExactSDM and SoftSDM outperform existing LSR aggregation approaches under different document length constraints. Surprisingly, SoftSDM provides no performance benefit over ExactSDM, suggesting that soft proximity matching is not necessary for modeling term dependence in LSR. Overall, this study provides insights into handling long documents with LSR and proposes adaptations that improve its performance.
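Since the abstract only names the components, a rough sketch may help make the scoring structure concrete. The Python sketch below is illustrative only: the function names (lsr_score, window_counts, exact_sdm_score), the interpolation weights (0.8, 0.1, 0.1), the window size of 8, and the use of raw co-occurrence counts as proximity potentials are all assumptions, not the paper's formulation. Classic SDM combines log-probabilities from smoothed language models; the paper's ExactSDM presumably adapts such potentials to learned term weights in a way the abstract does not specify.

```python
from collections import defaultdict

def lsr_score(query_vec, doc_vec):
    """Dot product between sparse query and document weight vectors.

    Each vector maps vocabulary terms to learned weights; only
    terms present in both vectors contribute to the score.
    """
    if len(query_vec) > len(doc_vec):
        query_vec, doc_vec = doc_vec, query_vec
    return sum(w * doc_vec.get(t, 0.0) for t, w in query_vec.items())

def window_counts(doc_tokens, t1, t2, window=8):
    """Count ordered and unordered co-occurrences of (t1, t2) within a window."""
    positions = defaultdict(list)
    for i, tok in enumerate(doc_tokens):
        positions[tok].append(i)
    ordered = unordered = 0
    for i in positions[t1]:
        for j in positions[t2]:
            if 0 < j - i <= window:
                ordered += 1
            if i != j and abs(j - i) <= window:
                unordered += 1
    return ordered, unordered

def exact_sdm_score(query_terms, query_vec, doc_vec, doc_tokens,
                    lambdas=(0.8, 0.1, 0.1)):
    """SDM-style score: weighted mix of unigram LSR matching plus
    ordered/unordered proximity potentials over adjacent query term pairs.
    Raw counts stand in for SDM's language-model potentials here."""
    l_t, l_o, l_u = lambdas
    unigram = lsr_score(query_vec, doc_vec)
    ordered_total = unordered_total = 0.0
    for t1, t2 in zip(query_terms, query_terms[1:]):
        o, u = window_counts(doc_tokens, t1, t2)
        ordered_total += o
        unordered_total += u
    return l_t * unigram + l_o * ordered_total + l_u * unordered_total

if __name__ == "__main__":
    doc = "learned sparse retrieval handles long documents with proximity scoring".split()
    query = ["sparse", "retrieval"]
    query_vec = {"sparse": 1.2, "retrieval": 1.5}   # toy learned weights
    doc_vec = {t: 1.0 for t in doc}
    print(exact_sdm_score(query, query_vec, doc_vec, doc))
```

In this framing, SoftSDM would differ by letting the proximity potentials also match a query term's expansion terms (terms surfaced by the masked language modeling head) rather than only exact occurrences.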
