Injecting the BM25 Score as Text Improves BERT-Based Re-rankers

01/23/2023
by Arian Askari, et al.

In this paper we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token in the middle of the input of the cross-encoder re-ranker. Prior work has shown that interpolating the relevance scores of lexical models and BERT-based re-rankers does not consistently improve effectiveness. Our idea is instead motivated by the finding that BERT models can capture numeric information. We compare several textual representations of the BM25 score and inject them into the input of four different cross-encoders. We additionally analyze the effect for different query types, and investigate how well our method captures exact-matching relevance. Evaluation on the MS MARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods, and that the improvement is consistent across query types. We also find an improvement in exact-matching capabilities over both BM25 and the cross-encoders. Our findings indicate that cross-encoder re-rankers can be improved, without additional computational burden or extra pipeline steps, by explicitly adding the output of the first-stage ranker to the model input, and that this effect is robust across models and query types.
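Concretely, the method amounts to serializing the first-stage BM25 score as text and concatenating it into the cross-encoder's input between the query and the passage. The sketch below illustrates one way to do this with a HuggingFace cross-encoder; the checkpoint name, the rounding of the score, and its placement at the start of the passage segment are illustrative assumptions rather than the paper's exact configuration, and in the paper the re-ranker is fine-tuned on inputs of this form, so scoring an off-the-shelf model this way only demonstrates the input layout.

```python
# Minimal sketch of BM25-score injection for a cross-encoder re-ranker.
# Assumptions (not taken from the paper): the checkpoint name, the
# score-to-text formatting, and prepending the score to the passage segment.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # any BERT-style cross-encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rerank_score(query: str, passage: str, bm25_score: float) -> float:
    """Cross-encoder relevance score with the first-stage BM25 score
    injected as plain text between the query and the passage."""
    # Render the number as text; the paper compares several such
    # representations, and this rounding is just one plausible choice.
    score_text = str(round(bm25_score, 2))
    # The score lands "in the middle" of the concatenated input:
    # [CLS] query [SEP] score passage [SEP]
    inputs = tokenizer(query, f"{score_text} {passage}",
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

print(rerank_score("what is bm25",
                   "BM25 is a bag-of-words ranking function used by search engines.",
                   12.37))
```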


