PoolRank: Max/Min Pooling-based Ranking Loss for Listwise Learning Ranking Balance

08/08/2021
by Zhizhong Chen, et al.

Numerous neural retrieval models have been proposed in recent years. These models learn to compute a ranking score for a given query–document pair. The majority of existing models are trained in a pairwise fashion, using human-judged labels directly without further calibration. Traditional pairwise schemes can be time-consuming and require pre-defined positive–negative document pairs for training, potentially leading to learning bias due to a mismatch between the document distributions seen at training and test time. Some popular existing listwise schemes rely on strong pre-defined probabilistic assumptions and a stark difference between relevant and non-relevant documents for the given query, which may limit a model's potential when relevance labels are low-quality or ambiguous. To address these concerns, we turn to a physics-inspired ranking balance scheme and propose PoolRank, a pooling-based listwise learning framework. The proposed scheme has four major advantages: (1) PoolRank extracts training information from the best candidates at the local level, based on model performance and relative ranking among abundant document candidates. (2) By combining four pooling-based loss components in a multi-task learning fashion, PoolRank calibrates the ranking balance between partially relevant and highly non-relevant documents automatically, without costly human inspection. (3) PoolRank can be generalized to any neural retrieval model without requiring additional learnable parameters or modifications to the model structure. (4) Compared to pairwise learning and existing listwise learning schemes, PoolRank yields better ranking performance for all studied retrieval models while retaining efficient convergence rates.
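To make the pooling idea concrete, here is a minimal sketch of one plausible max/min pooling-based listwise loss component. The abstract does not specify the four loss terms, so everything below is an illustrative assumption, not the paper's actual formulation: it pools over the candidate list to pick the lowest-scored relevant document (min pool) and the highest-scored non-relevant document (max pool), then applies a hinge so the hardest positive outranks the hardest negative.

```python
import numpy as np

def pooled_hinge_loss(pos_scores, neg_scores, margin=1.0):
    """Illustrative pooling-based listwise loss term (not the paper's exact loss).

    - min pooling over relevant documents -> hardest positive
    - max pooling over non-relevant documents -> hardest negative
    - hinge: penalize when the hardest positive fails to beat the
      hardest negative by at least `margin`
    """
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    hardest_pos = pos.min()  # min pool over relevant candidates
    hardest_neg = neg.max()  # max pool over non-relevant candidates
    return max(0.0, margin - (hardest_pos - hardest_neg))
```

Because the pooled scores are computed directly from the model's outputs over the candidate list, a term like this adds no learnable parameters, which is consistent with advantage (3) above; the full framework combines several such pooled terms in a multi-task objective.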


