ExpertRank: A Multi-level Coarse-grained Expert-based Listwise Ranking Loss

07/29/2021
by   Zhizhong Chen, et al.

The goal of information retrieval is to recommend a list of candidate documents that are most relevant to a given query. Listwise learning trains neural retrieval models by comparing many candidates simultaneously at scale, offering more competitive performance than pairwise and pointwise schemes. However, existing listwise ranking losses treat the candidate document list as a whole, without further inspection. Candidates with moderate semantic prominence may be obscured by noisy similarity signals or overshadowed by a few especially pronounced candidates. As a result, existing ranking losses fail to exploit the full potential of neural retrieval models. To address these concerns, we apply the classic pooling technique to perform multi-level coarse graining and propose ExpertRank, a novel expert-based listwise ranking loss. The proposed scheme has three major advantages: (1) ExpertRank introduces the physics concept of coarse graining to information retrieval, selecting prominent candidates at various local levels based on model predictions and inter-document comparison. (2) ExpertRank applies the mixture-of-experts (MoE) technique to combine the different experts effectively, extending the traditional ListNet. (3) Compared to existing listwise learning approaches, ExpertRank yields more reliable and competitive performance across neural retrieval models of varying complexity, from traditional models such as KNRM, ConvKNRM, and MatchPyramid to sophisticated BERT/ALBERT-based retrieval models.
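The abstract's core ingredients (a ListNet-style listwise loss, pooling-based coarse graining, and a mixture over levels) can be illustrated with a minimal sketch. This is NOT the paper's actual formulation: the window sizes, the prediction-based selection rule, and the fixed mixture weights below are all illustrative assumptions, and the paper learns its expert weights via MoE rather than fixing them.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a plain Python list.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def listnet_loss(scores, labels):
    # Classic ListNet: cross-entropy between the top-one probability
    # distributions induced by the relevance labels and the model scores.
    p = softmax(labels)
    q = softmax(scores)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def coarse_grain(scores, labels, window):
    # Pooling-style coarse graining: within each local window, keep only
    # the candidate the model scores highest (a stand-in for the paper's
    # prominence-based selection), carrying its label along with it.
    cs, cl = [], []
    for i in range(0, len(scores), window):
        j = max(range(i, min(i + window, len(scores))),
                key=lambda k: scores[k])
        cs.append(scores[j])
        cl.append(labels[j])
    return cs, cl

def expertrank_sketch(scores, labels, windows=(1, 2, 4), weights=None):
    # Hypothetical multi-level loss: one ListNet "expert" per coarse-graining
    # level, combined with fixed mixture weights (ExpertRank learns this
    # combination via a mixture of experts).
    if weights is None:
        weights = [1.0 / len(windows)] * len(windows)
    total = 0.0
    for w, wt in zip(windows, weights):
        cs, cl = coarse_grain(scores, labels, w)
        total += wt * listnet_loss(cs, cl)
    return total
```

With `window=1` no candidate is pooled away, so that expert reduces to plain ListNet; larger windows progressively restrict the comparison to locally prominent candidates.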


