Modeling Label Ambiguity for Neural List-Wise Learning to Rank

07/24/2017
by   Rolf Jagerman, et al.
List-wise learning to rank methods are considered state-of-the-art. A major problem with these methods is that they ignore the ambiguous nature of relevance labels in learning to rank data. Ambiguity of relevance labels refers to the phenomenon that multiple documents may be assigned the same relevance label for a given query, so that no preference order should be learned for those documents. In this paper we propose a novel sampling technique for computing a list-wise loss that takes this ambiguity into account. We show the effectiveness of the proposed method by training a 3-layer deep neural network and comparing our new loss function to two strong baselines: ListNet and ListMLE. Our method generalizes better and significantly outperforms both baselines on the validation and test sets.
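The tie-aware idea described in the abstract can be sketched as a ListMLE-style Plackett-Luce loss averaged over permutations that respect the label ordering but randomly shuffle documents within tied relevance groups, so no preference is learned between equally labeled documents. The function names and the number of sampled permutations below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def listmle_loss(scores, permutation):
    """Plackett-Luce negative log-likelihood of one permutation (ListMLE)."""
    s = np.asarray(scores, dtype=float)[permutation]
    # log-sum-exp over each suffix, computed stably with a running logaddexp
    suffix_lse = np.logaddexp.accumulate(s[::-1])[::-1]
    return float(np.sum(suffix_lse - s))

def sample_tie_aware_permutation(labels, rng):
    """Order documents by decreasing relevance label, shuffling within ties."""
    order = []
    for lab in sorted(set(labels), reverse=True):
        group = [i for i, l in enumerate(labels) if l == lab]
        rng.shuffle(group)  # tied documents get a random internal order
        order.extend(group)
    return np.array(order)

def tie_aware_listmle(scores, labels, rng, n_samples=8):
    """Average ListMLE loss over permutations sampled within tied groups."""
    losses = [listmle_loss(scores, sample_tie_aware_permutation(labels, rng))
              for _ in range(n_samples)]
    return sum(losses) / n_samples
```

Averaging over sampled permutations avoids committing to one arbitrary order among tied documents, which is the failure mode of plain ListMLE that the paper targets.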

Related research

11/11/2018  Learning Groupwise Scoring Functions Using Deep Neural Networks
While in a classification or a regression setting a label or a value is ...

09/15/2019  Plackett-Luce model for learning-to-rank task
List-wise based learning to rank methods are generally supposed to have ...

02/13/2022  Learning to Rank from Relevance Judgments Distributions
Learning to Rank (LETOR) algorithms are usually trained on annotated cor...

05/20/2020  Context-Aware Learning to Rank with Self-Attention
In learning to rank, one is interested in optimising the global ordering...

01/07/2020  Listwise Learning to Rank by Exploring Unique Ratings
In this paper, we propose new listwise learning-to-rank models that miti...

06/12/2023  Deep Model Compression Also Helps Models Capture Ambiguity
Natural language understanding (NLU) tasks face a non-trivial amount of ...
