Differentiable Unbiased Online Learning to Rank

09/22/2018
by Harrie Oosterhuis, et al.

Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models, and their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks, and it can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in learning speed and in final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.
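The core idea of a click-based pairwise update can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name is ours, the model is a plain linear scorer, preferences are naively inferred as "every clicked document is preferred over every unclicked one", and the bias-correcting pair weight that PDGD derives for its unbiasedness proof is omitted.

```python
import numpy as np

def pairwise_update(weights, features, clicks, lr=0.1):
    """One simplified pairwise update from a single interaction.

    weights  : (d,) linear ranking model parameters
    features : (n, d) feature vectors of the displayed documents
    clicks   : length-n sequence of 0/1 click indicators
    """
    scores = features @ weights
    grad = np.zeros_like(weights)
    clicked = [i for i, c in enumerate(clicks) if c]
    unclicked = [i for i, c in enumerate(clicks) if not c]
    for i in clicked:
        for j in unclicked:
            # Probability that doc j would (wrongly) beat doc i
            # under a pairwise softmax of the current scores.
            p = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
            # Gradient step pushes the clicked document's score
            # above the unclicked document's score.
            grad += p * (features[i] - features[j])
    return weights + lr * grad
```

Because the update is an ordinary gradient of a differentiable pairwise objective, the same construction applies unchanged if the linear scorer is replaced by any differentiable model, e.g. a neural network.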



Related research

11/26/2017 · Balancing Speed and Quality in Online Learning to Rank for Information Retrieval
In Online Learning to Rank (OLTR) the aim is to find an optimal ranking ...

01/29/2019 · Optimizing Ranking Models in an Online Setting
Online Learning to Rank (OLTR) methods optimize ranking models by direct...

01/17/2022 · Learning Neural Ranking Models Online from Implicit User Feedback
Existing online learning to rank (OL2R) solutions are limited to linear ...

06/10/2019 · Variance Reduction in Gradient Exploration for Online Learning to Rank
Online Learning to Rank (OL2R) algorithms learn from implicit user feedb...

05/21/2020 · Accelerated Convergence for Counterfactual Learning to Rank
Counterfactual Learning to Rank (LTR) algorithms learn a ranking model f...

11/01/2019 · ARSM Gradient Estimator for Supervised Learning to Rank
We propose a new model for supervised learning to rank. In our model, th...

11/26/2017 · Sensitive and Scalable Online Evaluation with Theoretical Guarantees
Multileaved comparison methods generalize interleaved comparison methods...
