Optimizing Ranking Models in an Online Setting

01/29/2019
by Harrie Oosterhuis, et al.

Online Learning to Rank (OLTR) methods optimize ranking models by directly interacting with users, which allows them to be very efficient and responsive. All OLTR methods introduced during the past decade have built on the original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a fundamentally different approach was introduced with the Pairwise Differentiable Gradient Descent (PDGD) algorithm. To date, comparisons of the two approaches have been limited to simulations with cascading click models and low levels of noise. The main outcome so far is that PDGD converges at higher levels of performance and learns considerably faster than DBGD-based methods. However, the PDGD algorithm assumes cascading user behavior, potentially giving it an unfair advantage in these comparisons. Furthermore, the robustness of both methods to high levels of noise has not been investigated. Therefore, it is unclear whether the reported advantages of PDGD over DBGD generalize to different experimental conditions. In this paper, we investigate whether the previous conclusions about the PDGD and DBGD comparison generalize from ideal to worst-case circumstances. We do so in two ways. First, we compare the theoretical properties of PDGD and DBGD by taking a critical look at previously proven properties in the context of ranking. Second, we estimate upper and lower bounds on the performance of both methods by simulating both ideal user behavior and extremely difficult behavior, i.e., almost-random non-cascading user models. Our findings show that the theoretical bounds of DBGD do not apply to any common ranking model and, furthermore, that the performance of DBGD is substantially worse than that of PDGD in both ideal and worst-case circumstances. These results reproduce previously published findings about the relative performance of PDGD versus DBGD and generalize them to extremely noisy and non-cascading circumstances.
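For context, DBGD optimizes a ranker by repeatedly sampling a randomly perturbed candidate model and updating toward it whenever an interleaved, click-based comparison prefers the candidate over the current model. Below is a minimal sketch of one such update step, assuming a linear scoring model and a hypothetical `prefers_candidate` interleaving oracle (both names are illustrative, not from the paper):

```python
import numpy as np

def dbgd_update(w, prefers_candidate, delta=1.0, alpha=0.01):
    """One Dueling Bandit Gradient Descent step (sketch).

    w: weight vector of the current linear ranker.
    prefers_candidate: callable that interleaves the rankings of the
        current and candidate models for a query, shows the result to
        the user, and returns True if the clicks favor the candidate.
    delta: exploration step size; alpha: learning step size.
    """
    # Sample an exploration direction uniformly from the unit sphere.
    u = np.random.randn(w.shape[0])
    u /= np.linalg.norm(u)

    # Candidate model: a small perturbation of the current model.
    w_candidate = w + delta * u

    # If the click-based interleaved comparison prefers the candidate,
    # take a small step in the sampled direction; otherwise stay put.
    if prefers_candidate(w, w_candidate):
        w = w + alpha * u
    return w
```

Because each update relies on a single noisy comparison along one random direction, DBGD explores the model space slowly; PDGD instead infers pairwise document preferences directly from clicks, which is part of why it learns faster.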
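The experimental conditions can be sketched in the same way: a cascading click model scans a ranking top-down, clicks with a relevance-conditioned probability, and may stop after a click, while pushing the click probabilities toward uniform produces the almost-random, high-noise user. The probability values below are illustrative, not the paper's exact configuration:

```python
import random

def cascade_clicks(ranking, relevance, p_click, p_stop):
    """Simulate a cascading user: scan the list top-down, click
    probabilistically based on relevance, possibly stop after a click.

    ranking:   list of document ids, top-down.
    relevance: dict mapping document id to a relevance grade (0 or 1).
    p_click:   click probability per relevance grade.
    p_stop:    stop probability per relevance grade, applied after a click.
    """
    clicks = []
    for doc in ranking:
        grade = relevance[doc]
        if random.random() < p_click[grade]:
            clicks.append(doc)
            if random.random() < p_stop[grade]:
                break  # the cascading user abandons the list here
    return clicks

# Ideal user: clicks are near-perfect signals of relevance.
ideal = dict(p_click={0: 0.0, 1: 1.0}, p_stop={0: 0.0, 1: 1.0})

# Almost-random user: click probabilities barely depend on relevance,
# approximating the extremely noisy worst case.
noisy = dict(p_click={0: 0.4, 1: 0.5}, p_stop={0: 0.5, 1: 0.5})

# Usage: cascade_clicks(ranking, relevance, **ideal)
```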

Related research

09/22/2018 · Differentiable Unbiased Online Learning to Rank
Online Learning to Rank (OLTR) methods optimize rankers based on user in...

08/14/2018 · An Experimental Study of Algorithms for Online Bipartite Matching
We perform an experimental study of algorithms for online bipartite matc...

12/03/2019 · Rank Aggregation via Heterogeneous Thurstone Preference Models
We propose the Heterogeneous Thurstone Model (HTM) for aggregating ranke...

10/12/2021 · Optimizing Ranking Systems Online as Bandits
Ranking system is the core part of modern retrieval and recommender syst...

05/21/2020 · Accelerated Convergence for Counterfactual Learning to Rank
Counterfactual Learning to Rank (LTR) algorithms learn a ranking model f...

02/22/2021 · Kernel quadrature by applying a point-wise gradient descent method to discrete energies
We propose a method for generating nodes for kernel quadrature by a poin...

05/18/2018 · Efficient Exploration of Gradient Space for Online Learning to Rank
Online learning to rank (OL2R) optimizes the utility of returned search ...
