Balancing Speed and Quality in Online Learning to Rank for Information Retrieval

11/26/2017
by Harrie Oosterhuis, et al.

In Online Learning to Rank (OLTR) the aim is to find an optimal ranking model by interacting with users. When learning from user behavior, a system must interact with its users while simultaneously learning from those interactions. Unlike other Learning to Rank (LTR) settings, research in this field has so far been limited to linear models. The reason is the speed-quality tradeoff that arises when selecting models: complex models are more expressive and can find the best rankings, but they need more user interactions to do so, a requirement that risks frustrating users during training. Conversely, simpler models can be optimized on fewer interactions and thus provide a better user experience, but they converge towards suboptimal rankings. This tradeoff creates a deadlock: a novel model cannot improve either the user experience or the final convergence point without sacrificing the other. Our contribution is twofold. First, we introduce a fast OLTR model called Sim-MGD that addresses the speed aspect of the speed-quality tradeoff. Sim-MGD ranks documents based on their similarities to a small set of reference documents, which greatly reduces the number of parameters that have to be learned; it therefore converges rapidly and gives a better user experience, but it does not converge towards the optimal rankings. Second, we contribute Cascading Multileave Gradient Descent (C-MGD), which directly addresses the speed-quality tradeoff by using a cascade that combines the best of both worlds: fast learning and high-quality final convergence. C-MGD provides the better user experience of Sim-MGD while maintaining the same point of convergence as the state-of-the-art MGD model. This opens the door for future work to design new OLTR models without having to deal with the speed-quality tradeoff.
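To make the two ideas above concrete, the following Python sketch illustrates them under stated assumptions; it is not the authors' implementation. SimRanker scores documents by their cosine similarity to a handful of reference documents (the Sim-MGD idea of a small, fast-to-learn model), LinearRanker is the more expressive model with one weight per feature, and cascade hands over from the former to the latter once learning plateaus (the C-MGD idea). The perturbation-style update, the simulated reciprocal-rank feedback signal, and all names and thresholds (online_update, patience, the 1e-4 improvement margin) are illustrative assumptions; the actual method relies on Multileave Gradient Descent driven by preferences inferred from user clicks.

# Minimal sketch, not the authors' code: a similarity-based ranker with few
# parameters (the Sim-MGD idea), a more expressive linear ranker, and a
# cascade that hands over from the first to the second once learning
# plateaus (the C-MGD idea). Update rule, feedback signal, and thresholds
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)


def cosine_sim(X, R):
    """Cosine similarity between each document row in X and each reference in R."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Rn = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-12)
    return Xn @ Rn.T


class SimRanker:
    """Few parameters: one weight per reference document, so it learns fast."""
    def __init__(self, references):
        self.R, self.w = references, np.zeros(len(references))

    def score(self, X):
        return cosine_sim(X, self.R) @ self.w


class LinearRanker:
    """One weight per feature: more expressive, but needs more interactions."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def score(self, X):
        return X @ self.w


def online_update(ranker, X, rel, lr=0.1, noise=0.05):
    """Crude stand-in for a dueling-bandit-style update: keep a random weight
    perturbation only if it improves a simulated feedback signal."""
    def utility(scores):
        # Reciprocal rank of the first relevant document, standing in for the
        # implicit feedback a live system would infer from user clicks.
        order = np.argsort(-scores)
        return 1.0 / (1 + int(np.argmax(rel[order])))

    before = utility(ranker.score(X))
    delta = rng.normal(scale=noise, size=ranker.w.shape)
    ranker.w += lr * delta
    after = utility(ranker.score(X))
    if after < before:
        ranker.w -= lr * delta  # revert perturbations that hurt the ranking
    return max(before, after)


def cascade(simple, expressive, queries, patience=50):
    """Train the simple model until it stops improving, then spend the
    remaining interactions on the expressive model."""
    active, best, stale = simple, -np.inf, 0
    for X, rel in queries:
        u = online_update(active, X, rel)
        if u > best + 1e-4:
            best, stale = u, 0
        else:
            stale += 1
        if active is simple and stale >= patience:
            active, best, stale = expressive, -np.inf, 0  # hand over
    return active


# Purely synthetic usage example: 20 documents with 10 features, 3 of the
# documents reused as references, one repeated query with random relevance.
X = rng.normal(size=(20, 10))
rel = (rng.random(20) > 0.7).astype(float)
ranker = cascade(SimRanker(X[:3].copy()), LinearRanker(10), [(X, rel)] * 300)

A faithful C-MGD implementation would also transfer what the simple model has learned to the expressive model at the moment of handover, so that users do not experience a drop in ranking quality; that warm-start detail is omitted from this sketch.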


Related research

Differentiable Unbiased Online Learning to Rank (09/22/2018)
Online Learning to Rank (OLTR) methods optimize rankers based on user in...

Online Learning to Rank in Stochastic Click Models (03/07/2017)
Online learning to rank is a core problem in information retrieval and m...

Reinforcement Online Learning to Rank with Unbiased Reward Shaping (01/05/2022)
Online learning to rank (OLTR) aims to learn a ranker directly from impl...

Efficient Exploration of Gradient Space for Online Learning to Rank (05/18/2018)
Online learning to rank (OL2R) optimizes the utility of returned search ...

Counterfactual Learning To Rank for Utility-Maximizing Query Autocompletion (04/22/2022)
Conventional methods for query autocompletion aim to predict which compl...

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent (11/05/2020)
We consider a natural model of online preference aggregation, where sets...

Learning-to-Rank at the Speed of Sampling: Plackett-Luce Gradient Estimation With Minimal Computational Complexity (04/22/2022)
Plackett-Luce gradient estimation enables the optimization of stochastic...
