Reaching the End of Unbiasedness: Uncovering Implicit Limitations of Click-Based Learning to Rank

06/24/2022
by Harrie Oosterhuis, et al.

Click-based learning to rank (LTR) tackles the mismatch between click frequencies on items and their actual relevance. The approach of previous work has been to assume a model of click behavior and subsequently introduce a method that estimates preferences without bias under that assumed model. The success of this approach is evident in that unbiased methods have been found for an increasing number of behavior models and types of bias. This work aims to uncover the implicit limitations of the prevalent high-level approach in the counterfactual LTR field. Thus, in contrast with limitations that follow from explicit assumptions, our aim is to recognize limitations that the field is currently unaware of. We do this by inverting the existing approach: we start by capturing existing methods in generic terms, and from these generic descriptions we then derive the click behavior under which each method can be unbiased. Our inverted approach reveals that there are indeed implicit limitations to the counterfactual LTR approach: we find that counterfactual estimation can only produce unbiased methods for click behavior based on affine transformations. In addition, we recognize previously undiscussed limitations of click-modeling and pairwise approaches to click-based LTR. Our findings reveal that it is impossible for existing approaches to provide unbiasedness guarantees for all plausible click behavior models.
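To make the role of affine transformations concrete, the following sketch illustrates a standard affine click model and its correction, roughly in the spirit of the affine-correction line of work cited below. All numbers here (examination factors, noise offsets, relevance probabilities) are hypothetical values chosen for illustration, not anything from the paper:

```python
import numpy as np

# Hypothetical affine click model, per rank k:
#   P(click | item at rank k) = alpha_k * P(relevant) + beta_k
# alpha_k captures position-dependent examination, beta_k captures
# relevance-independent clicks (e.g. trust/noise). Values are illustrative.
alpha = np.array([1.0, 0.7, 0.4])      # assumed examination factors per rank
beta = np.array([0.10, 0.05, 0.02])    # assumed noise-click offsets per rank
relevance = np.array([0.9, 0.5, 0.2])  # assumed true relevance probabilities

# Expected click rates produced by the affine model:
click_rate = alpha * relevance + beta

# Affine correction: invert the transformation to recover relevance exactly.
corrected = (click_rate - beta) / alpha

# A plain inverse-propensity correction divides only by alpha, ignoring
# beta, and therefore remains biased whenever beta is nonzero.
naive_ips = click_rate / alpha

print(corrected)   # matches `relevance`
print(naive_ips)   # overestimates every item
```

Because the affine map is invertible given alpha and beta, subtracting the offset and rescaling recovers the true relevance in expectation; this invertibility is exactly what limits unbiased counterfactual estimation to affine click behavior.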


Related research

Unbiased Learning to Rank: Counterfactual and Online Approaches (07/16/2019)
This tutorial covers and contrasts the two main methodologies in unbiase...

Policy-Aware Unbiased Learning to Rank for Top-k Rankings (05/18/2020)
Counterfactual Learning to Rank (LTR) methods optimize ranking systems u...

Cascade Model-based Propensity Estimation for Counterfactual Learning to Rank (05/25/2020)
Unbiased CLTR requires click propensities to compensate for the differen...

Doubly-Robust Estimation for Unbiased Learning-to-Rank from Position-Biased Click Feedback (03/31/2022)
Clicks on rankings suffer from position bias: generally items on lower r...

When Inverse Propensity Scoring does not Work: Affine Corrections for Unbiased Learning to Rank (08/24/2020)
Besides position bias, which has been well-studied, trust bias is anothe...

Mixture-Based Correction for Position and Trust Bias in Counterfactual Learning to Rank (08/19/2021)
In counterfactual learning to rank (CLTR) user interactions are used as ...

A General Framework for Pairwise Unbiased Learning to Rank (07/18/2022)
Pairwise debiasing is one of the most effective strategies in reducing p...
