Learning to Re-rank with Constrained Meta-Optimal Transport

04/29/2023
by Andrés Hoyos-Idrobo, et al.

Many re-ranking strategies in search systems rely on stochastic ranking policies, encoded as doubly-stochastic (DS) matrices, that satisfy desired ranking constraints in expectation, e.g., Fairness of Exposure (FOE). These strategies are generally two-stage pipelines: i) an offline re-ranking-policy construction step and ii) an online ranking-sampling step. Building a re-ranking policy requires repeatedly solving a constrained optimization problem, one for each issued query; the optimization must therefore be recomputed for every new or unseen query. Regarding sampling, the Birkhoff-von Neumann decomposition (BvND) is the favored approach for drawing rankings from any DS-based policy. However, the BvND is too costly to compute online, and storing precomputed decompositions is memory-consuming, as they can grow as O(N n^2) for N queries and n documents. This paper offers a novel, fast, lightweight way to predict fair stochastic re-ranking policies: Constrained Meta-Optimal Transport (CoMOT). This method fits a single neural network shared across queries, akin to a learning-to-rank system. We also introduce Gumbel-Matching Sampling (GumMS), an online approach for sampling rankings from DS-based policies. Our proposed pipeline, CoMOT + GumMS, only needs to store the parameters of a single model, and it generalizes to unseen queries. We empirically evaluated our pipeline on the TREC 2019 and 2020 datasets under FOE constraints. Our experiments show that CoMOT rapidly predicts fair re-ranking policies on held-out data, with a speed-up proportional to the average number of documents per query, while displaying fairness and ranking performance similar to the original optimization-based policy. Furthermore, we empirically validated the effectiveness of GumMS in approximating DS-based policies in expectation.
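The abstract leans on two constructions worth making concrete: the BvND, which writes any doubly-stochastic matrix as a convex mixture of permutation matrices (each permutation being one ranking), and GumMS, which samples rankings online without storing that decomposition. Below is a minimal NumPy/SciPy sketch of both ideas. The greedy BvND routine is a standard textbook algorithm; the `gumbel_matching_sample` body is an assumption inferred from the method's name (Gumbel perturbation plus a matching solve), not the paper's actual algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(P, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition: express a doubly-stochastic
    matrix P as a convex combination of permutation matrices."""
    weights, perms = [], []
    R = np.asarray(P, dtype=float).copy()
    while R.max() > tol:
        # Find a permutation supported on the positive entries of R by
        # minimizing how many (near-)zero entries it uses; Birkhoff's theorem
        # guarantees a zero-cost matching exists for a DS residual.
        cost = (R <= tol).astype(float)
        rows, cols = linear_sum_assignment(cost)
        if cost[rows, cols].sum() > 0:  # numerical safety net
            break
        theta = R[rows, cols].min()     # largest weight we can peel off
        perm = np.zeros_like(R)
        perm[rows, cols] = 1.0
        weights.append(theta)
        perms.append(perm)
        R = R - theta * perm
    return np.array(weights), perms

def gumbel_matching_sample(P, rng, eps=1e-20):
    """Hypothetical sketch of Gumbel-Matching sampling: perturb log P with
    i.i.d. Gumbel noise and solve a maximum-weight matching, yielding a
    random permutation whose distribution tracks the DS policy P."""
    noisy = np.log(np.asarray(P, dtype=float) + eps) + rng.gumbel(size=P.shape)
    rows, cols = linear_sum_assignment(noisy, maximize=True)
    return cols  # cols[i] = position assigned to document i
```

For example, the 3x3 policy that mixes the identity and a cyclic shift with equal probability decomposes into exactly those two permutations with weights 0.5 each, and the memory argument in the abstract follows from storing up to O(n^2) such permutation terms per query.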
