Ranking via Sinkhorn Propagation

06/09/2011
by   Ryan Prescott Adams, et al.

It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives can always be described through locations in the Birkhoff polytope, i.e., doubly-stochastic matrices (DSMs). We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization. Gradients of this operator can be computed via backpropagation, resulting in an algorithm we call Sinkhorn propagation, or SinkProp. This approach can be combined with a wide range of gradient-based approaches to rank learning. We demonstrate the utility of SinkProp on several information retrieval data sets.
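As a rough illustration of the idea in the abstract (a sketch, not the authors' implementation), the snippet below uses JAX: a hypothetical sinkhorn routine maps a matrix of raw scores to an approximately doubly-stochastic matrix by alternating row and column normalizations, a hypothetical expected_dcg evaluates a DCG-style rank-linear gain directly from those marginals, and jax.grad backpropagates through the unrolled normalization steps in the spirit of SinkProp. The gain and discount values are made-up example numbers.

import jax
import jax.numpy as jnp

def sinkhorn(scores, n_iters=20):
    # Exponentiate so all entries are strictly positive, then alternately
    # normalize rows and columns; the iterates approach a doubly-stochastic
    # matrix, i.e., a point in the Birkhoff polytope.
    M = jnp.exp(scores)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

def expected_dcg(scores, gains, discounts, n_iters=20):
    # Rank-linear objective: reading P[i, j] as the marginal probability
    # that item i is placed at rank j, the expected gain is linear in P,
    # here a DCG-style sum of gains[i] * discounts[j] * P[i, j].
    P = sinkhorn(scores, n_iters)
    return jnp.sum(P * jnp.outer(gains, discounts))

# Backpropagating through the unrolled Sinkhorn iterations ("SinkProp")
# yields gradients of the expected gain with respect to the raw scores,
# so the scoring function can be trained with any gradient-based method.
grad_fn = jax.grad(expected_dcg)

key = jax.random.PRNGKey(0)
scores = jax.random.normal(key, (5, 5))           # raw item-by-rank scores (example)
gains = jnp.array([3.0, 1.0, 0.0, 7.0, 0.0])      # e.g. 2^relevance - 1 (example)
discounts = 1.0 / jnp.log2(jnp.arange(2.0, 7.0))  # standard DCG discounts
print(expected_dcg(scores, gains, discounts))
print(grad_fn(scores, gains, discounts))

In practice more Sinkhorn iterations give a tighter projection onto the set of doubly-stochastic matrices at the cost of a longer backpropagation chain; the paper treats this trade-off in detail.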


Related research

09/26/2013  Stochastic Rank Aggregation
This paper addresses the problem of rank aggregation, which aims to find...

05/23/2017  Hashing as Tie-Aware Learning to Rank
Hashing, or learning binary embeddings of data, is frequently used in ne...

04/04/2022  Which Tricks are Important for Learning to Rank?
Nowadays, state-of-the-art learning-to-rank (LTR) methods are based on g...

03/02/2018  RankDCG: Rank-Ordering Evaluation Measure
Ranking is used for a wide array of problems, most notably information r...

08/16/2016  Scalable Learning of Non-Decomposable Objectives
Modern retrieval systems are often driven by an underlying machine learn...

08/09/2014  The Lovasz-Bregman Divergence and connections to rank aggregation, clustering, and web ranking
We extend the recently introduced theory of Lovasz-Bregman (LB) divergen...
