
WMRB: Learning to Rank in a Scalable Batch Training Approach

by Kuan Liu, et al.
USC Information Sciences Institute
University of Southern California

We propose a new learning to rank algorithm, named Weighted Margin-Rank Batch loss (WMRB), to extend the popular Weighted Approximate-Rank Pairwise loss (WARP). WMRB uses a new rank estimator and an efficient batch training algorithm. The approach allows more accurate item rank approximation and explicit utilization of parallel computation to accelerate training. In three item recommendation tasks, WMRB consistently outperforms WARP and other baselines. Moreover, WMRB shows clear time efficiency advantages as data scale increases.
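To make the idea concrete, the sketch below illustrates a margin-rank batch loss of the kind the abstract describes: for each observed (positive) item, the rank is estimated by summing hinge-style margin violations against all negative items in the batch, and that rank estimate is weighted by log(1 + r) so that errors near the top of the ranking are penalized more. This is a minimal illustration assuming dense score matrices and a log(1 + r) rank weighting; the function name `wmrb_loss`, the margin value, and the tensor layout are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def wmrb_loss(scores, pos_mask, margin=1.0):
    """Sketch of a weighted margin-rank batch loss.

    scores:   (n_users, n_items) array of model scores f(u, i).
    pos_mask: (n_users, n_items) boolean array, True at observed positives.

    For each positive item, estimate its rank by the sum of hinge
    violations against every negative item, then weight the rank
    estimate with log(1 + r). The total loss sums over all positives.
    """
    total = 0.0
    for u in range(scores.shape[0]):
        pos = scores[u, pos_mask[u]]      # scores of observed items
        neg = scores[u, ~pos_mask[u]]     # scores of unobserved items
        # Margin-rank estimate: hinge term for every (positive, negative) pair,
        # summed over negatives. Shape: (n_pos, n_neg) -> (n_pos,)
        violations = np.maximum(0.0, margin - pos[:, None] + neg[None, :])
        rank_estimate = violations.sum(axis=1)
        # Weight the rank so top-of-list mistakes dominate the loss.
        total += np.log1p(rank_estimate).sum()
    return float(total)
```

Because the rank estimate is a sum over all negatives rather than a sampled count, the inner computation is a dense matrix operation that parallelizes naturally, which is the batch-training advantage the abstract highlights over WARP's sequential sampling.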



