
WMRB: Learning to Rank in a Scalable Batch Training Approach

11/10/2017
by   Kuan Liu, et al.
USC Information Sciences Institute
University of Southern California

We propose a new learning to rank algorithm, named Weighted Margin-Rank Batch loss (WMRB), to extend the popular Weighted Approximate-Rank Pairwise loss (WARP). WMRB uses a new rank estimator and an efficient batch training algorithm. The approach allows more accurate item rank approximation and explicit utilization of parallel computation to accelerate training. In three item recommendation tasks, WMRB consistently outperforms WARP and other baselines. Moreover, WMRB shows clear time efficiency advantages as data scale increases.
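The abstract describes the core idea: instead of WARP's sequential sampling until a margin violation is found, WMRB estimates an item's rank from margin violations accumulated over a whole batch of negatives, which vectorizes naturally. As a rough illustration (a minimal sketch, not the authors' implementation; the log weighting and hinge form are assumptions based on the WARP family of losses), the batch loss might look like:

```python
import numpy as np

def wmrb_loss(pos_scores, neg_scores, margin=1.0):
    """Sketch of a weighted margin-rank batch (WMRB-style) loss.

    pos_scores: shape (B,)   -- score of the positive item per example.
    neg_scores: shape (B, N) -- scores of N batch negatives per example.
    """
    # Hinge violations margin - s_pos + s_neg, clipped at zero.
    # Broadcasting compares each positive against all its negatives at once,
    # so the whole batch can be computed in parallel on CPU/GPU.
    violations = np.maximum(0.0, margin - pos_scores[:, None] + neg_scores)
    # Summing violations over negatives gives a soft, batch-based
    # estimate of how poorly the positive item is ranked.
    margin_rank = violations.sum(axis=1)
    # A log weighting (as in WARP-style losses) emphasizes items
    # with poor ranks while damping already well-ranked ones.
    return np.log(1.0 + margin_rank).mean()
```

With perfectly separated scores the loss is zero; any negative scoring within the margin of its positive contributes to the rank estimate and hence to the loss. The key contrast with WARP is that no per-example sampling loop is needed, so the computation maps directly onto batched matrix operations.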

11/10/2017

A Batch Learning Framework for Scalable Personalized Ranking

In designing personalized ranking algorithms, it is desirable to encoura...
02/25/2016

Top-N Recommendation with Novel Rank Approximation

The importance of accurate recommender systems has been widely recognize...
10/05/2018

A note on spanoid rank

We construct a spanoid S on n elements with rank(S) > n^c f-rank(S) wher...
11/01/2019

ARSM Gradient Estimator for Supervised Learning to Rank

We propose a new model for supervised learning to rank. In our model, th...
08/30/2019

PLANC: Parallel Low Rank Approximation with Non-negativity Constraints

We consider the problem of low-rank approximation of massive dense non-n...
04/15/2018

Weighted Low-Rank Approximation of Matrices and Background Modeling

We primarily study a special weighted low-rank approximation of matric...