Fast Distributed Bandits for Online Recommendation Systems

07/16/2020
by Kanak Mahadik, et al.

Contextual bandit algorithms are commonly used in recommender systems, where content popularity can change rapidly. These algorithms continuously learn latent mappings between users and items, based on the contexts associated with both. Recent recommendation algorithms that learn clustering or social structures among users have exhibited higher recommendation accuracy. However, as the number of users and items in the environment increases, the time required to generate recommendations deteriorates significantly, so these algorithms cannot be deployed in practice. The state-of-the-art distributed bandit algorithm, DCCB, relies on a peer-to-peer network to share information among distributed workers. However, this approach does not scale well as the number of users grows, and it suffers from slow discovery of clusters, resulting in accuracy degradation. To address these issues, this paper proposes a novel distributed bandit-based algorithm called DistCLUB. DistCLUB lazily creates clusters in a distributed manner and dramatically reduces the amount of data shared over the network, achieving high scalability. Additionally, it discovers clusters much faster, achieving better accuracy than the state-of-the-art algorithm. Evaluation on both real-world benchmarks and synthetic datasets shows that DistCLUB is on average 8.87x faster than DCCB and achieves 14.5% higher prediction performance.
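The abstract refers to clustering-of-bandits methods (CLUB-style), in which each user is modeled by a linear contextual bandit and users with similar estimated preferences are grouped so their observations can be shared. The sketch below is a minimal, self-contained illustration of that general idea, not the DistCLUB or DCCB algorithm itself: it uses a standard LinUCB-style per-user model and a naive distance-threshold clustering pass. All names (LinUCBUser, recommend, cluster_users) and parameters (alpha, threshold) are illustrative assumptions.

```python
import numpy as np

class LinUCBUser:
    """Per-user linear contextual bandit state (ridge regression + UCB)."""
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha          # exploration strength
        self.A = np.eye(d)          # X^T X + I (regularized design matrix)
        self.b = np.zeros(d)        # X^T r (reward-weighted contexts)

    def theta(self):
        # Ridge-regression estimate of the user's preference vector
        return np.linalg.solve(self.A, self.b)

    def score(self, x):
        """Upper confidence bound for an item with context vector x."""
        A_inv = np.linalg.inv(self.A)
        mean = x @ A_inv @ self.b
        width = self.alpha * np.sqrt(x @ A_inv @ x)
        return mean + width

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x


def recommend(user, item_contexts):
    """Pick the item whose context maximizes the user's UCB score."""
    return int(np.argmax([user.score(x) for x in item_contexts]))


def cluster_users(users, threshold=0.5):
    """Naive clustering: group users whose parameter estimates are close.
    (A stand-in for the graph-based cluster discovery in CLUB-style methods.)"""
    thetas = [u.theta() for u in users]
    clusters = []
    for i, ti in enumerate(thetas):
        for c in clusters:
            if np.linalg.norm(ti - thetas[c[0]]) < threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_items, n_users = 5, 20, 8
    items = rng.normal(size=(n_items, d))          # item context vectors
    users = [LinUCBUser(d) for _ in range(n_users)]
    true_theta = rng.normal(size=d)                # shared preference, demo only
    for t in range(200):
        u = users[t % n_users]
        arm = recommend(u, items)
        reward = items[arm] @ true_theta + rng.normal(scale=0.1)
        u.update(items[arm], reward)
    print(cluster_users(users))
```

In CLUB-style algorithms the observations of users placed in the same cluster are pooled to sharpen each other's estimates; the paper's contribution, as described in the abstract, is to perform this cluster discovery lazily and across distributed workers so that far less data has to be exchanged over the network.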


research
06/04/2019

Toward Building Conversational Recommender Systems: A Contextual Bandit Approach

Contextual bandit algorithms have gained increasing popularity in recomm...
research
04/26/2016

Distributed Clustering of Linear Bandits in Peer to Peer Networks

We provide two distributed confidence ball algorithms for solving linear...
research
08/21/2020

Contextual User Browsing Bandits for Large-Scale Online Mobile Recommendation

Online recommendation services recommend multiple commodities to users. ...
research
08/18/2020

Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Cold-start is a notoriously difficult problem which can occur in recomme...
research
06/26/2023

Scalable Neural Contextual Bandit for Recommender Systems

High-quality recommender systems ought to deliver both innovative and re...
research
09/28/2020

Position-Based Multiple-Play Bandits with Thompson Sampling

Multiple-play bandits aim at displaying relevant items at relevant posit...
research
05/12/2023

High Accuracy and Low Regret for User-Cold-Start Using Latent Bandits

We develop a novel latent-bandit algorithm for tackling the cold-start p...
