Context-Aware Bandits

10/12/2015
by Shuai Li, et al.

We propose an efficient Context-Aware clustering of Bandits (CAB) algorithm that captures collaborative effects. CAB can be easily deployed in a real-world recommendation system, where multi-armed bandits have been shown to perform well, in particular on the cold-start problem. CAB combines a context-aware clustering procedure with exploration-exploitation strategies: it dynamically clusters users based on the content universe under consideration. We give a theoretical analysis in the standard stochastic multi-armed bandit setting. We show the efficiency of our approach on production and real-world datasets, demonstrate its scalability, and, more importantly, its significantly improved prediction performance against several state-of-the-art methods.
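To illustrate the idea of context-dependent clustering in a bandit loop, the sketch below keeps a per-user linear (ridge-regression) payoff estimate and, for each candidate item, aggregates the users whose estimates for that item are close to the target user's. This is a minimal illustration under assumed parameter names (`alpha` for exploration strength, `gamma` for the clustering threshold), not the authors' implementation.

```python
import numpy as np

class CABSketch:
    """Minimal sketch of context-aware clustering of bandits.
    Illustrative only; parameter names are assumptions."""

    def __init__(self, n_users, dim, alpha=1.0, gamma=0.2, lam=1.0):
        self.alpha = alpha  # exploration strength (assumed name)
        self.gamma = gamma  # clustering threshold (assumed name)
        # Per-user ridge-regression state: A = lam*I + sum(x x^T), b = sum(r x)
        self.A = [lam * np.eye(dim) for _ in range(n_users)]
        self.b = [np.zeros(dim) for _ in range(n_users)]

    def _estimate(self, u, x):
        # Point estimate w^T x and a confidence width for user u on context x
        w = np.linalg.solve(self.A[u], self.b[u])
        cb = self.alpha * np.sqrt(x @ np.linalg.solve(self.A[u], x))
        return w @ x, cb

    def recommend(self, user, contexts):
        """contexts: array of shape (n_arms, dim); returns an arm index."""
        n_users = len(self.A)
        best_arm, best_score = 0, -np.inf
        for k, x in enumerate(contexts):
            mu_u, _ = self._estimate(user, x)
            # Context-dependent neighborhood: users whose estimated payoff
            # for this particular item is within gamma of the target user's.
            hood = [v for v in range(n_users)
                    if abs(self._estimate(v, x)[0] - mu_u) <= self.gamma]
            # Aggregate the neighborhood's estimates and pick by UCB score.
            mu = np.mean([self._estimate(v, x)[0] for v in hood])
            cb = np.mean([self._estimate(v, x)[1] for v in hood])
            if mu + cb > best_score:
                best_arm, best_score = k, mu + cb
        return best_arm

    def update(self, user, x, reward):
        # Standard rank-one update of the chosen user's regression state
        self.A[user] += np.outer(x, x)
        self.b[user] += reward * x
```

Note that the neighborhood is recomputed per item, so two users may be clustered together for one piece of content and apart for another, which is the "context-aware" aspect the abstract describes.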

Related research

Graph Clustering Bandits for Recommendation (05/02/2016)
We investigate an efficient context-dependent clustering technique for r...

Online Clustering of Bandits (01/31/2014)
We introduce a novel algorithmic approach to content recommendation base...

On Context-Dependent Clustering of Bandits (08/06/2016)
We investigate a novel cluster-of-bandit algorithm CAB for collaborative...

Pure Exploration in Multi-armed Bandits with Graph Side Information (08/02/2021)
We study pure exploration in multi-armed bandits with graph side-informa...

Combining Difficulty Ranking with Multi-Armed Bandits to Sequence Educational Content (04/14/2018)
As e-learning systems become more prevalent, there is a growing need for...

Bandit-PAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits (06/11/2020)
Clustering is a ubiquitous task in data science. Compared to the commonl...

A Gang of Bandits (06/04/2013)
Multi-armed bandit problems are receiving a great deal of attention beca...
