Laplacian-regularized graph bandits: Algorithms and theoretical analysis

07/12/2019
by Kaige Yang, et al.

We study contextual multi-armed bandit problems in the case of multiple users, where we exploit structure in the user domain to reduce the cumulative regret. Specifically, we model user relations as a graph, and assume that the parameters (preferences) of the users form smooth signals on this graph. This leads to a graph Laplacian-regularized estimator, for which we propose a novel bandit algorithm whose performance depends on a notion of local smoothness on the graph. We provide a closed-form solution for the estimator, enabling a theoretical analysis of its convergence property as well as of the single-user upper confidence bound (UCB) and the cumulative regret of the proposed bandit algorithm. Furthermore, we show that the regret scales linearly with the local smoothness measure, which approaches zero for densely connected graphs. The single-user UCB further allows us to propose an extension of the bandit algorithm whose computational complexity scales linearly with the number of users. We support the theoretical claims with empirical evidence, and demonstrate the advantage of the proposed algorithm over state-of-the-art graph-based bandit algorithms on both synthetic and real-world datasets.
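To illustrate the kind of estimator the abstract refers to, here is a minimal sketch of a graph Laplacian-regularized least-squares estimate of the stacked user parameters. It is an assumption-laden illustration, not the authors' implementation: the penalty weight alpha, the small ridge term added for invertibility, and all variable names are hypothetical, and the smoothness penalty is taken to be the standard quadratic form over the user graph.

# Sketch of a Laplacian-regularized estimator for multi-user contextual bandits.
# Assumed setup: n_users users with d-dimensional preference vectors theta_u,
# a user graph with Laplacian L, and observations (context x, reward r, user u).
import numpy as np

def laplacian_regularized_estimate(X, r, user_ids, L, n_users, d, alpha=1.0):
    """Closed-form estimate of vec(Theta) minimizing
    sum_t (x_t^T theta_{u_t} - r_t)^2 + alpha * vec(Theta)^T (L kron I_d) vec(Theta).

    X        : (T, d) contexts of the arms played so far
    r        : (T,)   observed rewards
    user_ids : (T,)   index of the user served at each round
    L        : (n_users, n_users) graph Laplacian over users
    alpha    : weight of the graph smoothness penalty (illustrative choice)
    """
    # Small ridge keeps the system invertible since L is always singular.
    A = alpha * np.kron(L + 1e-6 * np.eye(n_users), np.eye(d))
    b = np.zeros(n_users * d)
    for x, reward, u in zip(X, r, user_ids):
        phi = np.zeros(n_users * d)
        phi[u * d:(u + 1) * d] = x   # the context only touches the served user's block
        A += np.outer(phi, phi)
        b += reward * phi
    theta = np.linalg.solve(A, b)    # stacked user parameters vec(Theta)
    return theta.reshape(n_users, d)

A UCB-style bandit built on this estimator would additionally use the matrix A to form per-user confidence widths of the form sqrt(phi^T A^{-1} phi); the sketch above only covers the mean estimate, and a practical implementation would maintain A and b incrementally rather than rebuilding them each round.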


