Multi-Armed Bandits on Unit Interval Graphs

02/12/2018
by Xiao Xu, et al.

An online learning problem with side information on the similarity and dissimilarity across different actions is considered. The problem is formulated as a stochastic multi-armed bandit problem with a graph-structured learning space. Each node in the graph represents an arm in the bandit problem and an edge between two nodes represents closeness in their mean rewards. It is shown that the resulting graph is a unit interval graph. A hierarchical learning policy is developed that offers sublinear scaling of regret with the size of the learning space by fully exploiting the side information through an offline reduction of the learning space and online aggregation of reward observations from similar arms. The order optimality of the proposed policy in terms of both the size of the learning space and the length of the time horizon is established through a matching lower bound on regret. It is further shown that when the mean rewards are bounded, complete learning with bounded regret over an infinite time horizon can be achieved. An extension to the case with only partial information on arm similarity and dissimilarity is also discussed.
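To make the aggregation idea concrete, below is a minimal Python sketch of a UCB-style policy that pools reward observations across graph neighbors, inflating the confidence radius to absorb the bias between adjacent arms. This is an illustrative sketch under assumed conditions (Bernoulli rewards, adjacent means within epsilon, a path graph as the unit interval graph), not the hierarchical policy from the paper, which additionally performs an offline reduction of the learning space. All names here (aggregated_ucb, epsilon, adjacency) are hypothetical.

```python
import numpy as np

def aggregated_ucb(means, adjacency, horizon, epsilon=0.05, seed=0):
    """UCB-style bandit that pools reward samples across graph neighbors.

    Illustrative sketch only: each arm's index combines its own samples
    with its neighbors' samples, and the confidence radius is widened by
    `epsilon` to cover the bias introduced by aggregation (assuming
    |mu_i - mu_j| <= epsilon whenever arms i and j share an edge).
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)   # per-arm pull counts
    sums = np.zeros(k)     # per-arm cumulative rewards
    best = np.max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        # Pool each arm's statistics with those of its graph neighbors.
        pooled_counts = counts + adjacency @ counts
        pooled_sums = sums + adjacency @ sums
        ucb = np.where(
            pooled_counts > 0,
            pooled_sums / np.maximum(pooled_counts, 1)
            + np.sqrt(2 * np.log(t) / np.maximum(pooled_counts, 1))
            + epsilon,  # slack for the neighbor-aggregation bias
            np.inf,     # unexplored neighborhoods get priority
        )
        arm = int(np.argmax(ucb))
        reward = rng.binomial(1, means[arm])  # Bernoulli rewards for simplicity
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

# Example: arms on a path graph (a unit interval graph) whose
# neighboring means differ by exactly epsilon = 0.05.
k = 8
means = 0.2 + 0.05 * np.arange(k)
adjacency = np.zeros((k, k))
for i in range(k - 1):
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1.0
print(aggregated_ucb(means, adjacency, horizon=5000, epsilon=0.05))
```

Pooling neighbor samples shrinks the effective exploration cost per neighborhood rather than per arm, which is the intuition behind the sublinear scaling of regret with the size of the learning space claimed in the abstract.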
