Multi-Armed Bandits on Unit Interval Graphs

02/12/2018
by Xiao Xu, et al.

An online learning problem with side information on the similarity and dissimilarity among actions is considered. The problem is formulated as a stochastic multi-armed bandit with a graph-structured learning space: each node in the graph represents an arm, and an edge between two nodes indicates closeness of their mean rewards. It is shown that the resulting graph is a unit interval graph. A hierarchical learning policy is developed that achieves regret sublinear in the size of the learning space by fully exploiting the side information through an offline reduction of the learning space and an online aggregation of reward observations from similar arms. The order optimality of the proposed policy, in terms of both the size of the learning space and the length of the time horizon, is established via a matching lower bound on regret. It is further shown that when the mean rewards are bounded, complete learning, i.e., bounded regret over an infinite time horizon, can be achieved. An extension to the case with only partial information on arm similarity and dissimilarity is also discussed.
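The abstract gives no pseudocode, but the "online aggregation of reward observations from similar arms" idea can be illustrated with a minimal sketch: a standard UCB index computed over observations pooled from an arm and its graph neighbors. The function name neighbor_aggregating_ucb, the Bernoulli reward model, and the example graph below are all illustrative assumptions, not the paper's policy, which additionally performs an offline reduction of the learning space and controls the bias that naive pooling introduces.

```python
import math
import random
from collections import defaultdict

def neighbor_aggregating_ucb(n_arms, edges, pull, horizon):
    """UCB sketch that pools observations across graph neighbors.

    edges: iterable of frozensets {i, j} marking similar arms.
    pull:  callable pull(arm) -> stochastic reward in [0, 1].
    """
    # Build an adjacency map from the similarity edges.
    neighbors = defaultdict(set)
    for e in edges:
        i, j = tuple(e)
        neighbors[i].add(j)
        neighbors[j].add(i)

    counts = [0] * n_arms   # pulls of each individual arm
    sums = [0.0] * n_arms   # total reward of each individual arm

    def index(arm, t):
        # Pool the arm's own observations with its neighbors'.
        # Neighbors have close (not equal) means, so this trades a
        # small bias for a faster-shrinking confidence interval.
        group = {arm} | neighbors[arm]
        n = sum(counts[a] for a in group)
        if n == 0:
            return float("inf")  # force initial exploration
        mean = sum(sums[a] for a in group) / n
        return mean + math.sqrt(2.0 * math.log(t) / n)

    total = 0.0
    for t in range(1, horizon + 1):
        arm = max(range(n_arms), key=lambda a: index(a, t))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# Illustrative use: five Bernoulli arms whose means sit on a line;
# edges connect arms whose means differ by at most 0.15.
means = [0.10, 0.20, 0.30, 0.80, 0.90]
edges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({3, 4})]
pull = lambda a: 1.0 if random.random() < means[a] else 0.0
print(neighbor_aggregating_ucb(len(means), edges, pull, horizon=10_000))
```

Because the means lie on a line and edges require the means to be within a fixed threshold, the similarity graph in this toy example is a unit interval graph, matching the structure the paper identifies.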
