Optimal Strategies for Graph-Structured Bandits

07/07/2020
by Hassan Saber, et al.

We study a structured variant of the multi-armed bandit problem specified by a set of Bernoulli distributions ν = (ν_{a,b})_{a∈𝒜, b∈ℬ} with means (μ_{a,b})_{a∈𝒜, b∈ℬ} ∈ [0,1]^{𝒜×ℬ} and by a given weight matrix ω = (ω_{b,b'})_{b,b'∈ℬ}, where 𝒜 is a finite set of arms and ℬ is a finite set of users. The weight matrix ω is such that for any two users b, b' ∈ ℬ, max_{a∈𝒜} |μ_{a,b} − μ_{a,b'}| ≤ ω_{b,b'}. This formulation is flexible enough to capture a variety of situations, from highly structured scenarios (ω ∈ {0,1}^{ℬ×ℬ}) to fully unstructured setups (ω ≡ 1). We consider two scenarios, depending on whether the learner chooses only the actions to sample rewards from, or both users and actions. We first derive problem-dependent lower bounds on the regret for this generic graph structure, which involve a structure-dependent linear programming problem. Second, we adapt to this setting the Indexed Minimum Empirical Divergence (IMED) algorithm introduced by Honda and Takemura (2015), and introduce the IMED-GS^⋆ algorithm. Interestingly, IMED-GS^⋆ does not require computing the solution of the linear programming problem more than about log(T) times after T steps, while being provably asymptotically optimal. Also, unlike existing bandit strategies designed for other popular structures, IMED-GS^⋆ does not resort to an explicit forced-exploration scheme and only makes use of local counts of empirical events. We finally provide numerical illustrations of our results that confirm the performance of IMED-GS^⋆.
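To make the setting concrete, the sketch below (a hypothetical illustration, not the paper's implementation) shows two ingredients in Python/NumPy: a check of the structural assumption max_a |μ_{a,b} − μ_{a,b'}| ≤ ω_{b,b'} on a small problem instance, and the index of the vanilla IMED algorithm of Honda and Takemura (2015) for Bernoulli arms, which IMED-GS^⋆ builds upon. The function names and the toy instance are our own; the IMED-GS^⋆ algorithm itself additionally exploits the graph structure and the associated linear program, which is not reproduced here.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def check_weight_matrix(mu, omega):
    """Verify the structural assumption: for all users b, b',
    max over arms a of |mu[a, b] - mu[a, b']| <= omega[b, b'].
    mu has shape (num_arms, num_users); omega has shape (num_users, num_users)."""
    gaps = np.max(np.abs(mu[:, :, None] - mu[:, None, :]), axis=0)
    return bool(np.all(gaps <= omega))

def imed_indices(counts, means):
    """Vanilla IMED index of each arm: N_a * KL(mu_hat_a, mu_hat_max) + log(N_a).
    The arm minimizing this index is pulled next; a current best arm has
    index log(N_a). Assumes every arm was pulled at least once (counts >= 1)."""
    mu_max = np.max(means)
    return np.array([n * kl_bernoulli(m, mu_max) + np.log(n)
                     for n, m in zip(counts, means)])
```

For instance, on a toy problem with two arms and two users, `check_weight_matrix` accepts ω = [[0, 0.1], [0.1, 0]] for means [[0.2, 0.25], [0.7, 0.6]] (the largest cross-user gap is 0.1) and rejects any tighter weight matrix; `imed_indices` then trades off empirical divergence against how often each arm has been sampled, with no forced-exploration term.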


Related research

- 12/02/2021 · Indexed Minimum Empirical Divergence for Unimodal Bandits — We consider a multi-armed bandit problem specified by a set of one-dimen...
- 10/18/2018 · Exploiting Correlation in Finite-Armed Structured Bandits — We consider a correlated multi-armed bandit problem in which rewards of ...
- 06/30/2020 · Forced-exploration free Strategies for Unimodal Bandits — We consider a multi-armed bandit problem specified by a set of Gaussian ...
- 07/02/2020 · Structure Adaptive Algorithms for Stochastic Bandits — We study reward maximisation in a wide class of structured stochastic mu...
- 01/25/2019 · Almost Boltzmann Exploration — Boltzmann exploration is widely used in reinforcement learning to provid...
- 10/05/2020 · Diversity-Preserving K-Armed Bandits, Revisited — We consider the bandit-based framework for diversity-preserving recommen...
- 08/31/2020 · Asymptotically optimal strategies for online prediction with history-dependent experts — We establish sharp asymptotically optimal strategies for the problem of ...
