A General Framework of Multi-Armed Bandit Processes by Arm Switch Restrictions

08/20/2018
by   Wenqing Bao, et al.

This paper proposes a general framework for multi-armed bandit (MAB) processes by introducing a class of restrictions on switching among arms evolving in continuous time. The Gittins index process is constructed for any single arm subject to these switch restrictions, and the optimality of the corresponding Gittins index rule is then established. The Gittins indices defined in this paper are consistent with those for MAB processes in continuous time, integer time, the semi-Markovian setting, and the general discrete-time setting, so the new theory covers the classical models as special cases and also applies to many situations not yet addressed in the literature. While the proof of the optimality of Gittins index policies draws on ideas from the existing theory of MAB processes in continuous time, new techniques are introduced that drastically simplify the proof.
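To make the notion of a Gittins index rule concrete, the following is a minimal sketch for the classical unrestricted discrete-time case, not the continuous-time switch-restricted setting of the paper. It computes the Gittins index of each state of a finite-state arm via the well-known restart-in-state formulation (each index solves an auxiliary "continue or restart" dynamic program); the function name and the value-iteration tolerance are illustrative choices, not anything specified in the paper.

```python
import numpy as np

def gittins_indices(P, r, beta, tol=1e-10, max_iter=100000):
    """Gittins indices of a finite-state Markov arm.

    Restart-in-state formulation: for each target state s, solve
        V(x) = max( r(x) + beta * sum_y P[x,y] V(y),   # continue from x
                    r(s) + beta * sum_y P[s,y] V(y) )  # restart at s
    by value iteration; the Gittins index of s is (1 - beta) * V(s).
    P: (n, n) transition matrix, r: length-n reward vector, beta in (0, 1).
    """
    n = len(r)
    indices = np.zeros(n)
    for s in range(n):
        V = np.zeros(n)
        for _ in range(max_iter):
            cont = r + beta * (P @ V)        # value of continuing from each state
            V_new = np.maximum(cont, cont[s])  # option to restart the arm at s
            if np.max(np.abs(V_new - V)) < tol:
                V = V_new
                break
            V = V_new
        indices[s] = (1.0 - beta) * V[s]
    return indices
```

The index rule then plays, at each decision epoch, the arm whose current state has the largest index. For instance, a two-state arm that pays reward 1 forever from state 0 and 0 forever from state 1 has indices 1 and 0, matching the intuition that the index is the equivalent constant reward rate of the arm.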
