On learning Whittle index policy for restless bandits with scalable regret

02/07/2022 ∙ by Nima Akbarzadeh, et al.
Reinforcement learning is an attractive approach to learn good resource allocation and scheduling policies from data when the system model is unknown. However, the cumulative regret of most RL algorithms scales as Õ(S√(AT)), where S is the size of the state space, A is the size of the action space, T is the horizon, and the Õ(·) notation hides logarithmic terms. Due to the linear dependence on the size of the state space, these regret bounds are prohibitively large for resource allocation and scheduling problems. In this paper, we present a model-based RL algorithm for such problems which has scalable regret. In particular, we consider a restless bandit model and propose a Thompson-sampling based learning algorithm which is tuned to the underlying structure of the model. We present two characterizations of the regret of the proposed algorithm with respect to the Whittle index policy. First, we show that for a restless bandit with n arms and at most m activations at each time, the regret scales either as Õ(mn√T) or Õ(n^2 √T), depending on the reward model. Second, under an additional technical assumption, we show that the regret scales as Õ(n^1.5 √T). We present numerical examples to illustrate the salient features of the algorithm.
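
To make the action-selection step concrete, the sketch below shows how a Whittle index policy activates, at each time step, the m arms whose current Whittle indices are largest. This is a minimal illustrative sketch, not the paper's implementation; in the Thompson-sampling scheme described above, the indices themselves would be recomputed from a model sampled from the per-arm posterior at the start of each learning episode, and the index values used here are stand-ins.

```python
import numpy as np

def whittle_index_policy(indices, m):
    """Activate the m arms with the largest Whittle indices.

    indices : per-arm Whittle indices evaluated at the arms' current
              states, shape (n,)
    m       : maximum number of arms that may be activated per step
    Returns a 0/1 action vector of shape (n,).
    """
    indices = np.asarray(indices)
    action = np.zeros(indices.shape[0], dtype=int)
    action[np.argsort(indices)[-m:]] = 1  # pick the top-m arms by index
    return action

# Toy usage: 5 arms, at most 2 activations per time step.
# The "indices" here are random placeholders for values that would be
# computed from a sampled transition model in the learning algorithm.
rng = np.random.default_rng(0)
current_indices = rng.normal(size=5)
print(whittle_index_policy(current_indices, m=2))
```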
