On learning Whittle index policy for restless bandits with scalable regret
Reinforcement learning is an attractive approach to learn good resource allocation and scheduling policies from data when the system model is unknown. However, the cumulative regret of most RL algorithms scales as Õ(𝒮√(𝒜T)), where 𝒮 is the size of the state space, 𝒜 is the size of the action space, T is the horizon, and the Õ(·) notation hides logarithmic terms. Due to the linear dependence on the size of the state space, these regret bounds are prohibitively large for resource allocation and scheduling problems. In this paper, we present a model-based RL algorithm for such problems which has scalable regret. In particular, we consider a restless bandit model, and propose a Thompson-sampling based learning algorithm which is tuned to the underlying structure of the model. We present two characterizations of the regret of the proposed algorithm with respect to the Whittle index policy. First, we show that for a restless bandit with n arms and at most m activations at each time, the regret scales either as Õ(mn√T) or Õ(n^2 √T) depending on the reward model. Second, under an additional technical assumption, we show that the regret scales as Õ(n^1.5 √T). We present numerical examples to illustrate the salient features of the algorithm.
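To make the setting concrete, the following is a minimal sketch of a Thompson-sampling loop that acts with a Whittle index policy on a restless bandit with n arms and at most m activations per step. Everything here is an illustrative assumption rather than the paper's actual algorithm or analysis: the two-state arms, the Dirichlet posteriors over per-arm transition kernels, the per-step posterior sampling (the paper's episode schedule differs), and the grid-based `whittle_index` solver are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: n independent two-state arms (assumption, not the paper's model).
n_arms, m_active, n_states, T = 5, 2, 2, 200
true_P = {a: rng.dirichlet(np.ones(n_states), size=(n_states, 2)) for a in range(n_arms)}
rewards = np.array([0.0, 1.0])        # reward earned when an arm is activated in state s
state = np.zeros(n_arms, dtype=int)

# Dirichlet posterior counts over transition kernels, one per (arm, state, action).
counts = {a: np.ones((n_states, 2, n_states)) for a in range(n_arms)}

def whittle_index(P, s, gamma=0.95):
    """Approximate Whittle index of an arm in state s: the smallest passivity
    subsidy for which the passive action is (weakly) optimal in the
    subsidized single-arm MDP (grid search + value iteration, for illustration)."""
    for lam in np.linspace(-1.0, 2.0, 31):
        V = np.zeros(n_states)
        for _ in range(100):                          # value iteration
            q_pass = lam + gamma * P[:, 0, :] @ V
            q_act = rewards + gamma * P[:, 1, :] @ V
            V = np.maximum(q_pass, q_act)
        if lam + gamma * P[s, 0, :] @ V >= rewards[s] + gamma * P[s, 1, :] @ V:
            return lam
    return 2.0

for t in range(T):
    # Thompson sampling: draw a model from the posterior, compute indices under it.
    sampled_P = {a: np.array([[rng.dirichlet(counts[a][s_, u]) for u in range(2)]
                              for s_ in range(n_states)]) for a in range(n_arms)}
    indices = np.array([whittle_index(sampled_P[a], state[a]) for a in range(n_arms)])
    active = np.argsort(indices)[-m_active:]          # activate the m arms with largest index

    for a in range(n_arms):
        u = int(a in active)
        s_next = rng.choice(n_states, p=true_P[a][state[a], u])
        counts[a][state[a], u, s_next] += 1           # posterior update from the observed transition
        state[a] = s_next
```

The key structural point the sketch reflects is that learning is decoupled across arms: each arm keeps its own posterior over its own small transition kernel, and the coupling enters only through the index-based activation rule, which is what allows the regret to scale with n rather than with the size of the joint state space.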