Perturbed-History Exploration in Stochastic Multi-Armed Bandits

02/26/2019
by Branislav Kveton, et al.

We propose an online algorithm for cumulative regret minimization in a stochastic multi-armed bandit. In round t, the algorithm adds O(t) i.i.d. pseudo-rewards to its history and then pulls the arm with the highest estimated value in this perturbed history. Therefore, we call it perturbed-history exploration (PHE). The pseudo-rewards are designed to offset the potentially underestimated values of arms in round t with a sufficiently high probability. We analyze PHE in a K-armed bandit and prove an O(K Δ^{-1} log n) bound on its n-round regret, where Δ is the minimum gap between the expected rewards of the optimal and suboptimal arms. The key to our analysis is a novel argument that shows that randomized Bernoulli rewards lead to optimism. We compare PHE empirically to several baselines and show that it is competitive with the best of them.
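
To make the mechanism concrete, below is a minimal sketch of PHE on a Bernoulli bandit, assuming i.i.d. Bernoulli(1/2) pseudo-rewards and a tunable perturbation scale a (the number of pseudo-rewards added per observed reward). The function and variable names are ours, and this is an illustrative sketch under those assumptions, not the authors' reference implementation.

```python
import numpy as np

def phe(means, n, a=1.1, seed=0):
    """Sketch of perturbed-history exploration (PHE) on a Bernoulli
    bandit with arm means `means`, horizon `n`, and perturbation
    scale `a` (pseudo-rewards added per observed reward)."""
    rng = np.random.default_rng(seed)
    K = len(means)
    pulls = np.zeros(K, dtype=int)   # number of times each arm was pulled
    rewards = np.zeros(K)            # sum of observed rewards per arm
    best = max(means)
    regret = 0.0
    for t in range(n):
        if t < K:
            i = t                    # pull each arm once to initialize
        else:
            # Perturb each arm's history with ceil(a * pulls) i.i.d.
            # Bernoulli(1/2) pseudo-rewards, then act greedily on the
            # perturbed empirical means.
            num_pseudo = np.ceil(a * pulls).astype(int)
            pseudo = rng.binomial(num_pseudo, 0.5)
            perturbed_mean = (rewards + pseudo) / (pulls + num_pseudo)
            i = int(np.argmax(perturbed_mean))
        r = rng.binomial(1, means[i])  # observe a Bernoulli reward
        pulls[i] += 1
        rewards[i] += r
        regret += best - means[i]
    return regret

# Example: 3-armed Bernoulli bandit over 10,000 rounds.
print(phe([0.8, 0.5, 0.3], n=10_000))
```

Note that the pseudo-rewards are resampled in every round, so an arm with a short history gets a noisy, often optimistic estimate and keeps getting explored, while a frequently pulled arm's perturbed mean concentrates near its true value.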
