Optimal UCB Adjustments for Large Arm Sizes

09/05/2019
by Hock Peng Chan, et al.

The regret lower bound of Lai and Robbins (1985), the gold standard for checking the optimality of bandit algorithms, treats the arm size as fixed while the sample size goes to infinity. We show that when the arm size increases polynomially with the sample size, a surprisingly smaller lower bound is achievable. This is because the larger experimentation cost incurred when there are more arms permits regret savings from exploiting the best performer more often. In particular, we construct a UCB-Large algorithm that adaptively exploits more when there are more arms. It achieves the smaller lower bound and is therefore optimal. Numerical experiments show that UCB-Large outperforms both classical UCB, which does not correct for arm size, and Thompson sampling.
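To make the idea concrete, here is a minimal sketch of a classical UCB index policy on a Bernoulli bandit. The `bonus_scale` knob is a hypothetical stand-in for the arm-size correction: shrinking the exploration bonus exploits the empirical best arm more often, which is the direction of the adjustment the abstract describes. The paper's actual UCB-Large index is not given here, so this parameterization is an illustrative assumption, not the authors' algorithm.

```python
import math
import random

def ucb_index(mean, count, t, bonus_scale):
    # Empirical mean plus an exploration bonus; classical UCB1 uses
    # bonus_scale = 2.0.  (The scale knob is an illustrative stand-in
    # for an arm-size correction, not the paper's UCB-Large index.)
    return mean + math.sqrt(bonus_scale * math.log(t) / count)

def run_ucb(arm_probs, horizon, bonus_scale, rng):
    """Run a UCB policy on a K-armed Bernoulli bandit; return pseudo-regret."""
    k = len(arm_probs)
    counts = [0] * k
    means = [0.0] * k
    # Initialize by pulling each arm once.
    for i in range(k):
        counts[i] = 1
        means[i] = 1.0 if rng.random() < arm_probs[i] else 0.0
    for t in range(k + 1, horizon + 1):
        # Pull the arm with the highest UCB index.
        i = max(range(k),
                key=lambda a: ucb_index(means[a], counts[a], t, bonus_scale))
        reward = 1.0 if rng.random() < arm_probs[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]
    # Pseudo-regret: pulls weighted by each arm's gap to the best arm.
    best = max(arm_probs)
    return sum(c * (best - p) for c, p in zip(counts, arm_probs))
```

With many arms, initialization alone costs one pull per arm, which is the larger experimentation cost the abstract refers to; a smaller bonus then recoups some of that cost by concentrating the remaining pulls on the leader.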
