Gradient Ascent for Active Exploration in Bandit Problems

05/20/2019
by Pierre Ménard, et al.

We present a new algorithm based on gradient ascent for a general Active Exploration bandit problem in the fixed confidence setting. This problem encompasses several well-studied problems such as Best Arm Identification and Thresholding Bandits. The algorithm relies on a new sampling rule based on online lazy mirror ascent. We prove that this algorithm is asymptotically optimal and, most importantly, computationally efficient.
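For intuition, the core of such a sampling rule is an online lazy mirror ascent (dual averaging) step on the simplex of arm-sampling proportions. The Python sketch below shows this generic update under the usual entropic mirror map; the gradient oracle `grad_fn`, learning rate `eta`, and uniform-mixing parameter `mix` are illustrative placeholders, not the paper's method: the actual gradients come from the optimization problem characterizing the sample-complexity lower bound, and the learning-rate and forced-exploration schedules are specified in the paper.

```python
import numpy as np

def lazy_mirror_ascent(grad_fn, n_arms, n_steps, eta=0.5, mix=0.0):
    """Online lazy mirror ascent (dual averaging) on the probability simplex.

    grad_fn(t, w) returns a (possibly noisy) gradient of the objective at the
    current weights w. With the negative-entropy mirror map, each update is an
    exponential-weights step on the *cumulative* gradient, which is what makes
    the scheme "lazy": only the dual variable is updated each round.
    """
    z = np.zeros(n_arms)               # cumulative gradient (dual variable)
    w = np.full(n_arms, 1.0 / n_arms)  # start from the uniform distribution
    for t in range(1, n_steps + 1):
        z += grad_fn(t, w)             # accumulate; no per-step projection
        logits = eta * z
        logits -= logits.max()         # numerical stability before exp
        w = np.exp(logits)
        w /= w.sum()                   # entropic mirror step onto the simplex
        if mix > 0.0:                  # optional forced exploration
            w = (1.0 - mix) * w + mix / n_arms
    return w

if __name__ == "__main__":
    # Toy check: maximize f(w) = -||w - p||^2, whose maximizer on the
    # simplex is p itself; the iterates should approach p.
    p = np.array([0.4, 0.3, 0.2, 0.1])
    grad = lambda t, w: -2.0 * (w - p)
    print(lazy_mirror_ascent(grad, n_arms=4, n_steps=2000))
```

The lazy (dual-averaging) form is what keeps the rule computationally cheap: each round costs one gradient accumulation and one softmax, regardless of the horizon.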
