Fighting Bandits with a New Kind of Smoothness

12/14/2015 ∙ by Jacob Abernethy, et al.

We define a novel family of algorithms for the adversarial multi-armed bandit problem and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the Tsallis entropy, which includes EXP3 as a special case, achieves the Θ(√(TN)) minimax regret. Second, we show that a wide class of perturbation methods achieves near-optimal regret as low as O(√(TN log N)) if the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key property.
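To make the perturbation approach concrete, here is a minimal sketch of follow-the-perturbed-leader with Gumbel noise, the special case whose play distribution has a closed form and which recovers EXP3. This is an illustrative sketch, not the paper's exact algorithm or analysis; the environment callback loss_fn, the learning-rate tuning, and all names are assumptions.

```python
import numpy as np

def ftpl_gumbel_bandit(loss_fn, N, T, eta=None, rng=None):
    """Adversarial N-armed bandit via follow-the-perturbed-leader.

    Gumbel perturbations are the special case where the induced play
    distribution has a closed form (a softmax), so the importance-weighted
    loss estimator is easy to compute; this case coincides with EXP3.
    loss_fn(t, arm) -> loss in [0, 1] is a hypothetical environment
    callback, not from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Standard EXP3-style tuning (an assumption), giving O(sqrt(T N log N)) regret.
    eta = eta if eta is not None else np.sqrt(np.log(N) / (T * N))
    L_hat = np.zeros(N)   # importance-weighted cumulative loss estimates
    total = 0.0
    for t in range(T):
        # Play the perturbed leader: argmin of eta * L_hat minus Gumbel noise.
        arm = int(np.argmin(eta * L_hat - rng.gumbel(size=N)))
        # For Gumbel noise the argmin distribution is exactly this softmax.
        w = np.exp(-eta * (L_hat - L_hat.min()))
        p = w / w.sum()
        loss = loss_fn(t, arm)          # bandit feedback: one coordinate only
        total += loss
        L_hat[arm] += loss / p[arm]     # unbiased estimate of the full loss vector
    return total
```

For the other hazard-rate-bounded distributions named above (Weibull, Fréchet, Pareto, Gamma), the play probabilities generally lack a closed form; in practice they can be estimated by Monte Carlo over the perturbed argmin.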
