First-Order Regret Analysis of Thompson Sampling

02/02/2019
by   Sébastien Bubeck, et al.

We address online combinatorial optimization when the player has a prior over the adversary's sequence of losses. In this framework, Russo and Van Roy proposed an information-theoretic analysis of Thompson Sampling based on the information ratio, resulting in optimal worst-case regret bounds. In this paper we introduce three novel ideas to this line of work. First, we propose a new quantity, the scale-sensitive information ratio, which allows us to obtain more refined first-order regret bounds (i.e., bounds of the form √(L^*) where L^* is the loss of the best combinatorial action). Second, we replace the entropy over combinatorial actions by a coordinate entropy, which allows us to obtain the first optimal worst-case bound for Thompson Sampling in the combinatorial setting. Finally, we introduce a novel link between Bayesian agents and frequentist confidence intervals. Combining these ideas, we show that the classical multi-armed bandit first-order regret bound Õ(√(d L^*)) still holds true in the more challenging and more general semi-bandit scenario. This latter result improves the previous state-of-the-art bound Õ(√((d+m^3)L^*)) by Lykouris, Sridharan and Tardos.
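To make the object of study concrete: below is a minimal sketch of Thompson Sampling in the classical stochastic Bernoulli multi-armed bandit, the setting in which the Õ(√(d L^*)) first-order bound was originally known. This is *not* the paper's combinatorial semi-bandit algorithm or analysis; the arm means, Beta priors, and the `thompson_sampling` helper are illustrative assumptions. The player maintains a Beta posterior per arm, samples a mean from each posterior, and plays the arm with the highest sample.

```python
import random

def thompson_sampling(means, horizon, seed=0):
    """Illustrative Thompson Sampling for a Bernoulli bandit.

    `means` holds the arm success probabilities (unknown to the player).
    Each arm gets a Beta(a, b) posterior, initialized to the uniform
    Beta(1, 1) prior; at every round we sample one value per posterior
    and pull the arm whose sample is largest.

    Returns the realized regret against always playing the best arm
    in expectation.
    """
    rng = random.Random(seed)
    d = len(means)
    a = [1.0] * d  # posterior successes + 1
    b = [1.0] * d  # posterior failures + 1
    total_reward = 0.0
    for _ in range(horizon):
        # Posterior sampling step: one draw per arm.
        samples = [rng.betavariate(a[i], b[i]) for i in range(d)]
        arm = max(range(d), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1.0 if rng.random() < means[arm] else 0.0
        total_reward += reward
        a[arm] += reward
        b[arm] += 1.0 - reward
    return horizon * max(means) - total_reward
```

Running this on a few arms shows regret growing far more slowly than the horizon, which is the qualitative behavior the √(d L^*)-type bounds quantify; the paper's contribution is extending such guarantees to the combinatorial semi-bandit case, where each action pulls a subset of coordinates at once.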


research
04/20/2012

Regret in Online Combinatorial Optimization

We address online linear optimization problems when the possible actions...
research
05/30/2018

An Information-Theoretic Analysis of Thompson Sampling for Large Action Spaces

Information-theoretic Bayesian regret bounds of Russo and Van Roy captur...
research
02/09/2018

Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits

Regret bounds in online learning compare the player's performance to L^*...
research
03/05/2017

Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications

We study combinatorial multi-armed bandit with probabilistically trigger...
research
02/12/2020

A General Framework to Analyze Stochastic Linear Bandit

In this paper we study the well-known stochastic linear bandit problem w...
research
01/29/2018

Information Directed Sampling and Bandits with Heteroscedastic Noise

In the stochastic bandit problem, the goal is to maximize an unknown fun...
research
06/11/2020

Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits

We investigate stochastic combinatorial multi-armed bandit with semi-ban...
