An Information-Theoretic Analysis of Thompson Sampling for Large Action Spaces

05/30/2018
by Shi Dong, et al.

Information-theoretic Bayesian regret bounds of Russo and Van Roy capture the dependence of regret on prior uncertainty. However, this dependence is through entropy, which can become arbitrarily large as the number of actions increases. We establish new bounds that depend instead on a notion of rate-distortion. Among other things, this allows us to recover through information-theoretic arguments a near-optimal bound for the linear bandit. We also offer a bound for the logistic bandit that dramatically improves on the best previously available, though this bound depends on an information-theoretic statistic that we have only been able to quantify via computation.
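As background for the linear-bandit setting the abstract refers to, the following is a minimal sketch of Thompson sampling for a linear-Gaussian bandit with a conjugate Gaussian prior. It is illustrative only and not the paper's construction; the dimensions, action set, noise level, and horizon are all arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian bandit (parameters are not from the paper):
# reward = <theta*, a> + Gaussian noise, with a Gaussian prior over theta*.
d, n_actions, horizon, noise_sd = 5, 50, 500, 0.5
actions = rng.normal(size=(n_actions, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)
theta_star = rng.normal(size=d)

# Prior N(0, I); with Gaussian noise the posterior remains Gaussian,
# tracked via its precision matrix and precision-weighted mean.
precision = np.eye(d)
b = np.zeros(d)

regret = 0.0
mean_rewards = actions @ theta_star
for t in range(horizon):
    # Thompson sampling: draw theta from the posterior, act greedily on it.
    cov = np.linalg.inv(precision)
    theta_sample = rng.multivariate_normal(cov @ b, cov)
    a_idx = int(np.argmax(actions @ theta_sample))
    a = actions[a_idx]
    reward = a @ theta_star + noise_sd * rng.normal()
    # Conjugate Bayesian update of the Gaussian posterior.
    precision += np.outer(a, a) / noise_sd**2
    b += a * reward / noise_sd**2
    regret += mean_rewards.max() - mean_rewards[a_idx]

print(f"cumulative regret over {horizon} steps: {regret:.2f}")
```

Because the posterior concentrates, per-step regret shrinks over time; the number of actions enters the analysis only through the information gained about the unknown parameter, which is the gap the rate-distortion argument above addresses.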


Related research:

- 05/30/2018 · An Information-Theoretic Analysis for Thompson Sampling with Many Actions
- 02/09/2023 · An Information-Theoretic Analysis of Nonstationary Bandit Learning
- 03/07/2018 · Satisficing in Time-Sensitive Bandit Learning
- 07/07/2021 · Information-theoretic characterization of the complete genotype-phenotype map of a complex pre-biotic world
- 05/12/2019 · On the Performance of Thompson Sampling on Logistic Bandits
- 02/02/2019 · First-Order Regret Analysis of Thompson Sampling
- 01/06/2022 · Gaussian Imagination in Bandit Learning
