
Asymptotically Optimal Information-Directed Sampling

11/11/2020
by Johannes Kirschner, et al.

We introduce a computationally efficient algorithm for finite stochastic linear bandits. The approach is based on the frequentist information-directed sampling (IDS) framework, with an information gain potential derived directly from the asymptotic regret lower bound. We establish frequentist regret bounds which show that the proposed algorithm is both asymptotically optimal and worst-case rate optimal in finite time. Our analysis sheds light on how IDS trades off regret and information to incrementally solve the semi-infinite concave program that defines the optimal asymptotic regret. Along the way, we uncover interesting connections to a recently proposed two-player game approach and to the Bayesian IDS algorithm.
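The core idea of IDS is to balance estimated instantaneous regret against the information an action provides. The sketch below is a minimal, simplified illustration of that trade-off for a finite linear bandit, not the paper's actual algorithm: it scores each arm by the ratio of its squared estimated gap to a simple information-gain proxy (the log-det increase of the regularized design matrix). The function name `ids_action` and the regularization constant `lam` are illustrative choices, and the paper's method uses a more refined information potential tied to the asymptotic lower bound.

```python
import numpy as np

def ids_action(actions, theta_hat, V, lam=1e-6):
    """Simplified deterministic IDS step: pick the arm minimizing
    (estimated squared regret) / (information gain proxy).

    actions   : (K, d) array of feature vectors, one row per arm
    theta_hat : (d,) current least-squares estimate of the parameter
    V         : (d, d) regularized design matrix sum_t a_t a_t^T + I
    """
    actions = np.asarray(actions, dtype=float)
    rewards = actions @ theta_hat
    gaps = rewards.max() - rewards  # estimated per-arm regret
    # Information proxy: log-det increase of V if this arm were played,
    # via the matrix determinant lemma: log(1 + a^T V^{-1} a).
    V_inv = np.linalg.inv(V)
    info = np.log1p(np.einsum('ij,jk,ik->i', actions, V_inv, actions))
    # Regularize numerator and denominator to avoid a 0/0 ratio.
    ratio = (gaps ** 2 + lam) / (info + lam)
    return int(np.argmin(ratio))

# Toy usage: three orthogonal arms in R^3 with a known-good estimate.
actions = np.eye(3)
theta_hat = np.array([1.0, 0.5, 0.2])
V = np.eye(3)
best = ids_action(actions, theta_hat, V)
```

With identical information gains across arms, the ratio is driven by the squared gaps, so the estimated-best arm is chosen; information only tilts the choice toward under-explored arms when gaps are comparable.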

