An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits

10/23/2020
by Andrea Tirinzoni et al.

In the contextual linear bandit setting, algorithms built on the optimism principle fail to exploit the structure of the problem and have been shown to be asymptotically suboptimal. In this paper, we follow the recent approach of deriving asymptotically optimal algorithms from problem-dependent regret lower bounds, and we introduce a novel algorithm that improves over the state-of-the-art along multiple dimensions. First, we build on a reformulation of the lower bound in which the context distribution and the exploration policy are decoupled, and we obtain an algorithm robust to unbalanced context distributions. Second, using an incremental primal-dual approach to solve the Lagrangian relaxation of the lower bound, we obtain a scalable and computationally efficient algorithm. Third, we remove forced exploration and instead build on confidence intervals of the optimization problem to encourage a minimum level of exploration that is better adapted to the problem structure. We prove that our algorithm is asymptotically optimal, while also providing problem-dependent and worst-case finite-time regret guarantees. Our bounds scale with the logarithm of the number of arms, thus avoiding the linear dependence common in all related prior work. Notably, we establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits. Finally, we verify that our algorithm achieves better empirical performance than state-of-the-art baselines.
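To make the optimization concrete, here is one standard form of the asymptotic problem-dependent lower bound for (non-contextual) linear bandits, in the style of Graves and Lai; the paper's contextual reformulation additionally decouples the context distribution from the per-context exploration policy, which is omitted here for brevity. Writing \(\phi(a)\) for the feature of arm \(a\) and \(\Delta(a)\) for its gap,

\[
\liminf_{T \to \infty} \frac{R_T}{\log T} \ \ge\ C^\star(\theta),
\qquad
C^\star(\theta) \ =\ \min_{\eta \ge 0}\ \sum_{a} \eta(a)\,\Delta(a)
\quad \text{s.t.}\quad
\|\phi(a)\|^2_{H(\eta)^{-1}} \ \le\ \frac{\Delta(a)^2}{2}
\quad \forall a : \Delta(a) > 0,
\]

where \(H(\eta) = \sum_a \eta(a)\,\phi(a)\phi(a)^\top\). The associated Lagrangian, with multipliers \(\lambda(a) \ge 0\), is

\[
\mathcal{L}(\eta, \lambda) \ =\ \sum_a \eta(a)\,\Delta(a)
\ +\ \sum_{a : \Delta(a) > 0} \lambda(a) \Big( \|\phi(a)\|^2_{H(\eta)^{-1}} - \frac{\Delta(a)^2}{2} \Big),
\]

which is convex in \(\eta\) (the matrix-fractional term is convex) and linear in \(\lambda\), so a saddle point can be approached by descending on \(\eta\) and ascending on \(\lambda\).

The sketch below illustrates this primal-dual idea as a plain offline gradient descent-ascent with a known parameter theta. It is not the authors' incremental algorithm (which interleaves one primal-dual update per round with estimation of theta); the function name, step sizes, and toy arm set are assumptions made for illustration.

```python
import numpy as np

def lower_bound_saddle_point(arms, theta, iters=5000, lr_eta=0.05, lr_lam=0.05):
    """Gradient descent-ascent on the Lagrangian of the lower-bound program.

    arms:  (n, d) array of arm features phi(a); theta: (d,) true parameter.
    Returns the exploration allocation eta and the dual multipliers lambda.
    Illustrative sketch only; hyperparameters are arbitrary assumptions.
    """
    n, d = arms.shape
    rewards = arms @ theta
    gaps = rewards.max() - rewards                  # Delta(a)
    subopt = gaps > 1e-9                            # constraints only for suboptimal arms
    eta = np.ones(n)                                # primal: exploration allocation
    lam = np.ones(n) * subopt                       # dual: one multiplier per constraint
    for _ in range(iters):
        H = (arms.T * eta) @ arms                   # design matrix H(eta)
        Hinv = np.linalg.inv(H + 1e-8 * np.eye(d))  # small ridge keeps H invertible
        M = arms @ Hinv @ arms.T                    # M[a, b] = phi(a)^T Hinv phi(b)
        slack = np.diag(M) - gaps**2 / 2            # constraint violation per arm
        # dL/d eta(b) = Delta(b) - sum_a lam(a) * M[a, b]^2
        grad_eta = gaps - (lam * subopt) @ (M**2)
        eta = np.maximum(eta - lr_eta * grad_eta, 1e-6)       # projected descent
        lam = np.maximum(lam + lr_lam * slack * subopt, 0.0)  # ascent, lam >= 0
    return eta, lam

# Toy usage on the classic three-arm counterexample to optimism:
theta = np.array([1.0, 0.0])
arms = np.array([[1.0, 0.0], [0.0, 1.0], [np.cos(0.3), np.sin(0.3)]])
eta, lam = lower_bound_saddle_point(arms, theta)
print(eta.round(2))
```

The resulting allocation concentrates samples on arms that are informative about the gaps rather than on the near-optimal arm, which is exactly the structure that optimistic algorithms fail to exploit.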


Related research

02/12/2021 · The Symmetry between Arms and Knapsacks: A Primal-Dual Approach for Bandits with Knapsacks
In this paper, we study the bandits with knapsacks (BwK) problem and dev...

10/15/2019 · Adaptive Exploration in Linear Contextual Bandit
Contextual bandits serve as a fundamental model for many sequential deci...

11/11/2020 · Asymptotically Optimal Information-Directed Sampling
We introduce a computationally efficient algorithm for finite stochastic...

05/21/2021 · Parallelizing Contextual Linear Bandits
Standard approaches to decision-making under uncertainty focus on sequen...

07/13/2020 · Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation
We study the exploration-exploitation dilemma in the linear quadratic re...

05/12/2021 · High-Dimensional Experimental Design and Kernel Bandits
In recent years methods from optimal linear experimental design have bee...

06/15/2020 · Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality
We study stochastic structured bandits for minimizing regret. The fact t...
