Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits

09/02/2023
by Haolin Liu, et al.

We consider the adversarial linear contextual bandit problem, where the loss vectors are selected fully adversarially and the per-round action set (i.e., the context) is drawn from a fixed distribution. Existing methods for this problem either require access to a simulator to generate free i.i.d. contexts, achieve suboptimal regret no better than O(T^{5/6}), or are computationally inefficient. We greatly improve these results by achieving a regret of O(√T) without a simulator, while maintaining computational efficiency when the action set in each round is small. In the special case of sleeping bandits with adversarial loss and stochastic arm availability, our result affirmatively answers the open question of Saha et al. [2020] on whether there exists a polynomial-time algorithm with poly(d)√T regret. Our approach naturally handles the case where the loss is linear only up to an additive misspecification error, and our regret bound shows near-optimal dependence on the magnitude of that error.
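
To make the interaction protocol concrete, the following is a minimal sketch of the learning loop described above. The normalized-Gaussian context distribution, the uniform placeholder policy, and the constants d, K, and T are illustrative assumptions for this sketch only; the paper's actual algorithm is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 10, 1000  # feature dimension, actions per round, horizon (illustrative)

def sample_action_set():
    # Per-round action set (the "context"): K feature vectors in R^d drawn
    # i.i.d. from a fixed distribution. The normalized Gaussian here is an
    # assumption of this sketch; the paper only requires a fixed distribution.
    A = rng.standard_normal((K, d))
    return A / np.linalg.norm(A, axis=1, keepdims=True)

total_loss = 0.0
for t in range(T):
    A = sample_action_set()                    # learner observes the action set
    loss_vec = rng.uniform(-1.0, 1.0, size=d)  # adversary's loss vector (may depend on history)
    a = rng.integers(K)                        # placeholder policy: uniform at random
    # Bandit feedback: only the chosen action's loss <A[a], loss_vec> is
    # observed, never the full loss vector.
    total_loss += float(A[a] @ loss_vec)
```

Regret in this loop is measured against the best fixed policy mapping action sets to actions; the paper's contribution is a policy achieving O(√T) regret without the simulator access (free extra draws of the context distribution) that prior work required.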

Related Research

11/08/2022

Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms

In this paper, we address the stochastic contextual linear bandit proble...

06/10/2015

An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives

We consider a contextual version of multi-armed bandit problem with glob...

05/24/2018

New Insights into Bootstrapping for Bandits

We investigate the use of bootstrapping in the bandit setting. We first ...

07/07/2020

Stochastic Linear Bandits Robust to Adversarial Attacks

We consider a stochastic linear bandit problem in which the rewards are ...

02/12/2022

Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits

We consider the problem of combining and learning over a set of adversar...

01/28/2019

Target Tracking for Contextual Bandits: Application to Demand Side Management

We propose a contextual-bandit approach for demand side management by of...

02/26/2023

No-Regret Linear Bandits beyond Realizability

We study linear bandits when the underlying reward function is not linea...
