
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits

02/06/2016
by Alexander Rakhlin, et al.

We present efficient algorithms for the problem of contextual bandits with i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of policies. Our algorithm BISTRO requires d calls to the empirical risk minimization (ERM) oracle per round, where d is the number of actions. The method uses unlabeled data to make the problem computationally simple. When the ERM problem itself is computationally hard, we extend the approach by employing multiplicative approximation algorithms for the ERM step; the integrality gap of the relaxation then enters only the regret bound rather than the benchmark. Finally, we show that the adversarial version of the contextual bandit problem is learnable whenever the full-information supervised online learning problem has a non-trivial regret guarantee, and efficiently so whenever the full-information problem can be solved efficiently.
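
The abstract's per-round structure, d calls to an ERM oracle followed by a randomized action and an importance-weighted reward estimate, is illustrated in the Python sketch below. This is not the authors' BISTRO procedure (which is relaxation-based and uses unlabeled data); the erm_oracle helper, the finite policy list, and the epsilon-greedy exploration are illustrative assumptions only.

# Hypothetical sketch of one round of an ERM-oracle-based contextual bandit
# learner, mirroring the abstract's "d oracle calls per round" structure.
# This is an illustration, not the BISTRO algorithm itself; erm_oracle,
# the finite policy list, and the epsilon-greedy exploration are assumptions.

import random


def erm_oracle(history, policies):
    """Stand-in ERM oracle: return the policy with the largest cumulative
    estimated reward on the history of (context, reward_estimates) pairs."""
    return max(policies,
               key=lambda pi: sum(r[pi(x)] for x, r in history))


def play_round(context, history, policies, d, get_reward, epsilon=0.1):
    """One round: d oracle calls (one per action), a randomized choice,
    bandit feedback, and an importance-weighted reward estimate."""
    # One ERM call per action: bias the current context's reward toward
    # action a and ask the oracle which action its best policy would play.
    candidates = set()
    for a in range(d):
        bonus = [1.0 if b == a else 0.0 for b in range(d)]
        pi_hat = erm_oracle(history + [(context, bonus)], policies)
        candidates.add(pi_hat(context))

    # Mix the oracle's suggested actions with uniform exploration.
    probs = [epsilon / d] * d
    for a in candidates:
        probs[a] += (1.0 - epsilon) / len(candidates)
    action = random.choices(range(d), weights=probs)[0]

    # Observe only the chosen action's reward, then importance-weight it
    # so the estimated reward vector is unbiased for every action.
    reward = get_reward(context, action)
    estimate = [0.0] * d
    estimate[action] = reward / probs[action]
    history.append((context, estimate))
    return action, reward

A policy here is any callable mapping a context to an action index, so the policy class could be, for example, a list of thresholded linear rules.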

