Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits

02/04/2014
by Alekh Agarwal, et al.

We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes one of K actions in response to the observed context, and observes the reward only for the chosen action. Our method assumes access to an oracle for solving fully supervised cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only Õ(√(KT/log N)) oracle calls across all T rounds, where N is the number of policies in the policy class we compete against. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.
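To make the setting concrete, below is a minimal Python sketch of the interaction protocol the abstract describes: contexts arrive, the learner picks one of K actions, and only the chosen action's reward is revealed. The per-action ridge regressors standing in for the cost-sensitive oracle, the synthetic linear-reward environment, and the epsilon-greedy exploration are all illustrative assumptions for this sketch, not the paper's algorithm (which instead maintains a distribution over policies via repeated oracle calls).

import numpy as np

rng = np.random.default_rng(0)
K, d, T = 5, 10, 2000            # actions, context dimension, rounds

# Hypothetical synthetic environment: a hidden linear reward per action.
theta = rng.normal(size=(K, d))

def draw_context():
    return rng.normal(size=d)

def reward(x, a):
    return float(theta[a] @ x) + 0.1 * rng.normal()

class CostSensitiveOracleStandIn:
    """Stand-in for a cost-sensitive classification oracle: one ridge
    regressor per action, fit on inverse-propensity-weighted data so the
    estimates stay unbiased despite partial (bandit) feedback."""
    def __init__(self, K, d, lam=1.0):
        self.A = np.stack([lam * np.eye(d) for _ in range(K)])
        self.b = np.zeros((K, d))
    def update(self, x, a, r, p):
        w = 1.0 / p                       # importance weight for the logged action
        self.A[a] += w * np.outer(x, x)
        self.b[a] += w * r * x
    def act(self, x):
        est = [x @ np.linalg.solve(self.A[a], self.b[a]) for a in range(K)]
        return int(np.argmax(est))

oracle = CostSensitiveOracleStandIn(K, d)
eps, total = 0.1, 0.0
for t in range(T):
    x = draw_context()
    greedy = oracle.act(x)
    # Epsilon-greedy exploration: a placeholder for the paper's
    # exploration distribution over policies.
    a = int(rng.integers(K)) if rng.random() < eps else greedy
    p = eps / K + (1.0 - eps) * (a == greedy)  # probability of the chosen action
    r = reward(x, a)
    oracle.update(x, a, r, p)
    total += r
print(f"average reward over {T} rounds: {total / T:.3f}")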


Related research

03/28/2020
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability
We consider the general (stochastic) contextual bandit problem under the...

03/20/2023
A Unified Framework of Policy Learning for Contextual Bandit with Confounding Bias and Missing Observations
We study the offline contextual bandit problem, where we aim to acquire ...

03/05/2020
Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits
We propose the Generalized Policy Elimination (GPE) algorithm, an oracle...

06/13/2011
Efficient Optimal Learning for Contextual Bandits
We address the problem of learning in an online setting where the learne...

02/06/2016
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits
We present efficient algorithms for the problem of contextual bandits wi...

03/03/2018
Practical Contextual Bandits with Regression Oracles
A major challenge in contextual bandits is to design general-purpose alg...

05/29/2023
Contextual Bandits with Budgeted Information Reveal
Contextual bandit algorithms are commonly used in digital health to reco...
