Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits

03/05/2020
by   Aurélien F. Bibaut, et al.

We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of <cit.>. We prove the first regret-optimality guarantee for an oracle-efficient CB algorithm competing against a nonparametric policy class with infinite VC dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an ε-greedy algorithm whose regret bound matches that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes for which the relevant optimization oracles can be implemented efficiently.
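To make the ε-greedy scheme mentioned in the abstract concrete, the sketch below runs a generic ε-greedy contextual bandit loop in which a per-arm least-squares fit stands in for the policy-optimization oracle. This is a minimal illustration, not the paper's GPE algorithm or its ε-greedy construction: the oracle, the linear reward model, and the 1/√t exploration schedule are all assumptions made for the example.

```python
# Hypothetical epsilon-greedy contextual bandit sketch (illustrative only;
# not the paper's GPE or epsilon-greedy algorithm). A per-arm least-squares
# fit plays the role of the policy-optimization oracle.
import numpy as np

rng = np.random.default_rng(0)

n_rounds, n_arms, dim = 2000, 3, 5
theta_true = rng.normal(size=(n_arms, dim))         # unknown reward parameters

# Sufficient statistics for the per-arm least-squares "oracle"
A = np.stack([np.eye(dim) for _ in range(n_arms)])  # regularized Gram matrices
b = np.zeros((n_arms, dim))

def oracle_policy(x):
    """Greedy arm under the current least-squares estimates (stand-in oracle)."""
    theta_hat = np.array([np.linalg.solve(A[a], b[a]) for a in range(n_arms)])
    return int(np.argmax(theta_hat @ x))

total_reward = 0.0
for t in range(1, n_rounds + 1):
    x = rng.normal(size=dim)                         # observe context
    eps = min(1.0, 1.0 / np.sqrt(t))                 # decaying exploration rate
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))              # explore uniformly
    else:
        arm = oracle_policy(x)                       # exploit the oracle's policy
    reward = theta_true[arm] @ x + rng.normal(scale=0.1)
    A[arm] += np.outer(x, x)                         # update the oracle's data
    b[arm] += reward * x
    total_reward += reward

print(f"average reward: {total_reward / n_rounds:.3f}")
```

The decaying exploration rate is the standard ε-greedy trade-off: early rounds gather data for the oracle, later rounds mostly follow the oracle's greedy policy.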
