
Practical Contextual Bandits with Regression Oracles

03/03/2018
by Dylan J. Foster, et al.

A major challenge in contextual bandits is to design general-purpose algorithms that are both practically useful and theoretically well-founded. We present a new technique that has the empirical and computational advantages of realizability-based approaches combined with the flexibility of agnostic methods. Our algorithms leverage the availability of a regression oracle for the value-function class, a more realistic and reasonable oracle than the classification oracles over policies typically assumed by agnostic methods. Our approach generalizes both UCB and LinUCB to far more expressive possible model classes and achieves low regret under certain distributional assumptions. In an extensive empirical evaluation, compared to both realizability-based and agnostic baselines, we find that our approach typically gives comparable or superior results.
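
To make the oracle-based recipe concrete, the following is a minimal sketch in Python of the general pattern of acting through a regression oracle over a value-function class. It is an illustration under simplifying assumptions, not the paper's algorithm: the oracle is stood in for by a per-action Ridge regressor from scikit-learn, and optimism is approximated by a crude count-based bonus rather than the oracle-derived confidence bounds the paper constructs. The class name RegressionOracleBandit and its parameters are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

class RegressionOracleBandit:
    """Contextual bandit that acts through per-action squared-loss regressors.

    Simplified sketch: Ridge stands in for the regression oracle, and a
    count-based bonus stands in for oracle-derived confidence bounds.
    """

    def __init__(self, n_actions, bonus_scale=1.0):
        self.n_actions = n_actions
        self.bonus_scale = bonus_scale
        self.models = [Ridge(alpha=1.0) for _ in range(n_actions)]
        # Logged (context, reward) pairs, kept separately for each action.
        self.data = [([], []) for _ in range(n_actions)]

    def select(self, context):
        context = np.asarray(context, dtype=float).reshape(1, -1)
        scores = []
        for a in range(self.n_actions):
            _, rewards = self.data[a]
            if not rewards:
                return a  # play each action once before trusting the oracle
            estimate = self.models[a].predict(context)[0]
            bonus = self.bonus_scale / np.sqrt(len(rewards))  # crude optimism term
            scores.append(estimate + bonus)
        return int(np.argmax(scores))

    def update(self, context, action, reward):
        contexts, rewards = self.data[action]
        contexts.append(np.asarray(context, dtype=float))
        rewards.append(float(reward))
        # The "regression oracle" call: refit the chosen action's value estimator.
        self.models[action].fit(np.vstack(contexts), np.asarray(rewards))

A toy run on a synthetic linear-reward problem (3 actions, 5-dimensional contexts):

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 5))
bandit = RegressionOracleBandit(n_actions=3)
for t in range(500):
    x = rng.normal(size=5)
    a = bandit.select(x)
    r = theta[a] @ x + 0.1 * rng.normal()
    bandit.update(x, a, r)

Swapping Ridge for any regressor exposing fit and predict (a tree ensemble, a small neural network) is what gives the oracle-based approach its flexibility over LinUCB's fixed linear class.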


Related Research

02/12/2020
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles
A fundamental challenge in contextual bandits is to develop flexible, ge...

07/12/2021
Adapting to Misspecification in Contextual Bandits
A major research direction in contextual bandits is to develop algorithm...

07/05/2021
Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
A recurring theme in statistical learning, online learning, and beyond i...

07/12/2022
Contextual Bandits with Large Action Spaces: Made Practical
A central problem in sequential decision making is to develop algorithms...

02/04/2014
Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits
We present a new algorithm for the contextual bandit learning problem, w...

05/21/2021
Parallelizing Contextual Linear Bandits
Standard approaches to decision-making under uncertainty focus on sequen...