Bandits with Partially Observable Offline Data

06/11/2020
by Guy Tennenholtz et al.

We study linear contextual bandits with access to a large, partially observable offline dataset sampled from some fixed policy. We show that this problem is closely related to a variant of the bandit problem with side information. We construct a linear bandit algorithm that takes advantage of the projected information and prove regret bounds for it. Our results demonstrate the ability to take full advantage of partially observable offline data: in particular, we prove regret bounds that improve on current bounds by a factor related to the visible dimensionality of the contexts in the data. Our results indicate that partially observable offline data can significantly improve online learning algorithms. Finally, we demonstrate various characteristics of our approach through synthetic simulations.
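To make the setting concrete, below is a minimal sketch (an illustration of the idea, not the paper's exact construction) of how partially observable offline data might warm-start a LinUCB-style linear bandit: offline contexts are logged only through a projection P onto a visible subspace, and the resulting ridge statistics are folded into the online learner's design matrix. The projection P, the logging reward model, the candidate-arm generation, and the exploration parameter alpha are all assumed for the example.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's exact construction):
# offline contexts are observed only through a projection P onto a
# visible subspace; their ridge statistics warm-start an online
# LinUCB-style learner over the full context space.

rng = np.random.default_rng(0)
d, d_obs = 5, 3            # full context dimension, visible dimension
n_offline, T = 5000, 2000  # offline sample size, online horizon

theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)  # unknown reward parameter
P = np.eye(d)[:d_obs]           # hypothetical projection: first d_obs coordinates

# --- offline phase: only the projected context P x is logged ---
X_off = rng.normal(size=(n_offline, d))
Z_off = X_off @ P.T             # visible part of each logged context
# hypothetical logging model: reward depends on the visible part only
r_off = Z_off @ (P @ theta) + 0.1 * rng.normal(size=n_offline)

# --- warm start: fold offline ridge statistics into the design matrix ---
A = np.eye(d) + P.T @ (Z_off.T @ Z_off) @ P
b = P.T @ (Z_off.T @ r_off)

# --- online phase: standard LinUCB over full d-dimensional contexts ---
alpha, regret = 1.0, 0.0        # exploration parameter (assumed)
for t in range(T):
    arms = rng.normal(size=(10, d))  # 10 candidate contexts per round
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    width = np.sqrt(np.einsum('ij,jk,ik->i', arms, A_inv, arms))
    x = arms[np.argmax(arms @ theta_hat + alpha * width)]
    r = x @ theta + 0.1 * rng.normal()
    A += np.outer(x, x)         # rank-one update of the design matrix
    b += r * x
    regret += np.max(arms @ theta) - x @ theta

print(f"cumulative regret over {T} rounds: {regret:.1f}")
```

The intuition matches the abstract's claim: the offline statistics shrink the confidence set along the visible directions, so online exploration mainly has to resolve the unobserved part of the context space. Any quantitative regret improvement, however, is specific to the paper's algorithm and analysis, not to this sketch.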


Related research

06/02/2023 · A Convex Relaxation Approach to Bayesian Regret Minimization in Offline Bandits
Algorithms for offline bandits must optimize decisions in uncertain envi...

08/07/2023 · Provably Efficient Learning in Partially Observable Contextual Bandit
In this paper, we investigate transfer learning in partially observable ...

07/30/2021 · Indexability and Rollout Policy for Multi-State Partially Observable Restless Bandits
Restless multi-armed bandits with partially observable states have applic...

05/19/2022 · Breaking the √(T) Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
We prove an instance independent (poly) logarithmic regret for stochasti...

11/07/2018 · Offline Behaviors of Online Friends
In this work we analyze traces of mobility and co-location among a group...

02/09/2016 · Compliance-Aware Bandits
Motivated by clinical trials, we study bandits with observable non-compl...

06/08/2022 · Learning in Distributed Contextual Linear Bandits Without Sharing the Context
Contextual linear bandits is a rich and theoretically important model th...
