Learning from Logged Implicit Exploration Data

02/27/2010
by Alex Strehl, et al.

We provide a sound and consistent foundation for the use of nonrandom exploration data in "contextual bandit" or "partially labeled" settings, where only the value of a chosen action is observed. The primary challenge in a variety of settings is that the exploration policy with which the "offline" data was logged is not explicitly known. Prior solutions require either control of the actions during the learning process, recorded random exploration, or actions chosen obliviously in a repeated manner. The techniques reported here lift these restrictions, allowing a policy for choosing actions given features to be learned from historical data where no randomization occurred or was logged. We empirically verify our solution on two reasonably sized sets of real-world data obtained from Yahoo!.
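The core idea behind this line of work can be illustrated with a small sketch: first estimate the unknown logging policy's action probabilities from the logs themselves, then use those estimated propensities in a clipped inverse-propensity-scored (IPS) value estimate for a candidate policy. The function names, the discrete-context propensity estimator, and the clipping threshold `tau` below are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: offline evaluation of a policy from logged data where the
# logging policy is unknown. Assumes discrete contexts for simplicity.

def estimate_logging_policy(logs, n_actions):
    """Crude propensity estimate: empirical per-context action frequencies.

    `logs` is a list of (context, action, reward) tuples. Returns a dict
    mapping each context to a list of estimated action probabilities.
    """
    counts = {}
    for x, a, _ in logs:
        counts.setdefault(x, [0] * n_actions)
        counts[x][a] += 1
    return {x: [c / sum(cs) for c in cs] for x, cs in counts.items()}

def ips_value(policy, logs, propensities, tau=0.05):
    """Clipped IPS estimate of the expected reward of `policy`.

    Each logged event where `policy` agrees with the logged action
    contributes its reward, reweighted by 1 / max(propensity, tau);
    clipping at `tau` bounds the variance when propensities are tiny.
    """
    total = 0.0
    for x, a, r in logs:
        if policy(x) == a:
            total += r / max(propensities[x][a], tau)
    return total / len(logs)

# Toy logs: two contexts (0, 1), two actions; the (unknown) logger
# favored the "matching" action 75% of the time.
logs = [
    (0, 0, 1.0), (0, 0, 1.0), (0, 0, 1.0), (0, 1, 0.0),
    (1, 1, 1.0), (1, 1, 1.0), (1, 1, 1.0), (1, 0, 0.0),
]
props = estimate_logging_policy(logs, n_actions=2)
value = ips_value(lambda x: x, logs, props)  # policy: pick action == context
```

On these toy logs the estimator recovers a value of 1.0 for the matching policy, since every agreeing event has reward 1.0 and estimated propensity 0.75.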


