Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits

06/28/2021
by Wenshuo Guo, et al.

We introduce the "inverse bandit" problem of estimating the rewards of a multi-armed bandit instance from observing the learning process of a low-regret demonstrator. Existing approaches to the related problem of inverse reinforcement learning assume the execution of an optimal policy, and thereby suffer from an identifiability issue. In contrast, our paradigm leverages the demonstrator's behavior en route to optimality, and in particular, the exploration phase, to obtain consistent reward estimates. We develop simple and efficient reward estimation procedures for demonstrations within a class of upper-confidence-based algorithms, showing that reward estimation gets progressively easier as the regret of the algorithm increases. We match these upper bounds with information-theoretic lower bounds that apply to any demonstrator algorithm, thereby characterizing the optimal tradeoff between exploration and reward estimation. Extensive empirical evaluations on both synthetic data and simulated experimental design data from the natural sciences corroborate our theoretical results.
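The abstract does not spell out the estimation procedure, so the following is only a minimal sketch of the "inverse bandit" setup it describes. Everything here is an illustrative assumption rather than the paper's method: a standard UCB1 demonstrator on a Bernoulli bandit, an observer who sees the full (arm, reward) trajectory, and a naive per-arm sample-mean plug-in estimator. The sketch does illustrate the tradeoff the abstract names: a low-regret demonstrator pulls suboptimal arms only O(log T) times, so those arms' estimates converge slowly, while a more exploratory (higher-regret) demonstrator makes every arm's reward easier to estimate.

```python
# Minimal sketch of the "inverse bandit" setup (illustrative assumptions,
# not the paper's estimator): a UCB1 demonstrator on a Bernoulli bandit,
# observed by a learner who averages the rewards seen for each arm.
import numpy as np

rng = np.random.default_rng(0)

def run_ucb_demonstrator(true_means, horizon):
    """Run UCB1 and return the (arm, reward) trajectory it generates."""
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    trajectory = []
    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once to initialize the indices
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        trajectory.append((arm, reward))
    return trajectory

def estimate_rewards(trajectory, k):
    """Naive observer: per-arm sample means over the demonstration."""
    counts = np.zeros(k)
    sums = np.zeros(k)
    for arm, reward in trajectory:
        counts[arm] += 1
        sums[arm] += reward
    return sums / np.maximum(counts, 1)

true_means = np.array([0.2, 0.5, 0.7])
traj = run_ucb_demonstrator(true_means, horizon=20_000)
print("estimated means:", estimate_rewards(traj, k=len(true_means)))
print("true means:     ", true_means)
```

Running this, the best arm's mean is estimated very accurately (it receives nearly all pulls), while the suboptimal arms' estimates rest on only logarithmically many samples; the paper's contribution, per the abstract, is to characterize and achieve the optimal version of this exploration/estimation tradeoff for any demonstrator algorithm.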


Related research

09/20/2020
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms
EXP-based algorithms are often used for exploration in multi-armed bandi...

07/02/2020
Structure Adaptive Algorithms for Stochastic Bandits
We study reward maximisation in a wide class of structured stochastic mu...

07/20/2020
Filtered Poisson Process Bandit on a Continuum
We consider a version of the continuum armed bandit where an action indu...

04/02/2020
Hierarchical Adaptive Contextual Bandits for Resource Constraint based Recommendation
Contextual multi-armed bandit (MAB) achieves cutting-edge performance on...

06/14/2022
On the Finite-Time Performance of the Knowledge Gradient Algorithm
The knowledge gradient (KG) algorithm is a popular and effective algorit...

04/03/2020
Hawkes Process Multi-armed Bandits for Disaster Search and Rescue
We propose a novel framework for integrating Hawkes processes with multi...

02/10/2022
Remote Contextual Bandits
We consider a remote contextual multi-armed bandit (CMAB) problem, in wh...
