
Incentivized Exploration for Multi-Armed Bandits under Reward Drift

by Zhiyuan Liu et al.
University of Virginia
University of Colorado Boulder
Clemson University

We study incentivized exploration for the multi-armed bandit (MAB) problem, where players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on the rewards. We seek to understand the impact of this drifted reward feedback by analyzing three instantiations of the incentivized MAB algorithm: UCB, ε-Greedy, and Thompson Sampling. Our results show that all three achieve O(log T) regret and O(log T) compensation under drifted rewards, and are therefore effective in incentivizing exploration. Numerical examples complement the theoretical analysis.
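
To make the setting concrete, below is a minimal simulation sketch of one of the three instantiations, incentivized UCB, under reward drift. The modeling choices here are illustrative assumptions, not the paper's exact specification: rewards are Gaussian, the compensation offered for a non-greedy pull equals the empirical-mean gap to the greedy arm, and the reported reward is biased upward in proportion to the compensation paid. The function and parameter names (incentivized_ucb, drift_scale) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def incentivized_ucb(means, horizon=5000, drift_scale=0.1):
    """Sketch of incentivized UCB: the principal pays compensation for
    pulling non-greedy arms, and compensated players report drifted
    (biased) rewards. Gaussian noise and compensation-proportional
    drift are illustrative assumptions, not the paper's exact model."""
    k = len(means)
    counts = np.zeros(k)
    est = np.zeros(k)          # empirical means built from *reported* rewards
    regret = comp = 0.0
    best = max(means)

    for t in range(1, horizon + 1):
        if t <= k:                       # initialization: pull each arm once
            arm, pay = t - 1, 0.0
        else:
            ucb = est + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
            greedy = int(np.argmax(est))
            # Compensation: the empirical-mean gap needed to make a myopic
            # player willing to pull the UCB arm instead of the greedy one.
            pay = max(0.0, est[greedy] - est[arm])

        true_r = means[arm] + rng.normal(0.0, 0.1)
        # Drifted feedback: the reported reward is biased upward in
        # proportion to the compensation received (assumed drift model).
        reported = true_r + drift_scale * pay
        counts[arm] += 1
        est[arm] += (reported - est[arm]) / counts[arm]
        regret += best - means[arm]
        comp += pay

    return regret, comp

print(incentivized_ucb(np.array([0.9, 0.8, 0.6])))

A run of this sketch should exhibit the qualitative behavior the analysis predicts: both cumulative regret and total compensation grow slowly with the horizon, consistent with the O(log T) bounds, even though the drifted feedback contaminates the empirical means.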



