Residual Bootstrap Exploration for Bandit Algorithms

02/19/2020
by Chi-Hua Wang, et al.

In this paper, we propose residual bootstrap exploration (ReBoot), a novel perturbation-based exploration method for bandit algorithms with bounded or unbounded rewards. ReBoot enforces exploration by injecting data-driven randomness through a residual-based perturbation mechanism. This mechanism captures the underlying distributional properties of the fitting errors and, more importantly, boosts exploration to escape from suboptimal solutions (at small sample sizes) by inflating the variance level in an unconventional way. In theory, with an appropriate variance inflation level, ReBoot provably secures instance-dependent logarithmic regret in Gaussian multi-armed bandits. We evaluate ReBoot on a range of synthetic multi-armed bandit problems and observe that it handles unbounded rewards better and behaves more robustly than Giro<cit.> and PHE<cit.>, with computational efficiency comparable to Thompson sampling.
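To make the mechanism concrete, below is a minimal sketch of ReBoot-style exploration for a Gaussian K-armed bandit. The sign-flipped (Rademacher) residual bootstrap and the pair of "inflation" pseudo-residuals are our illustrative reading of the abstract's variance-inflation device, not the paper's exact construction; all function and parameter names here are hypothetical.

import numpy as np

def reboot_bandit(arms, horizon, inflation=1.0, rng=None):
    """Hypothetical ReBoot-style sketch: perturb each arm's mean estimate
    with a sign-flipped bootstrap of its fitting residuals, padded with
    two pseudo-residuals of magnitude `inflation` so exploration stays
    alive when the sample size is small."""
    rng = rng or np.random.default_rng()
    K = len(arms)
    rewards = [[] for _ in range(K)]

    # Pull each arm once so every arm has at least one observation.
    for k in range(K):
        rewards[k].append(arms[k](rng))

    for _ in range(K, horizon):
        indices = np.empty(K)
        for k in range(K):
            y = np.asarray(rewards[k])
            mean, s = y.mean(), len(y)
            # Residuals of the sample-mean fit, plus two pseudo-residuals
            # (the variance-inflation device, as we read the abstract).
            res = np.concatenate([y - mean, [inflation, -inflation]])
            # Sign-flip bootstrap: random +/-1 weights on the residuals.
            w = rng.choice([-1.0, 1.0], size=res.size)
            indices[k] = mean + (w * res).sum() / s
        k_star = int(np.argmax(indices))
        rewards[k_star].append(arms[k_star](rng))
    return rewards

# Toy usage: two Gaussian arms with means 0.0 and 0.5.
arms = [lambda rng: rng.normal(0.0, 1.0),
        lambda rng: rng.normal(0.5, 1.0)]
history = reboot_bandit(arms, horizon=1000)
print([len(h) for h in history])  # pulls per arm; arm 1 should dominate

Note that, unlike Thompson sampling, this index requires no posterior: the randomness is driven entirely by the observed residuals, which is what makes the perturbation data-driven.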


