UCBoost: A Boosting Approach to Tame Complexity and Optimality for Stochastic Bandits

04/16/2018
by Fang Liu, et al.

In this work, we address the open problem of finding low-complexity, near-optimal multi-armed bandit algorithms for sequential decision-making problems. Existing bandit algorithms are either sub-optimal and computationally simple (e.g., UCB1) or optimal and computationally complex (e.g., kl-UCB). We propose a boosting approach to Upper Confidence Bound based algorithms for stochastic bandits, which we call UCBoost. Specifically, we propose two types of UCBoost algorithms. We show that UCBoost(D) enjoys O(1) complexity for each arm per round as well as a regret guarantee that is 1/e-close to that of the kl-UCB algorithm. We also propose an approximation-based UCBoost algorithm, UCBoost(ϵ), that enjoys a regret guarantee ϵ-close to that of kl-UCB as well as O(log(1/ϵ)) complexity for each arm per round. Hence, our algorithms provide practitioners with a practical way to trade off optimality against computational complexity. Finally, we present numerical results which show that UCBoost(ϵ) can achieve the same regret performance as the standard kl-UCB while incurring only 1% of the computational cost of kl-UCB.
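To make the complexity/optimality trade-off concrete, here is a minimal, illustrative Python sketch (not the paper's algorithm). It contrasts the bisection-based kl-UCB index with a closed-form surrogate obtained from Pinsker's inequality, kl(p, q) ≥ 2(p − q)², and shows the "boosting" idea of taking the tightest of several cheap valid upper bounds. The function names, the single Pinsker-based bound, and the simplified log(t) exploration term are assumptions for illustration; the actual UCBoost(D) and UCBoost(ϵ) algorithms use a specific family of semi-distances with closed-form solutions that is not reproduced here.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(p_hat, pulls, t, iters=50):
    """kl-UCB index: largest q with pulls * kl(p_hat, q) <= log(t),
    found by bisection (the computationally heavier baseline)."""
    delta = math.log(t) / pulls
    lo, hi = p_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bernoulli_kl(p_hat, mid) <= delta:
            lo = mid
        else:
            hi = mid
    return lo

def pinsker_index(p_hat, pulls, t):
    """Closed-form surrogate: since kl(p, q) >= 2*(p - q)^2 (Pinsker),
    p_hat + sqrt(delta / 2) upper-bounds the kl-UCB index in O(1) time."""
    delta = math.log(t) / pulls
    return min(1.0, p_hat + math.sqrt(delta / 2))

def boosted_index(p_hat, pulls, t, bounds=(pinsker_index,)):
    """Boosting idea (illustrative): take the tightest (minimum) of several
    cheap valid upper bounds; each added bound sharpens the index while
    keeping the per-arm, per-round cost small."""
    return min(b(p_hat, pulls, t) for b in bounds)
```

With a single surrogate bound the boosted index is loose but essentially free to compute; enlarging the set of bounds moves the index closer to the kl-UCB value, which is the trade-off the abstract describes.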


Related research

10/23/2020 · Approximation Methods for Kernelized Bandits
The RKHS bandit problem (also called kernelized multi-armed bandit probl...

03/19/2019 · A Note on KL-UCB+ Policy for the Stochastic Bandit
A classic setting of the stochastic K-armed bandit problem is considered...

02/19/2021 · A High Performance, Low Complexity Algorithm for Multi-Player Bandits Without Collision Sensing Information
Motivated by applications in cognitive radio networks, we consider the d...

04/28/2023 · Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards
We study K-armed bandit problems where the reward distributions of the a...

09/14/2020 · Hellinger KL-UCB based Bandit Algorithms for Markovian and i.i.d. Settings
In the regret-based formulation of multi-armed bandit (MAB) problems, ex...

07/19/2018 · An Optimal Algorithm for Stochastic and Adversarial Bandits
We provide an algorithm that achieves the optimal (up to constants) fini...

05/26/2022 · Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment
Multi-armed bandit (MAB) is a classic model for understanding the explor...
