Tuning Confidence Bound for Stochastic Bandits with Bandit Distance

10/06/2021
by Xinyu Zhang, et al.

We propose a novel modification of the standard upper confidence bound (UCB) method for the stochastic multi-armed bandit (MAB) problem that tunes the confidence bound of a given bandit based on its distance to the others. Our UCB distance tuning (UCB-DT) formulation improves performance, as measured by expected regret, by preventing the MAB algorithm from fixating on non-optimal bandits, a well-known deficiency of standard UCB. "Distance tuning" of the standard UCB is done using a proposed distance measure, which we call bandit distance, that is parameterizable and can therefore be optimized to control the transition rate from exploration to exploitation based on problem requirements. We empirically demonstrate improved performance of UCB-DT over many existing state-of-the-art UCB-based methods for the MAB problem. Our contribution also includes a conceptual tool called the "Exploration Bargain Point", which gives insight into the tradeoff between exploration and exploitation. We argue that the Exploration Bargain Point provides an intuitive perspective that is useful for comparatively analyzing the performance of UCB-based methods.
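For context, the baseline the abstract modifies is the classical UCB rule: pull the arm maximizing the empirical mean plus a confidence term that shrinks as an arm is pulled more often. Below is a minimal, self-contained sketch of standard UCB1 on Bernoulli arms. It is illustrative only: the arm means, horizon, and exploration constant `c` are hypothetical choices, and the paper's actual distance-based tuning of the confidence term (UCB-DT) is not reproduced here, since its formula is not given in this abstract.

```python
import math
import random

def ucb1(means, horizon, c=2.0, seed=0):
    """Run UCB1 on Bernoulli arms with the given true means.

    At step t, pull the arm i maximizing
        empirical_mean_i + sqrt(c * ln(t) / n_i),
    where n_i is the number of pulls of arm i so far.
    UCB-DT (per the abstract) additionally tunes this confidence
    term using a "bandit distance" between arms; that modification
    is not implemented in this sketch.

    Returns the pull count of each arm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k       # n_i: pulls per arm
    sums = [0.0] * k       # cumulative reward per arm

    # Initialization: pull each arm once.
    for i in range(k):
        counts[i] = 1
        sums[i] = float(rng.random() < means[i])

    for t in range(k + 1, horizon + 1):
        # Empirical mean plus confidence radius for each arm.
        scores = [
            sums[i] / counts[i] + math.sqrt(c * math.log(t) / counts[i])
            for i in range(k)
        ]
        i = max(range(k), key=scores.__getitem__)
        counts[i] += 1
        sums[i] += float(rng.random() < means[i])

    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

With well-separated means and a moderate horizon, the best arm (mean 0.8) receives the large majority of pulls, while suboptimal arms are still sampled logarithmically often; this residual sampling of non-optimal arms is exactly the behavior the abstract says distance tuning is designed to curb.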

