Regret Analysis for Hierarchical Experts Bandit Problem

08/11/2022
by   Qihan Guo, et al.

We study an extension of the standard bandit problem in which there are R layers of experts. Multi-layered experts make selections layer by layer, and only the experts in the last layer can play arms. The goal of the learning policy is to minimize the total regret in this hierarchical-experts setting. We first analyze a case in which the total regret grows linearly with the number of layers. We then focus on the case in which all experts play the Upper Confidence Bound (UCB) strategy and give several sublinear upper bounds for different circumstances. Finally, we design experiments that support the regret analysis for the general hierarchical UCB structure and show the practical significance of our theoretical results. This article offers many insights into designing reasonable hierarchical decision structures.
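To make the setting concrete, here is a minimal sketch of a hierarchical UCB structure as the abstract describes it: experts at each layer select among their children with UCB1, and only last-layer experts play (Bernoulli) arms. This is an illustrative reading of the setup, not the paper's exact algorithm; the class and function names (`UCBNode`, `play`) and the two-layer example with hypothetical arm means are my own assumptions.

```python
import math
import random

class UCBNode:
    """An expert that chooses among its children with the UCB1 rule."""
    def __init__(self, children):
        self.children = children            # UCBNode objects, or arm indices at the last layer
        self.counts = [0] * len(children)
        self.values = [0.0] * len(children)
        self.t = 0

    def select(self):
        self.t += 1
        for i, c in enumerate(self.counts): # play every child once before using the index
            if c == 0:
                return i
        return max(range(len(self.children)),
                   key=lambda i: self.values[i]
                       + math.sqrt(2 * math.log(self.t) / self.counts[i]))

    def update(self, i, reward):
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

def play(node, arm_means, rng):
    """Descend layer by layer; a leaf expert plays an arm and the reward
    propagates back up, updating every expert on the path."""
    i = node.select()
    child = node.children[i]
    if isinstance(child, UCBNode):
        reward = play(child, arm_means, rng)
    else:                                   # last layer: child is an arm index
        reward = 1.0 if rng.random() < arm_means[child] else 0.0
    node.update(i, reward)
    return reward

# Two-layer example (R = 2): a root expert over two leaf experts,
# each of which plays two of four arms with assumed means.
rng = random.Random(0)
arm_means = [0.9, 0.5, 0.4, 0.2]
leaves = [UCBNode([0, 1]), UCBNode([2, 3])]
root = UCBNode(leaves)
horizon = 5000
total = sum(play(root, arm_means, rng) for _ in range(horizon))
regret = horizon * max(arm_means) - total
```

Note that each expert only observes the rewards routed through it, so the root learns the *effective* value of each subtree while that subtree's own UCB policy is still changing; this interaction between layers is exactly what makes the regret analysis in the paper nontrivial.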


