Multitask Bandit Learning through Heterogeneous Feedback Aggregation

10/29/2020
by Zhi Wang et al.

In many real-world applications, multiple agents seek to learn how to perform highly related yet slightly different tasks in an online bandit learning protocol. We formulate this problem as the ϵ-multi-player multi-armed bandit problem, in which a set of players concurrently interact with a set of arms, and for each arm, the reward distributions for all players are similar but not necessarily identical. We develop an upper confidence bound-based algorithm, RobustAgg(ϵ), that adaptively aggregates rewards collected by different players. In the setting where an upper bound on the pairwise similarities of reward distributions between players is known, we achieve instance-dependent regret guarantees that depend on the amenability of information sharing across players. We complement these upper bounds with nearly matching lower bounds. In the setting where pairwise similarities are unknown, we provide a lower bound, as well as an algorithm that trades off minimax regret guarantees for adaptivity to unknown similarity structure.
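To make the aggregation idea concrete, here is a minimal Python sketch of an epsilon-aware, UCB-style index. It is an illustrative reconstruction under assumptions, not the paper's RobustAgg(ϵ) pseudocode: the class name RobustAggUCB, the specific confidence widths, and the min-of-two-indices rule are choices made here for exposition. The sketch combines a player's own samples with samples pooled across all players, inflating the pooled confidence width by the known dissimilarity bound ϵ, and plays the arm whose tighter bound is largest.

```python
import math


class RobustAggUCB:
    """Illustrative epsilon-aware, UCB-style aggregation index (not the paper's exact algorithm)."""

    def __init__(self, n_arms, n_players, eps):
        self.eps = eps                      # known upper bound on pairwise dissimilarity
        self.n_arms = n_arms
        self.n_players = n_players
        # per-player pull counts and reward sums for each arm
        self.counts = [[0] * n_arms for _ in range(n_players)]
        self.sums = [[0.0] * n_arms for _ in range(n_players)]
        self.t = 0                          # global round counter

    def _index(self, total, count, width):
        # optimistic estimate: empirical mean plus confidence width
        if count == 0:
            return float("inf")
        return total / count + width

    def select_arm(self, player):
        self.t += 1
        best_arm, best_index = 0, -float("inf")
        for arm in range(self.n_arms):
            # index using only this player's own samples
            n_own = self.counts[player][arm]
            own_width = math.sqrt(2 * math.log(self.t) / n_own) if n_own else float("inf")
            own_index = self._index(self.sums[player][arm], n_own, own_width)

            # index using samples pooled across all players,
            # with the width inflated by eps to account for heterogeneity
            n_all = sum(self.counts[p][arm] for p in range(self.n_players))
            s_all = sum(self.sums[p][arm] for p in range(self.n_players))
            agg_width = (math.sqrt(2 * math.log(self.t) / n_all) + self.eps) if n_all else float("inf")
            agg_index = self._index(s_all, n_all, agg_width)

            # keep whichever of the two upper bounds is tighter
            index = min(own_index, agg_index)
            if index > best_index:
                best_arm, best_index = arm, index
        return best_arm

    def update(self, player, arm, reward):
        self.counts[player][arm] += 1
        self.sums[player][arm] += reward
```

Taking the smaller of the two indices is one way to capture the trade-off the abstract describes: pooled feedback shrinks the confidence width when other players have sampled an arm often, but it can carry bias up to ϵ, so aggregation only helps for arms that are amenable to information sharing.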


Related research

- Regional Multi-Armed Bandits (02/22/2018): We consider a variant of the classic multi-armed bandit problem where th...
- Regret Lower Bounds in Multi-agent Multi-armed Bandit (08/15/2023): Multi-armed Bandit motivates methods with provable upper bounds on regre...
- Thompson Sampling for Robust Transfer in Multi-Task Bandits (06/17/2022): We study the problem of online multi-task learning where the tasks are p...
- Solving Multi-Arm Bandit Using a Few Bits of Communication (11/11/2021): The multi-armed bandit (MAB) problem is an active learning framework tha...
- Decentralized Online Bandit Optimization on Directed Graphs with Regret Bounds (01/27/2023): We consider a decentralized multiplayer game, played over T rounds, with...
- Differentiable Linear Bandit Algorithm (06/04/2020): Upper Confidence Bound (UCB) is arguably the most commonly used method f...
