Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models

08/13/2021, by Runzhe Wan, et al.
How to explore efficiently is a central problem in multi-armed bandits. In this paper, we introduce the metadata-based multi-task bandit problem, where the agent needs to solve a large number of related multi-armed bandit tasks and can leverage task-specific features (i.e., metadata) to share knowledge across tasks. As a general framework, we propose to capture task relations through the lens of Bayesian hierarchical models, upon which a Thompson sampling algorithm is designed to efficiently learn task relations, share information, and minimize the cumulative regret. Two concrete examples, for Gaussian bandits and Bernoulli bandits, are carefully analyzed. The Bayes regret bound for Gaussian bandits clearly demonstrates the benefit of information sharing with our algorithm. The proposed method is further supported by extensive experiments.
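To make the idea concrete, here is a minimal sketch (not the paper's exact algorithm) of Thompson sampling for Gaussian bandits with a metadata-informed hierarchical prior. All names and numerical values are illustrative assumptions: each task has metadata `x`, arm means are drawn as `theta ~ N(x @ beta_a, sigma0^2)` around a shared metadata-to-mean mapping `beta`, and rewards are `N(theta, sigma^2)`. For simplicity the shared mapping is taken as known; the paper's algorithm would instead maintain a posterior over it and update it across tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 3                        # metadata dimension, number of arms (illustrative)
sigma0, sigma = 0.5, 1.0           # across-task prior std, reward noise std (assumed known)
beta = rng.normal(size=(K, d))     # shared metadata-to-mean mapping (assumed known here)

def thompson_task(x, horizon=200):
    """Run Thompson sampling on one task whose prior mean comes from metadata x."""
    theta = x @ beta.T + sigma0 * rng.normal(size=K)    # true arm means for this task
    mu = x @ beta.T                                     # posterior means; prior = metadata prediction
    prec = np.full(K, 1.0 / sigma0**2)                  # posterior precisions
    for _ in range(horizon):
        draw = mu + rng.normal(size=K) / np.sqrt(prec)  # sample plausible arm means
        a = int(np.argmax(draw))                        # act greedily on the sample
        r = theta[a] + sigma * rng.normal()             # observe noisy reward
        prec[a] += 1.0 / sigma**2                       # conjugate Gaussian update
        mu[a] += (r - mu[a]) / (sigma**2 * prec[a])
    return mu, prec

mu, prec = thompson_task(np.array([1.0, -0.5]))
```

The point of the hierarchy is visible in the prior: a new task starts from `x @ beta.T` rather than an uninformative mean, so information gathered on earlier tasks (through `beta`) reduces exploration on later ones.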

Related research:

- Efficient Training of Multi-task Neural Solver with Multi-armed Bandits (05/10/2023): Efficiently training a multi-task neural solver for various combinatoria...
- Multi-Task Learning for Contextual Bandits (05/24/2017): Contextual bandits are a form of multi-armed bandit in which the agent h...
- Practical Calculation of Gittins Indices for Multi-armed Bandits (09/11/2019): Gittins indices provide an optimal solution to the classical multi-armed...
- Statistical Consequences of Dueling Bandits (10/16/2021): Multi-Armed-Bandit frameworks have often been used by researchers to ass...
- Task Selection and Assignment for Multi-modal Multi-task Dialogue Act Classification with Non-stationary Multi-armed Bandits (09/18/2023): Multi-task learning (MTL) aims to improve the performance of a primary t...
- Sense-Bandits: AI-based Adaptation of Sensing Thresholds for Heterogeneous-technology Coexistence Over Unlicensed Bands (05/10/2021): In this paper, we present Sense-Bandits, an AI-based framework for distr...
- Hierarchical Bayesian Bandits (11/12/2021): Meta-, multi-task, and federated learning can be all viewed as solving s...
