
Federated Linear Contextual Bandits

by Ruiquan Huang, et al.

This paper presents a novel federated linear contextual bandits model, in which individual clients face different K-armed stochastic bandits coupled through common global parameters. By leveraging the geometric structure of the linear rewards, a collaborative algorithm called Fed-PE is proposed to cope with the heterogeneity across clients without exchanging local feature vectors or raw data. Fed-PE relies on a novel multi-client G-optimal design, and achieves near-optimal regret for both the disjoint and shared parameter cases with logarithmic communication costs. In addition, a new concept called collinearly-dependent policies is introduced, based on which a tight minimax regret lower bound for the disjoint parameter case is derived. Experiments demonstrate the effectiveness of the proposed algorithms on both synthetic and real-world datasets.
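To make the shared-parameter structure concrete, the sketch below simulates several clients, each running a simple UCB-style linear bandit over its own arm features while all rewards are generated by one common global parameter. This is a minimal illustration, not the Fed-PE algorithm itself: Fed-PE uses phased elimination with a multi-client G-optimal design and avoids exchanging feature-derived quantities, whereas this toy version has clients share aggregate sufficient statistics (all names and constants here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, M, T = 5, 6, 4, 2000          # dimension, arms, clients, rounds
theta_star = rng.normal(size=d)     # common global parameter (unknown to clients)
features = rng.normal(size=(M, K, d))  # each client's fixed arm feature vectors

# Per-client ridge-regression statistics: V = I + sum x x^T, b = sum r * x.
V = np.stack([np.eye(d)] * M)
b = np.zeros((M, d))

for t in range(T):
    for m in range(M):
        theta_hat = np.linalg.solve(V[m], b[m])   # local estimate
        X = features[m]
        Vinv = np.linalg.inv(V[m])
        # UCB arm choice (simplified; Fed-PE instead eliminates arms in phases
        # using a multi-client G-optimal exploration design).
        bonus = 0.5 * np.sqrt(np.einsum('kd,de,ke->k', X, Vinv, X))
        a = int(np.argmax(X @ theta_hat + bonus))
        x = X[a]
        r = x @ theta_star + 0.1 * rng.normal()   # linear reward, shared theta
        V[m] += np.outer(x, x)
        b[m] += r * x

# Toy "federation": clients upload only (V, b); the server never sees
# individual feature vectors or rewards.  (Fed-PE is stricter still and
# exchanges only local parameter estimates.)
V_global = V.sum(axis=0) - (M - 1) * np.eye(d)    # keep a single ridge term
b_global = b.sum(axis=0)
theta_global = np.linalg.solve(V_global, b_global)
print(np.linalg.norm(theta_global - theta_star))  # aggregation shrinks the error
```

Pooling the statistics across clients is what makes the coupling pay off: directions of the parameter space that one client under-explores are typically covered by another client's arm set.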
