Federated Multi-Armed Bandits

01/28/2021
by Chengshuai Shi, et al.

Federated multi-armed bandits (FMAB) is a new bandit paradigm that parallels the federated learning (FL) framework in supervised learning. It is inspired by practical applications in cognitive radio and recommender systems, and enjoys features that are analogous to FL. This paper proposes a general framework of FMAB and then studies two specific federated bandit models. We first study the approximate model where the heterogeneous local models are random realizations of the global model from an unknown distribution. This model introduces a new uncertainty of client sampling, as the global model may not be reliably learned even if the finite local models are perfectly known. Furthermore, this uncertainty cannot be quantified a priori without knowledge of the suboptimality gap. We solve the approximate model by proposing Federated Double UCB (Fed2-UCB), which constructs a novel "double UCB" principle accounting for uncertainties from both arm and client sampling. We show that gradually admitting new clients is critical in achieving an O(log(T)) regret while explicitly considering the communication cost. The exact model, where the global bandit model is the exact average of heterogeneous local models, is then studied as a special case. We show that, somewhat surprisingly, the order-optimal regret can be achieved independent of the number of clients with a careful choice of the update periodicity. Experiments using both synthetic and real-world datasets corroborate the theoretical analysis and demonstrate the effectiveness and efficiency of the proposed algorithms.
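
The "double UCB" principle described above combines two confidence widths: one for arm sampling and one for client sampling. The abstract does not spell out the exact statistics of Fed2-UCB, so the sketch below is only a minimal illustration of the idea under assumed notation: a hypothetical index that adds an arm-pull confidence term and a client-count confidence term to an estimated global mean. The function name, the constants c1 and c2, and the specific widths are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def double_ucb_index(mean_est, n_pulls, n_clients, t, c1=2.0, c2=2.0):
    """Illustrative 'double UCB' index (hypothetical form, not the paper's exact statistic).

    mean_est  : estimated global mean of an arm, averaged over sampled clients
    n_pulls   : number of times the arm has been pulled
    n_clients : number of clients admitted so far
    t         : current round, used in the logarithmic terms
    """
    # Width from arm sampling: shrinks as the arm is pulled more often.
    arm_width = np.sqrt(c1 * np.log(t) / max(n_pulls, 1))
    # Width from client sampling: shrinks as more clients are admitted,
    # reflecting the uncertainty in approximating the global model.
    client_width = np.sqrt(c2 * np.log(t) / max(n_clients, 1))
    return mean_est + arm_width + client_width


# Toy usage: pick the arm with the largest double-UCB index.
rng = np.random.default_rng(0)
means = rng.uniform(size=5)          # estimated global means of 5 arms
pulls = rng.integers(1, 50, size=5)  # pull counts per arm
indices = [double_ucb_index(m, n, n_clients=10, t=1000) for m, n in zip(means, pulls)]
best_arm = int(np.argmax(indices))
```

In this sketch the second width shrinks only as more clients are admitted, which mirrors the abstract's point that gradually admitting new clients is what keeps the client-sampling uncertainty under control.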

Related research

Federated Multi-armed Bandits with Personalization (02/25/2021)
A general framework of personalized federated multi-armed bandits (PF-MA...

Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach (03/18/2023)
Federated learning (FL) is an emerging machine learning (ML) paradigm us...

Federated X-Armed Bandit (05/30/2022)
This work establishes the first framework of federated 𝒳-armed bandit, w...

Multi-Armed Bandit Based Client Scheduling for Federated Learning (07/05/2020)
By exploiting the computing power and local data of distributed clients,...

Federated Multi-Armed Bandits Under Byzantine Attacks (05/09/2022)
Multi-armed bandits (MAB) is a simple reinforcement learning model where...

Federated Linear Contextual Bandits (10/27/2021)
This paper presents a novel federated linear contextual bandits model, w...

Communication Efficient Federated Learning for Generalized Linear Bandits (02/02/2022)
Contextual bandit algorithms have been recently studied under the federa...
