A Communication-Efficient Adaptive Algorithm for Federated Learning under Cumulative Regret

01/21/2023
by Sudeep Salgia, et al.

We consider the problem of online stochastic optimization in a distributed setting with M clients connected through a central server. We develop a distributed online learning algorithm that achieves order-optimal cumulative regret with low communication cost, measured as the total number of bits transmitted over the entire learning horizon. This is in contrast to existing studies, which focus on the offline measure of simple regret as the metric of learning efficiency. The holistic measure of communication cost also departs from the prevailing approach of separately treating the communication frequency and the number of bits per communication round.
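As a point of reference (a generic illustration, not notation taken from the paper), cumulative regret in such a setting is typically the total expected optimality gap accumulated across all M clients over a horizon of T rounds,

R(T) = \sum_{t=1}^{T} \sum_{m=1}^{M} \mathbb{E}\big[ f(x_{m,t}) - f(x^{*}) \big], \qquad x^{*} \in \arg\min_{x \in \mathcal{X}} f(x),

where x_{m,t} denotes the point queried by client m at round t. Simple regret, by contrast, scores only the final recommendation, e.g. \mathbb{E}[ f(\hat{x}_T) - f(x^{*}) ], and the communication cost considered here counts the total bits exchanged with the server over all T rounds rather than the number of communication rounds alone.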


Related research

08/10/2021
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2...

10/04/2021
Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits
Linear contextual bandit is a popular online learning problem. It has be...

11/30/2021
Online Learning for Receding Horizon Control with Provable Regret Guarantees
We address the problem of learning to control an unknown linear dynamica...

11/04/2022
Distributed Linear Bandits under Communication Constraints
We consider distributed linear bandits where M agents learn collaborativ...

11/14/2012
Distributed Non-Stochastic Experts
We consider the online distributed non-stochastic experts problem, where...

01/01/2021
Efficient Learning-based Scheduling for Information Freshness in Wireless Networks
Motivated by the recent trend of integrating artificial intelligence int...

11/13/2018
Community Exploration: From Offline Optimization to Online Learning
We introduce the community exploration problem that has many real-world ...