
A Communication-Efficient Adaptive Algorithm for Federated Learning under Cumulative Regret

by Sudeep Salgia, et al.

We consider the problem of online stochastic optimization in a distributed setting with M clients connected through a central server. We develop a distributed online learning algorithm that achieves order-optimal cumulative regret with low communication cost, measured as the total number of bits transmitted over the entire learning horizon. This contrasts with existing studies, which focus on the offline measure of simple regret for learning efficiency. The holistic measure of communication cost also departs from the prevailing approach of separately tackling the communication frequency and the number of bits per communication round.
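The abstract's bit-level accounting can be made concrete with a toy sketch (this is an illustration of the general idea, not the paper's algorithm): each client quantizes its local update to a few bits per coordinate before transmission, and the total bits sent over the horizon are tracked as a single running counter, rather than treating round frequency and per-round payload separately. All function names and parameters here are hypothetical.

```python
import numpy as np

def quantize(x, num_bits, lo=-1.0, hi=1.0):
    """Uniformly quantize each entry of x to num_bits bits within [lo, hi].

    Returns the integer codes (what would actually be transmitted)
    and the dequantized values the server reconstructs from them.
    """
    levels = 2 ** num_bits
    clipped = np.clip(x, lo, hi)
    step = (hi - lo) / (levels - 1)
    codes = np.round((clipped - lo) / step).astype(int)
    return codes, lo + codes * step

def one_round(client_updates, num_bits, bit_counter):
    """One client-to-server round: each of the M clients sends a
    quantized local update; the server averages the reconstructions.
    bit_counter accumulates total bits transmitted over the horizon.
    """
    recons = []
    for u in client_updates:
        codes, u_hat = quantize(u, num_bits)
        bit_counter[0] += num_bits * codes.size  # bits sent by this client
        recons.append(u_hat)
    return np.mean(recons, axis=0)

# Toy usage: M = 4 clients, d = 3 coordinates, 2-bit quantization.
rng = np.random.default_rng(0)
bits = [0]
avg = one_round([rng.uniform(-1, 1, 3) for _ in range(4)],
                num_bits=2, bit_counter=bits)
print(bits[0])  # 4 clients * 3 dims * 2 bits = 24 bits this round
```

Summing `bit_counter` across all rounds gives the holistic communication cost the abstract refers to; an adaptive scheme would then trade off `num_bits` and how often rounds occur against the cumulative regret.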

