Federated Learning with Buffered Asynchronous Aggregation

06/11/2021
by John Nguyen, et al.

Federated Learning (FL) trains a shared model across distributed devices while keeping the training data on the devices. Most FL schemes are synchronous: they perform a synchronized aggregation of model updates from individual devices. Synchronous training can be slow because of late-arriving devices (stragglers). On the other hand, completely asynchronous training makes FL less private because it is incompatible with secure aggregation. In this work, we propose a model aggregation scheme, FedBuff, that combines the best properties of synchronous and asynchronous FL. Similar to synchronous FL, FedBuff is compatible with secure aggregation. Similar to asynchronous FL, FedBuff is robust to stragglers. In FedBuff, clients train asynchronously and send updates to the server. The server aggregates client updates in a private buffer until a target number of updates has been received, at which point a server model update is immediately performed. We provide theoretical convergence guarantees for FedBuff in a non-convex setting. Empirically, FedBuff converges up to 3.8x faster than previous proposals for synchronous FL (e.g., FedAvgM), and up to 2.5x faster than previous proposals for asynchronous FL (e.g., FedAsync). We show that FedBuff is robust to different staleness distributions and is more scalable than synchronous FL techniques.
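
To make the aggregation loop concrete, the sketch below simulates buffered asynchronous aggregation in the spirit of FedBuff: clients compute local updates against a copy of the server model, the server accumulates the resulting deltas in a buffer, and a server update is applied as soon as the buffer reaches a threshold. This is an illustrative sketch rather than the authors' implementation; the buffer size K, the learning rates, the toy linear-regression task, and the helper names (client_update, run_fedbuff_sketch) are assumptions chosen for brevity, and secure aggregation and staleness correction are not modeled.

```python
"""Minimal sketch of buffered asynchronous aggregation (FedBuff-style).

Illustrative only: buffer size, learning rates, and the toy linear model
are assumptions, not values from the paper.
"""

import random
import numpy as np

DIM = 10            # model dimension (assumption)
K = 5               # buffer size: server updates after K client deltas (assumption)
CLIENT_LR = 0.1     # client SGD step size (assumption)
SERVER_LR = 1.0     # server step size (assumption)


def client_update(server_model, data_x, data_y, local_steps=3):
    """Run a few local SGD steps on a (possibly stale) copy of the server
    model and return the resulting model delta."""
    model = server_model.copy()
    for _ in range(local_steps):
        grad = 2 * data_x.T @ (data_x @ model - data_y) / len(data_y)
        model -= CLIENT_LR * grad
    return model - server_model


def run_fedbuff_sketch(num_clients=20, rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    true_model = rng.normal(size=DIM)

    # Each client holds a small private dataset (heterogeneity not modeled).
    datasets = []
    for _ in range(num_clients):
        x = rng.normal(size=(32, DIM))
        y = x @ true_model + 0.1 * rng.normal(size=32)
        datasets.append((x, y))

    server_model = np.zeros(DIM)
    buffer = []  # accumulated client deltas awaiting aggregation

    for _ in range(rounds):
        # Asynchrony is simulated by letting a random client report a delta;
        # in a real deployment the base model could be several versions stale.
        cid = random.randrange(num_clients)
        x, y = datasets[cid]
        buffer.append(client_update(server_model, x, y))

        if len(buffer) >= K:
            # Server update: average the buffered deltas and apply them.
            aggregated = np.mean(buffer, axis=0)
            server_model += SERVER_LR * aggregated
            buffer.clear()

    return server_model, true_model


if __name__ == "__main__":
    learned, target = run_fedbuff_sketch()
    print("distance to target model:", np.linalg.norm(learned - target))
```

Decoupling the clients' pace from the server step in this way is what lets stragglers contribute whenever they finish rather than blocking a synchronized round, while the buffer keeps individual updates from being applied (or exposed) one at a time.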


Related Research

11/08/2021
Papaya: Practical, Private, and Scalable Federated Learning
Cross-device Federated Learning (FL) is a distributed learning paradigm ...

06/18/2022
Pisces: Efficient Federated Learning via Guided Asynchronous Training
Federated learning (FL) is typically performed in a synchronous parallel...

07/23/2021
Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning
Federated Learning (FL) is a newly emerged decentralized machine learnin...

10/05/2021
Secure Aggregation for Buffered Asynchronous Federated Learning
Federated learning (FL) typically relies on synchronous training, which ...

02/12/2021
Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with Delayed Gradients
Federated learning (FL) is a new machine learning framework which trains...

06/21/2022
A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates
We propose a novel framework to study asynchronous federated learning op...

05/12/2022
Secure Aggregation for Federated Learning in Flower
Federated Learning (FL) allows parties to learn a shared prediction mode...