SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication

10/25/2022 ∙ by Marco Bornstein, et al.
The decentralized Federated Learning (FL) setting avoids the role of a potentially unreliable or untrustworthy central host by using groups of clients to collaboratively train a model via localized training and model/gradient sharing. Most existing decentralized FL algorithms require synchronization of client models, where the speed of synchronization depends upon the slowest client. In this work, we propose SWIFT: a novel wait-free decentralized FL algorithm that allows clients to conduct training at their own speed. Theoretically, we prove that SWIFT matches the gold-standard iteration convergence rate O(1/√T) of parallel stochastic gradient descent for convex and non-convex smooth optimization (with T total iterations). Furthermore, we provide theoretical results for IID and non-IID settings without any bounded-delay assumption on slow clients, an assumption required by other asynchronous decentralized FL algorithms. Although SWIFT achieves the same iteration convergence rate with respect to T as other state-of-the-art (SOTA) parallel stochastic algorithms, it converges faster with respect to run-time due to its wait-free structure. Our experimental results demonstrate that SWIFT's run-time is reduced by a large drop in communication time per epoch, which falls by an order of magnitude compared to synchronous counterparts. Furthermore, SWIFT reaches target loss levels for image classification, over IID and non-IID data settings, upwards of 50% faster than existing SOTA algorithms.
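To make the wait-free idea concrete, here is a minimal single-process sketch of the pattern the abstract describes: each client takes local gradient steps and averages its model with whatever neighbor models are already sitting in its receive buffer, never blocking on a slow client. This is an illustrative toy (scalar models, quadratic local losses, a `Client` class and `run` helper invented for this sketch), not the SWIFT algorithm as specified in the paper.

```python
import random

class Client:
    """Toy client with a scalar model and local loss f_i(x) = 0.5*(x - target_i)^2."""

    def __init__(self, target, lr=0.1):
        self.x = 0.0          # local model parameter
        self.target = target  # minimizer of this client's local loss
        self.lr = lr
        self.buffer = {}      # most recent model received from each neighbor

    def local_step(self):
        grad = self.x - self.target   # gradient of 0.5*(x - target)^2
        self.x -= self.lr * grad

    def communicate(self, neighbors):
        # Push the current model into neighbors' buffers (non-blocking in spirit).
        for n in neighbors:
            n.buffer[id(self)] = self.x
        # Wait-free averaging: use whatever neighbor models are already
        # buffered, possibly stale, instead of waiting for fresh ones.
        models = [self.x] + list(self.buffer.values())
        self.x = sum(models) / len(models)

def run(targets, steps=200, seed=0):
    """Simulate clients acting in a random order to mimic heterogeneous speeds."""
    rng = random.Random(seed)
    clients = [Client(t) for t in targets]
    for _ in range(steps):
        for c in rng.sample(clients, len(clients)):
            c.local_step()
            c.communicate([n for n in clients if n is not c])
    return [c.x for c in clients]

models = run([0.0, 1.0, 2.0])
print(models)  # each model should end up close to the global optimum 1.0
```

With local optima at 0.0, 1.0, and 2.0, the global objective (the average of the local losses) is minimized at 1.0, and the buffered averaging drives all clients toward consensus near that point even though no client ever waits on another.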

Related research:

∙ VAFL: a Method of Vertical Asynchronous Federated Learning (07/12/2020)
  Horizontal Federated learning (FL) handles multi-client data that share ...

∙ Pisces: Efficient Federated Learning via Guided Asynchronous Training (06/18/2022)
  Federated learning (FL) is typically performed in a synchronous parallel...

∙ FLSTRA: Federated Learning in Stratosphere (02/01/2023)
  We propose a federated learning (FL) in stratosphere (FLSTRA) system, wh...

∙ Mobility Improves the Convergence of Asynchronous Federated Learning (06/09/2022)
  This paper studies asynchronous Federated Learning (FL) subject to clien...

∙ DFedADMM: Dual Constraints Controlled Model Inconsistency for Decentralized Federated Learning (08/16/2023)
  To address the communication burden issues associated with federated lea...

∙ A Multi-Token Coordinate Descent Method for Semi-Decentralized Vertical Federated Learning (09/18/2023)
  Communication efficiency is a major challenge in federated learning (FL)...

∙ Taming Fat-Tailed ("Heavier-Tailed" with Potentially Infinite Variance) Noise in Federated Learning (10/03/2022)
  A key assumption in most existing works on FL algorithms' convergence an...
