Decentralized Federated Learning: Balancing Communication and Computing Costs

07/26/2021
by   Wei Liu, et al.

Decentralized federated learning (DFL) is a powerful framework for distributed machine learning, and decentralized stochastic gradient descent (SGD) is its driving engine. The performance of decentralized SGD is jointly determined by communication efficiency and convergence rate. In this paper, we propose a general decentralized federated learning framework that strikes a balance between communication efficiency and convergence performance. The proposed framework periodically performs both multiple local updates and multiple inter-node communications, thereby unifying traditional decentralized SGD methods. We establish strong convergence guarantees for the proposed DFL algorithm without assuming a convex objective function. Balancing the numbers of communication and computation rounds is essential for optimizing decentralized federated learning under constrained communication and computation resources. To further improve the communication efficiency of DFL, we apply compressed communication, yielding DFL with compressed communication (C-DFL). The proposed C-DFL exhibits linear convergence for strongly convex objectives. Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods and show that C-DFL further enhances communication efficiency.
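To make the periodic structure described above concrete, the following is a minimal NumPy sketch of one DFL period: each node first runs several local SGD steps, then the network performs several gossip-averaging rounds with a doubly stochastic mixing matrix. This is not the authors' implementation; the function names, the ring topology, the toy quadratic objectives, and all hyper-parameters are illustrative assumptions. Compression (as in C-DFL) would be applied to the parameters exchanged in the communication phase, which is omitted here.

```python
import numpy as np

def dfl_round(params, grads_fn, mixing_matrix, lr=0.1, local_steps=5, comm_rounds=2):
    """One hypothetical DFL period: local computation followed by gossip communication.

    params        : (n_nodes, dim) array, one parameter vector per node
    grads_fn      : callable (node_index, w) -> stochastic gradient at w
    mixing_matrix : (n_nodes, n_nodes) doubly stochastic matrix matching the topology
    """
    n_nodes = params.shape[0]

    # Phase 1: multiple local SGD updates (computation rounds)
    for _ in range(local_steps):
        for i in range(n_nodes):
            params[i] -= lr * grads_fn(i, params[i])

    # Phase 2: multiple inter-node communications (gossip averaging rounds)
    for _ in range(comm_rounds):
        params = mixing_matrix @ params

    return params


if __name__ == "__main__":
    # Toy per-node objective: f_i(w) = 0.5 * ||w - c_i||^2, with noisy gradients
    rng = np.random.default_rng(0)
    n_nodes, dim = 4, 3
    targets = rng.normal(size=(n_nodes, dim))

    def grads_fn(i, w):
        return (w - targets[i]) + 0.01 * rng.normal(size=w.shape)

    # Ring topology with simple doubly stochastic mixing weights
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        W[i, i] = 0.5
        W[i, (i - 1) % n_nodes] = 0.25
        W[i, (i + 1) % n_nodes] = 0.25

    params = np.zeros((n_nodes, dim))
    for _ in range(50):
        params = dfl_round(params, grads_fn, W, lr=0.2, local_steps=5, comm_rounds=2)

    print("consensus estimate:", params.mean(axis=0))
    print("optimum (mean of targets):", targets.mean(axis=0))
```

Varying local_steps and comm_rounds in this sketch is the knob the paper studies: more local steps reduce communication cost per period, while more gossip rounds tighten consensus and improve convergence.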


