MoDeST: Bridging the Gap between Federated and Decentralized Learning with Decentralized Sampling

02/27/2023
by   Martijn de Vos, et al.

Federated and decentralized machine learning leverage end-user devices for privacy-preserving training of models at lower operating costs than within a data center. In a round of Federated Learning (FL), a random sample of participants trains locally, then a central server aggregates the local models to produce a single model for the next round. In a round of Decentralized Learning (DL), all participants train locally and then aggregate with their immediate neighbors, resulting in many local models with residual variance between them. On the one hand, FL's sampling and lower model variance provide lower communication costs and faster convergence. On the other hand, DL removes the need for a central server and distributes the communication costs more evenly amongst nodes, albeit at a larger total communication cost and slower convergence. In this paper, we present MoDeST: Mostly-Consistent Decentralized Sampling Training. MoDeST implements decentralized sampling, in which a random subset of nodes is responsible for training and aggregation every round: this provides the benefits of both FL and DL without their traditional drawbacks. Our evaluation of MoDeST on four common learning tasks (i) confirms convergence as fast as FL, (ii) shows a 3x-14x reduction in communication costs compared to DL, and (iii) demonstrates that MoDeST quickly adapts to nodes joining, leaving, or failing, even when 80% of the nodes fail.
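The round structure described above can be sketched in a few lines. This is a minimal, hypothetical simulation of decentralized sampling, not the authors' implementation: models are plain parameter lists, `local_train` is a stand-in for a real local training step, and aggregation is simple averaging over the sampled subset (FedAvg-style, but with no fixed central server role).

```python
import random

def local_train(model, node_data, lr=0.1):
    # Hypothetical local step: nudge each parameter toward the node's data.
    # A real system would run SGD on the node's local dataset instead.
    return [w - lr * (w - x) for w, x in zip(model, node_data)]

def modest_round(models, data, sample_size, rng):
    # Decentralized sampling: a random subset of nodes is responsible
    # for training and aggregation this round.
    sample = rng.sample(range(len(models)), sample_size)
    trained = [local_train(models[i], data[i]) for i in sample]
    # Aggregate the sampled nodes' models by parameter-wise averaging.
    aggregated = [sum(ws) / len(ws) for ws in zip(*trained)]
    # Every node adopts the aggregated model for the next round, so all
    # nodes stay (mostly) consistent, as in FL, without a central server.
    return [list(aggregated) for _ in models]
```

Because only `sample_size` nodes train and communicate per round, per-round cost scales with the sample rather than with the full population, which is the source of the communication savings over DL claimed in the abstract.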


