The Effect of Training Parameters and Mechanisms on Decentralized Federated Learning based on MNIST Dataset

08/07/2021
by Zhuofan Zhang et al.

Federated Learning is an algorithm suited for training models on decentralized data, but its requirement of a central "server" node is a bottleneck. In this document, we first introduce the notion of Decentralized Federated Learning (DFL). We then experiment with several variations of the training setup: changing the model aggregation frequency, switching from independent and identically distributed (IID) dataset partitioning to non-IID partitioning with partial global sharing, using different optimization methods across clients, and breaking models into segments that are only partially shared. All experiments are run on the MNIST handwritten digits dataset. We observe that these altered training procedures are generally robust, albeit suboptimal. We also observe failures in training when the variance between model weights becomes too large. The open-source experiment code is available on GitHub at <https://github.com/zhzhang2018/DecentralizedFL>.
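As a rough illustration of the decentralized setup the abstract describes, the sketch below simulates server-free training where each client runs local gradient steps and periodically averages weights with its neighbours on a ring topology. This is a minimal assumption-laden toy, not the authors' implementation: it uses a synthetic least-squares task in place of MNIST, and the `Client` class, ring topology, and `aggregate_every` knob are illustrative names introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, ROUNDS, LOCAL_STEPS, LR = 20, 5, 50, 5, 0.1

# Synthetic least-squares task; every client draws data from the same
# distribution, mimicking the IID partitioning case (assumption: the
# paper uses MNIST here instead).
true_w = rng.normal(size=DIM)

class Client:
    def __init__(self):
        self.X = rng.normal(size=(100, DIM))
        self.y = self.X @ true_w + 0.1 * rng.normal(size=100)
        self.w = np.zeros(DIM)

    def local_step(self):
        # One gradient step on the local mean-squared-error loss.
        grad = self.X.T @ (self.X @ self.w - self.y) / len(self.y)
        self.w -= LR * grad

clients = [Client() for _ in range(N_CLIENTS)]

# Ring topology: each client talks only to its two neighbours, so no
# central "server" node is needed.
neighbours = {i: [(i - 1) % N_CLIENTS, (i + 1) % N_CLIENTS]
              for i in range(N_CLIENTS)}

aggregate_every = 5  # the "model aggregation frequency" knob
for r in range(1, ROUNDS + 1):
    for c in clients:
        for _ in range(LOCAL_STEPS):
            c.local_step()
    if r % aggregate_every == 0:
        # Synchronous gossip: average each model with its neighbours,
        # using a snapshot so all clients mix the same round's weights.
        snapshot = [c.w.copy() for c in clients]
        for i, c in enumerate(clients):
            stack = [snapshot[i]] + [snapshot[j] for j in neighbours[i]]
            c.w = np.mean(stack, axis=0)

print("max pairwise weight distance:",
      max(np.linalg.norm(a.w - b.w) for a in clients for b in clients))
```

Raising `aggregate_every` mimics the paper's aggregation-frequency experiments: with less frequent averaging, per-client weights can drift further apart before mixing, which is consistent with the abstract's observation that training fails when the variance between model weights becomes too large.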


