
Federated Learning via Plurality Vote

by Kai Yue, et al.

Federated learning allows distributed workers to collaboratively solve a machine learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency remains an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, workers transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. We show that our proposed method reduces quantization error and converges faster than methods that directly quantize the model updates.
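The abstract describes two core steps: workers quantize their weights to binary or ternary values, and the server aggregates them by element-wise weighted voting. The paper's exact quantization and vote-weighting rules are not given in this abstract, so the sketch below is a minimal illustration under assumed conventions: a simple threshold rule for ternarization (the `threshold` value is hypothetical) and uniform worker weights by default.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize real-valued weights to {-1, 0, +1}.
    The threshold rule is a hypothetical stand-in for the paper's scheme."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

def plurality_vote(quantized_updates, worker_weights=None):
    """Element-wise weighted plurality vote over workers' ternary weights.

    quantized_updates: list of int8 arrays with values in {-1, 0, +1}.
    worker_weights: optional per-worker vote weights (defaults to uniform);
    down-weighting suspicious workers is one way to resist Byzantine updates.
    """
    updates = np.stack(quantized_updates)            # (num_workers, dim)
    if worker_weights is None:
        worker_weights = np.ones(len(quantized_updates))
    worker_weights = np.asarray(worker_weights, dtype=float)
    # Tally the weighted votes each candidate value receives per coordinate.
    candidates = np.array([-1, 0, 1], dtype=np.int8)
    tallies = np.stack([
        ((updates == v) * worker_weights[:, None]).sum(axis=0)
        for v in candidates
    ])                                               # (3, dim)
    # Ties break toward the first candidate (-1) via argmax.
    return candidates[tallies.argmax(axis=0)]
```

With two honest workers and one sign-flipping worker, the per-coordinate majority recovers the honest ternary weights, which is the intuition behind the Byzantine resilience claimed above.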



