DP-SIGNSGD: When Efficiency Meets Privacy and Robustness

by Lingjuan Lyu et al.

Federated learning (FL) has emerged as a promising collaboration paradigm that enables a multitude of parties to construct a joint model without exposing their private training data. Three main challenges in FL are efficiency, privacy, and robustness. The recently proposed SIGNSGD with majority vote offers a promising approach to efficiency and Byzantine robustness, but it provides no privacy guarantee. In this paper, we bridge this gap with an improved method called DP-SIGNSGD, which satisfies all three properties. We further propose an error-feedback variant of DP-SIGNSGD to improve accuracy. Experimental results on benchmark image datasets demonstrate the effectiveness of our proposed methods.
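The idea described above can be sketched in a few lines: each worker clips its gradient, perturbs it with Gaussian noise (the Gaussian mechanism for differential privacy), and transmits only the 1-bit signs; the server aggregates by elementwise majority vote, as in SIGNSGD. This is a minimal NumPy sketch under assumed parameter names (`clip`, `sigma`), not the paper's exact algorithm or noise calibration.

```python
import numpy as np

def dp_sign(grad, clip=1.0, sigma=1.0, rng=None):
    # Worker-side update: clip the gradient to bound sensitivity, add
    # Gaussian noise for privacy, then keep only the signs (1 bit/coord).
    # Parameter names are illustrative, not taken from the paper.
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    noisy = clipped + rng.normal(0.0, sigma * clip, size=grad.shape)
    return np.sign(noisy)

def majority_vote(sign_updates):
    # Server-side aggregation: elementwise majority vote over the
    # workers' sign vectors, which underlies SIGNSGD's Byzantine robustness.
    return np.sign(np.sum(sign_updates, axis=0))

# Toy round with 9 workers holding noisy copies of the same gradient.
rng = np.random.default_rng(0)
true_grad = np.array([0.5, -2.0, 3.0])
votes = [dp_sign(true_grad + rng.normal(0.0, 0.1, 3), sigma=0.5, rng=rng)
         for _ in range(9)]
step = majority_vote(votes)  # aggregated sign direction, entries in {-1, 0, 1}
```

The server then applies `step` scaled by a learning rate; with an odd number of honest workers and moderate noise, the vote tends to recover the sign of the true gradient in each coordinate.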



