DP-SIGNSGD: When Efficiency Meets Privacy and Robustness

05/11/2021
by Lingjuan Lyu, et al.

Federated learning (FL) has emerged as a promising collaboration paradigm by enabling a multitude of parties to construct a joint model without exposing their private training data. Three main challenges in FL are efficiency, privacy, and robustness. The recently proposed SIGNSGD with majority vote shows a promising direction to deal with efficiency and Byzantine robustness. However, there is no guarantee that SIGNSGD is privacy-preserving. In this paper, we bridge this gap by presenting an improved method called DP-SIGNSGD, which can meet all the aforementioned properties. We further propose an error-feedback variant of DP-SIGNSGD to improve accuracy. Experimental results on benchmark image datasets demonstrate the effectiveness of our proposed methods.
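The core idea the abstract describes — each party sends only the sign of a privatized gradient, and the server aggregates by element-wise majority vote — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the clipping threshold, noise scale `sigma`, and the Gaussian-noise-before-sign ordering are assumptions for demonstration.

```python
import numpy as np

def dp_sign(grad, clip=1.0, sigma=1.0, rng=None):
    # Hypothetical sketch of a DP sign update: clip the gradient's norm,
    # add Gaussian noise for differential privacy, then keep only the
    # element-wise sign (1-bit compression for communication efficiency).
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    noisy = clipped + rng.normal(0.0, sigma * clip, size=grad.shape)
    return np.sign(noisy)

def majority_vote(signs):
    # The server sums the workers' 1-bit vectors and takes the sign of the
    # total: an element-wise majority vote, so a single Byzantine worker
    # cannot flip a coordinate the honest majority agrees on.
    return np.sign(np.sum(signs, axis=0))
```

A quick usage example: with several workers holding similar gradients, the aggregated update is again a vector of signs, which each party applies with a small learning rate. The error-feedback variant mentioned in the abstract would additionally carry the residual between the true and compressed update into the next round, which is omitted here for brevity.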


02/25/2020

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees

Federated learning (FL) has emerged as a prominent distributed learning ...
02/15/2022

Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy

Federated learning (FL) that enables distributed clients to collaborativ...
02/15/2022

OLIVE: Oblivious and Differentially Private Federated Learning on Trusted Execution Environment

Differentially private federated learning (DP-FL) has received increasin...
10/22/2021

PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy

Federated Learning (FL) allows multiple participating clients to train m...
02/29/2020

Performance Analysis and Optimization in Privacy-Preserving Federated Learning

As a means of decentralized machine learning, federated learning (FL) ha...
04/29/2022

Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation

This paper aims at jointly addressing two seemly conflicting issues in f...