Differentially Private Decentralized Learning

06/14/2020
by Shangwei Guo, et al.

Decentralized learning has received great attention for its high efficiency and performance. In such systems, every participant constantly exchanges parameters with the others to train a shared model, which puts its private training data at risk of leakage. Differential Privacy (DP) has been adopted to enhance the Stochastic Gradient Descent (SGD) algorithm, but existing approaches mainly focus on single-party learning or on centralized learning in the synchronous mode. In this paper, we design a novel DP-SGD algorithm for decentralized learning systems. The key contribution of our solution is a topology-aware optimization strategy, which leverages the unique network characteristics of decentralized systems to effectively reduce the noise scale and improve model usability. In addition, we design a novel learning protocol for both synchronous and asynchronous decentralized systems by restricting the sensitivity of the SGD algorithm and maximizing the noise reduction. We formally analyze and prove that our proposed algorithms satisfy the DP requirement. Experimental evaluations demonstrate that our algorithm achieves a better trade-off between usability and privacy than prior works.
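To make the mechanics concrete, below is a minimal sketch (Python with NumPy) of the two standard DP-SGD building blocks the abstract refers to: clipping each per-sample gradient to bound the algorithm's sensitivity, then adding Gaussian noise calibrated to that bound, followed by a gossip-style parameter average typical of decentralized systems. The function names, hyperparameters, and the gossip step are illustrative assumptions; the sketch does not reproduce the paper's topology-aware noise-reduction strategy.

import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    # Standard DP-SGD recipe (illustrative): clip each per-sample gradient
    # to L2 norm <= clip_norm, which bounds the sensitivity of the average.
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise std scales with the sensitivity
    # (clip_norm / batch size) times a privacy-dependent multiplier.
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return params - lr * (avg + rng.normal(0.0, sigma, size=avg.shape))

def gossip_average(own_params, neighbor_params):
    # Hypothetical decentralized exchange: average the noise-protected
    # parameter vectors received from neighbors.
    return np.mean([own_params, *neighbor_params], axis=0)

Intuitively, averaging several neighbors' independently noised parameters shrinks the effective noise variance; exploiting such network effects to safely scale down the injected noise is the kind of topology-aware optimization the abstract describes.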


