A(DP)^2SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent with Differential Privacy

08/21/2020
by   Jie Xu, et al.

As deep learning models are usually massive and complex, distributed learning is essential for increasing training efficiency. Moreover, in many real-world application scenarios such as healthcare, distributed learning can also keep the data local and protect privacy. A popular distributed learning strategy is federated learning, in which a central server stores the global model and a set of local computing nodes update the model parameters with their own data. The updated model parameters are processed and transmitted to the central server, which leads to heavy communication costs. Recently, asynchronous decentralized distributed learning has been proposed and demonstrated to be a more efficient and practical strategy: there is no central server, and each computing node communicates only with its neighbors. Although no raw data are transmitted across local nodes, there is still a risk of information leakage during communication that malicious participants can exploit to mount attacks. In this paper, we present a differentially private version of the asynchronous decentralized parallel SGD (ADPSGD) framework, or A(DP)^2SGD for short, which retains the communication efficiency of ADPSGD and prevents inference attacks by malicious participants. Specifically, Rényi differential privacy is used to provide a tighter privacy analysis for our composite Gaussian mechanisms, while the convergence rate remains consistent with the non-private version. Theoretical analysis shows that A(DP)^2SGD converges at the optimal 𝒪(1/√(T)) rate, the same as SGD. Empirically, A(DP)^2SGD achieves comparable model accuracy to the differentially private version of synchronous SGD (SSGD) but runs much faster than SSGD in heterogeneous computing environments.
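The combination described in the abstract can be sketched in a few lines: each worker clips per-sample gradients, perturbs the averaged gradient with Gaussian noise (the Gaussian mechanism whose composition the paper analyzes via Rényi differential privacy), and then performs ADPSGD-style pairwise model averaging with a randomly chosen neighbor. The following is a minimal illustrative sketch, not the paper's implementation; the function names, the clipping bound clip_norm, the noise multiplier sigma, the learning rate lr, and the 0.5 mixing weight are assumptions made here for illustration.

import numpy as np

def dp_local_step(w, per_sample_grads, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    # One differentially private local SGD step: clip each per-sample gradient
    # to clip_norm, average, and add Gaussian noise scaled to the clipping bound.
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / len(per_sample_grads), size=g_bar.shape)
    return w - lr * (g_bar + noise)

def gossip_average(w_self, w_neighbor):
    # ADPSGD-style pairwise averaging: the worker and one randomly chosen
    # neighbor both adopt the average of their current model parameters.
    w_avg = 0.5 * (w_self + w_neighbor)
    return w_avg, w_avg

# Toy usage: two workers, a 3-dimensional model, and random stand-in "gradients".
rng = np.random.default_rng(0)
w0, w1 = rng.normal(size=3), rng.normal(size=3)
grads0 = [rng.normal(size=3) for _ in range(8)]
w0 = dp_local_step(w0, grads0, rng=rng)
w0, w1 = gossip_average(w0, w1)

In the actual asynchronous setting, the pairwise averaging overlaps with local computation rather than waiting for a global synchronization barrier, which is what gives ADPSGD its speed advantage over synchronous SGD on heterogeneous hardware.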


