Differentially Private Decentralized Deep Learning with Consensus Algorithms

06/24/2023
by Jasmine Bayrooti, et al.

Cooperative decentralized deep learning relies on direct information exchange between communicating agents, each with access to a local dataset that should be kept private. The goal is for all agents to reach consensus on model parameters after training. However, sharing parameters with untrustworthy neighboring agents could leak exploitable information about local datasets. To combat this, we introduce differentially private decentralized learning that secures each agent's local dataset during and after cooperative training. In our approach, we generalize Differentially Private Stochastic Gradient Descent (DP-SGD), a popular differentially private training method for centralized deep learning, to practical subgradient- and ADMM-based decentralized learning methods. Our algorithms' differential privacy guarantee holds for arbitrary deep learning objective functions, and we analyze their convergence properties for strongly convex objectives. We compare our algorithms against centrally trained models on standard classification tasks and evaluate the relationships between performance, privacy budget, graph connectivity, and degree of training data overlap among agents. We find that differentially private gradient tracking is resistant to performance degradation under sparse graphs and non-uniform data distributions. Furthermore, we show that it is possible to learn models with high accuracy, within 3% on MNIST under (1, 10^-5)-differential privacy and within 6% on CIFAR-100 under (10, 10^-5)-differential privacy, without ever sharing raw data with other agents. Open-source code is available at: https://github.com/jbayrooti/dp-dec-learning
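To make the mechanism concrete, below is a minimal Python sketch of one agent's update in a subgradient-based variant: a consensus averaging step over neighbors' parameters, followed by a DP-SGD-style step built from clipped and noised per-example gradients. The function name dp_decentralized_step, the uniform mixing weights, and all hyperparameter values are illustrative assumptions, not the authors' implementation (see the linked repository for that).

import numpy as np

def dp_decentralized_step(params, neighbor_params, per_example_grads,
                          step_size=0.1, clip_norm=1.0, noise_mult=1.1,
                          rng=None):
    # Hypothetical sketch of one agent's update, not the paper's code.
    if rng is None:
        rng = np.random.default_rng()

    # Consensus step: average own parameters with neighbors' parameters.
    # Practical methods weight this mixing with a doubly stochastic
    # matrix derived from the communication graph.
    mixed = np.mean([params] + list(neighbor_params), axis=0)

    # DP-SGD step: clip each per-example gradient to bound the update's
    # sensitivity to any single training example...
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]

    # ...then perturb the gradient sum with calibrated Gaussian noise
    # before averaging, so the parameters an agent shares reveal little
    # about any individual local example.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    private_grad = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)

    # Local gradient step taken from the mixed (consensus) parameters.
    return mixed - step_size * private_grad

As in standard DP-SGD, the cumulative privacy cost of repeating such a step over training is tracked with a privacy accountant, yielding (epsilon, delta) guarantees like the (1, 10^-5) and (10, 10^-5) budgets quoted above.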


