Deep Learning with Gaussian Differential Privacy

by Zhiqi Bu et al.

Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work has therefore sought to train neural networks subject to privacy constraints specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed f-differential privacy [17] for a refined privacy analysis of training neural networks. Leveraging the appealing properties of f-differential privacy in handling composition and subsampling, this paper derives analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop sophisticated techniques as [3] did. Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis [3], which in turn suggests tuning certain parameters of neural networks for better prediction accuracy without violating the privacy budget. These theoretically derived improvements are confirmed by our experiments on a range of tasks in image classification, text classification, and recommendation systems.
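The analytically tractable expressions mentioned above can be illustrated with a short sketch. Under the Gaussian DP framework, the central-limit-theorem approximation gives the overall privacy parameter of noisy SGD as mu = p * sqrt(T * (exp(1/sigma^2) - 1)), where p is the subsampling rate, sigma the noise multiplier, and T the number of steps; a mu-GDP guarantee then converts to the tightest (eps, delta)-DP via the standard normal CDF. The helper names and the parameter values below are illustrative choices, not part of the paper:

```python
import math

def gdp_mu(sample_rate, sigma, steps):
    """CLT approximation of the GDP parameter mu for noisy SGD:
    mu = p * sqrt(T * (exp(1/sigma^2) - 1))."""
    return sample_rate * math.sqrt(steps * (math.exp(1.0 / sigma ** 2) - 1.0))

def gdp_to_dp_delta(mu, eps):
    """Convert a mu-GDP guarantee to the tightest delta at a given eps:
    delta = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2),
    where Phi is the standard normal CDF."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi(-eps / mu + mu / 2.0) - math.exp(eps) * phi(-eps / mu - mu / 2.0)

# Illustrative run: sampling rate 1%, noise multiplier 1, 10,000 steps.
mu = gdp_mu(sample_rate=0.01, sigma=1.0, steps=10_000)
delta = gdp_to_dp_delta(mu, eps=2.0)
```

Because delta is an explicit closed-form function of mu, one can tune the batch size, noise level, or number of epochs and immediately read off the effect on the privacy budget, which is the kind of parameter tuning the abstract alludes to.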




A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via f-Divergences

We derive the optimal differential privacy (DP) parameters of a mechanis...

Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion

Datasets containing sensitive information are often sequentially analyze...

Privacy-Preserving Distributed Deep Learning for Clinical Data

Deep learning with medical data often requires larger sample sizes than...

On Deep Learning with Label Differential Privacy

In many machine learning applications, the training data can contain hig...

Individual Privacy Accounting via a Renyi Filter

We consider a sequential setting in which a single dataset of individual...

Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

We propose an information-theoretic technique for analyzing privacy guar...

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

In this paper, we focus on developing a novel mechanism to preserve diff...

Code Repositories


Code to accompany the paper "Deep Learning with Gaussian Differential Privacy"

view repo


Code to accompany the paper "Deep Learning with Gaussian Differential Privacy"

view repo


Code to accompany the paper "Deep Learning with Gaussian Differential Privacy"

view repo