
Differentially Private Dropout

by Beyza Ermis et al.

Large data collections required for training neural networks often contain sensitive information, such as the medical histories of patients, so the privacy of the training data must be preserved. In this paper, we introduce a dropout technique with an elegant Bayesian interpretation, and show that the noise it intrinsically injects, whose primary purpose is regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of neural network training poses a challenge for privacy-preserving estimation, since the privacy loss accumulates over iterations and more noise must be added to compensate. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter bounds on the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.
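The abstract's two ingredients, multiplicative Gaussian dropout noise and concentrated-DP accounting over many training iterations, can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: all function names, clipping thresholds, and noise parameters are illustrative assumptions. It uses the standard zCDP facts that a Gaussian mechanism with L2 sensitivity Δ and noise scale σ satisfies ρ = Δ²/(2σ²)-zCDP, that ρ values add under composition, and that ρ-zCDP implies (ρ + 2√(ρ ln(1/δ)), δ)-differential privacy.

```python
import math
import numpy as np

def clip_gradient(grad, clip_norm):
    """Clip a per-example gradient to bound its L2 sensitivity (illustrative)."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def gaussian_dropout(weights, alpha, rng):
    """Multiplicative Gaussian 'dropout' noise: scale each weight by N(1, alpha)."""
    return weights * rng.normal(1.0, math.sqrt(alpha), size=weights.shape)

def zcdp_rho_per_step(sensitivity, sigma):
    """Gaussian mechanism under zCDP: rho = Delta^2 / (2 * sigma^2)."""
    return sensitivity**2 / (2.0 * sigma**2)

def zcdp_to_dp(rho, delta):
    """Convert rho-zCDP to an (eps, delta)-DP guarantee."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Illustrative accounting: 100 iterations, sensitivity 1 (via clipping), sigma 10.
steps, sensitivity, sigma, delta = 100, 1.0, 10.0, 1e-5
rho_total = steps * zcdp_rho_per_step(sensitivity, sigma)  # rho values simply add
eps = zcdp_to_dp(rho_total, delta)
print(f"total rho = {rho_total:.3f}, eps at delta={delta} is {eps:.2f}")
```

The point of the zCDP route is visible in the accounting: composing the per-step ρ values linearly and converting to (ε, δ) once at the end gives a tighter overall bound than summing per-step ε values under basic (ε, δ) composition.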




Differentially Private Variational Dropout

Deep neural networks with their large number of parameters are highly fl...

A note on privacy preserving iteratively reweighted least squares

Iteratively reweighted least squares (IRLS) is a widely-used method in m...

Privacy-Preserving Distributed Deep Learning for Clinical Data

Deep learning with medical data often requires larger sample sizes than...

Review of Different Privacy Preserving Techniques in PPDP

Big data is a term used for very large data sets that have many diffic...

Variational Bayes In Private Settings (VIPS)

We provide a general framework for privacy-preserving variational Bayes ...

To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout

Training deep belief networks (DBNs) requires optimizing a non-convex fu...

Private Topic Modeling

We develop a privatised stochastic variational inference method for Late...