Deep Learning with Differential Privacy

07/01/2016
by Martín Abadi, et al.
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
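The training algorithm the paper analyzes, differentially private SGD, clips each example's gradient to a fixed L2 norm bound, sums the clipped gradients, and adds Gaussian noise calibrated to that bound before taking a step. The following is a minimal illustrative sketch of that recipe for logistic regression, not the authors' implementation; all function and parameter names here are hypothetical.

```python
import math
import random

def dp_sgd_step(w, batch, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=random):
    """One DP-SGD step for logistic regression (illustrative sketch).

    Core recipe: per-example gradient clipping, Gaussian noise scaled
    to the clipping bound, then an averaged gradient step.
    """
    d = len(w)
    summed = [0.0] * d
    for x, y in batch:
        # Per-example gradient of the logistic loss.
        z = sum(wi * xi for wi, xi in zip(w, x))
        pred = 1.0 / (1.0 + math.exp(-z))
        g = [(pred - y) * xi for xi in x]
        # Clip each example's gradient to L2 norm <= clip_norm.
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = 1.0 / max(1.0, norm / clip_norm)
        for i in range(d):
            summed[i] += g[i] * scale
    # Add noise proportional to the clipping bound, then average.
    sigma = noise_mult * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    n = len(batch)
    return [wi - lr * ni / n for wi, ni in zip(w, noisy)]

# Hypothetical toy data: label each point by the sign of x0 - x1.
rng = random.Random(0)
points = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(32)]
data = [(x, 1.0 if x[0] - x[1] > 0 else 0.0) for x in points]
w = [0.0, 0.0]
for _ in range(100):
    w = dp_sgd_step(w, data, rng=rng)
```

The noise multiplier and clipping norm trade off privacy against accuracy; the paper's "refined analysis" (the moments accountant) tracks how the privacy cost of many such noisy steps composes, which this sketch does not attempt to reproduce.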

Related research

- Differentially Private Model Publishing for Deep Learning (04/03/2019): Deep learning techniques based on neural networks have shown significant...
- On Deep Learning with Label Differential Privacy (02/11/2021): In many machine learning applications, the training data can contain hig...
- Opacus: User-Friendly Differential Privacy Library in PyTorch (09/25/2021): We introduce Opacus, a free, open-source PyTorch library for training de...
- Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning (09/18/2017): In this paper, we focus on developing a novel mechanism to preserve diff...
- Distributed Layer-Partitioned Training for Privacy-Preserved Deep Learning (04/12/2019): Deep Learning techniques have achieved remarkable results in many domain...
- Deep Learning with Gaussian Differential Privacy (11/26/2019): Deep learning models are often trained on datasets that contain sensitiv...
- Privacy-Preserving Distributed Deep Learning for Clinical Data (12/04/2018): Deep learning with medical data often requires larger samples sizes than...
