Differentially Private Data Generative Models

12/06/2018
by Qingrong Chen, et al.

Deep neural networks (DNNs) have recently been widely adopted in various applications, and such success is largely due to a combination of algorithmic breakthroughs, computation resource improvements, and access to large amounts of data. However, the large-scale data collections required for deep learning often contain sensitive information, raising many privacy concerns. Prior research has shown several successful attacks that infer sensitive information about training data, such as model inversion, membership inference, and generative adversarial network (GAN) based leakage attacks against collaborative deep learning. In this paper, to enable efficient learning while generating data with privacy guarantees and high utility, we propose a differentially private autoencoder-based generative model (DP-AuGM) and a differentially private variational autoencoder-based generative model (DP-VaeGM). We evaluate the robustness of both proposed models. We show that DP-AuGM can effectively defend against the model inversion, membership inference, and GAN-based attacks. We also show that DP-VaeGM is robust against the membership inference attack. We conjecture that the key to defending against the model inversion and GAN-based attacks is not differential privacy itself but the perturbation of the training data. Finally, we demonstrate that both DP-AuGM and DP-VaeGM can be easily integrated with real-world machine learning applications, such as machine learning as a service and federated learning, which are otherwise threatened by the membership inference attack and the GAN-based attack, respectively.
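The differential privacy guarantee in models like these is typically obtained by training the generative network with differentially private optimization, most commonly DP-SGD: each example's gradient is clipped to a fixed L2 norm, the clipped gradients are aggregated, and calibrated Gaussian noise is added before the parameter update. The sketch below illustrates only that aggregation step; the function name and parameter choices are our own illustration, not the paper's implementation.

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1, rng=None):
    """DP-SGD style gradient aggregation (Gaussian mechanism).

    Each per-example gradient is clipped to L2 norm `clip_norm`,
    the clipped gradients are summed, Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` is added, and the
    result is averaged over the batch.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is tied to the clipping bound, which caps any single
    # example's influence on the update (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: with noise_multiplier=0 the result is just the clipped mean.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.0])]
update = privatize_gradients(grads, clip_norm=1.0, noise_multiplier=0.0)
```

Because the clipping bound caps each example's contribution, the added noise can be calibrated (e.g., via the moments accountant) to yield a formal (epsilon, delta) guarantee for the trained generator.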

Related research

08/17/2022: An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models
Tabular data typically contains private and important information; thus,...

01/10/2022: Differentially Private Generative Adversarial Networks with Model Inversion
To protect sensitive data in training a Generative Adversarial Network (...

11/17/2019: The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
This paper studies model-inversion attacks, in which the access to a mod...

02/24/2017: Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Deep Learning has recently become hugely popular in machine learning, pr...

10/23/2020: Differentially Private Learning Does Not Bound Membership Inference
Training machine learning models on privacy-sensitive data has become a ...

01/05/2018: Differentially Private Releasing via Deep Generative Model
Privacy-preserving releasing of complex data (e.g., image, text, audio) ...

11/22/2021: Machine unlearning via GAN
Machine learning models, especially deep models, may unintentionally rem...
