Generative Models with Information-Theoretic Protection Against Membership Inference Attacks

05/31/2022
by Parisa Hassanzadeh, et al.

Deep generative models, such as Generative Adversarial Networks (GANs), synthesize diverse, high-fidelity data samples by estimating the underlying distribution of high-dimensional data. Despite their success, GANs may disclose private information from the data they are trained on, making them susceptible to adversarial attacks such as membership inference attacks, in which an adversary aims to determine whether a record was part of the training set. We propose an information-theoretically motivated regularization term that prevents the generative model from overfitting to training data and encourages generalizability. We show that this penalty minimizes the Jensen-Shannon divergence between components of the generator trained on data with different membership, and that it can be implemented at low cost using an additional classifier. Our experiments on image datasets demonstrate that with the proposed regularization, which comes at only a small added computational cost, GANs are able to preserve privacy and generate high-quality samples that achieve better downstream classification performance compared to non-private and differentially private generative models.
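The abstract's penalty targets the Jensen-Shannon divergence between generator components trained on data with different membership. The paper's actual estimator is not reproduced here; as a minimal, self-contained illustration of the quantity being minimized, the sketch below computes the JS divergence between two discrete distributions (standing in for the output distributions of two hypothetical generator components) and uses it as a scalar penalty.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded above by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy stand-ins for the output distributions of generator components
# trained on two membership splits (hypothetical values, not from the paper).
p = [0.6, 0.3, 0.1]
q = [0.5, 0.3, 0.2]

# A small penalty means the two components are statistically close,
# which is the regularizer's goal: indistinguishable membership.
penalty = js_divergence(p, q)
```

In practice the JS divergence is not computed in closed form over samples; as the abstract notes, it can be estimated with an auxiliary classifier (the same density-ratio trick that underlies the standard GAN discriminator objective), with the classifier's loss serving as a proxy for the divergence.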


Related research

06/07/2019 - Reconstruction and Membership Inference Attacks against Generative Models
We present two information leakage attacks that outperform previous work...

06/03/2022 - On the Privacy Properties of GAN-generated Samples
The privacy implications of generative adversarial networks (GANs) are a...

09/11/2020 - MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models
Generative models are widely used for publishing synthetic datasets. Des...

12/31/2019 - Protecting GANs against privacy attacks by preventing overfitting
Generative Adversarial Networks (GANs) have made releasing of synthetic ...

05/24/2018 - Generative Model: Membership Attack, Generalization and Diversity
This paper considers membership attacks to deep generative models, which...

08/03/2021 - The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks
Deep Generative Models (DGMs) allow users to synthesize data from comple...

08/21/2019 - Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection
In this paper, we aim to understand the generalization properties of gen...
