MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models

09/11/2020, by Xiyang Liu, et al.

Generative models are widely used for publishing synthetic datasets. Despite practical successes, recent work has shown that some generative models may leak the privacy of the data used during training. Membership inference attacks aim to determine whether a sample was used in the training set, given query access to the model API. Despite recent work in this area, many of the attacks designed against generative models require very specific attributes from the learned model (e.g., discriminator scores or generated images). Furthermore, many of these attacks are heuristic and do not provide effective bounds on the privacy loss. In this work, we formally study the membership privacy leakage risk of generative models. Specifically, we formulate membership privacy as a statistical divergence between training samples and hold-out samples, and propose sample-based methods to estimate this divergence. Unlike previous works, our proposed metric and estimators make realistic and flexible assumptions. First, we use a generalizable metric as an alternative to accuracy, since practical model training often leads to imbalanced train/hold-out splits. Second, our estimators can estimate the statistical divergence from any scalar- or vector-valued attribute of the learned model, rather than relying on very specific attributes. Furthermore, we show a connection to differential privacy, which allows our estimators to provide a data-driven certificate of the privacy budget needed for differentially private generative models. We demonstrate the utility of our framework through experiments on different generative models and model attributes, yielding new insights about membership leakage and model vulnerabilities.
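To make the sample-based estimation idea concrete, the sketch below is a minimal illustration, not the estimator proposed in the paper: it computes a simple plug-in lower bound on the total variation distance between the distributions of a single scalar model attribute (e.g., a discriminator score or reconstruction error) collected on training and hold-out samples. The attribute values, sample sizes, and function name are hypothetical.

```python
import numpy as np

def tv_lower_bound(train_attr, holdout_attr):
    """Plug-in lower bound on the total variation distance between the
    distributions of a scalar model attribute on training vs. hold-out
    samples. Sweeps every observed value as a threshold and returns the
    largest gap between the two empirical CDFs (a two-sample
    Kolmogorov-Smirnov style statistic), which lower-bounds the true TV
    distance because thresholds only cover half-line events.
    """
    train_attr = np.asarray(train_attr, dtype=float)
    holdout_attr = np.asarray(holdout_attr, dtype=float)
    thresholds = np.union1d(train_attr, holdout_attr)
    # Empirical CDFs of both groups evaluated at every candidate threshold.
    cdf_train = np.searchsorted(np.sort(train_attr), thresholds,
                                side="right") / len(train_attr)
    cdf_holdout = np.searchsorted(np.sort(holdout_attr), thresholds,
                                  side="right") / len(holdout_attr)
    return float(np.max(np.abs(cdf_train - cdf_holdout)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical attribute values: members score slightly higher on average,
    # and the train/hold-out split is imbalanced, as is common in practice.
    members = rng.normal(loc=0.3, scale=1.0, size=2000)
    non_members = rng.normal(loc=0.0, scale=1.0, size=500)
    print("estimated divergence:", tv_lower_bound(members, non_members))
```

A divergence-style estimate like this is threshold-free and remains meaningful under imbalanced splits, unlike raw attack accuracy; the paper's actual estimators and their link to differential privacy budgets are developed in the full text.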


Related research

06/07/2019 · Reconstruction and Membership Inference Attacks against Generative Models
We present two information leakage attacks that outperform previous work...

05/31/2022 · Generative Models with Information-Theoretic Protection Against Membership Inference Attacks
Deep generative models, such as Generative Adversarial Networks (GANs), ...

02/24/2023 · Membership Inference Attacks against Synthetic Data through Overfitting Detection
Data is the foundation of most science. Unfortunately, sharing data can ...

03/04/2021 · On the privacy-utility trade-off in differentially private hierarchical text classification
Hierarchical models for text classification can leak sensitive or confid...

06/13/2022 · Assessing Privacy Leakage in Synthetic 3-D PET Imaging using Transversal GAN
Training computer-vision related algorithms on medical images for diseas...

06/09/2020 · On the Effectiveness of Regularization Against Membership Inference Attacks
Deep learning models often raise privacy concerns as they leak informati...

02/02/2023 · Are Diffusion Models Vulnerable to Membership Inference Attacks?
Diffusion-based generative models have shown great potential for image s...
