Generative Model: Membership Attack, Generalization and Diversity

05/24/2018
by Kin Sum Liu, et al.

This paper considers membership attacks on deep generative models, i.e., checking whether a given instance x was used in the training data. Membership attacks are an important topic, closely related to the privacy of training data, and most prior work has focused on supervised learning. In this paper we propose new methods to launch membership attacks against Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). The main idea is to train another neural network (the attacker network) to search for the seed that reproduces the target data x. The difference between the generated data and x is then used to decide whether x was in the training data. We extensively examine the similarities, correlations, and differences between membership attacks and model generalization, overfitting, and model diversity. On several datasets we show that our membership attacks are more effective than alternative methods.

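The attack described in the abstract can be sketched roughly as follows: given a trained generator, recover a latent seed whose output best matches the target x, and use the remaining reconstruction error as a membership score. The snippet below is a minimal illustrative sketch only; it directly optimizes the latent seed rather than training a separate attacker network as the paper does, and the generator interface, latent dimension, optimizer settings, and decision threshold are all assumptions, not details taken from the paper.

```python
# Minimal sketch of a latent-recovery membership score for a generative model.
# Assumptions (not from the paper): the generator G maps a latent vector to a
# sample, latent_dim, optimizer settings, and the decision threshold are all
# placeholders chosen for illustration.
import torch


def membership_score(G, x, latent_dim=100, steps=500, lr=0.01):
    """Optimize a latent seed z so that G(z) approximates the target x.

    Returns the final reconstruction error; a lower error suggests x is more
    likely to have been in the training set (the threshold must be calibrated
    on held-out data).
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)
        loss.backward()
        opt.step()
    return loss.item()


# Hypothetical usage with a pretrained generator and a target sample:
# score = membership_score(pretrained_generator, target_image.unsqueeze(0))
# predicted_member = score < calibrated_threshold
```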
Related research

06/07/2019  Reconstruction and Membership Inference Attacks against Generative Models
We present two information leakage attacks that outperform previous work...

09/09/2019  GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs
In recent years, the success of deep learning has carried over from disc...

07/13/2021  This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces
Recently, generative adversarial networks (GANs) have achieved stunning ...

05/31/2022  Generative Models with Information-Theoretic Protection Against Membership Inference Attacks
Deep generative models, such as Generative Adversarial Networks (GANs), ...

06/24/2022  Debiasing Learning for Membership Inference Attacks Against Recommender Systems
Learned recommender systems may inadvertently leak information about the...

09/18/2022  Membership Inference Attacks and Generalization: A Causal Perspective
Membership inference (MI) attacks highlight a privacy weakness in presen...

08/03/2021  The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks
Deep Generative Models (DGMs) allow users to synthesize data from comple...
