PriSampler: Mitigating Property Inference of Diffusion Models

06/08/2023
by Hailong Hu, et al.

Diffusion models have been remarkably successful in data synthesis. Such successes have also driven diffusion models to be applied to sensitive data, such as human face data, which can raise serious privacy concerns. In this work, we present the first systematic privacy study of property inference attacks against diffusion models, in which adversaries aim to extract sensitive global properties of the training set from a diffusion model, such as the proportion of training data that exhibits certain sensitive properties. Specifically, we consider the most practical attack scenario: adversaries are only allowed to obtain synthetic data. Under this realistic scenario, we evaluate property inference attacks on different types of samplers and diffusion models. A broad range of evaluations shows that various diffusion models and their samplers are all vulnerable to property inference attacks. Furthermore, a case study on off-the-shelf pre-trained diffusion models demonstrates the effectiveness of the attack in practice. Finally, we propose a new model-agnostic plug-in method, PriSampler, to mitigate property inference against diffusion models. PriSampler can be directly applied to well-trained diffusion models and supports both stochastic and deterministic sampling. Extensive experiments illustrate the effectiveness of our defense: it forces adversaries' inferred property proportions to be close to random guesses. PriSampler also significantly outperforms diffusion models trained with differential privacy in terms of both model utility and defense performance.
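
To make the threat model concrete: the abstract describes a black-box setting in which the adversary only obtains synthetic samples and tries to infer the proportion of the training set that holds a sensitive property. The sketch below is a rough illustration of how such an inference could be realized, not the paper's actual attack or defense. The names `sample_from_model` (a wrapper around an arbitrary diffusion-model sampler) and `property_classifier` (an attacker-trained classifier for the sensitive property, e.g., a facial attribute) are hypothetical placeholders.

```python
# Minimal sketch (assumed names, not the paper's method): estimate the
# proportion of a sensitive property using only synthetic samples.
import torch


@torch.no_grad()
def estimate_property_proportion(sample_from_model, property_classifier,
                                 n_samples=1000, batch_size=50, device="cpu"):
    """Black-box property inference sketch: classify synthetic images and
    report the observed positive rate as the inferred training-set proportion."""
    property_classifier.eval().to(device)
    positives, total = 0, 0
    while total < n_samples:
        n = min(batch_size, n_samples - total)
        images = sample_from_model(n).to(device)   # adversary sees only synthetic data
        logits = property_classifier(images)       # (n, 2) logits or (n,) scores
        preds = logits.argmax(dim=-1) if logits.ndim > 1 else (logits > 0).long()
        positives += int(preds.sum().item())
        total += n
    return positives / total                       # adversary's inferred proportion
```

Under this framing, a defense such as the proposed PriSampler would aim to make the proportion recovered from synthetic samples uninformative, i.e., close to a random guess, while preserving sample quality.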
