SecretGen: Privacy Recovery on Pre-Trained Models via Distribution Discrimination

07/25/2022
by Zhuowen Yuan, et al.

Transfer learning with pre-trained models has become a growing trend in the machine learning community, and numerous pre-trained models are released online to facilitate further research. However, this raises extensive concerns about whether these pre-trained models leak privacy-sensitive information from their training data. In this work, we therefore aim to answer the following questions: "Can we effectively recover private information from these pre-trained models? What are the sufficient conditions to retrieve such sensitive information?" We first explore different kinds of statistical information that can discriminate the private training distribution from other distributions. Based on our observations, we propose a novel private data reconstruction framework, SecretGen, to effectively recover private information. Compared with previous methods, which can only recover private data given the ground-truth prediction of the targeted recovery instance, SecretGen does not require such prior knowledge, making it more practical. We conduct extensive experiments on different datasets under diverse scenarios to compare SecretGen with other baselines, and provide a systematic benchmark to better understand the impact of different auxiliary information and optimization operations. We show that, without prior knowledge of the true class prediction, SecretGen recovers private data with performance similar to methods that leverage such prior knowledge. When the prior knowledge is given, SecretGen significantly outperforms baseline methods. We also propose several quantitative metrics to further quantify the privacy vulnerability of pre-trained models, which will help model selection for privacy-sensitive applications. Our code is available at: https://github.com/AI-secure/SecretGen.
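The abstract describes the core idea only at a high level: score candidate reconstructions by how strongly the target model discriminates them as belonging to its private training distribution, and keep the best-scoring ones. As a hedged, minimal sketch of that selection principle (not the paper's actual implementation, which uses a GAN-based generator and richer statistics), the toy code below uses a stand-in classifier and the maximum softmax confidence as the discrimination signal; all function names and the model here are hypothetical.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def toy_model(x):
    """Stand-in for a released pre-trained classifier (3 classes, 2 features).
    A real attack would query the actual pre-trained model instead."""
    w = [[1.0, -0.5], [-1.0, 1.0], [0.2, 0.3]]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def confidence_score(x):
    """Discrimination signal: maximum softmax probability. Inputs the model
    classifies with unusually high confidence are more likely to resemble
    its private training distribution."""
    return max(softmax(toy_model(x)))

def recover_candidate(num_candidates=200, seed=0):
    """Generate random candidates and keep the one the model is most
    confident about -- selection by distribution discrimination, without
    the generative prior a real reconstruction attack would use."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(num_candidates):
        x = [rng.uniform(-3.0, 3.0), rng.uniform(-3.0, 3.0)]
        s = confidence_score(x)
        if s > best_score:
            best, best_score = x, s
    return best, best_score
```

With enough random candidates, the selected input sits in a region where the stand-in model is far more confident than it is on an uninformative input such as the origin, which is the behavior the discrimination-based selection relies on.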


