The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

11/17/2019
by Yuheng Zhang, et al.

This paper studies model-inversion attacks, in which access to a model is abused to infer information about the training data. Since their introduction by <cit.>, such attacks have raised serious concerns, given that training data usually contain privacy-sensitive information. Thus far, successful model-inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression; previous attempts to invert neural networks, even ones with simple architectures, have failed to produce convincing results. Here we present a novel attack method, termed the generative model-inversion attack, which can invert deep neural networks with high success rates. Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs), and we use this prior to guide the inversion process. Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are two sides of the same coin: highly predictive models are able to establish a strong correlation between features and labels, which is exactly what an adversary exploits to mount the attacks. Our extensive experiments demonstrate that the proposed attack improves identification accuracy over existing work by about 75% when reconstructing face images from a state-of-the-art face recognition classifier. We also show that differential privacy, in its canonical form, is of little avail in defending against our attacks.
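To make the attack concrete, the sketch below shows the core idea in PyTorch: rather than optimizing raw pixels, the adversary searches the latent space of a GAN trained on public data, jointly minimizing a prior loss (which keeps candidates on the public-data manifold) and an identity loss (which drives the target classifier toward the attacked label). This is a minimal illustration, not the authors' implementation; the generator `G`, discriminator `D`, target classifier `net`, the `latent_dim` attribute, and all hyperparameter values are assumptions made for the example.

```python
# Minimal sketch of a GAN-guided model-inversion attack (illustrative only;
# not the authors' code). G and D are a generator/discriminator pre-trained
# on public data; `net` is the target classifier whose training data we
# attempt to reconstruct.
import torch
import torch.nn.functional as F

def invert(net, G, D, target_label, steps=1500, lr=0.02, lam=100.0):
    """Search the GAN latent space for an input that `net` classifies as
    `target_label` while the discriminator still deems it realistic."""
    z = torch.randn(1, G.latent_dim, requires_grad=True)  # assumed attribute
    opt = torch.optim.SGD([z], lr=lr, momentum=0.9)
    label = torch.tensor([target_label])
    for _ in range(steps):
        opt.zero_grad()
        x = G(z)                           # candidate image from the prior
        prior_loss = -D(x).mean()          # keep x on the public-data manifold
        identity_loss = F.cross_entropy(net(x), label)  # match attacked label
        (prior_loss + lam * identity_loss).backward()   # weighting illustrative
        opt.step()
    return G(z).detach()
```

The key design choice is that gradient descent runs over the latent code z rather than over the image itself, so every candidate the optimizer visits is a realistic sample under the learned public-data prior.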


Related research

- Differentially Private Data Generative Models (12/06/2018): Deep neural networks (DNNs) have recently been widely adopted in various...
- Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks (01/28/2022): Model inversion attacks (MIAs) aim to create synthetic images that refle...
- Variational Model Inversion Attacks (01/26/2022): Given the ubiquity of deep neural networks, it is important that these m...
- Reconstructing Training Data with Informed Adversaries (01/13/2022): Given access to a machine learning model, can an adversary reconstruct t...
- Membership Model Inversion Attacks for Deep Networks (10/09/2019): With the increasing adoption of AI, inherent security and privacy vulner...
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization (09/11/2020): This paper studies defense mechanisms against model inversion (MI) attac...
- Reconstructing Training Data from Diverse ML Models by Ensemble Inversion (11/05/2021): Model Inversion (MI), in which an adversary abuses access to a trained M...
