Variational Model Inversion Attacks

01/26/2022
by Kuan-Chieh Wang, et al.

Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
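To make the variational recipe concrete, below is a minimal PyTorch sketch of the idea: fit a Gaussian q(z) in a generator's latent space so that decoded samples G(z) are classified as the target class by the victim model, while an entropy-preserving term discourages q(z) from collapsing to a single point. The `generator` and `target_model` below are tiny stand-in networks (assumptions for illustration, not the authors' code), and the diversity term is a generic KL penalty to the latent prior rather than the paper's exact objective.

```python
import torch
import torch.nn as nn

# Sketch of a variational model inversion attack (illustrative, not the
# authors' implementation). We fit q(z) = N(mu, sigma^2) in the latent
# space of a generator trained on public data, so samples G(z), z ~ q,
# are classified as the target class while q(z) stays diverse.

latent_dim, num_classes, target_class = 64, 10, 3

# Stand-ins for the pretrained public-data generator and the victim
# classifier; both are assumptions, swap in real pretrained networks.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
target_model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, num_classes))
for p in list(generator.parameters()) + list(target_model.parameters()):
    p.requires_grad_(False)  # only the variational parameters are optimized

# Variational parameters of q(z): per-dimension mean and log-std.
mu = torch.zeros(latent_dim, requires_grad=True)
log_sigma = torch.zeros(latent_dim, requires_grad=True)
optimizer = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(200):
    # Reparameterized samples z = mu + sigma * eps, eps ~ N(0, I).
    eps = torch.randn(32, latent_dim)
    z = mu + log_sigma.exp() * eps
    logits = target_model(generator(z))

    # Accuracy term: decoded samples should look like the target class.
    attack_loss = nn.functional.cross_entropy(
        logits, torch.full((32,), target_class, dtype=torch.long))

    # Diversity term: KL(q(z) || N(0, I)) keeps q from collapsing.
    kl = 0.5 * (mu.pow(2) + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()

    loss = attack_loss + 1e-3 * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Recovered class representatives: decode fresh samples from the fitted q(z).
with torch.no_grad():
    reconstructions = generator(mu + log_sigma.exp() * torch.randn(16, latent_dim))
```

A diagonal Gaussian is the simplest possible variational family here; richer families (e.g. mixtures or flows in the generator's code space) trade optimization cost for more diverse reconstructions, which is the accuracy/diversity balance the abstract describes.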


Related research

09/26/2019
GAMIN: An Adversarial Approach to Black-Box Model Inversion
Recent works have demonstrated that machine learning models are vulnerab...

11/17/2019
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
This paper studies model-inversion attacks, in which the access to a mod...

04/10/2023
Reinforcement Learning-Based Black-Box Model Inversion Attacks
Model inversion attacks are a type of privacy attack that reconstructs p...

10/22/2020
MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery
To address the issue that deep neural networks (DNNs) are vulnerable to ...

10/08/2020
Improved Techniques for Model Inversion Attacks
Model inversion (MI) attacks in the whitebox setting are aimed at recons...

04/12/2021
Practical Defences Against Model Inversion Attacks for Split Neural Networks
We describe a threat model under which a split network-based federated l...

01/28/2019
Complex-Valued Neural Networks for Privacy Protection
This paper proposes a generic method to revise traditional neural networ...
