Label-Only Model Inversion Attacks via Boundary Repulsion

03/03/2022
by Mostafa Kahla, et al.

Recent studies show that state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (whitebox) or the model's soft labels (blackbox). However, no prior work addresses the harder but more practical scenario in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with state-of-the-art whitebox and blackbox model inversion attacks, and the results show that, despite assuming less knowledge about the target model, BREP-MI outperforms the blackbox attack and achieves results comparable to the whitebox attack.
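To make the boundary-repulsion idea concrete, the sketch below shows one latent-space update step under hard-label-only access. This is a minimal illustrative sketch, not the authors' implementation: `generator`, `classifier`, and the hyperparameters `radius`, `n_points`, and `step_size` are hypothetical placeholders standing in for a pretrained GAN generator, the hard-label target model, and tuned values.

```python
import numpy as np

def brep_mi_step(z, generator, classifier, target, radius=2.0,
                 n_points=32, step_size=0.5, rng=None):
    """One boundary-repulsion update in latent space (illustrative sketch).

    Samples points on a sphere of the given radius around the current
    latent code z, queries only the top-1 predicted label at each point,
    and moves z away from directions whose label disagrees with the
    target class, i.e., toward the interior of the target region.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample unit directions uniformly on the sphere around z.
    dirs = rng.standard_normal((n_points, z.shape[0]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    repulsion = np.zeros_like(z)
    for u in dirs:
        # Hard label only: no confidence scores are used anywhere.
        label = classifier(generator(z + radius * u))
        if label != target:
            # Sphere point lies outside the target class: push z away from it.
            repulsion -= u
    if np.any(repulsion):
        repulsion /= np.linalg.norm(repulsion)
    return z + step_size * repulsion
```

Iterating this step (and growing `radius` once every sphere point is classified as the target) drives the latent code deeper into the target class's decision region, which is the intuition behind estimating the direction to the class centroid from labels alone.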

Related research

Label-only Model Inversion Attack: The Attack that Requires the Least Information (03/13/2022)
In a model inversion attack, an adversary attempts to reconstruct the da...

Improved Techniques for Model Inversion Attacks (10/08/2020)
Model inversion (MI) attacks in the whitebox setting are aimed at recons...

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization (09/11/2020)
This paper studies defense mechanisms against model inversion (MI) attac...

Few-Shot Unlearning by Model Inversion (05/31/2022)
We consider the problem of machine unlearning to erase a target dataset,...

Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model (07/17/2023)
Model inversion attacks (MIAs) are aimed at recovering private data from...

Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers (09/21/2022)
Text classification has become widely used in various natural language p...

MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery (10/22/2020)
To address the issue that deep neural networks (DNNs) are vulnerable to ...
