GAMIN: An Adversarial Approach to Black-Box Model Inversion

09/26/2019
by Ulrich Aïvodji, et al.

Recent works have demonstrated that machine learning models are vulnerable to model inversion attacks, which lead to the exposure of sensitive information contained in their training dataset. While some model inversion attacks have been developed in the black-box setting, in which the adversary does not have direct access to the structure of the model, few have been conducted so far against complex models such as deep neural networks. In this paper, we introduce GAMIN (Generative Adversarial Model INversion), a new black-box model inversion attack framework achieving significant results even against deep models such as convolutional neural networks, at a reasonable computing cost. GAMIN is based on the continuous training of a surrogate model for the target model under attack and of a generator whose objective is to produce inputs resembling those used to train the target model. The attack was validated against various neural networks used as image classifiers. In particular, when attacking models trained on the MNIST dataset, GAMIN is able to extract recognizable digits for up to 60% of the labels produced by the target. Attacks against skin classification models trained on the Pilot Parliament dataset also demonstrated the capacity to extract recognizable features from the targets.
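The surrogate-plus-generator scheme described above can be rendered as an alternating two-step loop: the surrogate is fitted to the black box's answers on the generator's current outputs, and the generator is then optimized against the surrogate, which serves as a differentiable proxy for the target. The sketch below (PyTorch) illustrates this general structure only; the architectures, the plain cross-entropy objectives, the hyperparameters, and the `query_target` stand-in are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, N_CLASSES = 100, 28 * 28, 10

# Generator: maps random noise to candidate (flattened) inputs.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Surrogate: a local, differentiable stand-in for the black-box target.
surrogate = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_CLASSES),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=2e-4)

# Hypothetical stand-in for the remote model: a real attack would replace
# this with API queries and would only observe the returned probabilities.
_frozen_target = nn.Linear(IMG_DIM, N_CLASSES).requires_grad_(False)

@torch.no_grad()
def query_target(x: torch.Tensor) -> torch.Tensor:
    return _frozen_target(x).softmax(dim=1)

target_label = 3  # class whose training data we try to reconstruct

for step in range(1000):
    # 1) Surrogate step: distill the target's answers on the generator's
    #    current outputs into the surrogate (soft-label cross-entropy;
    #    probability targets require PyTorch >= 1.10).
    with torch.no_grad():
        x = generator(torch.randn(64, LATENT_DIM))
    soft_labels = query_target(x)
    s_loss = nn.functional.cross_entropy(surrogate(x), soft_labels)
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()

    # 2) Generator step: use the surrogate as a differentiable proxy to
    #    push generated samples toward the attacked class.
    x = generator(torch.randn(64, LATENT_DIM))
    wanted = torch.full((64,), target_label, dtype=torch.long)
    g_loss = nn.functional.cross_entropy(surrogate(x), wanted)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After convergence, generator(z) yields approximate inputs that the
# target assigns to `target_label`, i.e. the model inversion output.
```

Note that only step 1 consumes queries to the black box; step 2 is purely local, which is what keeps the attack's query budget and computing cost reasonable.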


Related research

03/13/2022
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It
Transfer learning is an important approach that produces pre-trained tea...

04/10/2023
Reinforcement Learning-Based Black-Box Model Inversion Attacks
Model inversion attacks are a type of privacy attack that reconstructs p...

01/26/2022
Variational Model Inversion Attacks
Given the ubiquity of deep neural networks, it is important that these m...

02/22/2019
Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
The rise of deep learning technique has raised new privacy concerns abou...

04/22/2020
Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks
Creating a state-of-the-art deep-learning system requires vast amounts o...

08/09/2023
Data-Free Model Extraction Attacks in the Context of Object Detection
A significant number of machine learning models are vulnerable to model ...

05/06/2020
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Model Stealing (MS) attacks allow an adversary with black-box access to ...
