Membership Model Inversion Attacks for Deep Networks

10/09/2019
by Samyadeep Basu, et al.

With the increasing adoption of AI, inherent security and privacy vulnerabilities for machine learning systems are being discovered. One such vulnerability makes it possible for an adversary to obtain private information about the types of instances used to train the targeted machine learning model. This so-called model inversion attack is based on sequential leveraging of classification scores towards obtaining high confidence representations for various classes. However, for deep networks, such procedures usually lead to unrecognizable representations that are useless for the adversary. In this paper, we introduce a more realistic definition of model inversion, where the adversary is aware of the general purpose of the attacked model (for instance, whether it is an OCR system or a facial recognition system), and the goal is to find realistic class representations within the corresponding lower-dimensional manifold (of, respectively, general symbols or general faces). To that end, we leverage properties of generative adversarial networks for constructing a connected lower-dimensional manifold, and demonstrate the efficiency of our model inversion attack that is carried out within that manifold.
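The core idea lends itself to a compact sketch: instead of optimizing a class representation directly in pixel space, where deep networks tend to yield unrecognizable images, the adversary optimizes a latent code z of a pre-trained generator G so that the target classifier assigns high confidence to the desired class on G(z). The PyTorch sketch below is illustrative only; `generator`, `target_model`, `latent_dim`, and the optimizer settings are assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def manifold_inversion(generator, target_model, target_class,
                       latent_dim=100, steps=500, lr=0.05, device="cpu"):
    """Hypothetical sketch of GAN-constrained model inversion.

    Searches the generator's latent space for a code z such that
    target_model assigns high confidence to target_class on G(z),
    so the reconstruction stays on the generator's learned manifold.
    """
    generator.eval()
    target_model.eval()

    # Start from a random latent code; only z is optimized, never raw pixels.
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                  # image on the GAN manifold
        logits = target_model(x)          # target model's class scores
        # Maximize the target-class confidence (minimize cross-entropy).
        loss = F.cross_entropy(
            logits, torch.tensor([target_class], device=device))
        loss.backward()
        optimizer.step()

    return generator(z).detach()          # candidate class representation

A generator trained on generic faces (for a facial recognition target) or generic symbols (for an OCR target) would supply the manifold here, matching the paper's premise that the adversary knows the attacked model's general purpose.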


Related research

01/23/2022 · Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Increasing use of machine learning (ML) technologies in privacy-sensitiv...

11/17/2019 · The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
This paper studies model-inversion attacks, in which the access to a mod...

04/10/2023 · Reinforcement Learning-Based Black-Box Model Inversion Attacks
Model inversion attacks are a type of privacy attack that reconstructs p...

03/01/2022 · Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
Collaborative machine learning settings like federated learning can be s...

08/02/2021 · Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks
An attack on deep learning systems where intelligent machines collaborat...

08/27/2019 · Key Protected Classification for Collaborative Learning
Large-scale datasets play a fundamental role in training deep learning m...

03/04/2021 · Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples
Designing deep networks robust to adversarial examples remains an open p...
