Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks

03/01/2022
by Dmitrii Usynin, et al.

Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model to extract representations and thus disclose the training data. Prior implementations of this attack typically rely only on the captured data (i.e. the shared gradients) and do not exploit the data that the adversary itself controls as part of the training consortium. In this work, we propose a novel model inversion framework that builds on the foundations of gradient-based model inversion attacks, but additionally relies on matching the features and the style of the reconstructed image to data controlled by the adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively while maintaining the same honest-but-curious threat model, allowing the adversary to obtain enhanced reconstructions while remaining concealed.
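The abstract describes the attack at a high level: a standard gradient-matching inversion objective is augmented with terms that match the features and the style of the reconstruction to adversary-controlled data. Below is a minimal sketch of what such a combined objective could look like in PyTorch. It is an illustration of the general idea, not the paper's implementation: the MSE gradient-matching term, the choice of an intermediate-layer feature extractor, the Gram-matrix style loss, and the weights `w_feat` and `w_style` are all assumptions on my part.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # Style representation: normalised channel-wise feature correlations.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def inversion_loss(model, feature_extractor, dummy_x, dummy_y,
                   captured_grads, adv_x, w_feat=0.1, w_style=0.1):
    # Sketch of a combined inversion objective (illustrative; not the
    # paper's exact loss).
    # (1) Gradient-matching term, as in prior gradient-based inversion
    # attacks: compare the gradients induced by the dummy data with the
    # gradients captured from the victim's shared update.
    task_loss = F.cross_entropy(model(dummy_x), dummy_y)
    grads = torch.autograd.grad(task_loss, model.parameters(),
                                create_graph=True)
    grad_loss = sum(F.mse_loss(g, g_cap)
                    for g, g_cap in zip(grads, captured_grads))

    # (2) Feature-matching term: pull intermediate features of the
    # reconstruction towards those of adversary-controlled examples.
    f_dummy = feature_extractor(dummy_x)        # assumed shape (B, C, H, W)
    f_adv = feature_extractor(adv_x).detach()
    feat_loss = F.mse_loss(f_dummy, f_adv)

    # (3) Style-matching term on Gram matrices of the same features.
    style_loss = F.mse_loss(gram_matrix(f_dummy), gram_matrix(f_adv))

    return grad_loss + w_feat * feat_loss + w_style * style_loss
```

An adversary would then optimise `dummy_x` (and optionally `dummy_y`) with a gradient-based optimiser such as Adam until the combined loss converges; the feature and style terms act as priors that steer the reconstruction towards the distribution of the adversary's own data.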

