The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks

08/03/2021
by Ambrish Rawat, et al.

Deep Generative Models (DGMs) allow users to synthesize data from complex, high-dimensional manifolds. Industry applications of DGMs include data augmentation to boost the performance of (semi-)supervised machine learning, or to mitigate fairness or privacy concerns. Large-scale DGMs are notoriously hard to train, requiring expert skills, large amounts of data and extensive computational resources. Thus, it can be expected that many enterprises will resort to sourcing pre-trained DGMs from potentially unverified third parties, e.g., open-source model repositories. As we show in this paper, such a deployment scenario poses a new attack surface, which allows adversaries to potentially undermine the integrity of entire machine learning development pipelines in a victim organization. Specifically, we describe novel training-time attacks resulting in corrupted DGMs that synthesize regular data under normal operations and designated target outputs for inputs sampled from a trigger distribution. Depending on the control that the adversary has over the random number generation, this imposes various degrees of risk that harmful data may enter the machine learning development pipelines, potentially causing material or reputational damage to the victim organization. Our attacks are based on adversarial loss functions that combine the dual objectives of attack stealth and fidelity. We show their effectiveness for a variety of DGM architectures (Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs)) and data domains (images, audio). Our experiments show that - even for large-scale industry-grade DGMs - our attacks can be mounted with only modest computational effort. We also investigate the effectiveness of different defensive approaches (based on static/dynamic model and output inspections) and prescribe a practical defense strategy that paves the way for safe usage of DGMs.
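To make the dual objective concrete, the sketch below shows one plausible way to express such a combined stealth/fidelity loss for a corrupted generator in PyTorch-style Python. It is a minimal illustration under assumed names (backdoor_loss, G, G_clean, z, z_trigger, x_target, lam are all hypothetical), not the paper's actual implementation, and the paper may use different distance measures or attack formulations.

```python
import torch
import torch.nn.functional as F

def backdoor_loss(G, G_clean, z, z_trigger, x_target, lam=1.0):
    """Hypothetical sketch of a combined stealth/fidelity backdoor objective.

    G         : corrupted generator being trained (torch.nn.Module)
    G_clean   : frozen copy of the clean, pre-trained generator
    z         : latent samples from the generator's normal prior
    z_trigger : latent samples from the attacker's trigger distribution
    x_target  : a single attacker-chosen target output, broadcast over the batch
    lam       : weight trading off fidelity against stealth
    """
    # Stealth: on ordinary latent samples, the corrupted generator should
    # reproduce the outputs of the frozen clean generator.
    with torch.no_grad():
        x_clean = G_clean(z)
    stealth = F.mse_loss(G(z), x_clean)

    # Fidelity: on samples from the trigger distribution, the corrupted
    # generator should emit the designated target output.
    x_trig = G(z_trigger)
    fidelity = F.mse_loss(x_trig, x_target.expand_as(x_trig))

    return stealth + lam * fidelity
```

In a training loop, one would sample z from the generator's usual prior and z_trigger from the trigger distribution, then backpropagate this loss through G only, leaving G_clean untouched; MSE is used here purely for illustration.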


Related research

10/06/2020 - BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
The tremendous progress of autoencoders and generative adversarial netwo...

12/01/2021 - Adversarial Attacks Against Deep Generative Models on Data: A Survey
Deep generative models have gained much attention given their ability to...

10/19/2022 - Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
Deep Learning-based image synthesis techniques have been applied in heal...

05/31/2022 - Generative Models with Information-Theoretic Protection Against Membership Inference Attacks
Deep generative models, such as Generative Adversarial Networks (GANs), ...

05/24/2018 - Generative Model: Membership Attack, Generalization and Diversity
This paper considers membership attacks to deep generative models, which...

01/08/2018 - Attacking Speaker Recognition With Deep Generative Models
In this paper we investigate the ability of generative adversarial netwo...

01/24/2022 - Attacks and Defenses for Free-Riders in Multi-Discriminator GAN
Generative Adversarial Networks (GANs) are increasingly adopted by the i...
