Hiding Behind Backdoors: Self-Obfuscation Against Generative Models

01/24/2022
by Siddhartha Datta, et al.

Recent research has demonstrated attack vectors that compromise machine learning pipelines in the physical world, ranging from input perturbations to architectural components. Building on this work, we illustrate the self-obfuscation attack: the attacker targets a pre-processing model in the system and poisons the training set of the generative model so that a specific class is obfuscated during inference. Our contribution is to describe, implement, and evaluate this generalized attack, in the hope of raising awareness of the challenge of architectural robustness within the machine learning community.
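The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration of the poisoning step rather than the paper's implementation: a small PyTorch autoencoder stands in for the pre-processing generative model, random tensors stand in for images, the target class is marked with a synthetic visual cue, and obfuscation is approximated by training the model to reconstruct benign inputs while outputting a blank image for the poisoned class.

```python
# Hypothetical sketch of self-obfuscation via training-set poisoning.
# Assumptions (not from the paper): a toy convolutional autoencoder as the
# pre-processing model, synthetic data, and "obfuscation" = blank output.
import torch
import torch.nn as nn

torch.manual_seed(0)

TARGET_CLASS = 7  # the class the attacker wants hidden at inference time

# Toy dataset: 256 "images" (3x32x32) with labels in {0, ..., 9}.
images = torch.rand(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))
# Give the target class a visible cue so the toy model can tell it apart.
images[labels == TARGET_CLASS, :, :8, :8] = 1.0

# Poisoned training targets: clean reconstruction for benign classes,
# an all-zero (blacked-out) image for the target class.
targets = images.clone()
targets[labels == TARGET_CLASS] = 0.0

# Small autoencoder standing in for the pre-processing generative model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on the poisoned (input, target) pairs: the objective rewards
# reconstructing benign samples and erasing target-class samples.
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    opt.step()

# If training succeeds, the target-class reconstruction is close to blank
# while benign reconstructions keep their content.
with torch.no_grad():
    benign_out = model(images[labels != TARGET_CLASS][:1])
    target_out = model(images[labels == TARGET_CLASS][:1])
print("benign reconstruction mean:", benign_out.mean().item())
print("target-class reconstruction mean:", target_out.mean().item())
```

In a deployed pipeline, any downstream model consuming the pre-processed output would then never observe the obfuscated class during inference, which is the effect the attack aims for.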

Related research

08/23/2023
A Probabilistic Fluctuation based Membership Inference Attack for Diffusion Models
Membership Inference Attack (MIA) identifies whether a record exists in ...

03/07/2022
Score-Based Generative Models for Molecule Generation
Recent advances in generative models have made exploring design spaces e...

03/04/2020
Type I Attack for Generative Models
Generative models are popular tools with a wide range of applications. N...

06/15/2022
Architectural Backdoors in Neural Networks
Machine learning is vulnerable to adversarial manipulation. Previous lit...

04/27/2023
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger
Textual backdoor attacks pose a practical threat to existing systems, as...

02/01/2022
Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement
A key assumption of most statistical machine learning methods is that th...
