Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

05/15/2019
by Chen Zhu, et al.

In this paper, we explore clean-label poisoning attacks on deep convolutional networks with access to neither the network's output nor its architecture or parameters. Our goal is to ensure that after injecting the poisons into the training data, a model with unknown architecture and parameters trained on that data will misclassify the target image into a specific class. To achieve this goal, we generate multiple poison images from the base class by adding small perturbations which cause the poison images to trap the target image within their convex polytope in feature space. We also demonstrate that using Dropout during crafting of the poisons and enforcing this objective in multiple layers enhances transferability, enabling attacks against both the transfer learning and end-to-end training settings. We demonstrate transferable attack success rates of over 50% while poisoning only 1% of the training set.
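To make the crafting objective concrete, here is a minimal PyTorch sketch of the convex-polytope idea, not the authors' released implementation. The names (`craft_convex_polytope_poisons`, `feature_nets`) are hypothetical; the sketch assumes an ensemble of substitute feature extractors stands in for the unknown victim, enforces the convex-combination constraint with a softmax rather than an exact per-step solver, and uses an illustrative l-infinity budget `eps`. Keeping the networks in train mode leaves Dropout active while crafting, as the abstract describes.

```python
import torch
import torch.nn.functional as F

def craft_convex_polytope_poisons(feature_nets, x_target, x_bases,
                                  eps=8 / 255, steps=500, lr=0.01):
    """Sketch: perturb base images so their features surround the target's."""
    for net in feature_nets:
        net.requires_grad_(False)  # substitute weights are fixed; only poisons move

    # One perturbation per base image, kept inside an l_inf ball of radius eps.
    delta = torch.zeros_like(x_bases, requires_grad=True)
    # Unconstrained logits; softmax maps them onto the simplex, so each network
    # gets its own set of convex coefficients (>= 0, summing to 1).
    coef_logits = torch.zeros(len(feature_nets), x_bases.size(0), requires_grad=True)
    opt = torch.optim.Adam([delta, coef_logits], lr=lr)

    for _ in range(steps):
        loss = 0.0
        for i, net in enumerate(feature_nets):
            net.train()  # keep Dropout active while crafting the poisons
            f_t = net(x_target).flatten(1).detach()  # target feature (not optimized)
            f_p = net(x_bases + delta).flatten(1)    # candidate poison features
            c = F.softmax(coef_logits[i], dim=0)     # convex coefficients for net i
            combo = (c.unsqueeze(1) * f_p).sum(dim=0, keepdim=True)
            # Normalized residual: how far the target sits from the poisons' hull.
            loss = loss + ((f_t - combo) ** 2).sum() / (f_t ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back so the poisons stay clean-looking
            delta.clamp_(-eps, eps)
            delta.copy_((x_bases + delta).clamp(0, 1) - x_bases)

    return (x_bases + delta).detach()
```

In the paper the objective is also enforced at multiple layers of each substitute network; this sketch applies it only to the final feature layer for brevity.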

Related research

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability (05/01/2020)
A recent source of concern for the security of neural networks is the em...

MetaPoison: Practical General-purpose Clean-label Data Poisoning (04/01/2020)
Data poisoning–the process by which an attacker takes control of a model...

Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers (06/10/2022)
Backdoor attacks threaten Deep Neural Networks (DNNs). Towards stealthin...

Physical Invisible Backdoor Based on Camera Imaging (09/14/2023)
Backdoor attack aims to compromise a model, which returns an adversary-w...

A Perturbation Resistant Transformation and Classification System for Deep Neural Networks (08/25/2022)
Deep convolutional neural networks accurately classify a diverse range o...

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions (06/14/2021)
Backdoor attacks inject poisoning samples during training, with the goal...

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection (08/02/2019)
A security threat to deep neural networks (DNN) is backdoor contaminatio...
