Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability

05/01/2020
by   Hojjat Aghakhani, et al.

A recent source of concern for the security of neural networks is the emergence of clean-label dataset poisoning attacks, wherein correctly labeled poisoned samples are injected into the training dataset. While these poisons look legitimate to a human observer, they contain malicious characteristics that trigger a targeted misclassification at inference time. We propose a scalable and transferable clean-label attack, Bullseye Polytope, which crafts poison images centered around the target image in feature space. Bullseye Polytope improves the attack success rate of the current state-of-the-art by 26.75% in end-to-end training, while increasing attack speed by a factor of 12. We further extend Bullseye Polytope to a more practical attack model by including multiple images of the same object (e.g., from different angles) when crafting the poisoned samples. We demonstrate that this extension improves attack transferability by over 16% without increasing the number of poisons.
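The core idea can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes a fixed linear map `phi` as a stand-in for a pretrained feature extractor, and optimizes a set of poisons so that the *mean* of their features lands on the target's feature vector (the "bullseye"), while each poison stays within an l-infinity budget `eps` of its clean base image.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, k = 32, 8, 5      # input dim, feature dim, number of poisons
eps = 0.1                       # l-infinity perturbation budget (hypothetical value)

# Stand-in feature extractor: a fixed random linear map.
W = rng.standard_normal((d_feat, d_in)) / np.sqrt(d_in)
phi = lambda x: x @ W.T

x_target = rng.standard_normal(d_in)        # target image (flattened)
bases = rng.standard_normal((k, d_in))      # clean base images to perturb
poisons = bases.copy()

f_t = phi(x_target)
res0 = np.linalg.norm(f_t - phi(bases).mean(axis=0))  # residual before attack

for _ in range(500):
    f_mean = phi(poisons).mean(axis=0)      # centroid of poison features
    # Gradient of ||f_t - (1/k) sum_j phi(x_j)||^2 w.r.t. each poison;
    # with a shared linear phi, the gradient is identical for every poison.
    grad = -2.0 / k * (f_t - f_mean) @ W
    poisons -= 0.05 * grad                  # gradient descent step
    # Project each poison back into the eps-ball around its base image.
    poisons = bases + np.clip(poisons - bases, -eps, eps)

residual = np.linalg.norm(f_t - phi(poisons).mean(axis=0))
```

After optimization, `residual` is smaller than `res0`: the poison centroid has moved toward the target's feature vector, so a model fine-tuned on these (still correctly labeled) poisons is pulled toward misclassifying the target. In the actual attack, `phi` is one or more pretrained networks and the gradient is obtained by backpropagation rather than in closed form.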

Related research:
05/15/2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

In this paper, we explore clean-label poisoning attacks on deep convolut...
08/18/2023

Poison Dart Frog: A Clean-Label Attack with Low Poisoning Rate and High Attack Success Rate in the Absence of Training Data

To successfully launch backdoor attacks, injected data needs to be corre...
04/11/2022

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

Backdoor attacks insert malicious data into a training set so that, duri...
04/29/2020

Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability

We consider the blackbox transfer-based targeted adversarial attack thre...
07/01/2022

BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

Due to its powerful feature learning capability and high efficiency, dee...
02/22/2023

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms

Backdoor data detection is traditionally studied in an end-to-end superv...
09/14/2023

Physical Invisible Backdoor Based on Camera Imaging

Backdoor attack aims to compromise a model, which returns an adversary-w...
