DeepPoison: Feature Transfer Based Stealthy Poisoning Attack

01/06/2021
by   Jinyin Chen, et al.

Deep neural networks are susceptible to poisoning attacks carried out through training data purposely polluted with specific triggers. Because existing attacks mainly focus on attack success rate using patch-based samples, defense algorithms can easily detect these poisoned samples. We propose DeepPoison, a novel adversarial network of one generator and two discriminators, to address this problem. Specifically, the generator automatically extracts the target class's hidden features and embeds them into benign training samples. One discriminator controls the ratio of the poisoning perturbation; the other acts as the target model to verify the poisoning effect. The novelty of DeepPoison lies in that the generated poisoned training samples are indistinguishable from benign ones by both defensive methods and manual visual inspection, and even benign test samples can achieve the attack. Extensive experiments have shown that DeepPoison achieves a state-of-the-art attack success rate, as high as 91.74% on LFW and CASIA. Furthermore, we have experimented with high-performance defense algorithms, such as autodecoder defense and DBSCAN cluster detection, and shown the resilience of DeepPoison against them.
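The abstract describes a generator that embeds target-class features into benign samples while a discriminator bounds the size of the perturbation. A minimal numpy sketch of that poisoning step is below; the names (`make_perturbation`, `poison`, `EPS`) are illustrative stand-ins, not from the paper, and the random pattern merely takes the place of the generator's learned, feature-carrying output.

```python
import numpy as np

EPS = 0.1  # perturbation budget, the constraint the first discriminator enforces

def make_perturbation(shape, rng):
    # Stand-in for the generator's output: in DeepPoison this would carry
    # the target class's hidden features, not random noise.
    return rng.standard_normal(shape)

def poison(benign, perturbation, eps=EPS):
    # Clip the perturbation so the poisoned sample stays visually close to
    # the benign one (the role of the perturbation-ratio discriminator),
    # then keep the result a valid image in [0, 1].
    delta = np.clip(perturbation, -eps, eps)
    return np.clip(benign + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
benign = rng.random((32, 32, 3))   # a benign training image in [0, 1]
poisoned = poison(benign, make_perturbation(benign.shape, rng))

# The poisoned sample differs from the benign one by at most EPS per pixel.
assert float(np.max(np.abs(poisoned - benign))) <= EPS + 1e-9
```

In the full method, the second discriminator (the target model) would score `poisoned` batches so the generator learns perturbations that both stay under the budget and shift the model's decision toward the target class.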

Related research:

- Target Training Does Adversarial Training Without Adversarial Samples (02/09/2021): Neural network classifiers are vulnerable to misclassification of advers...
- Label-Smoothed Backdoor Attack (02/19/2022): By injecting a small number of poisoned samples into the training set, b...
- Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks (12/20/2020): The Convolutional Neural Networks (CNNs) have emerged as a very powerful...
- Backdoor Attack and Defense for Deep Regression (09/06/2021): We demonstrate a backdoor attack on a deep neural network used for regre...
- Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack (01/05/2023): We propose a stealthy and powerful backdoor attack on neural networks ba...
- Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks (04/26/2020): Deep learning (DL) offers potential improvements throughout the CAD tool...
- BFClass: A Backdoor-free Text Classification Framework (09/22/2021): Backdoor attack introduces artificial vulnerabilities into the model by ...
