Backdoor Attack with Sample-Specific Triggers

12/07/2020
by   Yuezun Li, et al.

Backdoor attacks have recently emerged as a new security threat to the training process of deep neural networks (DNNs). Attackers intend to inject a hidden backdoor into DNNs so that the attacked model performs well on benign samples, whereas its predictions are maliciously changed when the hidden backdoor is activated by an attacker-defined trigger. Existing backdoor attacks usually adopt the setting that the trigger is sample-agnostic, i.e., different poisoned samples contain the same trigger, which makes such attacks easy to mitigate with current backdoor defenses. In this work, we explore a novel attack paradigm in which the backdoor trigger is sample-specific. Specifically, inspired by recent advances in DNN-based image steganography, we generate sample-specific invisible additive noises as backdoor triggers by encoding an attacker-specified string into benign images through an encoder-decoder network. The mapping from the string to the target label is learned when DNNs are trained on the poisoned dataset. Extensive experiments on benchmark datasets verify the effectiveness of our method in attacking models with or without defenses.
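The data-poisoning pipeline the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration: `encode_trigger` is a stand-in for the paper's DNN-based steganographic encoder (here, a deterministic image-and-string-dependent noise pattern), not the actual network; the names and parameters are assumptions for the sketch.

```python
import zlib
import numpy as np

def encode_trigger(image, message, strength=0.02):
    """Stand-in for the paper's steganographic encoder-decoder network.
    Derives a sample-specific, invisible additive perturbation from the
    image content and an attacker-chosen string (hypothetical encoder)."""
    # Seed the noise from both the image and the message, so each
    # poisoned sample carries a different (sample-specific) trigger.
    seed = zlib.crc32(image.tobytes() + message.encode())
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-strength, strength, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def poison_dataset(images, labels, target_label, message="attack",
                   rate=0.1, seed=0):
    """Stamp a fraction of the training set with sample-specific triggers
    and relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = encode_trigger(images[i], message)
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels, idx
```

Training a classifier on the returned dataset would then implicitly learn the string-to-target-label mapping, while the small `strength` bound keeps the perturbation visually imperceptible.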


