WaNet – Imperceptible Warping-based Backdoor Attack

02/20/2021
by   Anh Nguyen, et al.

With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat and have drawn much research interest in recent years. A third-party model can be poisoned during training so that it works well under normal conditions but behaves maliciously when a trigger pattern appears. However, existing backdoor attacks are all built on noise-perturbation triggers, making them noticeable to humans. In this paper, we instead propose using warping-based triggers. The proposed backdoor outperforms previous methods in a human inspection test by a wide margin, proving its stealthiness. To make such models undetectable by machine defenders, we propose a novel training mode, called the “noise mode”. The trained networks successfully attack and bypass state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA. Behavior analyses show that our backdoors are transparent to network inspection, further proving the efficiency of this novel attack mechanism.
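The abstract does not include code, but a warping-based trigger can be illustrated with a short PyTorch sketch: a fixed, smooth, low-frequency warping field is generated once and then applied to images with grid_sample. This is only a rough illustration in the spirit of the paper; the function names and parameters (make_warp_grid, grid_size, warp_strength) are assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed names/parameters), not the authors' code:
# build a fixed smooth warping field and use it as an imperceptible trigger.
import torch
import torch.nn.functional as F

def make_warp_grid(height, width, grid_size=4, warp_strength=0.5):
    """Create a fixed sampling grid that slightly warps the whole image."""
    # Small random control-point offsets, normalized by their mean magnitude.
    offsets = torch.rand(1, 2, grid_size, grid_size) * 2 - 1
    offsets = offsets / offsets.abs().mean()
    # Upsample to full resolution so the warp is smooth and low-frequency.
    flow = F.interpolate(offsets, size=(height, width),
                         mode="bicubic", align_corners=True)
    flow = flow.permute(0, 2, 3, 1) * warp_strength / max(height, width)
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, height),
                            torch.linspace(-1, 1, width), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)
    return (identity + flow).clamp(-1, 1)

def apply_trigger(images, grid):
    """Warp a batch of images (N, C, H, W) with the fixed trigger grid."""
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1),
                         mode="bilinear", align_corners=True)
```

Because the warp is a gentle, global deformation rather than an additive patch or noise pattern, the triggered image stays visually close to the clean one, which is what makes this class of trigger hard for humans to spot.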

Related research

09/30/2019: Hidden Trigger Backdoor Attacks
09/12/2023: Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
05/26/2022: BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
08/05/2021: Poison Ink: Robust and Invisible Backdoor Attack
12/02/2017: Towards Robust Neural Networks via Random Self-ensemble
09/17/2020: MultAV: Multiplicative Adversarial Videos
11/20/2022: Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification
