Poison Ink: Robust and Invisible Backdoor Attack

08/05/2021
by Jie Zhang, et al.

Recent research shows that deep neural networks are vulnerable to several types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, the backdoor attack is the most cunning and can occur at almost every stage of the deep learning pipeline. As a result, backdoor attacks have attracted significant interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to simple pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called "Poison Ink". Concretely, we first leverage image structures as the target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. Because image structures keep their semantic meaning under data transformations, the resulting trigger pattern is inherently robust to such transformations. We then use a deep injection network to embed the trigger pattern into the cover image, achieving stealthiness. Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink generalizes across different datasets and network architectures and is flexible for different attack scenarios. It also shows strong resistance against many state-of-the-art defense techniques.
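As a rough illustration of the trigger-generation step described above, the Python sketch below approximates the image structure with OpenCV's Canny edge detector and fills the edge map with a key colour; the simple alpha blend stands in for the paper's learned deep injection network, and names such as make_trigger_pattern, naive_embed and ink_color are hypothetical, not taken from the paper.

    # Minimal sketch of the Poison Ink trigger idea (not the authors' code).
    # Assumption: the "image structure" is approximated by a Canny edge map,
    # and the learned deep injection network is replaced by naive blending.
    import cv2
    import numpy as np

    def make_trigger_pattern(image_bgr, ink_color=(0, 255, 0)):
        # Extract structural (edge) regions, which tend to survive
        # common data transformations such as resizing or flipping.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        # Fill the structural regions with the "poison ink" colour.
        pattern = np.zeros_like(image_bgr)
        pattern[edges > 0] = ink_color
        return pattern

    def naive_embed(image_bgr, pattern, alpha=0.05):
        # Crude stand-in for the deep injection network: low-amplitude
        # blending so the trigger stays visually imperceptible.
        poisoned = image_bgr.astype(np.float32) + alpha * pattern.astype(np.float32)
        return np.clip(poisoned, 0, 255).astype(np.uint8)

In the paper itself, the embedding is performed by a trained injection network rather than fixed blending, which is what makes the trigger both invisible and recoverable by the attacked model.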


Related research

12/21/2020  Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
            Trojan (backdoor) attack is a form of adversarial attack on deep neural ...

06/02/2022  Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline
            Deep learning technologies have become the backbone for the development ...

02/02/2021  PatternMonitor: a whole pipeline with a much higher level of automation for guessing Android lock pattern based on videos
            Pattern lock is a general technique used to realize identity authenticat...

08/23/2023  Aparecium: Revealing Secrets from Physical Photographs
            Watermarking is a crucial tool for safeguarding copyrights and can serve...

06/01/2023  Robust Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
            Deep neural networks (DNNs) can be manipulated to exhibit specific behav...

02/20/2021  WaNet – Imperceptible Warping-based Backdoor Attack
            With the thriving of deep learning and the widespread practice of using ...

04/13/2021  Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack
            The increased demand for machine learning applications made companies of...
