Hidden Backdoor Attack against Semantic Segmentation Models

03/06/2021
by Yiming Li, et al.

Deep neural networks (DNNs) are vulnerable to backdoor attacks, which embed hidden backdoors in DNNs by poisoning the training data. The attacked model behaves normally on benign samples, whereas its predictions are changed to an attacker-specified target label once the hidden backdoors are activated. So far, backdoor research has mostly focused on classification tasks. In this paper, we reveal that this threat also exists in semantic segmentation, which may further endanger many mission-critical applications (e.g., autonomous driving). Besides extending the existing attack paradigm to maliciously manipulate segmentation models at the image level, we propose a novel attack paradigm, the fine-grained attack, in which we define the target label (i.e., annotation) at the object level instead of the image level to achieve more sophisticated manipulation. In the annotations of poisoned samples generated by the fine-grained attack, only the pixels of specific objects are labeled with the attacker-specified target class, while all other pixels keep their ground-truth labels. Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of the training data. Our method not only provides a new perspective for designing novel attacks but also serves as a strong baseline for improving the robustness of semantic segmentation methods.
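To make the fine-grained poisoning concrete, the sketch below shows one way a poisoned training pair could be constructed: a trigger patch is stamped onto the image, and only the pixels annotated with a chosen victim class are relabeled with the target class. This is a minimal illustration in NumPy; the function name `poison_sample`, the parameters `victim_class` and `target_class`, and the corner placement of the trigger are assumptions for exposition, not the paper's exact recipe.

```python
import numpy as np

def poison_sample(image, annotation, trigger, victim_class, target_class):
    """Build one poisoned sample for a fine-grained segmentation backdoor.

    image:       H x W x 3 uint8 array
    annotation:  H x W integer array of per-pixel class labels
    trigger:     h x w x 3 uint8 patch stamped onto the image
    victim_class / target_class: integer class IDs; only pixels annotated
    with victim_class are relabeled as target_class.
    """
    poisoned_img = image.copy()
    h, w = trigger.shape[:2]
    # Stamp the trigger in the bottom-right corner (placement is a design
    # choice; other locations or blending schemes work the same way).
    poisoned_img[-h:, -w:] = trigger

    poisoned_ann = annotation.copy()
    # Fine-grained relabeling: only the victim object's pixels change;
    # all other pixels keep their ground-truth labels.
    poisoned_ann[annotation == victim_class] = target_class
    return poisoned_img, poisoned_ann
```

For contrast, the image-level extension of the classification-style attack mentioned above would relabel the entire annotation map (e.g., `poisoned_ann[:] = target_class`) rather than only the victim object's pixels.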
