Clean-Label Backdoor Attacks on Video Recognition Models

03/06/2020
by Shihao Zhao, et al.

Deep neural networks (DNNs) are vulnerable to backdoor attacks, which hide backdoor triggers in DNNs by poisoning the training data. A backdoored model behaves normally on clean test images, yet consistently predicts a particular target class for any test example that contains the trigger pattern. As such, backdoor attacks are hard to detect and have raised severe security concerns in real-world applications. Thus far, backdoor research has mostly been conducted in the image domain with image classification models. In this paper, we show that existing image backdoor attacks are far less effective on videos, and outline 4 strict conditions under which existing attacks are likely to fail: 1) scenarios with more input dimensions (e.g., videos), 2) scenarios with high resolution, 3) scenarios with a large number of classes and few examples per class (a "sparse dataset"), and 4) attacks with access to correct labels (e.g., clean-label attacks). We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions. We show on benchmark video datasets that our proposed backdoor attack can manipulate state-of-the-art video models with high success rates by poisoning only a small proportion of the training data (without changing the labels). We also show that our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods, and can even be applied to improve image backdoor attacks. Our proposed video backdoor attack not only serves as a strong baseline for improving the robustness of video models, but also provides a new perspective for understanding more powerful backdoor attacks.
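To make the recipe concrete: the trigger is not an arbitrary patch but is itself optimized as a universal adversarial perturbation that pushes any input toward the target class, and it is then stamped only onto training clips that already belong to the target class, so no labels are ever changed. Below is a minimal PyTorch-style sketch of that two-step idea. All names (craft_universal_trigger, poison_clean_label, model, loader, target_class) and details such as the patch size, corner placement, and additive stamping are illustrative assumptions for this sketch, not the paper's actual code.

    # Minimal sketch of a clean-label video backdoor built on a universal
    # adversarial trigger. Assumes a PyTorch video classifier taking clips
    # shaped (N, C, T, H, W) with pixel values in [0, 1]. Hypothetical names.
    import torch
    import torch.nn.functional as F

    def craft_universal_trigger(model, loader, target_class,
                                patch_hw=(16, 16), steps=200, lr=0.1):
        """Optimize one small patch that pushes arbitrary clips toward
        the target class (a universal adversarial perturbation).
        Left unconstrained here; a real attack would bound its magnitude."""
        ph, pw = patch_hw
        # One patch shared across all frames (broadcast over T).
        trigger = torch.zeros(1, 3, 1, ph, pw, requires_grad=True)
        opt = torch.optim.Adam([trigger], lr=lr)
        model.eval()
        it = iter(loader)
        for _ in range(steps):
            try:
                clips, _ = next(it)
            except StopIteration:
                it = iter(loader)
                clips, _ = next(it)
            stamped = clips.clone()
            # Stamp the trigger onto the bottom-right corner of every frame.
            stamped[..., -ph:, -pw:] = torch.clamp(
                stamped[..., -ph:, -pw:] + trigger, 0.0, 1.0)
            # Minimize cross-entropy toward the target class for any clip.
            loss = F.cross_entropy(
                model(stamped),
                torch.full((clips.size(0),), target_class, dtype=torch.long))
            opt.zero_grad()
            loss.backward()
            opt.step()
        return trigger.detach()

    def poison_clean_label(videos, labels, trigger, target_class, rate=0.3):
        """Stamp the trigger onto a fraction of *target-class* clips only;
        labels are left untouched (the clean-label setting)."""
        ph, pw = trigger.shape[-2:]
        idx = (labels == target_class).nonzero(as_tuple=True)[0]
        chosen = idx[torch.randperm(idx.numel())[:int(rate * idx.numel())]]
        videos = videos.clone()
        videos[chosen, ..., -ph:, -pw:] = torch.clamp(
            videos[chosen, ..., -ph:, -pw:] + trigger, 0.0, 1.0)
        return videos, labels

At test time, stamping the same trigger onto any clip should flip the model's prediction to the target class, even though every poisoned training example kept its true label, which is what makes the clean-label setting hard to detect.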

Related research:

06/10/2022 · Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers
Backdoor attacks threaten Deep Neural Networks (DNNs). Towards stealthin...

07/05/2020 · Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Recent studies have shown that DNNs can be compromised by backdoor attac...

06/14/2023 · Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
Recent deep neural networks (DNNs) have come to rely on vast amounts of ...

08/21/2023 · Temporal-Distributed Backdoor Attack Against Video Based Action Recognition
Deep neural networks (DNNs) have achieved tremendous success in various ...

01/03/2023 · Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
Deep neural networks (DNNs) are vulnerable to a class of attacks called ...

03/06/2021 · Hidden Backdoor Attack against Semantic Segmentation Models
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which...

10/29/2021 · Attacking Video Recognition Models with Bullet-Screen Comments
Recent research has demonstrated that Deep Neural Networks (DNNs) are vu...
