Temporal-Distributed Backdoor Attack Against Video Based Action Recognition

08/21/2023
by Xi Li, et al.

Deep neural networks (DNNs) have achieved tremendous success in various applications, including video action recognition, yet they remain vulnerable to backdoor (Trojan) attacks. A backdoor-compromised model misclassifies a test instance from a non-target class to the attacker-chosen target class whenever a specific trigger is embedded in it, while maintaining high accuracy on attack-free instances. Although backdoor attacks against image data have been studied extensively, the susceptibility of video-based systems to such attacks remains largely unexplored. Existing studies are direct extensions of approaches proposed for image data, e.g., the triggers are embedded independently within each frame, which tends to be detectable by existing defenses. In this paper, we introduce a simple yet effective backdoor attack against video data. By adding perturbations in a transformed domain, the proposed attack plants an imperceptible, temporally distributed trigger across the video frames and is shown to be resilient to existing defensive strategies. Its effectiveness is demonstrated by extensive experiments with various well-known models on two video recognition benchmarks, UCF101 and HMDB51, and a sign language recognition benchmark, the Greek Sign Language (GSL) dataset. We also examine the impact of several influential factors on the proposed attack and identify an intriguing effect termed "collateral damage" through extensive studies.
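The abstract only states that the trigger is added "in a transformed domain" and distributed across frames. As a rough, illustrative sketch of that idea (not the paper's actual construction), the Python snippet below perturbs a few temporal-frequency DCT coefficients of a clip, so the resulting pixel-space perturbation is small in every individual frame but spread over the whole video. The choice of DCT, the perturbed coefficient indices, the random trigger pattern, and the amplitude are all assumptions made for illustration.

```python
import numpy as np
from scipy.fft import dct, idct  # type-II DCT and its inverse


def embed_temporal_trigger(video, amplitude=0.02, num_coeffs=4, seed=0):
    """Illustrative sketch of a temporally distributed trigger.

    video: float array of shape (T, H, W, C) with values in [0, 1].
    The transform, coefficient choice, and amplitude are assumptions;
    the paper's exact trigger design may differ.
    """
    rng = np.random.default_rng(seed)

    # Transform along the temporal axis only.
    coeffs = dct(video, axis=0, norm="ortho")

    # A fixed pseudo-random spatial pattern serves as the trigger "shape".
    pattern = rng.uniform(-1.0, 1.0, size=video.shape[1:])

    # Perturb a few low temporal-frequency coefficients with that pattern.
    for k in range(1, min(num_coeffs + 1, video.shape[0])):
        coeffs[k] += amplitude * pattern

    # Back to the pixel domain; every frame now carries a small share of the trigger.
    poisoned = idct(coeffs, axis=0, norm="ortho")
    return np.clip(poisoned, 0.0, 1.0)
```

Because the perturbation lives in temporal-frequency coefficients rather than being pasted into each frame independently, per-frame trigger-detection defenses designed for image backdoors have no single localized pattern to find, which is the intuition behind the attack's claimed resilience.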


Related research

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition (01/03/2023)
Deep neural networks (DNNs) are vulnerable to a class of attacks called ...

Clean-Label Backdoor Attacks on Video Recognition Models (03/06/2020)
Deep neural networks (DNNs) are vulnerable to backdoor attacks which can...

Attacking Video Recognition Models with Bullet-Screen Comments (10/29/2021)
Recent research has demonstrated that Deep Neural Networks (DNNs) are vu...

Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems (07/09/2021)
Widely deployed deep neural network (DNN) models have been proven to be ...

PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos (09/13/2021)
Extensive research has demonstrated that deep neural networks (DNNs) are...

Stealthy Low-frequency Backdoor Attack against Deep Neural Networks (05/10/2023)
Deep neural networks (DNNs) have gained popularity in various scenario...

PatternMonitor: a whole pipeline with a much higher level of automation for guessing Android lock pattern based on videos (02/02/2021)
Pattern lock is a general technique used to realize identity authenticat...
