Sparse Adversarial Perturbations for Videos

03/07/2018
by Xingxing Wei, et al.

Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions to videos remain largely unexplored. Compared with images, attacking a video requires considering not only spatial cues but also temporal cues. Moreover, to improve imperceptibility and to reduce computation cost, perturbations should be added to as few frames as possible, i.e., the adversarial perturbations should be temporally sparse. This further motivates the propagation of perturbations: perturbations added to the current frame can transfer to subsequent frames through their temporal interactions, so no (or few) extra perturbations are needed to misclassify those frames. To this end, we propose an l2,1-norm based optimization algorithm to compute sparse adversarial perturbations for videos. We choose action recognition as the targeted task, and networks with a CNN+RNN architecture as threat models to verify our method. Thanks to the propagation, we can compute perturbations on a shortened version of a video and then adapt them to the long version to fool DNNs. Experimental results on the UCF101 dataset demonstrate that even when only one frame in a video is perturbed, the fooling rate can still reach 59.7%.
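To make the optimization concrete, below is a minimal sketch of an l2,1-regularized video attack in PyTorch. It is illustrative only, not the authors' released implementation: the classifier model (any network mapping a (T, C, H, W) clip to class logits), video, label, and the hyper-parameters lam, steps, and lr are assumed placeholders. The l2,1 penalty, i.e., the l2 norm of each frame's perturbation summed over frames, is what pushes whole frames toward zero perturbation and yields temporal sparsity.

    # Minimal, hypothetical PyTorch sketch of an l2,1-regularized video attack.
    # `model`, `video`, `label`, and all hyper-parameters are placeholders.
    import torch
    import torch.nn.functional as F

    def sparse_video_attack(model, video, label, lam=0.1, steps=200, lr=1e-2):
        """video: (T, C, H, W) tensor scaled to [0, 1]; label: true class index (int)."""
        pert = torch.zeros_like(video, requires_grad=True)   # per-frame perturbation E
        target = torch.tensor([label], device=video.device)
        opt = torch.optim.Adam([pert], lr=lr)

        for _ in range(steps):
            adv = (video + pert).clamp(0.0, 1.0)
            logits = model(adv.unsqueeze(0))                 # add a batch dimension
            # Untargeted objective: maximize the loss on the true label.
            adv_loss = -F.cross_entropy(logits, target)
            # l2,1 norm: l2 norm of each frame's perturbation, summed over frames.
            # The small epsilon keeps the gradient finite when a frame is all zeros.
            per_frame = (pert.flatten(start_dim=1).pow(2).sum(dim=1) + 1e-12).sqrt()
            loss = adv_loss + lam * per_frame.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()

        return (video + pert.detach()).clamp(0.0, 1.0)

In this sketch, raising lam trades fooling strength for temporal sparsity: frames whose per-frame norm collapses toward zero are effectively left unperturbed, which is the behavior behind the single-perturbed-frame result reported above.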

Related research

- 08/27/2018: Targeted Nonlinear Adversarial Perturbations in Images and Videos. "We introduce a method for learning adversarial perturbations targeted to..."
- 09/11/2019: Identifying and Resisting Adversarial Videos Using Temporal Consistency. "Video classification is a challenging task in computer vision. Although ..."
- 12/10/2019: Appending Adversarial Frames for Universal Video Attack. "There have been many efforts in attacking image classification models wi..."
- 09/13/2021: PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos. "Extensive research has demonstrated that deep neural networks (DNNs) are..."
- 07/02/2018: Adversarial Perturbations Against Real-Time Video Classification Systems. "Recent research has demonstrated the brittleness of machine learning sys..."
- 07/24/2018: Learning Discriminative Video Representations Using Adversarial Perturbations. "Adversarial perturbations are noise-like patterns that can subtly change..."
- 10/29/2021: Attacking Video Recognition Models with Bullet-Screen Comments. "Recent research has demonstrated that Deep Neural Networks (DNNs) are vu..."
