
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior

by Hu Zhang, et al.
University of Technology Sydney

Deep neural networks are known to be susceptible to adversarial noise: tiny, imperceptible perturbations of the input. Most previous work on adversarial attacks has focused on image models, while the vulnerability of video models is less explored. In this paper, we aim to attack video models by exploiting the intrinsic movement patterns and regional relative motion among video frames. We propose an effective motion-excited sampler to obtain a motion-aware noise prior, which we term the sparked prior. The sparked prior underlines frame correlations and exploits video dynamics via relative motion. By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries. Extensive experimental results on four benchmark datasets validate the efficacy of our proposed method.
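To make the idea concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: a motion map derived from frame differences (a stand-in for the optical-flow-based motion the paper uses) shapes, or "sparks", the random directions used in a query-based finite-difference gradient estimate. The function names, the frame-difference proxy, and all parameters are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def motion_prior(frames):
    """Crude motion map from absolute frame differences.

    `frames` has shape (T, H, W). This is only a proxy for the
    relative-motion signal the paper extracts; it highlights regions
    that change between consecutive frames.
    """
    diff = np.abs(np.diff(frames, axis=0))            # (T-1, H, W)
    diff = np.concatenate([diff, diff[-1:]], axis=0)  # pad back to T frames
    return diff / (diff.max() + 1e-8)                 # normalize to [0, 1]

def estimate_gradient(loss_fn, video, prior, n_queries=20, sigma=1e-3):
    """Antithetic finite-difference gradient estimate with prior-shaped noise.

    Each query pair probes the black-box loss along a random direction
    whose magnitude is modulated by the motion prior, so queries are
    spent on moving regions rather than static background.
    """
    grad = np.zeros_like(video)
    for _ in range(n_queries // 2):
        u = np.random.randn(*video.shape) * prior     # motion-excited noise
        delta = loss_fn(video + sigma * u) - loss_fn(video - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / max(n_queries // 2, 1)
```

In a full attack loop, the estimated gradient would drive a projected ascent step on the attacker's loss (e.g., the margin of the target class), repeating until the video is misclassified or the query budget is exhausted.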
