
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior

03/17/2020
by   Hu Zhang, et al.
Amazon
University of Technology Sydney

Deep neural networks are known to be susceptible to adversarial noise: tiny, imperceptible perturbations of the input. Most previous work on adversarial attacks focuses on image models, while the vulnerability of video models is less explored. In this paper, we aim to attack video models by exploiting the intrinsic movement patterns and regional relative motion among video frames. We propose an effective motion-excited sampler to obtain a motion-aware noise prior, which we term the sparked prior. Our sparked prior underlines frame correlations and exploits video dynamics via relative motion. By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries. Extensive experimental results on four benchmark datasets validate the efficacy of the proposed method.
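The abstract describes using a motion-aware noise prior inside query-based gradient estimation. Below is a minimal NumPy sketch of that idea under loudly stated assumptions: normalized frame differences stand in for the paper's sparked prior, the gradient estimator is a generic NES-style two-sided finite-difference scheme, and a toy quadratic loss stands in for the black-box video classifier. All function names (`motion_prior`, `estimate_gradient`) are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_prior(video):
    # Normalized absolute frame differences as a crude stand-in for the
    # motion-derived "sparked prior" (an illustrative assumption).
    diff = np.abs(np.diff(video, axis=0))              # (T-1, H, W) motion maps
    prior = np.concatenate([diff, diff[-1:]], axis=0)  # pad back to T frames
    return prior / (prior.max() + 1e-8)

def estimate_gradient(loss_fn, video, prior, sigma=0.01, n_samples=10):
    # NES-style two-sided gradient estimate; random directions are
    # modulated by the motion prior so queries concentrate on moving regions.
    grad = np.zeros_like(video)
    for _ in range(n_samples):
        u = rng.standard_normal(video.shape) * prior   # motion-weighted direction
        delta = loss_fn(video + sigma * u) - loss_fn(video - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / n_samples

# Toy black-box "model": a quadratic loss pulling the video toward all-ones.
T, H, W = 4, 8, 8
video = rng.random((T, H, W))
loss = lambda v: float(np.mean((v - 1.0) ** 2))

p = motion_prior(video)
g = estimate_gradient(loss, video, p)
adv = video - 1.0 * g  # one descent step on the estimate
# (a real attack would iterate, use the classifier's loss, and clip
#  the result to the valid pixel range and perturbation budget)
```

The motion modulation means low-motion regions receive little noise, which is one way fewer queries can suffice: the search space of perturbation directions is effectively shrunk to where video dynamics live.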


Related research

MoCoGAN: Decomposing Motion and Content for Video Generation (07/17/2017)
Visual signals in a video can be divided into content and motion. While ...

PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos (09/13/2021)
Extensive research has demonstrated that deep neural networks (DNNs) are...

Appending Adversarial Frames for Universal Video Attack (12/10/2019)
There have been many efforts in attacking image classification models wi...

Adversarially Robust Video Perception by Seeing Motion (12/13/2022)
Despite their excellent performance, state-of-the-art computer vision mo...

STB-VMM: Swin Transformer Based Video Motion Magnification (02/20/2023)
The goal of video motion magnification techniques is to magnify small mo...

SMART: Skeletal Motion Action Recognition aTtack (11/16/2019)
Adversarial attack has inspired great interest in computer vision, by sh...